International Journal of Technology Assessment in Health Care
Volume 16 Issue 04
IN MEMORIAM: SEYMOUR PERRY, MD, MACP
Published online by Cambridge University Press: 25 May 2001, pp. 949-953
It was an honor to have known and worked with Seymour Perry, MD, MACP, and to have witnessed his many accomplishments and contributions in several fields—cancer research, government administration, the development and promotion of procedures for evaluating health technologies, particularly consensus development, and leadership in fostering national and international collaboration in health technology assessment—representing a sustained effort in which both quantitative and qualitative benefits have been demonstrated.
INTRODUCTION: PRACTICE GUIDELINES: Helpful Aids or Paradigm Shift?
J. Jaime Caro
The profusion of clinical practice guidelines has been remarked on by many (3;6;8) and ascribed to a need to address the wide geographic differences in practice documented in the last quarter century (9;10), as well as to the increasing concern with the cost of health care. The idea is that guidelines, by outlining efficient care strategies, will enhance the quality of care and reduce unnecessary or unproductive expenditures. Others hold that guidelines are simply a means of transferring the results of research from the literature to clinicians (2;5). A darker view of guidelines sees them as instruments of control of medical practice by uncaring administrators concerned solely with cost reduction (4;7).
IMPROVING CLINICAL PRACTICE GUIDELINES FOR THE 21ST CENTURY: Attitudinal Barriers and Not Technology Are the Main Challenges
George P. Browman
Through the use of three scenarios, this paper presents the challenges for clinical practice guidelines in the 21st century. Such challenges relate to technological developments to improve the efficiency and pace of the development process, to ensure that clinical practice guidelines are kept up to date, and to facilitate implementation of guidelines in the clinical setting. To improve and ensure the validity of the content of clinical practice guidelines, we need to address the important problem of publication bias, for which researchers, granting agencies, industry, and journal editors share responsibility. This means insisting on registration of trials at their inception, and incentives backed up by rules for funding and peer review publication that would promote behaviors to avoid publication bias. The more difficult challenges for clinical practice guidelines relate to what are referred to as attitudinal factors. To achieve optimal efficiencies in development and maintenance of clinical practice guidelines, we need to promote cooperation among various information resource providers internationally and to stress partnership over leadership. Finally, there needs to be reconciliation of the different stakeholder perspectives of the value and purpose of clinical practice guidelines so that they are used appropriately as aids to decision making and are not abused as tools for controlling clinical practice.
FROM CLINICAL RECOMMENDATIONS TO MANDATORY PRACTICE: The Introduction of Regulatory Practice Guidelines in the French Healthcare System
Pierre Durieux, Carine Chaix-Couturier, Isabelle Durand-Zaleski, Philippe Ravaud
In an effort to control ambulatory care costs, regulatory practice guidelines (références médicales opposables or RMOs) were introduced by law in France in 1993. RMOs are short sentences, negatively formulated ("it is inappropriate to ..."), covering medical and surgical topics, diagnosis, and treatment. Since their introduction, physicians who do not comply with RMOs can be fined. The fine is determined by a weighted combination of indices of harm, cost, and the number of violations.
The impact of the RMO policy on physician practice has been questioned, but so far few evaluations have been performed. At the end of 1997, only 121 physicians had been fined (0.1% of French private physicians). The difficulty of controlling physicians, the large number of RMOs, and the lack of a relevant information system limit the credibility of this policy.
The simultaneous development of a clinical guideline program to improve the quality of care and of a program to control medical practice can lead to a misunderstanding among clinicians and health policy makers. Financial incentives or disincentives could be used to change physician behavior, in addition to other measures such as education and organizational changes, if they are simple, well explained, and do not raise any ethical conflict. But these measures are dependent on the structure and financing of the healthcare system and on the socioeconomic and cultural context. More research is needed to assess the impact of interventions using financial incentives and disincentives on physician behavior.
WHERE ARE THE ECONOMIC GUIDELINES COMING FROM?
Alistair McGuire, Stephen Morris, Maria Raikou
Economic guidelines recommend methods that should be employed in conducting economic evaluations of healthcare programs. The nature of the efficiency or equity goal underpinning economic guidelines is unclear. What is also unclear is how the methods recommended in the guidelines are linked to the underlying efficiency or equity goal being targeted. If it is unclear what efficiency/equity objectives are being pursued, then it is unlikely that even full implementation of economic guidelines will improve resource allocation.
USING PRACTICE GUIDELINES TO ALLOCATE MEDICAL TECHNOLOGIES: An Ethics Framework
Mita K. Giacomini, Deborah J. Cook, David L. Streiner, Sonia S. Anand
Clinical practice guidelines are expanding their scope of authority from clinical decision making to collective policy making, and promise to gain ground as resource allocation tools in coming years. A close examination of how guidelines approach patient selection criteria offers insight into their ethical implications when used as resource allocation or rationing instruments. The purposes of this paper are: a) to examine the structure of allocative reasoning found in clinical guidelines; b) to identify the ethical principles implied and compare how guidelines enact these principles with how explicit systems-level rationing exercises and health policy analyses have approached them; and c) to offer some preliminary suggestions for how these ethical issues might be addressed in the process of guideline development. The resulting framework can be used by guideline developers and users to understand and address some of the ethical issues raised by guidelines for the use of scarce technologies.
THE DEVELOPMENT OF EVIDENCE-BASED CLINICAL PRACTICE GUIDELINES: Integrating Medical Science and Practice
Richard T. Connis, David G. Nickinovich, Robert A. Caplan, James F. Arens
Practice guidelines are rapidly becoming preferred decision-making resources in medicine as advances in technology and pharmaceuticals continue to expand. An evidence-based approach to the development of practice guidelines serves to anchor healthcare policy to scientific documentation, and in conjunction with practitioner opinion can provide a powerful and practical clinical tool. Three sources of information are essential to an evidence-based approach: a) an exhaustive literature synthesis; b) meta-analysis; and c) consensus opinion. The systematic merging of evidence from these sources offers healthcare providers a scientifically supportable document that is flexible enough to deal with clinically complex problems. Evidence-based practice guidelines, in conjunction with practice standards and practice advisories, are invaluable resources for clinical decision making. The judicious use of these documents by practitioners will serve to improve the efficiency and safety of health care.
MEASURING THE EFFECT OF CLINICAL GUIDELINES ON PATIENT OUTCOMES
Deborah A. Marshall, Kit N. Simpson, Edward C. Norton, Andrea K. Biddle, Mike Youle
Objectives: To identify and examine the methodologic issues related to evaluating the effectiveness of treatment adherence to clinical guidelines. The example of antiretroviral therapy guidelines for human immunodeficiency virus (HIV) disease is used to illustrate the points.
Methods: Regression analysis was applied to observational HIV clinic data for patients with CD4+ cell counts less than 500 per μL and greater than 50 per μL at baseline (n = 704), using Cox proportional hazards time-varying covariate models controlling for baseline risk. The results are compared with simpler models (a Cox model without time-varying covariates, and logistic regression). In addition, the effect of including a measure of exposure to antiretroviral guidelines in the model is explored.
Results: This study has three implications for modeling clinical guideline effectiveness. First, to capture events that are time-sensitive, a duration model should be used. Second, covariates that are time-varying should be modeled as time-varying. Third, incorporating a threshold measure of exposure, reflecting the minimum period of guideline adherence required for a measurable effect on patient outcome, should be considered.
Conclusions: The methods proposed in this paper are important to consider if guidelines are to evolve from being a tool for summarizing and transferring the results of research from the literature to clinicians into a practical tool that influences clinical practice patterns. However, the methodology tested in this study needs to be validated using additional data on similar patients and using data on patients with other diseases.
A COMPARISON OF CLINICAL PRACTICE GUIDELINE APPRAISAL INSTRUMENTS
Ian D. Graham, Lisa A. Calder, Paul C. Hébert, Anne O. Carter, Jacqueline M. Tetroe
Objective: To identify and compare clinical practice guideline appraisal instruments.
Methods: Appraisal instruments, defined as instruments intended to be used for guideline evaluation, were identified by searching MEDLINE (1966–99) using the Medical Subject Heading (MeSH) practice guidelines, reviewing bibliographies of the retrieved articles, and contacting authors of guideline appraisal instruments. Two reviewers independently examined the questions/statements from all the instruments and thematically grouped them. The 44 groupings were collapsed into 10 guideline attributes. Using the items, two reviewers independently undertook a content analysis of the instruments.
Results: Fifteen instruments were identified, and two were excluded because they were not focused on evaluation. All instruments were developed after 1992 and contained 8 to 142 questions/statements. Of the 44 items used for the content analysis, the number of items covered by each instrument ranged from 6 to 34. Only the instrument by Cluzeau and colleagues included at least one item for each of the 10 attributes, and it addressed 28 of the 44 items. This instrument and that of Shaneyfelt et al. are the only instruments that have so far been validated.
Conclusions: A comprehensive, concise, and valid instrument could help users systematically judge the quality and utility of clinical practice guidelines. The current instruments vary widely in length and comprehensiveness. There is insufficient evidence to support the exclusive use of any one instrument, although the Cluzeau instrument has received the greatest evaluation. More research is required on the reliability and validity of existing guideline appraisal instruments before any one instrument can become widely adopted.
GUIDELINE DEVELOPMENT IN EUROPE: An International Comparison
Objectives: To identify major differences and similarities in the development of clinical guidelines in different European countries.
Methods: A collaboration of researchers is funded by the European Commission to compare the approaches to guideline development in collaborators' countries. The program encompasses a series of tasks, the first being to identify and document current guideline procedures in the collaborating countries. A survey gathered information on guideline production, dissemination, and implementation in the 10 European countries involved in the project consortium: Denmark, England and Wales, Finland, France, Germany, Italy, the Netherlands, Scotland, Spain (both the Basque Country and Catalonia), and Switzerland.
Results: Seven countries have a national policy on guideline production, dissemination, and implementation, and three countries are discussing their policies. A majority of guidelines are currently produced at the national level in six of the countries and at the regional or local level in the other four. Central or national funding supports guideline production in six countries. Additional sources of funding include medical societies, pharmaceutical companies, and health insurance companies. Several of the countries have published or are preparing evaluations of their dissemination strategies.
Conclusions: The survey highlighted wide variation in the methods and policies of guideline development in Europe. The Appraisal of Guidelines, Research, and Evaluation in Europe (AGREE) Collaboration research program will identify the characteristics of the "better" guideline programs and will provide the basis for more research-generated policy initiatives in the future, helping to ensure that guidelines play a major role in improving patient care in the new millennium.
CLINICAL JUDGMENT AND CLINICAL PRACTICE GUIDELINES
Frances B. Garfield, Joseph M. Garfield
Clinicians make judgments under conditions of uncertainty. Decision research has shown that in uncertain situations individuals do not always act rationally, coherently, or to maximize their expected utility. Advocates of clinical guidelines believe that these guidelines will eliminate some of the cognitive biases that the practitioner may introduce into the medical decision-making process in an attempt to reduce its uncertainty. Other physicians have grave doubts about guidelines' application in practice. Guideline implementation lags well behind their development. Studies of practicing physicians and a survey of clinicians in one specialty and setting indicate that experienced clinicians may be implementing guidelines selectively. Many clinicians are concerned that guidelines are based on randomized trials and do not reflect the complexity of the real world, in which a decision's context and framework are important. Their reluctance also may be due to the difficulty of applying general guidelines to specific clinical situations. The problem will only increase in the future. The patients of the 21st century will be older and have more complex disease states. Physicians will have more patient-specific therapies and need to exercise more sophisticated clinical judgment. They may be more willing to use guidelines in making those judgments if research can demonstrate guidelines' effectiveness in improving decision making for individual patients.
THE CLINICAL GUIDELINE PROCESS WITHIN A MANAGED CARE ORGANIZATION
Robin Richman, Diane R. Lancaster
Clinical practice guidelines have been available to clinicians for almost two decades, but the consistency of their implementation in practice remains highly variable. This paper describes the various processes and mechanisms used by one managed care organization to develop clinical guidelines and promote their adoption. Some of these mechanisms include provision of individual physician report cards, financial incentives, and various documentation tools that serve as reminders of guideline recommendations and provide an easy format to document recommended services.
There have been measurement challenges in evaluating the effectiveness of selected interventions designed to enhance guideline compliance. Most of these challenges relate to reliability and validity concerns regarding the three primary data sources used in the evaluation process: medical records, administrative claims data, and member survey data. Some of the interventions the health plan has implemented to address these measurement challenges include using hybrid methods of data collection and developing collaborative partnerships with outside organizations to enhance the accuracy and completeness of the available data. Outcomes of these efforts are described, as are physician response and recommendations for future enhancement of practice guidelines.
CLINICAL PRACTICE GUIDELINES AND THE COST OF CARE: A Growing Alliance
Judith A. O'Brien, Lenworth M. Jacobs, Jr., Danielle Pierce
Healthcare policy, medical practice, and cost of care are no longer considered distinct entities. Each is an integral factor in determining not only what, but how, patient care will be delivered. Clinical practice guidelines are the lynchpin that connects them. This paper addresses the various components of the clinical practice guideline—cost alliance.
Objective: To examine the bidirectional influence of choice of care on costs and of cost of care on decision making.
Methods: The literature was used to identify cost-related factors that influence development of guidelines and change in physician practice behavior. In a MEDLINE search with modifiers to the keywords "clinical practice guidelines," particular attention was paid to identifying surveys of practitioners. An analysis, prompted by a recently published guideline, of treating penetrating intraperitoneal colon injuries by different surgical approaches (primary repair versus diverting colostomy) exemplified how implementation of a guideline can affect the cost of care. Inpatient cost estimates, adjusted for medical inflation and cost-to-charge ratios and reported in 1999 U.S. dollars, were developed using data from 1996 and 1997 discharge databases from California and Massachusetts.
Results: The results showed that a substantial savings in hospital costs was achieved when a primary repair surgical technique, as advocated by the guideline, was used. The effect of cost influences on the development of clinical practice guidelines was established by demonstrating the cyclical effect between usual and customary practices, guideline implementation, changing practice patterns, and the economic considerations influencing the process.
Conclusions: A growing, albeit uneasy, alliance between costs and clinical practice guidelines is evident.
ASSESSMENT OF THE LEARNING CURVE IN HEALTH TECHNOLOGIES: A Systematic Review
Craig R. Ramsay, Adrian M. Grant, Sheila A. Wallace, Paul H. Garthwaite, Andrew F. Monk, Ian T. Russell
Objective: We reviewed and appraised the methods by which the issue of the learning curve has been addressed during health technology assessment in the past.
Method: We performed a systematic review of papers in clinical databases (BIOSIS, CINAHL, Cochrane Library, EMBASE, HealthSTAR, MEDLINE, Science Citation Index, and Social Science Citation Index) using the search term "learning curve."
Results: The clinical search retrieved 4,571 abstracts for assessment, of which 559 (12%) published articles were eligible for review. Of these, 272 were judged to have formally assessed a learning curve. The procedures assessed were minimal access (51%), other surgical (41%), and diagnostic (8%). The majority of the studies were case series (95%). Some 47% of studies addressed only individual operator performance and 52% addressed institutional performance. The data were collected prospectively in 40%, retrospectively in 26%, and the method was unclear for 31%. The statistical methods used were simple graphs (44%), splitting the data chronologically and performing a t test or chi-squared test (60%), curve fitting (12%), and other model fitting (5%).
Conclusions: Learning curves are rarely considered formally in health technology assessment. Where they are, the reporting of the studies and the statistical methods used are weak. As a minimum, reporting of learning should include the number and experience of the operators and a detailed description of data collection. Improved statistical methods would enhance the assessment of health technologies that require learning.
WHAT SHOULD BE INCLUDED IN META-ANALYSES?: An Exploration of Methodological Issues Using the ISPOT Meta-Analyses
Dean Fergusson, Andreas Laupacis, L. Rachid Salmi, Finlay A. McAlister, Charlotte Huet
Objective: To explore the impact of methodologic issues on the results of meta-analyses. The following issues were examined: the type of literature search strategy used; inclusion or exclusion of non–peer-reviewed studies; the inclusion or exclusion of non-English language publications; the effect of trial quality; and the inclusion or exclusion of non–placebo-controlled studies.
Methods: The International Study of Perioperative Transfusion (ISPOT) meta-analyses were used to evaluate each of the methodologic issues. The 10 meta-analyses consisted of technologies to reduce the need for perioperative red blood cell transfusion. The number of trials for each of the meta-analyses varied from 2 to 45. Both EMBASE and MEDLINE searches were conducted, including the use of systematic search strategies.
Results: MEDLINE identified the vast majority of trials. Alone, MEDLINE would have missed 8 studies compared to 10 for EMBASE. Use of the systematic search strategies greatly reduced the number of articles to be reviewed compared to open searches. Type of publication, country of study origin, inclusion of non-English publications, and trial quality had very little impact on the estimates of effect. The use of placebo versus open-label control affected the magnitude of the odds ratio for two of the meta-analyses. The results of the two meta-analyses were not statistically significant if only placebo-controlled trials were included.
Conclusions: While methodologic issues had very little impact on the ISPOT meta-analyses, further studies are needed in a variety of other clinical settings. Because MEDLINE, coupled with a review of the references in the identified trials, identified the vast majority of trials, one needs to consider the costs and benefits of searching EMBASE and of pursuing unpublished and unindexed trials.
THE SOCIETAL COSTS OF SEVERE TO PROFOUND HEARING LOSS IN THE UNITED STATES
Penny E. Mohr, Jacob J. Feldman, Jennifer L. Dunbar, Amy McConkey-Robbins, John K. Niparko, Robert K. Rittenhouse, Margaret W. Skinner
Objective: Severe to profound hearing impairment affects one-half to three-quarters of a million Americans. To function in a hearing society, hearing-impaired persons require specialized educational and social services and other resources. The primary purpose of this study is to provide a comprehensive, national, and recent estimate of the economic burden of hearing impairment.
Methods: We constructed a cohort-survival model to estimate the lifetime costs of hearing impairment. Data for the model were derived principally from the analyses of secondary data sources, including the National Health Interview Survey Hearing Loss and Disability Supplements (1990–91 and 1994–95), the Department of Education's National Longitudinal Transition Study (1987), and Gallaudet University's Annual Survey of Deaf and Hard of Hearing Youth (1997–98). These analyses were supplemented by a review of the literature and consultation with a four-member expert panel. Monte Carlo analysis was used for sensitivity testing.
Results: Severe to profound hearing loss is expected to cost society $297,000 over the lifetime of an individual. Most of these costs (67%) are due to reduced work productivity, although the use of special education resources among children contributes an additional 21%. Lifetime costs for those with prelingual onset exceed $1 million.
Conclusions: Results indicate that an additional $4.6 billion will be spent over the lifetime of persons who acquired their impairment in 1998. The particularly high costs associated with prelingual onset of severe to profound hearing impairment suggest interventions aimed at children, such as early identification and/or aggressive medical intervention, may have a substantial payback.
IMPACT OF QUALITY ITEMS ON STUDY OUTCOME: Treatments in Acute Lateral Ankle Sprains
Arianne P. Verhagen, Robert A. de Bie, Anton F. Lenssen, Henrica C. W. de Vet, Alphons G. H. Kessels, Maarten Boers, Piet A. van den Brandt
Objective: This study investigates the influence of different aspects of methodologic quality on the conclusions of a systematic review concerning treatments of acute lateral ankle sprain.
Method: A data set of a systematic review of 44 trials was used, of which 22 trials could be included in this study. Quality assessment of the individual studies was performed using the Delphi list. We calculated effect sizes of the main outcome measure in each study in order to evaluate the relationship between overall quality scores and outcome. Next, we investigated the impact of design attributes on pooled effect sizes by subgroup analysis.
Results: The quality of most studies (82%) was low; only 4 of 22 trials were of high quality. Studies with proper randomization and blinding procedures produced a slightly higher (not statistically significant) effect estimate compared with the other studies.
Conclusion: Previous research has suggested that methodologically poorly designed studies tend to overestimate the effect estimate. Our study does not confirm those conclusions.
COST SAVINGS AND HEALTH LOSSES FROM REDUCING INAPPROPRIATE ADMISSIONS TO A DEPARTMENT OF INTERNAL MEDICINE
Bjørn O. Eriksen, Olav H. Førde, Ivar S. Kristiansen, Erik Nord, Jan F. Pape, Sven M. Almdahl, Anne Hensrud, Steinar Jaeger, Fred A. Mürer
Objectives: Inappropriate hospital admissions are commonly believed to represent a potential for significant cost reductions. However, this presumes that these patients can be identified before the hospital stay. The present study aimed to investigate to what extent this is possible.
Methods: Consecutive admissions to a department of internal medicine were assessed by two expert panels. One panel predicted the appropriateness of the stays from the information available at admission, while final judgments of appropriateness were made after discharge by the other.
Results: The panels correctly classified 88% of the appropriate and 27% of the inappropriate admissions. If the elective admissions predicted to be inappropriate had been excluded, 9% of the costs would have been saved, and 5% of the gain in quality-adjusted life-years would have been lost. The corresponding results for emergency admissions were 14% and 18%, respectively.
Conclusions: The savings obtained by excluding admissions predicted to be inappropriate were small relative to the health losses. Programs for reducing inappropriate health care should not be implemented without investigating their effects on both health outcomes and costs.
MODELING AGE DIFFERENCES IN COST-EFFECTIVENESS ANALYSIS: A Review of the Literature
Louise B. Russell, Jane E. Sisk
Objectives: Cost-effectiveness analysts often present cost-effectiveness results by age to help inform decisions about the use of an intervention. Yet it is not known how well studies model the risks and costs associated with age. We reviewed published studies to examine their modeling of age differences.
Methods: MEDLINE searches identified all cost-effectiveness analyses published between 1985 and 1997 that included adults 50 years of age and older, were based on data for developed countries, and compared cost-effectiveness ratios for adults of different ages or for initiation of an intervention at different ages; 36 articles met these criteria. They were reviewed to determine the extent to which they incorporated age-specific data. Studies that justified using the same data for all ages were counted as having varied the data element by age.
Results: All studies varied life expectancy by age. Most also varied the incidence/prevalence of the target condition and the case fatality rate. Only 36% varied the effectiveness rate of the intervention by age. Costs were usually assumed constant: 42% of studies varied the cost of treating adverse effects and 17% varied the cost of treating the target condition. Whether a data element was varied did not appear to be related to the pattern of cost-effectiveness ratios by age.
Conclusions: Many studies have not modeled age differences in sufficient detail to ensure that differences in cost-effectiveness ratios by age are accurate and a sound basis for decisions. As cost-effectiveness analysis becomes more widespread, analysts should strive to incorporate more complete age-specific data.
BREAST CANCER: BETTER CARE FOR LESS COST: Is It Possible?
William K. Evans, B. Phyllis Will, Jean-Marie Berthelot, Diane M. Logan, Douglas J. Mirsky, Nancy Kelly
Objectives: To estimate the potential for cost reduction in the acute care setting and the required investment in the home care setting of implementing an outpatient/early discharge strategy for operable (stages I and II) breast cancer in Canada.
Methods: Data from a community hospital were augmented by expert knowledge and incorporated into the breast cancer submodel of Statistics Canada's Population Health Model. For the estimated 90% of patients for whom this approach was assumed to be appropriate, the resource utilization for outpatient breast-conserving surgery and 2 days of hospitalization for those women undergoing mastectomy was quantified and costed, as were the appropriate home care services. A 5% readmission rate for complications was assumed. Cost per case, total cost burden, investment in home care, savings in acute care, and net savings were calculated. Sensitivity analyses were performed around readmission rates and home care/surgical follow-up costs. All costs were determined in 1995 Canadian dollars.
Results: The cost of initial treatment for the 15,399 women diagnosed with stages I and II breast cancer in 1995 in Canada was estimated to be $127.6 million. Hospitalization made up 53% of these costs. Under the outpatient/early discharge strategy, the acute care cost of initial breast cancer management could be reduced by $47.2 million, with an investment in home care of $14.5 million ($453 per patient), resulting in an overall net saving of $33 million. Under this strategy, hospitalization would contribute only 21% to the total care cost.
Conclusions: If Canadian surgeons and healthcare administrators were to work together to put in place processes to support ambulatory breast cancer surgery and if resources were redirected to the provision of home-based post-operative care, there would be potential for a large net healthcare saving and preservation of high-quality patient care.
A method of picking up a folded fabric product by a single-armed robot
Yusuke Moriya, Daisuke Tanaka, Kimitoshi Yamazaki and Keisuke Takeshita
ROBOMECH Journal 2018 5:1
https://doi.org/10.1186/s40648-017-0098-y
© The Author(s) 2018
Received: 19 July 2017
Accepted: 18 December 2017
Published: 4 January 2018
This paper describes a method for picking up a folded cloth product with a single-armed robot. We focus on the problem of picking up a folded cloth and organize the tasks required to address it. We then propose a grasp position estimation method composed of two stages: detection of the thickest folded hem and pose estimation of the cloth product. In addition, we search for appropriate grasping postures and show that there are regions of the posture space in which the success rate of grasping is high. In experiments using an actual robot, we achieved the picking task with a 92% success rate.
A folded cloth
Deformable object recognition and manipulation
A single-armed robot
One desirable ability for autonomous robots engaged in daily assistance is to pick and place an object at a designated location. Within this, object grasping is a difficult and important issue. A conventional approach to robotic grasping in daily assistance assumes rigid objects, employs geometrical models, and combines model-based recognition with motion planning [1–3]. In this procedure, deciding how to grasp the object is one issue. In many cases, the object is assumed to be a rigid body, and point-to-point contact between the robot finger and the object is determined. However, in daily environments there are essential tasks that require manipulating non-rigid objects. For instance, people use various types of clothing in the course of their daily lives. If robots could handle a folded cloth, e.g., handing over a towel or putting a shirt in a chest, it would be one of the effective contributions of autonomous robots, especially for handicapped people [4]. When grasping a folded fabric product, it is desirable to grasp a proper position on the product so as not to destroy the original folded shape.
Fig. 1 Picking up a folded cloth product: success and failure cases
Fig. 2 The basic procedure and structured data for grasping a folded cloth product
Fig. 3 The procedure of contour extraction
The purpose of this study is to develop a method for picking up a folded cloth item with a single-armed robot. Cloth products are often folded into a rectangular shape before being stored in shelves and dressers. This is common across various types of cloth products. Therefore, we proceed under the assumption that a cloth product folded into a rectangle is placed on a horizontal plane. The cloth grasp assumed in this paper is required to cause only reversible deformation. That is, it is unacceptable for the original folded shape of the cloth to collapse when the grasped cloth is placed at a designated location.
The contributions of this paper are as follows:
We focus on the problem of picking up a folded cloth and organize the tasks required to address it.
We propose a method to determine the grasping position from a folded cloth product placed on a table. The proposed method consists of two stages: detection of the thickest folded hem and pose estimation of the cloth product.
To obtain robust grasping, we searched for appropriate grasping postures. As a result, we were able to find regions where the success rate of grasping was high.
The paper is organized as follows: "Related work" section reviews related work, and "Issues and approach" section explains the issues and our approach. "Grasping position detection" and "Grasping motion determination" sections explain the proposed method. "Experiments" section shows experimental results, and "Conclusion" section presents the conclusions of this paper.
Many previous studies on the automatic handling of cloth products include a phase in which the cloth is brought into a suspended state. Osawa et al. [5] showed that the type of cloth product can be determined by repeating the following procedure: a robot holds a cloth product by hanging it with one hand, grasps the lower end portion with the other hand, and then hangs the product from that hand. This idea was later referred to by many researchers and contributed to the implementation of several cloth handling operations such as type discrimination and folding. Willimon et al. [6] introduced the task of selecting one gripping point for suspending a single cloth product placed casually on a table. Kita et al. [7] proposed a method of matching a deformable shape model of the hanging state with a 3D point cloud measured using a trinocular stereo camera. Abbeel et al. [8] succeeded in identifying the type of cloth product by having a robot observe the contour and the position of the lower end point while manipulating the cloth product.
There are also studies aiming at more efficient handling, more sophisticated selection of gripping points, and the introduction of operations other than picking and moving. Doumanoglou et al. [9] succeeded in recognizing clothing type and shape using a 3D range camera while unfolding; their framework also provided the next grasping point. Li et al. [10] proposed a framework for recognizing the category and pose of a deformable object; they used RGB-D data and matched it against garment shapes registered in a database. Yuba et al. [11] proposed a method for unfolding casually placed cloth products in a few steps by introducing the "pinch and slide" operation proposed by Shibata et al. [12].
Fig. 4 Counting the number of edges from the result of Canny edge detection
Fig. 5 The experimental environment
Fig. 6 Three types of hem
In these studies, a robot manipulated a cloth product that was placed casually or was suspended. Of course, these are difficult tasks because of the complex shape states involved, but they clearly differ from the grasping task we assume for folded cloth products. In the abovementioned studies, the robot actively changed the shape of the cloth, either to obtain information or to transform it into a desired shape. In contrast, the task assumed in this study is to grasp a folded cloth product without collapsing its shape, as shown on the right side of Fig. 1. If the folded hem cannot be selected properly, the robot must grasp by clipping multiple layers of cloth together. In our preliminary examination, this often led to failure: because inserting the fingers under the cloth is difficult, either the cloth could not be grasped or its shape collapsed even when it was gripped. Based on the above, we selected our study topics: choosing the part to be gripped, and proposing and demonstrating a method to do so.
Successful grasp definition and issues
A single-armed robot is positioned in front of a folded cloth product. A parallel jaw gripper, a simple and popular end-effector for robot manipulators, is attached. A 3D range image sensor is installed to observe the robot's workspace. The purpose is to pick up the cloth product from the table.
First, we define the successful grasping state. When a cloth product is folded into a rectangular shape, grasping the thickest folded hem, which is formed by the last fold, often allows the product to be picked up without collapsing its shape. This holds for various types of cloth products: towels, T-shirts, pants, and so on. Therefore, we proceed on the premise of this way of folding. The grasping position assumed in this paper is the middle of the thickest folded hem, depicted as a red point in the center picture of Fig. 1. If the robot grasps that part and lifts the cloth without breaking its shape, the trial is a success. If the shape of the cloth is irreversibly deformed after picking up, e.g., because the target part was not grasped properly, the trial is a failure. Likewise, if the robot grasps another point on the cloth, it is also a failure.
This problem setting is fairly simple but includes the following outstanding issues:
How to detect a grasping position on a folded cloth product: since the shape of the cloth has a certain regularity, it is relatively easy to detect a hem portion as a border. However, it is necessary to verify whether the detected border is a suitable site for lifting without collapsing the shape of the cloth. That is, it is necessary to recognize the folded state of each hem.
How to generate the grasping motion sequence of the robot: cloth products are flexible, so the success rate of grasping changes depending on how the hand approaches the cloth and how it grasps. Therefore, consideration should be given not only to the pose at the time of grasping but also to how the end-effector is brought toward the grasping position.
The next subsection introduces our approach to solving them.
Approach for acquiring a method of grasping cloth products
The left flowchart in Fig. 2 shows the basic procedure for grasping a folded cloth product. First, the cloth placed on a table is measured by a 3D range image sensor, and a pair of color and depth images is obtained. Using these images, a grasp position is determined, and then a grasp motion of the robot arm is determined. Finally, the result is executed by the real robot.
For the second and third blocks of the flowchart, two types of pre-collected datasets are used. The first contains information on the grasping point and stores pairs of an instructed grasping point and a depth image. The other contains information for bringing the hand toward the grasping point; it is composed of pairs of a grasping posture and a via posture of the end-effector.
Fig. 7 The relationship between the type of hem, its position, and the number of edges when only one hem is visible from the camera. The numbers (1)–(3) refer to the three types of hem explained in "Orientation estimation of cloth products" section
Fig. 8 The relationship between the type and position of the hem when two hems are visible on the near side
Fig. 9 Results of the thickest hem detection in a case where one hem is visible on the near side
These experience data are collected in advance: picking up a folded cloth product is performed with an instructed grasping position, and the sensor data recorded during the trial are stored. In the remainder of this paper, we call one data unit (a pair of \(\mathbf P\) and \(\mathbf R\)) "task experience data," the dataset consisting of all such units the "task experience dataset," and the dataset collecting only successful cases the "successful experience dataset."
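As a concrete illustration of how one such data unit might be organized in code, the following is a minimal sketch; the field names and types are hypothetical, since the paper only states that a unit pairs the instructed grasping information with the recorded sensor data and the outcome of the trial.

```python
# Hypothetical sketch of one "task experience data" unit (field names are
# illustrative, not taken from the paper).
from dataclasses import dataclass
import numpy as np

@dataclass
class TaskExperience:
    depth_image: np.ndarray   # depth image of the folded cloth at the trial
    grasp_point: tuple        # instructed grasping point (x, y, z)
    via_posture: tuple        # end-effector via posture (x, y, z, phi, theta, psi)
    grasp_posture: tuple      # end-effector orientation at the grasp (phi, theta, psi)
    success: bool             # whether the pick-up preserved the folded shape

# The "task experience dataset" is a list of such units; the "successful
# experience dataset" keeps only the units with success == True.
def successful_subset(dataset):
    return [d for d in dataset if d.success]
```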
In order to solve the issues mentioned in the previous subsection, the following processing is performed by using experience data. In the following two sections, each of them will be explained in detail.
In order to determine a grasping position, it is necessary to recognize how the cloth product is placed and then to find the position to grasp. Recognition of the placement is accomplished by shape-based registration between task experience data and the current sensor data. Grasping point determination, on the other hand, is accomplished by detecting visually recognizable borders and counting the overlapping layers of cloth observable there. Since ambiguity remains, the determination accuracy is improved by also observing the number of overlapping layers on a neighboring border.
We also solve the problem of finding an appropriate posture transition from a via posture to a grasping posture of the end-effector. The via posture is the preparatory posture of the end-effector just before reaching the grasping posture. To obtain an appropriate combination of these two postures, we select the posture parameters from advance experiments in which a folded cloth product is actually grasped with various posture parameters.
Extraction of the area where a cloth product exists
A color image and a depth image are captured for a cloth product placed on a table. A three-dimensional point cloud is generated from the depth image, and a plane equation of the table top is calculated by plane detection. By estimating the plane parameters with RANSAC [13], a plane coincident with the table top is detected without being affected by the presence of the cloth product.
After that, only the three-dimensional points on the sensor side of that plane are selected, and they are projected onto a two-dimensional plane. This two-dimensional plane virtually constitutes an image obtained by observing the table top from vertically above. As a result, we obtain a projected image of the point cloud belonging to the cloth product as seen from directly above.
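A minimal sketch of this step is shown below, assuming the point cloud is expressed in the camera frame; the RANSAC iteration count, distance threshold, and projection resolution are illustrative values, not the ones used by the authors.

```python
# Sketch: detect the table plane with RANSAC, keep points on the sensor
# side of it, and project them into a binary top-down image.
import numpy as np

def ransac_plane(points, n_iter=500, thresh=0.005):
    """Fit a plane n.p + d = 0 to the dominant planar region (the table top)."""
    rng = np.random.default_rng(0)
    best_inliers, best_plane = None, None
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:
            continue                              # degenerate sample, skip
        n = n / np.linalg.norm(n)
        d = -np.dot(n, p0)
        inliers = np.abs(points @ n + d) < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (n, d)
    return best_plane, best_inliers

def topdown_image(points, plane, res=0.002, margin=0.005):
    """Keep points between the sensor and the table, bin them into a 2D image."""
    n, d = plane
    if d < 0:                        # orient the normal toward the camera origin
        n, d = -n, -d
    height = points @ n + d          # signed distance from the table plane
    cloth = points[height > margin]  # points belonging to the cloth product
    xy = cloth[:, :2]                # assumes z is (roughly) the viewing axis
    ij = np.floor((xy - xy.min(axis=0)) / res).astype(int)
    img = np.zeros(ij.max(axis=0) + 1, dtype=np.uint8)
    img[ij[:, 0], ij[:, 1]] = 255
    return img
```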
Orientation estimation of cloth products
In order to find the border containing the point to be grasped, the thickest folded hem is detected. For this purpose, a color image taken from obliquely above the cloth product is used. Compared with other hems, the thickest folded hem shows a clear difference in whether gaps appear due to the overlapping of cloth layers. Therefore, edge detection is applied to the obtained color image, and processing that exploits the fact that the number of detected edges depends on the type of hem is performed.
Fig. 10 Results of the thickest hem detection in a case where two hems are visible on the near side
Fig. 11 Examples of fine adjustment for grasp point detection
Fig. 12 Tendency of success/failure with respect to the difference of angle \(\alpha\)
First, the Canny operator [14] is applied to the color image, and, as shown in the top right panel of Fig. 3, gaps between the cloth layers are obtained as edges. On the other hand, as shown in the lower left panel, the contour of the cloth product is obtained. The folded cloth product is then approximated by a quadrangle, as shown in the lower right panel. From this shape, the edge positioned on the camera side is selected, and the folded state of that part is estimated.
Let \(\mathbf u_c = (u_c, v_c)\) be image coordinates belonging to the quadrangle line segments shown in Fig. 3 (4). The average number of edges, L, is calculated as follows:
$$\begin{aligned} L = \displaystyle {\frac{1}{l}} \sum ^l_{c=1} f_{u_c} (v_c), \end{aligned}$$
where l is the number of pixels in the horizontal direction (u direction) in the area where the hem exists, and \(f_{u_c} (v_c)\) is the number of edges detected when scanning in the vertical direction along the column at coordinate \(u_c\). That is,
$$\begin{aligned} f_{u_c} (v_c) = \displaystyle {\frac{1}{2}} \sum ^{v_{max}}_{v=v_{min}+1} \{ 1 - \delta _{I(v)I(v-1)} \}, \end{aligned}$$
where I represents the resulting image of Canny edge detection, and I(v) is the pixel value at coordinates \((u_c, v_c + v)\). \(v_{min}\) is a negative integer, \(v_{max}\) is positive, and \(\delta\) is the Kronecker delta. In Eq. (2), the columns of pixels passing through the hem are selected in order. Each pixel in a column is compared with the pixel immediately before it; 0 is added if the values are the same and 1 is added if they differ. That is, the number of times the column crosses a white edge line is counted. This process is applied along the whole border, after which the average is calculated by Eq. (1). Figure 4 shows this process visually. From the number of edges obtained, it is possible to judge whether the border of interest is the thickest folded hem. Also, by comparing the average number of edges on adjacent hems, it is possible to determine the orientation of the cloth product. These points are described in the "Experiments" section.
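The edge counting of Eqs. (1) and (2) can be sketched as follows, assuming the hem of interest has already been localized as a roughly horizontal band of columns in the image; the Canny thresholds and the band bounds below are placeholders.

```python
# Sketch of Eqs. (1)-(2): count white/black transitions in each pixel column
# crossing the hem, then average over the hem's horizontal extent.
import cv2
import numpy as np

def count_hem_edges(color_img, u_range, v_center, v_min=-15, v_max=15):
    """Average number of edge crossings L over the columns u in u_range."""
    gray = cv2.cvtColor(color_img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)          # binary edge image I (0 or 255)
    counts = []
    for u in range(*u_range):
        col = edges[v_center + v_min : v_center + v_max + 1, u]
        # f_u(v): half the number of value changes along the column, i.e.,
        # the number of times the column crosses a white edge line.
        changes = np.count_nonzero(col[1:] != col[:-1])
        counts.append(changes / 2.0)
    return float(np.mean(counts))             # L in Eq. (1)

# Example usage (hypothetical hem location in the image):
# L = count_hem_edges(img, u_range=(120, 340), v_center=260)
# A small L (about 2 in the authors' setup) suggests the thickest folded hem.
```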
Fine adjustment of position and orientation
The processing described above gives the rough two-dimensional position of the thickest folded hem. Next, additional processing is performed to obtain the grasping position accurately. Since we assume that the cloth is folded into a rectangular shape, geometrical fitting of a rectangle might seem a convenient approach. However, the depth data from the surface of the cloth are affected by the location of the cloth and the presence of wrinkles, and some data may be missing. We empirically confirmed that when geometrical shape fitting is performed on such data, errors remain, particularly in the angular direction. Therefore, pose adjustment by a particle filter [15] was adopted. The procedure is as follows.
First, one piece of learning data whose placement direction is similar to that of the current cloth product is identified and used as reference data. For this identification, each piece of learning data and the input data are converted into images viewed from vertically above. Next, the reference data are matched against the shape of the input data. If the matching degree between the two is high, the grasping position recorded in the reference data is mapped onto the input data, and the grasping position can thus be determined.
Fig. 13 Examples of fine adjustment for grasp point detection (\(\beta _v < \beta _g\))
Fig. 14 Examples of fine adjustment for grasp point detection (\(\beta _v > \beta _g\))
The reason for preparing dozens of pieces of learning data for the cloth is as follows. As the orientation of the cloth changes, the depth data for the cloth also change. In particular, the measurement results around the hem change, which directly affects the error of the grasping position. Therefore, we added a selection process to pick the data that most resemble the current placement.
An issue in the pose adjustment procedure is that the shape and inclination of the cloth product in the reference data are not exactly the same as in the input data. Therefore, the reference data are aligned to the input data by means of a particle filter. In general, posture alignment involves six variables. However, as described above, since the transformation to the directly-overhead viewpoint has already been applied, the posture variables reduce to a total of three degrees of freedom: two translation parameters (x, y) and a rotation parameter \(\theta\) in the plane.
In the particle filter, a posture of a target object \(\mathbf x_t\) is estimated from measurements \(\mathbf z_t\) by external sensors according to the following two equations:
$$\begin{aligned} \begin{array}{ll} p (\mathbf x_t | Z_{t-1}) = \displaystyle {\int } p (\mathbf x_t | \mathbf x_{t-1}) p( \mathbf x_{t-1} | Z_{t-1}) d{\mathbf x}_{t-1}, \\ p (\mathbf x_t | Z_t) \propto p( \mathbf z_t | \mathbf x_t) p (\mathbf x_t | Z_{t-1}), \end{array} \end{aligned}$$
where \(\mathbf z_t\) indicates a sensor measurement, that is, the perspective-transformed image in our case, and \(Z_t\) is the group of measurements \(\mathbf z_i^t (i = 1, \ldots , n)\) at time t. In Eq. (3), each measurement is a depth value obtained for each coordinate (u, v) of the depth image. The former equation is the prior probability, calculated before image processing at time t, and the latter is the posterior probability, which includes the estimation result. In our approach, the likelihood \(p(\mathbf z_t | \mathbf x_t)\) is calculated by comparing 3D points derived from the cloth product. The evaluation equation is as follows:
$$\begin{aligned} p (\mathbf z_t | \mathbf x_t) = \sum _{d} \displaystyle {\frac{1}{\{d_{ref} (u', v') - d_{input} (u, v) \}^2 + C}}, \end{aligned}$$
where (u, v) and \((u', v')\) are image coordinates, with \((u', v')\) being the result of transforming (u, v) using the posture parameters \((x, y, \theta )\). \(d_* (u, v)\) is the depth value at (u, v); the subscript ref indicates the training data used for comparison, and input indicates the input data. The summation index d runs over all depth values deemed to belong to the fabric product in the depth image. C is a constant.
In this equation, after posture conversion of the reference data, the difference from the input data is computed for all three-dimensional points. The more points that overlap with small differences, the better the evaluation.
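A compact sketch of one filter iteration built around Eq. (4) is given below. The state is \((x, y, \theta )\) in the top-down image plane; the noise levels and the constant C are placeholders rather than the authors' values.

```python
# Sketch: evaluate Eq. (4) for each (x, y, theta) particle and resample.
import numpy as np

def likelihood(particle, d_ref, ref_uv, d_input, C=1.0):
    """p(z|x) for one particle: transform the reference points by (x, y, theta)
    and compare their depths with the input depth image at the mapped pixels."""
    x, y, theta = particle
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    uv = (ref_uv @ R.T + np.array([x, y])).astype(int)   # (u', v') per point
    h, w = d_input.shape
    score = 0.0
    for (u, v), d_r in zip(uv, d_ref):
        if 0 <= v < h and 0 <= u < w:
            score += 1.0 / ((d_r - d_input[v, u]) ** 2 + C)
    return score

def pf_step(particles, d_ref, ref_uv, d_input,
            sigma=(10.0, 10.0, np.deg2rad(5))):
    """One iteration: predict with Gaussian noise, weight by Eq. (4), resample."""
    rng = np.random.default_rng()
    particles = particles + rng.normal(0.0, sigma, particles.shape)  # prediction
    w = np.array([likelihood(p, d_ref, ref_uv, d_input) for p in particles])
    w = w / w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)       # resampling
    return particles[idx]
```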
Concept of determining grasping motion
Since cloth products are flexible, the cloth may deform when touched. Thus, even in a grasping operation, motions such as sliding a finger under the cloth or pinching a part of the cloth between the fingertips can be considered, and these may improve the success rate of grasping. It is therefore desirable to take into consideration not only the hand posture at the moment of grasping but also the sequence of postures leading up to the grasp.
Therefore, we adopt a policy of searching for a suitable posture sequence in advance. Instead of manually specifying a grasping posture in a top-down manner, we repeat trial and error over various grasping methods. However, there is a major problem with this approach: as the number of target postures increases, the dimension of the parameter space to be searched grows, making it unrealistic to obtain an appropriate solution. Therefore, we decided to find an appropriate grasping method by limiting the end-effector postures to be searched to two kinds: a via posture and a grasping posture.
A proper grasping motion seems to depend partly on the shape of the end-effector. Therefore, in the course of the grasping trials, we try to clarify two aspects: elements common to two-fingered hands in general, and elements dependent on the specific end-effector. If the former can be identified, appropriate grasping motions can be expected to be found through a realistic number of trials even when a different end-effector is used.
Search for a posture pair
A posture of the end-effector is represented by six parameters \((x, y, z, \phi , \theta , \psi )\). Therefore, a combination of 12 variables in total would have to be considered to decide an appropriate grasping motion. Even simplified this far, the search space is still high-dimensional.
We select grasping postures according to the following policy. First, the grasping position \((x_g, y_g, z_g)\), fixed by the method in "Fine adjustment of position and orientation" section, is set to the center of the thickest folded hem. Nine posture variables are then defined: \((x_v, y_v, z_v, \phi _v, \theta _v, \psi _v)\) for the via posture and \((\phi _g, \theta _g, \psi _g)\) for the grasping posture. Next, they are randomly varied within pre-defined ranges while grasping the cloth product, and the combinations of variables for successful and failed trials are both recorded. From the results, we identify the regions of the posture parameter space where successful grasps are concentrated and specify the posture parameters that matter most for stable grasping. Then, appropriate ranges of values are selected for these posture parameters, and the centers of the ranges are set as the via posture and grasping posture.
By this procedure, the number of posture variables that must be attended to can be reduced. Posture variables determined in this way are considered to be effective also for hands with a similar mechanical structure but a different fingertip shape. Therefore, when using another two-fingered hand, it is sufficient to search for the two hand postures in the reduced, low-dimensional search space.
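The trial-and-error search described above can be sketched as follows; the parameter ranges and the robot interface execute_grasp() are hypothetical placeholders standing in for the real experiments.

```python
# Sketch: randomly sample via/grasping posture parameters around the detected
# grasping position, execute trials, and log success/failure for later analysis.
import csv
import random

VIA_RANGES = {            # hypothetical search ranges (m, rad)
    "x_v": (-0.05, 0.05), "y_v": (-0.05, 0.05), "z_v": (0.02, 0.10),
    "phi_v": (-0.5, 0.5), "theta_v": (-0.5, 0.5), "psi_v": (-0.5, 0.5),
}
GRASP_RANGES = {"phi_g": (-0.5, 0.5), "theta_g": (-0.5, 0.5), "psi_g": (-0.5, 0.5)}

def sample(ranges):
    return {k: random.uniform(*r) for k, r in ranges.items()}

def run_trials(n_trials, execute_grasp, log_path="grasp_trials.csv"):
    """execute_grasp(via, grasp) -> bool is assumed to command the robot and
    report whether the cloth was lifted without collapsing its folded shape."""
    with open(log_path, "w", newline="") as f:
        writer = None
        for _ in range(n_trials):
            via, grasp = sample(VIA_RANGES), sample(GRASP_RANGES)
            ok = execute_grasp(via, grasp)
            row = {**via, **grasp, "success": int(ok)}
            if writer is None:
                writer = csv.DictWriter(f, fieldnames=list(row))
                writer.writeheader()
            writer.writerow(row)

# Regions of the 9-D parameter space where logged successes concentrate can then
# be inspected, and the centers of those regions fixed as the two postures.
```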
Experimental settings
A NEKONOTE 6 DOF for Academic manipulator manufactured by RT CORPORATION was used as the experimental robot. An Xtion PRO LIVE manufactured by ASUS was used as the three-dimensional range image sensor, and a web camera (BSW32KM04WH) made by Buffalo Co., Ltd. was also used. A color image and a depth image of size \(640 \times 480\) pixels can be acquired from the three-dimensional range image sensor, and a color image of the same size can be acquired from the camera. As shown in Fig. 5, the manipulator was fixed on a table, and the 3D range image sensor was installed at a viewpoint from which the cloth product and the manipulator can be seen from above. The camera was installed at a position from which the hems on the near side of the cloth product are easy to see. As the cloth product, a rectangular towel of size \(340 \times 340\) mm, weight 35.7 g, and thickness 1.23 mm was used. For the grasping task, we folded this towel in four and placed it on the table.
When the folded cloth product is photographed with the camera, the number of hems that can be observed is one or two. As shown in Fig. 6, the types of observable hem can be classified as follows:
(1) The thickest folded hem.
(2) A hem with one gap between the overlapped cloth layers.
(3) A hem with two or more gaps between the overlapped cloth layers.
Given these images, we examined whether the position of the hem to be grasped can be identified from the number of edges detected on each border.
Figure 7 shows the relationship between the type of hem, its position, and the number of edges when only one hem is visible from the camera. Each numerical value is rounded. In the table, "position of the hem" indicates the rough position of the hem as viewed from the sensor. From this table, when the average number of edges detected on the near-side hem is about two, the hem is very likely to be the thickest folded hem, because if it were another hem the average number of edges would be greater than three. Likewise, when a hem lies at the front or the lower right corner and the average number of edges is about three, it can also be identified as the thickest folded hem.
On the other hand, Fig. 8 shows the relationship between the type and position of the hem when two hems are visible on the near side. In this case, the visible hems are divided into the right side and the left side as viewed from the sensor. Naturally, the thickest folded hem never appears on both sides, and the same holds for hems with only one gap. From these results, we found that when the thickest folded hem is visible on the left side, its average number of edges is one or two, and when it is visible on the right side, its average number of edges is three.
These trends basically depend on the number of gaps created by the overlapping cloth. However, although the ratios between the edge counts observed in this experiment are fairly invariant, the counts themselves depend on cloth stiffness, lighting conditions, and so on, and must be calibrated experimentally. Another important point is that if only one hem is visible and its type is (2) or (3) in Fig. 6, the thickest folded hem cannot be identified. Also, when two hems are visible, ambiguity remains between left (3)–right (2) and left (3)–right (3). In such cases, an additional step, such as actively moving the cloth and observing it again, is necessary. In all other cases, however, the position of the thickest folded hem can be inferred even when it is not directly visible by using the relationships in Fig. 8.
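The decision rules implied by Figs. 7 and 8 could be expressed along the following lines; the thresholds are illustrative guesses based on the averages reported above, not the exact values used in the system.

```python
def classify_single_hem(avg_edges, position):
    """One hem visible: decide whether it is the thickest folded hem.

    avg_edges: average number of edges detected along the visible hem (illustrative thresholds).
    position : rough position of the hem as seen from the sensor, e.g. "near", "front", "lower_right".
    """
    if position == "near" and avg_edges <= 2.5:
        return "thickest_folded_hem"
    if position in ("front", "lower_right") and avg_edges <= 3.5:
        return "thickest_folded_hem"
    return "other_or_ambiguous"   # types (2)/(3) cannot be separated from a single view

def classify_two_hems(avg_edges_left, avg_edges_right):
    """Two hems visible: infer which side holds the thickest folded hem."""
    if avg_edges_left <= 2.5:          # left side shows one or two edges on average
        return "thickest_on_left"
    if avg_edges_right <= 3.5:         # right side shows about three edges on average
        return "thickest_on_right"
    return "ambiguous"                 # e.g., left (3)-right (2) vs. left (3)-right (3)
```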
Figures 9 and 10 show examples of hem recognition results obtained with the above procedure: Fig. 9 shows a case where one hem is visible on the near side, and Fig. 10 a case where two hems are visible. The right side of each figure is the output image of the recognition result; the green lines show the extracted contour, the blue lines the identified hems, and the white area the region used to count edges. The hem marked with a red line was recognized as the thickest folded hem.
Fine adjustment of reference data with input data
The processing described in the previous section identifies the position of the thickest folded hem. Next, the grasping position is determined using a particle filter; this section explains the procedure. First, 30 pieces of learning data for grasping-position determination were prepared. To collect these data, a folded cloth product was placed at random within a region 300 mm deep and 500 mm wide in front of the manipulator. The orientation of the cloth was also randomized in the range \(-90^{\circ} < \theta < 90^{\circ}\), with \(\theta = 0^{\circ}\) defined as the case where the thickest folded hem is perpendicular to the robot's forward axis. The number of particles used in the alignment process was set to 250, and the standard deviation of the particles in the prediction step was set empirically to \((x, y, \theta ) = (10\,{\text{mm}}, 10\,{\text{mm}}, 5^{\circ})\).
Examples of alignment using the particle filter are shown in Fig. 11. The red points are the input data, the green points are the learning data, and regions where the two overlap are shown in yellow. The orange points are grasping-position candidates: the unfilled point is the original grasping position linked to the learning data, and the filled point is the grasping position for the input data newly obtained by the alignment process. As can be seen, the original grasping position was moved near the midpoint of the edge, which shows that an appropriate grasping position could be determined.
If the alignment according to Eq. (3) is performed over all existing points (10,000–15,000), the processing time is long. We therefore thinned out the points to be compared, sampling every n-th point during raster scanning and examining the alignment accuracy for each sampling rate. When the points were reduced to 1/20, the alignment accuracy was almost the same as without thinning, while the processing time dropped from about 11 s to about 0.6 s.
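A compact sketch of this alignment step is given below, assuming 2D point sets and a simple nearest-neighbour matching score; the function names, the score, and the subsampling scheme are simplified assumptions rather than the actual implementation of Eq. (3).

```python
import numpy as np

def align_particle_filter(ref_pts, obs_pts, n_particles=250, n_iters=30,
                          sigma=(10.0, 10.0, np.deg2rad(5.0)), stride=20, seed=0):
    """Estimate (x, y, theta) that aligns the reference (learning) points to the observed points.

    ref_pts, obs_pts: (N, 2) arrays of 2D points [mm]. obs_pts is subsampled by `stride`
    to cut the matching cost, mirroring the 1/20 thinning described in the text.
    """
    rng = np.random.default_rng(seed)
    obs = obs_pts[::stride]                                  # raster-order thinning
    particles = np.zeros((n_particles, 3))                   # start around the initial guess

    def score(pose):
        x, y, th = pose
        c, s = np.cos(th), np.sin(th)
        moved = ref_pts @ np.array([[c, -s], [s, c]]).T + np.array([x, y])
        # crude fitness: negative mean nearest-neighbour distance to the observed points
        d = np.linalg.norm(moved[:, None, :] - obs[None, :, :], axis=2).min(axis=1)
        return np.exp(-d.mean() / 5.0)

    for _ in range(n_iters):
        particles += rng.normal(0.0, sigma, particles.shape)     # prediction (diffusion)
        w = np.array([score(p) for p in particles])
        w /= w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w)      # resampling
        particles = particles[idx]
    return particles.mean(axis=0)                                  # estimated (x, y, theta)
```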
Parameter search for via posture and grasping posture
In this subsection, we report experiments that determine appropriate via and grasping postures through actual grasping trials. First, as shown in Fig. 5, a cloth product was placed at a predetermined position in front of the robot, with the thickest folded hem farthest from the robot's base. That is, with the x axis pointing forward, the y axis on the horizontal plane perpendicular to the x axis, and the z axis pointing upward, the hem was oriented parallel to the y axis.
Let \((d_x, d_y, d_z)\) be the via position of the end-effector expressed in the coordinate system of the grasping position, and let \((\alpha _v, \beta _v, \gamma _v)\) and \((\alpha _g, \beta _g, \gamma _g)\) be the roll-pitch-yaw angles of the via posture and the grasping posture, respectively. \(\alpha _g = \beta _g = \gamma _g = 0^{\circ}\) corresponds to grasping from directly above the cloth product with the fingertips parallel to the thickest folded hem. The ranges were limited as follows: \(- \pi /4 \le \{ \alpha _v, \beta _v, \alpha _g, \beta _g \} \le \pi /4\), and \(-30\,{\text{mm}} \le \{ d_x, d_y, d_z \} \le 30\,{\text{mm}}\) for the via position. As for \(\gamma\), a preliminary examination showed that the grasp success rate drops greatly unless its value is close to 0, so \(\gamma = 0\) was fixed.
Within the above ranges, the posture parameters of the end-effector were drawn at random from a uniform distribution, and 100 grasping trials were performed. Whether a grasp was successful was judged as described in the "Successful grasp definition and issues" section: if the robot grasped the thickest folded hem and lifted it without collapsing the shape of the cloth, a human observer judged the trial a success; otherwise it was judged a failure. Of the 100 trials, 50 succeeded and 50 failed. The purpose of this experiment was to find the range of via/grasping postures in which grasping tends to succeed.
Figure 12 shows two graphs plotting success/failure against the wrist roll angle \(\alpha\) on the horizontal axis; blue dots indicate successful grasps and red dots failures. Since no noticeable trend appears in \(\alpha\), we always set \(\alpha =0\) in the experiment of the next subsection. Figures 13 and 14 show the results for the four posture parameters considered to have a large influence on grasp success: the two positional parameters \((d_x, d_z)\) and the pitch angles \((\beta _v, \beta _g)\). Figure 13 plots the samples with \(\beta _v < \beta _g\); blue marks are successes, red marks failures, circles indicate via postures, and triangles connected by a line indicate the grasping postures shifted from the via postures. These graphs show that successful grasps are concentrated when the via posture starts from the area enclosed by the green square, which corresponds to a motion that slides a fingertip between the cloth product and the desk. Figure 14 plots the samples with \(\beta _v > \beta _g\); successes are concentrated in the square region of the figure, which corresponds to a grasping method in which the cloth product is first pressed down with one fingertip and the other fingertip is then hooked on a hem. From these results, it is appropriate to shift the posture so that the cloth product is pressed down by a fingertip approaching from the far side as viewed from the robot (\(\beta _v > \beta _g\) and \(d_x > 0\)).
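Given the recorded trials, the promising region can be located with a simple filter such as the sketch below; the field names are hypothetical, and the condition (\(\beta _v > \beta _g\) and \(d_x > 0\)) follows the qualitative conclusion above rather than exact boundaries.

```python
import numpy as np

def region_success_rates(trials):
    """trials: list of dicts with keys 'd_x', 'd_z', 'beta_v', 'beta_g', 'success' (bool).

    Splits the recorded trials into those where the fingertip approaches from the far
    side and presses the cloth down (beta_v > beta_g and d_x > 0) and all others, and
    returns the empirical success rate of each subset.
    """
    inside, outside = [], []
    for t in trials:
        bucket = inside if (t["beta_v"] > t["beta_g"] and t["d_x"] > 0) else outside
        bucket.append(t["success"])
    rate = lambda xs: float(np.mean(xs)) if xs else float("nan")
    return rate(inside), rate(outside)
```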
Experiment with integrated system
Based on the experiments described in the "Orientation estimation of cloth products", "Fine adjustment of reference data with input data", and "Parameter search for via posture and grasping posture" sections, a robot system that performs everything from detection of the cloth product to grasping was constructed and evaluated. Using the same placement method as in the learning-data collection described in the "Fine adjustment of reference data with input data" section, a folded cloth product was placed at random in front of the robot, and we investigated whether grasping could be carried out consistently, including detection of the thickest folded hem, determination of the grasping position, and determination of the via/grasping postures.
As described in the "Parameter search for via posture and grasping posture" section, end-effector poses with a high grasp success rate had already been identified. For the proof experiment reported here, the average values of the via/grasping posture parameters (\(d_x, d_z, \beta _v\), and \(\beta _g\)) in the light blue area shown in Fig. 13 were used. That is, the relative via/grasping posture with respect to the cloth product was determined from these average values, and the grasping motion was executed through inverse kinematics according to the estimated pose of the cloth product. The cloth product was folded in four, as shown in Fig. 1. The result was 46 successes and 4 failures out of 50 trials; all failures occurred because the inverse kinematics of the robot arm could not be solved.
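A minimal sketch of how the commanded end-effector pose could be composed from the estimated cloth pose and the learned relative offsets is shown below, assuming a planar cloth pose \((x, y, \theta)\); the helper names and frame conventions are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def grasp_pose_from_cloth(cloth_xytheta, rel_offset_xyz, rel_pitch):
    """Compose a target end-effector pose from the estimated cloth pose and learned offsets.

    cloth_xytheta : (x, y, theta) of the thickest folded hem in the robot base frame [mm, rad]
    rel_offset_xyz: learned (d_x, d_y, d_z) offset expressed in the hem frame [mm]
    rel_pitch     : learned pitch angle beta for the end-effector [rad]
    """
    x, y, th = cloth_xytheta
    dx, dy, dz = rel_offset_xyz
    c, s = np.cos(th), np.sin(th)
    # rotate the learned offset into the base frame and add the hem position
    gx = x + c * dx - s * dy
    gy = y + s * dx + c * dy
    gz = dz
    # the end-effector yaw follows the hem direction; roll is fixed to 0 as in the experiments
    return (gx, gy, gz, 0.0, rel_pitch, th)   # (x, y, z, roll, pitch, yaw)

# The resulting pose would then be handed to the arm's inverse kinematics solver;
# trials for which no IK solution exists are counted as failures, as in the experiment.
```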
In this paper, we described a method for picking up a folded cloth product with a single-armed robot. We focused on the problems involved in picking up a folded cloth and organized the tasks needed to address them. We then proposed a grasp-position estimation method composed of two stages: detection of the thickest folded hem and pose estimation of the cloth product. In addition, we searched for appropriate grasping postures and found regions in which the grasp success rate is high. In experiments using a real robot, we achieved the picking task with a 92% success rate.
As future work, we plan to apply the proposed methods to other types of folded cloth products. The same experiments should also be performed with other single-arm robots. Furthermore, the proposed method should be improved so that robots can grasp a cloth product even when multiple cloth products overlap.
YM and DT implemented the proposed method and carried out actual experiments. KY and KT proposed the method and wrote the paper. All authors read and approved the final manuscript.
No funding.
Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Faculty of Engineering, Nagano, Nagano, Japan
Toyota Motor Corp., Toyota, Aichi, Japan
Labor market mobility and the early-career outcomes of immigrant men
Mohsen Javdani (ORCID: orcid.org/0000-0002-5341-6881) and Andrew McGee
We examine the role of between- and within-firm mobility in the early-career outcomes of immigrant men. Among Canadian workers with less than 10 years of potential experience, we find that visible minority immigrants were significantly less likely to have been promoted with their initial employers than similar white natives but were just as likely to have moved to new employers over the course of a year between interviews. White immigrants, on the other hand, were just as likely to be promoted as white natives but much more likely to move to new employers—suggesting that they enjoyed more overall mobility than white natives and other immigrants. We present tentative evidence linking these mobility patterns to differences in wage growth and occupational change between immigrants and natives. Overall, our findings suggest that the between- and within-firm mobility of white immigrants may play an important role in their relative economic success in Canada, while adding to growing evidence that visible minority immigrants experience frictions in the labor market that hinder their mobility and thus their economic prospects.
JEL Classification: J61, J71
Integrating immigrants into the labor market is a key policy objective in countries with large and growing immigrant populations like Canada, which admitted over five million immigrants between 1995 and 2010 (Statistics Canada 2016) and where 20.6% of residents were born abroad (Chui 2013). Given that many immigrants experience a decline in occupational status when immigrating, studies such as Green (1999), Chiswick et al. (2005), and Abramitzky et al. (2014) have highlighted the importance for immigrants of (upward) occupational mobility.Footnote 1 In Canada, the first years following immigration have been shown to be particularly important for immigrants in moving to jobs in their preferred occupations (Grenier and Xue 2011).
In this study, we note that occupational mobility requires job mobility and investigate native-immigrant differences in job mobility early in the career through moves to new employers and within firms via promotions. We then assess the extent to which differences in within- and between-firm mobility contribute to differences in wage growth and occupational mobility between early-career immigrants and natives. To the best of our knowledge, ours is the first study of native-immigrant differences in both within- and between-firm mobility.Footnote 2 Furthermore, while an extensive literature documents gender and race differences in promotion outcomes, we provide the first evidence of native-immigrant differences in the rates of and wage returns to promotions.
Our focus on the within- and between-firm mobility of early-career immigrants is further motivated by three important facts from the labor literature. First, there is evidence that immigrants in Canada may encounter search frictions that limit their between-firm mobility. Aydemir and Skuterud (2008) find that the native-immigrant pay gap in Canada can be largely explained by differences in employers as natives tend to be employed in higher wage firms. This suggests that immigrants may face barriers to mobility keeping them in jobs with low-paying firms.Footnote 3 Along these lines, Oreopoulos (2011) in a resume field experiment finds that the resumes of skilled immigrants in Canada were less likely to elicit contacts from potential employers than resumes from similar natives, while Bowlus et al. (2016) provide evidence of frictions faced by immigrants to Canada in a structural model of job search.
Second, search and matching models imply that all workers (natives and immigrants) benefit on average from early-career moves as they find better matches to their skills (Burdett 1978; Jovanovic 1979)—which is likely even more true for immigrants looking to upgrade their occupations to match their skills. Topel and Ward (1992) found that the average man in the USA has seven jobs in the first 10 years in the labor market and that these movements between jobs account for at least a third of early-career wage growth.Footnote 4 Thus, differences in mobility early in the career may translate into significant differences in earnings over time.
Finally, our interest in within-firm mobility via promotions follows from the observation that workers need not change employers to find better job matches, and an important avenue for changing jobs with an employer is through promotions—which also tend to be concentrated early in the career (Javdani and McGee 2017). Furthermore, promotions are important drivers of wage growth. Among many others, Pergamit and Veum (1999), Cobb-Clark (2001), Francesconi (2001), Blau and DeVaro (2007), and Kosteas (2009) find that promotions are associated with, on average, 5 to 12% increases in wage growth, while McCue (1996) finds that promotions account for 9% of total wage growth in the first 10 years of the career for white men. As a consequence, the failure of immigrants to keep pace with natives in climbing the corporate ladder via promotions early in the career may contribute to native-immigrant wage gaps.
Using a sample of male workers in Canada from 1999 to 2004 from the Workplace and Employee Survey (WES), we estimate the probabilities that natives and immigrants make different transitions over the course of a year between interviews. From one interview to the next, workers enter unemployment, remain with their initial employers without being promoted, remain with their initial employers having been promoted, or move to new employers. We find that visible minority immigrants are 15 percentage points less likely to have been promoted with their current employers but are just as likely to have moved to new employers as white natives. White immigrants, on the other hand, were just as likely to have been promoted with their initial employers as white natives but were as much as 12 percentage points more likely than white natives to move to new employers—suggesting that white immigrants are more mobile than both white natives and visible minority immigrants.
We then examine whether the greater between-firm mobility of white immigrants and lower within-firm mobility of visible minority immigrants translate into differences in wage growth relative to white natives. We note, however, that our analysis of the contributions of mobility to wage growth gaps is necessarily speculative given that the gaps themselves are not precisely estimated. In our sample, the wages of white (visible minority) immigrants grow by 9.5 (6.6) percent on average between interviews compared to 8.3% among white natives. Mobility is clearly related to wage growth in our sample as moves to new employers and promotions are associated with wage growth between interviews of 15 and 2.5%, respectively. Using Oaxaca-Blinder decompositions of the native-immigrant wage growth gaps, we find that—while imprecisely estimated—differences between white immigrants and natives in the rates of moves to new employers and promotions can account for a difference in wage growth equal to the whole wage growth gap—driven by white immigrants' higher likelihood of moving to new employers. By contrast, visible minority immigrants' slightly higher inter-firm mobility and the large wage gains to employer changes offset the negative effect on wage growth of their lower promotion rates, which renders the total contribution of mobility to the wage growth gap between white natives and visible minority immigrants close to zero.
Finally, we examine the relationship between within- and between-firm mobility and occupational mobility early in the career in light of evidence that immigrants to Canada experience occupational "downgrading" upon arrival.Footnote 5 Here too, the importance of between-firm mobility for white immigrants is evident. Some 91% of white immigrants changing employers switch occupations compared to only 71% of white natives and 81% of visible minority immigrants. As a consequence of their greater likelihood of changing employers and changing occupations conditional on changing employers, 26% of white immigrants in our sample change occupations between interviews compared to only 16% of white natives and 19% of visible minority immigrants. Both moves to new employers and promotions are important channels for changing occupations early in the career in our sample, and white immigrants are particularly good at using the former channel to change—and likely upgrade—occupations.
While Aydemir and Skuterud (2008), Oreopoulos (2011), and Bowlus et al. (2016) provide evidence consistent with visible minority immigrants in Canada encountering job search frictions not encountered by white Canadian-born workers, our primary contribution is to provide direct evidence of actual differences between natives and immigrants in mobility. These differences are surprising because the Canadian immigration system imposed few mobility constraints. Newly arrived permanent residents were not for the most part tied to employers or regions. Likewise, immigrants in Canada on "open work permits" could change employers without restriction, while immigrants on employer-specific work permits needed only apply for a new work permit to change employers. Moreover, work permits could be renewed indefinitely as long as the worker remained employed.
Our second contribution is to note the marked differences among immigrants in mobility patterns and the potential importance of these differences to the diverging fortunes of different groups of immigrants. Specifically, ours is the first study to highlight the role of between-firm mobility in the relative economic success of white immigrants to Canada. This is particularly important for interpreting the existing evidence on mobility among immigrants to Canada. Notably, Skuterud and Su (2012) provide evidence that immigrants to Canada were less likely than natives to transition into high-wage jobs (and more likely to transition out of these jobs), but equally likely to transition into low-wage jobs. Our findings suggest that these patterns may be driven by visible minority (and not white) immigrants.
In terms of interpreting these mobility patterns, we note that the fact that visible minority immigrants were just as likely to move to new employers as white natives should not be taken as evidence against the existence of search frictions. If visible minority immigrants are not promoted at rates commensurate with their skills, presumably they should be pursuing outside options more than white natives. Likewise, immigrants may have stronger incentives than natives to move to new employers in order to upgrade occupations. Both possibilities suggest that visible minority immigrants should be moving to new employers with greater frequency than white natives, as we observe among white immigrants. Indeed, Oreopoulos' findings imply that even if visible minority immigrants were sending out resumes at a rate similar to white immigrants, they would generate fewer contacts with employers and thus transition to new employers at a lower rate.
We consider three potential explanations for the mobility patterns that we observe: unobserved productivity differences, taste-based discrimination on the part of employers, and information asymmetries and other search frictions. Visible minority immigrants may be less likely to be promoted if they are less productive than natives in ways unobserved by the econometrician. For instance, immigrants may have language difficulties that limit their prospects for promotion. Splitting our sample by age-at-immigration, however, we find that immigrants who arrived in Canada as children actually fare worse where promotions are concerned. Alternatively, unobserved differences among visible minority immigrants in our sample may have resulted from changes in Canadian immigration policy in the early 1990s that prioritized admitting skilled immigrants as opposed to immigrants with family ties. While we find that visible minority immigrants who arrived before the shift to an immigration policy focusing on admitting skilled immigrants (most of whom arrived as children) fare worse in terms of promotion probabilities than those who arrived after this policy change, the difference in promotion probabilities is not statistically significant.
The lower promotion rates among visible minority immigrants could also arise if employers prefer to promote white workers (natives and immigrants) rather than visible minorities. If this were the case, however, then high-ability visible minority immigrants ought to be more likely to move to new employers as competition gives firms incentives to hire away talented visible minorities experiencing discrimination in promotion outcomes. As a consequence, visible minority immigrants ought, on average, to be more likely to change employers than white immigrants and natives. In our sample, however, visible minority immigrants move to new employers at a similar rate as white natives. Nevertheless, the struggles of visible minority natives observed in our sample in terms of wage growth suggest that taste-based discrimination may play a role in the outcomes of visible minorities regardless of whether they are immigrants.
Of course, taste-based discrimination could persist without visible minority immigrants being more likely to change employers if search frictions exist that prevent visible minority immigrants from moving to new employers. For instance, visible minorities may lack networks for job search or be less familiar with how job search works in Canada. Alternatively, potential employers may have less information about visible minority immigrants. In the "invisibility hypothesis" of Milgrom and Oster (1987), this information asymmetry between current and prospective employers gives employers incentives to "hide" employees about whom the market has less information by denying them promotions that are assumed to convey positive information about the worker to other employers. We find some evidence consistent with such information asymmetries. Specifically, we find that visible minority immigrants about whom the market likely has more information—those with a bachelor's degree or higher—enjoy similar mobility between- and within-firms and wage returns to this mobility as white natives. Ultimately, however, establishing whether information asymmetries or other search frictions lead to the mobility differences between visible minority immigrants and white immigrants and natives requires further investigation, an issue we discuss in the conclusion.
The remainder of the paper proceeds as follows. Section 2 discusses the data as well as the implications of our sample selection criteria. Section 3 discusses the Canadian immigration policies that affected immigrants in our sample and their implications for mobility. Section 4 presents our main findings. Section 5 concludes and poses the questions for future research.
Our sample is drawn from the Workplace and Employee Survey (WES), a longitudinal survey of employers and their employees collected by Statistics Canada between 1999 and 2006. In every year, a representative sample of approximately 6000 employers was surveyed.Footnote 6 A maximum of 24 employees were interviewed from each sampled firm in each odd year and re-interviewed the following year regardless of whether they remained with their initial employer.Footnote 7 The employee sample is representative of the Canadian workforce in the target population of employers when properly weighted, and all of our analysis incorporates sample weights from Statistics Canada. While a longer longitudinal dimension would have been preferred, the WES is particularly well-suited for our study insofar as promotions and moves to new employers between interviews are well-measured.
Three dependent variables are used in our study. First, we use a categorical variable that identifies the transition made by each worker between interviews to study native-immigrant differences in within- and between-firm mobility. A worker either transitions to unemployment (i.e., the employee has left the initial employer and does not have a new employer—including self-employment), transitions to a new employer, remains with the initial employer and has been promoted since the first interview, or remains with the initial employer without having been promoted. Changes in pay and responsibilities are thought to be the distinguishing features of promotions (Pergamit and Veum 1999), and our data identify promotions using precisely these two features. Specifically, whether the employee has been promoted between interviews is based on the questions: "Have you ever been promoted while working for this employer? (By promotion we mean a change in duties/responsibilities that lead to both an increase in pay and the complexity or responsibility of the job)" and "When did your most recent promotion occur?"Footnote 8 The caveat that a promotion must entail a change in job complexity or responsibility is important insofar as our interest in promotions stems largely from their role in enabling workers to change occupations. Second, we use the change in the worker's log-hourly wage between interviews to examine the extent to which differences in mobility contribute to differences in wage growth. Third, we create an indicator that equals one when a worker changes occupations between interviews to study the relationship between occupational mobility and between- and within-firm mobility.Footnote 9
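A sketch of how these three outcomes could be constructed from two linked interviews is shown below; the column names are hypothetical, since the WES microdata are only accessible through Statistics Canada.

```python
import numpy as np
import pandas as pd

def build_outcomes(df):
    """df: one row per worker, with hypothetical boolean/numeric columns from the two interviews:
    employed_t2, same_employer, promoted_since_t1, wage_t1, wage_t2, occ_t1, occ_t2."""
    out = df.copy()
    # (1) categorical transition between interviews
    conditions = [
        ~out["employed_t2"],
        out["same_employer"] & out["promoted_since_t1"],
        out["same_employer"] & ~out["promoted_since_t1"],
        ~out["same_employer"] & out["employed_t2"],
    ]
    labels = ["unemployed", "stay_promoted", "stay_not_promoted", "new_employer"]
    out["transition"] = np.select(conditions, labels)
    # (2) log-wage growth between interviews
    out["dlnw"] = np.log(out["wage_t2"]) - np.log(out["wage_t1"])
    # (3) occupation-switch indicator
    out["occ_switch"] = (out["occ_t1"] != out["occ_t2"]).astype(int)
    return out
```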
Our main analysis is based on the pooled 1999, 2001, and 2003 cross sections of employees; the 2005 cross section cannot be used because WES did not field an employee survey in 2006. We restrict our sample to non-aboriginal men who were interviewed twice with less than 10 years of potential labor market experience (defined as age minus years of schooling minus six).Footnote 10 The full sample used to study labor market transitions includes observations from 4907 men after the sample restrictions are imposed—including 260 (266) white (visible minority) immigrants. When studying wage growth and occupation switching, we further restrict the sample to workers who are employed at both interviews (i.e., dropping men who transition to unemployment) resulting in a sample with observations from 4585 men.
We focus on early-career workers for three reasons. First, most job shopping occurs early in the career (e.g., Topel and Ward 1992, van der Klaauw and Dias da Silva 2011). Likewise, most promotion activity occurs early in the career among Canadian workers in the WES (Javdani and McGee 2017). Second, focusing on workers who enter Canada before or shortly after entering the workforce allows us to abstract from issues arising from differences in the returns to labor market experience acquired in different countries that complicate native-immigrant comparisons among older workers. Third, focusing on workers beginning their careers over a single decade enables us to abstract to some extent from differences in macroeconomic conditions upon labor market entry that have affected long-run immigrant and native career trajectories in other cohorts (Green and Worswick 2012).
We create indicators for being a Canadian-born visible minority, a white immigrant, or a visible minority immigrant with white Canadian-born workers serving as the reference category because significant differences exist among both natives and immigrants of different races.Footnote 11 Our controls include the highest level of schooling, the number of dependent children, an indicator for marital status, a quadratic in age, a quadratic in years of (actual) full-time labor market experience, a quadratic in years of seniority with the current employer, an indicator for full-time employment, an indicator for membership in a union or collective bargaining agreement, an indicator for the language spoken at work being different from the language spoken at home, occupation (six categories), industry (14 categories), and the worker's place in the firm-level wage distribution.Footnote 12 We control for the worker's standing in the firm-level wage distribution as a proxy for the worker's position in the firm's hierarchy because different hierarchical levels could be associated with different rates of transitions and the returns to these transitions.
Before discussing the summary statistics, some discussion of our sample selection rules is warranted. Workers who are not interviewed a second time ("attriters") are eliminated from our analysis because we do not observe their employment transitions, wage growth, or occupation switching between interviews. Workers may not be re-interviewed for the usual reasons (e.g., refusals, inability to locate), but immigrants may also attrit because they return to their home country (i.e., population attrition). Systematic, unobserved differences between natives and immigrants may bias our estimates if return migration is correlated with the unobserved attributes of immigrants. To assess the potential importance of non-random attrition, Appendix: Table 9 reports the estimated marginal effects from a probit model of the probability of attrition for visible minority Canadian-born workers, white immigrants, and visible minority immigrants observed in the first (odd year) interview using different sets of controls. While both white and visible minority immigrant men are more likely to attrit than white natives, the difference is only statistically significant for white immigrants.Footnote 13 If this attrition is due to population attrition, our findings should be interpreted as applying to the population of white immigrants who remain in Canada—presumably the population of interest in the long run.Footnote 14
Alternatively, it may be the case that workers who change employers between interviews are harder to locate than workers who remain with their initial employers. Attrition along these lines would imply that our estimates understate the between-firm mobility of white immigrants. We have two reasons, however, to doubt that "movers" were more likely to attrit than "stayers." First, the WES documentation indicates following workers who changed employers between interviews was one of the objectives of the survey (Krebs et al. 1999). Second, workers who consented to be interviewed in the odd year submitted forms with their contact information. After 2000, all interviews were done over the phone. The initial employer played no role in contacting workers for the second interview (Krebs et al. 1999). Nevertheless, we acknowledge that there is some potential that our estimates understate the between-firm mobility of immigrants.
Figure 1 details the proportions of each group making particular transitions. Most strikingly, only 20% of visible minority immigrants in our sample were promoted with their initial employers between interviews relative to 34% of white, Canadian-born men—a difference that is statistically significant at the 1% level.Footnote 15 Visible minority immigrants were not significantly more likely to move to new employers relative to white natives (15 versus 12%), but 58% of visible minority immigrants simply remained with their initial employers without being promoted relative to only 46% of white natives—again a statistically significant difference. By contrast, white immigrants were nearly as likely as white natives to be promoted when remaining with their initial employer but significantly more likely to move to new employers.
Labor market transitions by group. Notes: The figure displays the weighted fraction of each group making a particular transition using the employee weights provided by Statistics Canada along with 95% confidence intervals
Table 1 reports summary statistics for each group. In addition to the differences in mobility observed in Fig. 1, immigrants and natives differ in both their wage growth and the rates at which they switched occupations. Early-career white natives experienced wage growth of 8.3% between interviews relative to only 6.6% for visible minority immigrants. White immigrants, by contrast, experienced 9.5% wage growth between interviews.Footnote 16 While not statistically significant at conventional levels, the economic significance of these wage growth gaps early in the career could be considerable. Finally, more than a quarter of white immigrants changed occupations between interviews relative to only 16% of white natives.
Table 1 Summary statistics
Table 1 also makes it clear that controlling for observed characteristics may be important as immigrants and natives differ significantly on several dimensions. Consistent with Canada's bias in favor of skilled immigrants discussed in the next section, 51% of visible minority immigrants and 44% of white immigrants in our sample had a bachelor's degree or higher compared to only 19% of white natives. Given that the sample restriction is based on potential experience and immigrants spend more years in school, both visible minority and white immigrants were also on average approximately 2 years older than natives in our sample.
Finally, immigrants were distributed very differently across industries and occupations than their white native peers. For instance, nearly 32% of visible minority immigrants in our sample worked in finance and insurance or business services compared to only 15% of white natives. Similarly, more than 40% of white and visible minority immigrants worked either as managers or as professionals while less than 38% worked in technical occupations or the trades. By contrast, only 25% of white natives worked as managers or professionals while 50% worked in technical occupations and the trades. In Section 4, we examine whether these observed differences between natives and immigrants can explain the unconditional native-immigrant differences in mobility, wage growth, and occupation switching.
Immigration policy in Canada
Immigrants in our sample arrived in Canada between 1966 and 2002. In this section, we briefly discuss the key features of and changes to Canadian immigration policy in this period and the likely implications for immigrants' mobility. In 1967, a points system to score applicants based on characteristics such as education, age, language, and occupation was introduced to provide an objective standard for admission to Canada. Three main admission classes were established: economic-class applicants whose eligibility was evaluated solely based on the point system, nominated relatives who were assessed under the point system but received bonus points based on kinship, and family class applicants who were admitted solely on family ties.
In 1978, a new Immigration Act prioritized the admission of family members and refugees—thereby reducing the share of immigrants admitted under the economic class, who already constituted a small share of admitted applicants. Further changes in 1982 limited the admission of economic class applicants to those with pre-arranged employment, but these restrictions proved to be short-lived. Concerns about Canada's low fertility rate and an aging population in 1986 resulted in the elimination of the pre-arranged employment requirement for economic class applicants and a substantial increase in immigration levels with the number of immigrants admitted annually rising from 83,000 in 1985 to 99,000 in 1986 and ultimately to 250,000 by 1993 (Green and Green 1999).
In the early 1990s, Canada's immigration policy moved from emphasizing family reunification and short-term occupational needs to an emphasis on growing the country's stock of human capital. To this end, the share of family class immigrants was reduced in favor of economic class immigrants even as annual inflows of immigrants remained stable at about 1% of the population (Green and Green 1999). As a consequence, the composition of immigrants to Canada changed substantially in this period with significant increases in the average education level of newly arrived immigrants and the number of visible minority immigrants. This shift in policy has implications for the immigrants in our sample. Immigrants who entered Canada in the 1990s and early 2000s either as dependent children or applicants would likely have been selected based on their (or their parents') skill levels, while immigrants who entered Canada as young children prior to the 1990s would not necessarily come from families with high skill levels.Footnote 17 The effect on mobility of the change in immigration regimes, however, is unclear as child immigrants in our sample who entered under the former policy would also benefit from greater language acquisition and cultural assimilation.
In the late 1990s, Provincial Nominee Programs (PNP) were introduced that allowed provincial governments to nominate applicants for immigration based on the provinces' labor market needs; the federal government remains responsible for admitting nominees. Most of the PNPs—which differ by province in their particulars—require an applicant to work in the nominating province for a period of time on a temporary work permit before applying, but immigrants are not tied to a specific employer provided they remain in the province.Footnote 18 As such, immigrants entering Canada through the PNPs might be expected to be less mobile between firms than other workers as the universe of potential employers is restricted. These PNP restrictions on mobility, however, were unlikely to have affected many immigrants in our sample as the first PNP came into effect in 1998, and the fraction of immigrants entering Canada in our sample period was trivial. In 1999, for instance, only 477 immigrants entered Canada under the PNPs, and less than 3% (6248) of immigrants in 2004 were admitted via PNPs (Citizenship and Immigration Canada 2011).
To summarize, no immigration policies in place during our sample period restricted the mobility of immigrants between employers within a given province. In the period in which immigrants in our sample entered Canada, however, immigration selection procedures changed significantly, and the composition of the immigrant population changed significantly as well. In the next section, we examine whether the change in immigration selection procedures led to unobserved changes in immigrants that affected immigrants' mobility.
Immigration and between- and within-firm mobility
We first estimate multinomial logit models of the probabilities of making each transition between interviews. Each panel of Table 2 concerns a single transition. The first row of each panel reports the predicted probability of the transition for white, Canadian-born men. Below this predicted probability, each row reports the estimated difference between the predicted probabilities of making the transition for the specified minority group and white natives. Column (1) of Table 2 reports the estimates including only indicators for group membership. Columns (2) to (6) add controls for worker and job characteristics, occupation, industry, and the worker's position in the firm's wage distribution.
Table 2 Multinomial logit estimates of transition probabilities
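The estimation reported in Table 2 could be sketched as follows, assuming statsmodels as the tooling (the paper does not state its software) and hypothetical variable names; survey weights and the variance estimation required for the WES are omitted for brevity.

```python
import pandas as pd
import statsmodels.api as sm

def fit_transition_mnl(df, control_cols):
    """Multinomial logit of the transition category on group indicators and controls.

    df is assumed to hold 0/1 indicators 'vm_native', 'white_immigrant', 'vm_immigrant'
    (white natives are the omitted group), a label column 'group', and numeric controls.
    """
    group_cols = ["vm_native", "white_immigrant", "vm_immigrant"]
    X = sm.add_constant(df[group_cols + control_cols].astype(float))
    y = pd.Categorical(df["transition"]).codes     # unemployed / promoted / stayed / moved
    res = sm.MNLogit(y, X).fit(disp=False)
    # average predicted probabilities by group approximate the entries of Table 2
    probs = pd.DataFrame(res.predict(X), index=df.index)
    return res, probs.groupby(df["group"]).mean()
```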
A worker's occupation and industry may be endogenous if workers select into particular industries and occupations based on unobserved characteristics related to their employment transitions. Likewise, the worker's standing in the employer's wage distribution may be endogenous insofar as it likely reflects earlier transitions. None of the controls, however, appreciably affect the estimated differences in the probabilities of each transition between white native men and the members of each minority group.
Conditional on worker and job characteristics in column (2), the estimated probability of transitioning to unemployment between interviews for white native men in their first 10 years in the labor market is 0.056. The probabilities of transitioning to unemployment for members of every other group are statistically indistinguishable from and within 1 percentage point of the estimated probability for white natives in each specification in columns (2) to (6). This is particularly important insofar as we restrict our sample to workers who remain employed in the second interview in subsequent analysis relating transitions to wage growth. The similar estimated probabilities of transitioning to unemployment across groups suggest that this restriction is unlikely to materially affect our inferences.
The second panel of Table 2 reports the predicted probabilities of remaining with the initial employer and being promoted. Similar to the unconditional difference in Fig. 1, visible minority immigrants are approximately 15 percentage points less likely to remain with their initial employer having been promoted than white natives in columns (2) to (6). While we also find that visible minority natives and white immigrants are less likely to have been promoted than similar white natives, these differences are never statistically distinguishable from zero.
In the third and fourth panels, we find that visible minority immigrants are approximately 8 percentage points more likely to remain with the initial employer without being promoted and 6 percentage points more likely to move to a new employer between interviews than similar white natives—although neither difference is statistically significant.Footnote 19 By contrast, white immigrants are nearly 12 percentage points more likely to move to new employers between interviews than white natives—a difference that is statistically distinguishable from zero at the 10% level in most specifications and highlights the differences in mobility among immigrants with different ethnic backgrounds. While white immigrants exhibit a higher degree of interfirm mobility than white natives, they enjoy a similar probability of promotion when remaining with their initial employers. Visible minority immigrants, on the other hand, are also slightly more likely than white natives to move to new employers but are much less likely than white natives to be promoted when remaining with their initial employers. Differences between the groups in observed characteristics cannot explain the differences in early-career mobility evident in Fig. 1.Footnote 20
Before examining the role of these mobility differences in native-immigrant differences in wage growth and occupational change, we briefly consider some potential explanations for the native-immigrant differences in mobility observed in Table 2. Immigrating at an earlier age presumably leads to greater language competency, which has been shown to affect native-immigrant wage differentials (e.g., Chiswick and Miller 1995, Dustmann and van Soest 2002, Bleakley and Chin 2004, Adsera and Ferrer 2015).Footnote 21 Researchers have speculated that a "critical age" exists after which perfect language acquisition (i.e., vocabulary, syntax, accent) is impossible (Singleton and Lengyel 1995). Using age 9 as a rough benchmark for the critical age in Canada, we estimate the transition probabilities for immigrants who immigrated before age 9 and those who immigrated after age 9.Footnote 22
Table 3 reports these estimates by age-at-immigration.Footnote 23 Perhaps surprisingly, visible minority immigrants who arrived in Canada before age 9 are an estimated 27 percentage points less likely than similar white natives to be promoted with the initial employer between interviews while being 16.8 percentage points more likely to move to new employers. Visible minority immigrants who move to Canada after age 9, on the other hand, are only 9.3 percentage points less likely to have been promoted. Language competency appears unlikely to be the dominant factor underlying the struggles of visible minority immigrants in internal labor markets.
Table 3 Transition probabilities by age-at-immigration
Complicating the interpretation of the estimates in Table 3, however, is the shift in Canadian immigration policy in the early 1990s discussed in the previous section. Most (but not all) of the early-career immigrants in the WES who arrived in Canada before age 9 would have arrived under the older policy placing less emphasis on the skill of their parents, while most (but not all) immigrants who arrived after age 9 would have been admitted after Canada began emphasizing immigrants' skill. If unobserved skill levels are correlated within families, the immigrants in our sample who arrived early in life might be less skilled in unobserved senses than other immigrants. In Table 4, we allow the transition probabilities for immigrants to depend on whether immigrants entered Canada before or after 1993.Footnote 24 The estimates suggest that visible minority immigrants who entered Canada before 1993 and after 1993 were 16.2 and 12.5 percentage points less likely than white natives to have been promoted between interviews, respectively—a difference that is not statistically significant. This suggests that the change in immigration policy cannot explain the lower promotion rates of visible minority immigrants observed in Table 2.
Table 4 Transition probabilities by year of immigration
Alternatively, the struggles of visible minority immigrants in internal labor markets may stem from potential employers discounting the signaling value of immigrants' foreign credentials. If so, immigrants with more credentials to be discounted—more educated immigrants—might experience the greatest impediments to between- and within-firm mobility relative to similar white, Canadian-born men. To test this hypothesis, we report in Table 5 estimates from separate multinomial logit models for workers with and without a bachelor's degree or higher. For visible minority and white immigrants with bachelor's degrees, we fail to reject the null hypotheses that their transition probabilities are identical to those of white natives in column (1). Among workers without a bachelor's degree in column (2), however, visible minority immigrants are an estimated 19.4 percentage points less likely to be promoted while remaining with the initial employer without being significantly more likely to move to new employers than white natives. Insofar as internal mobility is concerned, credential discounting does not appear to drive the immigrant-native differences in mobility observed in Table 2.
Table 5 Multinomial logit regressions by education
That less-educated visible minority immigrants in Canada struggle in internal labor markets is consistent with Milgrom and Oster's (1987) "invisibility hypothesis." Milgrom and Oster assume that potential employers possess less information about the ability of disadvantaged workers—rendering such workers "invisible" to potential employers. Promotions are assumed to convey (positive) information about these workers to other employers. Current employers with private information regarding their high-ability but "invisible" workers have incentives to conceal them by limiting their promotion opportunities. This suppresses the signals of ability promotions send to competing employers and prevents these workers from being bid away by other firms.
If employers have less information about the productivity of visible minority immigrants—particularly less educated ones—and promotions signal ability to asymmetrically informed firms, visible minority immigrants would be less likely to be promoted compared to white natives, as we document, given employers' incentives to "conceal" these workers. This would be less likely to be true for white immigrants—many of whom come from the USA and Commonwealth countries—about whom employers may have better information.Footnote 25 Furthermore, the lower probability of promotion for visible minority immigrants would not necessarily lead to a disproportionately higher probability of between-firm mobility for visible minority immigrants because other employers possess less information about high-ability, visible minority immigrants and thus would be less likely to offer wages higher than their current employers.
Alternatively, other search frictions—including taste-based discrimination—may limit the outside opportunities of visible minority immigrants. Employers may decline to promote such workers precisely because the employer does not need to compete with outside offers. We discuss the need for further research on the nature of search frictions experienced by visible minority immigrants in the conclusion.
Mobility and wage growth
To establish the importance of mobility to wage growth in our sample, panel A of Table 6 reports estimates from log-wage growth models in which we regress the change in log-hourly wages between interviews on indicators for whether workers have been promoted with their initial employers or changed employers between interviews as well as different sets of controls. Men who remain with their initial employers without having been promoted serve as the reference group, and the sample necessarily excludes those workers who transition to unemployment. Moving to new employers between interviews is associated with wage growth of 15.7% in our sample controlling for worker and job characteristics in column (2)—larger than the 10% wage growth associated with job transitions reported by Topel and Ward (1992). The estimated wage growth associated with promotions is 2.5% in column (2)—somewhat smaller than Cobb-Clark's (2001) estimate of 4.5% among early-career men in the NLSY79. The estimates in columns (2) to (6) indicate that these returns are not sensitive to the choice of controls.
Table 6 Wage returns to promotion and employer change
In panel B of Table 6, we report estimates in which we allow the returns to transitions to differ by group for the specification controlling for worker and job characteristics. The p values for Wald tests of the hypotheses that the returns to a given transition are the same across groups are given in column (5). For transitions to new employers, we fail to reject the null of equal returns across groups. For promotions, however, the p value (less than 0.01) strongly supports rejecting the null, but this is driven entirely by visible minority natives, who experience much smaller wage growth following promotions than other workers. For white and visible minority immigrants, we fail to reject the null that the returns to promotion are the same as for white natives. If unobserved productivity differences or taste-based discrimination (in combination with search frictions) were behind the differing experiences of white natives and visible minority immigrants, we might expect the wage returns to these transitions to vary by group, but they do not. Instead, the similar returns to transitions across groups are consistent with the "invisibility hypothesis" discussed above insofar as less visible workers—while less likely to be promoted—are expected to receive wage increases following promotions comparable to other workers when they manage to get promoted.
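The wage-growth regressions in Table 6 could be implemented along the lines of the sketch below; the column names and the subset of controls shown are illustrative, and the WES sample weights are again omitted.

```python
import statsmodels.formula.api as smf

# dlnw is the change in log hourly wages between interviews; the omitted category is
# remaining with the initial employer without a promotion. The control list is a
# hypothetical subset of the worker and job characteristics described in the data section.
controls = ["exper", "exper_sq", "educ", "union", "full_time"]
formula = "dlnw ~ promoted + new_employer + " + " + ".join(controls)

def fit_wage_growth(df):
    """Panel A specification: common returns to promotions and employer changes."""
    return smf.ols(formula, data=df).fit(cov_type="HC1")

# Interacting 'promoted' and 'new_employer' with the group indicators (as in panel B)
# would let the returns to each transition differ by group.
```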
We next examine the link between mobility differences and wage growth across groups. As noted in the introduction, however, this exercise is limited by the fact that the wage growth gaps themselves are not precisely estimated. Panel A in Table 7 details the average log-wage growth experienced by members of each group and the wage growth gaps between white natives and the minority groups in our sample. Wage growth early in the career is quite rapid. White natives enjoyed average wage growth between interviews of 8.3%, while white (visible minority) immigrants experienced wage growth between interviews of 9.5 (6.6) percent. While the gaps in wage growth relative to white native men are not statistically significant at conventional levels, we note that they would generate large gaps in wage levels if compounded over several years.Footnote 26 Visible minority natives as a group are again an outlier in our sample insofar as they experienced little wage growth between interviews.Footnote 27
Table 7 Oaxaca-Blinder decomposition of the wage growth gap between white natives and members of minority groups
In panel B of Table 7, we report estimates of the Oaxaca-Blinder (O-B) decompositions of the gaps in average wage growth (\( {\widehat{\Delta wg}}_M \)) between white natives (WN) and the minority groups (M) (Blinder 1973; Oaxaca 1973). For each minority group, the decomposition takes the form
$$ {\widehat{\Delta wg}}_M=\underbrace{\sum_{k=1}^{K}{\widehat{\beta}}_{\mathrm{WN},k}\left({\overline{X}}_{\mathrm{WN},k}-{\overline{X}}_{M,k}\right)}_{\mathrm{explained}\ \mathrm{gap}}+\underbrace{\sum_{k=1}^{K}\left({\widehat{\beta}}_{\mathrm{WN},k}-{\widehat{\beta}}_{M,k}\right){\overline{X}}_{M,k}}_{\mathrm{unexplained}\ \mathrm{gap}} $$
The O-B decomposition estimates the contributions to the log-wage growth gap between white natives and the members of a given minority group of observed differences in transitions and characteristics (referred to as the "explained" gap) and differences between groups in the "returns" to these characteristics (referred to as the "unexplained" gap).Footnote 28
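A minimal numerical sketch of this two-fold decomposition, using white-native coefficients as the reference exactly as in the expression above, is given below. The coefficient vectors and covariate means are hypothetical placeholders, and the Yun (2005) normalization of the categorical transition dummies (see the footnotes) is not implemented.

```python
# Hedged sketch of the Oaxaca-Blinder decomposition; inputs are placeholders.
import numpy as np

def oaxaca_blinder(beta_wn, xbar_wn, beta_m, xbar_m):
    """Two-fold O-B decomposition with white-native (WN) coefficients as reference."""
    explained = beta_wn @ (xbar_wn - xbar_m)      # gap due to differences in characteristics
    unexplained = (beta_wn - beta_m) @ xbar_m     # gap due to differences in returns
    return explained, unexplained

# Hypothetical inputs ordered as [intercept, promoted, moved, experience]
beta_wn = np.array([0.050, 0.025, 0.157, 0.002])
xbar_wn = np.array([1.00, 0.20, 0.15, 3.0])
beta_m  = np.array([0.040, 0.020, 0.160, 0.002])
xbar_m  = np.array([1.00, 0.12, 0.18, 2.8])

explained, unexplained = oaxaca_blinder(beta_wn, xbar_wn, beta_m, xbar_m)
print(f"total gap {explained + unexplained:.4f} = "
      f"explained {explained:.4f} + unexplained {unexplained:.4f}")
```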
Consistent with white natives being much more likely to be promoted than visible minority immigrants, the estimates in column (1) indicate that the difference in promotion receipt can account for 0.4 percentage points of the 1.7 percentage point gap in wage growth between white natives and visible minority immigrants—a contribution statistically significant at the 10% level. Oaxaca and Ransom (1999), however, show that only the total effect of the full set of categorical dummies is identified. Bearing this in mind, we note that transitions to new employers contribute − 0.4 percentage points to the observed wage growth gap between white natives and visible minority immigrants because a higher proportion of visible minority immigrants move to new employers than white natives and the wage returns to such moves are very high. As a result, the total contribution of within-firm and between-firm mobility to the wage growth gap between white natives and visible minority immigrants is approximately zero.
The O-B estimates in column (2) indicate that labor market transitions (moves to new employers and promotions) can account for 1.2 percentage points of the 1.1 percentage point gap in wage growth between white natives and white immigrants that favors white immigrants. This is unsurprising given that white immigrants in our sample were much more likely than white natives to change employers and the wage returns to employer changes are very large. While imprecisely estimated, we note that this contribution of mobility to the wage growth gap enjoyed by white immigrants is larger than the contribution of any other observable (e.g., experience, education). Thus, we tentatively infer that between-firm mobility may be important in explaining the early-career success of white immigrants relative to their white native peers.Footnote 29
Mobility and occupation switching
Standard models of job search suggest that young workers shop jobs for good matches (Jovanovic 1979). Moreover, the early part of the career likely entails a period of occupational experimentation as young people learn about their own skills and the demands of different occupations (Antonovics and Golan 2012). Young immigrant men may have even more motivation to change occupations if they enter the Canadian labor market in occupations for which they are over-qualified as documented in Wald and Fang (2008).
To assess the relationship between occupational mobility and between- and within-firm mobility among immigrants and natives, we report in panel A of Table 8 the fraction of each group changing occupations. Between interviews, 16 (22) percent of white (visible minority) natives change occupations. By contrast, nearly 26% of white immigrants and 19% of visible minority immigrants switch occupations.
Table 8 Transitions and occupation switching
In panel B, we report the fraction of workers making each transition who change occupations. White immigrants who move to new employers switch occupations 91% of the time. By contrast, only 71% of white natives and 80% of visible minority immigrants and natives switch occupations when moving to new employers. Among workers who remained with their initial employers—whether promoted or not—immigrants and white natives switched occupations at around the same rate.Footnote 30
Finally, panel C reports the fraction of occupation switchers in each group who made a particular transition. The percentages in this panel reflect both the rates of occupation switching among workers making different transitions reported in panel B and the rates of each transition among workers in different groups reported in Table 1. The means in panel C indicate that immigrants move between occupations in very different ways relative to white natives. Among white natives, 59% of the occupation switchers did so by moving to new employers, while 77 and 68% of white and visible minority immigrants who switched occupations, respectively, did so by moving to new employers. Internal promotions account for a much larger fraction (32%) of occupation switching among white natives. By contrast, only 18 and 19% of occupation switches for white and visible minority immigrants, respectively, were realized through internal promotions. For white immigrants, this is largely because white immigrants were more likely to move to new employers and were more successful at switching occupations in these moves than white natives. The difference in the rates of occupation switching between white natives and visible minority immigrants, however, is driven more by the fact that visible minority immigrants are much less likely than white natives to be internally promoted.
Our study presents two important stylized facts about mobility patterns between and within employers among early-career natives and immigrants in Canada. First, visible minority immigrants were much less likely to be promoted with their initial employers than white natives while being similarly likely to change employers between interviews. Second, white immigrants were much more likely than white natives to change employers while being just as likely to be promoted with their initial employers. We present tentative evidence linking this greater between-firm mobility of white immigrants to their relatively fast wage growth and their ability to change occupations. Overall, our findings suggest that mobility may play an important role in the relative economic success of early-career white immigrants.
Important questions remain concerning the role of between- and within-firm mobility in the assimilation of immigrants. First, how does mobility influence the experiences of immigrants over a longer horizon? A major limitation of the WES is that the longitudinal component for workers is limited to a single year between interviews. Observing the contributions of mobility to the experiences of immigrants over a longer period, however, may be important. Both Topel and Ward (1992) and Light and McGarry (1998) document the diminishing returns to job changes over the course of the career, while Machado and Portela (2013) show that previous promotions are strong determinants of subsequent promotions. As such, the poor performance of visible minority immigrants in internal promotions may have consequences that cannot be offset by greater between-firm mobility over the long run. Indeed, Pendakur and Woodcock (2010) find that visible minority immigrants who have been in Canada for less than 10 years in the WES earn, on average, 31% less than similar white natives. While 42% of this wage gap is due to crowding of these immigrants into lower-paying firms as documented in Aydemir and Skuterud (2008), the remaining 58% is due to wage disparities relative to their native peers within firms. Our findings suggest that internal labor markets might play a key role in generating these within-firm wage gaps.
Second, why might visible minority immigrants be less "visible" to potential employers? The mobility patterns that we observe are consistent with potential employers having less information about visible minority immigrants. If information problems are at the heart of the mobility issues of visible minority immigrants, addressing this information asymmetry is important from a policy perspective. Given that the mobility difficulties appear to be most pronounced among visible minority immigrants without higher education credentials, should policy-makers aim to vouch for credentials (e.g., secondary school completion) that visible minority immigrants obtained abroad, or can credentialing programs be created? Can contacts with previous employers—potentially abroad—be facilitated?
Of course, information problems may be just one of many search frictions limiting the mobility of visible minority immigrants. Given the growing body of evidence—including our findings—of the existence of search frictions experienced by visible minority immigrants to Canada, understanding how their searches differ from those of their white peers (natives and immigrants) is of particular importance. Evidence concerning native-immigrant differences in Canada in the use of search networks, search methods, the geographic scope of search, and employer call-back rates would all shed much light on the potential existence and nature of the frictions experienced by visible minority immigrants in the Canadian labor market.
Studies documenting the occupational downgrading experienced by immigrants during their initial years in the host country include Chiswick (1978), Friedberg (2000), and Chiswick et al. (2005).
Depew et al. (2017) study the between-firm mobility of skilled guest workers in the USA.
Immigrants may lack the knowledge of local labor market institutions necessary for job search. Alternatively, immigrant enclaves might limit the search networks of recent immigrants to Canada (Warman 2007). If immigrants encounter search frictions not experienced by natives, employers may enjoy some degree of monopsony power over them that could drive the native-immigrant wage gap. Hirsch and Jahn (2015) and Naidu et al. (2016) provide evidence of employers' monopsony power over immigrants in Germany and the UAE, respectively.
Other studies documenting the large early-career wage gains associated with job mobility include Bartel (1980), Borjas and Rosen (1980), Antel (1991), and McCue (1996).
Imai et al. (2017), for instance, find that immigrants who arrived in Canada between 2000 and 2001 were initially employed in occupations requiring less cognitive skill and more manual skill than their occupations prior to immigration.
The target population of employers consisted of all business locations in Canada with paid employees in March of each surveyed year. In the 1999, 2001, 2003, and 2005 surveys, the sample of employers was refreshed with new employers from the Statistics Canada Business Register to maintain a representative cross section. Employers in the Yukon, Nunavut, and Northwest Territories and employers operating in crop production, animal production, fishing, hunting, trapping, private households, religious organizations, and public administration were excluded from the sample. Public administration's share of employment in Canada is around 6.5% (Statistics Canada, Table 281-0024).
The number of workers interviewed from each firm was proportional to the firm's size, except for workplaces with fewer than four employees, in which case all employees were surveyed.
We identify workers as promoted if they report having been promoted and the most recent promotion date falls after the first interview.
The WES contains 47 detailed occupation categories based on the Standard Occupational Classification (SOC) 1991. Our occupational change indicator equals one if the detailed occupational category changes between interviews and zero otherwise.
We focus on male workers because of differences in family formation between native and immigrant women. Javdani and McGee (2017) find that the promotion experiences of early-career women in the WES—particularly those with families—differ significantly from those of their male peers.
According to Statistics Canada (2011), the visible minority population in Canada consists mainly of Chinese, South Asian, Black, Arab, West Asian, Filipino, Southeast Asian, Latin American, Japanese, and Korean individuals. A worker is identified as a visible minority if her/his parents or grandparents belonged to one of these groups. The Employment Equity Act in Canada defines visible minorities to be "persons, other than Aboriginal people, who are non-Caucasian in race or non-white in color."
Firms in the WES report the numbers of permanent full-time and part-time employees earning more than $80,000, earning between $60,000 and $80,000, earning between $40,000 and $60,000, earning between $20,000 and $40,000, and earning less than $20,000. We use this information along with the total number of employees within the firm to calculate the proportion of workers within the firm in a higher earnings category relative to any given worker. We cannot calculate the proportion of workers in higher earnings categories for workers who earn more than $80,000 (because no such category exists). For these workers, we set the proportion of workers in higher earnings categories to zero.
Nearly 9% of white immigrants in the WES came from the USA compared to less than 1.5% of visible minority immigrants. Due to the North American Free Trade Agreement, workers from the USA need only a verifiable job offer from a Canadian employer to immigrate. The ease of return migration to the USA may explain the higher attrition rates of white immigrants.
Kim (2012) develops sample and population attrition adjusted weights for application in short panels such as ours. He shows that the effect of population attrition in the CPS on assimilation estimates is minor. Given the similarity in his adjusted and unadjusted assimilation estimates and our short panel, we eschew the re-weighting procedure.
The promotion rates in our sample are considerably higher than those reported in studies using changes in hierarchical levels or occupational categories to identify promotions (e.g., van der Klaauw and Dias da Silva 2011; Cassidy et al. 2016), but the promotion rates in those studies may fail to capture promotions within broad hierarchical levels or occupational categories. The promotion rates in our sample are similar to the rates of self-reported promotions among young workers in the USA documented in Pergamit and Veum (1999) and Cobb-Clark (2001).
Consistent with their higher education levels, both white and visible minority immigrants earned more on average than white natives in our sample—though this unconditional advantage is only statistically significant for white immigrants. Conditional on worker characteristics, visible minority immigrants earn significantly less than white natives in terms of wage levels (see Appendix: Table 10).
Unfortunately, our data do not identify the immigration class to which an immigrant belonged.
Many immigrants enter Canada on fixed-length work permits prior to becoming permanent residents. The fixed duration may limit immigrants' promotion prospects if employers fear losing an employee when the permit expires, but work permits can be renewed. In addition, there is no reason to expect that the effects of fixed-term work permits on white and visible minority immigrants' promotion prospects would differ.
This is unsurprising given that the predicted probabilities of the transition outcomes must sum to one for each group.
Between-firm mobility is likely influenced by local labor demand, and immigrants in Canada tend to be concentrated in provinces such as Ontario and British Columbia. As such, controlling for the region of residence is important in principle when estimating native-immigrant differences. Our early estimates, however, indicated that controlling for the province of residence and living in a city had no appreciable effect on the estimated marginal effects for immigrants.
Oreopoulos (2011) indicates that recruiters rationalized their dismissal of the resumes of skilled immigrants based on language concerns.
Our choice of age 9 as the benchmark critical age is motivated in part by Corak's (2011) finding that children who immigrate to Canada after age 9 are much less likely to graduate from high school than those who immigrate at earlier ages.
The estimates reported in Table 3 come from a specification identical to that in column (2) of Table 2, but we replace the two immigrant indicators (white and visible minority) with four immigrant indicators (i.e., white and visible minority immigrants in the two age-at-immigration groups).
The shift to an immigration policy focused on admitting skilled immigrants was realized through several policy decisions in the early 1990s. We use 1993 as a benchmark because amendments to the Immigration Regulations in 1993 significantly reduced the share of family class immigrants.
Approximately 30% of the white immigrants in our sample come from the USA and the UK compared to only 3% of visible minority immigrants.
In terms of wage-level gaps, visible minority immigrants in the first interview earn more than white natives unconditionally but earn 15.1% less conditional on worker characteristics—a gap that grows to 16% by the second year. White immigrants, on the other hand, face no wage gaps relative to their Canadian-born counterparts in either year (see Appendix: Table 10).
See Javdani (2017) for a discussion of the low wage returns to promotion and low wage growth between interviews experienced by visible minority natives in the WES.
We use the procedure developed by Yun (2005) to transform the coefficients of the categorical transition dummies so that the results of the decomposition are invariant to the choice of the (omitted) base category. Alternative decomposition methods (e.g., using the coefficients from a pooled model over white natives and the minority group as the reference coefficients) produced similar results where the explained gaps were concerned.
One potential concern for our estimates is that immigrants and natives with the same amount of potential experience may have different amounts of Canadian labor market experience given that some immigrants come to Canada after their labor market entry. To assess the robustness of our findings, we re-estimated the O-B decompositions restricting the sample to natives and immigrants who entered Canada within their first 3 years in the labor force (based on our potential experience measure). The estimates, reported in Appendix: Table 11, are similar to those in Table 7. Estimates from multinomial logit models of transition probabilities using this restricted sample are also similar to those reported in Table 2.
We also estimated probit models of the probability of occupation switching controlling for group indicators and worker characteristics. Similar to our multinomial logit estimates, the worker characteristics had little effect on the estimated marginal effects of the group indicators. As such, we report only the summary statistics by group in Table 8 for simplicity.
Abramitzky R, Boustan L, Eriksson K. A nation of immigrants: assimilation and economic outcomes in the age of mass migration. J Polit Econ. 2014;122(3):467–506.
Adsera A, Ferrer AM. The effect of linguistic proximity on the occupational assimilation of immigrant men in Canada. Working paper; 2015.
Antel JJ. The wage effects of voluntary labor mobility with and without intervening unemployment. Ind Labor Relat Rev. 1991;44:299–306.
Antonovics K, Golan L. Experimentation and job choice. J Labor Econ. 2012;30:333–66.
Aydemir A, Skuterud M. The immigrant wage differential within and across establishments. Ind Labor Relat Rev. 2008;61(3):334–52.
Bartel AP. Earnings growth on the job and between jobs. Econ Inq. 1980;18(1):123–37.
Blau FD, DeVaro J. New evidence on gender differences in promotion rates: an empirical analysis of a sample of new hires. Ind Relat J Econ Soc. 2007;46(3):511–50.
Bleakley H, Chin A. Language skills and earnings: evidence from childhood immigrants. Rev Econ Stat. 2004;86(2):481–96.
Blinder AS. Wage discrimination: reduced form and structural estimates. J Hum Resour. 1973;8(4):436–55.
Borjas GJ, Rosen S. Income prospects and job mobility of younger men. Res Labor Econ. 1980;3:159–81.
Bowlus AJ, Miyairi M, Robinson C. Immigrant job search assimilation in Canada. Can J Econ. 2016;49(1):5–51.
Burdett K. A theory of employee job search and quit rates. Am Econ Rev. 1978;68(1):212–20.
Cassidy H, DeVaro J, Kauhanen A. Promotion signaling, gender, and turnover: new theory and evidence. J Econ Behav Organ. 2016;126:140–66.
Chiswick B. The effect of Americanization on the earnings of foreign-born men. J Polit Econ. 1978;86:897–922.
Chiswick BR, Lee YL, Miller PW. A longitudinal analysis of immigrant occupational mobility: a test of the immigrant assimilation hypothesis. Int Migr Rev. 2005;39(2):332–53.
Chiswick BR, Miller PW. The endogeneity between language and earnings: international analyses. J Labor Econ. 1995;13(2):246–88.
Chui T. Immigration and ethnocultural diversity in Canada: National Household Survey, 2011. Ottawa: Statistics Canada; 2013.
Citizenship and Immigration Canada. 2011. Evaluation of the provincial nominee program. Evaluation and research. Accessed January 12, 2018.
Cobb-Clark DA. Getting ahead: the determinants of and payoffs to internal promotions for young US men and women. Research in Labor Economics. 2001;20:339–72.
Corak M. Age at immigration and the education outcomes of children. IZA discussion paper 6072; 2011.
Depew B, Norlander P, Sorensen TA. Inter-firm mobility and return migration patterns of skilled guest workers. J Popul Econ. 2017;30(2):681–721.
Dustmann C, van Soest A. Language and the earnings of immigrants. Ind Labor Relat Rev. 2002;55(3):473–92.
Francesconi M. Determinants and consequences of promotions in Britain. Oxf Bull Econ Stat. 2001;63(3):279–310.
Friedberg RM. You can't take it with you? Immigrant assimilation and the portability of human capital. J Labor Econ. 2000;18(2):221–51.
Green AG, Green DA. The economic goals of Canada's immigration policy: past and present. Can Public Policy. 1999;25(4):425–51.
Green D. Immigrant occupational attainment: assimilation and mobility over time. J Labor Econ. 1999;17(1):49–79.
Green D, Worswick C. Immigrant earnings profiles in the presence of human capital investment: measuring cohort and macro effects. Labour Econ. 2012;19(2):241–59.
Grenier G, Xue L. Canadian immigrants' access to a first job in their intended occupation. J Int Migr Integr. 2011;12(3):275–303.
Hirsch B, Jahn EJ. Is there monopsonistic discrimination against immigrants? Ind Labor Relat Rev. 2015;68(3):501–28.
Imai S, Stacey D, Warman C. From engineer to taxi driver? Occupational skills and the economic outcomes of immigrants. Can J Econ. 2017; in press
Javdani M. Does color matter? Estimating differences in promotions and returns to promotions between white and visible minority Canadian-borns. Working paper; 2017.
Javdani M, McGee A. Moving up or falling behind? Gender, promotions, and wages in Canada. Working paper; 2017.
Jovanovic B. Job matching and the theory of turnover. J Polit Econ. 1979;87:972–90.
Kim S. Sample attrition in the presence of population attrition. Working paper; 2012.
Kosteas VD. Job level changes and wage growth. Int J Manpow. 2009;30(3):269–84.
Krebs H, Patak Z, Picot G, Wannell T. The development and use of a Canadian linked employer-employee survey. In: The creation and analysis of employer-employee matched data. Ottawa: Statistics Canada; 1999. p. 515–34.
Light A, McGarry K. Job change patterns and the wages of young men. Rev Econ Stat. 1998;80(2):276–86.
Machado CS, Portela M. Age and opportunities for promotion. IZA discussion paper 7784; 2013.
McCue K. Promotions and wage growth. J Labor Econ. 1996;14(2):175–209.
Milgrom P, Oster S. Job discrimination, market forces, and the invisibility hypothesis. Q J Econ. 1987;102(3):453–76.
Naidu S, Nyarko Y, Wang S-Y. Monopsony power in migrant labor markets: evidence from the United Arab Emirates. J Polit Econ. 2016;124(6):1735–92.
Oaxaca R. Male-female wage differentials in urban labor markets. Int Econ Rev. 1973;14(3):693–709.
Oaxaca R, Ransom M. Identification in detailed wage decompositions. Rev Econ Stat. 1999;81(1):154–7.
Oreopoulos P. Why do skilled immigrants struggle in the labor market? A field experiment with six thousand résumés. Am Econ J Econ Pol. 2011;3(4):148–71.
Pendakur K, Woodcock S. Glass ceilings or glass doors? Wage disparity within and between firms. J Bus Econ Stat. 2010;28(1):181–9.
Pergamit MR, Veum JR. What is a promotion? Ind Labor Relat Rev. 1999;52(4):581–601.
Singleton D, Lengyel Z, editors. The age factor in second language acquisition: a critical look at the critical period hypothesis. Philadelphia: Clevedon [England]; 1995. p. 124–46.
Skuterud M, Su M. Immigrants and the dynamics of high-wage jobs. Ind Labor Relat Rev. 2012;65(2):377–97.
Statistics Canada. 2011. Visible minority and population group reference guide, National Household Survey.
Statistics Canada. 2016. Table 051-0011—international migrants, by age group and sex, Canada, provinces, and territories, annual (persons), CANSIM (database). (accessed: April 10, 2017).
Topel RH, Ward MP. Job mobility and the careers of young men. Q J Econ. 1992;107(2):439–79.
van der Klaauw B, Dias da Silva A. Wage dynamics and promotions inside and between firms. J Popul Econ. 2011;24:1513–48.
Wald S, Fang T. Overeducated immigrants in the Canadian labour market: evidence from the workplace and employee survey. Can Public Policy. 2008;34(4):457–79.
Warman C. Ethnic enclaves and immigrant earnings growth. Can J Econ. 2007;40(2):401–22.
Yun M-S. A simple solution to the identification problem in detailed wage decompositions. Econ Inq. 2005;43(4):766–72.
We thank the participants at the annual meetings of the Canadian Economics Association, the Western Economics Association, and the Society of Labor Economics for their suggestions. We would also like to thank the anonymous referee and the editor for the useful remarks. All remaining errors are our own.
Responsible editor: Hartmut F. Lehmann
The data used in our study are confidential data that were made available through the Statistics Canada Research Data Centre program. RDCs provide researchers with access, in a secure university setting, to microdata from population and household surveys. The centers are staffed by Statistics Canada employees. They are operated under the provisions of the Statistics Act in accordance with all the confidentiality rules and are accessible only to researchers with approved projects who have been sworn in under the Statistics Act as "deemed employees." Application process and guidelines to get access to the data are based on the affiliation of the Principal Investigator and the type of research being conducted at a Research Data Centre (RDC) and are available here: http://www.statcan.gc.ca/rdc-cdr/process-eng.htm.
Department of Economics, 3333 University Way, Kelowna, BC, V1V 1V7, Canada
Mohsen Javdani
Department of Economics, University of Alberta, Tory Building, Edmonton, AB, T6G 2H4, Canada
Andrew McGee
IZA - Institute of Labor Economics, Bonn, Germany
Correspondence to Mohsen Javdani.
Table 9 Marginal effects from probit models of the probability of attrition
Table 10 Wage gaps relative to white Canadian-born workers among men with less than 10 years of potential experience
Table 11 Robustness of O-B decompositions
Javdani, M., McGee, A. Labor market mobility and the early-career outcomes of immigrant men. IZA J Develop Migration 8, 20 (2018) doi:10.1186/s40176-018-0128-4
Received: 17 November 2017
Accepted: 03 April 2018
Recovering the initial condition in the one-phase Stefan problem
Chifaa Ghanmi 1, Saloua Mani Aouadi 1 and Faouzi Triki 2
Faculty of Sciences of Tunis 2092, University of Tunis El Manar, Tunis, Tunisia
Laboratoire Jean Kuntzmann, UMR CNRS 5224, Université Grenoble-Alpes, 700 Avenue Centrale, 38401 Saint-Martin-d'Hères, France
* Corresponding author: C. Ghanmi
Received March 2021 Revised May 2021 Early access July 2021
Fund Project: The work of F. Triki is supported in part by the grant ANR-17-CE40-0029 of the French National Research Agency ANR (project MultiOnde)
We consider the problem of recovering the initial condition in the one-dimensional one-phase Stefan problem for the heat equation from the knowledge of the position of the melting point. We first recall some properties of the free boundary solution. Then we study the uniqueness and stability of the inversion. The principal contribution of the paper is a new logarithmic-type stability estimate that shows that the inversion may be severely ill-posed. The proof is based on integral equation representation techniques and the unique continuation property for parabolic-type solutions. We also present a few numerical examples operating with noisy synthetic data.
Keywords: Inverse Stefan problem, initial condition, free boundary problem, heat equation, stability analysis, integral equation, Tikhonov regularization method.
Mathematics Subject Classification: Primary: 35R30, 80A22, 45Q05, 35B35; Secondary: 65M32.
Citation: Chifaa Ghanmi, Saloua Mani Aouadi, Faouzi Triki. Recovering the initial condition in the one-phase Stefan problem. Discrete & Continuous Dynamical Systems - S, doi: 10.3934/dcdss.2021087
Figure 1. The exact initial condition $ u_0(x) $ and approximate solution with different Gaussian noise levels obtained with $ \lambda = 10^{-3} $, $ M = 250 $ using Tikhonov Regularization method
Figure 2. The exact initial condition $ u_0(x) $ and approximate solution with different Gaussian noise levels obtained with $ M = 250 $ using Landweber method
Figure 3. The exact initial condition $ u_0(x) $ and the approximate solution with different Gaussian noise levels obtained with $ \lambda = 10^{-2} $, $ M = 250 $ using Tikhonov method
Figure 5. The initial condition $ u_0(x) $ and approximate solution with different Gaussian noise levels obtained with $ \lambda = 10^{-3} $ and $ M = 250 $ using Tikhonov method
Figure 6. The initial condition $ u_0(x) $ and approximate solution with different Gaussian noise levels obtained with $ M = 250 $ using Landweber method
Table 1. Relative errors using Tikhonov method (columns: $ \lambda $; noise on $ s(t) $ in %; $ \frac{||{u_0-U_0}||_2}{||{u_0}||_2} $):
$ \lambda = 10^{-3} $, noise 0%, relative error 0.0425
Table 2. Relative errors using Landweber method (columns: noise on $ s(t) $ in %; $ \frac{||{u_0-U_0}||_2}{||{u_0}||_2} $):
noise 0%, relative error 0.0846
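For orientation, the sketch below shows generic implementations of the two schemes quoted in Tables 1 and 2—Tikhonov regularization and Landweber iteration—applied to a synthetic, discretized first-kind problem. It is not the authors' implementation: the smoothing kernel, noise level, and parameters (the regularization weight and the number of iterations) are placeholders, and the actual reconstruction uses the integral-equation formulation of the inverse Stefan problem.

```python
# Hedged sketch: Tikhonov and Landweber on a synthetic ill-posed problem A u0 = b.
import numpy as np

rng = np.random.default_rng(1)
M = 250                                                    # discretization size, as in the figures
x = np.linspace(0.0, 1.0, M)
A = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.01) / M   # smoothing (ill-conditioned) kernel
u0_true = np.sin(np.pi * x)
b = A @ u0_true + 0.01 * rng.standard_normal(M)            # noisy synthetic data

# Tikhonov regularization: minimize ||A u - b||^2 + lam * ||u||^2
lam = 1e-3
u_tik = np.linalg.solve(A.T @ A + lam * np.eye(M), A.T @ b)

# Landweber iteration: u_{j+1} = u_j + omega * A^T (b - A u_j), with omega < 2 / ||A||^2
omega = 1.0 / np.linalg.norm(A, 2) ** 2
u_lw = np.zeros(M)
for _ in range(2000):
    u_lw += omega * (A.T @ (b - A @ u_lw))

for name, u in [("Tikhonov", u_tik), ("Landweber", u_lw)]:
    print(name, np.linalg.norm(u - u0_true) / np.linalg.norm(u0_true))
```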
Inter-chromosomal k-mer distances
Alon Kafri 1, Benny Chor 1 & David Horn 2
BMC Genomics volume 22, Article number: 644 (2021)
Inversion Symmetry is a generalization of the second Chargaff rule, stating that the count of a string of k nucleotides on a single chromosomal strand equals the count of its inverse (reverse-complement) k-mer. It holds for many species, both eukaryotes and prokaryotes, for ranges of k which may vary from 7 to 10 as chromosomal lengths vary from 2Mbp to 200 Mbp. Building on this formalism we introduce the concept of k-mer distances between chromosomes. We formulate two k-mer distance measures, D1 and D2, which depend on k. D1 takes into account all k-mers (for a single k) appearing on single strands of the two compared chromosomes, whereas D2 takes into account both strands of each chromosome. Both measures reflect dissimilarities in global chromosomal structures.
After defining the various distance measures and summarizing their properties, we also define proximities that rely on the existence of synteny blocks between chromosomes of different bacterial strains. Comparing pairs of strains of bacteria, we find negative correlations between synteny proximities and k-mer distances, thus establishing the meaning of the latter as measures of evolutionary distances among bacterial strains. The synteny measures we use are appropriate for closely related bacterial strains, where considerable sections of chromosomes demonstrate high direct or reversed equality. These measures are not appropriate for comparing different bacteria or eukaryotes.
K-mer structural distances can be defined for all species. Because of the arbitrariness of strand choices, we employ only the D2 measure when comparing chromosomes of different species. The results for comparisons of various eukaryotes display interesting behavior which is partially consistent with conventional understanding of evolutionary genomics. In particular, we define ratios of minimal k-mer distances (KDR) between unmasked and masked chromosomes of two species, which correlate with both short and long evolutionary scales.
k-mer distances reflect dissimilarities among global chromosomal structures. They carry information which aggregates all mutations. As such, they can complement traditional evolution studies, which mainly concentrate on coding regions.
The phenomenon of Inversion Symmetry (IS) has recently been reevaluated and established in [1]. This generalization of the second Chargaff rule [2] implies that the number of occurrences of any sequence m of length k on a chromosomal strand S is equal to the number of occurrences of its inverse (reverse-complement) sequence minv on the same strand. Another way of stating the same fact is that the number of occurrences of m on one chromosomal strand is equal to the number of occurrences of m on the other strand provided both are being read along their own 5′ to 3′ directions.
The accuracy of such statements depends on the length k of the nucleotide sequences which are being employed. It turns out to have a monotonic dependence on k, i.e. as k increases the symmetry worsens. If one sets the required accuracy at 10% one finds [1] that it holds for k ≤ KL where KL grows logarithmically with the length L of the chromosome. KL values for mammals are 9 or 10, while for bacteria they are 7 or 8. These choices of KL guarantee that all possible k-mers of a particular k-value will be found on the chromosome in question.
Inversion symmetry can be restated as the demonstration of a low k-mer distance between the two strands of the same chromosome [3], with exact symmetry implying zero distance. The notion of k-mer distances between different chromosomes, within and between species, is a simple extension of the same basic idea: comparing frequencies of all strings of nucleotides of the same length k on different chromosomes, summing over one or over both strands of each chromosome.
Short k-mer distances can be interpreted as large structural similarities between chromosomes. In bacteria we establish correlations of short k-mer distances between bacterial strains with large synteny proximities. Both concepts are explained in the Methods section. For bacterial strains, they also serve as good measures of evolutionary distances.
The synteny proximities which we employ are valid measures between bacterial strains which are very close evolutionary relatives. Otherwise one cannot find large genomic sections with high identities among them. Therefore, conventional synteny measures which are used in genomic evolutionary studies [4] are very different from our synteny proximities and are mostly concentrated on coding regions.
k-mer distances, which are global measures, can be used to compare any two chromosomes. When studying eukaryotes, the compared chromosomes are dominated by non-coding regions. Comparing minimal k-mer distances between various genomes, we find interesting results. In particular, ratios of unmasked to masked minimal genome distances correlate with evolutionary distances among different species.
Definitions and properties of k-mer distances between chromosomes
The term k-mer refers (in the genomic context) to all possible nucleotide substrings of length k that are contained in a given chromosomal strand of length L, uncovered by a sliding-window search. The total number of their occurrences is N = L-k + 1. We define the empirical frequency of a specific k-mer, e.g. m1, in the strand S as the number of occurrences of this k-mer in S divided by N
$$ {f}_{m_1}=\frac{n\left({m}_1\right)}{N} $$
Let us define the k-mer distance D1 as the L1-norm of the difference between k-dim vectors containing frequencies of all k-mers, when comparing two chromosomal strands (e.g. positive strands of two chromosomes) S1 and S2:
$$ {D}_1^k\left({S}_1,{S}_2\right)={\sum}_{i=1}^{4^k}\mid {f}_{m_i}\left({S}_1\right)-{f}_{m_i}\left({S}_2\right)\mid $$
The index 1 in D1 refers to the fact that we use only one strand on each chromosome in this comparison of two chromosomes.
Similarly, we may define a distance measure D2 by taking into account both strands of the two chromosomes, reading them along their own 5′ to 3′ directions. Since each specific k-mer on the negative strand, is accompanied by its inverse (reverse-complement) on the positive strand, we may define D2 as
$$ {D}_2^k\left({S}_1,{S}_2\right)={\sum}_{i=1}^{4^k}\left|{f}_{m_i}\left({S}_1\right)+{f}_{M_i}\left({S}_1\right)-{f}_{m_i}\left({S}_2\right)-{f}_{M_i}\left({S}_2\right)\right|/2 $$
where we use a single strand on each chromosome and define for every k-mer its inverse (reverse complement)
$$ {M}_i={m}_i^{inv} $$
and sum over all of them along a single strand of each of the two chromosomes. Division by 2 is introduced in the definition of D2 because the effective number of counts on each chromosome becomes 2 N.
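A minimal Python sketch of these definitions is shown below; it is not the authors' released program (linked later in this section) and is meant only to make the definitions above concrete. Frequencies are counted with a sliding window along a single strand, and D2 adds to each k-mer the frequency of its reverse complement; the toy strands at the end are arbitrary.

```python
# Hedged sketch of the D1 and D2 k-mer distances defined above (toy illustration only).
from collections import Counter

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def kmer_freqs(seq, k):
    """Sliding-window k-mer frequencies on a single strand (N = L - k + 1 windows)."""
    n = len(seq) - k + 1
    counts = Counter(seq[i:i + k] for i in range(n))
    return {m: c / n for m, c in counts.items()}

def revcomp(m):
    return m.translate(COMPLEMENT)[::-1]

def d1(s1, s2, k):
    f1, f2 = kmer_freqs(s1, k), kmer_freqs(s2, k)
    return sum(abs(f1.get(m, 0.0) - f2.get(m, 0.0)) for m in set(f1) | set(f2))

def d2(s1, s2, k):
    f1, f2 = kmer_freqs(s1, k), kmer_freqs(s2, k)
    kmers = set(f1) | set(f2)
    kmers |= {revcomp(m) for m in kmers}
    return sum(abs(f1.get(m, 0.0) + f1.get(revcomp(m), 0.0)
                   - f2.get(m, 0.0) - f2.get(revcomp(m), 0.0)) for m in kmers) / 2

s1, s2 = "ACGTACGTTGCA" * 200, "ACGGACGTTGCA" * 200   # toy strands
print(d1(s1, s2, 4), d2(s1, s2, 4))
```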
The triangular inequality implies that
$$ \left|{f}_{m_i}\left({S}_1\right)+{f}_{M_i}\left({S}_1\right)-{f}_{m_i}\left({S}_2\right)-{f}_{M_i}\left({S}_2\right)\right|\le \left|{f}_{m_i}\left({S}_1\right)-{f}_{m_i}\left({S}_2\right)\right|+\left|{f}_{M_i}\left({S}_1\right)-{f}_{M_i}\left({S}_2\right)\right| $$
for every single k-mer. It follows then that
$$ {D}_2^k\left({S}_1,{S}_2\right)\le {D}_1^k\left({S}_1,{S}_2\right) $$
Using the above definitions we summarize the properties of k-mer distances:
Positivity. By definition all distances are non-negative.
If \( {D}_{1,2}^k\left({S}_1,{S}_2\right)=0 \) then S1 and S2 are equivalent, in the sense that both chromosomes have the same frequencies of all k-mers. This does not necessarily imply that the two chromosomes are equal to each other, because they may differ in length.
Symmetry. By definition, \( {D}_{1,2}^k\left({S}_1,{S}_2\right)={D}_{1,2}^k\left({S}_2,{S}_1\right) \).
Inequality: \( {D}_2^k\left({S}_1,{S}_2\right)\le {D}_1^k\left({S}_1,{S}_2\right) \), as proved above in Eq. 5.
Triangular inequalities of distances:
$$ {D}_{1,2}^k\left({S}_1,{S}_3\right)\le {D}_{1,2}^k\left({S}_1,{S}_2\right)+{D}_{1,2}^k\left({S}_2,{S}_3\right). $$
This can be proved in an analogous fashion to property 4.
Inversion symmetry [1] implies that \( {D}_1^k\left({S}_1,{S}_2\right)=0 \) if S2 is the inverse of S1 (or equivalent to it in the sense of property 2). Otherwise this distance will be positive. Such a definition of inversion symmetry has been introduced by [3]. \( {D}_2^k\left({S}_1,{S}_2\right)=0 \) is a trivial statement for two strands which are inverses of each other.
Monotonic increase with k:
$$ {D}_{1,2}^{k-1}\left({S}_1,{S}_2\right)\le {D}_{1,2}^k\left({S}_1,{S}_2\right) $$
To prove this property, note that a k-mer \( m_i^k \) can be generated from a corresponding \( m_j^{k-1} \), which coincides with the first k-1 entries of \( m_i^k \), by appending one of the four nucleotides {A, C, G, T}. Let us define this set of associations as {j,i} for a given \( m_j^{k-1} \) and its four corresponding \( m_i^k \). It follows then that
$$ {\displaystyle \begin{array}{c}{D}_1^{k-1}\left({S}_1,{S}_2\right)={\sum}_{j=1}^{4^{k-1}}\mid {f}_{m_j}\left({S}_1\right)-{f}_{m_j}\left({S}_2\right)\mid \le \\ {}{\sum}_{i=1}^{4^k}\mid {f}_{m_i}\left({S}_1\right)-{f}_{m_i}\left({S}_2\right)\mid ={D}_1^k\left({S}_1,{S}_2\right)\end{array}} $$
by summing over the indices using the {j,i} association, and applying the extended triangular inequality to each set of four \( f_{m_i} \) whose k-mers \( m_i^k \) begin with the same (k-1)-mer \( m_j^{k-1} \) with index j.
This proof can be trivially extended to D2.
One condition for these inequalities to hold is that all k-mers are realized on the chromosomal strands which are being investigated, i.e. all \( n\left({m}_i^k\right)>0 \).
Finally we touch upon the question of the range of k-values for which the distance measures can be applied.
Shporer et al. [1] have introduced the notion of the KL limit. This is the k-value for which Inversion Symmetry fails at the rate of 10%. They demonstrated that chromosomes of different species, as well as different human chromosomal sections, follow a universal logarithmic slope of KL ~ 0.7 ln(L), where L is the length of the chromosome. This limit can also be derived from the assumption that L ≫ 4^k, allowing for all k-mers to be expressed on the chromosome.
As an example of relevant statistics we display in Fig. 1 the percentage of missing k-mers, i.e. those which do not appear on the strand, and the distance between two close strains of E. coli as function of k, demonstrating that good results are obtained for k ≤ KL = 7.
k-mer analysis of E. coli, for which KL = 7. a Percentage of missing k-mers, i.e., those for which \( n\left({m}_i^k\right)=0 \). b D1 distance between two K12 strains of E. coli
When evaluating distances between two chromosomal strands with different lengths, L1 and L2, one should limit oneself to KL where L = min(L1, L2), guaranteeing that the same k is valid for both chromosomal strands which are being compared.
We provide a python program for calculating k-mer distances between two chromosomes, given as fasta files, in (https://github.com/akafri/k-mer-distances).
Definition of synteny distances
Synteny blocks are genetic sequences in genomes of two species which consist of aligned homologous genes. A recent example of their importance was demonstrated by [5, 6]. Here we introduce definitions of synteny distances, which will be used to compare with k-mer distances. This comparison will be carried out using different strains of the same bacterium, where large synteny blocks with identity percentages higher than 90% exist. The threshold of 90% is arbitrary; it was chosen to guarantee high similarity between the relevant chromosomes. For bacteria, where the selection of a positive strand is well defined, we differentiate between Direct Synteny Blocks (DSB), appearing along the same strand in both genomes, and Inverse Synteny Blocks (ISB), lying on opposite strands. An example is shown in Fig. 2.
Synteny Blocks between E. coli 0157-H7-EDL933 (right) and E. coli K12-MG1655 (left). The colors represent the Identity Percentage where red indicates high identity percentage of DSB and blue indicates low identity percentage of DSB. The black colors represent ISBs
To search for synteny blocks, BLAST was first used to identify local alignments between the two full sequences. The R package OmicCircus [7] was used to visualize results. From the BLAST output, we extract synteny blocks that have identity percentages higher than 90% and calculate the overall sequence lengths of DSB and ISB (LDSB and LISB), respectively.
We then define direct synteny proximity
$$ {P}_{DSYN}\left({S}_1,{S}_2\right)=\frac{L_{DSB}}{\min \left({L}_1,{L}_2\right)}, $$
and overall synteny proximity as
$$ {P}_{SYN}\left({S}_1,{S}_2\right)=\frac{L_{DSB}+{L}_{ISB}}{\min \left({L}_1,{L}_2\right)} $$
where L1 and L2 are the lengths of the chromosomes S1 and S2 which are being compared.
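A sketch of how these proximities could be computed from BLAST output is given below. It assumes BLAST was run with tabular output (-outfmt 6) and uses the orientation of the subject coordinates to separate direct from inverse blocks; the authors' exact pipeline is not specified here, and overlapping alignments—which a real analysis would need to merge—are simply summed.

```python
# Hedged sketch: synteny proximities from BLAST tabular output (-outfmt 6).
# outfmt 6 columns: qseqid sseqid pident length mismatch gapopen
#                   qstart qend sstart send evalue bitscore
def synteny_proximities(blast_tsv, len1, len2, min_identity=90.0):
    l_dsb = l_isb = 0
    with open(blast_tsv) as handle:
        for line in handle:
            cols = line.rstrip("\n").split("\t")
            pident, length = float(cols[2]), int(cols[3])
            sstart, send = int(cols[8]), int(cols[9])
            if pident <= min_identity:
                continue
            if sstart <= send:          # hit on the same strand: direct synteny block
                l_dsb += length
            else:                       # hit on the opposite strand: inverse synteny block
                l_isb += length
    denom = min(len1, len2)
    return l_dsb / denom, (l_dsb + l_isb) / denom    # (P_DSYN, P_SYN)

# Example usage (hypothetical file and chromosome lengths):
# p_dsyn, p_syn = synteny_proximities("ecoli_pair_blast.tsv", 4641652, 5528445)
```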
The matched-pair algorithm for k-mer distances between two species
To define distances between two eukaryote genomes, we started by evaluating a distance matrix between all chromosomes of the two species. We then constructed a graph whose vertices are the chromosomes of the two species and whose edges (lines connecting the vertices) represent the distance value of each pair. We proceeded along the following algorithmic steps:
Eliminate edges with distances > 1 from the graph.
Define an empty distance vector.
Find the edge of the graph with the lowest distance value.
Add this value as an entry to the distance vector.
Remove this edge from the graph and repeat from step 3 until the graph is exhausted.
Inspect the resulting distance vector and report its minimum (the first edge considered by the matching algorithm) and its median.
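A compact sketch of this procedure is shown below, reading the steps as a greedy matching in which each chromosome is used at most once; if, instead, only the selected edge were removed at step 5, the loop would reduce to sorting the surviving edges. The distances in the toy example are hypothetical.

```python
# Hedged sketch of the matched-pair algorithm over a chromosome-pair distance dictionary.
import statistics

def matched_pair_distances(dist, max_dist=1.0):
    edges = sorted((d, a, b) for (a, b), d in dist.items() if d <= max_dist)  # step 1
    used_a, used_b, vector = set(), set(), []                                 # step 2
    for d, a, b in edges:                                                     # steps 3-5
        if a in used_a or b in used_b:
            continue
        vector.append(d)
        used_a.add(a)
        used_b.add(b)
    return min(vector), statistics.median(vector), vector                     # step 6

# Toy example: chromosomes of species 1 ("chr1", "chr2") versus species 2 ("chrA", "chrB")
dist = {("chr1", "chrA"): 0.31, ("chr1", "chrB"): 0.42,
        ("chr2", "chrA"): 0.38, ("chr2", "chrB"): 0.35}
print(matched_pair_distances(dist))
```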
Distance measures in bacteria
We compared genomes of 23 strains of E. coli and 14 strains of Salmonella enterica. They are listed in Tables 1 and 2.
Table 1 E. coli data, taken from [8]. See also data supplementary file
Table 2 Salmonella enterica data. Taken from NCBI [9]. See also data supplementary file
In Fig. 3 we present correlations of PDSYN with D1 for (a) E. coli and (b) S. enterica strains. In each of the two data sets we have looked into all pairs of strains. The data are presented for k = 7. We report only results between strains of the same bacterium since no significant correlation was found between any two strains of the two different bacteria. The higher statistics of E. coli lead to a clearer observation of the correlations.
Comparison of PDSYN with D1k = 7 for pairs of (left) E. coli strains and (right) S. enterica strains. Each dot represents a pair of strains. Arrows indicate the two principal components of PCA applied to the data points in the diagram, delineating the variance of the data along these two directions
Next we turn to correlations of overall synteny with D2k = 7. This is presented in Fig. 4. Once again we note the strong correlations in the data. The strong negative correlation is particularly significant for the E. coli strains, where we have many more pairs of strains which can be compared with one another. Hence we limit our further analysis to just E. coli strains.
Comparison of PSYN with D2k = 7 for pairs of (left) E. coli strains and (right) S. enterica strains
In order to appreciate the variation with k, we display in Fig. 5 the Pearson correlation coefficients of D1 and D2 for all E. coli pairs of strains, as a function of k, for the two classes of synteny measures. Clearly k = 7, the choice made in Figs. 3 and 4, leads to a strong correlation, as observed in those figures. The relevant Pearson correlation p-values turn out to be minuscule, the highest being of order 10^-7 for k = 1 for both D1 and D2, with the others of order 10^-22 and smaller.
Pearson correlation coefficients of the two k-mer distance measures of pairs of E. coli strains as a function of k, with (left) PSYN and (right) PDSYN. We present only results for E. coli strains, because the larger number of strain pairs leads to higher statistical significance
We find different correlations of the two measures with PDSYN. Whereas D1 displays the expected negative correlation for all relevant k, D2 is less sensitive to the direct synteny measure. This may be expected, since D2 is sensitive to both strands whereas PDSYN is sensitive to only one strand in each chromosome.
In order to appreciate this result, let us dwell on the question of why inversion symmetry [1] holds up to large k-values of order KL. The plausible explanation is that genomes evolve through rearrangement processes. These rearrangements are inversions of sections between two breakpoints on the same chromosome, and they may follow one another in a nested fashion. This scenario can explain the observed inversion symmetry, as demonstrated in [1]. Pevzner and Tesler [5] have argued that such phenomena are the basis of chromosomal evolution for single chromosomes and, with lower probability, also between different chromosomes. Here we observed that D1 between two strains of bacteria correlates strongly with both PDSYN and PSYN for all k ≤ 7, both reflecting chromosomal evolution at the short evolutionary scale appropriate to different strains of the same bacterium.
Distance measures between different species
In the previous section we analyzed k-mer distances between closely related bacterial strains, where the synteny distances that we have defined can be easily observed. When evolutionary genomics is applied to different eukaryotes, one often limits oneself to similarity between homologous proteins rather than exact duplications or inversions of large sections of the DNA. The use of k-mer distances can indicate similarities between full chromosomes, which is the approach we propose here. From inversion symmetry we learn the powerful effect of rearrangement within a single chromosome. Rearrangements may also occur between chromosomes, and k-mer distances reflect their effects.
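The D1 and D2 distances themselves are defined earlier in the paper and are not reproduced here; both are computed from the vector of k-mer counts of each chromosome. The following Python sketch shows only the counting step, with an optional flag for also counting reverse complements (relevant when a measure should treat both strands symmetrically); this two-mode design is an illustration rather than the authors' implementation:

from collections import Counter

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def kmer_counts(seq, k, both_strands=False):
    """Count k-mers in a DNA sequence; windows containing characters other
    than A, C, G, T are skipped.  If both_strands is True, the reverse
    complement of every k-mer is counted as well."""
    counts = Counter()
    seq = seq.upper()
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        if any(c not in "ACGT" for c in kmer):
            continue
        counts[kmer] += 1
        if both_strands:
            counts[kmer.translate(COMPLEMENT)[::-1]] += 1
    return counts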
Evaluating minimal D2 distances according to the matched-pair algorithm (see Methods) we obtain the results displayed in Tables 3 and 4. The genome inputs, both unmasked (Table 3) and masked (Table 4), are taken from the UCSC server (see data supplementary file). Clearly, there is quite a difference between the two choices: masking reduces the distance values considerably. We use k = 8 which is a choice appropriate for all displayed species in Tables 3, 4, 5 and 6.
Table 3 Minimal and median D2k = 8 distances between six genomes belonging to different mammals, for unmasked versions of the genomes. See Methods for definition of the computational procedure
Table 4 Minimal and median D2k = 8 distances between masked genomes of different mammals. See Methods for definition of the computational procedure
Table 5 Ratio of unmasked to masked minimal D2k = 8 distances. The ratios among primates and rodents are correlated with evolutionary time estimates (http://www.timetree.org/), but this is not true for the rest of this table
Table 6 Unmasked and masked minimal D2k = 8, their ratios, defined as KDRs, and the separation age estimates derived from (http://www.timetree.org/)
There are several striking results in Tables 3 and 4. One important result is the closeness of the minimal and median distance values. This implies that similar k-mer distances are observed for many chromosomal pairs of the two genomes, and are not limited to a single particular pair of chromosomes. In other words, homology spreads out between different chromosomal sections of the two compared species.
Another important result is the huge difference between minimal k-mer distances of unmasked and masked genomes. Conventional understanding regards the low complexity components of the unmasked regions as unprotected by evolution. Hence ratios of unmasked to masked minimal D2k = 8 distances measure the aggregated effect of different strengths of mutations when the low complexity sections of genomes are taken into account.
The results for these ratios are presented in Table 5. They seem to be correlated with evolutionary time lapses among primates and rodents, where the separation between human and chimpanzee is dated at 6.7 MYA (million years ago), between mouse and rat at 20 MYA, and between rodents and primates at 90 MYA. However, the correlation ceases to exist when any of these four species is compared with dog or cow. The separation age between the primates and dog or cow is estimated at 96 MYA, and between dog and cow at 78 MYA. All evolutionary estimates are derived from the TimeTree website (http://www.timetree.org/).
A major tool employed in genomic evolutionary studies is Reversal (or inversion) Distance (RD) [5, 6]. Concentrating on the order and details of genes or other markers, the idea is to work out how many inversions take place along the evolutionary path from one species to another; RD is the minimum number of reversals required to transform one genome into the other. Web-tools such as Cinteny [4] can be used to evaluate such distances. They fit the evolutionary time estimates much better, which is somewhat of a tautology because the estimates of (http://www.timetree.org/) take the RD methodology into account. However, RD is problematic when very large evolutionary distances are concerned, because of the shortage of genes that can be compared between distant organisms. K-mer distances are not subject to such constraints, hence they can be applied to such problems. In Table 6 we compare human with the nematode (C. elegans) and the fruit fly (D. melanogaster), using the same methods as in Table 5. Obviously these results are satisfactory.
Interestingly, k-mer distances are immune to large inversion events; in fact, this was the reason we chose them to begin with, starting from the lessons drawn from the inversion symmetry of chromosomes. On the other hand, k-mer distances are sensitive to all other mutations that occur along an evolutionary path. In this sense, K-mer minimal Distance Ratios among genomes (KDR) can serve as a complement to RD. Moreover, they are applicable to all eukaryotes.
The full potential of KDR has still to be investigated and explained. Evolutionary genomic tools deal extensively with substitution rates, in particular the non-synonymous ones affecting amino-acid changes in proteins. The analogous investigation of substitution rates in low-complexity and high-complexity genomic regions is needed to explain how KDR, or the various minimal or median k-mer distances among genomes, can be used for meaningful evolutionary conclusions.
We have introduced measures of k-mer distances, and applied them to bacteria and to eukaryotes. The two measures D1 and D2 were compared to synteny measures in bacteria, tracing large identical sections of chromosomes between two strains of the same species. We identified a strong correlation between D1 and direct syntenic regions and a strong correlation between D2 and both direct and inverse syntenies, which indicates evolutionary similarity between two strains. We argue therefore that k-mer distances are validated as good measures for evolutionary distances within bacteria.
D2 measures are also adequate for estimating distances between any two genomes which may have very ancient common ancestors. We exemplify this fact by demonstrating such distance measures between several eukaryotes. We find a considerable difference between masked and unmasked distances, as expected from the common evolutionary understanding that low complexity regions, being less protected by evolution, vary rapidly. Moreover, we exploit this difference to establish minimal K-mer Distance Ratios (KDR), which correlate with evolutionary time scales of primates and rodents, as well as with very large time scales such as those separating human, nematode and fruit fly.
Whereas conventional evolutionary studies continue to use traditional methods following changes within and throughout homologous genes, our k-mer distances take into account the full chromosomes, involving both coding and non-coding sections. As such, they carry novel information which complements traditional investigations.
All data analyzed during this study are included in the data supplementary information file.
Shporer S, Chor B, Rosset S, Horn D. Inversion symmetry of DNA k-mer counts: validity and deviations. BMC Genomics. 2016;17(1):696. https://doi.org/10.1186/s12864-016-3012-8.
Rudner R, Karkas JD, Chargaff E. Separation of B. subtilis DNA into complementary strands. III. Direct analysis. Proc Natl Acad Sci U S A. 1968;60(3):921–2. https://doi.org/10.1073/pnas.60.3.921.
Baisnee P-F, Hampson S, Baldi P. Why are complementary DNA strands symmetric? Bioinformatics. 2002;18(8):1021–33. https://doi.org/10.1093/bioinformatics/18.8.1021.
Sinha AU, Meller J. Cinteny: flexible analysis and visualization of synteny and genome rearrangements in multiple organisms. BMC Bioinformatics. 2007;8:82 Webserver: https://cinteny.cchmc.org/.
Pevzner P, Tesler G. Genome rearrangements in mammalian evolution: lessons from human and mouse genomes. Genome Res. 2003;13(1):37–45. https://doi.org/10.1101/gr.757503.
Pham SK, Pevzner PA. DRIMM-Synteny: decomposing genomes into evolutionary conserved segments. Bioinformatics. 2010;26(20):2509–16. https://doi.org/10.1093/bioinformatics/btq465.
Hu Y, Yan C, Hsu CH, Chen QR, Niu K, Komatsoulis GA, et al. OmicCircos: a simple-to-use R package for the circular visualization of multidimensional omics data. Cancer Informat. 2014;13:13–20. https://doi.org/10.4137/CIN.S13495.
Lukjancenko O, Wassenaar TM, Ussery DW. Comparison of 61 sequenced Escherichia coli genomes. Microbial Ecol. 2010;60(4):708–20.
NCBI browser at https://www.ncbi.nlm.nih.gov/genbank.
We thank Uri Gophna and Erez Persi for helpful discussions.
This research was partially supported by the research fund of the Blavatnik School of Computer Science.
Benny Chor is deceased.
Blavatnik School of Computer Science, Tel Aviv University, 69978, Tel Aviv, Israel
Alon Kafri & Benny Chor
School of Physics and Astronomy, Tel Aviv University, 69978, Tel Aviv, Israel
David Horn
Alon Kafri
Benny Chor
BC and DH initiated the study and contributed to its design. AK carried out the numerical data analysis. DH prepared the manuscript. All authors read and approved the manuscript.
Correspondence to David Horn.
AK and DH dedicate this work to the memory of Benny Chor, a dear mentor and colleague.
Kafri, A., Chor, B. & Horn, D. Inter-chromosomal k-mer distances. BMC Genomics 22, 644 (2021). https://doi.org/10.1186/s12864-021-07952-0
Accepted: 19 August 2021
Inversion symmetry
K-mer distances. Synteny
Optical rotatory power of quartz between 77 K and 325 K for 1030 nm wavelength
Mariastefania De Vido,1,2,* Klaus Ertel,1 Agnieszka Wojtusiak,1,3 Paul D. Mason,1 P. Jonathan Phillips,1 Saumyabrata Banerjee,1 Jodie M. Smith,1 Thomas J. Butcher,1 and Chris Edwards1
1STFC Rutherford Appleton Laboratory, Central Laser Facility, Didcot, OX11 0QX, UK
2Institute of Photonics and Quantum Sciences, Heriot-Watt University, Edinburgh, EH14 4AS, UK
3Loughborough University, Loughborough, LE11 3TU, UK
*Corresponding author: [email protected]
Mariastefania De Vido https://orcid.org/0000-0002-1235-8371
Saumyabrata Banerjee https://orcid.org/0000-0001-9407-477X
https://doi.org/10.1364/OME.9.002708
Mariastefania De Vido, Klaus Ertel, Agnieszka Wojtusiak, Paul D. Mason, P. Jonathan Phillips, Saumyabrata Banerjee, Jodie M. Smith, Thomas J. Butcher, and Chris Edwards, "Optical rotatory power of quartz between 77 K and 325 K for 1030 nm wavelength," Opt. Mater. Express 9, 2708-2715 (2019)
Laser Materials
High power lasers
Laser damage
Optical activity
Stress birefringence
Original Manuscript: March 29, 2019
Revised Manuscript: May 17, 2019
Manuscript Accepted: May 17, 2019
We report on the experimental characterisation of the temperature dependence of the optical rotatory power of crystalline right-handed $\alpha$-quartz at 1030 nm wavelength. The temperature range covered in this study is between 77 K and 325 K. For the measurement we propagated light through a 13.11 mm thick quartz plate collinearly with the optic axis. The plate is anti-reflection coated and rotates the polarisation plane of 1030 nm light by 89.3 deg at room temperature, corresponding to a specific rotatory power of 6.8 deg/mm. When placed between parallel polarisers, the transmission through the system was 0.03% at room temperature and increased to 1% at 77 K, showing a measurable change in rotatory power. At 77 K, the angle of rotation imparted by the quartz plate is 85 deg, corresponding to a specific rotatory power of 6.5 deg/mm. To the best of our knowledge, this is the first time that the temperature dependence of optical activity of $\alpha$-quartz is reported for cryogenic temperatures in the infrared. We expect that the measurement results provided in this paper will assist in the design and characterisation of optical systems operating under cryogenic conditions.
Published by The Optical Society under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.
The temperature sensitivity of the properties of transparent materials is of considerable interest for the development of optical systems. Crystalline $\alpha$-quartz material is widely used for polarisation optics because of its anisotropy and high optical transparency over a wide range of wavelengths, from near infrared to ultraviolet. Crystalline quartz is an ideal material for high power laser applications because it can be produced with high purity, required to avoid laser-induced damage and beam distortion. The trigonal crystalline structure of $\alpha$-quartz results in positive birefringence, a property exploited for the realisation of waveplates. In addition to birefringence, the spiral molecular arrangement along the trigonal axis (optic axis) of quartz induces optical activity, resulting in the rotation of the polarisation plane of light propagating through the material. This property is used for the realisation of polarisation rotators, which find application, for example, in schemes for the compensation of stress-induced birefringence [1,2]. In order to optimise system design, information on the temperature dependence of optical activity in quartz is required. A number of published works report on the temperature dependence of optical activity of quartz above room temperature in the visible range [3–7]. However, an increasing number of high power lasers relies on cryogenic cooling to manage thermal loads and to increase system efficiency [8,9]. In particular, some thermally-induced stress birefringence compensation schemes benefit from the use of polarisation rotators at cryogenic temperatures [10]. As a result, interest in the optical properties of quartz at lower temperatures is rising; however, existing literature does not provide information on optical activity of quartz at low temperatures at the wavelengths commonly generated by high power lasers. Data at cryogenic temperatures has so far been provided only for visible radiation (wavelengths between 404.7 nm and 670.8 nm) by Molby, who measured it at 83 K and 300 K [11]. Subsequently, Chandrasekhar derived a formula to fit experimental data at room temperature for wavelengths between 150 nm and 3210 nm and at cryogenic temperatures for wavelengths between 400 nm and 670 nm [12]. In this paper, we report on the experimental characterisation of the dependence of optical activity of quartz for temperatures between 77 K and 325 K at 1030 nm wavelength, at which lasers based on Yb:YAG gain material operate [13]. The measurements were performed on a 13.11 mm thick quartz plate using 1030 nm light propagating collinearly with the optic axis. The measurements showed that at room temperature the quartz plate has a specific rotatory power of 6.8 deg/mm. At 77 K, this value decreases to 6.5 deg/mm. To the best of our knowledge, this is the first time that rotatory power of quartz is characterised in the infrared at cryogenic temperatures.
1 Crystalline quartz material
Crystalline $\alpha$-quartz belongs to the trigonal system, and therefore exhibits uniaxial birefringence. As a result of the helical arrangement of quartz molecules around the trigonal axis (optic axis), optical activity effects are observed in addition to birefringence. In purely optically active materials, circularly polarised light propagates unchanged, with left- and right- circular polarisations propagating at different speeds, determined by refractive indices $n_{L}$ and $n_{R}$, respectively. It can be shown that this effect causes the polarisation plane of light to continuously rotate during propagation through the material. Propagation over a geometric path length $L$ causes the polarisation plane to rotate by an angle [14]:
(1)$$\gamma(T) = \frac{\pi}{\lambda}\Delta n(T) L(T),$$
where $\lambda$ is the wavelength of light, $T$ is the temperature of the material and where
(2)$$\Delta n(T) = n_{L}(T)-n_{R}(T).$$
The specific rotatory power of the material is calculated as:
(3)$$\rho(T) = \frac{\gamma(T)}{L(T)} = \frac{\pi}{\lambda}\Delta n(T).$$
The optical rotatory power of quartz is most easily observed when light propagates along the optic axis of the material. Furthermore, light propagating along the optic axis of quartz is not affected by birefringence, thus allowing the contribution of optical activity to be isolated. For this reason, measurements were performed on a z-cut right-handed crystalline quartz cylindrical plate (i.e. the input and output surfaces of the plate are cut perpendicularly to the optic axis), anti-reflection (AR) coated for 1030 nm wavelength. The sample (RT-10-1030-90, Melles Griot) has a diameter of 25.4 mm and a thickness of 13.11 mm. The rotation angle imparted at room temperature by the quartz plate across its aperture was measured using a polarimeter (StrainMatic, Ilis). The result, displayed in Fig. 1, shows that the root mean square rotation angle across the aperture for 1030 nm wavelength is 89.3 deg.
Fig. 1. Rotation angle imparted by the quartz plate across its aperture at room temperature and 1030 nm measured using a polarimeter.
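As a quick numerical check of Eqs. (1)-(3), the short Python sketch below converts the rotation angle and plate thickness quoted above into a specific rotatory power and the corresponding refractive index difference; the numerical values are the ones stated in the text, and the script is purely illustrative:

import math

wavelength = 1030e-9   # m
thickness = 13.11e-3   # m, geometric path length L
gamma_deg = 89.3       # measured rotation angle at room temperature, deg

rho_deg_per_mm = gamma_deg / (thickness * 1e3)                          # Eq. (3): gamma / L
delta_n = math.radians(gamma_deg) / thickness * wavelength / math.pi    # Eq. (1) solved for n_L - n_R

print(f"specific rotatory power: {rho_deg_per_mm:.2f} deg/mm")   # about 6.8 deg/mm
print(f"n_L - n_R: {delta_n:.2e}")                               # about 3.9e-05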
2 Experimental setup
The measurements of the rotatory power were carried out by means of an extinction method, using the experimental setup shown in Fig. 2. The figure also shows how the Cartesian reference system used throughout this paper is defined.
Fig. 2. Experimental setup used to characterise the temperature dependence of the rotatory power of quartz (PD1, PD2 = photo-detectors). The insert shows the holder in which the quartz sample is mounted.
Laser radiation is generated by an amplified spontaneous emission (ASE) source (BKTel). An external programmable optical filter (Waveshaper 1000S/1U, Finisar) filters the emission from the ASE source to select a 0.3 nm bandwidth spectrum centred around 1030 nm. A fibre amplifier (BKTel) amplifies the filtered radiation, which is subsequently collimated using a fibre collimator. A polarising beam splitter cube (CM1-PBS253, Thorlabs), further referred to as "polariser", selects the vertical polarisation component (i.e. electric field vector parallel to the y-axis of the Cartesian coordinate system). After the polariser, the beam has a power of 14 mW and is directed onto the quartz sample under study, which is located inside an optical cryostat (Optistat DN-V2, Oxford Instruments). The sample is mounted directly below the heat exchanger of the cryostat, using a copper base plate and an aluminium holder, which provides thermal contact over the whole outer surface of the sample (see the insert in Fig. 2). The temperature was measured using a platinum resistance thermometer attached to the copper plate. Fused silica windows, AR coated to reduce surface reflectivity below 0.5% for 1030 nm wavelength, allow optical access to the sample. The sample was orientated normal to the incident beam. The polarisation state of the beam transmitted through the sample was characterised using a polarising beam splitting cube (PBS103, Thorlabs), further referred to as "analyser", which can be rotated around the z-axis. A silicon photo-detector, indicated in Fig. 2 as PD1, monitors the power of the beam incident on the sample by continuously measuring the reflection off one surface of a wedged window. Another photo-detector, indicated as PD2, measures the power of the beam transmitted through the analyser. Both photo-detectors (PH100-Si-HA-OD2, Gentec-EO) were used with additional 1000 nm long-pass filters (FGL 1000M, Thorlabs) to suppress background light. Before the experiment, the power incident onto the quartz sample was calibrated by positioning PD2 directly behind the wedged window and by simultaneously measuring signals provided by PD1 and PD2. The calibrated signals were used to measure the transmission through the system as the ratio between the output power (after the analyser, as measured by PD2) and the power incident onto the quartz sample (after the wedged window).
3 Measurement results
Initially the analyser orientation was set to vertical, parallel to that of the polariser, as shown in Fig. 3(a). Without the quartz rotator in the beam path, the rotation angle of the analyser was adjusted so as to yield minimum rejected power. Transmission with the quartz rotator in place was measured at temperatures between 77 K and 325 K in steps of 5 K. After changing the temperature, the sample was given 10 min to thermalise before a measurement was taken. Experimental results are shown in Fig. 3(b).
Fig. 3. Orientation of the transmission axes of the polariser and analyser and of the optic axis of the quartz plate with respect to the reference system (a). Dependence of transmission on the temperature of the quartz sample while the transmission axes of polariser and analyser are kept parallel to the y-axis (b). Data shown in this article are available at [15].
Results show that minimum transmission of 0.03% is achieved for temperatures above 270 K and transmission increases with decreasing temperature, reaching 1% at 77 K. This measurement confirmed the presence of a measurable variation of the optical rotatory power over the temperature range under consideration. This variation is influenced by a combination of material contraction due to the reduction in temperature and of a change in the refractive indices $n_{L}$ and $n_{R}$. Data on the temperature dependence of the linear expansion coefficient of quartz in the direction parallel to the optic axis is reported in [16]. It is interesting to note that the derivative of the curve shown in Fig. 3(b) (i.e. the rate of change as a function of the temperature) changes around 140 K, as highlighted in Fig. 3(b) by the addition of a linear fit to the data points below 140 K. This effect, possibly due to structural changes of quartz at low temperatures [17,18], will be subject to further investigation. In order to determine how much the quartz plate rotates the polarisation over the temperature range under analysis, an additional set of measurements was performed. As the temperature of the quartz sample and the orientation of the transmission axis of the polariser were kept constant, the transmission axis of the analyser was rotated by an angle $\phi$ around the z-axis, with $\phi$ = 0 deg meaning that the axes of polariser and analyser are parallel. Minimum transmission at $\phi$ < 0 deg means that the quartz plate rotates the polarisation by less than 90 deg, and $\phi$ > 0 deg means that the polarisation is rotated by more than 90 deg. The measurement was repeated by varying the temperature between 77 K and 325 K in 25 K steps. At each temperature level, the orientation $\phi$ of the transmission axis of the analyser was varied in steps of 2 deg between -30 deg and +20 deg. The resulting experimental data are shown in Fig. 4(a) for a subset of temperatures. No substantial change was visible in this representation between 325 K and 200 K, and therefore only the former value is included in the graph. Figure 4(b) shows a zoomed-in view of the experimental data and fitting curves for $\phi$ values between -10 deg and 5 deg. Data points are fitted with fifth order polynomial fitting curves. From Fig. 4(b) it is possible to observe that, as the temperature is decreased, the minima of the fitting curves shift to lower $\phi$ values. Based on the definition of $\phi$, this corresponds to a reduction in the rotation of the polarisation plane of light propagating through the sample.
Fig. 4. Experimental data points (dots) and polynomial fitting curves (black lines) showing the temperature dependence of the transmission through the system as the axis of the analyser is rotated (a). Zoomed-in view of experimental data and fitting curves for $\phi$ values between -10 deg and 5 deg (b).
The angle $\phi _{min}$ for which minimum transmission is achieved was derived from the polynomial fits. From Fig. 4(b) it is possible to observe that, as temperature decreases, the transmission at $\phi _{min}$ increases, reaching 0.6% at 77 K. Figure 5 shows the dependence of transmission at $\phi _{min}$ as temperature is varied.
Fig. 5. Temperature dependence of the transmission at $\phi _{min}$.
The rotation angle $\gamma$ imparted by the quartz plate was calculated as 90 deg + $\phi _{min}$ and its dependence on temperature is shown in Fig. 6.
Fig. 6. Temperature dependence of the rotation angle $\gamma$ imparted on the polarisation plane by the quartz plate and of the refractive index difference $\Delta n$.
At 77 K, the rotation angle is 85 deg which, based on Eq. 3, corresponds to a specific rotatory power of 6.5 deg/mm. As the temperature is increased above 250 K, the rotation angle increases to values around 89.5 deg, in agreement with the polarimeter measurement shown in Fig. 1. This value corresponds to a specific rotatory power of 6.8 deg/mm and it is in agreement with the value predicted in [12]. Error bars in Fig. 6 take into account uncertainty in laser power measurement ($\pm$ 0.15%), sample length ($\pm$ 0.001 mm), and angular orientation of the analyser ($\pm$ 0.5 deg), which is the main source of error. The derivative of the curve shown in Fig. 6 (i.e. the rate of change of the rotation angle as a function of the temperature) changes around 200 K. As noticed earlier, this could be also due to structural changes occurring in quartz at low temperatures [17,18]. Further research will be required to validate this hypothesis. According to Eq. 1, a change in rotation can result from both a change in length $L$ and in the difference between the refractive indices $\Delta n$. The linear expansion coefficient data for the direction parallel to the optic axis reported in [16] was used to calculate the length of the sample as a function of temperature. As shown in Fig. 7, over the temperature range under consideration, the length changed by 0.13%.
Fig. 7. Temperature dependence of the length of the quartz sample calculated using the linear expansion coefficient data reported in [16].
It follows therefore that the main contributing factor to the change in rotation angle with temperature is a change in the refractive index difference $\Delta n$, which was calculated using Eq. 1, the $L(T)$ values in Fig. 7 and $\gamma (T)$. Over the temperature range under consideration, the change in refractive index difference $\Delta n$ is $2.1\cdot 10^{-6}$.
In this paper, we measured the rotatory power of a right-handed crystalline $\alpha$-quartz polarisation rotator at temperatures between 77 K and 325 K for 1030 nm light. To the best of our knowledge, this is the first time that optical activity of $\alpha$-quartz has been characterised at cryogenic temperatures in the infrared. The measurements showed that at room temperature the quartz plate has a specific rotatory power of 6.8 deg/mm, while at 77 K this value reduces to 6.5 deg/mm. The experimental data show that the polarisation state of the beam transmitted through the quartz rotator is no longer purely linear, since the minimum transmission through the measurement system at 77 K rises to 0.6%. This could be the consequence of a small error in the cut of the quartz rotator plate or the introduction of mechanical stress due to differential thermal expansion between the sample and the holder. These effects would cause a small contribution from birefringence (and its own temperature dependence) to appear. Despite the reduction in optical rotatory power of quartz with temperature, we expect that quartz rotators optimised for room temperature will still perform sufficiently well down to 77 K for most applications. We also expect that this data will be useful for optimising quartz rotators for particular operating temperatures.
Engineering and Physical Sciences Research Council (EPSRC) (1979259); Royal Commission for the Exhibition of 1851 (Industrial Fellowship); Horizon 2020 Framework Programme (H2020) (654148).
This project was supported by an Industrial Fellowship from the Royal Commission for the Exhibition of 1851. MDV is supported by the Engineering and Physical Sciences Research Council (EPSRC) through the Centre for Doctoral Training in Applied Photonics. This project has received funding from the European Union's grant agreement No 654148 Laserlab-Europe. Authors would like to thank Dr Martin Cuddy at the Culham Centre for Fusion Energy for providing access to the Ilis StrainMeter polarimeter.
1. M. Frede, R. Wilhelm, M. Brendel, C. Fallnich, F. Seifert, B. Willke, and K. Danzmann, "High power fundamental mode Nd: YAG laser with efficient birefringence compensation," Opt. Express 12(15), 3581–3589 (2004). [CrossRef]
2. Y. Wang, K. Inoue, H. Kan, T. Ogawa, and S. Wada, "Birefringence compensation of two tandem-set Nd:YAG rods with different thermally induced features," J. Opt. A: Pure Appl. Opt. 11(12), 125501 (2009). [CrossRef]
3. S. Chandrasekhar, "The temperature variation of the rotatory power of quartz from 30$^\circ$ to 410$^\circ$ C," Proc. - Indian Acad. Sci., Sect. A 39(6), 290–295 (1954). [CrossRef]
4. K. Vedam and T. A. Davis, "Pressure and temperature variation of the optical rotatory power of $\alpha$-quartz," J. Opt. Soc. Am. 58(11), 1451–1455 (1968). [CrossRef]
5. J. P. Bachheimer, "Optical rotatory power and depolarisation of light in the $\alpha$-, incommensurate and $\beta$-phases of quartz (20 to 600 degrees C)," J. Phys. C: Solid State Phys. 19(27), 5509–5517 (1986). [CrossRef]
6. P. Gomez and C. Hernandez, "High-accuracy universal polarimeter measurement of optical activity and birefringence of $\alpha$-quartz in the presence of multiple reflections," J. Opt. Soc. Am. B 15(3), 1147–1154 (1998). [CrossRef]
7. P. Gomez and C. Hernandez, "Optical anisotropy of quartz in the presence of temperature-dependent multiple reflections using a high-accuracy universal polarimeter," J. Phys. D: Appl. Phys. 33(22), 2985–2994 (2000). [CrossRef]
8. K. Ertel, S. Banerjee, P. D. Mason, P. J. Phillips, M. Siebold, C. Hernandez-Gomez, and J. C. Collier, "Optimising the efficiency of pulsed diode pumped Yb:YAG laser amplifiers for ns pulse generation," Opt. Express 19(27), 26610–26626 (2011). [CrossRef]
9. A. Kessler, M. Hornung, S. Keppler, F. Schorcht, M. Hellwing, H. Liebetrau, J. Körner, A. Svert, M. Siebold, M. Schnepp, J. Hein, and M. C. Kaluza, "16.6 J chirped femtosecond laser pulses from a diode-pumped Yb:CaF2 amplifier," Opt. Lett. 39(6), 1333–1336 (2014). [CrossRef]
10. A. V. Voitovich, E. V. Katin, I. B. Mukhin, O. V. Palashov, and E. A. Khazanov, "Wide aperture Faraday isolator for kilowatt average radiation powers," Quantum Electron. 37(5), 471–474 (2007). [CrossRef]
11. F. A. Molby, "The Rotatory Power of Quartz, Cinnabar, and Nicotine at Low Temperatures," Phys. Rev. (Series I) 31(3), 291–310 (1910). [CrossRef]
12. S. Chandrasekhar, "The optical rotatory power of quartz and its variation with temperature," Proc. - Indian Acad. Sci., Sect. A 35(3), 103–113 (1952). [CrossRef]
13. W. F. Krupke, "Ytterbium solid-state lasers-the first decade," IEEE J. Sel. Top. Quantum Electron. 6(6), 1287–1296 (2000). [CrossRef]
14. A. Yariv and P. Yeh, Optical Waves in Crystals (Wiley, 2003).
15. eData: STFC Research Data Repository, http://dx.doi.org/10.5286/edata/724.
16. R. J. Corruccini and J. J. Gniewek, "Thermal expansion of technical solids at low temperatures - A compilation from the literature," N. B. S. Circ. No. 29, U.S. Government Printing Office, Washington (1961).
17. Y. Le Page, L. D. Calvert, and E. J. Gabe, "Parameter variation in low-quartz between 94 and 298 K," J. Phys. Chem. Solids 41(7), 721–725 (1980). [CrossRef]
18. G. A. Lager, J. D. Jorgensen, and F. J. Rotella, "Crystal structure and thermal expansion of $\alpha$-quartz SiO$_{2}$ at low temperature," J. Appl. Phys. 53(10), 6751–6756 (1982). [CrossRef]
How do we decide how many representatives there are for each state?
by David Lowry-Duda Posted on April 3, 2019
The US House of Representatives has 435 voting members (and 6 non-voting members: one each from Washington DC, Puerto Rico, American Samoa, Guam, the Northern Mariana Islands, and the US Virgin Islands). Roughly speaking, the higher the population of a state is, the more representatives it should have.
But what does this really mean?
If we looked at the US Constitution to make this clear, we would find little help. The third clause of Article I, Section II of the Constitution says
Representatives and direct Taxes shall be apportioned among the several States which may be included within this Union, according to their respective Numbers … The number of Representatives shall not exceed one for every thirty thousand, but each state shall have at least one Representative.
This doesn't give clarity.1 In fact, uncertainty surrounding proper apportionment of representatives led to the first presidential veto.
The Apportionment Act of 1792
According to the 1790 Census, there were 3199415 free people and 694280 slaves in the United States.2
When Congress sat to decide on apportionment in 1792, they initially computed the total (weighted) population of the United States to be 3199415 + (3/5)⋅694280 ≈ 3615983. They noted that the Constitution says there should be no more than 1 representative for every 30000, so they divided the total population by 30000, getting 3615983/30000 ≈ 120.5, and rounded down.
Thus there were to be 120 representatives. If one takes each state and divides its population by 30000, one sees that the states should get the following numbers of representatives3
State ideal rounded_down
Vermont 2.851 2
NewHampshire 4.727 4
Maine 3.218 3
Massachusetts 12.62 12
RhodeIsland 2.281 2
Connecticut 7.894 7
NewYork 11.05 11
NewJersey 5.985 5
Pennsylvania 14.42 14
Delaware 1.851 1
Maryland 9.283 9
Virginia 21.01 21
Kentucky 2.290 2
NorthCarolina 11.78 11
SouthCarolina 6.874 6
Georgia 2.361 2
But here is a problem: the total number of rounded down representatives is only 112. So there are 8 more representatives to give out. How did they decide which to assign these representatives to? They chose the 8 states with the largest fractional "ideal" parts:
New Jersey (0.985)
Connecticut (0.894)
South Carolina (0.874)
Vermont (0.851)
Delaware (0.851)
Massachusetts+Maine (0.838)
North Carolina (0.78)
New Hampshire (0.727)
(Maine was part of Massachusetts at the time, which is why I combine their fractional parts). Thus the original proposed apportionment gave each of these states one additional representative. Is this a reasonable conclusion?
Perhaps. But these 8 states each ended up having more than 1 representative for each 30000. Was this limit in the Constitution meant country-wide (so that 120 across the country is a fine number) or state-by-state (so that, for instance, Delaware, which had 59000 total population, should not be allowed to have more than 1 representative)?
There is the other problem that New Jersey, Connecticut, Vermont, New Hampshire, and Massachusetts were undoubtedly Northern states. Thus Southern representatives asked, Is it not unfair that the fractional apportionment favours the North?4
Regardless of the exact reasoning, the Secretary of State Thomas Jefferson and Attorney General Edmund Randolph (both from Virginia) urged President Washington to veto the bill, and he did. This was the first use of the Presidential veto.
Afterwards, Congress got together and decided on starting with 33000 people per representative and ignoring fractional parts entirely. The exact method became known as the Jefferson Method of Apportionment, and was used in the US until 1830. The subtle part of the method involves deciding on the number 33000. In the US, the exact number of representatives sometimes changed from election to election. This number is closely related to the population-per-representative, but these were often chosen through political maneuvering as opposed to exact decision.
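Both schemes described above are short enough to write down directly. The Python sketch below is illustrative only: hamilton follows the vetoed 1792 procedure (divide by a fixed ratio, round down, and give the leftover seats to the states with the largest fractional parts), while jefferson follows the divisor-and-truncate method that was adopted instead. The default divisors are the ones mentioned in the text; the dictionary input format and the function names are assumptions for the example:

def hamilton(populations, divisor=30000):
    """Largest-remainder apportionment as proposed in 1792: the House size
    is the rounded-down national population over the divisor, and leftover
    seats go to the states with the largest fractional parts."""
    total_seats = int(sum(populations.values()) // divisor)
    ideal = {s: p / divisor for s, p in populations.items()}
    seats = {s: int(x) for s, x in ideal.items()}
    leftover = total_seats - sum(seats.values())
    by_fraction = sorted(ideal, key=lambda s: ideal[s] - seats[s], reverse=True)
    for s in by_fraction[:leftover]:
        seats[s] += 1
    return seats

def jefferson(populations, divisor=33000):
    """Jefferson's method: divide by the chosen divisor and ignore the
    fractional parts entirely (every state keeps at least one seat)."""
    return {s: max(1, int(p // divisor)) for s, p in populations.items()}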
As an aside, it's interesting to note that this method of apportionment is widely used in the rest of the world, even though it was abandoned in the US.5 In fact, it is still used in Albania, Angola, Argentina, Armenia, Aruba, Austria, Belgium, Bolivia, Brazil, Bulgaria, Burundi, Cambodia, Cape Verde, Chile, Colombia, Croatia, the Czech Republic, Denmark, the Dominican Republic, East Timor, Ecuador, El Salvador, Estonia, Fiji, Finland, Guatemala, Hungary, Iceland, Israel, Japan, Kosovo, Luxembourg, Macedonia, Moldova, Monaco, Montenegro, Mozambique, Netherlands, Nicaragua, Northern Ireland, Paraguay, Peru, Poland, Portugal, Romania, San Marino, Scotland, Serbia, Slovenia, Spain, Switzerland, Turkey, Uruguay, Venezuela and Wales — as well as in many countries for election to the European Parliament.
Measuring the fairness of an apportionment method
At the core of different ideas for apportionment is fairness. How can we decide if an apportionment fair?
We'll consider this question in the context of the post-1911 United States — after the number of seats in the House of Representatives was established. This number was set at 433, but with the proviso that anticipated new states Arizona and New Mexico would each come with an additional seat.6
So given that there are 435 seats to apportion, how might we decide if an apportionment is fair? Fundamentally, this should relate to the number of people each representative actually represents.
For example, in the 1792 apportionment, the single Delawaran representative was there to represent all 55000 of its population, while each of the two Rhode Island representatives corresponded to 34000 Rhode Islanders. Within the House of Representatives, it was as though the voice of each Delawaran only counted 61 percent as much as the voice of each Rhode Islander.7
The number of people each representative actually represents is at the core of the notion of fairness; but even then, it's not obvious.
Suppose we enumerate the states, so that Si refers to state i. We'll also denote by Pi the population of state i, and we'll let Ri denote the number of representatives allotted to state i.
In the ideal scenario, every representative would represent the exact same number of people. That is, we would have
$$\text{pop. per rep. in state i}
= \frac{P_i}{R_i}
= \frac{P_j}{R_j}
= \text{pop. per rep. in state j}$$
for every pair of states i and j. But this won't ever happen in practice.
Generally, we should expect $\frac{P_i}{R_i} \neq \frac{P_j}{R_j}$ for every pair of distinct states. If
$$\frac{P_i}{R_i} > \frac{P_j}{R_j}, \tag{1}$$
then we can say that each representative in state i represents more people, and thus those people have a diluted vote.
Amounts of Inequality
There are lots of pairs of states. How do we actually measure these inequalities? This would make an excellent question in a statistics class (illustrating how one can answer the same question in different, equally reasonable ways) or even a civics class.
A few natural ideas emerge:
We might try to minimize the differences of constituency size: $\left \lvert \frac{P_i}{R_i} - \frac{P_j}{R_j} \right \rvert$.
We might try to minimize the differences in per capita representation: $\left \lvert \frac{R_i}{P_i} - \frac{R_j}{P_j} \right \rvert$.
We might take overall size into account, and try to minimize both the relative constituency size and relative difference in per capita representation.
This last one needs a bit of explanation. Define the relative difference between two numbers x and y to be
$$\frac{\lvert x - y \rvert}{\min(x, y)}.$$
Suppose that for a pair of states, we have that $(1)$ holds, i.e. that representatives in state j have smaller constituencies than in state i (and therefore people in state j have more powerful votes). Then the relative difference in constituency size is
$$\frac{P_i/R_i - P_j/R_j}{P_j/R_j} = \frac{P_i/R_i}{P_j/R_j} - 1.$$
The relative difference in per capita representation is
$$\frac{R_j/P_j - R_i/P_i}{R_i/P_i} = \frac{R_j/P_j}{R_i/P_i} - 1 =
\frac{P_i/R_i}{P_j/R_j} - 1.$$
Thus these are the same! By accounting for differences in size by taking relative proportions, we see that minimizing relative difference in constituency size and minimizing relative difference in per capita representation are actually the same.
All three of these measures seem reasonable at first inspection. Unfortunately, they all give different apportionments (and all are different from Jefferson's scheme — though to be fair, Jefferson's scheme doesn't seek to minimize inequality and there is no reason to think it should behave the same).
Each of these ideas leads to a different apportionment scheme, and in fact each has a name; a short code illustration of the three measures follows the list.
Minimizing differences in constituency size is the Dean method.
Minimizing differences in per capita representation is the Webster method.
Minimizing relative differences between both constituency size and per capita representation is the Hill (or sometimes Huntington-Hill) method.
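The three pairwise measures can be stated compactly in code. The following Python sketch, written only to make the definitions concrete, returns the Dean, Webster, and Hill quantities for a single pair of states:

def inequality_measures(P_i, R_i, P_j, R_j):
    """Pairwise inequality measures underlying the Dean, Webster, and Hill
    methods, for two states with populations P and representative counts R."""
    constituency = abs(P_i / R_i - P_j / R_j)   # Dean: difference in constituency size
    per_capita = abs(R_i / P_i - R_j / P_j)     # Webster: difference in per capita representation
    a, b = P_i / R_i, P_j / R_j
    relative = abs(a - b) / min(a, b)           # Hill: relative difference
    return constituency, per_capita, relative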
Further, each of these schemes has been used at some time in US history. Webster's method was used immediately after the 1840 census, but for the 1850 census the original Alexander Hamilton scheme (the scheme vetoed by Washington in 1792) was used. In fact, the Apportionment Act of 1850 set the Hamilton method as the primary method, and this was nominally used until 1900.8 The Webster method was used again immediately after the 1910 census. Due to claims of incomplete and inaccurate census counts, no apportionment occurred based on the 1920 census.9
In 1929 an automatic apportionment act was passed.10 In it, up to three different apportionment schemes would be provided to Congress after each census, based on a total of 435 seats:
The apportionment that would come from whatever scheme was most recently used. (In 1930, this would be the Webster method).
The apportionment that would come from the Webster method.
The apportionment that would come from the newly introduced Hill method.
If one reads congressional discussion from the time, then it will be good to note that Webster's method is sometimes called the method of major fractions and Hill's method is sometimes called the method of equal proportions. Further, in a letter written by Bliss, Brown, Eisenhart, and Pearl of the National Academy of Sciences, Hill's method was declared to be the recommendation of the Academy.11 From 1930 on, Hill's method has been used.
Why use the Hill method?
The Hamilton method led to a few paradoxes and highly counterintuitive behavior that many representatives found disagreeable. In 1880, a paradox now called the Alabama paradox was discovered. When deciding on the number of representatives that should be in the House, it was noted that if the House had 299 members, Alabama would have 8 representatives, but if the House had 300 members, Alabama would have only 7. That is, making one more seat available led to Alabama receiving one fewer seat.
The problem is the fluctuating relationships between the many fractional parts of the ideal number of representatives per state (similar to those tallied in the table in the section The Apportionment Act of 1792).
Another paradox was discovered in 1900, known as the Population paradox. This is a scenario in which a state with a large population and rapid growth can lose a seat to a state with a small population and smaller population growth. In 1900, Virginia lost a seat to Maine, even though Virginia's population was larger and growing much more rapidly.
In particular, in 1900, Virginia had 1854184 people and Maine had 694466 people, so Virginia had 2.67 times the population of Maine. In 1901, Virginia had 1873951 people and Maine had 699114 people, so Virginia had 2.68 times the number of people. And yet Hamilton apportionment would have given 10 seats to Virginia and 3 to Maine in 1900, but 9 to Virginia and 4 to Maine in 1901.
Central to this paradox is that even though Virginia was growing faster than Maine, the rest of the nation was growing faster still, and proportionally Virginia lost more because it was a larger state. But it's still paradoxical for a state to lose a representative to a second state that is both smaller in population and growing less rapidly each census.12
The Hill method can be shown to not suffer from either the Alabama paradox or the Population paradox. That it doesn't suffer from these paradoxical behaviours and that it seeks to minimize a meaningful measure of inequality led to its adoption in the US.13
Understanding the modern Hill method in practice
Since 1930, the US has used the Hill method to apportion seats for the House of Representatives. But as described above, it may be hard to understand how to actually apply the Hill method. Recall that Pi is the population of state i, and Ri is the number of representatives allocated to state i. The Hill method seeks to minimize
$$\frac{P_i/R_i - P_j/R_j}{P_j/R_j} = \frac{P_i/R_i}{P_j/R_j} - 1$$
whenever Pi/Ri > Pj/Rj. Stated differently, the Hill method seeks to guarantee the smallest relative differences in constituency size.
We can work out a different way of understanding this apportionment that is easier to implement in practice.
Suppose that we have allocated all of the representatives to each state and state j has Rj representatives, and suppose that this allocation successfully minimizes relative differences in constituency size. Take two different states i and j with Pi/Ri > Pj/Rj. (If this isn't possible then the allocation is perfect).
We can ask if it would be a good idea to move one representative from state j to state i, since state j's constituency sizes are smaller. This can be thought of as working with Ri′=Ri + 1 and Rj′=Rj − 1. If this transfer lessens the inequality then it should be made — but since we are supposing that the allocation successfully minimizes relative difference in constituency size, we must have that the inequality is at least as large. This necessarily means that Pj/Rj′>Pi/Ri′ (since otherwise the relative difference is strictly smaller) and
$$\frac{P_jR_i'}{P_iR_j'} - 1 \geq \frac{P_iR_j}{P_jR_i} - 1$$
(since the relative difference must be at least as large). This is equivalent to
$$\frac{P_j(R_i+1)}{P_i(R_j-1)} \geq \frac{P_iR_j}{P_jR_i}
\iff
\frac{P_j^2}{(R_j-1)R_j} \geq \frac{P_i^2}{R_i(R_i+1)}.$$
As every variable is positive, we can rewrite this as
$$\frac{P_j}{\sqrt{(R_j - 1)R_j}} \geq \frac{P_i}{\sqrt{R_i(R_i+1)}}. \tag{2}$$
We've shown that $(2)$ must hold whenever Pi/Ri > Pj/Rj in a system that minimizes relative difference in constituency size. But in fact it must hold for all pairs of states i and j.
Clearly it holds if i = j as the denominator on the left is strictly smaller.
If we are in the case when Pj/Rj > Pi/Ri, then we necessarily have the chain Pj/(Rj − 1)>Pj/Rj > Pi/Ri > Pi/(Ri + 1). Multiplying the inner and outer inequalities shows that $(2)$ holds trivially in this case.
This inequality shows that the greatest obstruction to being perfectly apportioned as per Hill's method is the largest fraction
$$ \frac{P_i}{\sqrt{R_i(R_i+1)}} $$
being too large. (Some call this term the Hill rank-index).
An iterative Hill apportionment
This observation leads to the following iterative construction of a Hill apportionment. Initially, assign every state 1 representative (since by the Constitution, each state gets at least one representative). Then, given an apportionment for n seats, we can get an apportionment for n + 1 seats by assigning the additional seat to the state i which maximizes the Hill rank-index $P_i/\sqrt{R_i(R_i+1)}$.
Further, it can be shown that the apportionment produced by Hill's method is unique (except for ties in the Hill rank-index, which are exceedingly rare in practice).
This is very quickly and easily implemented in code. In a later note, I will share the code I used to compute the various data for this note, as well as an implementation of Hill apportionment.
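In the meantime, here is a minimal Python sketch of the iterative procedure just described. It is written directly from the description above and is not the author's forthcoming code; the dictionary input format is an assumption. Each state starts with one seat, and every remaining seat goes to the state with the largest rank-index P/sqrt(R(R+1)), where R is its current seat count:

import heapq, math

def hill_apportionment(populations, seats=435):
    """Iterative Huntington-Hill apportionment."""
    reps = {state: 1 for state in populations}          # constitutional minimum of one seat
    heap = [(-pop / math.sqrt(1 * 2), state) for state, pop in populations.items()]
    heapq.heapify(heap)
    for _ in range(seats - len(populations)):            # hand out the remaining seats
        _, state = heapq.heappop(heap)                    # state with the largest rank-index
        reps[state] += 1
        r = reps[state]
        heapq.heappush(heap, (-populations[state] / math.sqrt(r * (r + 1)), state))
    return reps

For instance, hill_apportionment({'A': 1000, 'B': 2000, 'C': 500}, seats=10) gives 3, 6, and 1 seats to A, B, and C respectively: roughly proportional to population, while guaranteeing each state a seat.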
Additional notes: Consequences of the 1870 and 1990 Apportionments
The 1870 Apportionment
Officially, Dean's method of apportionment has never been used. But it was perhaps used in 1870 without being described. Officially, Hamilton's method was in place and the size of the House was agreed to be 292. But the actual apportionment that occurred agreed with Dean's method, not Hamilton's method. Specifically, New York and Illinois were each given one fewer seat than Hamilton's method would have given, while New Hampshire and Florida were given one additional seat each.
There are many circumstances surrounding the 1870 census and apportionment that make this a particularly convoluted time. Firstly, the US had just experienced its Civil War, where millions of people died and millions others moved or were displaced. Animosity and reconstruction were both in full swing. Secondly, the US passed the 14th amendment in 1868, so that suddenly the populations of Southern states grew as former slaves were finally allowed to be counted fully.
One might think that having two pairs of states swap a representative would be mostly inconsequential. But this difference — using Dean's method instead of the agreed on Hamilton method, changed the result of the 1876 Presidential election. In this election, Samuel Tilden won New York while Rutherford B. Hayes won Illinois, New Hampshire, and Florida. As a result, Tilden received one fewer electoral vote and Hayes received one additional electoral vote — and the total electoral voting in the end had Hayes win with 185 votes to Tilden's 184.
There is still one further mitigating factor, however, that causes this to be yet more convoluted. The 1876 election is perhaps the most disputed presidential election. In Florida, Louisiana, and South Carolina, each party reported that its candidate had won the state. Legitimacy was in question, and it's widely believed that a deal was struck between the Democratic and Republican parties (see wikipedia and 270 to win). As a result of this deal, the Republican candidate Rutherford B. Hayes would gain all disputed votes and remove federal troops (which had been propping up reconstructive efforts) from the South. This marked the end of the "Reconstruction" period, and allowed the rise of the Democratic Redeemers (and their rampant black voter disenfranchisement) in the South.
Similar in consequence though not in controversy, the apportionment of 1990 influenced the results of the 2000 presidential election between George W. Bush and Al Gore (the 2000 census was not complete before the election took place, so the election was run with the 1990 electoral college sizes). The modern Hill apportionment method was used, as it has been since 1930. But interestingly, if the originally proposed Hamilton method of 1792 had been used, the electoral college would have been tied at 269 votes each. If Jefferson's method had been used, then Gore would have won with 271 votes to Bush's 266.
These decisions have far-reaching consequences!
Balinski, Michel L., and H. Peyton Young. Fair representation: meeting the ideal of one man, one vote. Brookings Institution Press, 2010.
Balinski, Michel L., and H. Peyton Young. "The quota method of apportionment." The American Mathematical Monthly 82.7 (1975): 701-730.
Bliss, G. A., Brown, E. W., Eisenhart, L. P., & Pearl, R. (1929). Report to the President of the National Academy of Sciences. February, 9, 1015-1047.
Crocker, R. House of Representatives Apportionment Formula: An Analysis of Proposals for Change and Their Impact on States. DIANE Publishing, 2011.
Huntington, E. V. The Apportionment of Representatives in Congress, Transactions of the American Mathematical Society 30 (1928), 85–110.
Peskin, Allan. "Was there a Compromise of 1877." The Journal of American History 60.1 (1973): 63-75.
US Census Results
US Congressional Record, as collected at https://memory.loc.gov/ammem/amlaw/lwaclink.html
George Washington's collected papers, as archived at https://web.archive.org/web/20090124222206/http://gwpapers.virginia.edu/documents/presidential/veto.html
Wikipedia on the Compromise of 1877, at https://en.wikipedia.org/wiki/Compromise_of_1877
Wikipedia on Arthur Vandenberg, at https://en.wikipedia.org/wiki/Arthur_Vandenberg
Using lcalc to compute half-integral weight L-functions
by David Lowry-Duda Posted on October 9, 2018
This is a brief note intended primarily for my collaborators interested in using Rubinstein's lcalc to compute the values of half-integral weight $L$-functions.
We will be using lcalc through sage. Unfortunately, we are going to be using some functionality which sage doesn't expose particularly nicely, so it will feel a bit silly. Nonetheless, using sage's distribution will prevent us from needing to compile it on our own (and there are a few bugfixes present in sage's version).
Some $L$-functions are inbuilt into lcalc, but not half-integral weight $L$-functions. So it will be necessary to create a datafile containing the data that lcalc will use to generate its approximations. In short, this datafile will describe the shape of the functional equation and give a list of coefficients for lcalc to use.
Building the datafile
It is assumed that the $L$-function is normalized in such a way that
$$\begin{equation}
\Lambda(s) = Q^s L(s) \prod_{j = 1}^{A} \Gamma(\gamma_j s + \lambda_j) = \omega \overline{\Lambda(1 - \overline{s})}.
\end{equation}$$
This involves normalizing the functional equation to be of shape $s \mapsto 1-s$. Also note that $Q$ will be given as a real number.
An annotated version of the datafile you should create looks like this
2 # 2 means the Dirichlet coefficients are reals
0 # 0 means the L-function isn't a "nice" one
10000 # 10000 coefficients will be provided
0 # 0 means the coefficients are not periodic
1 # num Gamma factors of form \Gamma(\gamma s + \lambda)
1 # the \gamma in the Gamma factor
1.75 0 # \lambda in Gamma factor; complex valued, space delimited
0.318309886183790 # Q. In this case, 1/pi
1 0 # real and imaginary parts of omega, sign of func. eq.
0 # number of poles
1.000000000000000 # a(1)
-1.78381067250408 # a(2)
... # ...
-0.622124724090625 # a(10000)
If there is an error, lcalc will usually fail silently. (Bummer). Note that in practice, datafiles should only contain numbers and should not contain comments. This annotated version is for convenience, not for use.
You can find a copy of the datafile for the unique half-integral weight cusp form of weight $9/2$ on $\Gamma_0(4)$ here. This uses the first 10000 coefficients — it's surely possible to use more, but this was the test-setup that I first set up.
Generating the coefficients for this example
In order to create datafiles for other cuspforms, it is necessary to compute the coefficients (presumably using magma or sage) and then to populate a datafile. A good exercise would be to recreate this datafile using sage-like methods.
One way to create this datafile is to explicitly create the q-expansion of the modular form, if we happen to know a convenient expression. For us, we happen to know that $f = \eta(2z)^{12} \theta(z)^{-3}$. Thus one way to create the coefficients is to do something like the following.
num_coeffs = 10**5 + 1
weight = 9.0 / 2.0
R.<q> = PowerSeriesRing(ZZ)
theta_expansion = theta_qexp(num_coeffs)
# Note that qexp_eta omits the q^(1/24) factor
eta_expansion = qexp_eta(ZZ[['q']], num_coeffs + 1)
eta2_coeffs = []
for i in range(num_coeffs):
    if i % 2 == 1:
        eta2_coeffs.append(0)
    else:
        eta2_coeffs.append(eta_expansion[i//2])
eta2 = R(eta2_coeffs)
g = q * ( (eta2)**4 / (theta_expansion) )**3
coefficients = g.list()[1:] # skip the 0 coeff
print(coefficients[:10])
normalized_coefficients = []
for idx, elem in enumerate(coefficients, 1):
    normalized_coeff = 1.0 * elem / (idx ** (.5 * (weight - 1)))
    normalized_coefficients.append(normalized_coeff)
print(normalized_coefficients[:10])
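To finish, the normalized coefficients have to be written into a datafile of the shape annotated above. The following is a minimal sketch of that step, assuming the same header values as the example datafile (the Gamma-factor data, Q = 1/pi, and sign 1) and reusing the filename g1_lcalcfile.txt that appears below; adjust these for other forms.

header_lines = [
    "2",                                  # Dirichlet coefficients are real
    "0",                                  # not a "nice" L-function
    str(len(normalized_coefficients)),    # number of coefficients provided
    "0",                                  # coefficients are not periodic
    "1",                                  # one Gamma factor
    "1",                                  # \gamma in the Gamma factor
    "1.75 0",                             # \lambda in the Gamma factor
    "0.318309886183790",                  # Q = 1/pi
    "1 0",                                # omega, the sign of the functional equation
    "0",                                  # number of poles
]
with open("g1_lcalcfile.txt", "w") as datafile:
    datafile.write("\n".join(header_lines) + "\n")
    for coeff in normalized_coefficients:
        datafile.write("{}\n".format(coeff))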
Using lcalc now
Suppose that you have a datafile, called g1_lcalcfile.txt (for example). Then to use this from sage, you point lcalc within sage to this file. This can be done through calls such as
# Computes L(0.5 + 0i, f)
lcalc('-v -x0.5 -y0 -Fg1_lcalcfile.txt')
# Computes L(s, f) from 0.5 to (2 + 7i) at 1000 equally spaced samples
lcalc('--value-line-segment -x0.5 -y0 -X2 -Y7 --number-samples=1000 -Fg1_lcalcfile.txt')
# See lcalc.help() for more on calling lcalc.
The key in these is to pass along the datafile through the -F argument.
How fat would we have to get to balance carbon emissions?
Let's consider a ridiculous solution to a real problem. We're unearthing tons of carbon, burning it, and releasing it into the atmosphere.
Disclaimer: There are several greenhouse gasses, and lots of other things that we're throwing wantonly into the environment. Considering them makes things incredibly complicated incredibly quickly, so I blithely ignore them in this note.
Such rapid changes have side effects, many of which lead to bad things. That's why nearly 150 countries ratified the Paris Agreement on Climate Change. Even if we assume that all these countries will accomplish what they agreed to (which might be challenging for the US), most nations and advocacy groups are focusing on increasing efficiency and reducing emissions. These are good goals! But what about all the carbon that is already in the atmosphere?
You know what else is a problem? Obesity! How are we to solve all of these problems?
Looking at this (very unscientific) graph, we see that the red isn't keeping up! Maybe we aren't using the valuable resource of our own bodies enough! Fat has carbon in it — often over 20% by weight. What if we took advantage of our propensity to become propense? How fat would we need to get to balance last year's carbon emissions?
That's what we investigate here.
|
CommonCrawl
|
May 2020, 14(2): 279-299. doi: 10.3934/amc.2020020
Multi-point codes from the GGS curves
Chuangqiang Hu 1 and Shudi Yang 2,*
Yau Mathematical Sciences Center, Tsinghua University, Peking, 100084, China
School of Mathematical Sciences, Qufu Normal University, Shandong, 273165, China
*Corresponding author: Shudi Yang
Received June 2018 Revised December 2018 Published September 2019
Fund Project: This work is partially supported by the NSFC (11701317, 11531007, 11571380, 11701320, 61472457) and Tsinghua University startup fund. This work is also partially supported by China Postdoctoral Science Foundation Funded Project (2017M611801), Jiangsu Planned Projects for Postdoctoral Research Funds (1701104C), Guangzhou Science and Technology Program (201607010144) and the Natural Science Foundation of Shandong Province of China (ZR2016AM04)
This paper is concerned with the construction of algebraic-geometric (AG) codes defined from GGS curves. It is of significant use to describe bases for the Riemann-Roch spaces associated with some rational places, which enables us to study multi-point AG codes. Along this line, we characterize explicitly the Weierstrass semigroups and pure gaps by an exhaustive computation for the basis of Riemann-Roch spaces from GGS curves. In addition, we determine the floor of a certain type of divisor and investigate the properties of AG codes. Multi-point codes with excellent parameters are found, among which, a presented code with parameters $ [216,190,\geqslant 18] $ over $ \mathbb{F}_{64} $ yields a new record.
Keywords: Algebraic geometric code, GGS curve, Weierstrass semigroup, pure Weierstrass gap.
Mathematics Subject Classification: Primary: 14H55, 11R58, 11T71.
Citation: Chuangqiang Hu, Shudi Yang. Multi-point codes from the GGS curves. Advances in Mathematics of Communications, 2020, 14 (2) : 279-299. doi: 10.3934/amc.2020020
|
CommonCrawl
|
Is There a Limit?
Dan Sitaru and Leo Giugiuc posed and solved the following problem involving a functional equation:
Let $f: \mathbb{R}\rightarrow [0,2]$ be a function for which, for all $x,$ $f^{2}(x-2)+f^{2}(x+2)=3.$ Study the existence of the limit $\displaystyle\lim_{x\rightarrow\infty}f(x).$
Substitute $x+6$ for $x$:
$f^{2}(x+4)+f^{2}(x+8)=3.$
Next, substitute $x+2$ for $x$:
$f^{2}(x)+f^{2}(x+4)=3.$
The two equations imply $f^{2}(x+8)=f^{2}(x),$ and, since $f$ is non-negative, $f(x+8)=f(x),$ meaning that $f$ is a periodic function. A periodic function has a limit at infinity only if it is constant. If $f$ is constant, then $2f^{2}(x)=3,$ so that $f(x)=\sqrt{3/2},$ which is also its limit at infinity. Otherwise, the limit does not exist.
Critical remarks
Several pieces of data in the problem provide no useful information and are not specifically helpful in solving it; they may only serve to distract from the essence of the problem. I refer to such occurrences as a red herring. The function could have been defined as $f: \mathbb{R}\rightarrow\mathbb{R}^{+},$ just a positive real valued function. That the two squares add up to $3$ is of no consequence, except for obtaining the value of the function in case it's constant. The same holds for the two values of the function being squared and for the two values being taken at $x-2$ and $x+2.$ Here's a simplified formulation:
Let $h: \mathbb{R}\rightarrow\mathbb{R}^{+}$ be a function for which, for all $x,$ $h(x)+h(x+a)=b,$ where $a$ and $b$ are constant. Study the existence of the limit $\displaystyle\lim_{x\rightarrow\infty}h(x).$
Substitute $x-a$ for $x$ to obtain $h(x-a)+h(x)=b,$ from which $h(x-a)=h(x+a),$ implying a period of $2a$ and, as a consequence, no limit at infinity unless $h$ is a constant. In that case, the constant value of the function and its limit is $b/2.$
For the solution of the original problem take $h(x)=f^{2}(x-2),$ $a=4,$ $b=3.$
(The problem is related to Problem 5, 1968 IMO.)
What Is Red Herring
On the Difference of Areas
Area of the Union of Two Squares
Circle through the Incenter
Circle through the Incenter And Antiparallels
Circle through the Circumcenter
Inequality with Logarithms
Breaking Chocolate Bars
Circles through the Orthocenter
100 Grasshoppers on a Triangular Board
Simultaneous Diameters in Concurrent Circles
An Inequality from the 2015 Romanian TST
Schur's Inequality
Further Properties of Peculiar Circles
Inequality with Csc And Sin
Area Inequality in Trapezoid
Triangles on HO
From Angle Bisector to 120 degrees Angle
A Case of Divergence
An Inequality for the Cevians through Spieker Point via Brocard Angle
An Inequality In Triangle and Without
Problem 3 from the EGMO2017
Mickey Might Be a Red Herring in the Mickey Mouse Theorem
A Cyclic Inequality from the 6th IMO, 1964
Three Complex Numbers Satisfy Fermat's Identity For Prime Powers
Probability of Random Lines Crossing
Planting Trees in a Row
Two Colors - Three Points
Copyright © 1996-2018 Alexander Bogomolny
|
CommonCrawl
|
Unusual combination of three types of interatrial communications in a child
Tarun R. Ramman, Sushil Azad, Krishna S. Iyer
Journal: Cardiology in the Young / Volume 30 / Issue 12 / December 2020
Published online by Cambridge University Press: 18 November 2020, pp. 1933-1934
An unusual combination of three types of interatrial communications – coronary sinus defect, primum defect, and secundum defect – occurring together in a 3-year-old child is presented.
Efficacy of small-volume gastrografin videofluoroscopic screening for detecting pharyngeal leaks following total laryngectomy
M Narayan, S Limbachiya, D Balasubramanian, N Subramaniam, K Thankappan, S Iyer
Journal: The Journal of Laryngology & Otology / Volume 134 / Issue 4 / April 2020
Print publication: April 2020
Pharyngocutaneous fistulae are dreaded complications following total laryngectomy. This paper presents our experience using 3–5 ml gastrografin to detect pharyngeal leaks following total laryngectomy, and compares post-operative videofluoroscopy with clinical follow-up findings in the detection of pharyngocutaneous fistulae.
A retrospective case–control study was conducted of total laryngectomy patients. The control group (n = 85) was assessed clinically for development of pharyngocutaneous fistulae, while the study group (n = 52) underwent small-volume (3–5 ml) post-operative gastrografin videofluoroscopy.
In the control group, 24 of 85 patients (28 per cent) developed pharyngocutaneous fistulae, with 6 requiring surgical correction. In the study group, 24 of 52 patients (46 per cent) had videofluoroscopy-detected pharyngeal leaks; 4 patients (8 per cent) developed pharyngocutaneous fistulae, but all cases resolved following non-surgical management. Patients who underwent videofluoroscopy had a significantly lower risk of developing pharyngocutaneous fistulae; sensitivity and specificity in the detection of pharyngocutaneous fistulae were 58 per cent and 100 per cent respectively.
Small-volume gastrografin videofluoroscopy reliably identified small pharyngeal leaks. Routine use in total laryngectomy combined with withholding feeds in cases of early leaks may prevent the development of pharyngocutaneous fistulae.
Feasibility of ovine and synthetic temporal bone models for simulation training in endoscopic ear surgery
S Okhovat, T D Milner, A Iyer
Journal: The Journal of Laryngology & Otology / Volume 133 / Issue 11 / November 2019
Print publication: November 2019
Comparing the feasibility of ovine and synthetic temporal bones for simulating endoscopic ear surgery against the 'gold standard' of human cadaveric tissue.
A total of 10 candidates (5 trainees and 5 experts) performed endoscopic tympanoplasty on 3 models: Pettigrew temporal bones, ovine temporal bones and cadaveric temporal bones. Candidates completed a questionnaire assessing the face validity, global content validity and task-specific content validity of each model.
Regarding ovine temporal bone validity, the median values were 4 (interquartile range = 4–4) for face validity, 4 (interquartile range = 4–4) for global content validity and 4 (interquartile range = 4–4) for task-specific content validity. For the Pettigrew temporal bone, the median values were 3.5 (interquartile range = 2.25–4) for face validity, 3 (interquartile range = 2.75–3) for global content validity and 3 (interquartile range = 2.5–3) for task-specific content validity. The ovine temporal bone was considered significantly superior to the Pettigrew temporal bone for the majority of validity categories assessed.
Tympanoplasty is feasible in both the ovine temporal bone and the Pettigrew temporal bone. However, the ovine model was a significantly more realistic simulation tool.
Do we know how scabies outbreaks in residential and nursing care homes for the elderly should be managed? A systematic review of interventions using a novel approach to assess evidence quality
E. J. Morrison, J. Middleton, S. Lanza, J. E. Cowen, K. Hewitt, S. L. Walker, M. Nicholls, J. Rajan-Iyer, J. Fletcher, J. A. Cassell
Published online by Cambridge University Press: 05 August 2019, e250
Currently no national guidelines exist for the management of scabies outbreaks in residential or nursing care homes for the elderly in the United Kingdom. In this setting, diagnosis and treatment of scabies outbreaks is often delayed and optimal drug treatment, environmental control measures and even outcome measures are unclear. We undertook a systematic review to establish the efficacy of outbreak management interventions and determine evidence-based recommendations. Four electronic databases were searched for relevant studies, which were assessed using a quality assessment tool drawing on STROBE guidelines to describe the quality of observational data. Nineteen outbreak reports were identified, describing both drug treatment and environmental management measures. The quality of data was poor; none reported all outcome measures and only four described symptom relief measures. We were unable to make definitive evidence-based recommendations. We draw on the results to propose a framework for data collection in future observational studies of scabies outbreaks. While high-quality randomised controlled trials are needed to determine optimal drug treatment, evidence on environmental measures will need augmentation through other literature studies. The quality assessment tool designed is a useful resource for reporting of outcome measures including patient-reported measures in future outbreaks.
Identifying eigenmodes of averaged small-amplitude perturbations to turbulent channel flow
A. S. Iyer, F. D. Witherden, S. I. Chernyshenko, P. E. Vincent
Journal: Journal of Fluid Mechanics / Volume 875 / 25 September 2019
Print publication: 25 September 2019
Eigenmodes of averaged small-amplitude perturbations to a turbulent channel flow – which is one of the most fundamental canonical flows – are identified for the first time via an extensive set of high-fidelity graphics processing unit accelerated direct numerical simulations. While the system governing averaged small-amplitude perturbations to turbulent channel flow remains unknown, the fact that such eigenmodes can be identified constitutes direct evidence that it is linear. Moreover, while the eigenvalue associated with the slowest-decaying anti-symmetric eigenmode is found to be real, the eigenvalue associated with the slowest-decaying symmetric eigenmode is found to be complex. This indicates that the unknown linear system governing the evolution of averaged small-amplitude perturbations cannot be self-adjoint, even for the case of a uni-directional flow. In addition to elucidating aspects of the flow physics, the findings provide guidance for development of new unsteady Reynolds-averaged Navier–Stokes turbulence models, and constitute a new and accessible benchmark problem for assessing the performance of existing models, which are used widely throughout industry.
Artificial intelligence/machine learning in manufacturing and inspection: A GE perspective
Kareem S. Aggour, Vipul K. Gupta, Daniel Ruscitto, Leonardo Ajdelsztajn, Xiao Bian, Kristen H. Brosnan, Natarajan Chennimalai Kumar, Voramon Dheeradhada, Timothy Hanlon, Naresh Iyer, Jaydeep Karandikar, Peng Li, Abha Moitra, Johan Reimann, Dean M. Robinson, Alberto Santamaria-Pang, Chen Shen, Monica A. Soare, Changjie Sun, Akane Suzuki, Raju Venkataramana, Joseph Vinciquerra
Journal: MRS Bulletin / Volume 44 / Issue 7 / July 2019
At GE Research, we are combining "physics" with artificial intelligence and machine learning to advance manufacturing design, processing, and inspection, turning innovative technologies into real products and solutions across our industrial portfolio. This article provides a snapshot of how this physical plus digital transformation is evolving at GE.
Peri-operative outcomes following major surgery for head and neck cancer in the elderly: institutional audit and case–control study
N Subramaniam, D Balasubramanian, P Rka, P Rathod, S Murthy, S Vidhyadharan, S Rajan, J Paul, K Thankappan, S Iyer
Journal: The Journal of Laryngology & Otology / Volume 132 / Issue 8 / August 2018
Elderly patients have been consistently shown to receive suboptimal therapy for cancers of the head and neck. This study was performed to determine the peri-operative outcomes of these patients and compare them with those of younger patients.
In this retrospective analysis, 115 patients aged 70 years or more undergoing major surgery for head and neck cancers were matched with 115 patients aged 50–60 years, and univariate analysis was performed.
Elderly patients had a reduced performance status (p < 0.001) and more co-morbid illnesses (p = 0.007), but a comparable intra-operative course. They had a longer median hospital stay (p = 0.016), longer intensive care unit stay (p = 0.04), longer median tracheostomy dependence (p = 0.04) and were more often discharged with feeding tubes (p < 0.001). They also had a higher incidence of post-operative non-fatal cardiac events (p = 0.045).
Elderly patients with good performance status should receive curative-intent surgery. Although hospital stay and tube dependence are longer, morbidity and mortality are comparable with younger patients.
Nickel-reduced graphene oxide composite foams for electrochemical oxidation processes: towards biomolecule sensing
S. Thoufeeq, Pankaj Kumar Rastogi, Narayanaru Sreekanth, Malie Madom Ramaswamy Iyer Anantharaman, Tharangattu N. Narayanan
Journal: MRS Communications / Volume 8 / Issue 3 / September 2018
Metal–graphene composites are sought after for various applications. A hybrid light-weight foam of nickel (Ni) and reduced graphene oxide (rGO), called Ni-rGO, is reported here for small molecule oxidations and thereby their sensing. Methanol oxidation and non-enzymatic glucose sensing are attempted electrocatalytically with the Ni-rGO foam, and an enhanced methanol oxidation current density of 4.81 mA/cm2 is achieved, which is ~1.7 times higher than that of bare Ni foam. In glucose oxidation, the Ni-rGO electrode shows better sensitivity than the bare Ni foam electrode, detecting glucose linearly over a concentration range of 10 µM to 4.5 mM with a very low detection limit of 3.6 µM. This work demonstrates the synergistic effects of metal and graphene in oxidative processes, and also shows the feasibility of developing scalable metal–graphene composite inks for small molecule printable sensors and fuel cell catalysts.
Management of late presentation congenital heart disease
Parvathi U. Iyer, Guillermo E. Moreno, Luiz Fernando Caneo, Tahira Faiz, Lara S. Shekerdemian, Krishna S. Iyer
Journal: Cardiology in the Young / Volume 27 / Issue S6 / December 2017
Published online by Cambridge University Press: 04 December 2017, pp. S31-S39
In many parts of the world, mostly low- and middle-income countries, timely diagnosis and repair of congenital heart diseases (CHDs) is not feasible for a variety of reasons. In these regions, economic growth has enabled the development of cardiac units that manage patients with CHD presenting later than would be ideal, often after the window for early stabilisation – transposition of the great arteries, coarctation of the aorta – or for lower-risk surgery in infancy – left-to-right shunts or cyanotic conditions. As a result, patients may have suffered organ dysfunction, manifest signs of pulmonary vascular disease, or the sequelae of profound cyanosis and polycythaemia. Late presentation poses unique clinical and ethical challenges in decision making regarding operability or surgical candidacy, surgical strategy, and perioperative intensive care management.
The management of scabies outbreaks in residential care facilities for the elderly in England: a review of current health protection guidelines
L. C. J. WHITE, S. LANZA, J. MIDDLETON, K. HEWITT, L. FREIRE-MORAN, C. EDGE, M. NICHOLLS, J. RAJAN-IYER, J. A. CASSELL
Journal: Epidemiology & Infection / Volume 144 / Issue 15 / November 2016
Commonly thought of as a disease of poverty and overcrowding in resource-poor settings globally, scabies is also an important public health issue in residential care facilities for the elderly (RCFE) in high-income countries such as the UK. We compared and contrasted current local Health Protection Team (HPT) guidelines for the management of scabies outbreaks in RCFE throughout England. We performed content analysis on 20 guidelines, and used this to create a quantitative report of their variation in key dimensions. Although the guidelines were generally consistent on issues such as the treatment protocols for individual patients, there was substantial variation in their recommendations regarding the prophylactic treatment of contacts, infection control measures and the roles and responsibilities of individual stakeholders. Most guidelines did not adequately address the logistical challenges associated with mass treatment in this setting. We conclude that the heterogeneous nature of the guidelines reviewed is an argument in favour of national guidelines being produced.
Damping of modal perturbations in solid rocket motors
A.S. Iyer, V.K. Chakravarthy, S. Saha, D. Chakraborty
Journal: The Aeronautical Journal / Volume 120 / Issue 1231 / September 2016
Quasi-one-dimensional (quasi-1D) tools developed for capturing flow and acoustic dynamics in non-segmented solid rocket motors are evaluated using multi-dimensional computational fluid dynamic simulations and used to characterise damping of modal perturbations. For motors with high length-to-diameter ratios (of the order of 10), remarkably accurate estimates of frequencies and damping rates of lower modes can be obtained using the quasi-1D approximation. Various grain configurations are considered to study the effect of internal geometry on damping rates. Analysis shows that a lower cross-sectional area at the nozzle entry plane increases the damping rates of all the modes. The flow-turning loss for a mode increases if more of the mass addition due to combustion occurs at pressure nodes. For the fundamental mode, this loss is, therefore, maximum if the burning area is maximum at the centre. The insights from this study, in addition to recommendations made by Blomshield(1) based on combustion considerations, would be very helpful in realizing rocket motors free from combustion instability.
Twin pregnancy in a Fontan-palliated patient
Anupama Nair, Sitaraman Radhakrishnan, Krishna S. Iyer
Journal: Cardiology in the Young / Volume 26 / Issue 6 / August 2016
Published online by Cambridge University Press: 29 April 2016, pp. 1221-1224
The Fontan connection, originally described in 1971, is used to provide palliation for patients with many forms of CHDs that cannot support a biventricular circulation. An increasing number of females who have undergone these connections in childhood are now surviving into adulthood and some are becoming pregnant. We report a case of a 29-year-old woman who presented with a twin pregnancy at 33 weeks of gestation. She had significant deterioration of her cardiovascular status before the twin babies were delivered by emergency caesarean section owing to associated obstetric complications. This report also highlights the various maternal and fetal complications occurring in pregnancy of Fontan-palliated patients and suggests the need for meticulous pre-conception counselling and strict perinatal care.
Descending aortic flow reversal in obstructed total anomalous pulmonary venous connection
Anupama K. Nair, Sitaraman Radhakrishnan, Krishna S. Iyer
Journal: Cardiology in the Young / Volume 26 / Issue 5 / June 2016
In this study, we present the case of a neonate with obstructed infracardiac total anomalous pulmonary venous connection with severe pulmonary hypertension and a patent ductus arteriosus with right-to-left shunting. The patient had an unusual finding of pandiastolic flow reversal in the upper descending thoracic aorta. He underwent emergency surgical re-routing of the pulmonary veins to the left atrium, and postoperative echocardiography showed disappearance of the descending aortic flow reversal. We hypothesise that in severely obstructed total anomalous pulmonary venous connection the left ventricular output may be extremely low, resulting in flow reversal in the descending aorta.
A numerical study of shear layer characteristics of low-speed transverse jets
Prahladh S. Iyer, Krishnan Mahesh
Journal: Journal of Fluid Mechanics / Volume 790 / 10 March 2016
Print publication: 10 March 2016
Direct numerical simulation (DNS) and dynamic mode decomposition (DMD) are used to study the shear layer characteristics of a jet in a crossflow. Experimental observations by Megerian et al. (J. Fluid Mech., vol. 593, 2007, pp. 93–129) at velocity ratios ( $R=\overline{v}_{j}/u_{\infty }$ ) of 2 and 4 and Reynolds number ( $Re=\overline{v}_{j}D/{\it\nu}$ ) of 2000 on the transition from absolute to convective instability of the upstream shear layer are reproduced. Point velocity spectra at different points along the shear layer show excellent agreement with experiments. The same frequency ( $St=0.65$ ) is dominant along the length of the shear layer for $R=2$ , whereas the dominant frequencies change along the shear layer for $R=4$ . DMD of the full three-dimensional flow field is able to reproduce the dominant frequencies observed from DNS and shows that the shear layer modes are dominant for both the conditions simulated. The spatial modes obtained from DMD are used to study the nature of the shear layer instability. It is found that a counter-current mixing layer is obtained in the upstream shear layer. The corresponding mixing velocity ratio is obtained, and seen to delineate the two regimes of absolute or convective instability. The effect of the nozzle is evaluated by performing simulations without the nozzle while requiring the jet to have the same inlet velocity profile as that obtained at the nozzle exit in the simulations including the nozzle. The shear layer spectra show good agreement with the simulations including the nozzle. The effect of shear layer thickness is studied at a velocity ratio of 2 based on peak and mean jet velocity. The dominant frequencies and spatial shear layer modes from DNS/DMD are significantly altered by the jet exit velocity profile.
Interpreting pre-operative mastoid computed tomography images: comparison between operating surgeon, radiologist and operative findings
K Badran, S Ansari, R Al Sam, Y Al Husami, A Iyer
Journal: The Journal of Laryngology & Otology / Volume 130 / Issue 1 / January 2016
Published online by Cambridge University Press: 08 January 2016, pp. 32-37
Print publication: January 2016
This study aimed to compare the interpretations of temporal bone computed tomography scans by an otologist and a radiologist with a special interest in temporal bone imaging. It also aimed to determine the usefulness of this imaging modality.
A head and neck radiologist and an otologist separately reported pre-operative computed tomography images using a structured proforma. The reports were then compared with operative findings to determine their accuracy and differences in interpretations.
Forty-eight patients who underwent pre-operative computed tomography scans in a 30-month period were identified. Six patients were excluded because complete operative findings had not been recorded. Positive and negative predictive values and accuracy of the anatomical and pathological findings were calculated for 42 patients by both reporters. The accuracy was found to be less than 80 per cent, except for identification of the tegmen and lateral semicircular canal erosion. Overall, there was no significant difference in interpretations of computed tomography scans between reporters. There was a slight difference in interpretation for tympanic membrane retraction, facial canal erosion and lateral semicircular canal fistula and/or erosion.
Pre-operative computed tomography scanning of the temporal bone is useful for predicting anatomy for surgical planning in patients with chronic otitis media, but its reliability remains questionable.
Numerical study of high speed jets in crossflow
Xiaochuan Chai, Prahladh S. Iyer, Krishnan Mahesh
Journal: Journal of Fluid Mechanics / Volume 785 / 25 December 2015
Large-eddy simulation (LES) and dynamic mode decomposition (DMD) are used to study an underexpanded sonic jet injected into a supersonic crossflow and an overexpanded supersonic jet injected into a subsonic crossflow, where the flow conditions are based on the experiments of Santiago & Dutton (J. Propul. Power, vol. 13 (2), 1997, pp. 264–273) and Beresh et al. (AIAA J., vol. 43, 2005a, pp. 379–389), respectively. The simulations successfully reproduce experimentally observed shock systems and vortical structures. The time averaged flow fields are compared to the experimental results, and good agreement is observed. The behaviour of the flow is discussed, and the similarities and differences between the two regimes are studied. The trajectory of the transverse jet is investigated. A modification to Schetz et al.'s theory is proposed (Schetz & Billig, J. Spacecr. Rockets, vol. 3, 1996, pp. 1658–1665), which yields good prediction of the jet trajectories in the current simulations in the near field. Point spectra taken at various locations in the flowfield indicate a global oscillation for the sonic jet flow, wherein different regions in the flow oscillate with a frequency of $St=fD/u_{\infty }=0.3$ . For supersonic jet flow, no such global frequency is observed. Dynamic mode decomposition of the three-dimensional pressure field obtained from LES is performed and shows the same behaviour. The DMD results indicate that the $St=0.3$ mode is dominant between the upstream barrel shock and the bow shock for the sonic jet, while the roll up of the upstream shear layer is dominant for the supersonic jet.
Are head bandages really required after middle-ear surgery? A systematic review
I Khan, S Mohamad, S Ansari, A Iyer
A systematic review was performed to evaluate the role and effectiveness of head bandages after routine elective middle-ear surgery.
Studies that compared the effectiveness of head bandage use after elective middle-ear surgery (e.g. myringoplasty, mastoidectomy and cochlear implantation) were identified using the following databases: Ovid Medline and Embase, the Ebsco collections, the Cochrane Library, PubMed, and Google Scholar. An initial search identified 71 articles. All titles and abstracts were reviewed. Thirteen relevant articles were inspected in more detail; of these, only five met the inclusion criteria. These included three randomised, controlled trials, one retrospective case series and one literature review.
The three randomised, controlled trials (level of evidence 1b) showed no statistically significant differences in post-operative outcomes (in terms of complications) associated with head bandage use in middle-ear surgery. This finding was supported by the retrospective case series involving patients undergoing cochlear implantation.
Current available evidence shows no advantage of head bandage use after middle-ear surgery. Head bandages may not be required after routine, uncomplicated middle-ear surgery.
Three-dimensional integration: An industry perspective
Subramanian S. Iyer
Journal: MRS Bulletin / Volume 40 / Issue 3 / March 2015
The field of electronics packaging is undergoing a significant transition to accommodate the slowing down of lithographically driven semiconductor scaling. Three-dimensional (3D) integration is an important component of this transition and promises to revolutionize the way chips are assembled and interconnected in a subsystem. In this article, we develop the key attributes of 3D integration, the enablers and the challenges that need to be overcome before widespread acceptance by industry. While we are already seeing the proliferation of applications in the memory subsystem, the best is yet to come with the heterogeneous integration of a diverse set of technologies, the mixing of lithographic nodes and an economic argument for its implementation based on overall system function, and cost rather than a narrow component-based analysis. Finally, an extension to monolithic 3D integration promises even further benefits.
Transforming youth mental health: a Canadian perspective
S. N. Iyer, P. Boksa, S. Lal, J. Shah, G. Marandola, G. Jordan, M. Doyle, R. Joober, A. K. Malla
Journal: Irish Journal of Psychological Medicine / Volume 32 / Issue 1 / March 2015
Published online by Cambridge University Press: 26 February 2015, pp. 51-60
In most mental illnesses, onset occurs before the age of 25 and the earliest stages are critical. The youth bear a large share of the burden of disease associated with mental illnesses. Yet, Canadian youths with mental health difficulties face delayed detection; long waiting lists; inaccessible, unengaging services; abrupt transitions between services; and, especially in remoter regions, even a complete lack of services. Responding to this crisis, the Canadian Institutes of Health Research announced a 5-year grant that was awarded to ACCESS, a pan-Canadian network of youths, families, clinicians, researchers, policymakers, community organisations and Indigenous communities. Using strategies developed collaboratively by all stakeholders, ACCESS will execute a youth mental healthcare transformation via early detection, rapid access and appropriate, high-quality care. The project includes an innovative, mixed-methods service research component. Similar in many respects to other national youth mental health initiatives, ACCESS also exhibits important differences of scale, scope and approach.
Primary otological manifestations of granulomatosis with polyangiitis: a case series
N Amiraraghi, S Robertson, A Iyer
Journal: The Journal of Laryngology & Otology / Volume 129 / Issue 2 / February 2015
A primary otological presentation of granulomatosis with polyangiitis is rare. We present four cases of granulomatosis with polyangiitis with different otological manifestations.
Case report:
A literature review of granulomatosis with polyangiitis cases presenting to otolaryngologists was undertaken. A case series review of four patients presenting within a 12-month period was also performed. One patient had serous otitis media which worsened after myringotomy. Two patients presented with acute ear infection and facial palsy and one with acute mastoiditis. All were positive for antineutrophilic cytoplasmic antibody, and three had positive findings upon histological analysis.
When supposed acute ear infections fail to respond to treatment (antibiotics or surgery), rarer causes of the symptoms should be considered. By reporting this case series, we aim to improve the early diagnosis of granulomatosis with polyangiitis to enable timely treatment and prevent systemic involvement.
|
CommonCrawl
|
Uncertainty analysis of tumour absorbed dose calculations in molecular radiotherapy
Domenico Finocchiaro 1,2, Jonathan I. Gear 3, Federica Fioroni 1, Glenn D. Flux 3, Iain Murray 3, Gastone Castellani 2, Annibale Versari 4, Mauro Iori 1 & Elisa Grassi 1
EJNMMI Physics volume 7, Article number: 63 (2020)
Internal dosimetry evaluation consists of a multi-step process ranging from imaging acquisition to absorbed dose calculations. Assessment of uncertainty is complicated and, for that reason, it is commonly ignored in clinical routine. However, it is essential for adequate interpretation of the results. Recently, the EANM published practical guidance on uncertainty analysis for molecular radiotherapy based on the application of the law of propagation of uncertainty. In this study, we investigated the overall uncertainty in a sample of patients following the EANM guidelines. The aim of this study was to provide an indication of the typical uncertainties that may be expected from performing dosimetry, to determine parameters that have the greatest effect on the accuracy of calculations and to consider the potential improvements that could be made if these effects were reduced.
Absorbed doses and the relative uncertainties were calculated for a sample of 49 patients and a total of 154 tumours. A wide range of relative absorbed dose uncertainty values was observed (14–102%). Uncertainties associated with each quantity along the absorbed dose calculation chain (i.e. volume, recovery coefficient, calibration factor, activity, time-activity curve fitting, time-integrated activity and absorbed dose) were estimated. An equation was derived to describe the relationship between the uncertainty in the absorbed dose and the volume. The largest source of error was the VOI delineation. By postulating different values of FWHM, the impact of the imaging system spatial resolution on the uncertainties was investigated.
To the best of our knowledge, this is the first analysis of uncertainty in molecular radiotherapy based on a cohort of clinical cases. Wide inter-lesion variability of absorbed dose uncertainty was observed. Hence, a proper assessment of the uncertainties associated with the calculations should be considered a basic scientific standard. A model for a quick estimate of uncertainty without implementing the entire error propagation schema, which may be useful in clinical practice, was presented. Improving spatial resolution may in future be the key factor for accurate absorbed dose assessment.
In recent decades, molecular radiotherapy (MRT) has been increasingly used for the treatment of neuroendocrine tumours (NETs). The use of somatostatin analogues labelled with radio-emitting isotopes has shown promising results [1,2,3], and it is expected that peptide receptor radionuclide therapy (PRRT) will become more widely used. Recently, the NETTER-1 trial [4] demonstrated that 177Lu-DOTATATE-PRRT significantly improved progression-free survival. It has also been demonstrated that absorbed doses delivered to healthy organs and tumours have large inter-patient variability [5,6,7,8]. Moreover, many studies have provided evidence of dose-effect correlations in PRRT [9,10,11]. For these reasons, groups from different hospitals and research institutes across Europe have proposed the use of dosimetry for PRRT in routine clinical practice [12]. Personalized medicine necessitates treatment to be optimized based on patient-specific dosimetry. Calculation of the absorbed doses delivered to organs at risk and tumours should ideally incorporate uncertainty analysis. This is particularly true in the case of tumour dosimetry that can be subjected to relatively high uncertainties due to the wider range of absorbed doses delivered and the lack of standardised S-factors [13, 14].
To date, investigations into uncertainties of absorbed dose calculations in MRT have been mainly based on phantom measurements or simulated data [15,16,17,18]. However, uncertainty evaluation should ideally be considered for each individual case. Furthermore, the majority of studies have focused on one or only a few aspects of MRT absorbed dose measurements (for example on the calibration of gamma cameras [19] or on activity quantification [20,21,22]). However, internal dosimetry evaluation consists of a multi-step process with a specific uncertainty associated with each step [23]. Consequently, each step should be included in the overall absorbed dose uncertainty calculation.
Recently, the EANM published practical guidance on uncertainty analysis for molecular radiotherapy absorbed dose calculations [24]. This guide provides a detailed schema to determine uncertainties based on the application of the law of propagation of uncertainty (LPU) and was designed to be implemented using standard resources available in every clinic offering MRT. The published EANM paper also reports a patient example to support readers for the implementation of the guidelines.
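For reference, the schema is built on the first-order law of propagation of uncertainty; for a quantity $y = f(x_1, \ldots, x_N)$ with uncorrelated inputs, the standard GUM form (quoted here as general background rather than from the EANM guidance itself) is
$$ u_c^2(y) = \sum_{i=1}^{N} \left( \frac{\partial f}{\partial x_i} \right)^{2} u^2(x_i), $$
where $u(x_i)$ is the standard uncertainty of the input $x_i$ and $u_c(y)$ is the combined standard uncertainty of $y$; correlated inputs contribute additional covariance terms.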
To the best of our knowledge, to date, there are no published data to address uncertainty analysis that includes every aspect of the dosimetry calculation chain on a sample of clinical cases.
In this context, this study presents the uncertainties in tumour absorbed dose calculations for a sample of patients treated at the Azienda USL-IRCCS of Reggio Emilia (Italy). The aim of this paper is to give an indication of the typical uncertainties that may be expected when performing tumour dosimetry, to determine the parameters that have the greatest effect on the accuracy of the calculations, and to consider the potential improvements that could be made if these effects were reduced.
This study was carried out retrospectively on a sample of 49 patients enrolled in a clinical trial between 2016 and 2017 (EUDRACT 2015-005546-63), which received local institutional ethics committee approval at the Azienda USL-IRCCS of Reggio Emilia hospital.
All patients were affected by NETs and were treated with PRRT. According to the trial design, each patient underwent several 177Lu- and 90Y-DOTATOC administrations. Dosimetry was conducted at the first cycle of therapy after a therapeutic injection of 177Lu-DOTATOC. A mean activity of 4.2 ± 0.9 GBq of 177Lu-DOTATOC was administered. A maximum of 5 lesions was analysed for each patient, for a total of 154 lesions. The clinical trial was conducted before Lutathera was approved by the EMA and FDA.
All examinations were performed using a hybrid Symbia T2 SPECT/CT (Siemens Healthineers, Germany). The SPECT gamma camera was equipped with a medium-energy general purpose collimator (MEGP). The energy windows (EW) of 177Lu photopeaks were set at 113 keV ± 7.5% and 208.4 keV ± 7.5%. Images were acquired in step and shoot mode, with 32 × 2 views at 30 s per view. SPECT projections were reconstructed using an iterative algorithm with compensations for attenuation from CT images, scatter and full collimator-detector response in the Siemens E-Soft workstation (Syngo, MI Application version 32B, Siemens Medical Solution, Germany) with Flash 3D iterative algorithm (10 iterations; 8 subsets; Gaussian filter with 4.8 mm cut-off). As regards the scatter correction, the TEW (Triple Energy Window) correction was employed for the lower photopeak. The lower scatter window was set in the range from 87.58 to 104.53 keV (using a default window weight of 0.50), while the upper scatter window from 121.47 to 130.51 keV (using a default window weight of 0.94). With respect to the higher energy photopeak, the DEW (Double Energy Window) correction was employed and the lower scatter window ranged from 171.60 to 192.40 keV (using a default window weight of 0.75).
The FWHM of the system was measured by Grassi et al. [25] and the result was 10.4 ± 0.7 mm.
The imaging protocol consisted of four sequential SPECT/CT scans of the abdomen, typically at 1, 24, 44 and 72 h p.i. (post injection). When necessary, the thorax was also scanned at 1, 24 and 72 h p.i.
A total of 141 lesions in the abdomen and a total of 13 lesions in the thorax were analysed.
Dosimetry workflow
At the first cycle, a complete dosimetric evaluation of the selected tumours was performed based on SPECT/CT acquisitions. The SPECT/CT system had previously been calibrated using a cylindrical Jaszczak phantom (Data Spectrum Corporation, USA) filled with a homogeneous 177Lu radioactive solution. A calibration factor (CF = 36.5 cps/MBq) was determined as the ratio between the known activity and the measured total counts, following the procedure described by Grassi et al. [26]. A series of sequential acquisitions of the phantom was performed, and the standard uncertainty was derived from these repeated activity measurements.
Subsequent to each SPECT acquisition, a CT image was acquired for attenuation correction. For the radiation protection of patients, low-resolution CT scans were acquired (90 mAs for the first scan and 30 mAs for the following acquisitions) and no contrast medium was used. As a consequence, most of the lesions were not visible on the CT image. For that reason, all tumours were manually segmented on the SPECT image. Contouring was performed in the Velocity workstation (Varian Medical Systems, USA) using a variable threshold (as a percentage of the maximum value) defined by a nuclear medicine physician. To minimize misregistration errors, contours were outlined on the fused SPECT/CT image acquired at 24 h p.i., transferred automatically to all co-registered images and, when needed, manually translated by the user to adapt them to the lesion site.
Activities were corrected for partial volume effects using recovery coefficients (RCs) previously determined based on phantoms with spherical inserts [27]. RCs as a function of insert volume were fitted with the following exponential curve:
$$ RC\left(\upsilon \right)=\alpha \cdot \exp \left(-\beta \cdot \upsilon \right)+\gamma $$
where α, β and γ are the fitting parameters and v (volume) is the independent variable.
Two different exponential curves were used to fit the time-activity points:
$$ {f}_1(t)={A}_0\cdot \exp \left(-{\lambda}_1\cdot t\right) $$
$$ {f}_2(t)={A}_0\cdot \mathit{\exp}\left(-{\lambda}_1\cdot t\right)\left[1-\mathit{\exp}\left(-{\lambda}_2\cdot t\right)\right] $$
where A0, λ1 and λ2 are the fitting parameters and t (time) is the independent variable. Equation 2 was used when the data points decreased monotonically or when only three time-points were available; otherwise, Eq. 3 was used.
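As an illustration of this model-selection rule, the following Python sketch (not the authors' MATLAB implementation; the time-activity points shown are hypothetical) fits one lesion with either model using scipy:

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t, A0, lam1):
    # Eq. 2: simple exponential washout
    return A0 * np.exp(-lam1 * t)

def bi_exp(t, A0, lam1, lam2):
    # Eq. 3: uptake phase followed by exponential washout
    return A0 * np.exp(-lam1 * t) * (1.0 - np.exp(-lam2 * t))

# Hypothetical time-activity points for one lesion (h, MBq)
t_h = np.array([1.0, 24.0, 44.0, 72.0])
act_MBq = np.array([150.0, 180.0, 140.0, 95.0])

# Rule used in this study: mono-exponential if the points decrease
# monotonically or if only three time-points are available
use_mono = bool(np.all(np.diff(act_MBq) < 0)) or t_h.size == 3

if use_mono:
    popt, pcov = curve_fit(mono_exp, t_h, act_MBq, p0=[act_MBq[0], 0.01], maxfev=10000)
else:
    popt, pcov = curve_fit(bi_exp, t_h, act_MBq, p0=[act_MBq.max(), 0.01, 1.0], maxfev=10000)

print(popt)  # fitted parameters
print(pcov)  # covariance matrix, reused later for uncertainty propagation
```

The covariance matrix returned by the fit plays the role of the matrix V_p used below in the propagation of the time-integrated activity uncertainty.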
Time-integrated activities (TIAs) were calculated by solving the integral of the exponential functions, based on the fitting parameters.
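For the two models above, and assuming integration from time zero to infinity (the integration limits are not stated explicitly here), the closed-form time-integrated activities are:

$$ \tilde{A}_1=\int_0^{\infty }{f}_1(t)\,\mathrm{d}t=\frac{A_0}{\lambda_1},\kern2em \tilde{A}_2=\int_0^{\infty }{f}_2(t)\,\mathrm{d}t=\frac{A_0}{\lambda_1}-\frac{A_0}{\lambda_1+\lambda_2}=\frac{A_0{\lambda}_2}{{\lambda}_1\left({\lambda}_1+{\lambda}_2\right)} $$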
Tumour absorbed doses were calculated using the OLINDA1.1 sphere model. S-factors derived from OLINDA1.1 were fitted against mass using a power function, as shown in Fig. 1. In this study, the absorbed dose at the first therapy cycle and its relative uncertainty were calculated.
S-factors against mass for unit density spheres
Uncertainty analysis
This section briefly describes how the uncertainty associated with each parameter in the dosimetry workflow was calculated. Please refer to the published EANM guidelines [24] for further details.
Volume uncertainty u(v) was calculated using the analytical expression:
$$ {\left[\frac{u\left(\upsilon \right)}{\upsilon}\right]}^2={\left[3\frac{u(d)}{d}\right]}^2 $$
where d is the equivalent diameter of the outlined lesion, with uncertainty:
$$ {u}^2(d)=\frac{a^2}{6}+\frac{{\left(\mathrm{FWHM}\right)}^2}{4\ln 2} $$
where a is the voxel size and FWHM is the resolution of the imaging system.
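As a minimal numerical sketch of Eqs. 4 and 5 (the voxel size and lesion volume below are hypothetical, and the lesion is treated as a sphere of equivalent volume):

```python
import numpy as np

def fractional_volume_uncertainty(volume_ml, voxel_mm, fwhm_mm):
    """Fractional volume uncertainty u(v)/v from Eqs. 4 and 5."""
    # Equivalent diameter of a sphere with the same volume (mm)
    d = 2.0 * (3.0 * volume_ml * 1000.0 / (4.0 * np.pi)) ** (1.0 / 3.0)
    # Eq. 5: voxel width and system FWHM both contribute to u(d)
    u_d = np.sqrt(voxel_mm ** 2 / 6.0 + fwhm_mm ** 2 / (4.0 * np.log(2)))
    # Eq. 4: u(v)/v = 3 u(d)/d
    return 3.0 * u_d / d

# Hypothetical example: a 6.9 mL lesion, 4.8 mm voxels, FWHM = 10.4 mm
print(fractional_volume_uncertainty(6.9, 4.8, 10.4))
```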
Referring to Eq. 1 and assuming the vector b = (α, β, γ, v)T, the squared standard uncertainty associated with RC was calculated as:
$$ {u}^2\left(\mathrm{RC}\right)={\mathbf{g}}_{\boldsymbol{b}}^T{\boldsymbol{V}}_{\boldsymbol{b}}{\mathbf{g}}_{\boldsymbol{b}} $$
where gb is the vector containing the first-order partial derivatives of RC with respect to the elements of b (including the derivative of RC(v) with respect to v) and Vb is the covariance matrix of the fitting parameters extended by one row and column containing the variance of the volume.
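A possible implementation of Eq. 6 is sketched below; the phantom RC data are hypothetical and the fit covariance matrix comes from scipy's curve_fit:

```python
import numpy as np
from scipy.optimize import curve_fit

def rc_model(v, alpha, beta, gamma):
    # Eq. 1: recovery coefficient as a function of volume
    return alpha * np.exp(-beta * v) + gamma

# Hypothetical phantom data: sphere volumes (mL) and measured RCs
vols = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 30.0])
rcs = np.array([0.15, 0.25, 0.40, 0.55, 0.70, 0.82, 0.90])
popt, pcov = curve_fit(rc_model, vols, rcs, p0=[-0.9, 0.1, 0.95])

def rc_uncertainty(v, u_v, popt, pcov):
    """u(RC) from Eq. 6: g_b^T V_b g_b with b = (alpha, beta, gamma, v)."""
    alpha, beta, _ = popt
    g = np.array([
        np.exp(-beta * v),                   # d(RC)/d(alpha)
        -alpha * v * np.exp(-beta * v),      # d(RC)/d(beta)
        1.0,                                 # d(RC)/d(gamma)
        -alpha * beta * np.exp(-beta * v),   # d(RC)/d(v)
    ])
    V = np.zeros((4, 4))
    V[:3, :3] = pcov        # covariance of the fitting parameters
    V[3, 3] = u_v ** 2      # extended with the variance of the volume
    return np.sqrt(g @ V @ g)

# Hypothetical lesion: v = 6.9 mL with an absolute volume uncertainty of 5.7 mL
print(rc_uncertainty(6.9, 5.7, popt, pcov))
```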
Uncertainty associated with the number of counts (C) within the VOI was calculated by assuming a Gaussian profile of the counts with standard deviation σ and by propagating the volume, CF and RC uncertainties:
$$ u(C)=\frac{C}{2\cdot RC}\cdot \frac{u\left(\upsilon \right)}{\upsilon}\left[\operatorname{erf}\left(\frac{2r}{\sigma \sqrt{2}}\right)-\frac{2\sigma }{r\sqrt{2\pi }}\left(1-{e}^{-\frac{2{r}^2}{\sigma^2}}\right)\right] $$
where r is the equivalent radius, erf is the error function and \( \sigma =\frac{\mathrm{FWHM}}{2\sqrt{2\ln 2}} \).
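Equation 7 can be evaluated directly, for example (all numbers hypothetical):

```python
import numpy as np
from scipy.special import erf

def counts_uncertainty(C, RC, frac_u_v, r_mm, fwhm_mm):
    """u(C) from Eq. 7, assuming a Gaussian count profile of width sigma."""
    sigma = fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2)))
    bracket = (erf(2.0 * r_mm / (sigma * np.sqrt(2.0)))
               - (2.0 * sigma / (r_mm * np.sqrt(2.0 * np.pi)))
               * (1.0 - np.exp(-2.0 * r_mm ** 2 / sigma ** 2)))
    return C / (2.0 * RC) * frac_u_v * bracket

# Hypothetical lesion: 5e4 counts, RC = 0.6, u(v)/v = 0.8, r = 11.8 mm, FWHM = 10.4 mm
print(counts_uncertainty(5.0e4, 0.6, 0.8, 11.8, 10.4))
```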
Calibration factor uncertainty was determined by applying the LPU of measured activity (nominal accuracy of dose calibrator) and counts (standard deviation from multiple measurements).
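For a calibration factor defined as counts divided by activity, the LPU for independent inputs reduces to a sum in quadrature of the fractional uncertainties; a minimal sketch with hypothetical values:

```python
import numpy as np

def cf_uncertainty(cf, frac_u_activity, frac_u_counts):
    """u(CF) by the LPU, combining the dose-calibrator accuracy and the
    spread of the repeated phantom count measurements."""
    return cf * np.sqrt(frac_u_activity ** 2 + frac_u_counts ** 2)

# Hypothetical values: CF = 36.5 cps/MBq, 5% calibrator accuracy, 1% count spread
print(cf_uncertainty(36.5, 0.05, 0.01))
```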
Uncertainties associated with the measured activities were determined by error propagation of CF, RC and C (uncertainty of administered activity was assumed to be negligible).
Uncertainty associated with the fitting parameters (Eqs. 2 and 3) were assumed to be the main source of uncertainty of the time-activity curve fitting.
Uncertainty associated with the TIA (denoted as \( \overset{\sim }{A} \)) included both the uncertainty of the fitting parameters and the uncertainty of the activities:
$$ {u}^2\left(\tilde{A}\right)={\mathbf{g}}_{\boldsymbol{p}}^T{\boldsymbol{V}}_{\boldsymbol{p}}{\mathbf{g}}_{\boldsymbol{p}}+{\left[\frac{u(A)}{A}\tilde{A}\right]}^2 $$
where gp is the gradient vector of \( \overset{\sim }{A} \) with respect to the vector p containing the fitting parameters, Vp is the covariance matrix of the fitting parameters and \( \raisebox{1ex}{$u(A)$}\!\left/ \!\raisebox{-1ex}{$A$}\right. \) is the fractional standard uncertainty associated with the measured activities.
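For the mono-exponential model (Eq. 2), \( \overset{\sim }{A}=A_0/\lambda_1 \) and the gradient in Eq. 8 has a simple closed form; a hedged sketch with a hypothetical fit covariance matrix:

```python
import numpy as np

def tia_uncertainty_monoexp(A0, lam1, pcov, frac_u_activity):
    """u(TIA) from Eq. 8 for the mono-exponential model, where
    TIA = A0/lam1, pcov is the 2x2 covariance matrix of (A0, lam1)
    and frac_u_activity is the fractional uncertainty of the measured
    activities propagated from CF, RC and counts."""
    tia = A0 / lam1
    g = np.array([1.0 / lam1, -A0 / lam1 ** 2])  # d(TIA)/d(A0), d(TIA)/d(lam1)
    var_fit = g @ pcov @ g
    return np.sqrt(var_fit + (frac_u_activity * tia) ** 2)

# Hypothetical fit result: A0 = 160 MBq, lam1 = 0.012 1/h
pcov = np.array([[25.0, -1.0e-3],
                 [-1.0e-3, 1.0e-6]])
print(tia_uncertainty_monoexp(160.0, 0.012, pcov, 0.30))
```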
Uncertainty associated with S-factors, u(S), was derived by the propagation of the volume error (errors associated with the S-factors against volume fitting parameters were assumed to be negligible).
Absorbed dose uncertainty was determined by applying the LPU:
$$ {\left[\frac{u(AD)}{AD}\right]}^2={\left[\frac{u\left(\tilde{A}\right)}{\tilde{A}}\right]}^2+{\left[\frac{u(S)}{S}\right]}^2+2\frac{u\left(\tilde{A},S\right)}{\tilde{A}\cdot S} $$
where \( u\left(\overset{\sim }{A},S\right) \) is the covariance between \( \overset{\sim }{A} \) and S.
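Writing the covariance term of Eq. 9 through a correlation coefficient, the fractional absorbed dose uncertainty can be combined as in the following sketch (the input values are hypothetical):

```python
import numpy as np

def absorbed_dose_fractional_uncertainty(frac_u_tia, frac_u_S, corr_tia_S):
    """Fractional absorbed dose uncertainty from Eq. 9, with the
    covariance written as corr * u(TIA)/TIA * u(S)/S.  As noted in the
    discussion, covariance effects within the dosimetry chain reduce the
    overall uncertainty, which corresponds to a negative correlation here."""
    var = (frac_u_tia ** 2 + frac_u_S ** 2
           + 2.0 * corr_tia_S * frac_u_tia * frac_u_S)
    return np.sqrt(var)

# Hypothetical inputs: 35% TIA uncertainty, 60% S-factor uncertainty,
# strong negative correlation between the two
print(absorbed_dose_fractional_uncertainty(0.35, 0.60, -0.7))
```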
In addition to the parameters along the dosimetry workflow, the absorbed dose rate (AD rate) was calculated as the product between activity and S-factor. Absorbed dose rate uncertainty was determined using the following formula:
$$ {\left[\frac{u\left(\mathrm{AD}\;\mathrm{rate}\right)}{\mathrm{AD}\;\mathrm{rate}}\right]}^2={\left[\frac{u(A)}{A}\right]}^2+{\left[\frac{u(S)}{S}\right]}^2+2\frac{u\left(A,S\right)}{A\cdot S} $$
where u(A, S) is the covariance between A and S.
All absorbed dose calculations and statistical analyses were performed in MATLAB R2019a (The MathWorks Inc., USA). A MATLAB script was developed and used to automatically calculate the uncertainty associated with each parameter within the absorbed dose calculation chain.
Box-plots were used to visualize the distribution of the standard uncertainty of each variable included in this analysis. Associations between each variable and the absorbed dose uncertainty were assessed qualitatively by graphical inspection.
As discussed in the EANM guidance, the uncertainty in the absorbed dose is expected to depend largely on the precision with which the lesion volume can be estimated. A curve of absorbed dose uncertainty (AD uncertainty) against lesion volume (v) was determined by least-squares fitting. The power function of Eq. 11 was used to fit the empirical data points:
$$ \mathrm{AD}\;\mathrm{uncertainty}\kern0.5em \left(\upsilon \right)=A\cdot {\upsilon}^B $$
where A and B are the fitting parameters and v is the independent variable.
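A hedged sketch of this fit, together with the inversion that gives a volume cut-off for a chosen acceptable uncertainty (the data pairs below are hypothetical, not the values of Table 1):

```python
import numpy as np
from scipy.optimize import curve_fit

def power_model(v, A, B):
    # Eq. 11: absorbed dose uncertainty as a power function of volume
    return A * v ** B

# Hypothetical (volume [mL], fractional AD uncertainty [%]) pairs
vols = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])
ad_unc = np.array([95.0, 75.0, 60.0, 48.0, 38.0, 30.0])
(A_fit, B_fit), _ = curve_fit(power_model, vols, ad_unc, p0=[100.0, -0.3])

# Quick clinical estimate: volume above which the absorbed dose
# uncertainty falls below a chosen threshold (e.g. 40%)
threshold = 40.0
v_cutoff = (threshold / A_fit) ** (1.0 / B_fit)
print(A_fit, B_fit, v_cutoff)
```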
Absorbed dose rate uncertainty against tumour volume was also plotted on a graph in order to assess the relationship between these quantities.
In this study, we further evaluated the expected improvement in absorbed dose uncertainty attainable with potential improvement in the accuracy of the volume estimation. This was achieved by repeating all uncertainty calculations assuming a range of system spatial resolutions, such as those typical of 68Ga PET/CT and CT imaging.
Forty-nine patients (22 males, 27 females, median age 62 years, range 36–79 years) were treated with PRRT. Among the 154 lesions analysed, 100 were situated within the liver (64.9%), 8 in the pancreas (5.2%), 5 in the lung (3.2%), 18 were bone lesions (11.7%), 18 were lymph nodes (11.7%) and 5 were in other locations (3.2%).
The median value of the contoured volumes on SPECT images was 6.9 mL and the interquartile range was 4.7–17.2 mL.
The uncertainty in absorbed dose was relatively high, with a mean of 65% and a median of 73%. A wide range of uncertainty values was observed (14–102%). Figure 2 shows the distribution of the relative uncertainty for each parameter calculated along the dosimetry chain. The highest relative uncertainties were found to be associated with the volume and the S-factor. It is worth noting that the S-factor uncertainty is strictly dependent on the volume uncertainty, which can be considered a primary source of error. The absorbed dose uncertainty was plotted against the uncertainty associated with each of the parameters along the dosimetry workflow, as shown in Fig. 3. Different patterns were obtained for each variable, demonstrating the complex influence that each quantity has on the estimate of the absorbed dose. A clear relationship was observed between the absorbed dose uncertainty and the volume uncertainty, providing evidence that the major factors affecting the accuracy of the absorbed dose calculation originate from the volume delineation.
Distribution of uncertainty (%) for each step of the absorbed dose calculation schema
Relationship between absorbed dose uncertainty (y-axis) and volume, RC, counts, CF, activity, curve fitting parameters, time-integrated activity and S-factors uncertainties (x-axis). The graph at the bottom right shows the absorbed dose (Gy) against the absorbed dose uncertainty
The absorbed dose uncertainties against volume were fitted using the power function of Eq. 11. The fit coefficients are shown in Table 1, while the fit curve is shown in Fig. 4.
Table 1 Power curve best-fit parameters of absorbed dose uncertainty against volume
Absorbed dose uncertainty (%) against volume (mL). Points were fitted with a power function. R2 and RMSE are reported in the graph
The average absorbed dose rate relative uncertainty was 58%, with a median value of 69% and values ranging between 11% and 87%. Absorbed dose rate uncertainty against volume is shown in Fig. 5.
Absorbed dose rate uncertainty (%) against volume (mL)
In order to assess whether the number of data-points affects the accuracy of the time-activity curve (TAC) fitting, patients with four time-points and patients with three time-points were evaluated separately. A total of 141 four-point datasets and 13 three-point datasets were analysed, with an average relative uncertainty of the TAC fitting of 12% for the former and 16% for the latter.
The effect of different values of spatial resolution was investigated by hypothetically changing the value of FWHM given in Eq. 5. Figure 6 shows the relative absorbed dose uncertainty re-calculated for all the lesions, assuming three different values (0.5, 5 and 10 mm) of FWHM (note: the actual FWHM of the acquisition system was 10.4 mm). These values were chosen to represent the typical spatial resolution of CT, PET and SPECT acquisition systems, respectively. In Fig. 7, four lesions with very different volumes were considered and the absorbed dose uncertainty was estimated for a range of values of the system spatial resolution. Absorbed dose uncertainty and volume uncertainty were plotted on the same axis against volume in Fig. 8 to further assess the relationship between these variables.
On the left, absorbed dose uncertainty (%) against volume (mL) calculated for all the lesions, postulating imaging systems with FWHM equal to 0.5, 5 and 10 mm (representative of CT, PET and SPECT systems, respectively). On the right, distributions of absorbed dose uncertainty (%) for each value of the FWHM
Absorbed dose uncertainty (%) as a function of the imaging system spatial resolution (FWHM in mm) in four lesions. Lesions were chosen to fill a range of different values of volume
Absorbed dose uncertainty (black points) and the volume uncertainty (blue line) as a function of the delineated VOI volume
Lack of knowledge of absorbed dose calculation uncertainties has been a factor that has impeded widespread uptake of dosimetry in MRT.
The EANM guidelines provide a schema of uncertainty propagation to evaluate the standard uncertainty in the absorbed dose to a target. This schema is based on the recommendations described within the GUM [28] and necessarily involves the formation of covariance matrices for several steps of the dosimetry process. In this work, we have applied the EANM guidelines to evaluate the uncertainty of tumour dosimetry calculations in PRRT. To the best of our knowledge, this is the first study to carry out an uncertainty analysis of the entire dosimetry calculation process on a large sample of clinical cases. A total of 154 lesions were analysed.
As shown in Fig. 2, the fractional uncertainty associated with the considered quantities (volume, CF, S-factor, etc.) was widely spread around the median value, indicating high inter-lesion variability. Volume and S-factor were the quantities with the highest associated uncertainties. It is worth noting that the S-factor uncertainty mainly originates from the volume uncertainty and, in that sense, is not a primary source of uncertainty. These results confirm that the accuracy of the absorbed dose and absorbed dose rate is dominated by the accuracy of the VOI delineation. For example, when contouring a volume, the uncertainty in edge definition due to the limited spatial resolution, together with the voxel width, leads to errors in the assessment of the volume.
The uncertainty associated with the volume is then propagated to many of the other parameters (RC, counts, activity, fitting, TIA, S-factor and absorbed dose). The relationship between fractional absorbed dose uncertainty and tumour volume is evident in Fig. 4. The analytical power model fitted the empirical data points well and, in clinical practice, could provide a quick estimate of the uncertainty without implementing the entire error propagation schema, for example to select the lesions to be monitored for patient outcome assessment.
Figure 8 shows the uncertainty in absorbed dose (black points) and the uncertainty in volume (blue line) on the same axis. This graph shows the "weight" of the uncertainty associated with the volume segmentation and with the other parameters on the accuracy of the dosimetric calculation. From a practical point of view, Fig. 8 shows that the uncertainty pertaining to a smaller lesion is mainly due to the volume delineation. For larger lesions, the impact of volume contouring is less significant and other parameters, such as random effects affecting the confidence of the fit parameters for the TAC, begin to dominate. As a result, data-points are increasingly scattered around the empirical function as the volume increases. It is also interesting to note that the fractional uncertainty in absorbed dose is lower than the volume uncertainty, as covariance effects within the dosimetry chain reduce the overall uncertainty in absorbed dose. The random component of the fitting parameters does not contribute to the absorbed dose rate uncertainty. This results in smaller fluctuations of the data-points around the average value, as is evident by comparing Fig. 5 with Fig. 4. The accuracy of the time-activity curve fitting depends on the number of data-points, the scan times and the model function employed. The optimal scan times for dosimetry in PRRT are yet to be determined. Sandstrom et al. [29] proposed the use of a late time-point at 7 days, the EANM dosimetry committee [30] suggested using at least three time-points, while Del Prete et al. [31] and Hänscheid et al. [32] proposed simplified dosimetry protocols based on two time-points and one time-point, respectively. It is evident that the greater the number of time-points, the more the uncertainty will be reduced. However, an evaluation in terms of costs and benefits should be performed to determine the optimal solution. The average 177Lu-DOTATATE tumour effective half-life has been reported to be between 77 h and 110 h [31,32,33]. As a consequence, tumour uptake at 70 h p.i. (the last acquisition time-point in our clinical protocol) is about 60% of the maximum value. Extrapolation of the fitted curve beyond the last acquisition point inherently leads to errors in the calculation of the absorbed dose, since the lesion-specific effective half-life may be misestimated. The effective half-life depends on tumour biology through the biological half-life, which may differ between lesions, even within the same patient. However, restrictions on the timing and number of scans are necessary for patient benefit, and extrapolation of the fitted curve to time intervals for which no data exist cannot be avoided. Since no specific information about the biological retention of each lesion is available in practice, the uncertainty of the time-integrated activity is estimated from how well the analytical function fits the data. In this study, 141 tumours with four time-points (1, 24, 40 and 70 h p.i.) and 13 tumours with three time-points (1, 24 and 70 h p.i.) were analysed. Our data suggest that the use of four time-points reduces the uncertainty of the TAC fitting by four percentage points compared with three time-points (12% versus 16%). Moreover, it should be noted that it is undesirable to fit three data-points with a three-parameter bi-exponential curve such as the one in Eq. 3.
From a purely mathematical point of view, this results in an unreliable model with no test of goodness of fit and therefore no possibility of checking the parameters. However, with an early acquisition time-point (1 h p.i.), we often saw evidence of the initial uptake phase before the time-activity curve started to decrease in a mono-exponential washout. Hence, using a mono-exponential curve to model the time-activity decay may not be the optimal choice. The potential impact on the accuracy of the absorbed dose needs to be investigated; however, this goes beyond the aim of this study. In this work, time-activity points were fitted using either mono- or bi-exponential curves. The optimal fit function should be chosen for each case based on the number and distribution of the available data-points, possibly by using model selection criteria, as discussed by Kletting et al. [34].
These results may give the user an indication of the typical uncertainty to be expected when performing dosimetry. Assuming an acceptable absorbed dose uncertainty of 40% as a reference, which is the typical absorbed dose uncertainty for clinical cases as reported by Grassi et al., the corresponding cut-off tumour volume is around 33 mL. Consequently, it can be concluded that absorbed doses to lesions with volumes smaller than 33 mL cannot be determined with a sufficient level of confidence to make the result meaningful. However, it should be noted that these values depend on the spatial resolution of the imaging system and on the method used to contour the VOI. In this study, the VOIs were manually contoured on the SPECT images.
Figures 6 and 7 show the impact of spatial resolution on the final uncertainty, evaluated by postulating a different FWHM for each of the imaging systems. These results demonstrate that the uncertainty would be significantly reduced by increasing the spatial resolution. This effect would be particularly significant in the case of small volumes. Hence, a minimal acceptable volume cut-off should be set, depending on the spatial resolution of the system available at the site. A standard gamma camera, combined with an iterative reconstruction algorithm that includes attenuation, scatter and collimator-detector response compensation, provides images with a spatial resolution of around 1 cm for 177Lu, as reported in [25].
In this study, 128 lesions (out of 154) had a volume smaller than 33 mL. All 154 lesions were considered of clinical importance in the trial and were used in the treatment planning. It should be noted that a tumour volume cut-off was not introduced for this analysis (consequently, lesions with very small volumes were also analysed) in order to provide results over the whole clinical range of volumes.
However, the exclusion of tumours below 33 mL is undesirable, as they may be of clinical importance; rather, improvement of the spatial resolution and of the VOI delineation is desirable. The accuracy of VOI delineation may be improved by using an appropriate acquisition/reconstruction protocol (accounting for acquisition statistics, matrix, collimator type and reconstruction settings) to obtain images with the highest possible spatial resolution. Lesions may be delineated on contrast-enhanced CT or 68Ga-PET where feasible, which are characterized by a better spatial resolution than SPECT imaging. Contouring on images with a spatial resolution of 5 mm (typical of PET images or new-generation SPECT systems) would give a cut-off tumour volume of 4 mL (considering an absorbed dose uncertainty equal to 40%). Almost all the lesions had an absorbed dose uncertainty smaller than 40% when a spatial resolution of 0.5 mm (typical of CT images) was assumed. In that case, an absorbed dose uncertainty cut-off lower than 40% may be set in order to increase the significance of the absorbed dose calculations; for example, a cut-off volume of 4 mL would correspond to an absorbed dose uncertainty of around 20%. However, the possibility of using CT in place of SPECT or PET remains to be evaluated, possibly by combining the morphological and functional information. It is worth noting that the performance of imaging systems is improving rapidly and that new-generation cameras provide images with better spatial resolution. Images with a spatial resolution of 5 mm are nowadays within the reach of the most advanced SPECT/CT systems, and even better resolutions may be reached with PET/CT systems. The uncertainty of the volume evaluation might be further reduced by averaging VOIs delineated by different operators; however, this approach may be difficult to apply in clinical practice.
This study has some limitations because some sources of uncertainty were not included in the analysis. VOIs were outlined using a standard threshold when possible; however, in some cases the threshold was adapted by the physician in order to adequately contour the tumour volume in relation to the tumour uptake and the activity of the surrounding tissues. For that reason, the uncertainty of the volume determination is operator-dependent, but that component was not taken into account in this study. Errors due to image misregistration were also not included. Misalignment of the VOIs with the tumours was assumed to be negligible, as each VOI was visually checked and manually adjusted when needed. Activities were corrected for the partial volume effect using pre-calculated RCs based on phantom measurements. This method makes some approximations: it assumes that lesions have a spherical shape and that counts do not spill in from the surrounding tissues. These approximations affect the accuracy of the partial volume effect correction; however, they were not considered in this study. Following the MIRD schema, it was assumed that the tumour tissue was homogeneous, that the tumours had spherical shapes and that the target volumes were the source activity volumes (i.e. the contribution to the absorbed dose from the surrounding organs was not considered). There are uncertainties associated with deviations between these assumptions and reality, but they are outside the scope of this framework.
In conclusion, this study provided the first analysis of the uncertainties of tumour absorbed dose calculations on a sample of clinical cases treated with PRRT. Assessment of the uncertainties indicates the degree of consistency of the data and makes it possible to weight results adequately in treatment planning. For that reason, it is strongly recommended to include an analysis of uncertainty for any measured or calculated parameter in clinical routine. However, such analyses are rarely performed in MRT. The application of uncertainty analysis in clinical practice may help clinicians to select tumours for treatment response evaluation and may help to identify the parameters that most affect the accuracy of the calculation. Such an analysis may increase the validity of dosimetry and, in turn, encourage physicians to use dosimetry in treatment planning. In the research field, it may facilitate the determination of dose-response relationships and allow results to be compared among different clinical sites. This study showed volume delineation to be one of the parameters that most affect the accuracy of absorbed dose calculations, and it is most likely the easiest aspect to improve in clinical practice. Based on these results, using PET or CT imaging or new-generation SPECT systems would reduce the uncertainty by 50% to 70% compared with SPECT images acquired with older scanners. The ability to improve the accuracy of absorbed dose calculations might be crucial to optimizing treatment efficacy in internal radionuclide therapy.
CF:
Calibration factor
EANM:
European Association of Nuclear Medicine
EW:
Energy window
FWHM:
Full width at half maximum
LPU:
Law of propagation of uncertainty
MRT:
Molecular radiotherapy
PRRT:
Peptide receptor radionuclide therapy
RC:
Recovery coefficient
SPECT:
Single-photon emission computed tomography
TAC:
Time-activity curve
TIA:
Time-integrated activity
VOI:
Volume-of-interest
Kwekkeboom DJ, Mueller-Brand J, Paganelli G, Anthony LB, Pauwels S, Kvols LK, et al. Overview of results of peptide receptor radionuclide therapy with 3 radiolabeled somatostatin analogs. J Nucl Med. 2005;46(Suppl 1):62S–6S.
Bodei L, Cremonesi M, Grana CM, Fazio N, Iodice S, Baio SM, et al. Peptide receptor radionuclide therapy with 177Lu-DOTATATE: the IEO phase I-II study. Eur J Nucl Med Mol Imaging. 2011;38(12):2125–35.
Paganelli G, Sansovini M, Ambrosetti A, Severi S, Monti M, Scarpi E, et al. 177 Lu-Dota-octreotate radionuclide therapy of advanced gastrointestinal neuroendocrine tumors: results from a phase II study. Eur J Nucl Med Mol Imaging. 2014;41(10):1845–51.
Strosberg J, El-Haddad G, Wolin E, Hendifar A, Yao J, Chasen B, et al. Phase 3 trial of 177Lu-dotatate for midgut neuroendocrine tumors. N Engl J Med. 2017;376(2):125–35.
Barone R, Borson-Chazot F, Valkema R, Walrand S, Chauvin F, Gogou L, et al. Patient-specific dosimetry in predicting renal toxicity with (90)Y-DOTATOC: relevance of kidney volume and dose rate in finding a dose-effect relationship. J Nucl Med. 2005;46(Suppl 1):99S–106S.
Sundlöv A, Sjögreen-Gleisner K, Svensson J, Ljungberg M, Olsson T, Bernhardt P, et al. Individualised 177Lu-DOTATATE treatment of neuroendocrine tumours based on kidney dosimetry. Eur J Nucl Med Mol Imaging. 2017;44(9):1480–9.
Eberlein U, Cremonesi M, Lassmann M. Individualized dosimetry for theranostics: necessary, nice to have, or counterproductive? J Nucl Med. 2017;58(Suppl 2):97S–103S.
Marin G, Vanderlinden B, Karfis I, Guiot T, Wimana Z, Reynaert N, et al. A dosimetry procedure for organs-at-risk in 177Lu peptide receptor radionuclide therapy of patients with neuroendocrine tumours. Phys Med. 2018;56:41–9.
Cremonesi M, Ferrari M, Bodei L, Tosi G, Paganelli G. Dosimetry in peptide radionuclide receptor therapy: a review. J Nucl Med. 2006;47(9):1467–75.
Ilan E, Sandström M, Wassberg C, Sundin A, Garske-Román U, Eriksson B, et al. Dose response of pancreatic neuroendocrine tumors treated with peptide receptor radionuclide therapy using 177Lu-DOTATATE. J Nucl Med. 2015;56(2):177–82.
Rudisile S, Gosewisch A, Wenter V, Unterrainer M, Böning G, Gildehaus FJ, et al. Salvage PRRT with 177Lu-DOTA-octreotate in extensively pretreated patients with metastatic neuroendocrine tumor (NET): dosimetry, toxicity, efficacy, and survival. BMC Cancer. 2019;19(1):788.
Flux GD, Sjogreen Gleisner K, Chiesa C, Lassmann M, Chouin N, Gear J, et al. From fixed activities to personalized treatments in radionuclide therapy: lost in translation? Eur J Nucl Med Mol Imaging. 2018;45(1):152–4.
Wehrmann C, Senftleben S, Zachert C, Müller D, Baum RP, et al. Results of individual patient dosimetry in peptide receptor radionuclide therapy with 177Lu DOTA-TATE and 177Lu DOTA-NOC. Cancer Biother Radiopharm. 2007;22(3):406–16.
Gupta SK, Singla S, Thakral P, Bal CS. Dosimetric analyses of kidneys, liver, spleen, pituitary gland, and neuroendocrine tumors of patients treated with 177Lu-DOTATATE. Clin Nucl Med. 2013;38(3):188–94.
Dewaraja YK, Wilderman SJ, Ljungberg M, Koral KF, Zasadny K, Kaminiski MS. Accurate dosimetry in 131I radionuclide therapy using patient-specific, 3-dimensional methods for SPECT reconstruction and absorbed dose calculation. J Nucl Med. 2005;46(5):840–9.
Flower MA, McCready VR. Radionuclide therapy dose calculations: what accuracy can be achieved? Eur J Nucl Med. 1997;24(12):1462–4.
Gear JI, Charles-Edwards E, Partridge M, Flux GD. A quality-control method for SPECT-based dosimetry in targeted radionuclide therapy. Cancer Biother Radiopharm. 2007;22(1):166–74.
Gustafsson J, Brolin G, Cox M, Ljungberg M, Johansson L, Gleisner KS. Uncertainty propagation for SPECT/CT-based renal dosimetry in (177)Lu peptide receptor radionuclide therapy. Phys Med Biol. 2015;60(21):8329–46.
D'Arienzo M, Cazzato M, Cozzella ML. Gamma camera calibration and validation for quantitative SPECT imaging with (177)Lu. Appl Radiat Isot. 2016;112:156–164.
Frey EC, Humm JL, Ljungberg M. Accuracy and precision of radioactivity quantification in nuclear medicine images. Semin Nucl Med. 2012;42(3):208–18.
Marin G, Vanderlinden B, Karfis I, Guiot T, Wimana Z, Flamen P, et al. Accuracy and precision assessment for activity quantification in individualized dosimetry of 177Lu-DOTATATE therapy. EJNMMI Phys. 2017;4:7.
Uribe CF, Esquinas PL, Tanguay J, Gonzalez M, Gaudin E, Beauregard JM, et al. Accuracy of 177Lu activity quantification in SPECT imaging: a phantom study. EJNMMI Phys. 2017;4(1):2.
Stabin MB. Uncertainties in internal dose calculations for radiopharmaceuticals. J Nucl Med. 2008;49(5):853–60.
Gear JI, Cox MG, Gustafsson J, Gleisner KS, Murray I, Glatting G, et al. EANM practical guidance on uncertainty analysis for molecular radiotherapy absorbed dose calculations. Eur J Nucl Med Mol Imaging. 2018;45(13):2456–74.
Grassi E, Fioroni F, Mezzenga M, Finocchiaro D, Sarti MA, Filice A, et al. Impact of a commercial 3D OSEM reconstruction algorithm on the 177Lu activity quantification of SPECT/CT imaging in a molecular radiotherapy trial. Radiol Diagnostic Imaging. 2017;1(1):1–7.
Grassi E, Fioroni F, Ferri V, Mezzenga E, Sarti MA, Paulus T, et al. Quantitative comparison between the commercial software STRATOS(®) by Philips and a homemade software for voxel-dosimetry in radiopeptide therapy. Phys Med. 2015;31(1):72–9.
Finocchiaro D, Berenato S, Grassi E, Bertolini V, Castellani G, Lanconelli N, et al. Partial volume effect of SPECT images in PRRT with 177Lu labelled somatostatine analogues: a practical solution. Phys Med. 2019;57:153–9.
Joint Committee for Guides in Metrology. JCGM 102. Evaluation of measurement data - guide to the expression of uncertainty in measurement. Sèvres: BIPM; 2011.
Sandstrom M, Ilan E, Karlberg A, Johansson S, Freedman N, Garske-Roman U. Method dependence, observer variability and kidney volumes in radiation dosimetry of 177Lu-DOTATATE therapy in patients with neuroendocrine tumours. EJNMMI Phys. 2015;2(1):24.
Lassmann M, Chiesa C, Flux G, Bardies M, Committee ED. EANM dosimetry committee guidance document: good practice of clinical dosimetry reporting. Eur J Nucl Med Mol Imaging. 2011;38(1):192–200.
Del Prete M, Arsenault F, Saighi N, Zhao W, Buteau FA, Celler A, et al. Accuracy and reproducibility of simplified QSPECT dosimetry for personalized 177Lu-octreotate PRRT. EJNMMI Phys. 2018;5:25.
Hänscheid H, Lapa C, Buck AK, Lassmann M, Werner RA. Dose mapping after endoradiotherapy with 177Lu-DOTATATE/DOTATOC by a single measurement after 4 days. J Nucl Med. 2018;59:75–81.
Freedman N, Sandström M, Kuten J, Shtraus N, Ospovat I, Schlocker A, et al. Personalized radiation dosimetry for PRRT-how many scans are really required? EJNMMI Phys. 2020;7(1):26.
Kletting P, Kull T, Reske SN, Glatting G. Comparing time activity curves using the Akaike information criterion. Phys Med Biol. 2009;54(21):N501–7.
This work was supported by the European Metrology Programme For Innovation And Research (EMPIR) joint research project 15HLT06 "Metrology for clinical implementation of dosimetry in molecular radiotherapy" (MRTDosimetry) which has received funding from the European Union. The EMPIR initiative is co-funded by the European Union's Horizon 2020 research and innovation programme and the EMPIR Participating States.
Medical Physics Unit, Azienda Unità Sanitaria Locale di Reggio Emilia - IRCCS, Reggio Emilia, Italy
Domenico Finocchiaro, Federica Fioroni, Mauro Iori & Elisa Grassi
Department of Physics and Astronomy, University of Bologna, Bologna, Italy
Domenico Finocchiaro & Gastone Castellani
The Royal Marsden NHS Foundation Trust & Institute of Cancer Research, Downs Road, Sutton, SM2 5PT, UK
Jonathan I. Gear, Glenn D. Flux & Iain Murray
Nuclear Medicine Unit, Azienda Unità Sanitaria Locale di Reggio Emilia - IRCCS, Reggio Emilia, Italy
Annibale Versari
Domenico Finocchiaro
Jonathan I. Gear
Federica Fioroni
Glenn D. Flux
Gastone Castellani
Mauro Iori
Elisa Grassi
DF, FF and EG conceived and designed the study; DF, JG and GF analysed and interpreted the data; DF drafted the manuscript; JG, FF, GF and EG reviewed the manuscript. The authors read and approved the manuscript and consented to its publication.
Correspondence to Federica Fioroni.
This study involves human participants. All participants were enrolled in a clinical trial (EUDRACT 2015-005546-63) at Azienda USL-IRCCS of Reggio Emilia (Italy). The study was approved by the ethics committee of Azienda USL-IRCCS of Reggio Emilia (Italy), and each patient gave written informed consent for the study conduction.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Finocchiaro, D., Gear, J.I., Fioroni, F. et al. Uncertainty analysis of tumour absorbed dose calculations in molecular radiotherapy. EJNMMI Phys 7, 63 (2020). https://doi.org/10.1186/s40658-020-00328-5
Radionuclide therapy
PRRT
July 2011, 16(1): 333-344. doi: 10.3934/dcdsb.2011.16.333
On a quasilinear hyperbolic system in blood flow modeling
Tong Li 1, and Kun Zhao 2,
Department of Mathematics, University of Iowa, Iowa City, IA 52242, United States
Mathematical Biosciences Institute, Ohio State University, Columbus, OH 43210, United States
Received April 2010 Revised September 2010 Published April 2011
This paper aims at an initial-boundary value problem on bounded domains for a one-dimensional quasilinear hyperbolic model of blood flow with viscous damping. It is shown that, for given smooth initial data close to a constant equilibrium state, there exists a unique global smooth solution to the model. Time asymptotically, it is shown that the solution converges to the constant equilibrium state exponentially fast as time goes to infinity due to viscous damping and boundary effects.
Keywords: hyperbolic balance laws, blood flow, initial-boundary value problem, classical solution, global existence, long-time behavior.
Mathematics Subject Classification: Primary: 35L50, 35L65; Secondary: 92C3.
Citation: Tong Li, Kun Zhao. On a quasilinear hyperbolic system in blood flow modeling. Discrete & Continuous Dynamical Systems - B, 2011, 16 (1) : 333-344. doi: 10.3934/dcdsb.2011.16.333
TMF, 1971, Volume 9, Number 1, Pages 3–43 (Mi tmf4429)
Restrictions on the behavior of elastic and inelastic cross sections at high energies. I
A. A. Logunov, M. A. Mestvirishvili, O. A. Khrustalev
Abstract: Properties of the cross-sections of elastic and inelastic processes that follow from the general principles of quantum field theory are reviewed. By means of unitarity and the Dyson representation, the analyticity region in the $\cos\theta$-plane is established and a lower bound on the decrease of the forward scattering amplitude is found.
Theoretical and Mathematical Physics, 1971, 9:1, 939–967
Citation: A. A. Logunov, M. A. Mestvirishvili, O. A. Khrustalev, "Restrictions on the behavior of elastic and inelastic cross sections at high energies. I", TMF, 9:1 (1971), 3–43; Theoret. and Math. Phys., 9:1 (1971), 939–967
Cycle of papers
TMF, 1971, 9:1, 3–43
Restrictions on the behavior of elastic and inelastic cross sections at high energies. II
TMF, 1971, 9:2, 153–189
N. N. Bogolyubov, V. S. Vladimirov, A. N. Tavkhelidze, "On scaling asymptotics in quantum field theory. I", Theoret. and Math. Phys., 12:1 (1972), 619–629
N. E. Tyurin, O. A. Khrustalev, "Continuation of the scattering amplitude into the crossed channel in the case of increasing total cross sections", Theoret. and Math. Phys., 24:3 (1975), 837–845
A. A. Logunov, M. A. Mestvirishvili, G. L. Rcheulishvili, A. P. Samokhin, "Does the $u$ channel in the $t$ plane influence the behavior of the forward differential scattering cross section at high energies?", Theoret. and Math. Phys., 28:2 (1976), 691–698
G. L. Rcheulishvili, A. P. Samokhin, "Influence of singularities of the amplitude on the behavior of the differential scattering cross section", Theoret. and Math. Phys., 30:3 (1977), 205–212
A. A. Logunov, M. A. Mestvirishvili, G. L. Rcheulishvili, A. P. Samokhin, "Can the weak interaction become strong ?", Theoret. and Math. Phys., 36:2 (1978), 653–661
M. A. Mestvirishvili, G. L. Rcheulishvili, A. P. Samokhin, "Influence of singularities in the $\cos\theta$ plane on the behavior of inclusive spectra at high energies", Theoret. and Math. Phys., 41:1 (1979), 848–853
A. A. Logunov, M. A. Mestvirishvili, G. L. Rcheulishvili, A. P. Samokhin, "Singularities of the amplitude in the $\cos\theta$; plane and scattering at high energies", Theoret. and Math. Phys., 39:2 (1979), 377–389
A. A. Logunov, M. A. Mestvirishvili, G. L. Rcheulishvili, A. P. Samokhin, "Contribution from far singularities in the $\cos\theta$ plane to the scattering amplitude and to the distribution function of inclusive processes", Theoret. and Math. Phys., 43:3 (1980), 469–480
A. A. Arkhipov, V. I. Savrin, "Unitarity bound for three-particle scattering amplitude", Theoret. and Math. Phys., 49:1 (1981), 847–862
A. A. Arkhipov, "Unitarity bounds for multiparticle interaction amplitudes", Theoret. and Math. Phys., 53:2 (1982), 1067–1084
Confusion about Lorenz Gauge assumption in derivation of Liénard Wiechert Potentials/Fields
I have been going through Griffiths' "Introduction to Electrodynamics", 3rd Edition, chapter 10 on potentials and fields, and I am a little confused about the derivation of the Liénard-Wiechert potentials, equations 10.39 and 10.40:
$V(\textbf{r},t)=\frac{1}{4\pi\epsilon_0}\frac{qc}{(rc-\textbf{r}\cdot\textbf{v})}$
$\textbf{A}(\textbf{r},t)=\frac{\textbf{v}}{c^2}V(\textbf{r},t)$
If my understanding proves correct, these equations are derived as such:
Assume static, time-independent fields, which lead to our familiar Poisson equation for the two potentials, $V(\textbf{r},t)$ and $\textbf{A}(\textbf{r},t)$, $\textbf{without}$ any gauge assumptions on the potentials; no choice of gauge was made.
Extract the integral, static form of $V(\textbf{r},t)$ and $\textbf{A}(\textbf{r},t)$ (equation 10.17).
Argue that charge and current densities are to be evaluated at the retarded time due to the finite speed of light and show, in doing so, that the Lorenz gauge (10.12) will be satisfied along with its subsequent d'Alembertian-form inhomogeneous wave equations (equations 10.16), despite making no decision in gauge.
Argue the geometrical doppler like effect for the charge and current densities and evaluate the integral forms of $V(\textbf{r},t)$ and $\textbf{A}(\textbf{r},t)$ (equation 10.17) to reach the final results of equations 10.39 and 10.40.
My confusion is that the Lorenz gauge did not seem to play a role in the above arguments, except maybe for point 3. But, the fact that point 3 was satisfied seemed like sheer coincidence and more so like the result of the solid argument that the information needs time to travel and be received; hence evaluation of the potentials at the retarded time.
So, are we forced to remain in the Lorenz gauge if we wish to use equations 10.39 and 10.40 for $V(\textbf{r},t)$ and $\textbf{A}(\textbf{r},t)$?
Was the fact that we must evaluate the potentials at the retarded time somehow automatically convoluted/correlated/ingrained with the Lorenz gauge?
maxwell-equations lienard-wiechert
Donkey Kong
(a) Equations (10.39) and (10.40) come from equations (10.19). (b) Equations (10.19) are "The natural generalization of equations (10.17) for nonstatic sources". (c) Equations (10.17) are in turn the static case of equations (10.16). (d) Equations (10.16) come from equations (10.4) and (10.5) under the condition of the Lorentz gauge, equation (10.12) $$ \boldsymbol{\nabla}\boldsymbol{\cdot}\mathbf{A}=-\mu_{o}\epsilon_{o}\dfrac{\partial V}{\partial t} \tag{10.12} $$ So equations (10.39) and (10.40) are valid under the condition of the Lorentz gauge. – Frobenius Jan 8 '17 at 10:34
That's why the author comments two paragraphs under equations (10.19) about them: "To prove them, I must show that they satisfy the inhomogeneous wave equation (10.16) and meet the Lorentz condition (10.12)." – Frobenius Jan 8 '17 at 10:34
From the OP's reference textbook(1)
(a) Equations (10.39) and (10.40) $$ \bbox[yellow] { V\left(\mathbf{r},t\right)=\dfrac{1}{4\pi \epsilon_{0}}\dfrac{qc}{\left(\mathfrak{r}c-{ \widehat{\boldsymbol{\mathfrak{r}}} }\boldsymbol{\cdot}\mathbf{v}\right)} } \tag{10.39} $$ $$ \bbox[yellow] { \mathbf{A}\left(\mathbf{r},t\right)=\dfrac{\mu_{0}}{4\pi}\dfrac{qc\mathbf{v}}{\left(\mathfrak{r}c-{ \widehat{\boldsymbol{\mathfrak{r}}} }\boldsymbol{\cdot}\mathbf{v}\right)}=\dfrac{\mathbf{v}}{c^{2}}V\left(\mathbf{r},t\right) } \tag{10.40} $$ come from equations (10.19).
(b) Equations (10.19) $$ \boxed { V\left(\mathbf{r},t\right)=\dfrac{1}{4\pi\epsilon_{0}}\int\dfrac{\rho\left(\mathbf{r'},t_{r}\right)}{\mathfrak{r}}\mathrm{d}\tau'\,, \quad \mathbf{A}\left(\mathbf{r},t\right)=\dfrac{\mu_{0}}{4\pi}\int\dfrac{\mathbf{J}\left(\mathbf{r'},t_{r}\right)}{\mathfrak{r}}\mathrm{d}\tau' } \tag{10.19} $$ are "The natural generalization of equations (10.17) for nonstatic sources".(1)
(c) Equations (10.17) $$ V\left(\mathbf{r}\right)=\dfrac{1}{4\pi\epsilon_{0}}\int\dfrac{\rho\left(\mathbf{r'}\right)}{\mathfrak{r}}\mathrm{d}\tau'\,, \quad \mathbf{A}\left(\mathbf{r}\right)=\dfrac{\mu_{0}}{4\pi}\int\dfrac{\mathbf{J}\left(\mathbf{r'}\right)}{\mathfrak{r}}\mathrm{d}\tau' \tag{10.17} $$ are in turn the static case of equations (10.16).
(d) Equations (10.16) \begin{align} (i)\quad \Box^{2}V&=\boldsymbol{-}\dfrac{1}{\epsilon_{0}}\rho\, ,\\ \tag{10.16}\\ (ii)\quad \Box^{2}\mathbf{A}&=\boldsymbol{-}\mu_{0}\mathbf{J}\, . \end{align} come from equations (10.4) and (10.5) $$ \boldsymbol{\nabla}^{2}V+\dfrac{\partial}{\partial t}\left(\boldsymbol{\nabla}\boldsymbol{\cdot} \boldsymbol{A} \right) =\boldsymbol{-}\dfrac{1}{\epsilon_{0}}\rho \tag{10.4} $$ $$ \biggl(\boldsymbol{\nabla}^{2} \mathbf{A}-\mu_{0}\epsilon_{0}\dfrac{\partial^{2} \mathbf{A}}{\partial t^{2}}\biggr)-\boldsymbol{\nabla}\biggl(\boldsymbol{\nabla}\boldsymbol{\cdot} \mathbf{A}+\mu_{0}\epsilon_{0}\dfrac{\partial V}{\partial t}\biggr) =\boldsymbol{-}\mu_{0} \mathbf{J}\, . \tag{10.5} $$ under the condition of the Loren(t)z gauge, equation (10.12) $$ \boldsymbol{\nabla}\boldsymbol{\cdot}\mathbf{A}=-\mu_{0}\epsilon_{0}\dfrac{\partial V}{\partial t} \tag{10.12} $$
As this backwards procedure shows, equations (10.39) and (10.40) are derived under the assumption of the Loren(t)z gauge. On the other hand, the author comments two paragraphs under equations (10.19) about these last equations(1): "To prove them, I must show that they satisfy the inhomogeneous wave equation (10.16) and meet the Lorentz condition (10.12)."
(1) David J. Griffiths "Introduction to Electrodynamics " 3rd Edition 1999.
Frobenius
$\begingroup$ Thanks for the answer! But I am still a little fuzzy. To assume/argue equations 10.19 (the potentials evaluated at the retarded time), must we be in the Lorenz gauge? In any other gauge is this assumption incorrect? $\endgroup$ – Donkey Kong Jan 8 '17 at 20:43
$\begingroup$ @Donkey Kong : I think that (10.19) are not valid generally for other gauges. For other gauge we'll have different "(10.19)-type" equations and different solutions for the potentials $\:V,\mathbf{A}\:$, but the field vectors $\:\mathbf{E},\mathbf{B}\:$ would be the same. An example is with the Coulomb gauge $\:\boldsymbol{\nabla}\boldsymbol{\cdot} \mathbf{A}=0\:$. With this gauge the first of equations (10.19) for the scalar potential is the same, see equation (10.10) in Griffiths, but for the vector potential the equation is very complicated and difficult to solve, see equation (10.11). $\endgroup$ – Frobenius Jan 8 '17 at 21:20
$\begingroup$ @Donkey Kong : I suggest to read the paragraphs "10.1.2 Gauge Transformations" and "10.1.3 Coulomb Gauge and Lorentz* Gauge" in Griffiths. $\endgroup$ – Frobenius Jan 8 '17 at 21:21
Research | Open | Published: 08 November 2016
Determining the spatial heterogeneity underlying racial and ethnic differences in timely mammography screening
Joseph Gibbons1 (ORCID: orcid.org/0000-0003-2470-9068) &
Melody K. Schiaffino2
International Journal of Health Geographics, volume 15, Article number: 39 (2016)
The leading cause of cancer death for women worldwide continues to be breast cancer. Early detection through timely mammography has been recognized to increase the probability of survival. While mammography rates have risen for many women in recent years, disparities in screening along racial/ethnic lines persist across nations. In this paper, we argue that the role of local context, as identified through spatial heterogeneity, is an unexplored dynamic which explains some of the gaps in mammography utilization by race/ethnicity.
We apply geographically weighted regression methods to responses from the 2008 Public Health Management Corporation Southeastern Pennsylvania Household Health Survey to examine the spatial heterogeneity in mammograms in the Philadelphia metropolitan area.
We find first aspatially that minority identity, in fact, increases the odds of a timely mammogram: 74% for non-Hispanic Blacks and 80% for Hispanic/Latinas. However, the geographically weighted regression confirms the relation of race/ethnicity to mammograms varies by space. Notably, the coefficients for Hispanic/Latinas are only significant in portions of the region. In other words, the increased odds of a timely mammography we found are not constant spatially. Other key variables that are known to influence timely screening, such as the source of healthcare and social capital, measured as community connection, also vary by space.
These results have ramifications globally, demonstrating that the influence of individual characteristics which motivate, or inhibit, cancer screening may not be constant across space. This inconsistency calls for healthcare practitioners and outreach services to be mindful of the local context in their planning and resource allocation efforts.
Breast cancer persists as a leading cause of cancer death in women worldwide [1]. Early detection of breast cancer, defined as timely or guideline-concordant screening mammography and diagnosis, contributes to survivorship. Specifically, stage 0–1 detection results in nearly 100% 5-year survival, while stage IV detection has only a 22% survival rate according to a recent American Cancer Society estimate [2, 3]. Screening rates have risen among women, in particular for women in countries where screening was not previously available [3, 4]. However, disparities along the continuum of breast cancer persist among underserved women, a situation that reflects the experiences of underserved women everywhere. In particular, non-Hispanic Black (henceforth, Black) women in the U.S. experience later stage diagnosis at much higher rates compared to White women [3, 4]. In addition, Hispanic/Latina women continue to experience lower rates of timely screening mammography than both White and Black women, as well as late-stage diagnoses at rates comparable to Black women [4]. Recently, screening recommendations have also varied: technological advances in screening modalities and changes in recommended ages for screening initiation, contingent on genetics, familial history, and other nuanced risk factors, have led to a flattening of disparities [2, 5]. However, largely overlooked in this discussion is the role of spatial variation, or heterogeneity, in local screening rates. We argue that the spatial heterogeneity of mammograms by race/ethnicity helps to explain the disparities in rates overall, underlining the subtle role of local context in cancer screening.
While differences in outcomes across socially and racially/ethnically diverse populations are known, the role of local variation in breast cancer screening behaviors among underserved minority populations is not as well understood. Studies of geographic access to mammograms demonstrate disparities in screening rates by race/ethnicity, but often stop short of examining other contextual influences [6–9]. More subtle social, cultural, and other local factors are also found to shape timely cancer screenings [10–13]. For this study we highlight community connection, group membership, and perceived medical discrimination, as these factors are associated with health behaviors among minorities and vary at the local level [14–17], thus contributing to the risk of disease for minorities in a community [18, 19].
Community connection has been framed through a number of different terms, including social capital [14, 20] and collective efficacy [17]. It is derived from several measures including interpersonal trust with neighbors, a feeling of belongingness to the place, and the sense that residents share mutual interests [21]. Strong community connection within a group may facilitate leverage for treatment and survival by promoting timely screening. The protective effects of local ties can assist the spread of local health information such as where health services can be accessed, securing assistance in transportation to services, or the encouragement from peers to seek them out [17, 22, 23]. Membership in local community groups, ranging from churches to local nonprofits, provides another avenue to encourage service usage as it often puts members in contact with others outside of their proximate friend and family circle [23–25]. Group membership can be a facilitator to mobilize individuals toward healthy behaviors effectively [26, 27]. Put simply, the ability of friends or one's pastor to inform and encourage one to seek out services like mammograms is more viable when these exchanges take place in a local day to day setting, such as a neighborhood.
The impact of community connection and group membership on health outcomes is noted to vary between racial/ethnic groups [14, 20, 26]. For example, community connection has been found to have a stronger positive effect on health outcomes among Latino populations, ceteris paribus, compared to the health outcomes of Black populations [28]. Sampson shows, in his study of collective efficacy, that the strength of community connection and group membership is not equal across space, being deeply stratified by local disparities such as racial segregation [17]. How these factors matter locally for mammograms among minority women is unclear. Dean and colleagues found that local social capital influenced Black women's mammography utilization and postulated a link with collective efficacy, but they could not establish the direction of the relationship [14].
Another factor related to local context which may influence the use of health services by underserved minorities is discrimination from medical practitioners, or medical discrimination. Medical discrimination as a barrier to health outcomes was widely described in the IOM report Unequal Treatment, one of the first empirical reports on the validated effect of medical discrimination on health outcomes [29]. Evidence regarding continued medical discrimination in health services experienced by women suggests this is a persistent issue that remains unaddressed [30, 31]. Jacobs et al. [32] found that medical discrimination related inversely to receiving screening mammography; they also found that Black women reported health services discrimination more often than other groups.
While medical discrimination is a form of institutional racial discrimination, thus taking place within the larger context of the health service system, there is evidence that the perception of this discrimination for minority patients is not consistent across space. Studies on businesses and nonprofits, for example, have both found the institutional environment of professional settings is subject to local context [33, 34]. What's more, Hunt et al. [15] found through a health survey on minority women that the perception of discrimination was lower in segregated communities. This evidence suggests medical discrimination may not be homogenous across communities and may be subject to spatial heterogeneity that warrants further study.
Examining the influence of local context on timely mammography requires an estimation strategy which accounts for granular variations of effects within a place. To this end, geographically weighted regression (GWR) is a novel way to examine the spatial heterogeneity in rates of timely mammography by race/ethnicity. Past studies have shown that GWR is an effective way to not only document local variations in health outcomes, [35] but also service usage [36]. While multi-level modeling strategies, commonly used in urban health research [37–40], can examine the interrelation between individual and neighborhood characteristics, they are limited in that they treat local effects as stationary and mutually independent across neighborhoods [41]. Multi-level strategies overlook the underlying spatial structures that would influence timely mammography rates within and between neighborhood boundaries.
To our knowledge, this is the first study to use GWR to understand the role of spatial heterogeneity in exploring within-race and within-social-category variation in the utilization of timely screening mammography. The expected contributions of these findings relate to the potential of GWR as a tool for healthcare professionals to better understand nuance within places and to improve patient- and community-centered responses to the need for timely mammography that may not always be easily answered by broader designations. Further, our results suggest other factors such as social and spatial determinants also need to be considered or re-configured. The objectives of this analysis are to assess the spatial factors associated with timely mammography utilization in a cohort of women. With GWR, we can compare the local variation in our predictors' localized population parameters at the census tract level. Through this comparison, we can begin to contextualize the spatial relationship of population factors to timely mammography among women in the study sample, isolating potential neighborhood impacts on the local spatial structure.
Timely mammography theoretical and conceptual foundations
Variation in the utilization of timely mammography outcomes is multi-dimensional and complex. The Andersen Model of Health Services Utilization is a valuable model that takes into account this complexity and offers a framework that allows us to adapt, conceptualize, and study these dimensions for our present analysis [42]. Broadly, Andersen describes multi-level factors that operationalize the complexity of access to care and utilization as a product of how multiple social, contextual, and perceived factors can influence our utilization or lack thereof. As Fig. 1 shows, these factors are operationalized as predisposing factors, background characteristics which shape a person's inclination to seek out healthcare and are not mutable. For example, African-Americans are less likely to find care due to historical systemic racism within the health system [43, 44]. Next, enabling factors include those which facilitate, or hinder if absent, one's efforts to find healthcare. For example, lacking insurance makes it nearly impossible for one to obtain timely and affordable healthcare. Finally, need factors reflect ailments a person might be experiencing that would require healthcare in the first place; these are subject to a perceived and evaluated need that can be influenced by discrimination when they do visit a doctor. The Andersen Model has been adapted successfully in multiple cases, and part of the strength of this model is its adaptability to study context and outcomes related to health utilization behavior [45]. With the present analysis, we adapted the Andersen Model to include local context as it relates to spatial heterogeneity across all levels of need from predisposing to outcome.
Revised Andersen behavioral model
To address the potential effect of local context on cancer screening disparities, our study explores the spatial heterogeneity in factors associated with timely mammography within racial and ethnic minority populations. To this end, we propose the following hypotheses:
Utilization of timely screening mammography by Latina/Hispanic women will be negatively associated versus non-Hispanic women, and will not vary significantly between Black women versus White women.
Utilization of timely mammography will vary significantly by geography among Black women.
Utilization of timely mammography will vary significantly by geography among Hispanic women.
Utilization of timely screening mammography will be positively associated with community connection.
Utilization of timely screening mammography will vary significantly by geography among respondents reporting community connection.
To empirically examine our hypotheses, we used a sample of female respondents from the 2008 Public Health Management Corporation (PHMC) Southeastern Pennsylvania Household Survey (N = 3261), with geocodes to link to the 2005–2009 American Community Survey (ACS) geographic dataset consisting of approximately N = 998 census tracts. The goal of the PHMC survey is to collect information on individuals' health status, behaviors, attitudes, and access to healthcare in the five counties of the Philadelphia metropolitan area [46]. PHMC respondents used in this study are those eligible to receive guideline-concordant recommendations appropriate for the data collection time period of 2008, specifically women aged 40 and over [47]. While the U.S. Preventive Services Task Force has since suggested a reduced marginal benefit for part of this age range in population-based screening mammography [48], for the purposes of the present analysis we included the population appropriate to the time period. On the reliability and validity of the PHMC surveys, a recent study [21] reported that several health and socioeconomic indicators (e.g., obesity rate and poverty) drawn from PHMC data were comparable with those estimated by the Centers for Disease Control and Prevention.
The dependent variable is the self-reported use of timely breast cancer screening mammography. Participants were asked if they had received a screening mammography within the guideline-concordant time frame recommended by their medical practitioner for the time period in which data were collected. Following common practice, we dichotomized the answers into no (coded 0, reference group) and yes (coded 1). Our predictors were determined based on Andersen's Behavioral Model [42, 49], including Predisposing, Enabling, and Need factors which would compel one to seek out medical services like a mammography. Starting with Predisposing factors, our focal predictors are race/ethnicity; the PHMC classified respondents into non-Hispanic White (reference group, hence just White), Black, Hispanic/Latina, and non-Hispanic other minorities. Three race/ethnicity dummy variables were included in the analysis. Other predisposing covariates include age, poverty, marital status, employment status, and educational attainment. Respondents reported their ages in years, and we treated age as a continuous variable. In keeping with the screening guidelines circa 2008 [8, 48], we restrict our sample to ages 40 and above. Marital status was categorized into four groups: single (reference group), married or living with a partner, widowed/divorced/separated (WDS), and another marital status. Gender was not included as a predictor given the survey's focus on female breast cancer screening.
Turning to Enabling factors, we add as focal variables community connection and medical discrimination given their strong association with race. First, we include a measure of Medical discrimination; the respondents were asked if they have ever experienced discrimination when getting medical care because of their race, ethnicity, or color. Those who perceived medical discrimination were coded 1, otherwise 0. Next, we include a measure of community connection, a composite score based on the principal components analysis (PCA) of respondents' answers to the following three questions: (1) Willingness, "Would you say that most people in your neighborhood are always, often, sometimes, rarely, or never willing to help their neighbors?" From always to never, we coded from 5 to 1. (2) Belonging, "Do you strongly agree, agree, disagree, or strongly disagree that you feel that you belong and are part of your neighborhood?" We coded the answers with a four-level Likert-type scale where 4 means strongly agree, and 1 indicates strongly disagree. (3) Trust, "Do you strongly agree, agree, disagree, or strongly disagree with the statement that most people in your neighborhood can be trusted?" The coding scheme is also a four-level Likert-type scale (4 = strongly agree, and 1 = strongly disagree). The PCA results suggested that one factor is sufficient to capture over 60% of the variance among these three questions. We used the regression method to obtain the factor score as our composite measure of community connection (with a mean of 0 and a standard deviation of 1). A higher score indicates stronger community connection.
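As an illustration of this composite-score construction, the sketch below builds a one-factor PCA score from three Likert items. The column names and the use of scikit-learn are assumptions for illustration; the paper's regression-method factor scores would differ slightly from plain component scores.

```python
# Sketch of the community-connection composite described above.
# Assumed column names: "willing" (1-5), "belong" (1-4), "trust" (1-4).
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def community_connection_score(df: pd.DataFrame) -> pd.Series:
    """One-factor PCA score (mean 0, SD 1); higher = stronger community connection."""
    z = StandardScaler().fit_transform(df[["willing", "belong", "trust"]])
    pca = PCA(n_components=1)
    score = pca.fit_transform(z)[:, 0]
    # pca.explained_variance_ratio_[0] should exceed 0.6, as reported in the text.
    if pca.components_[0].sum() < 0:      # make the factor point toward "more connection"
        score = -score
    score = (score - score.mean()) / score.std()
    return pd.Series(score, index=df.index, name="community_connection")
```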
Also, we include enabling factors more commonly found in behavioral models [49], consisting of whether one lived under the federal poverty line as a measure of the financial situation, coding 1 as in poverty and 0 otherwise. For employment status, we classified those with full-time employment status as employed. Next, we include a measure of insurance status; a respondent was coded 1 when she reported that she had health insurance, otherwise 0. Next, we included variables for Source of Health Care, where an individual goes to get medical services, as a way to understand healthcare access. We categorized the answers into four groups: private doctor's office, community health center or public clinic, outpatient clinic, and other places (e.g., hospital emergency room). To test our hypotheses, the "other places" category was used as the reference group, and three dummy variables were considered in the analysis. We also include a measure of Local group participation, the total number of local groups that a respondent participates in, such as social, political, religious, school-related, and athletic groups. Finally, we include a measure of residence in the city and county of Philadelphia, City.
Finally, for factors of Need, we use a measure of self-rated health. The respondents were asked to evaluate their health as poor, fair, good, very good, or excellent. Their answers were further dichotomized into poor/fair (coded 1) and good/very good/excellent (coded 0), which is a conventional practice. While it is common in GWR studies using administrative units like census tracts to utilize the geographic centroids of the unit as a proxy for the individual level, this approach has been criticized for underestimating the spatial variation across the research area [41]. To address this issue, and following the precedent established by previous studies, we used ArcGIS to generate coordinates for each respondent that fall at random within their respective census tract [41, 50]. To ensure the reliability of this approach, multiple coordinates were generated for each observation and sensitivity analyses were conducted (results available on request). This approach of spatial randomization has been found to be a useful method to preserve spatial variation [41].
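A sketch of this randomization step is below; geopandas and shapely stand in for the ArcGIS step the authors describe, and the file and column names are hypothetical placeholders.

```python
# Sketch: place each respondent at a uniformly random point inside her census tract,
# instead of at the tract centroid. "tracts.shp", "GEOID", and "tract_id" are
# hypothetical placeholders.
import numpy as np
import geopandas as gpd
from shapely.geometry import Point

def random_point_in(polygon, rng):
    """Rejection-sample a uniform random point inside a (multi)polygon."""
    minx, miny, maxx, maxy = polygon.bounds
    while True:
        p = Point(rng.uniform(minx, maxx), rng.uniform(miny, maxy))
        if polygon.contains(p):
            return p

def randomize_locations(respondents, tracts, tract_col="tract_id", seed=2008):
    """Return a GeoDataFrame of respondents with randomized within-tract coordinates."""
    rng = np.random.default_rng(seed)
    points = [random_point_in(tracts.loc[t, "geometry"], rng)
              for t in respondents[tract_col]]
    out = respondents.copy()
    out["geometry"] = gpd.GeoSeries(points, index=respondents.index, crs=tracts.crs)
    return gpd.GeoDataFrame(out, geometry="geometry", crs=tracts.crs)

# Example usage (hypothetical files):
# tracts = gpd.read_file("tracts.shp").set_index("GEOID")
# respondents = randomize_locations(survey_df, tracts)
```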
Analytic methods and strategy
To explore the spatial variation between timely mammograms and other covariates across the Philadelphia metropolitan area, we employed logistic GWR to handle the binary dependent variable [51]. As we randomly created the coordinates for each individual, the model below can be applied to our data:
$$\log \left( {\frac{{y_{i} }}{{1 - y_{i} }}} \right) = \beta_{0i} \left( {u_{i} ,v_{i} } \right) + \mathop \sum \limits_{n = 1}^{k} \beta_{ni} \left( {u_{i} ,v_{i} } \right)*x_{ni}$$
where $y_i$ is the probability of reporting a timely mammogram for individual $i$, $(u_i, v_i)$ denotes the coordinates of individual $i$, $x_{ni}$ represents the explanatory variables ($n = 1, \ldots, k$) discussed above for individual $i$, and $\beta_{ni}$ represents the estimated association of variable $n$ with mammograms for individual $i$. We used the software program developed by Fotheringham et al. [51] to implement the analysis. The estimation method is iteratively reweighted least squares and the kernel density function is the bi-square weighting function, which is a commonly used weighting scheme [51]. When the data points are dense in a study area, the choice of kernel density function may not affect the results greatly.
One advantage of GWR is that it is an extension of generalized regression models, and thus the interpretations of regression coefficients remain unchanged [52–54]. Explicitly, the regression coefficient of a specific variable at a specific location, $(u_i, v_i)$, in the model above indicates the change in the log-odds of having a timely mammogram given a one-unit change in this variable. Similar to conventional logistic regression, exponentiating the coefficient yields the odds ratio associated with this variable at a particular location. As the model above generates results for each individual in our data, it is ineffective to show all local estimates. Following previous studies [41, 53, 55], we reported the estimates of the conventional logistic results, presented the five-number summary (i.e., minimum, three quartiles, and maximum) of local estimates, and visualized the GWR results with thematic maps using a recently developed method [50]. The corrected Akaike Information Criterion (AIC) was used to assess whether the logistic GWR fits the data better than the conventional logistic model [51]. As a rule of thumb, when the difference in AICs between two models is larger than 4, the model with the smaller AIC is strongly preferred [56].
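To make the local-fit idea tangible, the sketch below estimates one local logistic regression with bi-square kernel weights around a focal point and converts the local coefficients into odds ratios. It is a simplified stand-in, using statsmodels on simulated data, for the GWR4 software the authors actually used; bandwidth selection and the sweep over all locations are omitted.

```python
# Sketch of a single local fit in logistic GWR: weight observations by a bi-square
# kernel around a focal location, fit a weighted logistic regression, and report
# local odds ratios and t-values. Simplified stand-in for GWR4; data are simulated.
import numpy as np
import statsmodels.api as sm

def bisquare_weights(coords, focal, bandwidth):
    """w_i = (1 - (d_i/b)^2)^2 for d_i < b, else 0."""
    d = np.linalg.norm(coords - focal, axis=1)
    return np.where(d < bandwidth, (1.0 - (d / bandwidth) ** 2) ** 2, 0.0)

def local_logit(y, X, coords, focal, bandwidth):
    """Weighted logistic fit centred on `focal`; returns local odds ratios and t-values."""
    w = bisquare_weights(coords, focal, bandwidth)
    keep = w > 0                                   # drop observations outside the kernel
    Xc = sm.add_constant(X[keep])
    fit = sm.GLM(y[keep], Xc, family=sm.families.Binomial(), var_weights=w[keep]).fit()
    return np.exp(fit.params), fit.params / fit.bse

# Toy usage on simulated data:
rng = np.random.default_rng(0)
n = 500
coords = rng.uniform(0, 10, size=(n, 2))
X = rng.normal(size=(n, 2))
p = 1 / (1 + np.exp(-(0.2 + 0.5 * X[:, 0] - 0.3 * X[:, 1])))
y = rng.binomial(1, p)
odds_ratios, t_values = local_logit(y, X, coords, focal=np.array([5.0, 5.0]), bandwidth=3.0)
print(odds_ratios)   # exp(local coefficients): local odds ratios at the focal point
```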
Aspatial results
Table 1 presents the descriptive statistics for this study. Overall, 74.03% of the PHMC respondents received timely screening mammographies. As for racial composition, the 2008 PHMC sample was 70.84% White, 22.85% Black, 3.93% Hispanic/Latina, and roughly 2.39% non-Hispanic other minority groups. These figures closely matched those reported by the 2005–2009 ACS. Most of those surveyed, 95.95%, had some insurance. Only 6.10% reported experiencing medical discrimination. As for healthcare access, most respondents went to a private practice for regular care, 88.87%, compared to a community health center, which amounted to only 5.24% of those surveyed. Regarding other individual characteristics, 6.69% of the interviewees did not complete high school, while 40.08% of the individuals had a college degree or greater. As for group membership, most respondents reported membership in at least one group. Community connection is not reported in Table 1 as it is a mean-centered variable.
Table 1 Descriptive statistics
Table 2 presents the global, or conventional, logistic findings. The results for predisposing factors are somewhat surprising, given the previous literature. Both Black and Hispanic/Latina women reported greater odds of getting timely screening mammograms. Being Black increases the odds of a timely mammogram by 74% (1.738 − 1 = 0.738; p < 0.01), while being Hispanic/Latina increases the odds by 80% (p < 0.05). The other predisposing factors are more in line with the past literature. A college education (or greater) and being married both increase the likelihood one will get a mammogram. Turning to enabling factors, employment and having insurance both increase the odds one will get a timely mammogram. Also, where one goes for healthcare consistently has an important role in screening. Based on our findings, any regular source of care other than places like a hospital emergency room will increase the odds of a timely mammogram. Access to a community health center appears to matter the most in encouraging a mammogram. Meanwhile, experiencing medical discrimination was inversely related to reporting receipt of a timely mammogram, though this association was not significant (AOR 0.784). What is more, community connection was not significant in the global models, and membership in groups only had a marginally significant effect. Turning lastly to need, women with poor/fair self-rated health reported 30% lower odds of receiving a mammogram in a timely manner (p < 0.01).
Table 2 Global logistic regression results of breast cancer screening (1 = yes; 0 = no)
GWR results
GWR logistic regression generated a set of coefficient estimates for each individual, which makes it difficult, if not impossible, to present all results. Following Fotheringham et al. [51], we reported the five-number summary in Table 3 and visualized the GWR findings into thematic maps. The goal of this table is to present the spatial range in magnitude of the variable coefficients. Local statistical significance for select GWR coefficients is mapped out in Figs. 2 and 3. While several methods have been proposed to examine spatial heterogeneity of significance and coefficients [57, 58], these methods are not applicable to the logistic GWR model and visualization remains an appropriate way to explore this spatial heterogeneity.
Table 3 Five-number summary of the GWR logistic regression results; bandwidth 3000
GWR of breast cancer screening and race in the Philadelphia metropolitan area
Breast cancer screening and community connection in Philadelphia metropolitan area
On the question of whether the GWR logistic model fit our data better than the global logistic model, we compared the corrected AICs in Tables 2 and 3. Because the GWR AIC is smaller than the global AIC by more than 4, the GWR provides a superior fit for our predictors. As Table 3 shows, the GWR estimates range quite dramatically, suggesting that the relationships between our independent variables and receipt of timely mammograms may depend on where an individual resides. This offers support to the importance of the geographically weighted results over the global results. Starting with our focal predisposing predictors, the maximum size of the coefficient for being Hispanic/Latina is nearly 4 times as large as its minimum, suggesting substantial variation in how being Hispanic/Latina impacts timely mammograms. The coefficients for Blacks also increase, albeit not as dramatically. These results mean that the impact of race on mammograms is not consistent across the region. Turning to our focal enabling variables, community connection, group membership, and medical discrimination also vary, although most notably there are some local coefficients for which community connection relates negatively to mammograms.
To better contextualize our GWR estimates, we make use of a series of maps of the region to unpack the local spatial relations for Black and Hispanic/Latina coefficients, presented in Fig. 2. To help with the easy interpretation, we first created the spatially smoothed local estimates and local t-values with the GWR results. We then overlaid local estimates with t-values in the geographic information systems and showed the local estimates with a t-value that is greater than 1.96 (p value <0.05). That is, the colored areas were estimated to have statistically significant associations of covariates with receipt of timely mammograms. We used the red–orange gradient scheme to show different magnitudes of the local estimates, red signifying strong effects and orange indicating weak associations. Second, in a separate set of maps we then overlaid the areas with insignificant coefficients (with t-values between −1.96 and 1.96) on top of census tract data displaying ACS estimates. While one should proceed with caution in interpreting these visuals without multi-level models, given the risk of ecological fallacy, they do provide some indication of the context as to why the significant coefficients are located where they are.
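A sketch of this masking-and-mapping step is below, using geopandas and matplotlib rather than the GIS workflow the authors describe; the shapefile and the column names (coef_hispanic, t_hispanic) are hypothetical.

```python
# Sketch: map a local GWR coefficient only where its local |t| exceeds 1.96,
# using a red-orange gradient for significant areas and grey elsewhere.
# The shapefile and the columns "coef_hispanic" / "t_hispanic" are hypothetical.
import geopandas as gpd
import matplotlib.pyplot as plt

tracts = gpd.read_file("philadelphia_tracts_with_gwr.shp")
significant = tracts[tracts["t_hispanic"].abs() > 1.96]

fig, ax = plt.subplots(figsize=(8, 8))
tracts.plot(ax=ax, color="lightgrey", edgecolor="white")        # insignificant areas
significant.plot(ax=ax, column="coef_hispanic", cmap="OrRd",    # significant areas
                 legend=True)
ax.set_axis_off()
ax.set_title("Local coefficient: Hispanic/Latina (shown where |t| > 1.96)")
fig.savefig("gwr_hispanic_coefficient.png", dpi=300)
```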
The localized coefficients for Hispanic/Latinas present an interesting finding. These results show that the higher odds of receiving timely mammography among Hispanic/Latinas are only significant in roughly half of the region, especially in suburban Bucks County, not across all respondents in that ethnic category as the global regression results in Table 2 suggest. This is unexpected, as this area has only a few large Hispanic populations, suggesting that 'being Hispanic/Latina' matters for reasons other than being in a mostly Hispanic area. The coefficients for Black women, in contrast, are significant across the region, growing in strength as one moves east. The lowest coefficients are generally found in Delaware County. It is not clear, based on where the mostly Black populations are found, why this variation exists, as all counties have areas with large Black populations, although Philadelphia has the strongest concentrations. One possible explanation for why Delaware County has the lowest coefficients is that it is the only county in the region that does not directly share a border with Philadelphia or its inner-ring suburban communities, and thereby is the furthest from the largest Black populations.
Turning to our enabling variables of community connection and medical discrimination, we find spatial patterns of interest. First, Fig. 3 reveals that the coefficients for community connection were significant in select parts of the region, encompassing most of the city of Philadelphia and its immediately surrounding areas. This is notable as community connection was not significant in the global model. Comparing this map to the ACS data in Fig. 2 shows that the significant coefficients appear to co-occur with the areas where the highest concentrations of Black and Hispanic populations are found. These results do not mean that community connection is absent in other areas of the region, but our findings do suggest that the significant relationship between community connection and women seeking out mammograms is confined spatially to the area presented in the figure.
Broadly, our results report greater odds of timely screening mammography among racial and ethnic minority populations in this well-insured cohort study sample. However, our primary study purpose, the study of spatial heterogeneity, illustrates a salient point. Geographically weighted regression results support our hypotheses that spatial heterogeneity exists in timely mammograms among Black and Hispanic/Latina women as they compare to White women, and what appear to be greater odds of timely mammography for a whole racial/ethnic group may in fact be spatially limited. In addition, we found that other predisposing and enabling factors like community connection also vary substantially over space. This presents an important innovation to our understanding of health service provision, demonstrating the overlooked role local context carries when considering Andersen's Behavioral Model of utilization. While racial/ethnic groups are typically considered homogenous, our findings show that unaccounted variation across space and place exists within these groups, even when accounting for standard controls like socio-economic and demographic variables. This also illustrates that social factors persist even among the insured, as we saw that health status persisted as a barrier to timely care.
While GWR is an exploratory tool, comparing the GWR maps to one another, as well as to the neighborhood census tract data, reveals patterns allowing informed speculation as to the role race/ethnicity has in mammography. First, significant Hispanic/Latina coefficients are mainly found in the suburban counties of Bucks and Montgomery. One's Hispanic/Latina identity thus appears to matter in encouraging mammographies in these suburban areas. This may reflect recent patterns of immigration in the United States, as Hispanic migrants have increasingly dispersed into suburban and rural 'new destinations' as opposed to concentrating in cities [59]. Future research should investigate screening practices for suburban Hispanic/Latinas to understand this trend better. Second, while the coefficients for Black respondents are significant and positive across the region, a close analysis of the other GWR results suggests a more localized dynamic is taking place. Community connection's effect in encouraging mammograms is localized to a mostly Black and Hispanic area. This could be a reflection of a phenomenon known as 'ethnic density.' Ethnic density is a process identified in several countries wherein minorities residing in mostly minority communities, such as racially segregated places, gain protective health effects from the close connections and reduced discrimination enjoyed in these places [38, 60, 61]. Indeed, it would support Dean et al.'s [14] theory that Black women are more likely to pursue mammograms in their local context based on the presence of local community connection.
There are a number of possible considerations for the high overall mammography utilization rates for minority women, including income and insurance status. Indeed, insurance was one of the most salient predictors in our models, which is not surprising given our highly insured study population. On the other hand, high overall mammography utilization among minorities could be a reflection of the high number of community health centers in the city of Philadelphia. Indeed, our results show these centers had the strongest predictive power on mammograms. Laiteerapong et al. [62] suggested that Black women visit community health centers like Federally Qualified Health Centers (FQHCs) at rates greater than White or Hispanic/Latina women and that mammograms are also more likely to occur among FQHC attendees, suggesting a positive effect of FQHC utilization. These results could also be affected by the disproportionate representation in the PHMC of respondents with high levels of socio-economic status, or by other unmodeled factors unique to Philadelphia. Future research should seek to replicate this analysis in other regions to determine the singularity of our study. While the exact spatial character of race/ethnicity's relation to screening is likely to vary based on location, the bottom line is that the impact of local context on mammography matters differently for racial/ethnic groups across space, a finding likely to be applicable globally.
Timely mammography screening is the first step in understanding and acting to mitigate the devastating impact of breast cancer. There is substantial literature supporting the need for better access to timely screening and care, yet we lack an understanding of the localized racial/ethnic, cultural, and economic factors that allow these barriers to persist. It is not sufficient to aspatially examine the predisposing and enabling factors that facilitate or bar access to timely screening mammograms among racial/ethnic minorities. Indeed, as our results show, the impact of one's race/ethnicity on pursuing mammograms, as well as of other intervening variables, changes from one area to another. Thus, efforts to ensure equitable screening rates among groups must investigate potential local variations in their instances, seeking to determine why these disparities exist and, when necessary, how to manage them.
GWR: geographically weighted regression
PHMC: Public Health Management Corporation
ACS: American Community Survey
AIC: Akaike Information Criterion
FQHC: Federally Qualified Health Centers
Torre LA, Bray F, Siegel RL, Ferlay J, Lortet-Tieulent J, Jemal A. Global cancer statistics, 2012: global cancer statistics, 2012. CA Cancer J Clin. 2015;65(2):87–108. doi:10.3322/caac.21262.
ACS. Breast cancer survival rates by stage. http://www.cancer.org/cancer/breastcancer/detailedguide/breast-cancer-survival-by-stage. Published 2014.
ACS. Breast cancer facts and figures for Hispanics/Latinos 2015–2017. http://www.cancer.org/acs/groups/content/@research/documents/document/acspc-046405.pdf. Published 2016.
ACS. Cancer facts and figures for African Americans 2013–2014. Am Cancer Soc. 2013. http://www.cancer.org/acs/groups/content/@epidemiologysurveilance/documents/document/acspc-036921.pdf.
Siu AL. Screening for breast cancer: U.S. preventive services task force recommendation statement. Ann Intern Med. 2016;164(4):279–96. doi:10.7326/m15-2886.
Alford-Teaster J, Lange JM, Hubbard RA, et al. Is the closest facility the one actually used? An assessment of travel time estimation based on mammography facilities. Int J Health Geogr. 2016;15(8):1–10. doi:10.1186/s12942-016-0039-7.
Khan-Gates JA, Ersek JL, Eberth JM, Adams SA, Pruitt SL. Geographic access to mammography and its relationship to breast cancer screening and stage at diagnosis: a systematic review. Womens Health Issues. 2015;25(5):482–93. doi:10.1016/j.whi.2015.05.010.
Onega T, Cook A, Kirlin B, et al. The influence of travel time on breast cancer characteristics, receipt of primary therapy, and surveillance mammography. Breast Cancer Res Treat. 2011;129(1):269–75. doi:10.1007/s10549-011-1549-4.
Huang B, Dignan M, Han D, Johnson O. Does distance matter? Distance to mammography facilities and stage at diagnosis of breast cancer in Kentucky. J Rural Health. 2009;25(4):366–71.
Iqbal J, Ginsburg O, Rochon PA, Sun P, Narod SA. Differences in breast cancer stage at diagnosis and cancer-specific survival by race and ethnicity in the United States. JAMA. 2015;313(2):165–73. doi:10.1001/jama.2014.17322.
Mejia de Grubb MC, Kilbourne B, Kihlberg C, Levine RS. Demographic and geographic variations in breast cancer mortality among U.S. Hispanics. J Health Care Poor Underserved. 2013;24(Suppl 1):140–52. doi:10.1353/hpu.2013.0043.
Tian N, Gaines Wilson J, Benjamin Zhan F. Female breast cancer mortality clusters within racial groups in the United States. Health Place. 2010;16(2):209–18. doi:10.1016/j.healthplace.2009.09.012.
Wang F, McLafferty S, Escamilla V, Luo L. Late-stage breast cancer diagnosis and health care access in Illinois. Prof Geogr. 2008;60(1):54–69. doi:10.1080/00330120701724087.
Dean L, Subramanian SV, Williams DR, Armstrong K, Charles CZ, Kawachi I. The role of social capital in African-American women's use of mammography. Soc Sci Med. 2014;104:148–56. doi:10.1016/j.socscimed.2013.11.057.
Hunt MO, Wise LA, Jipguep M-C, Cozier YC, Rosenberg L. Neighborhood racial composition and perceptions of racial discrimination: evidence from the Black Women's Health Study. Soc Psychol Q. 2007;70(3):272–89.
Chen D, Yang T-C. The pathways from perceived discrimination to self-rated health: an investigation of the roles of distrust, social capital, and health behaviors. Soc Sci Med. 2014;104:64–73. doi:10.1016/j.socscimed.2013.12.021.
Sampson RJ. Great American City: Chicago and the enduring neighborhood effect. 1st ed. Chicago: University of Chicago Press; 2012.
Clark WAV, Burt JE. The impact of workplace on residential relocation. Ann Assoc Am Geogr. 1980;70(1):59–66. doi:10.1111/j.1467-8306.1980.tb01297.x.
Cromley E, McLafferty S. GIS and public health. 2nd ed. New York: The Guilford Press; 2012.
Kawachi I, Kennedy BP, Glass R. Social capital and self-rated health: a contextual analysis. Am J Public Health. 1998;89(8):1187–93.
Gibbons J, Yang T-C. Connecting across the divides of race/ethnicity: how does segregation matter? Urban Aff Rev. 2015;Online First:1–28. doi:10.1177/1078087415589193.
Putnam RD. Bowling alone: the collapse and revival of American Community. New York: Simon and Schuster; 2000.
Small ML. Unanticipated gains: origins of network inequality in everyday life. New York: Oxford University Press; 2009.
Benjamins MR. Religious influences on trust in physicians and the health care system. Int J Psychiatry Med. 2006;36(1):69–83.
Ahern MM, Hendryx MS. Social capital and trust in providers. Soc Sci Med. 2003;57(7):1195–203. doi:10.1016/S0277-9536(02)00494-X.
Kim D. Bonding versus bridging social capital and their associations with self rated health: a multilevel analysis of 40 US communities. J Epidemiol Community Health. 2006;60(2):116–22. doi:10.1136/jech.2005.038281.
Hutchinson RN, Putt MA, Dean LT, Long JA, Montagnet CA, Armstrong K. Neighborhood racial composition, social capital and black all-cause mortality in Philadelphia. Soc Sci Med. 2009;68(10):1859–65. doi:10.1016/j.socscimed.2009.02.005.
Klinenberg E. Heat wave: a social autopsy of disaster in Chicago. Chicago: University of Chicago Press; 2003.
Smedley B, Stith A, Nelson A, editors. Unequal treatment: confronting racial and ethnic disparities in health care. Washington: The National Academies Press; 2002.
Hausmann LR, Jeong K, Bost JE, Ibrahim SA. Perceived discrimination in health care and use of preventive health services. J Gen Intern Med. 2008;23(10):1679–84. doi:10.1007/s11606-008-0730-x.
Abramson CM, Hashemi M, Sanchez-Jankowski M. Perceived discrimination in US healthcare: charting the effects of key social characteristics within and across racial groups. Prev Med Rep. 2015;2:615–21. doi:10.1016/j.pmedr.2015.07.006.
Jacobs EA, Rathouz PJ, Karavolos K, et al. Perceived discrimination is associated with reduced breast and cervical cancer screening: the Study of Women's Health Across the Nation (SWAN). J Womens Health Larchmt. 2014;23(2):138–45. doi:10.1089/jwh.2013.4328.
Gibbons J. Does racial segregation make community-based organizations more territorial? Evidence from Newark, NJ, and Jersey City, NJ: does racial segregation make community-based organizations more territorial? J Urban Aff. 2015;37(5):600–19. doi:10.1111/juaf.12170.
Marquis C, Battilana J. Acting globally but thinking locally? The enduring influence of local communities on organizations. Res Organ Behav. 2009;29:283–302. doi:10.1016/j.riob.2009.06.001.
Black NC. An ecological approach to understanding adult obesity prevalence in the United States: a county-level analysis using geographically weighted regression. Appl Spat Anal Policy. 2014;7(3):283–99. doi:10.1007/s12061-014-9108-0.
Comber AJ, Brunsdon C, Phillips M. The varying impact of geographic distance as a predictor of dissatisfaction over facility access. Appl Spat Anal Policy. 2012;5(4):333–52. doi:10.1007/s12061-011-9074-8.
Acevedo-Garcia D, Lochner KA, Osypuk TL, Subramanian SV. Future directions in residential segregation and health research: a multilevel approach. Am J Public Health. 2003;93(2):215–21.
Gibbons J, Yang T-C. Self-rated health and residential segregation: how does race/ethnicity matter? J Urban Health. 2014;91(4):648–60. doi:10.1007/s11524-013-9863-2.
Kramer MR, Hogue CR. Is segregation bad for your health? Epidemiol Rev. 2009;31(1):178–94. doi:10.1093/epirev/mxp001.
Subramanian SV. Racial residential segregation and geographic heterogeneity in black/white disparity in poor self-rated health in the US: a multilevel statistical analysis. Soc Sci Med. 2005;60(8):1667–79. doi:10.1016/j.socscimed.2004.08.040.
Yang T-C, Matthews SA. Understanding the non-stationary associations between distrust of the health care system, health conditions, and self-rated health in the elderly: a geographically weighted regression approach. Health Place. 2012;18(3):576–85. doi:10.1016/j.healthplace.2012.01.007.
Andersen RM. Revisiting the behavioral model and access to medical care: does it matter? J Health Soc Behav. 1995;36(1):1. doi:10.2307/2137284.
Armstrong K, McMurphy S, Dean LT, et al. Differences in the patterns of health care system distrust between Blacks and Whites. J Gen Intern Med. 2008;23(6):827–33. doi:10.1007/s11606-008-0561-9.
Yang T-C, Matthews SA, Hillemeier MM. Effect of health care system distrust on breast and cervical cancer screening in Philadelphia, Pennsylvania. Am J Public Health. 2011;101(7):1297.
Gelberg L, Andersen RM, Leake BD. The behavioral model for vulnerable populations: application to medical care use and outcomes for homeless people. Health Serv Res. 2000;34(6):1273–302.
PHMC. Household health survey documentation. Philadelphia: Public Health Management Corporation; 2008.
Final Recommendation Statement Breast Cancer: Screening. Rockville, MD: U.S. Preventive Services Task Force; 2002. https://www.uspreventiveservicestaskforce.org/Page/Document/RecommendationStatementFinal/breast-cancer-screening-2002.
Final Recommendation Statement Breast Cancer: Screening. Rockville, MD: U.S. Preventive Services Task Force; 2016. http://www.uspreventiveservicestaskforce.org/Page/Document/UpdateSummaryFinal/breast-cancer-screening1.
Babitsch B, Gohl D, von Lengerke T. Re-revisiting Andersen's behavioral model of health services use: a systematic review of studies from 1998–2011. GMS Psycho-Soc-Med. 2012;9. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3488807/. Accessed 16 Dec 2015.
Matthews SA, Yang T-C. Mapping the results of local statistics: using geographically weighted regression. Demogr Res. 2012;26:151–66. doi:10.4054/DemRes.2012.26.6.
Fotheringham S, Brunsdon C, Charlton M. Geographically weighted regression: the analysis of spatially varying relationships. New York: Wiley; 2003.
Brunsdon C, Fotheringham AS, Charlton M. Geographically weighted regression: a method for exploring spatial nonstationarity. In: Kemp K, editor. Encyclopedia of geographic information science. California: Sage; 2008. p. 558.
Brunsdon C, Fotheringham S, Charlton M. Geographically weighted regression. J R Stat Soc Ser Stat. 1998;47(3):431–43.
Chen VY-J, Yang T-C. SAS macro programs for geographically weighted generalized linear modeling with spatial point data: applications to health research. Comput Methods Prog Biomed. 2012;107(2):262–73. doi:10.1016/j.cmpb.2011.10.006.
Shoff C, Yang T-C. Untangling the associations among distrust, race, and neighborhood social environment: a social disorganization perspective. Soc Sci Med. 2012;74(9):1342–52. doi:10.1016/j.socscimed.2012.01.012.
Burnham K, Anderson D. Model selection and multimodel inference: a practical information-theoretic approach. New York: Springer; 2002.
Brunsdon C, Fotheringham AS, Charlton M. Spatial nonstationarity and autoregressive models. Environ Plan A. 1998;30(6):957–73. doi:10.1068/a300957.
Leung Y, Mei C-L, Zhang W-X. Statistical tests for spatial nonstationarity based on the geographically weighted regression model. Environ Plan A. 2000;32(1):9–32. doi:10.1068/a3162.
Massey DS. New faces in new places: the changing geography of American immigration. New York: Russell Sage Foundation; 2008.
Bécares L, Cormack D, Harris R. Ethnic density and area deprivation: Neighbourhood effects on Māori health and racial discrimination in Aotearoa/New Zealand. Soc Sci Med. 2013;88:76–82. doi:10.1016/j.socscimed.2013.04.007.
Bécares L, Nazroo J, Stafford M. The buffering effects of ethnic density on experienced racism and health. Health Place. 2009;15(3):700–8. doi:10.1016/j.healthplace.2008.10.008.
Laiteerapong N, Kirby J, Gao Y, et al. Health care utilization and receipt of preventive care for patients seen at federally funded health centers compared to other sites of primary care. Health Serv Res. 2014;49(5):1498–518. doi:10.1111/1475-6773.12178.
JG contributed to the design of the study, carried out all analyses, contributed to the drafting of the manuscript, and led the interpretation of research results. MS contributed to the design of the study, participated in the interpretation of the research results, and contributed to the drafting of the manuscript. Both authors read and approved the final manuscript.
The authors would like to thank Tse-Chuan Yang for his advice and input towards the development of the study design and interpretation of results.
All analyses of the data were conducted with the program GWR 4, and the maps were based on data created with the program ArcGIS 10.2. The authors cannot directly share the data used in this study due to restrictions from its creator, the PHMC. These data are available for purchase from the PHMC should one wish to replicate the results.
The individual-level survey data in this study was collected by the PHMC for outside analysis with the consent of the participants. Documentation of consent can be obtained from the PHMC.
As the paper uses secondary data, the PHMC's Southeastern Pennsylvania Household Health Survey, the policies of our University's Institutional Review Board are that it is not subject to an ethical review. Information on the individual participants was de-identified by the PHMC, minimizing risk for individual respondents.
Department of Sociology Health, San Diego State University, 5500 Campanile Dr., San Diego, CA, 92182-4493, USA
Joseph Gibbons
Graduate School of Public Health, San Diego State University, 5500 Campanile Dr., San Diego, CA, 92182-4493, USA
Melody K. Schiaffino
Correspondence to Joseph Gibbons.
Timely mammograms
Spatial heterogeneity
Framework for Hamiltonian simulation and beyond: standard-form encoding, qubitization, and quantum signal processing
This is a Perspective on "Hamiltonian Simulation by Qubitization" by Guang Hao Low and Isaac L. Chuang, published in Quantum 3, 163 (2019).
By Su Yuan (Department of Computer Science, Institute for Advanced Computer Studies and Joint Center for Quantum Information and Computer Science, University of Maryland).
Simulating the Hamiltonian dynamics of a quantum system is one of the most promising applications of quantum computers. Indeed, the idea of quantum computers, suggested by Feynman [1] and Manin, was strongly motivated by the problem of quantum simulation. Over the past two decades, many efficient quantum algorithms have been designed for simulating both general Hamiltonians and specific systems in condensed matter physics, quantum chemistry, and quantum field theory. These algorithms employ a variety of techniques to process Hamiltonians encoded in different ways, each with runtime that depends on a number of parameters, typically time, accuracy, and system size. Published in Quantum [2], Guang Hao Low and Isaac L. Chuang's paper develops a framework that unifies a number of these Hamiltonian encodings and leads to a simulation algorithm with not only optimal query complexity (as established in their earlier work [3]) but also low ancilla overhead, a feature that is especially desirable for near-term realization.
Naturally, the cost of quantum simulation depends on how the input Hamiltonians are accessed by the quantum computers. Previous works typically consider two input models: (i) the sparse matrix model [4], well-motivated from a theoretical perspective, assumes that the Hamiltonians are sparse and access to the nonzero matrix elements are provided by oracles; and (ii) the linear-combination-of-unitaries model [5], favored for practical applications such as condensed matter physics and quantum chemistry, handles Hamiltonians that can be decomposed into linear combinations of unitaries. Low and Chuang propose a general input model they call "standard-form encoding", which includes both above models as special cases. This generality has also motivated new query models of density-matrix simulation, offering better scaling than the sample-based model studied by Lloyd, Mohseni, and Rebentrost [6].
Now that the input Hamiltonians are encoded in standard form, we ask how they are related to the operations we can perform on digital quantum computers. This relation is made clear through "qubitization", the central result by Low and Chuang [2]. Qubitization asserts that whenever the encoded Hamiltonian $H$ contains an eigenvalue $\lambda$, the operation that encodes $H$ contains a two-by-two block
$$\begin{bmatrix} \lambda & -\sqrt{1-\lambda^{2}}\\ -\sqrt{1-\lambda^{2}} & -\lambda \end{bmatrix} \quad\text{or}\quad \begin{bmatrix} \lambda & -\sqrt{1-\lambda^{2}}\\ \sqrt{1-\lambda^{2}} & \lambda \end{bmatrix}$$
on the diagonal, with respect to a basis determined jointly by the eigenstates of $H$ and the encoding. Different eigenvalues of $H$ associate with different two-by-two blocks; they behave as if they are single-qubit rotations or reflections—hence the name "qubitization". This spectral relation builds on earlier results such as Szegedy quantum walk [7,8] and Marriott-Watrous QMA amplification [9], but Low and Chuang's new formulation makes it more versatile for quantum simulation.
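To make the statement concrete, here is a minimal numerical sketch, not taken from the paper, that builds a simple standard-form-style encoding of a small Hermitian matrix and checks the claimed two-by-two block structure; the off-diagonal signs in the check depend on the basis convention chosen for the second block row.

```python
# Minimal numerical sketch (assumption: the toy encoding U = [[H, S], [S, -H]]
# with S = sqrt(I - H^2) stands in for a standard-form encoding of H).
# For each eigenvalue lambda of H, the span of |0>|lambda> and |1>|lambda> is
# invariant under U, and U restricted to it is the 2x2 block
# [[lambda, sqrt(1-lambda^2)], [sqrt(1-lambda^2), -lambda]], matching the
# qubitization picture up to the sign convention of the basis.
import numpy as np

rng = np.random.default_rng(7)
d = 4
A = rng.normal(size=(d, d))
H = (A + A.T) / 2
H = H / (2 * np.linalg.norm(H, 2))              # ensure the spectral norm is <= 1

eigvals, eigvecs = np.linalg.eigh(H)
S = eigvecs @ np.diag(np.sqrt(1 - eigvals**2)) @ eigvecs.T   # sqrt(I - H^2), commutes with H
U = np.block([[H, S], [S, -H]])
assert np.allclose(U.T @ U, np.eye(2 * d))       # U is unitary (real orthogonal here)

for lam, v in zip(eigvals, eigvecs.T):
    top = np.concatenate([v, np.zeros(d)])       # |0>|lambda>
    bot = np.concatenate([np.zeros(d), v])       # |1>|lambda>
    block = np.array([[top @ U @ top, top @ U @ bot],
                      [bot @ U @ top, bot @ U @ bot]])
    expected = np.array([[lam, np.sqrt(1 - lam**2)],
                         [np.sqrt(1 - lam**2), -lam]])
    assert np.allclose(block, expected)
print("Each eigenvalue of H sits in its own 2x2 block of U, as qubitization asserts.")
```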
For a given Hamiltonian $H$ and evolution time $t$, the goal of quantum simulation is to implement $e^{-itH}$ using a quantum circuit comprised of elementary gates. When $H$ is qubitized, this can be realized by "quantum signal processing" [3,10]. Quantum signal processing allows one to apply polynomial functions to the entries of a single-qubit unitary operation. Combining with standard-form encoding and qubitization, this technique implements a transformation that approximates $\lambda\mapsto e^{-it\lambda}$ on each eigenstate of $H$ with eigenvalue $\lambda$, thus providing an approach to quantum simulation. The structure of the resulting circuit is simple and the ancilla overhead is low. Indeed, recent study shows that this algorithm is competitive against other quantum algorithms for an instance of classically infeasible, practically useful simulation [11].
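One way to see what kind of polynomial is needed (a standard expansion, assumed here rather than quoted from the perspective) is the Jacobi-Anger identity, which expresses the target function in Chebyshev polynomials of $\lambda$:
$$ e^{-it\lambda} \;=\; J_{0}(t) + 2\sum_{k=1}^{\infty} (-i)^{k} J_{k}(t)\, T_{k}(\lambda), $$
where $J_k$ denotes the Bessel functions of the first kind and $T_k$ the Chebyshev polynomials of the first kind; truncating the series at order roughly $t + \log(1/\epsilon)$ gives an $\epsilon$-accurate polynomial in $\lambda$ of the type that quantum signal processing can implement on the qubitized blocks.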
The new framework of Low and Chuang enables the design of functions of Hamiltonians, in many cases with optimal query complexity, of which Hamiltonian simulation is a particular example. After the original preprint release, the ideas of standard-form encoding, qubitization, and quantum signal processing have been successfully applied to areas beyond quantum simulation [12]. It will certainly be of interest to identify more applications of this framework [13,14], as well as to explore its performance in practice for near-term implementation of quantum simulation.
@article{Su2019framework, doi = {10.22331/qv-2019-08-13-21}, url = {https://doi.org/10.22331/qv-2019-08-13-21}, title = {Framework for {H}amiltonian simulation and beyond: standard-form encoding, qubitization, and quantum signal processing}, author = {Su Yuan}, journal = {{Quantum Views}}, publisher = {{Verein zur F{\"{o}}rderung des Open Access Publizierens in den Quantenwissenschaften}}, volume = {3}, pages = {21}, month = aug, year = {2019} }
[1] Richard P. Feynman. Simulating physics with computers. International Journal of Theoretical Physics, 21 (6-7): 467–488, 1982. 10.1007/BF02650179.
[2] Guang Hao Low and Isaac L. Chuang. Hamiltonian Simulation by Qubitization. Quantum, 3: 163, July 2019. ISSN 2521-327X. 10.22331/q-2019-07-12-163. URL https://doi.org/10.22331/q-2019-07-12-163.
[3] Guang Hao Low and Isaac L. Chuang. Optimal Hamiltonian simulation by quantum signal processing. Physical Review Letters, 118: 010501, 2017. 10.1103/PhysRevLett.118.010501.
[4] Dorit Aharonov and Amnon Ta-Shma. Adiabatic quantum state generation and statistical zero knowledge. In Proceedings of the 35th ACM Symposium on Theory of Computing, pages 20–29, 2003.
[5] Dominic W. Berry, Andrew M. Childs, Richard Cleve, Robin Kothari, and Rolando D. Somma. Simulating Hamiltonian dynamics with a truncated Taylor series. Physical Review Letters, 114 (9): 090502, 2015. 10.1103/PhysRevLett.114.090502.
[6] Seth Lloyd, Masoud Mohseni, and Patrick Rebentrost. Quantum principal component analysis. Nature Physics, 10 (9): 631, 2014. 10.1038/nphys3029.
[7] Mario Szegedy. Quantum speed-up of Markov chain based algorithms. In Proceedings of the 45th Annual IEEE Symposium on Foundations of Computer Science, pages 32–41, Oct 2004. 10.1109/FOCS.2004.53.
[8] Andrew M. Childs. On the relationship between continuous- and discrete-time quantum walk. Communications in Mathematical Physics, 294 (2): 581–603, 2010. 10.1007/s00220-009-0930-1.
[9] Chris Marriott and John Watrous. Quantum Arthur-Merlin games. Computational Complexity, 14 (2): 122–152, June 2005. ISSN 1016-3328. 10.1007/s00037-005-0194-x. URL http://dx.doi.org/10.1007/s00037-005-0194-x.
[10] Guang Hao Low, Theodore J. Yoder, and Isaac L. Chuang. Methodology of resonant equiangular composite quantum gates. Physical Review X, 6: 041067, 2016. 10.1103/PhysRevX.6.041067.
[11] Andrew M. Childs, Dmitri Maslov, Yunseong Nam, Neil J. Ross, and Yuan Su. Toward the first quantum simulation with quantum speedup. Proceedings of the National Academy of Sciences, 115 (38): 9456–9461, 2018. 10.1073/pnas.1801723115.
[12] András Gilyén, Yuan Su, Guang Hao Low, and Nathan Wiebe. Quantum singular value transformation and beyond: Exponential improvements for quantum matrix arithmetics. In Proceedings of the 51st ACM Symposium on Theory of Computing, pages 193–204, 2019. ISBN 978-1-4503-6705-9. 10.1145/3313276.3316366. URL http://doi.acm.org/10.1145/3313276.3316366.
[13] Dominic W Berry, Mária Kieferová, Artur Scherer, Yuval R Sanders, Guang Hao Low, Nathan Wiebe, Craig Gidney, and Ryan Babbush. Improved techniques for preparing eigenstates of fermionic Hamiltonians. npj Quantum Information, 4 (1): 22, 2018. 10.1038/s41534-018-0071-5.
[14] David Poulin, Alexei Kitaev, Damian S Steiger, Matthew B Hastings, and Matthias Troyer. Quantum algorithm for spectral measurement with a lower gate count. Physical Review Letters, 121 (1): 010501, 2018. 10.1103/PhysRevLett.121.010501.
Parallel Programming in Futhark
2. The Futhark Language¶
Futhark is a pure functional data-parallel array language. It is both syntactically and conceptually similar to established functional languages, such as Haskell and Standard ML. In contrast to these languages, Futhark focuses less on expressivity and elaborate type systems, and more on compilation to high-performance parallel code. Futhark programs are written with bulk operations on arrays, called Second-Order Array Combinators (SOACs), that mirror the higher-order functions found in conventional functional languages: map, reduce, filter, and so forth. In Futhark, the parallel SOACs have sequential semantics but permit parallel execution, and will typically be compiled to parallel code.
The primary idea behind Futhark is to design a language that has enough expressive power to conveniently express complex programs, yet is also amenable to aggressive optimisation and parallelisation. The tension is that as the expressive power of a language grows, the difficulty of efficient compilation rises likewise. For example, Futhark supports nested parallelism, despite the complexities of efficiently mapping it to the flat parallelism supported by hardware, as many algorithms are awkward to write with just flat parallelism. On the other hand, we do not support non-regular arrays, as they complicate size analysis a great deal. The fact that Futhark is purely functional is intended to give an optimising compiler more leeway in rearranging the code and performing high-level optimisations.
Programming in Futhark feels similar to programming in other functional languages. If you know languages such as Haskell, OCaml, Scala, or Standard ML, you will likely be able to read and modify most Futhark code. For example, this program computes the dot product \(\Sigma_{i} x_{i}\cdot{}y_{i}\) of two vectors of integers:
def main (x: []i32) (y: []i32): i32 =
reduce (+) 0 (map2 (*) x y)
In Futhark, the notation for an array of element type t is []t. The program defines a function called main that takes two arguments, both integer arrays, and returns an integer. The main function first computes the element-wise product of its two arguments, resulting in an array of integers, then computes the sum of the elements in this new array.
If we save the program in a file dotprod.fut, then we can compile it to a binary dotprod (or dotprod.exe on Windows) by running:
$ futhark c dotprod.fut
A Futhark program compiled to an executable will read the arguments to its main function from standard input, and will print the result to standard output:
$ echo [2,2,3] [4,5,6] | ./dotprod
36i32
In Futhark, an array literal is written with square brackets surrounding a comma-separated sequence of elements. Integer literals can be suffixed with a specific type. This is why dotprod prints 36i32, rather than just 36 - this makes it clear that the result is a 32-bit integer. Later we will see examples of when these suffixes are useful.
The futhark c compiler we used above translates a Futhark program into sequential code running on the CPU. This can be useful for testing, and will work on most systems, even those without GPUs. However, it wastes the main potential of Futhark: fast parallel execution. We can instead use the futhark opencl compiler to generate an executable that offloads execution via the OpenCL framework. In principle, this allows offloading to any kind of device, but the futhark opencl compilation pipeline makes optimisation assumptions that are oriented towards contemporary GPUs. Use of futhark opencl is simple, assuming your system has a working OpenCL setup:
$ futhark opencl dotprod.fut
Execution is just as before:
$ echo [2,2,3] [4,5,6] | ./dotprod
36i32
In this case, the workload is small enough that there is little benefit in parallelising the execution. In fact, it is likely that for this tiny dataset, the OpenCL startup overhead results in several orders of magnitude slowdown over sequential execution. See Section 3.2 for information on how to measure execution times.
The ability to compile Futhark programs to executables is useful for testing, but it should be noted that it is not how Futhark is intended to be used in practice. As a pure functional array language, Futhark is not capable of reading input or managing a user interface, and as such cannot be used as a general-purpose language. Futhark is intended to be used for small, performance-sensitive parts of larger applications, typically by compiling a Futhark program to a library that can be imported and used by applications written in conventional languages. See Section 4 for more information.
As compiled Futhark executables are intended for testing, they take a range of command line options to manipulate their behaviour and print debugging information. These will be introduced as needed.
For most of this book, we will be making use of the interactive Futhark interpreter, futhark repl, which provides a Futhark REPL into which you can enter arbitrary expressions and declarations:
$ futhark repl
Version 0.21.2.
Copyright (C) DIKU, University of Copenhagen, released under the ISC license.
Run :help for a list of commands.
[0]> 1 + 2
3i32
[1]>
The prompts are numbered to permit error messages to refer to previous inputs. We will generally elide the numbers in this book, and just write the prompt as > (do not confuse this with the Unix prompt, which we write as $).
futhark repl supports a variety of commands for inspecting and debugging Futhark code. These will be introduced as necessary, in particular in Section 3.1. There is also a batch-mode counterpart to futhark repl, called futhark run, which non-interactively executes the given program in the interpreter.
2.1. Basic Language Features¶
As a functional or value-oriented language, the semantics of Futhark can be understood entirely by how values are constructed, and how expressions transform one value to another. As a statically typed language, all Futhark values are classified by their type. The primitive types in Futhark are the signed integer types i8, i16, i32, i64, the unsigned integer types u8, u16, u32, u64, the floating-point types f32, f64, and the boolean type bool. An f32 is always a single-precision float and an f64 is a double-precision float.
Numeric literals can be suffixed with their intended type. For example, 42i8 is of type i8, and 1337e2f64 is of type f64. If no suffix is given, the type is inferred by the context. In case of ambiguity, integral literals are given type i32 and decimal literals are given type f64. Boolean literals are written as true and false.
Note: converting between primitive values
Futhark provides a collection of functions for performing straightforward conversions between primitive types. These are all of the form to.from. For example, i32.f64 converts a value of type f64 (double-precision float) to a value of type i32 (32-bit signed integer), by truncating the fractional part:
> i32.f64 2.1
2i32
> f64.i32 2
2.0f64
Technically, i32.f64 is not the name of the function. Rather, this is a reference to the function f64 in the module i32. We will not discuss modules further until Section 2.9, so for now it suffices to think of i32.f64 as a function name. The only wrinkle is that if a variable with the name i32 is in scope, the entire i32 module becomes inaccessible by shadowing.
Futhark provides shorthand for the most common conversions:
r32 == f32.i32
t32 == i32.f32
All values can be combined in tuples and arrays. A tuple value or type is written as a sequence of comma-separated values or types enclosed in parentheses. For example, (0, 1) is a tuple value of type (i32,i32). The elements of a tuple need not have the same type – the value (false, 1, 2.0) is of type (bool, i32, f64). A tuple element can also be another tuple, as in ((1,2),(3,4)), which is of type ((i32,i32),(i32,i32)). A tuple cannot have just one element, but empty tuples are permitted, although they are not very useful — these are written () and are of type (). Records exist as syntactic sugar on top of tuples, and will be discussed in Section 2.4.
An array value is written as a sequence of comma-separated values enclosed in square brackets: [1,2,3]. An array type is written as [d]t, where t is the element type of the array, and d is an integer indicating the size. We often elide d, in which case the size will be inferred. As an example, an array of three integers could be written as [1,2,3], and has type [3]i32. An empty array is written simply as [], although the context must make the type of an empty array unambiguous.
Multi-dimensional arrays are supported in Futhark, but they must be regular, meaning that all inner arrays have the same shape. For example, [[1,2], [3,4], [5,6]] is a valid array of type [3][2]i32, but [[1,2], [3,4,5], [6,7]] is not, because there we cannot determine integers m and n such that [m][n]i32 is the type of the array. The restriction to regular arrays is rooted in low-level concerns about efficient compilation, but we can understand it in language terms by the inability to write a type with consistent dimension sizes for an irregular array value. In a Futhark program, all array values, including intermediate (unnamed) arrays, must be typeable. We will return to the implications of this restriction in later chapters.
2.1.1. Simple Expressions¶
The Futhark expression syntax is mostly conventional ML-derived syntax, and supports the usual binary and unary operators, with few surprises. Futhark does not have syntactically significant indentation, so feel free to put white space whenever you like. This section will not try to cover the entire Futhark expression language in complete detail. See the reference manual for a comprehensive treatment.
Function application is via juxtaposition. For example, to apply a function f to a constant argument, we write:
f 1.0
We will discuss defining our own functions in Section 2.1.2.
A let-expression can be used to give a name to the result of an expression:
let z = x + y
in body
Futhark is eagerly evaluated (unlike Haskell), so the expression for z will be fully evaluated before body. The keyword in is optional when it precedes another let. Thus, instead of writing:
let a = 0 in
let b = 1 in
let c = 2 in
a + b + c
we can write
let a = 0
let b = 1
let c = 2
in a + b + c
The final in is still necessary. In examples, we will often skip the body of a let-expression if it is not important. A limited amount of pattern matching is supported in let-bindings, which permits tuple components to be extracted:
let (x,y) = e -- e must be of some type (t1,t2)
This feature also demonstrates the Futhark line comment syntax — two dashes followed by a space. Block comments are not supported.
Two-way if-then-else is the main branching construct in Futhark:
if x < 0 then -x else x
Pattern matching with the match keyword will be discussed later.
Arrays are indexed using conventional row-major notation, as in the expression a[i1, i2, i3, ...]. All array accesses are checked at runtime, and the program will terminate abnormally if an invalid access is attempted. Indices are of type i64, though any signed type is permitted in an index expression (it will be cast to i64).
White space is used to disambiguate indexing from application to array literals. For example, the expression a b [i] means "apply the function a to the arguments b and [i]", while a b[i] means "apply the function a to the argument b[i]".
Futhark also supports array slices. The expression a[i:j:s] returns a slice of the array a from index i (inclusive) to j (exclusive) with a stride of s. If the stride is positive, then i <= j must hold, and if the stride is negative, then j <= i must hold. Slicing of multiple dimensions can be done by separating with commas, and may be intermixed freely with indexing. Note that unlike array indices, slice indices can only be of type i64.
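For instance, we would expect slicing to behave as follows in the REPL:
> let a = [1,2,3,4,5] in a[1:4]
[2i32, 3i32, 4i32]
> let a = [1,2,3,4,5] in a[0:5:2]
[1i32, 3i32, 5i32]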
Some syntactic sugar is provided for concisely specifying arrays of intervals of integers. The expression x...y produces an array of the integers from x to y, both inclusive. The upper bound can be made exclusive by writing x..<y. For example:
> 1...3
[1i32, 2i32, 3i32]
> 1..<3
[1i32, 2i32]
It is usually necessary to enclose a range expression in parentheses, because they bind very loosely. A stride can be provided by writing x..y...z, with the interpretation "first x, then y, up to z". For example:
> 1..3...7
[1i32, 3i32, 5i32, 7i32]
> 1..3..<7
[1i32, 3i32, 5i32]
The element type of the produced array is the same as the type of the integers used to specify the bounds, which must all have the same type (but need not be constants). We will be making frequent use of this notation throughout this book.
Note: structural equality
The Futhark equality and inequality operators == and != are overloaded operators, just like +. They work for types built from basic types (e.g., i32), array types, tuple types, and record types. The operators are not allowed on values containing sub-values of abstract types or function types.
Notice that Futhark does not support a notion of type classes [PJ93] or equality types [Els98]. Allowing the equality and inequality operators to work on values of abstract types could potentially violate abstraction properties, which is the reason for the special treatment of equality types and equality type variables in the Standard ML programming language.
2.1.2. Top-Level Definitions¶
A Futhark program consists of a sequence of top-level definitions, which are primarily function definitions and value definitions. A function definition has the following form:
def name params... : return_type = body
A function may optionally declare its return type and the types of its parameters. If type annotations are not provided, the types are inferred. As a concrete example, here is the definition of the Mandelbrot set iteration step \(Z_{n+1} = Z_{n}^{2} + C\), where \(Z_n\) is the actual iteration value, and \(C\) is the initial point. In this example, all operations on complex numbers are written as operations on pairs of numbers. In practice, we would use a library for complex numbers.
def mandelbrot_step ((Zn_r, Zn_i): (f64, f64))
((C_r, C_i): (f64, f64))
: (f64, f64) =
let real_part = Zn_r*Zn_r - Zn_i*Zn_i + C_r
let imag_part = 2.0*Zn_r*Zn_i + C_i
in (real_part, imag_part)
Or equivalently, without specifying the types:
def mandelbrot_step (Zn_r, Zn_i)
                    (C_r, C_i) =
  let real_part = Zn_r*Zn_r - Zn_i*Zn_i + C_r
  let imag_part = 2.0*Zn_r*Zn_i + C_i
  in (real_part, imag_part)
It is generally considered good style to specify the types of the parameters and the return value when defining top-level functions. Type inference is mostly used for local and anonymous functions, which we will get to later.
We can define a constant with very similar notation:
def name: value_type = definition
def physicists_pi: f64 = 4.0
Top-level definitions are declared in order, and a definition may refer only to those names that have been defined before it occurs. This means that circular and recursive definitions are not permitted. We will return to function definitions in Section 2.3 and Section 2.5, where we will look at more advanced features, such as parametric polymorphism and implicit size parameters.
Note: Loading files into futhark repl
At this point you may want to start writing and applying functions. It is possible to do this directly in futhark repl, but it quickly becomes awkward for multi-line functions. You can use the :load command to read declarations from a file:
> :load test.fut
Loading test.fut
The :load command will remove any previously entered declarations and provide you with a clean slate. You can reload the file by running :load without further arguments:
> :load
Emacs users may want to consider futhark-mode, which is able to load the file being edited into futhark repl with C-c C-l, and provides other useful features as well.
Exercise: Simple Futhark programming
This is a good time to make sure you can actually write and run a Futhark program on your system. Write a program that contains a function main that accepts as input a parameter x : i32, and returns x if x is positive, and otherwise the negation of x. Compile your program with futhark c and verify that it works, then try with futhark opencl.
Solution
def main (x: i32): i32 = if x < 0 then -x else x
2.1.2.1. Type abbreviations¶
The previous definition of mandelbrot_step accepted arguments and produced results of type (f64,f64), with the implied understanding that such pairs of floats represent complex numbers. To make this clearer, and thus improve the readability of the function, we can use a type abbreviation to define a type complex:
type complex = (f64, f64)
We can now define mandelbrot_step as follows:
def mandelbrot_step ((Zn_r, Zn_i): complex)
((C_r, C_i): complex)
: complex =
Type abbreviations are purely a syntactic convenience — the type complex is fully interchangeable with the type (f64, f64):
> type complex = (f64, f64)
> def f (x: (f64, f64)): complex = x
> f (1,2)
(1.0f64, 2.0f64)
For abstract types, which hide their definition, we have to use the module system discussed in Section 2.9.
2.2. Array Operations¶
Futhark provides various combinators for performing bulk transformations of arrays. Judicious use of these combinators is key to getting good performance. There are two overall categories: first-order array combinators, like zip, that always perform the same operation, and second-order array combinators (SOACs), like map, that take a functional argument indicating the operation to perform. SOACs are the basic parallel building blocks of Futhark programming. While they are designed to resemble familiar higher-order functions from other functional languages, they have some restrictions to enable efficient parallel execution.
We can use zip to transform two arrays to a single array of pairs:
> zip [1,2,3] [true,false,true]
[(1i32, true), (2i32, false), (3i32, true)]
Notice that the input arrays may have different types. We can use unzip to perform the inverse transformation:
> unzip [(1,true),(2,false),(3,true)]
([1i32, 2i32, 3i32], [true, false, true])
The zip function requires the two input arrays to have the same length. This is verified statically, by the type checker, using rules we will discuss in Section 2.3.
Transforming between arrays of tuples and tuples of arrays is common in Futhark programs, as many array operations accept only one array as input. Due to a clever implementation technique, zip and unzip usually have no runtime cost (they are fused into other operations), so you should not shy away from using them out of efficiency concerns. For operating on arrays of tuples with more than two elements, there are zip/unzip variants called zip3, zip4, etc, up to zip5/unzip5.
Now let's take a look at some SOACs.
2.2.1. Map¶
The simplest SOAC is probably map. It takes two arguments: a function and an array. The function argument can be a function name, or an anonymous function. The function is applied to every element of the input array, and an array of the result is returned. For example:
> map (\x -> x + 2) [1,2,3]
[3i32, 4i32, 5i32]
Anonymous functions need not declare their parameter or return types, but you are free to do so in cases where it aids readability:
> map (\(x:i32): i32 -> x + 2) [1,2,3]
[3i32, 4i32, 5i32]
Partially applying operators is also supported using so-called operator sections, with a syntax taken from Haskell:
> map (+2) [1,2,3]
[3i32, 4i32, 5i32]
> map (2-) [1,2,3]
[1i32, 0i32, -1i32]
However, note that the following will not work:
[0]> map (-2) [1,2,3]
Error at [0]> :1:5-1:8:
Cannot unify `t2' with type `a0 -> x1' (must be one of i8, i16, i32, i64, u8, u16, u32, u64, f32, f64 due to use at [0]> :1:7-1:7).
When matching type
a0 -> x1
This is because the expression (-2) is taken as negative number -2 enclosed in parentheses. Instead, we have to write it with an explicit lambda:
> map (\x -> x-2) [1,2,3]
[-1i32, 0i32, 1i32]
There are variants of map, suffixed with an integer, that permit simultaneous mapping of multiple arrays, which must all have the same size. This is supported up to map5. For example, we can perform an element-wise sum of two arrays:
> map2 (+) [1,2,3] [4,5,6]
[5i32, 7i32, 9i32]
There is nothing magical about map2 - it is simply a predefined higher-order function that combines map and zip. If needed, you can define your own variants that go even higher, although the resulting code is usually not very readable.
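As a sketch of the idea (the name my_map2 is ours, and the real map2 in the prelude is polymorphic rather than fixed to i32), such a function could look like this:
def my_map2 (f: i32 -> i32 -> i32) (xs: []i32) (ys: []i32): []i32 =
  map (\(x, y) -> f x y) (zip xs ys)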
Be careful when writing map expressions where the function returns an array. Futhark requires regular arrays, so this is unlikely to go well:
map (\n -> 1...n) ns
In fact, the type checker will complain and refuse to run this program at all.
We can use map to duplicate many other language constructs. For example, if we have two arrays xs:[n]i32 and ys:[m]i32 — that is, two integer arrays of sizes n and m — we can concatenate them using:
map (\i -> if i < n then xs[i] else ys[i-n])
(0..<n+m)
However, it is not a good idea to write code like this, as it hinders the compiler from using high-level properties to do optimisation. Using map with explicit indexing is usually only necessary when solving complicated irregular problems that cannot be represented directly.
2.2.2. Scan and Reduce¶
While map is an array transformer, the reduce SOAC is an array aggregator: it uses some function of type t -> t -> t to combine the elements of an array of type []t to a value of type t. In order to perform this aggregation in parallel, the function must be associative and have a neutral element (in algebraic terms, constitute a monoid):
A function \(f\) is associative if \(f(x,f(y,z)) = f(f(x,y),z)\) for all \(x,y,z\).
A function \(f\) has a neutral element \(e\) if \(f(x,e) = f(e,x) = x\) for all \(x\).
Many common mathematical operators fulfill these laws, such as addition: \((x+y)+z=x+(y+z)\) and \(x+0=0+x=x\). But others, like subtraction, do not. In Futhark, we can use the addition operator and its neutral element to compute the sum of an array of integers:
> reduce (+) 0 [1,2,3]
6i32
It turns out that combining map and reduce is both powerful and has remarkable optimisation properties, as we will discuss in Section 6. Many Futhark programs are primarily map-reduce compositions. For example, we can define a function to compute the dot product of two vectors of integers:
def dotprod (xs: []i32) (ys: []i32): i32 =
reduce (+) 0 (map2 (*) xs ys)
A close cousin of reduce is scan, often called generalised prefix sum. Where reduce produces just one result, scan produces one result for every prefix of the input array. This is perhaps best understood with an example:
scan (+) 0 [1,2,3] == [0+1, 0+1+2, 0+1+2+3] == [1, 3, 6]
Intuitively, the result of scan is an array of the results of calling reduce on increasing prefixes of the input array. The last element of the returned array is equivalent to the result of calling reduce. Like with reduce, the operator given to scan must be associative and have a neutral element.
There are two main ways to compute scans: exclusive and inclusive. The difference is that the empty prefix is considered in an exclusive scan, but not in an inclusive scan. Computing the exclusive +-scan of [1,2,3] thus gives [0,1,3], while the inclusive +-scan is [1,3,6]. The scan in Futhark is inclusive, but it is easy to generate a corresponding exclusive scan simply by prepending the neutral element and removing the last element.
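As a sketch, here is one way to derive an exclusive +-scan from the inclusive one (the name exclusive_scan_add is ours, not a prelude function):
def exclusive_scan_add (xs: []i32): []i32 =
  -- Inclusive scan, then shift every element one position to the right
  -- and put the neutral element 0 at the front.
  let inc = scan (+) 0 xs
  in map (\i -> if i == 0 then 0 else inc[i-1]) (0..<length xs)
With this definition we would expect exclusive_scan_add [1,2,3] to give [0, 1, 3].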
While the idea behind reduce is probably familiar, scan is a little more esoteric, and mostly has applications for handling problems that do not seem parallel at first glance. Several examples are discussed in the following chapters.
2.2.3. Filtering¶
We have seen map, which permits us to change all the elements of an array, and we have seen reduce, which lets us collapse all the elements of an array. But we still need something that lets us remove some, but not all, of the elements of an array. This SOAC is filter, which keeps only those elements of an array that satisfy some predicate.
> filter (<3) [1,5,2,3,4]
[1i32, 2i32]
The use of filter is mostly straightforward, but there are some patterns that may appear subtle at first glance. For example, how do we find the indices of all nonzero entries in an array of integers? Finding the values is simple enough:
> filter (!=0) [0,5,2,0,1]
[5i32, 2i32, 1i32]
But what are the corresponding indices? We can solve this using a combination of indices, zip, filter, and unzip:
> def indices_of_nonzero (xs: []i32): []i64 =
let xs_and_is = zip xs (indices xs)
let xs_and_is' = filter (\(x,_) -> x != 0) xs_and_is
let (_, is') = unzip xs_and_is'
in is'
> indices_of_nonzero [1, 0, -2, 4, 0, 0]
[0i64, 2i64, 3i64]
Be aware that filter is a somewhat expensive SOAC, corresponding roughly to a scan plus a map.
The expression indices xs gives us an array of the same size as xs, whose elements are the indices of xs starting at 0:
> indices [5,3,1]
[0i64, 1i64, 2i64]
2.3. Size Types¶
Functions on arrays typically impose constraints on the shape of their parameters, and often the shape of the result depends on the shape of the parameters. Futhark has direct support for expressing simple instances of such constraints in the type system. Size types have an impact on almost all other language features, so even though this section will introduce the most important concepts, features, and restrictions, the interactions with other features, such as parametric polymorphism, will be discussed when those features are introduced.
As a simple example, consider a function that packs a single i32 value in an array:
def singleton (x: i32): [1]i32 = [x]
We explicitly annotate the return type to state that this function returns a single-element array. Even if we did not add this annotation, the compiler would infer it for us.
For expressing constraints among the sizes of the parameters, Futhark provides size parameters. Consider the definition of dot product we have used so far:
The dotprod function assumes that the two input arrays have the same size, or else the map2 will fail. However, this constraint is not visible in the written type of the function (although it will have been inferred). Size parameters allow us to make this explicit:
def dotprod [n] (xs: [n]i32) (ys: [n]i32): i32 =
The [n] preceding the value parameters (xs and ys) is called a size parameter, which lets us assign a name to the dimensions of the value parameters. A size parameter must be used at least once in the type of a value parameter, so that a concrete value for the size parameter can be determined at runtime. Size parameters are implicit, and do not require an explicit argument when the function is called. For example, the dotprod function can be used as follows:
> dotprod [1,2] [3,4]
11i32
As with singleton, even if we did not explicitly add a size parameter, the compiler would still automatically infer its existence (any array must have a size), and furthermore infer that xs and ys must have the same size, as they are passed to map2.
A size parameter is in scope in both the body of a function and its return type, which we can use, for instance, for defining a function for computing averages:
def average [n] (xs: [n]f64): f64 =
reduce (+) 0 xs / r64 n
Size parameters are always of type i64, and in fact, any i64-typed variable in scope can be used as a size annotation. This feature lets us define a function that replicates an integer some number of times:
def replicate_i32 (n: i64) (x: i32): [n]i32 =
map (\_ -> x) (0..<n)
In Section 2.5 we will see how to write a polymorphic replicate function that works for any type.
As a more complicated example of using size parameters, consider multiplying two matrices x and y. This is only permitted if the number of columns in x equals the number of rows in y. In Futhark, we can encode this as follows:
def matmult [n][m][p] (x: [n][m]i32, y: [m][p]i32): [n][p]i32 =
map (\xr -> map (dotprod xr) (transpose y)) x
Three sizes are involved, n, m, and p. We indicate that the number of columns in x must match the number of rows in y, and that the returned matrix has the same number of rows as x, and the same number of columns as y.
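Assuming the definitions of dotprod and matmult above are loaded, we would expect a small example to evaluate like this:
> matmult ([[1,2],[3,4]], [[5,6],[7,8]])
[[19i32, 22i32], [43i32, 50i32]]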
Presently, only variables and constants are legal as size annotations. This restriction means that the following function definition is not valid:
def dup [n] (xs: [n]i32): [2*n]i32 =
map (\i -> xs[i/2]) (0..<n*2)
Instead, we will have to write it as:
def dup [n] (xs: [n]i32): []i32 =
  map (\i -> xs[i/2]) (0..<n*2)
dup is an instance of a function whose return size is not equal to the size of one of its inputs. You have seen such functions before - the most interesting being filter. When we apply a function that returns an array with such an anonymous size, the type checker will invent a new name (called a size variable) to stand in for the statically unknown size. This size variable will be different from any other size in the program. For example, the following expression would not type check:
[1]> zip (dup [1,2,3]) (dup [3,2,1])
Error at [1]> :1:24-41:
Dimensions "ret₇" and "ret₁₂" do not match.
Note: "ret₇" is unknown size returned by "doubleup" at 1:6-21.
Note: "ret₁₂" is unknown size returned by "doubleup" at 1:25-40.
Even though we know that the two applications of dup will have the same size at run-time, the type checker assumes that each application will produce a distinct size. However, the following works:
let xs = dup [1,2,3] in zip xs xs
Size types have an escape hatch in the form of size coercions, which allow us to change the size of an array to an arbitrary new size, with a run-time check that the two sizes are actually equivalent. This allows us to force the previous example to type check:
> zip (dup [1,2,3] :> [6]i32) (dup [3,2,1] :> [6]i32)
[(1i32, 3i32), (1i32, 3i32), (2i32, 2i32),
(2i32, 2i32), (3i32, 1i32), (3i32, 1i32)]
The expression e :> t can be seen as a kind of "dynamic cast" to the desired array type. The element type and dimensionality must be unchanged - only the size is allowed to differ.
Exercise: Why two coercions?
Do we need two size coercions? Would zip (dup [1,2,3]) (dup [3,2,1] :> [6]i32) be sufficient?
No. Each call to dup produces a distinct size that is different from all other sizes (in type theory jargon, it is "rigid"), which implies it is not equal to the specific size 6.
Exercise: implement i32_indices
Using size parameters, and the knowledge that 0..<x produces an array of size x, implement a function i32_indices that works as indices, except that the input array must have elements of type i32. (If you have read ahead to Parametric Polymorphism, feel free to make it polymorphic as well.)
def i32_indices [n] (xs: [n]i32) : [n]i64 =
0..<n
2.3.1. Sizes and type abbreviations¶
Size parameters are also permitted in type abbreviations. As an example, consider a type abbreviation for a vector of integers:
type intvec [n] = [n]i32
We can now use intvec [n] to refer to integer vectors of size n:
def x: intvec [3] = [1,2,3]
A size parameter can be used multiple times on the right-hand side of the definition, for example to define an abbreviation for square matrices:
type sqmat [n] = [n][n]i32
The brackets surrounding [n] and [3] are part of the notation, not the parameter itself, and are used for disambiguating size parameters from the type parameters we shall discuss in Section 2.5.
Parametric types must always be fully applied. Using intvec by itself (without a size argument) is an error.
The definition of a type abbreviation must not contain any anonymous sizes. This is illegal:
type vec = []i32
If this was allowed, then we could write a type such as [2]vec, which would hide the fact that there is an inner size, and thus subvert the restriction to regular arrays. If for some reason we do wish to hide inner types, we can define a size-lifted type with the type~ keyword:
type~ vec = []i32
This is convenient when we want it to be an implementation detail that the type may contain an array (and is most useful after we introduce abstract types in Section 2.9). Size-lifted types come with a serious restriction: they may not be array elements. If we write down the type [2]vec, the compiler will complain. Ordinary type abbreviations, defined with type, will sometimes be called non-lifted types. This distinction is not very important for type abbreviations, but becomes more important when we discuss polymorphism in Section 2.5.
2.3.2. The causality restriction¶
Anonymous sizes have subtle interactions with size inference, which leads to some non-obvious restrictions. This is a relatively advanced topic that will not show up in simple programs, so you can skip this section for now and come back to it later.
To see the problem, consider the following function definition:
def f (b: bool) (xs: []i32) =
let a = [] : [][]i32
let b = [filter (>0) xs]
in a[0] == b[0]
The comparison on the last line forces the row size of a and b to be the same, let's say n. Further, while the empty array literal can be given any row size, that n must be the size of whatever array is produced by the filter. But now we have a problem: constructing the empty array requires us to know the specific value of n, but it is not computed until later! This is called a causality violation: we need a value before it is available.
This particular case is trivial, and can be fixed by flipping the order in which a and b are bound, but the ultimate purpose of the causality restriction is to ensure that the program does not contain circular dependencies on sizes. To make the rules simpler, causality checking uses a specified evaluation order to determine that a size is always computed before it is used. The evaluation order is mostly intuitive:
Function arguments are evaluated before function values.
For let-bindings, the bound expression is evaluated before the body.
For binary operators, the left operand is evaluated before the right operand.
Since Futhark is a pure language, this evaluation order does not have any effect on the result of programs, and may differ from what actually happens at runtime. It is used merely as a piece of type checking fiction to ensure that some straightforward evaluation order exists, where all anonymous sizes have been computed before their value is needed.
We will see a more realistic example of the impact of the causality restriction in Section 2.6.1, when we get to higher-order functions.
2.4. Records¶
Semantically, a record is a finite map from labels to values. These are supported by Futhark as a convenient syntactic extension on top of tuples. A label-value pairing is often called a field. As an example, let us return to our previous definition of complex numbers:
We can make the role of the two floats clear by using a record instead.
type complex = {re: f64, im: f64}
We can construct values of a record type with a record expression, which consists of field assignments enclosed in curly braces:
def sqrt_minus_one = {re = 0.0, im = -1.0}
The order of the fields in a record type or value does not matter, so the following definition is equivalent to the one above:
def sqrt_minus_one = {im = -1.0, re = 0.0}
In contrast to most other programming languages, record types in Futhark are structural, not nominal. This means that the name (if any) of a record type does not matter. For example, we can define a type abbreviation that is equivalent to the previous definition of complex:
type another_complex = {re: f64, im: f64}
The types complex and another_complex are entirely interchangeable. In fact, we do not need to name record types at all; they can be used anonymously:
def sqrt_minus_one: {re: f64, im: f64} = {re = 0.0, im = -1.0}
However, for readability purposes it is usually a good idea to use type abbreviations when working with records.
There are two ways to access the fields of records. The first is by field projection, which is done by dot notation known from most other programming languages. To access the re field of the sqrt_minus_one value defined above, we write sqrt_minus_one.re.
The second way of accessing field values is by pattern matching, just like we do with tuples. A record pattern is similar to a record expression, and consists of field patterns enclosed in curly braces. For example, a function for adding complex numbers could be defined as:
def complex_add ({re = x_re, im = x_im}: complex)
                ({re = y_re, im = y_im}: complex)
                : complex =
  {re = x_re + y_re, im = x_im + y_im}
As with tuple patterns, we can use record patterns in function parameters, let-bindings, and loop parameters.
As a special syntactic convenience, we can elide the = pat part of a record pattern, which will bind the value of the field to a variable of the same name as the field. For example:
def conj ({re, im}: complex): complex =
{re = re, im = -im}
This convenience is also present in tuple expressions. If we elide the definition of a field, the value will be taken from the variable in scope with the same name:
{re, im = -im}
2.4.1. Tuples as a Special Case of Records¶
In Futhark, tuples are merely records with numeric labels starting from 0. For example, the types (i32,f64) and {0:i32,1:f64} are indistinguishable. The main utility of this equivalence is that we can use field projection to access the components of tuples, rather than using a pattern in a let-binding. For example, we can say foo.0 to extract the first component of a tuple.
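For example, we would expect the following in the REPL:
> let foo = (1, 2.0) in foo.0
1i32
> let foo = (1, 2.0) in foo.1
2.0f64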
Notice that the fields of a record must be numbered consecutively starting from 0 for it to be considered a tuple. The record type {0:i32,2:f64} does not correspond to a tuple, and neither does {1:i32,2:f64} (but {1:f64,0:i32} is equivalent to the tuple (i32,f64), because field order does not matter).
2.5. Parametric Polymorphism¶
Consider the replication function we wrote earlier:
def replicate_i32 (n: i64) (x: i32): [n]i32 =
  map (\_ -> x) (0..<n)
This function works only for replicating values of type i32. If we wanted to replicate, say, a boolean value, we would have to write another function:
def replicate_bool (n: i64) (x: bool): [n]bool =
  map (\_ -> x) (0..<n)
This duplication is not particularly nice. Since the only difference between the two functions is the type of the x parameter, and we don't actually use any i32-specific operations in replicate_i32, or bool-specific operations in replicate_bool, we ought to be able to write a single function that is parameterised over the element type. In some languages, this is done with generics, or template functions. In ML-derived languages, including Futhark, we use parametric polymorphism. Just like the size parameters we saw earlier, a Futhark function may have type parameters. These are written as a name preceded by an apostrophe. As an example, this is a polymorphic version of replicate:
def replicate 't (n: i64) (x: t): [n]t =
  map (\_ -> x) (0..<n)
Notice how the type parameter binding is written as 't; we use just t to refer to the parametric type in the x parameter and the function return type. Type parameters may be freely intermixed with size parameters, but must precede all ordinary parameters. Just as with size parameters, we do not need to explicitly pass the types when we call a polymorphic function; they are automatically deduced from the concrete parameters.
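For example, assuming the definition above is loaded (a replicate with the same behaviour is also available in the prelude), we would expect:
> replicate 3 true
[true, true, true]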
We can also use type parameters when defining type abbreviations:
type triple 't = [3]t
And of course, these can be intermixed with size parameters:
type vector 't [n] = [n]t
In contrast to function definitions, the order of parameters in a type does matter. Hence, vector i32 [3] is correct, and vector [3] i32 would produce an error.
We might try to use parametric types to further refine our previous definition of complex numbers, by making it polymorphic in the representation of scalar numbers:
type complex 't = {re: t, im: t}
This type abbreviation is fine, but we will find it difficult to write useful functions with it. Consider an attempt to define complex addition:
def complex_add 't ({re = x_re, im = x_im}: complex t)
({re = y_re, im = y_im}: complex t)
: complex t =
{re = ?, im = ?}
How do we perform an addition x_re and y_re? These are both of type t, of which we know nothing. For all we know, they might be instantiated to something that is not numeric at all. Hence, the Futhark compiler will prevent us from using the + operator. In some languages, such as Haskell, facilities such as type classes may be used to support a notion of restricted polymorphism, where we can require that an instantiation of a type variable supports certain operations (like +). Futhark does not have type classes, but it does support programming with certain kinds of higher-order functions and it does have a powerful module system. The support for higher-order functions in Futhark and the module system are the subjects of the following sections.
2.6. Higher-Order Functions¶
Futhark supports certain kinds of higher-order functions. For performance reasons, certain restrictions apply, which ensure that Futhark can eliminate higher-order functions at compile time through a technique called defunctionalisation [Hov18][HHE18]. From a programmer's point of view, the main restrictions are the following:
Functions may not be stored inside arrays.
Functions may not be returned from branches in conditional expressions.
Functions are not allowed in loop parameters.
While these restrictions may seem daunting, functions may still be grouped in records and tuples, and such structures may be passed to functions and even returned by functions. In effect, quite a few functional design patterns may be applied, ranging from defining polymorphic higher-order functions, for the purpose of obtaining a high degree of abstraction and code reuse (e.g., for defining program libraries), to specific uses of higher-order functions for representing various concepts as functions. Examples of such uses include a library for type-indexed compact serialisation (and deserialisation) of Futhark values [Els05][Ken04] and an encoding of Conal Elliott's functional images [Ell03].
We have seen earlier how anonymous functions may be constructed and passed as arguments to SOACs. Here is an example anonymous function that takes parameters x, y, and z, returns a value of type t, and has body e :
\x y z: t -> e
Futhark allows for the programmer to specify so-called sections, which provide a way to form implicit eta-expansions of partially applied operations. Sections are encapsulated in parentheses. Assuming binop is a binary operator, such as +, the section (binop) is equivalent to the expression \x y -> x binop y. Similarly, the section (x binop) is equivalent to the expression \y -> x binop y and the section (binop y) is equivalent to the expression \x -> x binop y.
For making it easy to select fields from records (and tuples), a select-section may be used. An example is the section (.a.b.c), which is equivalent to the expression \y -> y.a.b.c. Similarly, the example section (.[i]), for indexing into an array, is equivalent to the expression \y -> y[i].
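For example, combining sections with map, we would expect the following (the second example uses the numeric tuple labels described in Section 2.4.1):
> map (.[0]) [[1,2],[3,4]]
[1i32, 3i32]
> map (.0) [(1,true),(2,false)]
[1i32, 2i32]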
At a high level, Futhark functions are values, which can be used as any other values. However, to ensure that the Futhark compiler is able to compile the higher-order functions efficiently via defunctionalisation, certain type-driven restrictions exist on how functions can be used, as described earlier. Moreover, for Futhark to support higher-order polymorphic functions, type variables, when bound, are divided into non-lifted (bound with an apostrophe, e.g. 't), and lifted (bound with an apostrophe and a hat, e.g. '^t). Only lifted type parameters may be instantiated with a functional type. Within a function, a lifted type parameter is treated as a functional type. All abstract types declared in modules (see Section 2.9) are considered non-lifted, and may not be functional.
Uniqueness typing (see Section 2.8) generally interacts poorly with higher-order functions. The issue is that there is no way to express, in the type of a function, how many times a function argument is applied, or to what, which means that it will not be safe to pass a function that consumes its argument. The following two conservative rules govern the interaction between uniqueness types and higher-order functions:
In the expression let p = e in ..., if any in-place update takes place in the expression e, the value bound by p must not be or contain a function.
A function that consumes one of its arguments may not be passed as a higher-order argument to another function.
A number of higher-order utility functions are available at top-level. Amongst these are the following quite useful functions (a small usage example follows the list):
val const '^a '^b : a -> b -> a -- constant function
val id '^a : a -> a -- identity function
val |> '^a '^b : a -> (a -> b) -> b -- pipe right
val <| '^a '^b : (a -> b) -> a -> b -- pipe left
val >-> '^a '^b '^c : (a -> b) -> (b -> c) -> a -> c
val <-< '^a '^b '^c : (b -> c) -> (a -> b) -> a -> c
val curry '^a '^b '^c : ((a,b) -> c) -> a -> b -> c
val uncurry '^a '^b '^c : (a -> b -> c) -> (a,b) -> c
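For example, chaining the pipe-forward operator with the SOACs from Section 2.2, we would expect:
> [1,-2,3] |> filter (>0) |> map (*2) |> reduce (+) 0
8i32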
2.6.1. Causality and piping¶
The causality restriction discussed in Section 2.3.2 has significant interaction with higher-order functions, particularly the pipe operators. Programmers familiar with other languages, in particular Haskell, may wish to use the <| operator frequently, due to its similarity to Haskell's $ operator. Unfortunately, it has pitfalls due to causality. Consider this expression:
length <| filter (>0) [1,-2,3]
This is a causality violation. The reason is that length has the following type scheme:
val length [n] 't : [n]t -> i64
This means that whenever we use length, the type checker must instantiate the size variable n with some specific size, which must be available at the place length itself occurs. In the expression above, this specific size is whatever anonymous size variable the filter application produces. However, since the rule for binary operators is left-to-right evaluation, the length function is instantiated (but not applied!) before the filter runs. The distinction between instantiation, which is when a polymorphic value is given its concrete type, and application, which is when a function is provided with an argument, is crucial here. The end result is that the compiler will complain:
> length <| filter (>0) [1,-2,3]
Error at [1]> :1:1-6:
Causality check: size "ret₁₁" needed for type of "length":
[ret₁₁]i32 -> i64
But "ret₁₁" is computed at 1:11-30.
Hint: Bind the expression producing "ret₁₁" with 'let' beforehand.
The compiler suggests binding the filter expression with a let, which forces it to be evaluated first, but there are neater solutions in this case. For example, we can exploit the fact that function arguments are evaluated before the function itself is instantiated:
> length (filter (>0) [1,-2,3])
2i64
Or we can use the left-to-right piping operator:
> filter (>0) [1,-2,3] |> length
2i64
2.7. Sequential Loops¶
Futhark does not directly support recursive functions, but instead provides syntactical sugar for expressing the equivalent of certain tail-recursive functions. Consider the following hypothetical tail-recursive formulation of a function for computing the Fibonacci numbers
def fibhelper(x: i32, y: i32, n: i32): i32 =
if n == 1 then x else fibhelper(y, x+y, n-1)
def fib(n: i32): i32 = fibhelper(1,1,n)
We cannot write this directly in Futhark, but we can express the same idea using the loop construct:
def fib(n: i32): i32 =
  let (x, _) = loop (x, y) = (1,1) for i < n do (y, x+y)
  in x
The semantics of this loop is precisely as in the tail-recursive function formulation. In general, a loop
loop pat = initial for i < bound do loopbody
has the following semantics:
Bind pat to the initial values given in initial.
Bind i to 0.
While i < bound, evaluate loopbody, rebinding pat to be the value returned by the body. At the end of each iteration, increment i by one.
Return the final value of pat.
Semantically, a loop-expression is completely equivalent to a call to its corresponding tail-recursive function.
For example, denoting by t the type of x, the loop
loop x = a for i < n do
g(x)
has the semantics of a call to the following tail-recursive function:
def f(i: i32, n: i32, x: t): t =
if i >= n then x
else f(i+1, n, g(x))
-- the call
let x = f(0, n, a)
The syntax shown above is actually just syntactical sugar for a common special case of a for-in loop over an integer range, which is written as:
loop pat = initial for xpat in xs do loopbody
Here, xpat is an arbitrary pattern that matches an element of the array xs. For example:
loop acc = 0 for (x,y) in zip xs ys do
acc + x * y
The purpose of the loop syntax is partly to render some sequential computations slightly more convenient, but primarily to express certain very specific forms of recursive functions, specifically those with a fixed iteration count. This property is used for analysis and optimisation by the Futhark compiler. In contrast to most functional languages, Futhark does not properly support recursion, and users are therefore required to use the loop syntax for sequential loops.
Apart from for-loops, Futhark also supports while-loops. These loops do not provide as much information to the compiler, but can be used for convergence loops, where the number of iterations cannot be predicted in advance. For example, the following program doubles a given number until it exceeds a given threshold value:
def main (x: i32, bound: i32): i32 =
loop x while x < bound do x * 2
In all respects other than termination criteria, while-loops behave identically to for-loops.
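Assuming the definition above is loaded, we would expect:
> main (1, 100)
128i32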
For brevity, the initial value expression can be elided, in which case an expression equivalent to the pattern is implied. This feature is easier to understand with an example. The loop
def fib (n: i32): i32 =
  let x = 1
  let y = 1
  let (x, _) = loop (x, y) = (x, y) for i < n do (y, x+y)
  in x
can also be written:
let (x, _) = loop (x, y) for i < n do (y, x+y)
This style of code can sometimes make imperative code look more natural.
Note: Type-checking with futhark repl
If you are uncertain about the type of some Futhark expression, the :type command (or :t for short) can help. For example:
> :t 2
2 : i32
> :t (+2)
(+ 2) : i32 -> i32
You will also be informed if the expression is ill-typed:
[1]> :t true : i32
Error at [1]> :1:1-1:10:
Couldn't match expected type `i32' with actual type `bool'.
2.8. In-Place Updates¶
While Futhark is an uncompromisingly pure functional language, it may occasionally prove useful to express certain algorithms in an imperative style. Consider a function for computing the \(n\) first Fibonacci numbers:
def fib (n: i64): [n]i32 =
-- Create "empty" array.
let arr = replicate n 1
-- Fill array with Fibonacci numbers.
in loop (arr) for i < n-2 do
arr with [i+2] = arr[i] + arr[i+1]
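Assuming this definition is loaded, we would expect:
> fib 10
[1i32, 1i32, 2i32, 3i32, 5i32, 8i32, 13i32, 21i32, 34i32, 55i32]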
The notation arr with [i+2] = arr[i] + arr[i+1] produces an array equivalent to arr, but with a new value for the element at position i+2. A shorthand syntax is available for the common case where we immediately bind the array to a variable of the same name:
let arr = arr with [i+2] = arr[i] + arr[i+1]
-- Can be shortened to:
let arr[i+2] = arr[i] + arr[i+1]
If the array arr were to be copied for each iteration of the loop, we would spend a lot of time moving around data, even though it is clear in this case that the "old" value of arr will never be used again. Precisely, what should be an algorithm with complexity \(O(n)\) would become \(O(n^2)\), due to copying the size \(n\) array (an \(O(n)\) operation) for each of the \(n\) iterations of the loop.
To prevent this copying, Futhark updates the array in-place, that is, with a static guarantee that the operation will not require any additional memory allocation, or copying the array. An in-place update can modify the array in time proportional to the elements being updated (\(O(1)\) in the case of the Fibonacci function), rather than time proportional to the size of the final array, as would be the case if we performed a copy. In order to perform the update without violating referential transparency, Futhark must know that no other references to the array exist, or at least that such references will not be used on any execution path following the in-place update.
In Futhark, this is done through a type system feature called uniqueness types, similar to, although simpler than, the uniqueness types of the programming language Clean. Alongside a (relatively) simple aliasing analysis in the type checker, this extension is sufficient to determine at compile time whether an in-place modification is safe, and signal a compile time error if in-place updates are used in a way where safety cannot be guaranteed.
The simplest way to introduce uniqueness types is through examples. To that end, let us consider the following function definition.
def modify (a: *[]i32) (i: i64) (x: i32): *[]i32 =
a with [i] = a[i] + x
The function call modify a i x returns \(a\), but where the element at index i has been increased by \(x\). Notice the asterisks: in the parameter declaration (a: *[]i32), the asterisk means that the function modify has been given "ownership" of the array \(a\), meaning that any caller of modify will never reference array \(a\) after the call again. In particular, modify can change the element at index i without first copying the array, i.e. modify is free to do an in-place modification. Furthermore, the return value of modify is also unique - this means that the result of the call to modify does not share elements with any other visible variables.
Let us consider a call to modify, which might look as follows.
let b = modify a i x
Under which circumstances is this call valid? Two things must hold:
The type of a must be *[]i32, of course.
Neither a nor any variable that aliases a may be used on any execution path following the call to modify.
When a value is passed as a unique-typed argument in a function call, we say that the value is consumed, and neither it nor any of its aliases (see below) can be used again. Otherwise, we would break the contract that gives the function liberty to manipulate the argument however it wants. Notice that it is the type in the argument declaration that must be unique - it is permissible to pass a unique-typed variable as a non-unique argument (that is, a unique type is a subtype of the corresponding nonunique type).
A variable \(v\) aliases \(a\) if they may share some elements, for instance by an overlap in memory. As the most trivial case, after evaluating the binding b = a, the variable b will alias a. As another example, if we extract a row from a two-dimensional array, the row will alias its source:
let b = a[0] -- b is aliased to a
-- (assuming a is not one-dimensional)
Most array combinators produce fresh arrays that initially alias no other arrays in the program. In particular, the result of map f a does not alias a. One exception is array slicing, where the result is aliased to the original array.
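As a hedged illustration (not from the original text), the interaction between slicing, aliasing, and consumption can be sketched as follows; under the rules described here, the compiler should reject the final line, because b aliases the array consumed by the in-place update:

def alias_example (a: *[]i32): i32 =
  let b = a[0:2]      -- 'b' aliases 'a', since slicing does not copy
  let a[0] = 42       -- the in-place update consumes 'a'
  in b[0]             -- error: 'b' aliases a consumed value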
Let us consider the definition of a function returning a unique array:
def f(a: []i32): *[]i32 = e
Notice that the argument, a, is non-unique, and hence we cannot modify it inside the function. There is another restriction as well: a must not be aliased to our return value, as the uniqueness contract requires us to ensure that there are no other references to the unique return value. This requirement would be violated if we permitted the return value in a unique-returning function to alias its (non-unique) parameters.
To summarise: values are consumed by being the source in an in-place binding, or by being passed as a unique parameter in a function call. We can crystallise valid usage in the form of three principal rules:
Uniqueness Rule 1
When a value is consumed (for example, by being passed in the place of a unique parameter in a function call, or used as the source in an in-place expression), neither that value, nor any value that aliases it, may be used on any execution path following the consumption. A violation of this rule is as follows:
let b = a with [i] = 2 -- Consumes 'a'
in f(b,a) -- Error: a used after being consumed
Uniqueness Rule 2
If a function definition is declared to return a unique value, the return value (that is, the result of the body of the function) must not share memory with any non-unique arguments to the function. As a consequence, at the time of execution, the result of a call to the function is the only reference to that value. A violation of this rule is as follows:
def broken (a: [][]i32, i: i64): *[]i32 =
a[i] -- Error: Return value aliased with 'a'.
Uniqueness Rule 3
If a function call yields a unique return value, the caller has exclusive access to that value. At the point the call returns, the return value may not share memory with any variable used in any execution path following the function call. This rule is particularly subtle, but can be considered a rephrasing of Uniqueness Rule 2 from the "calling side".
It is worth emphasising that everything related to uniqueness types is implemented as a static analysis. All violations of the uniqueness rules will be discovered at compile time (during type-checking), leaving the code generator and runtime system at liberty to exploit them for low-level optimisation.
2.8.1. When To Use In-Place Updates
If you are used to programming in impure languages, in-place updates may seem a natural and convenient tool that you may use frequently. However, Futhark is a functional array language, and should be used as such. In-place updates are restricted to simple cases that the compiler is able to analyze, and should only be used when absolutely necessary. Most Futhark programs are written without making use of in-place updates at all.
Typically, we use in-place updates to efficiently express sequential algorithms that are then mapped on some array. Somewhat counter-intuitively, however, in-place updates can also be used for expressing irregular nested parallel algorithms (which are otherwise not expressible in Futhark), albeit in a low-level way. The key here is the array combinator scatter, which writes to several positions in an array in parallel. Suppose we have an array is of type [n]i32, an array vs of type [n]t (for some t), and an array as of type [m]t. Then the expression scatter as is vs morally computes
for i in 0..n-1:
  j = is[i]
  v = vs[i]
  if ( j >= 0 && j < length as )
  then { as[j] = v }
  else { }
and returns the modified as array. The old as array is marked as consumed and may not be used anymore. Notice that writing outside the index domain of the target array has no effect.
Moreover, identical indices in is (that are valid indices into the target array) are required to map to identical values; otherwise, the result is unspecified. In particular, it is not guaranteed that one of the duplicate writes will complete atomically; they may be interleaved. Futhark features a function, called reduce_by_index (a generalised histogram operation), which can handle this case deterministically. The parallel scatter operation can be used, for instance, to implement efficiently the radix sort algorithm, as demonstrated in Section 5.6.1.
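A minimal sketch of scatter in use (not from the original text; it assumes the i64 index type used by recent Futhark versions):

def scatter_example : [5]i32 =
  -- Write 10 at index 1 and 20 at index 3 of a zeroed array.
  let dest = replicate 5 0
  in scatter dest [1i64, 3i64] [10, 20]   -- yields [0, 10, 0, 20, 0]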
2.9. Modules
When most programmers think of module systems, they think of rather utilitarian systems for namespace control and splitting programs across multiple files. And in most languages, the module system is indeed little more than this. But in Futhark, we have adopted an ML-style higher-order module system that permits abstraction over modules [EHAO18]. The module system is not just a method for organising Futhark programs, it is also a powerful facility for writing generic code. Most importantly, all module language constructs are eliminated from the program at compile time, using a technique called static interpretation [Els99][Ann18]. As a consequence, from a programmer's perspective, there is no overhead involved with making use of module language features.
2.9.1. Simple Modules
At the most basic level, a module (called a structure in Standard ML) is merely a collection of declarations
module add_i32 = {
  type t = i32
  def add (x: t) (y: t): t = x + y
  def zero: t = 0
}
Now, add_i32.t is an alias for the type i32, and add_i32.add is a function that adds two values of type i32. The only peculiar thing about this notation is the equal sign before the opening brace. The declaration above is actually a combination of a module binding
module add_i32 = ...
and a module expression
In this case, the module expression encapsulates a number of declarations enclosed in curly braces. In general, as the name suggests, a module expression is an expression that returns a module. A module expression is syntactically and conceptually distinct from a regular value expression, but serves much the same purpose. The module language is designed such that evaluation of a module expression can always be done at compile time.
Apart from a sequence of declarations, a module expression can also be merely the name of another module
module foo = add_i32
Now every name defined in add_i32 is also available in foo. At compile-time, only a single version of the add function is defined.
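A small usage sketch (not from the original text), showing that both names refer to the same definitions:

def five : add_i32.t = foo.add 2 3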
2.9.2. Module Types
What we have seen so far is nothing more than a simple namespace mechanism. The ML module system only becomes truly powerful once we introduce module types and parametric modules (in Standard ML, these are called signatures and functors).
A module type is the counterpart to a value type. It describes which names are defined, and as what. We can define a module type that describes add_i32:
module type i32_adder = {
  type t = i32
  val add : t -> t -> t
  val zero : t
}
As with modules, we have the notion of a module type expression. In this case, the module type expression is a sequence of specifications enclosed in curly braces. A specification specifies how a name must be defined: as a value (including functions) of some type, as a type abbreviation, or as an abstract type (which we will return to later).
We can assert that some module implements a specific module type via a module type ascription:
module foo = add_i32 : i32_adder
Syntactic sugar lets us move the module type to the left of the equal sign:
module add_i32: i32_adder = {
  type t = i32
  def add (x: t) (y: t): t = x + y
  def zero: t = 0
}
When we are ascribing a module with a module type, the module type functions as a filter, removing anything not explicitly mentioned in the module type:
module bar = add_i32 : { type t = i32
                         val zero : t }
An attempt to access bar.add will result in a compilation error, as the ascription has hidden it. This is known as an opaque ascription, because it obscures anything not explicitly mentioned in the module type. The module system in Standard ML supports both opaque and transparent ascription, but in Futhark we support only opaque ascription. This example also demonstrates the use of an anonymous module type. Module types are structural (just like value types), and are named only for convenience.
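As a brief sketch (not from the original text) of the filtering effect described above:

def z : i32 = bar.zero         -- fine: 'zero' is part of the ascribed module type
-- def s = bar.add bar.zero 1  -- would be rejected: 'add' is hidden by the ascription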
We can use type ascription with abstract types to hide the definition of a type from the users of a module:
module speeds: { type thing
                 val car : thing
                 val plane : thing
                 val futhark : thing
                 val speed : thing -> i32 } = {
  type thing = i32
  def car: thing = 0
  def plane: thing = 1
  def futhark: thing = 2
  def speed (x: thing): i32 =
    if x == car then 120
    else if x == plane then 800
    else if x == futhark then 10001
    else 0 -- will never happen
}
The (anonymous) module type asserts that a distinct type thing must exist, but does not mention its definition. There is no way for a user of the speeds module to do anything with a value of type speeds.thing apart from passing it to speeds.speed. The definition is entirely abstract. Furthermore, no values of type speeds.thing exist except those that are created by the speeds module.
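For instance (a usage sketch, not from the original text), essentially the only meaningful operation available outside the module is:

def plane_speed : i32 = speeds.speed speeds.plane   -- evaluates to 800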
2.9.3. Parametric Modules
While module types serve some purpose for namespace control and abstraction, their most interesting use is in the definition of parametric modules. A parametric module is conceptually equivalent to a function. Where a function takes a value as input and produces a value, a parametric module takes a module and produces a module. For example, given a module type
module type monoid = {
  type t
  val add : t -> t -> t
  val zero : t
}
We can define a parametric module that accepts a module satisfying the monoid module type, and produces a module containing a function for collapsing an array
module sum (M: monoid) = {
  def sum (a: []M.t): M.t =
    reduce M.add M.zero a
}
There is an implied assumption here, which is not captured by the type system: The function add must be associative and have zero as its neutral element. These constraints come from the parallel semantics of reduce, and the algebraic concept of a monoid. Notice that in monoid, no definition is given of the type t—we only assert that there must be some type t, and that certain operations are defined for it.
We can use the parametric module sum as follows:
module sum_i32 = sum add_i32
We can now refer to the function sum_i32.sum, which has type []i32 -> i32. The type is only abstract inside the definition of the parametric module. We can instantiate sum again with another module, this time an anonymous module:
module prod_f64 = sum {
  type t = f64
  def add (x: f64) (y: f64): f64 = x * y
  def zero: f64 = 1.0
}
The function prod_f64.sum has type []f64 -> f64, and computes the product of an array of numbers (we should probably have picked a more generic name than sum for this function).
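A short usage sketch (not from the original text) for the two instantiations:

def ten : i32 = sum_i32.sum [1, 2, 3, 4]             -- 1 + 2 + 3 + 4
def twenty_four : f64 = prod_f64.sum [2.0, 3.0, 4.0] -- 2.0 * 3.0 * 4.0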
Operationally, each application of a parametric module results in its definition being duplicated, with references to the module parameter replaced by references to the concrete module argument. This is quite similar to how C++ templates are implemented. Indeed, parametric modules can be seen as a simplified variant of templates, with no specialisation, and with module types to ensure rigid type checking. In C++, a template is type-checked when it is instantiated, whereas a parametric module is type-checked when it is defined.
Parametric modules, like other modules, can contain more than one declaration. This feature is useful for giving related functionality a common abstraction, for example to implement linear algebra operations that are polymorphic over the type of scalars. The following example uses an anonymous module type for the module parameter and the open declaration for bringing the names from a module into the current scope:
module linalg(M : {
                type scalar
                val zero : scalar
                val add : scalar -> scalar -> scalar
                val mul : scalar -> scalar -> scalar
              }) = {
  open M

  def dotprod [n] (xs: [n]scalar) (ys: [n]scalar)
             : scalar =
    reduce add zero (map2 mul xs ys)

  def matmul [n] [p] [m] (xss: [n][p]scalar)
                         (yss: [p][m]scalar)
             : [n][m]scalar =
    map (\xs -> map (dotprod xs) (transpose yss)) xss
}
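As a sketch (not from the original text) of how this parametric module might be instantiated and used:

module linalg_i32 = linalg {
  type scalar = i32
  def zero = 0i32
  def add (x: i32) (y: i32) = x + y
  def mul (x: i32) (y: i32) = x * y
}

def thirty_two : i32 = linalg_i32.dotprod [1, 2, 3] [4, 5, 6]   -- 1*4 + 2*5 + 3*6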
2.9.4. Importing other files
While Futhark's module system is not directly file-oriented, there is still a close interaction. You can access code in other files as follows:
import "module"
The above will include all non-local top-level definitions from module.fut and make them available in the current Futhark program. The .fut extension is implied.
You can also include files from subdirectories:
import "path/to/a/file"
The above will include the file path/to/a/file.fut relative to the including file.
If we are defining a top-level function (or any other top-level construct) that we do not want to be visible outside the current file, we can prefix it with local:
local def i_am_hidden x = x + 2
Qualified imports are possible, where a module is created for the file:
module M = import "module"
In fact, a plain import "module" is equivalent to:
local open import "module"
This declaration opens "module" in the current file, but does not propagate its contents to modules that in turn import the current file.
Phenome-wide investigation of the causal associations between childhood BMI and adult trait outcomes: a two-sample Mendelian randomization study
Shan-Shan Dong, Kun Zhang, Yan Guo, Jing-Miao Ding, Yu Rong, Jun-Cheng Feng, Shi Yao, Ruo-Han Hao, Feng Jiang, Jia-Bin Chen, Xiao-Feng Chen & Tie-Lin Yang (ORCID: orcid.org/0000-0001-7062-3025)
Childhood obesity is reported to be associated with the risk of many diseases in adulthood. However, observational studies cannot fully account for confounding factors. We aimed to systematically assess the causal associations between childhood body mass index (BMI) and various adult traits/diseases using two-sample Mendelian randomization (MR).
After data filtering, 263 adult traits genetically correlated with childhood BMI (P < 0.05) were subjected to MR analyses. Inverse-variance weighted, MR-Egger, weighted median, and weighted mode methods were used to estimate the causal effects. Multivariable MR analysis was performed to test whether the effects of childhood BMI on adult traits are independent from adult BMI.
We identified potential causal effects of childhood obesity on 60 adult traits (27 disease-related traits, 27 lifestyle factors, and 6 other traits). Higher childhood BMI was associated with a reduced overall health rating (β = − 0.10, 95% CI − 0.13 to − 0.07, P = 6.26 × 10−11). Specifically, higher childhood BMI was associated with increased odds of coronary artery disease (OR = 1.09, 95% CI 1.06 to 1.11, P = 4.28 × 10−11), essential hypertension (OR = 1.12, 95% CI 1.08 to 1.16, P = 1.27 × 10−11), type 2 diabetes (OR = 1.36, 95% CI 1.30 to 1.43, P = 1.57 × 10−34), and arthrosis (OR = 1.09, 95% CI 1.06 to 1.12, P = 8.80 × 10−9). However, after accounting for adult BMI, the detrimental effects of childhood BMI on disease-related traits were no longer present (P > 0.05). For dietary habits, different from conventional understanding, we found that higher childhood BMI was associated with low calorie density food intake. However, this association might be specific to the UK Biobank population.
In summary, we provided a phenome-wide view of the effects of childhood BMI on adult traits. Multivariable MR analysis suggested that the associations between childhood BMI and increased risks of diseases in adulthood are likely attributed to individuals remaining obese in later life. Therefore, ensuring that childhood obesity does not persist into later life might be useful for reducing the detrimental effects of childhood obesity on adult diseases.
Obesity is a worldwide health problem. The prevalence of adult obesity has increased dramatically since the 1980s [1]. It is particularly worrisome that the rate of increase in childhood obesity has been nearly double that in adults [1]. Childhood overweight and obesity often persist in adulthood, which increases the risks of premature mortality and physical morbidity across the lifespan [2].
Compelling observational studies have reported that childhood obesity is associated with the risk of many complex diseases in adulthood, such as coronary artery disease (CAD) [3], cancers [4], diabetes [5], and polycystic ovary syndrome symptoms [6]. However, results from observational studies are unable to fully account for confounding factors (e.g., socioeconomic status). Therefore, whether the relationship is causal is uncertain.
Mendelian randomization (MR), which uses genetic markers of the exposure as instruments, is now widely used to assess the causal relationship between exposure and outcome [7]. As shown in Fig. 1a, MR must satisfy three assumptions [7]: (1) the selected instruments must be associated with the exposure, (2) the instruments must not be associated with confounding factors, and (3) the instruments must influence the outcome only through the exposure (no horizontal pleiotropy exists). Conventionally, one-sample MR could be performed by using the two-stage least squares analysis method. For example, a previous study [8] using one-sample MR showed that abdominal adiposity might have a causal unfavorable effect on cardiometabolic risk factors in children and adolescents. Recently, two-sample MR analysis methods using summary-level GWAS data have been developed [9]. With a large amount of GWAS summary data deposited in public databases, two-sample MR analysis provides a cost-efficient way to investigate the potential causal effects of childhood obesity on adult traits. Using this method, previous studies have demonstrated the causal adverse effects of childhood body mass index (BMI) on adult cardiometabolic diseases [10] and osteoarthritis [11]. Using SNPs associated with adult BMI as instruments, two recent MR phenome-wide association studies [12, 13] have shown the causal effects of adult obesity on many other traits/diseases. The causal effects of childhood obesity are suspected but have not been systematically characterized.
a Schematic diagram of an MR analysis. Since genetic alleles are independently segregated and randomly assigned, SNPs are not associated with confounding factors that may bias estimates from observational studies. Three assumptions of MR are as follows: (1) the selected instrument is predictive of the exposure, (2) the instrument is independent of confounding factors, and (3) there is no horizontal pleiotropy (the instrument is associated with the outcome only through the exposure). b The analysis pipeline of the current study
Another interesting question is whether the causal effect of childhood BMI on later health outcomes is independent of adult BMI. It was reported that childhood obesity was associated with an increased risk of multiple comorbidities in adulthood even if the obesity did not persist [14]. However, a recent study [15] showed that the observational association between childhood overweight and adult type 2 diabetes (T2D) only holds if the overweight continued until puberty or later ages. Multivariable MR [16] can be used to determine whether several exposures affect an outcome through the same pathway or whether the exposures have independent effects. A study [17] using multivariable MR showed that the causal adverse effects of large body size in early life on CAD and T2D depend on adult body size. Systematically assessing the influences of childhood BMI on adult traits, and whether these effects are independent of adult BMI, might be useful for subsequent decisions on the timing of preventive strategies.
In this study, we performed an MR phenome-wide association study to assess the causal effects of childhood BMI on adult traits/diseases using two-sample MR with currently available GWAS summary data (data collected before August 2019). Multivariable MR was also used to determine the independent effects of childhood BMI after accounting for adult BMI. Our results offer a systematic view of the causal effects of childhood BMI on adult traits.
The outline of the experimental approach used in this study is shown in Fig. 1b. The STROBE-MR checklist (https://peerj.com/preprints/27857/) [18] was used for reporting this work.
Summary data resources
Childhood BMI
The childhood BMI GWAS summary dataset was from the Early Growth Genetics consortium (http://egg-consortium.org/childhood-bmi.html, "EGG_BMI_HapMap_DISCOVERY.txt.gz"). The phenotype used in this GWAS was sex- and age-adjusted standard deviation scores of childhood BMI at the latest time point (oldest age) between 2 and 10 years [19]. The GWAS included 47,541 European children in total.
Adulthood outcomes
GWAS summary data were obtained from the following resources: (1) 3513 GWAS summary data on up to 456,422 array-genotyped and imputed UK Biobank individuals (aged between 40 and 69 at recruitment) from the Genome-wide Complex Trait Analysis (GCTA) website; (2) 778 GWAS summary datasets for up to 452,264 UK Biobank individuals from the Gene ATLAS database (http://geneatlas.roslin.ed.ac.uk/); and (3) 839 GWAS datasets from the LDhub GWAShare Center (http://ldsc.broadinstitute.org/); 4) 90 datasets from various other resources (Additional file 1: Table S1). All datasets were collected before May 2019.
Next, we filtered the GWAS summary datasets first using the following criteria:
GWAS with small sample sizes and limited statistical power might fail to detect SNP-trait associations [20]. To avoid potential horizontal pleiotropy, it is necessary to make sure that the outcome data we collected have a large enough sample size to detect SNP-trait associations. Here we only kept datasets with N > 50,000 and, for binary phenotypes, with more than 10,000 cases and more than 10,000 controls. At the significance threshold of P < 5 × 10−8, a GWAS of this sample size has over 90% power to detect SNPs that explain more than 1 × 10−3 of the phenotypic variance. The statistical power was calculated using the formula presented in the work of Visscher et al. [20]. The same cutoff has also been used in a previous study that aimed to analyze pleiotropy in multiple traits [21].
Confounding by ancestry could occur if instruments associated with the exposure had different frequencies in different ethnic groups [22]. The exposure data we used for childhood BMI are from individuals of European ancestry. Therefore, we only kept GWAS summary datasets based on European populations, or in which > 80% of the samples were European.
Exclude sex-specific GWAS, unless the trait is only available for a specific sex (e.g., breast cancer).
Exclude adolescent traits, parent or sibling traits (e.g., illnesses of father). We also removed traits related to adult obesity, since 11 of the 15 childhood BMI SNPs are in linkage disequilibrium (LD) with adult BMI variants.
If a trait has more than one GWAS dataset, we only kept the dataset with the greatest number of subjects for this trait.
Finally, a total of 903 datasets remained, including 863 datasets specifically for the UK Biobank population (Additional file 2: Fig. S1). For all 903 datasets, the URLs for detailed phenotype descriptions and data access are listed in Additional file 1: Table S2. All outcomes were recoded to make sure the variables followed increasing patterns. For example, overall health rating was originally coded from 1 to 4 to refer to excellent, good, fair, and poor, respectively. In such cases, an allele positively associated with this trait is actually a risk factor for poor overall health. To avoid misunderstanding, we recoded these traits by changing the sign of the beta value in the association results.
Estimated standardized effect size of SNPs
To enable comparison of effect sizes across studies, we obtained the estimated standardized effect size (β) and standard error (se) as a function of minor allele frequency and sample size as described previously [23] using the following equation:
$$ \beta = \frac{z}{\sqrt{2p(1-p)\left(n + z^2\right)}}, \qquad se = \frac{1}{\sqrt{2p(1-p)\left(n + z^2\right)}} $$
where z can be calculated as β/se from the original summary data, p is the minor allele frequency, and n is the total sample size.
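As a purely illustrative calculation with made-up numbers (not taken from the article): for a SNP with z = 5, minor allele frequency p = 0.3, and sample size n = 50,000, the standardized effect size would be

$$ \beta = \frac{5}{\sqrt{2 \times 0.3 \times 0.7 \times \left(50000 + 25\right)}} \approx 0.0345 $$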
Genetic correlation analyses
As a phenome-wide study, our hypothesis-free MR analyses with many independent statistical tests might suffer from the problem of multiple testing burden [24]. On the other hand, if two traits are causally related and both of them have non-zero heritability, there should be genetic correlations between them [24]. Therefore, to solve the problem of multiple testing burden, we firstly screened the large publicly available GWAS summary results for evidence of genetic correlation with childhood BMI using LD score regression [25]. Formal MR analyses were subsequently performed to assess the causal effects. As an analytical strategy to mine the phenome [24], this analysis process has also been used in a previous study [26]. We used genetic correlation analysis to select data potentially associated with childhood BMI for further MR analysis; therefore, we used P < 0.05 as the cutoff to preserve all datasets with suggestive evidence. All traits were classified into three main categories—lifestyle factors, disease-related traits, and others. All disease-related traits were further classified according to the International Classification of Diseases 11th Revision (ICD-11) [27].
Instruments selection
Fifteen independent SNPs with P < 5 × 10−8 identified from the original GWAS study [19] for childhood BMI were used as instruments (Additional file 1: Table S3). The genetic risk score of these SNPs explained 2.0% of the variance in childhood BMI [19]. To avoid potential confounding, we looked up each instrument SNP and its proxies (r2 > 0.8) in the PhenoScanner GWAS database (http://phenoscanner.medschl.cam.ac.uk) [28, 29] to assess any previous associations (P < 0.0033 (0.05/15)) with 4 plausible confounders selected based on previously published studies: birth weight [19, 30, 31], years of educational attainment and age completed full time education [32,33,34], and maternal smoking around birth [35,36,37,38]. Two SNPs were associated with a potential confounder (rs12041852, maternal smoking around birth, P = 7.43 × 10−5; rs12507026 (in LD with rs13130484), years of educational attainment, P = 0.0028), resulting in a set of 13 SNPs for further analysis. In addition to the GWAS that reported these SNPs, 11 of the 13 loci have also been reported to be associated with childhood obesity in other previously published studies (Additional file 1: Table S4). For each outcome, we also used the RadialMR [39] package to further exclude outlying pleiotropic SNPs. RadialMR [39] identifies outlying genetic instruments via modified Q-statistics. Among the 13 SNPs, 9 were in LD with adult BMI variants (Additional file 1: Table S3). The effect sizes (se) of the remaining 4 SNPs were 0.042 (0.007), 0.045 (0.008), 0.041 (0.007), and 0.139 (0.025), respectively.
MR analyses
We used four complementary methods of two-sample MR (inverse variance weighted (IVW) method, MR-Egger method, weighted median method, and weighted mode method) to estimate the causal effects. They make different assumptions about horizontal pleiotropy. When the horizontal pleiotropy is balanced (i.e., the pleiotropic effects are independent of SNP-exposure effects), there should be no bias in the effect derived from MR. If the horizontal pleiotropic effects are biasing the estimate in the same direction (directional pleiotropy), the causal estimates will be biased (except for the MR-Egger method).
The IVW method assumes balanced pleiotropy [40]. We obtained the IVW estimate by meta-analyzing the SNP specific Wald estimates using multiplicative random effects. Cochran's Q statistic [41] was used to check for the presence of heterogeneity, which can indicate pleiotropy. Cochran's Q statistic [41] follows a χ2 distribution with L − 1 degrees of freedom (L refers to the number of instruments) under the null hypothesis of homogeneity.
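For reference, the per-SNP Wald ratio, the IVW estimate, and Cochran's Q statistic take the following standard forms (notation ours, not taken from the article):

$$ \hat{\beta}_j = \frac{\hat{\Gamma}_j}{\hat{\gamma}_j}, \qquad \hat{\beta}_{IVW} = \frac{\sum_j w_j \hat{\beta}_j}{\sum_j w_j}, \qquad Q = \sum_j w_j \left(\hat{\beta}_j - \hat{\beta}_{IVW}\right)^2 $$

where the numerator and denominator of the Wald ratio are the SNP-outcome and SNP-exposure association estimates for instrument j, and w_j is the inverse of the variance of the j-th Wald ratio.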
The MR-Egger method is based on the INSIDE assumption (instrument strength independent of the direct effects) [40]. It requires that the SNPs' potential pleiotropic effects are independent of the SNPs' association with the exposure [40]. MR-Egger is also based on the no measurement error in the SNP exposure effects (NOME) assumption, which can be evaluated by the regression dilution I2 (GX) [42]. When I2 (GX) < 0.9, adjustment methods should be considered [42]. Therefore, simulation extrapolation (SIMEX) correction analysis was performed to estimate the causal effect when I2 (GX) < 0.9 [42]. The intercept term of the MR-Egger method represents an estimate of the directional pleiotropic effect [43]. We also calculated the Rucker's Q′ statistic [44] to measure the heterogeneity in the MR-Egger analysis. Rucker's Q′ follows a χ2 distribution with L − 2 degrees of freedom under the null hypothesis of no heterogeneity (L refers to the number of instruments) [44]. Generally, we have Rucker's Q′ ≤ Cochran's Q [44]. If the difference Q − Q′ is sufficiently extreme with respect to a χ2 distribution with the 1 degree of freedom, we would infer that directional pleiotropy is an important factor and MR-Egger model provides a better fit than the IVW method [45].
The weighted median method estimates the causal effect under the assumption that at least 50% of the total weight of the instrument comes from valid variants [46]. Compared with IVW and MR-Egger, this method has greater robustness to provide a consistent causal effect estimate even when up to 50% of the SNPs are invalid instruments [46]. The mode-based method provides a consistent effect estimate when the largest number of similar individual-instrument estimates come from valid instruments, even if the majority of instruments are invalid [47].
We also used MR pleiotropy residual sum and outlier (MR-PRESSO) global test [48] to detect horizontal pleiotropy. The analyses of the four MR methods were carried out using the TwoSampleMR package in R. We chose the main MR method as follows:
If no directional pleiotropy was detected (P > 0.05 for tests of Q, MR-Egger intercept, Q − Q′ and MR-PRESSO), use IVW.
If directional pleiotropy was detected and P > 0.05 for the test of Q′, use MR-Egger.
If directional pleiotropy was detected and P < 0.05 for the test of Q′, use weighted median.
We also checked the consistency of the directions in all four MR methods. Only significant results with the same direction in all methods were retained, to make sure that the positive results we selected were robust under different assumptions.
Effect estimates are reported in β values for continuous outcomes and converted to ORs for dichotomous outcomes.
For outcomes with significant MR analysis results, leave-one-out sensitivity analysis was carried out to check whether the causal association was driven by a single SNP. Over 95% of the outcome datasets we used are specifically for the UK Biobank population. To check whether the significant results could be replicated in other datasets, we performed MR analysis for 5 outcomes (CAD, disease count, hypertensive disease, osteoarthritis, and T2D) with available summary data from resources without UK Biobank participants. The datasets for disease count, hypertensive disease, and osteoarthritis were from the Genetic Epidemiology Research on Adult Health and Aging (GERA) cohort [49]. The datasets for CAD and T2D were obtained from the studies performed by Nikpay et al. [50] and Scott et al. [51], respectively. Detailed phenotype description and data access URLs are listed in Additional file 1: Table S5.
Estimating the number of independent outcomes
As our analysis involved a large number of summary datasets, we expected that some of these outcomes might be highly correlated with each other. Therefore, we used PhenoSpD [52] to estimate the number of independent outcomes to correct for multiple testing. We used the LD score regression method [25] to create a correlation matrix across the outcomes. The matrix was used as an input for PhenoSpD to assess the number of independent outcomes through matrix spectral decomposition. If the number of independent outcomes is n, the significance threshold was set at 0.05/n after multiple testing correction.
Polygenic risk score (PRS) analyses for dietary habits with significant MR results
We used the SNP instruments in the MR analysis as markers to construct PRSs for childhood BMI and adult BMI. UK Biobank samples were used as the target datasets. We only used samples of European ancestry. Samples with missingness > 5%, samples with mismatched phenotypic and genotypic sex, and samples that had withdrawn consent were excluded. PRSs were calculated using the software PRSice [53]. The correlations between PRSs and dietary traits were tested with age, sex, and the top 10 principal components as covariates. Logistic regression was used for binary phenotypes and linear regression was used for continuous phenotypes.
Instruments for adult BMI and multivariable MR analysis
For outcomes with significant MR analysis results, we also carried out MR analyses for adult BMI. Over 95% of the outcome data we used were from the UK Biobank population. In two-sample MR analysis, overlap in participants between the exposure and outcome samples can bias the estimate of the risk factor-outcome association [54]. Therefore, we used the adult BMI SNPs reported by Locke et al. [55] rather than the SNPs reported by Yengo et al. [56], since 65% (450,000/700,000) of the samples in Yengo et al. are UK Biobank participants. To avoid potential confounding caused by ancestry [22], we only used the SNPs reported by Locke et al. for European-descent individuals. Among the total of 77 SNPs, one SNP (rs12016871) was not present in the 60 summary datasets of the outcomes, so we used the remaining 76 independent SNPs as instruments (Additional file 1: Table S6). Multivariable MR analysis [16] was then used to determine whether childhood BMI and adult BMI affect the outcomes through the same pathway or whether they have independent effects. SNPs from the univariable MR analysis were used after performing linkage disequilibrium clumping to account for instrument correlation between the two sets.
Reverse-direction MR analyses
For the 60 traits with significant causal effects, we also performed reverse-direction MR to assess potential reverse causal effects. For each exposure, we used the clumping algorithm in PLINK [57] to select independent SNPs for each trait (r2 threshold = 0.001, window size = 1 Mb and P < 5 × 10−8). The 1000G European data (phase 3) were used as the reference for LD estimation. For exposures with less than 3 significant SNPs available for MR, we used SNPs meeting a more relaxed threshold (P < 1 × 10−5). This relaxing statistical threshold method for genetic instruments has been used in previous MR studies [26]. The MR analyses process was the same as previously described.
According to the cross-trait LD score analyses, 263 outcomes showed genetic correlation with childhood BMI (Fig. 1b, Additional file 1: Table S2), including 249 outcomes specific to the UK Biobank population. We manually checked the cohorts involved in these outcomes and found that the samples in these studies did not overlap with those in the childhood BMI study. These outcomes (138 disease-related traits, 80 lifestyle factors, and 45 other traits) were subjected to subsequent MR analysis.
Assessment of pleiotropy
The results of assessment of pleiotropy are shown in Additional file 1: Table S7. No significant evidence of pleiotropy was detected by the Cochran's Q test and MR-PRESSO global test (P > 0.05). MR-Egger's intercept test detected evidence of directional pleiotropy for 2 outcomes (P < 0.05, Additional file 1: Table S7, Additional file 2: Fig. S2A). The difference Q − Q′ is sufficiently extreme with respect to a χ2 distribution with the 1 degree of freedom in additional 7 outcomes (P < 0.05, Additional file 1: Table S7, Additional file 2: Fig. S2B). Since Rucker's Q′ test did not detect evidence of heterogeneity in these 9 outcomes, MR-Egger was chosen as the main method for them. For the other outcomes without evidence of directional pleiotropy, we chose IVW as the main MR method.
The NOME assumption violation (I2 (GX) < 0.9) was detected in all outcomes (Additional file 1: Table S7). Therefore, we also carried out MR-Egger with SIMEX analyses.
MR results
The results of PhenoSpD showed that the independent outcome number was 145, setting the Bonferroni P value threshold for our main MR analysis at P < 3.45 × 10−4 (0.05/145). In addition to multiple testing corrections of the main MR method, P < 3.45 × 10−4 of the weighted median method was also set as a cutoff to obtain confident results supported by at least two MR methods. Sixty significant associations were detected (Additional file 1: Table S8). A total of 27 disease-related traits, 27 lifestyle factors, and 6 other traits were included. For better illustration, we summarized the MR findings in Figs. 2, 3, 4, and 5.
Summary view of the MR analysis results for the disease-related traits. Traits with significant positive associations with childhood BMI are shown in red. Traits with significant negative associations with childhood BMI are shown in blue. The other traits are shown in black. Traits from resources not specific to the UK Biobank population are shown in italic. For diseases from the UK Biobank population, those with pre-posed code (e.g., K80 Cholelithiasis) are obtained from clinical diagnoses. Diseases without pre-posed code were obtained from questionnaire. The URLs for detailed description for all phenotypes are listed in Additional file 1: Table S2
Summary Mendelian randomization (MR) estimates derived from the inverse-variance weighted, MR-Egger, weighted median, and weighted mode-based methods for the 27 disease-related traits. Childhood BMI was used as exposure and significant associations were detected for these traits
Summary view of the MR analysis results for the lifestyle factors and other traits. Traits with significant positive associations with childhood BMI are shown in red. Traits with significant negative associations with childhood BMI are shown in blue. The other traits are shown in black. Traits from resources not specific to the UK Biobank population are shown in italic. The URLs for detailed description for all phenotypes are listed in Additional file 1: Table S2
Summary Mendelian randomization (MR) estimates derived from the inverse-variance weighted, MR-Egger, weighted median, and weighted mode-based methods for the 27 lifestyle factors and 6 other hematological test traits. Childhood BMI was used as exposure and significant associations were detected for these traits
The performances of the four methods were similar. Using the threshold of P < 3.45 × 10−4, the IVW and weighted median methods supported the causal associations between childhood BMI and all 60 traits. But the numbers of associations supported by the weighted mode and MR-Egger methods were only 1 and 3 outcomes, respectively. The difference may be due to the fact that the power of weighted mode and MR-Egger methods is smaller than that of the IVW and weighted median methods [47]. At the suggestive significant level of 0.05, 59 of the 60 associations were supported by at least three methods. The weighted mode and MR-Egger method detected the associations with 58 and 29 outcomes, respectively. This is consistent with the previous report that MR-Egger has the lowest power of the four methods to detect a causal effect [47].
Childhood obesity is a risk factor for general health outcomes in adulthood
As shown in Figs. 2 and 3 and Additional file 1: Table S8, there is evidence that childhood BMI causally affects a total of 27 outcomes related to adult diseases, including 3 general health traits; 3 circulatory system traits; 7 endocrine, nutritional, metabolic traits; 5 musculoskeletal system traits; and 9 other traits.
Childhood BMI and general health
As shown in Fig. 3 and Additional file 2: Fig. S3, higher childhood BMI was associated with a reduced overall health rating (β = − 0.10, 95% CI − 0.13 to − 0.07, P = 6.26 × 10−11) and an increased number of self-reported non-cancer illnesses (β = 0.09, 95% CI 0.06 to 0.13, P = 1.58 × 10−7). One SD increase in childhood BMI was associated with 9% higher odds of long-standing illness, disability or infirmity (OR = 1.09, 95% CI 1.06 to 1.12, P = 8.50 × 10−11). Leave-one-out analysis showed that no single SNP was driving the causal estimates (Additional file 2: Fig. S3). There was no association between childhood BMI and falls in the last year (P > 0.05, Additional file 1: Table S8).
Childhood BMI and circulatory system traits
We found that a 1 SD increase in childhood BMI was associated with 9% higher odds of CAD (OR = 1.09, 95% CI 1.06 to 1.11, P = 4.28 × 10−11, Fig. 3, Additional file 2: Fig. S4). The other two circulatory system traits with significant associations are essential hypertension (OR = 1.12, 95% CI 1.08 to 1.16, P = 1.27 × 10−11) and high blood pressure diagnosed by doctor (OR = 1.14, 95% CI 1.09 to 1.18, P = 3.12 × 10−11) (Fig. 3, Additional file 2: Fig. S4). Analyses of treatment/medication conditions also showed that higher childhood BMI increased the risk of receiving blood pressure medication (Fig. 3, Additional file 2: Fig. S5). In contrast, we did not detect any association for acute myocardial infarction and varicose veins of lower extremities (P > 0.05 in all four MR methods; Additional file 1: Table S8). For the other traits, suggestive association signals were detected in at least one MR method, but the associations were no longer significant after multiple testing corrections.
Childhood BMI and endocrine, nutritional, or metabolic traits
We observed that a 1 SD increase in childhood BMI was associated with 36% higher odds of T2D (OR = 1.36, 95% CI 1.30 to 1.43, P = 1.57 × 10−34, Fig. 3, Additional file 2: Fig. S6). We also found evidence that higher childhood BMI caused an increased risk of the other 3 diabetes-related traits (Fig. 3, Additional file 2: Fig. S6). Higher childhood BMI also increased the risk of receiving metformin, a drug for T2D treatment (Fig. 3, Additional file 2: Fig. S5). We observed adverse effects of childhood BMI on self-reported hypothyroidism (OR = 1.06, 95% CI 1.03 to 1.09, P = 8.77 × 10−6) and non-cancer thyroid problems (OR = 1.07, 95% CI 1.04 to 1.10, P = 7.78 × 10−7). For lipid traits, childhood BMI was negatively correlated with HDL cholesterol level (β = − 0.13, 95% CI − 0.19 to − 0.07, P = 1.33 × 10−5). The associations with triglycerides (β = 0.09, 95% CI − 0.04 to 0.14, P = 5.09 × 10−4) and total cholesterol (β = − 0.07, 95% CI − 0.12 to − 0.02, P = 8.11 × 10−3) were suggestive. No association between childhood BMI and LDL cholesterol level was detected (P > 0.05 in all four MR methods; Additional file 1: Table S8).
Childhood BMI and musculoskeletal system traits
As shown in Fig. 3 and Additional file 2: Fig. S7, we observed adverse effects of childhood BMI on self-reported osteoarthritis (OR = 1.07, 95% CI 1.05 to 1.10, P = 7.20 × 10−8), arthrosis (OR = 1.09, 95% CI 1.06 to 1.12, P = 8.80 × 10−9), and related traits. We also found evidence that childhood BMI was positively associated with adult heel bone mineral density (BMD) (details in Additional file 1: Table S2) (β = 0.20, 95% CI 0.15 to 0.24, P = 3.40 × 10−20).
Childhood BMI and other disease-related traits
As shown in Fig. 3 and Additional file 2: Fig. S8, we found evidence that higher childhood BMI caused an increased risk of cholelithiasis (OR = 1.26, 95% CI 1.18 to 1.35, P = 3.29 × 10−5), and the risk effect was supported by three MR methods (IVW, weighted median, and MR-Egger) after multiple testing corrections. Consistent with findings about general health, higher childhood BMI was also found to be associated with reduced health satisfaction (β = − 0.13, 95% CI − 0.18 to − 0.08, P = 7.44 × 10−7).
Childhood BMI and adult lifestyle factors
As shown in Figs. 4 and 5, there is evidence that childhood BMI causally affects a total of 27 adult lifestyle factors, including 20 dietary habits, 4 smoking behaviors, usual walking pace (including three categories: slow pace (less than 3 miles per hour), steady average pace (3–4 miles per hour), and brisk pace (more than 4 miles per hour), details in Additional file 1: Table S2), pub/social club (a type of leisure/social activities, details in Additional file 1: Table S2), and alcohol intake frequency.
Childhood BMI and adult physical activities, smoking/drinking behaviors
As shown in Fig. 5 and Additional file 2: Fig. S9, for physical activities, we noticed that childhood BMI was negatively associated with usual walking pace (β = − 0.12, 95% CI − 0.15 to − 0.08, P = 3.24 × 10−10). For smoking behaviors, we observed positive associations between childhood BMI and adult smoking status. Higher childhood BMI was negatively associated with alcohol intake frequency (β = − 0.13, CI − 0.17 to − 0.09, P = 2.74 × 10−11).
Childhood BMI and adult dietary habits
We observed a positive association between childhood BMI and adult diet portion size (β = 0.26, 95% CI 0.18 to 0.34, P = 7.34 × 10−11, Fig. 5, Additional file 2: Fig. S10). In contrast, higher childhood BMI was associated with low calorie density food intake (Fig. 5, Additional file 2: Fig. S10). For example, childhood BMI was positively associated with the intake of high-fiber foods (e.g., fresh fruit intake, bran cereal, and wholemeal bread) and low fat/sugar food (e.g., skimmed milk, never/rarely using spread on bread, never eat sugar or food/drinks containing sugar). We also found negative associations between childhood BMI and the intake of meat (beef, lamb/mutton, and processed meat), full cream milk, and butter spread on bread.
Childhood BMI and other traits
Among the hematological test traits, we observed positive associations between childhood BMI and several reticulocyte-related traits (e.g., reticulocyte percentage, β = 0.10, 95% CI 0.07 to 0.14, P = 9.15 × 10−8).
We did not observe a significant association between childhood BMI and education qualification-related traits (Additional file 1: Table S8). For socioeconomic status, we observed suggestive evidence that childhood BMI was negatively associated with average total household income before tax (P < 0.05 in the IVW, weighted median, and weighted mode methods, but this association did not meet our significance criterion). We did not detect any association between childhood BMI and Townsend deprivation index at recruitment (a measure of material deprivation within a population which incorporates four variables: unemployment, non-car ownership, non-home ownership, and household overcrowding; details in Additional file 1: Table S2) (P > 0.05 in all four MR methods; Additional file 1: Table S8).
MR analyses in additional datasets without UK Biobank participants and PRS analysis for dietary habits in the UK Biobank data
We used several datasets (Additional file 1: Table S5) without UK Biobank participants to check whether the significant results could also be found in other studies. The results were consistent with our previous findings for disease-related traits (Additional file 1: Table S9, Additional file 2: Fig. S11). For example, childhood BMI was positively associated with disease count (β = 0.14, CI 0.06 to 0.22, P = 6.32 × 10−4). Higher childhood BMI increased the risk of CAD (OR = 1.10, CI 1.06 to 1.12, P = 1.20 × 10−6), hypertensive disease (OR = 1.21, CI 1.11 to 1.32, P = 1.33 × 10−5), T2D (OR = 1.18, CI 1.12 to 1.24, P = 8.85 × 10−11), and osteoarthritis (OR = 1.16, CI 1.06 to 1.26, P = 1.04 × 10−3).
We could not find available summary data in the European population for the other significant traits. Specifically, for dietary habits, we carried out MR analysis using adult BMI as exposure and 3 dietary habits from the Asian population as outcomes instead. Sixteen SNP instruments (Additional file 1: Table S10) were selected from the GWAS study by Wen et al. [58] in 86,757 Asians recruited from 21 studies. The outcome data were published by Matoba et al. [59], including up to 165,084 Japanese individuals collected by Biobank Japan (Additional file 1: Table S11). As shown in Additional file 1: Table S11, significant positive association between adult BMI and coffee intake was observed (β = 0.17, CI 0.11 to 0.24, P = 1.08 × 10−7). However, no association was found between adult BMI and meat/vegetable intake (P > 0.05).
We further carried out PRS analysis in the UK Biobank population. As shown in Additional file 1: Table S12, PRS for both childhood BMI and adult BMI is associated with higher portion sizes, more fruit intake and other low calorie density food intake, the direction of which was the same as the MR analysis.
Multivariable MR analyses
The independent effects of childhood BMI after accounting for adult BMI
For the 60 outcomes with significant MR analysis results, we also carried out MR analyses for adult BMI. As might be expected, although the effect sizes were different, at least suggestive associations (P < 0.05) were detected between adult BMI and these traits (Additional file 1: Table S13). The results were similar to those of Millard et al. [12]. We performed multivariable MR analyses to assess the causal effects of childhood BMI that might be independent of adult BMI. As shown in Additional file 1: Table S14, after accounting for adult BMI, the effects of childhood BMI on adult traits were attenuated or no longer present. At the significance level of P < 0.05, we detected associations between childhood BMI and 14 traits, including 12 dietary habits, heel BMD, and reticulocyte percentage. Of note, the detrimental effects of childhood BMI on disease-related traits (e.g., CAD, T2D, and arthrosis) were no longer present (P > 0.05).
Positive association between adult BMI and heel BMD was no longer present after accounting for childhood BMI
We also analyzed whether the effects of adult BMI are independent of childhood BMI. As shown in Additional file 1: Table S15, at the significance level of P < 0.05, the associations between adult BMI and 70% (42/60) of the traits remained after accounting for childhood BMI. Of note, while the positive association between childhood BMI and heel BMD remained significant after accounting for adult BMI (β = 0.11, CI 0.02 to 0.20, P = 0.0211, Additional file 1: Table S14), the association between adult BMI and heel BMD was no longer present after accounting for childhood BMI (P > 0.05, Additional file 1: Table S15).
The independent outcome number for the 60 traits was 33, setting the Bonferroni P value threshold for the main MR analysis at P < 1.52 × 10−3 (0.05/33). Similar to the forward MR analysis, confident results supported by both main MR method and weighted median MR method were considered as significant. As shown in Additional file 1: Table S16, we did not detect significant association for childhood BMI. Significant associations between 6 traits and adult BMI were detected (Additional file 1: Table S17 and Additional file 2: Fig. S12), including 3 diabetes traits, overall health rating (β = − 0.36, CI − 0.45 to − 0.27, P = 2.41 × 10−14), alcohol intake frequency (β = − 0.30, CI − 0.40 to − 0.21, P = 6.86 × 10−10), and usual walking pace (β = − 0.25, CI − 0.38 to − 0.13, P = 3.53 × 10−5). In addition, we observed suggestive positive association between portion size and adult BMI (β = 0.22, CI 0.06 to 0.37, P = 6.09 × 10−3).
In this study, with GWAS summary data from public resources, we carried out two-sample MR analyses to investigate the causal effects of childhood BMI on adult outcomes with genetic correlation. We identified potential causal effects of childhood obesity on 60 adult traits. Compared with previous studies of childhood BMI which only focused on a few traits [10, 11], here we provided a phenome-wide investigation of the causal associations between childhood BMI and adult outcomes.
We observed that childhood obesity is a risk factor for general health outcomes in adulthood. Consistently, previous studies have demonstrated that high childhood BMI was associated with increased mortality and morbidity [2] in adulthood.
Specifically, we observed adverse effects of higher childhood BMI on CAD and T2D. This is consistent with the results of a previous MR study by Geng et al. [10]. We also replicated their finding about the negative association between childhood BMI and HDL cholesterol level, which is a well-known trait inversely related with CAD [60]. In addition, positive association between childhood BMI and high blood pressure was supported by different MR methods. Our analyses on treatment/medication conditions further showed that higher childhood BMI increased the risk of receiving CAD and T2D related medications, including blood pressure medication and metformin. Observational studies have also shown that higher childhood BMI is related to increased incidence of diabetes [61], CAD [3], and hypertension [62]. These data supported that childhood obesity might be a determinant of adult CAD/T2D risk.
Consistent with another MR study on childhood BMI [11], we detected positive association between childhood BMI and adult osteoarthritis, especially hip and knee pain. A previous observational study suggested that obesity from childhood had an accumulative effect on knee osteoarthritis development [63]. Similarly, a study by McFarlane et al. [64] on the 1958 British birth cohort observed a significant association with knee pain at the age of 45 years with high BMI from as early as age 11 years [64]. Moreover, another study [65] reported that the childhood overweight measures were significantly associated with adulthood knee mechanical joint pain among males. Therefore, it is possible that the effect of childhood obesity on the knee joint can persist into adulthood.
The adverse effects of childhood BMI on disease-related traits were no longer present after accounting for adult BMI
Our multivariable MR analysis results showed that the positive associations between childhood BMI and increased risks of adult diseases (e.g., CAD, T2D, and arthrosis) were no longer present (P > 0.05) after accounting for adult BMI. Consistently, Richardson et al. [17] showed that the causal adverse effects of large body size in early life on CAD and T2D depend on adult body size. A recent observational study [15] has shown that the association between childhood overweight and adult T2D only holds if the overweight continued until puberty or later ages. These findings suggest that there is a window of opportunity to mitigate the detrimental impact of childhood obesity. Indeed, a previous study [66] observed reversal of T2D and improvements in cardiovascular risk factors after surgical weight loss in adolescents. Therefore, ensuring that childhood obesity does not persist into later life might be useful for reducing the detrimental effects of childhood obesity on adult diseases. On the other hand, since 70% of obese adults were not obese in childhood or adolescence [67], targeting obesity reduction in adults is still very important to reduce the overall burden of obesity.
The significant association between higher childhood BMI and low calorie density food intake in adulthood
For dietary habits, it was unexpected that higher childhood BMI was associated with low calorie density food intake. However, positive associations between childhood obesity and healthy dietary habits have previously been reported in observational studies. For example, a healthy diet score was associated with increased odds of overweight/obesity in children from the UK [68]. Similarly, less frequent intake of energy-dense foods was associated with a larger waist circumference in Swedish children [69]. It is possible that subjects suffering from childhood obesity reduce their intake of unhealthy foods in order to lose weight.
The PRS analysis using UK Biobank data also detected an association between higher childhood/adult BMI and low calorie density food intake. However, our MR analysis in the Asian population did not find any significant association between adult BMI and meat/vegetable intake. Therefore, it is likely that the association between BMI and low energy density food is specific to the UK Biobank population. Our current results might be affected by the fact that individuals enrolled in the UK Biobank demonstrate a "healthy volunteer bias" [70], with lower rates of obesity and fewer self-reported health conditions than the general population.
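To make the PRS analysis mentioned above more concrete, the sketch below shows the basic computation behind a polygenic risk score: a weighted sum of effect-allele dosages using GWAS effect sizes. The file names and column names are illustrative assumptions only; the actual analysis was performed with dedicated software (PRSice, cited in the reference list) rather than an ad hoc script of this kind.

    import pandas as pd

    # Illustrative sketch only; file paths and column names are assumptions,
    # not the files used in this study.
    weights = pd.read_csv("childhood_bmi_weights.tsv", sep="\t")          # columns: SNP, beta
    dosages = pd.read_csv("ukb_dosages.tsv", sep="\t", index_col="IID")   # one column per SNP, values 0-2

    # Restrict to SNPs present in both the weights file and the genotype matrix
    shared = [snp for snp in weights["SNP"] if snp in dosages.columns]
    beta = weights.set_index("SNP").loc[shared, "beta"]

    # Polygenic risk score per individual: sum over SNPs of (dosage x effect size)
    prs = dosages[shared].mul(beta, axis=1).sum(axis=1)
    print(prs.describe())

In practice, clumping/pruning of correlated SNPs and harmonisation of effect alleles are handled by the PRS software before a score of this form is computed.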
We observed a positive association between childhood BMI and adult heel BMD. A previous MR study reported that adiposity is causally related to increased BMD at all sites except the skull in 5221 subjects from the Avon Longitudinal Study of Parents and Children [71]. In adults, MR analysis suggested that adiposity might be causally related to BMD at the femur [72]. A protective effect of higher adult BMI on osteoporosis has also been reported previously [73]. We likewise observed a positive association between adult BMI and adult heel BMD. However, after accounting for childhood BMI, the positive association between adult BMI and heel BMD vanished, suggesting that this association depends on childhood BMI. It is widely accepted that most of the skeletal mass is acquired by the age of 20, and several studies have suggested that peak bone density is achieved by the end of adolescence [74, 75]. The risk of developing osteoporosis is influenced to a large extent by the level of peak BMD. Our results imply that the BMD-increasing effect of obesity might act mainly in childhood. Further investigations in adults that take peak BMD into consideration are needed to confirm our findings.
The reverse-direction causal effects
In the reverse-direction MR analyses, we did not detect any significant association between the 60 traits and childhood BMI. In contrast, six traits were detected to be causally associated with adult BMI. For example, we noticed that diabetes diagnosed by a doctor was negatively associated with adult BMI. This might be expected, since lipolysis, proteolysis, and acute fluid loss during diabetes can cause weight loss [76]. Of note, we detected a negative causal effect of alcohol intake frequency on adult BMI. Consistently, Tolstrup et al. [77] reported that obesity was inversely associated with drinking frequency for a given level of total alcohol intake. A previous study [78] of alcohol-dependent individuals reported that subjects consuming the highest levels of alcohol had decreased fat mass. In addition, high alcohol consumption might impair nutrient absorption [79]. However, while frequently drinking moderate amounts of alcohol may protect individuals from weight gain, heavy drinking is more consistently related to weight gain [80]. In the forward MR analysis, a negative causal effect of adult BMI on alcohol intake frequency was detected. These results highlight a bidirectional relationship between obesity and alcohol intake; further studies are needed to detail the mechanistic link between obesity and alcohol consumption. We also detected a negative causal effect of usual walking pace on adult BMI, while the forward MR analysis showed a negative causal effect of adult BMI on usual walking pace. Previous studies have reported that obese adults prefer to walk at a slower speed than their lean counterparts [81, 82]. As the most common type of physical activity in daily life, walking is the principal component of non-exercise activity thermogenesis [83]. Since higher levels of physical activity are consistently associated with weight loss maintenance [84], increasing usual walking speed may be an active and useful strategy for weight management.
General limitations of the study
The limitations of the current study should be acknowledged. Firstly, because there are inevitably overlapping loci between childhood BMI and adult BMI, it is hard to identify which of the causal effects are due to early-life obesity as opposed to late-life effects. Moreover, the childhood BMI GWASs conducted to date are notably smaller in sample size than adulthood GWASs, so it is hard to obtain variants associated only with childhood BMI and not with overall BMI. When larger-scale GWAS data on childhood BMI become available, there will be greater power to identify SNPs specifically associated with childhood BMI, including those with smaller effects, and the results of our analysis may then be updated. Secondly, although our analyses indicated that our results were not affected by pleiotropy, we cannot rule out the possibility of a shared genetic basis rather than a causal relationship. Thirdly, since we used GWAS summary data from public databases for our analyses, we could not assess the effects of population stratification on our results. Summary data from multiple multi-ethnic populations might lead to biased association results, since different ethnic populations have different LD structures and allele frequencies [85]. The summary data we used here were mainly derived from European populations; however, since we did not restrict our analyses to European-only results, there is a potential for bias from differences in disease outcomes between Europeans and non-Europeans. UK Biobank is an unparalleled resource of extensive health information from 500,000 individuals [86], and over 95% of our results are derived from the UK Biobank population. However, UK Biobank participants have been reported to be wealthier and more educated than the general population [70], which might affect the generalizability of our results. Lastly, we did not take sex into account in either the exposures or the outcomes. In addition, clinical and public health decisions about potential interventions ideally require evidence about the effect size of the exposure on outcomes. This must be approached with care, since Mendelian randomization estimates the effects on outcomes of lifelong exposure to risk-associated SNPs, rather than of an intervention at a specific time in life for a specific duration [22]. Therefore, the effect sizes from the MR analyses in our study should not be considered equivalent to those from an RCT of a short-term intervention [87].
In summary, using public GWAS datasets, we carried out two-sample MR analyses to investigate the causal effects of childhood BMI on adult outcomes. We identified potential causal effects of childhood obesity on 60 adult traits. Our results suggest that the adverse effects of obesity might start early in childhood, but that the positive associations between childhood BMI and disease-related traits in adulthood can be attributed to individuals remaining obese in later life.
The childhood BMI dataset was downloaded from the Early Growth Genetics Consortium (http://egg-consortium.org/childhood-bmi.html, "EGG_BMI_HapMap_DISCOVERY.txt.gz").
The 903 adult outcome datasets were from the Genome-wide Complex Trait Analysis (GCTA) website (https://cnsgenomics.com/software/gcta/#DataResource), the Gene ATLAS database (http://geneatlas.roslin.ed.ac.uk/), and the LDhub GWAShare Center (http://ldsc.broadinstitute.org/). Details for each dataset can be obtained from Additional file 1: Table S2. The replication datasets (details in Additional file 1: Table S5) were from the GERA cohort [49] and the studies performed by Nikpay et al. [50] and Scott et al. [51]. The UK Biobank data were obtained under the application number 46387.
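As a convenience for readers who want to inspect the childhood BMI summary statistics named above, a minimal loading sketch is given below. It assumes the file has already been downloaded from the EGG consortium page listed above and that it is whitespace-delimited; the column layout is whatever the file itself provides and is not asserted here.

    import pandas as pd

    # Assumes EGG_BMI_HapMap_DISCOVERY.txt.gz (named above) has been downloaded locally
    # from the EGG consortium page; the delimiter is an assumption to verify.
    gwas = pd.read_csv("EGG_BMI_HapMap_DISCOVERY.txt.gz", sep=r"\s+", compression="gzip")

    # Inspect dimensions and column names before harmonising with outcome datasets
    print(gwas.shape)
    print(gwas.columns.tolist())
    print(gwas.head())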
MR: Mendelian randomization
T2D: Type 2 diabetes
LD: Linkage disequilibrium
IVW: Inverse variance weighted
InSIDE: Instrument strength independent of the direct effects
NOME: No measurement error in the SNP exposure effects
SIMEX: Simulation extrapolation
MR-PRESSO: MR pleiotropy residual sum and outlier
PRS: Polygenic risk score
Ng M, Fleming T, Robinson M, Thomson B, Graetz N, Margono C, et al. Global, regional, and national prevalence of overweight and obesity in children and adults during 1980-2013: a systematic analysis for the Global Burden of Disease Study 2013. Lancet. 2014;384(9945):766–81. https://doi.org/10.1016/S0140-6736(14)60460-8.
Reilly JJ, Kelly J. Long-term impact of overweight and obesity in childhood and adolescence on morbidity and premature mortality in adulthood: systematic review. Int J Obes. 2011;35(7):891–8. https://doi.org/10.1038/ijo.2010.222.
Baker JL, Olsen LW, Sorensen TI. Childhood body-mass index and the risk of coronary heart disease in adulthood. N Engl J Med. 2007;357(23):2329–37. https://doi.org/10.1056/NEJMoa072515.
Weihrauch-Bluher S, Schwarz P, Klusmann JH. Childhood obesity: increased risk for cardiometabolic disease and cancer in adulthood. Metabolism. 2019;92(2019):147–52. https://doi.org/10.1016/j.metabol.2018.12.001.
Simmonds M, Burch J, Llewellyn A, Griffiths C, Yang H, Owen C, Duffy S, Woolacott N. The use of measures of obesity in childhood for predicting obesity and the development of obesity-related diseases in adulthood: a systematic review and meta-analysis. Health Technol Assess. 2015;19(43):1–336. https://doi.org/10.3310/hta19430.
Laitinen J, Taponen S, Martikainen H, Pouta A, Millwood I, Hartikainen AL, Ruokonen A, Sovio U, McCarthy MI, Franks S, Järvelin MR. Body size from birth to adulthood as a predictor of self-reported polycystic ovary syndrome symptoms. Int J Obes Relat Metab Disord. 2003;27(6):710–5. https://doi.org/10.1038/sj.ijo.0802301.
Smith GD, Ebrahim S. 'Mendelian randomization': can genetic epidemiology contribute to understanding environmental determinants of disease? Int J Epidemiol. 2003;32(1):1–22. https://doi.org/10.1093/ije/dyg070.
Viitasalo A, Schnurr TM, Pitkänen N, Hollensted M, Nielsen TRH, Pahkala K, et al. Abdominal adiposity and cardiometabolic risk factors in children and adolescents: a Mendelian randomization analysis. Am J Clin Nutr. 2019;110(5):1079–87. https://doi.org/10.1093/ajcn/nqz187.
Burgess S, Butterworth A, Thompson SG. Mendelian randomization analysis with multiple genetic variants using summarized data. Genet Epidemiol. 2013;37(7):658–65. https://doi.org/10.1002/gepi.21758.
Geng T, Smith CE, Li C, Huang T. Childhood BMI and adult type 2 diabetes, coronary artery diseases, chronic kidney disease, and cardiometabolic traits: a Mendelian randomization analysis. Diabetes Care. 2018;41(5):1089–96. https://doi.org/10.2337/dc17-2141.
Prats-Uribe A, Sayols-Baixeras S, Fernandez-Sanles A, Duarte-Salles T, Logue J, Elosua R, et al. The causal association between childhood and adulthood body mass index and osteoarthritis: a mendelian randomization study. Ann Rheumatic Dis. 2018;77(Supplement 2):1188.
Millard LAC, Davies NM, Tilling K, Gaunt TR, Davey SG. Searching for the causal effects of body mass index in over 300 000 participants in UK Biobank, using Mendelian randomization. Plos Genet. 2019;15(2):e1007951. https://doi.org/10.1371/journal.pgen.1007951.
Hyppönen E, Mulugeta A, Zhou A, Santhanakrishnan VK. A data-driven approach for studying the role of body mass in multiple diseases: a phenome-wide registry-based case-control study in the UK Biobank. Lancet Digit Health. 2019;1(3):e116–e26. https://doi.org/10.1016/S2589-7500(19)30028-7.
Must A, Jacques PF, Dallal GE, Bajema CJ, Dietz WH. Long-term morbidity and mortality of overweight adolescents. A follow-up of the Harvard Growth Study of 1922 to 1935. N Engl J Med. 1992;327(19):1350–5. https://doi.org/10.1056/NEJM199211053271904.
Bjerregaard LG, Jensen BW, Ängquist L, Osler M, Sørensen TIA, Baker JL. Change in overweight from childhood to early adulthood and risk of type 2 diabetes. N Engl J Med. 2018;378(14):1302–12. https://doi.org/10.1056/NEJMoa1713231.
Burgess S, Thompson SG. Multivariable Mendelian randomization: the use of pleiotropic genetic variants to estimate causal effects. Am J Epidemiol. 2015;181(4):251–60. https://doi.org/10.1093/aje/kwu283.
Richardson TG, Sanderson E. Use of genetic variation to separate the effects of early and later life adiposity on disease risk: mendelian randomisation study. BMJ. 2020;369:m1203.
Smith GD, Davies NM, Dimou N, Egger M, Gallo V, Golub R, et al. STROBE-MR: guidelines for strengthening the reporting of Mendelian randomization studies. PeerJ Preprints. 2019;7:e27857v1.
Felix JF, Bradfield JP, Monnereau C, van der Valk RJ, Stergiakouli E, Chesi A, et al. Genome-wide association analysis identifies three new susceptibility loci for childhood body mass index. Hum Mol Genet. 2016;25(2):389–403. https://doi.org/10.1093/hmg/ddv472.
Visscher PM, Wray NR, Zhang Q, Sklar P, McCarthy MI, Brown MA, et al. 10 years of GWAS discovery: biology, function, and translation. Am J Hum Genet. 2017;101(1):5–22. https://doi.org/10.1016/j.ajhg.2017.06.005.
Watanabe K, Stringer S, Frei O, Umićević Mirkov M, de Leeuw C. A global overview of pleiotropy and genetic architecture in complex traits. Nat Genet. 2019;51(9):1339–1348.
Davies NM, Holmes MV, Davey SG. Reading Mendelian randomisation studies: a guide, glossary, and checklist for clinicians. BMJ. 2018;362:k601.
Zhu Z, Zhang F, Hu H, Bakshi A, Robinson MR, Powell JE, et al. Integration of summary data from GWAS and eQTL studies predicts complex trait gene targets. Nat Genet. 2016;48(5):481–7.
Zheng J, Baird D, Borges MC, Bowden J, Hemani G, Haycock P, Evans DM, Smith GD. Recent developments in Mendelian randomization studies. Curr Epidemiol Rep. 2017;4(4):330–45. https://doi.org/10.1007/s40471-017-0128-6.
Bulik-Sullivan BK, Loh PR, Finucane HK, Ripke S, Yang J. LD Score regression distinguishes confounding from polygenicity in genome-wide association studies. Nat Genet. 2015;47(3):291–295.
Savage JE, Jansen PR, Stringer S. Genome-wide association meta-analysis in 269,867 individuals identifies new genetic and functional links to intelligence. Nat Genet. 2018;50(7):912–9.
International classification of diseases for mortality and morbidity statistics (11th Revision). https://icd.who.int/browse11/l-m/en. Accessed 20 May 2019.
Staley JR, Blackshaw J, Kamat MA, Ellis S, Surendran P, Sun BB, Paul DS, Freitag D, Burgess S, Danesh J, Young R, Butterworth AS. PhenoScanner: a database of human genotype-phenotype associations. Bioinformatics. 2016;32(20):3207–9. https://doi.org/10.1093/bioinformatics/btw373.
Kamat MA, Blackshaw JA, Young R, Surendran P, Burgess S, Danesh J, Butterworth AS, Staley JR. PhenoScanner V2: an expanded tool for searching human genotype-phenotype associations. Bioinformatics. 2019;35(22):4851–3. https://doi.org/10.1093/bioinformatics/btz469.
Qiao Y, Ma J, Wang Y, Li W, Katzmarzyk PT, Chaput JP, et al. Birth weight and childhood obesity: a 12-country study. Int J Obes Suppl. 2015;5(Suppl 2):S74–9. https://doi.org/10.1038/ijosup.2015.23.
Barker DJ. The developmental origins of chronic adult disease. Acta Paediatr Suppl. 2004;93(446):26–33.
Ahrens W, Pigeot I, Pohlabeln H, De Henauw S, Lissner L, Molnar D, et al. Prevalence of overweight and obesity in European children below the age of 10. Int J Obes. 2014;38(Suppl 2):S99–107. https://doi.org/10.1038/ijo.2014.140.
Hahn RA, Truman BI. Education improves public health and promotes health equity. Int J Health Serv. 2015;45(4):657–78. https://doi.org/10.1177/0020731415585986.
Fonseca R, Michaud P-C, Zheng Y. The effect of education on health: evidence from national compulsory schooling reforms. SERIEs. 2020;11(1):83–103. https://doi.org/10.1007/s13209-019-0201-0.
Moller SE, Ajslev TA, Andersen CS, Dalgard C, Sorensen TI. Risk of childhood overweight after exposure to tobacco smoking in prenatal and early postnatal life. Plos One. 2014;9(10):e109184. https://doi.org/10.1371/journal.pone.0109184.
Sukjamnong S, Chan YL, Zakarya R, Saad S, Sharma P, Santiyanont R, et al. Effect of long-term maternal smoking on the offspring's lung health. Am J Physiol Lung Cell Mol Physiol. 2017;313(2):L416–L423.
Clifford A, Lang L, Chen R. Effects of maternal cigarette smoking during pregnancy on cognitive parameters of children and young adults: a literature review. Neurotoxicol Teratol. 2012;34(6):560–70. https://doi.org/10.1016/j.ntt.2012.09.004.
Montgomery SM, Ekbom A. Smoking during pregnancy and diabetes mellitus in a British longitudinal birth cohort. Bmj. 2002;324(7328):26–7. https://doi.org/10.1136/bmj.324.7328.26.
Bowden J, Spiller W, Del Greco MF, Sheehan N, Thompson J, Minelli C, et al. Improving the visualization, interpretation and analysis of two-sample summary data Mendelian randomization via the Radial plot and Radial regression. Int J Epidemiol. 2018;47(6):2100. https://doi.org/10.1093/ije/dyy265.
Bowden J, Davey Smith G, Burgess S. Mendelian randomization with invalid instruments: effect estimation and bias detection through Egger regression. Int J Epidemiol. 2015;44(2):512–25. https://doi.org/10.1093/ije/dyv080.
Cochran WG. The combination of estimates from different experiments. Biometrics. 1954;10(1):101–29. https://doi.org/10.2307/3001666.
Bowden J, Del Greco MF, Minelli C, Davey Smith G, Sheehan NA, Thompson JR. Assessing the suitability of summary data for two-sample Mendelian randomization analyses using MR-Egger regression: the role of the I2 statistic. Int J Epidemiol. 2016;45(6):1961–74. https://doi.org/10.1093/ije/dyw220.
Hemani G, Bowden J, Davey SG. Evaluating the potential role of pleiotropy in Mendelian randomization studies. Hum Mol Genet. 2018;27(R2):R195–208. https://doi.org/10.1093/hmg/ddy163.
Rucker G, Schwarzer G, Carpenter JR, Binder H, Schumacher M. Treatment-effect estimates adjusted for small-study effects via a limit meta-analysis. Biostatistics. 2011;12(1):122–42. https://doi.org/10.1093/biostatistics/kxq046.
Bowden J, Del Greco MF, Minelli C, Davey Smith G, Sheehan N, Thompson J. A framework for the investigation of pleiotropy in two-sample summary data Mendelian randomization. Stat Med. 2017;36(11):1783–802. https://doi.org/10.1002/sim.7221.
Bowden J, Davey Smith G, Haycock PC, Burgess S. Consistent estimation in Mendelian randomization with some invalid instruments using a weighted median estimator. Genet Epidemiol. 2016;40(4):304–14. https://doi.org/10.1002/gepi.21965.
Hartwig FP, Davey Smith G, Bowden J. Robust inference in summary data Mendelian randomization via the zero modal pleiotropy assumption. Int J Epidemiol. 2017;46(6):1985–98. https://doi.org/10.1093/ije/dyx102.
Verbanck M, Chen CY, Neale B, Do R. Detection of widespread horizontal pleiotropy in causal relationships inferred from Mendelian randomization between complex traits and diseases. Nat Genet. 2018;50(5):693–8. https://doi.org/10.1038/s41588-018-0099-7.
Zhu Z, Zheng Z, Zhang F, Wu Y, Trzaskowski M, Maier R. Causal associations between risk factors and common diseases inferred from GWAS summary data. Nat Commun. 2018;9(1):224. https://doi.org/10.1038/s41467-017-02317-2.
Nikpay M, Goel A, Won HH, Hall LM, Willenborg C, Kanoni S, et al. A comprehensive 1,000 Genomes-based genome-wide association meta-analysis of coronary artery disease. Nat Genet. 2015;47(10):1121–30. https://doi.org/10.1038/ng.3396.
Scott RA, Scott LJ, Mägi R, Marullo L, Gaulton KJ, Kaakinen M, et al. An expanded genome-wide association study of type 2 diabetes in Europeans. Diabetes. 2017;66(11):2888–902. https://doi.org/10.2337/db16-1253.
Zheng J, Richardson TG, Millard LAC, Hemani G, Elsworth BL, Raistrick CA, et al. PhenoSpD: an integrated toolkit for phenotypic correlation estimation and multiple testing correction using GWAS summary statistics. Gigascience. 2018;7(8):giy090.
Euesden J, Lewis CM, O'Reilly PF. PRSice: polygenic risk score software. Bioinformatics. 2015;31(9):1466–8. https://doi.org/10.1093/bioinformatics/btu848.
Burgess S, Davies NM, Thompson SG. Bias due to participant overlap in two-sample Mendelian randomization. Genet Epidemiol. 2016;40(7):597–608. https://doi.org/10.1002/gepi.21998.
Locke AE, Kahali B, Berndt SI, Justice AE, Pers TH, Day FR, et al. Genetic studies of body mass index yield new insights for obesity biology. Nature. 2015;518(7538):197–206. https://doi.org/10.1038/nature14177.
Yengo L, Sidorenko J, Kemper KE, Zheng Z, Wood AR, Weedon MN, Frayling TM, Hirschhorn J, Yang J, Visscher PM, the GIANT Consortium. Meta-analysis of genome-wide association studies for height and body mass index in ∼700000 individuals of European ancestry. Hum Mol Genet. 2018;27(20):3641–9. https://doi.org/10.1093/hmg/ddy271.
Chang CC, Chow CC, Tellier LC, Vattikuti S, Purcell SM, Lee JJ. Second-generation PLINK: rising to the challenge of larger and richer datasets. Gigascience. 2015;4(1):7. https://doi.org/10.1186/s13742-015-0047-8.
Wen W, Zheng W, Okada Y, Takeuchi F, Tabara Y, Hwang JY, et al. Meta-analysis of genome-wide association studies in East Asian-ancestry populations identifies four new loci for body mass index. Hum Mol Genet. 2014;23(20):5492–504. https://doi.org/10.1093/hmg/ddu248.
Matoba N, Akiyama M, Ishigaki K. GWAS of 165,084 Japanese individuals identified nine loci associated with dietary habits. Nat Hum Behav. 2020;4(3):308–16.
Barter P, Gotto AM, LaRosa JC, Maroni J, Szarek M, Grundy SM, et al. HDL cholesterol, very low levels of LDL cholesterol, and cardiovascular events. N Engl J Med. 2007;357(13):1301–10. https://doi.org/10.1056/NEJMoa064278.
Tirosh A, Shai I, Afek A, Dubnov-Raz G, Ayalon N, Gordon B, Derazne E, Tzur D, Shamis A, Vinker S, Rudich A. Adolescent BMI trajectory and risk of diabetes versus coronary disease. N Engl J Med. 2011;364(14):1315–25. https://doi.org/10.1056/NEJMoa1006992.
Zhang T, Zhang H, Li Y, Sun D, Li S, Fernandez C, Qi L, Harville E, Bazzano L, He J, Xue F, Chen W. Temporal relationship between childhood body mass index and insulin and its impact on adult hypertension: the Bogalusa Heart Study. Hypertension. 2016;68(3):818–23. https://doi.org/10.1161/HYPERTENSIONAHA.116.07991.
Wills AK, Black S, Cooper R, Coppack RJ, Hardy R, Martin KR, Cooper C, Kuh D. Life course body mass index and risk of knee osteoarthritis at the age of 53 years: evidence from the 1946 British birth cohort study. Ann Rheum Dis. 2012;71(5):655–60. https://doi.org/10.1136/ard.2011.154021.
Macfarlane GJ, de Silva V, Jones GT. The relationship between body mass index across the life course and knee pain in adulthood: results from the 1958 birth cohort study. Rheumatology (Oxford). 2011;50(12):2251–6. https://doi.org/10.1093/rheumatology/ker276.
Antony B, Jones G, Venn A, Cicuttini F, March L, Blizzard L, Dwyer T, Cross M, Ding C. Association between childhood overweight measures and adulthood knee pain, stiffness and dysfunction: a 25-year cohort study. Ann Rheum Dis. 2015;74(4):711–7. https://doi.org/10.1136/annrheumdis-2013-204161.
Inge TH, Miyano G, Bean J, Helmrath M, Courcoulas A, Harmon CM, Chen MK, Wilson K, Daniels SR, Garcia VF, Brandt ML, Dolan LM. Reversal of type 2 diabetes mellitus and improvements in cardiovascular risk factors after surgical weight loss in adolescents. Pediatrics. 2009;123(1):214–22. https://doi.org/10.1542/peds.2008-0522.
Simmonds M, Llewellyn A, Owen CG, Woolacott N. Predicting adult obesity from childhood obesity: a systematic review and meta-analysis. Obes Rev. 2016;17(2):95–107. https://doi.org/10.1111/obr.12334.
Wilkie HJ, Standage M, Gillison FB, Cumming SP, Katzmarzyk PT. Multiple lifestyle behaviours and overweight and obesity among children aged 9-11 years: results from the UK site of the International Study of Childhood Obesity, Lifestyle and the Environment. BMJ Open. 2016;6(2):e010677. https://doi.org/10.1136/bmjopen-2015-010677.
Lehto R, Ray C, Lahti-Koski M, Roos E. Health behaviors, waist circumference and waist-to-height ratio in children. Eur J Clin Nutr. 2011;65(7):841–8. https://doi.org/10.1038/ejcn.2011.49.
Fry A, Littlejohns TJ, Sudlow C, Doherty N, Adamska L, Sprosen T, Collins R, Allen NE. Comparison of sociodemographic and health-related characteristics of UK Biobank participants with those of the general population. Am J Epidemiol. 2017;186(9):1026–34. https://doi.org/10.1093/aje/kwx246.
Kemp JP, Sayers A, Smith GD, Tobias JH, Evans DM. Using Mendelian randomization to investigate a possible causal relationship between adiposity and increased bone mineral density at different skeletal sites in children. Int J Epidemiol. 2016;45(5):1560–72. https://doi.org/10.1093/ije/dyw079.
Warodomwichit D, Sritara C, Thakkinstian A, Chailurkit LO, Yamwong S, Ratanachaiwong W, Ongphiphadhanakul B, Sritara P. Causal inference of the effect of adiposity on bone mineral density in adults. Clin Endocrinol. 2013;78(5):694–9. https://doi.org/10.1111/cen.12061.
Zhu Z, Zheng Z, Zhang F, Wu Y, Trzaskowski M, Maier R. Causal associations between risk factors and common diseases inferred from GWAS summary data. Nat Commun. 2018;9(1):224.
Theintz G, Buchs B, Rizzoli R, Slosman D, Clavien H, Sizonenko PC, Bonjour JP. Longitudinal monitoring of bone mass accumulation in healthy adolescents: evidence for a marked reduction after 16 years of age at the levels of lumbar spine and femoral neck in female subjects. J Clin Endocrinol Metab. 1992;75(4):1060–5. https://doi.org/10.1210/jcem.75.4.1400871.
Matkovic V, Jelic T, Wardlaw GM, Ilich JZ, Goel PK, Wright JK, Andon MB, Smith KT, Heaney RP. Timing of peak bone mass in Caucasian females and its implication for the prevention of osteoporosis. Inference from a cross-sectional model. J Clin Invest. 1994;93(2):799–808. https://doi.org/10.1172/JCI117034.
Alberti KG, Zimmet PZ. Definition, diagnosis and classification of diabetes mellitus and its complications. Part 1: diagnosis and classification of diabetes mellitus provisional report of a WHO consultation. Diabet Med. 1998;15(7):539–53. https://doi.org/10.1002/(SICI)1096-9136(199807)15:7<539::AID-DIA668>3.0.CO;2-S.
Tolstrup JS, Heitmann BL, Tjønneland AM, Overvad OK, Sørensen TI, Grønbaek MN. The relation between drinking pattern and body mass index and waist and hip circumference. Int J Obes. 2005;29(5):490–7. https://doi.org/10.1038/sj.ijo.0802874.
de Timary P, Cani PD, Duchemin J, Neyrinck AM, Gihousse D, Laterre PF, Badaoui A, Leclercq S, Delzenne NM, Stärkel P. The loss of metabolic control on alcohol drinking in heavy drinking alcohol-dependent subjects. PLoS One. 2012;7(7):e38682. https://doi.org/10.1371/journal.pone.0038682.
Lieber CS. Relationships between nutrition, alcohol use, and liver disease. Alcohol Res Health. 2003;27(3):220–31.
Shelton NJ, Knott CS. Association between alcohol calorie intake and overweight and obesity in English adults. Am J Public Health. 2014;104(4):629–31. https://doi.org/10.2105/AJPH.2013.301643.
de Souza SA, Faintuch J, Valezi AC, Sant'Anna AF, Gama-Rodrigues JJ, de Batista Fonseca IC, et al. Gait cinematic analysis in morbidly obese patients. Obes Surg. 2005;15(9):1238–42. https://doi.org/10.1381/096089205774512627.
Malatesta D, Vismara L, Menegoni F, Galli M, Romei M, Capodaglio P. Mechanical external work and recovery at preferred walking speed in obese subjects. Med Sci Sports Exerc. 2009;41(2):426–34. https://doi.org/10.1249/MSS.0b013e31818606e7.
Frühbeck G. Does a NEAT difference in energy expenditure lead to obesity? Lancet. 2005;366(9486):615–6. https://doi.org/10.1016/S0140-6736(05)66834-1.
Ostendorf DM, Caldwell AE, Creasy SA. Physical activity energy expenditure and total daily energy expenditure in successful weight loss maintainers. Obesity (Silver Spring). 2019;27(3):496–504. https://doi.org/10.1002/oby.22373.
Fu J, Festen EA, Wijmenga C. Multi-ethnic studies in complex traits. Hum Mol Genet. 2011;20(R2):R206–13. https://doi.org/10.1093/hmg/ddr386.
Keyes KM, Westreich D. UK Biobank, big data, and the consequences of non-representativeness. Lancet. 2019;393(10178):1297. https://doi.org/10.1016/S0140-6736(18)33067-8.
Holmes MV, Ala-Korpela M, Smith GD. Mendelian randomization in cardiometabolic disease: challenges in evaluating causality. Nat Rev Cardiol. 2017;14(10):577–90. https://doi.org/10.1038/nrcardio.2017.78.
We gratefully acknowledge the laboratories who submitted the GWAS summary data to the public databases on which our study is based. We also thank UK Biobank for developing and curating their data resources.
This study was supported by the National Natural Science Foundation of China (31871264, 32070588), the Natural Science Foundation of Zhejiang Province (LWY20H060001), and the Fundamental Research Funds for the Central Universities. No funding body had any role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Shan-Shan Dong and Kun Zhang contributed equally to this work.
Key Laboratory of Biomedical Information Engineering of Ministry of Education, Biomedical Informatics & Genomics Center, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an, 710049, China
Shan-Shan Dong, Kun Zhang, Yan Guo, Jing-Miao Ding, Yu Rong, Jun-Cheng Feng, Shi Yao, Ruo-Han Hao, Feng Jiang, Jia-Bin Chen, Hao Wu, Xiao-Feng Chen & Tie-Lin Yang
National and Local Joint Engineering Research Center of Biodiagnosis and Biotherapy, The Second Affiliated Hospital, Xi'an Jiaotong University, Xi'an, 710004, China
Tie-Lin Yang
Shan-Shan Dong
Kun Zhang
Yan Guo
Jing-Miao Ding
Yu Rong
Jun-Cheng Feng
Shi Yao
Ruo-Han Hao
Feng Jiang
Jia-Bin Chen
Xiao-Feng Chen
T.-L.Y. and S.-S.D. designed the study. S.-S.D. and Y.G. wrote and edited the manuscript. S.-S.D., K.Z., J.-M.D., J.-C.F., S.Y., F.J., X.-F.C., H.W., and R.-H.H. collected and analyzed the data. J.-M.D., S.Y., J.-B.C., and Y.R. drew the figures. All authors read and approved the final manuscript.
Correspondence to Tie-Lin Yang.
Additional file 1: This file provides the details of Supplementary Tables S1-S17.
Additional file 2: This file provides the details of Supplementary Figures S1-S12.
Dong, SS., Zhang, K., Guo, Y. et al. Phenome-wide investigation of the causal associations between childhood BMI and adult trait outcomes: a two-sample Mendelian randomization study. Genome Med 13, 48 (2021). https://doi.org/10.1186/s13073-021-00865-3
Accepted: 11 March 2021
Adult outcome
Dr Denis Erkal
Senior Lecturer of Astrophysics
[email protected]
12 BC 03
Academic and research departments
Astrophysics Research Group.
I have been a senior lecturer in the astrophysics research group at the University of Surrey since 2017. I was previously a postdoc at the Institute of Astronomy at the University of Cambridge from 2013 to 2017. I received my PhD from the University of Chicago in 2013.
My work has mainly focussed on understanding how the Milky Way was built up by the accretion of many smaller systems. In particular, I am interested in tidal streams, which form as globular clusters or dwarf galaxies are disrupted by the tides of the Milky Way. These streams roughly follow orbits and are excellent tracers of the potential of our Galaxy.
Galactic archaeology; Near field cosmology
Twelve for dinner: The Milky Way's feeding habits shine a light on dark matter
PhD student selected for prestigious New York Pre-Doctoral Program
Astronomers discover an oversized black hole population in the star cluster Palomar 5
Scientists find remnant of strange dismembered star cluster at galaxy's edge
Hyper-fast star spotted after ejection by our galaxy's supermassive black hole
The Gaia Sausage: The major collision that changed the Milky Way galaxy
Surrey welcomes public for Stargazing Live
My research interests fall under the umbrellas of galactic archaeology and near-field cosmology. In particular, I am interested both in studying how the Milky Way was built up by the accretion of smaller systems and in using our Galaxy to learn about the properties of dark matter.
I am involved in the S5 collaboration, WEAVE, and the 4MOST survey.
Postgraduate research supervision
I have supervised masters and PhD students at Surrey and Cambridge. I am happy to support applications for PhD studentships. Please contact me for more details.
I am currently supervising
Sophia Lilleengen
Tariq Hilmi
Madison Walder
I am currently teaching
Energy, Entropy, and Numerical Physics (PHY2063)
Scientific Investigation Skills (PHY1035)
W. Cerny, J. D. Simon, T. S. Li, A. Drlica-Wagner, A. B. Pace, C. E. Martínez-Vázquez, A. H. Riley, B. Mutlu-Pakdil, S. Mau, P. S. Ferguson, D. Erkal, R. R. Munoz, C. R. Bom, J. L. Carlin, D. Carollo, Y. Choi, A. P. Ji, V. Manwadkar, D. Martínez-Delgado, A. E. Miller, N. E. D. Noël, J. D. Sakowska, D. J. Sand, G. S. Stringfellow, E. J. Tollerud, A. K. Vivas, J. A. Carballo-Bello, D. Hernandez-Lang, D. J. James, D. L. Nidever, J. L. Nilo Castellon, K. A. G. Olsen, A. Zenteno (2023) Pegasus IV: Discovery and Spectroscopic Confirmation of an Ultra-faint Dwarf Galaxy in the Constellation Pegasus, In: The Astrophysical Journal, 942(2), 111. IOP
DOI: 10.3847/1538-4357/aca1c3
We report the discovery of Pegasus IV, an ultra-faint dwarf galaxy found in archival data from the Dark Energy Camera processed by the DECam Local Volume Exploration Survey. Pegasus IV is a compact, ultra-faint stellar system (r_{1/2} = 41^{+8}_{-6} pc; M_V = −4.25 ± 0.2 mag) located at a heliocentric distance of 90^{+4}_{-6} kpc. Based on spectra of seven nonvariable member stars observed with Magellan/IMACS, we confidently resolve Pegasus IV's velocity dispersion, measuring σ_v = 3.3^{+1.7}_{-1.1} km s^{-1} (after excluding three velocity outliers); this implies a mass-to-light ratio of M_{1/2}/L_{V,1/2} = 167^{+224}_{-99} M_⊙/L_⊙ for the system. From the five stars with the highest signal-to-noise spectra, we also measure a systemic metallicity of [Fe/H] = −2.63^{+0.26}_{-0.30} dex, making Pegasus IV one of the most metal-poor ultra-faint dwarfs. We tentatively resolve a nonzero metallicity dispersion for the system. These measurements provide strong evidence that Pegasus IV is a dark-matter-dominated dwarf galaxy, rather than a star cluster. We measure Pegasus IV's proper motion using data from Gaia Early Data Release 3, finding (μ_{α*}, μ_δ) = (0.33 ± 0.07, −0.21 ± 0.08) mas yr^{-1}. When combined with our measured systemic velocity, this proper motion suggests that Pegasus IV is on an elliptical, retrograde orbit, and is currently near its orbital apocenter. Lastly, we identify three potential RR Lyrae variable stars within Pegasus IV, including one candidate member located more than 10 half-light radii away from the system's centroid. The discovery of yet another ultra-faint dwarf galaxy strongly suggests that the census of Milky Way satellites is still incomplete, even within 100 kpc.
Denis Erkal (2022) Dynamics in the outskirts of four Milky Way globular clusters: it's the tides that dominate, In: MNRAS, pp. 1-18
We present the results of a spectroscopic survey of the outskirts of 4 globular clusters— NGC 1261, NGC 4590, NGC 1904, and NGC 1851— covering targets within 1 degree from the cluster centres, with 2dF/AAOmega on the Anglo-Australian Telescope (AAT) and FLAMES on the Very Large Telescope (VLT). We extracted chemo-dynamical information for individual stars, from which we estimated the velocity dispersion profile and the rotation of each cluster. The observations are compared to direct N-body simulations and appropriate limepy/spes models for each cluster to interpret the results. In NGC 1851, the detected internal rotation agrees with existing literature, and NGC 1261 shows some rotation signal beyond the truncation radius, likely coming from the escaped stars. We find that the dispersion profiles for both the observations and the simulations for NGC 1261, NGC 1851, and NGC 1904 do not decrease as the limepy/spes models predict beyond the truncation radius, where the N-body simulations show that escaped stars dominate; the dispersion profile of NGC 4590 follows the predictions of the limepy/spes models, though the data do not effectively extend beyond the truncation radius. The increasing/flat dispersion profiles in the outskirts of NGC 1261, NGC 1851 and NGC 1904, are reproduced by the simulations. Hence, the increasing/flat dispersion profiles of the clusters in question can be explained by the tidal interaction with the Galaxy without introducing dark matter.
Sophia Lilleengen, Michael S. Petersen, Denis Erkal, Jorge Peñarrubia, Sergey E. Koposov, Ting S Li, Lara R. Cullinane, Alexander P. Ji, Kyler Kuehn, Geraint F. Lewis, Dougal Mackey, Andrew B. Pace, Nora Shipp, Daniel B. Zucker, Joss Bland-Hawthorn, Tariq Hilmi (2022)The effect of the deforming dark matter haloes of the Milky Way and the Large Magellanic Cloud on the Orphan-Chenab stream, In: Monthly Notices of the Royal Astronomical Society518(1)pp. 774-790
DOI: 10.1093/mnras/stac3108
It has recently been shown that the Large Magellanic Cloud (LMC) has a substantial effect on the Milky Way's stellar halo and stellar streams. Here, we explore how deformations of the Milky Way and LMC's dark matter haloes affect stellar streams, and whether these effects are observable. In particular, we focus on the Orphan-Chenab (OC) stream which passes particularly close to the LMC and spans a large portion of the Milky Way's halo. We represent the Milky Way–LMC system using basis function expansions that capture their evolution in an-body simulation. We present the properties of this system, such as the evolution of the densities and force fields of each galaxy. The OC stream is evolved in this time-dependent, deforming potential, and we investigate the effects of the various moments of the Milky Way and the LMC. We find that the simulated OC stream is strongly influenced by the deformations of both the Milky Way and the LMC and that this effect is much larger than current observational errors. In particular, the Milky Way dipole has the biggest impact on the stream, followed by the evolution of the LMC's monopole, and the LMC's quadrupole. Detecting these effects would confirm a key prediction of collisionless, cold dark matter, and would be a powerful test of alternative dark matter and alternative gravity models.
Christopher Eckner, Francesca Calore, Denis Erkal, Sophia Lilleengen, Michael S. Petersen (2023)How do the dynamics of the Milky Way -Large Magellanic Cloud system affect gamma-ray constraints on particle dark matter?, In: Monthly Notices of the Royal Astronomical Society518(3)pp. 4138-4158 Oxford University Press
Previous studies on astrophysical dark matter (DM) constraints have all assumed that the Milky Way's (MW) DM halo can be modelled in isolation. However, recent work suggests that the MW's largest dwarf satellite, the Large Magellanic Cloud (LMC), has a mass of 10-20% that of the MW and is currently merging with our Galaxy. As a result, the DM haloes of the MW and LMC are expected to be strongly deformed. We here address and quantify the impact of the dynamical response caused by the passage of the LMC through the MW on the prospects for indirect DM searches. Utilising a set of state-of-the-art numerical simulations of the evolution of the MW-LMC system, we derive the DM distribution in both galaxies at the present time based on the Basis Function Expansion formalism. Consequently, we build J-factor all-sky maps of the MW-LMC system to study the impact of the LMC passage on gamma-ray indirect searches for thermally produced DM annihilating in the outer MW halo as well as within the LMC halo standalone. We conduct a detailed analysis of 12 years of Fermi-LAT data that incorporates various large-scale gamma-ray emission components and we quantify the systematic uncertainty associated with the imperfect knowledge of the astrophysical gamma-ray sources. We find that the dynamical response caused by the LMC passage can alter the constraints on the velocity-averaged annihilation cross section for weak scale particle DM at a level comparable to the existing observational uncertainty of the MW halo's density profile and total mass.
Andrew B Pace, Denis Erkal, Ting S Li (2022) Proper Motions, Orbits, and Tidal Influences of Milky Way Dwarf Spheroidal Galaxies, In: The Astrophysical Journal, 940, 136. IOP Publishing
DOI: 10.3847/1538-4357/ac997b
We combine Gaia early data release 3 astrometry with accurate photometry and utilize a probabilistic mixture model to measure the systemic proper motion of 52 dwarf spheroidal (dSph) satellite galaxies of the Milky Way (MW). For the 46 dSphs with literature line-of-sight velocities we compute orbits in both a MW and a combined MW + Large Magellanic Cloud (LMC) potential and identify Car II, Car III, Hor I, Hyi I, Phx II, and Ret II as likely LMC satellites. 40% of our dSph sample has a >25% change in pericenter and/or apocenter with the MW + LMC potential. For these orbits, we use a Monte Carlo sample for the observational uncertainties for each dSph and the uncertainties in the MW and LMC potentials. We predict that Ant II, Boo III, Cra II, Gru II, and Tuc III should be tidally disrupting by comparing each dSph's average density relative to the MW density at its pericenter. dSphs with large ellipticity (CVn I, Her, Tuc V, UMa I, UMa II, UMi, Wil 1) show a preference for their orbital direction to align with their major axis even for dSphs with large pericenters. We compare the dSph radial orbital phase to subhalos in MW-like N-body simulations and infer that there is not an excess of satellites near their pericenter. With projections of future Gaia data releases, we find that dSph's orbital precision will be limited by uncertainties in the distance and/or MW potential rather than in proper motion precision. Finally, we provide our membership catalogs to enable community follow-up.
Lara Cullinane, Dougal Mackey, Gary Da Costa, Sergey E. Koposov, Denis Erkal (2023)The Magellanic Edges Survey – IV. Complex tidal debris in the SMC outskirts, In: Monthly Notices of the Royal Astronomical Society: Letters581(1)pp. L25-L30 Oxford University Press
DOI: 10.1093/mnrasl/slac129
We use data from the Magellanic Edges Survey (MagES) in combination with Gaia EDR3 to study the extreme southern outskirts of the Small Magellanic Cloud (SMC), focussing on a field at the eastern end of a long arm-like structure which wraps around the southern periphery of the Large Magellanic Cloud (LMC). Unlike the remainder of this structure, which is thought to be comprised of perturbed LMC disc material, the aggregate properties of the field indicate a clear connection with the SMC. We find evidence for two stellar populations in the field: one having properties consistent with the outskirts of the main SMC body, and the other significantly perturbed. The perturbed population is on average ∼0.2 dex more metal-rich, and is located ∼7 kpc in front of the dominant population with a total space velocity relative to the SMC centre of ∼230 km s−1 broadly in the direction of the LMC. We speculate on possible origins for this perturbed population, the most plausible of which is that it comprises debris from the inner SMC that has been recently tidally stripped by interactions with the LMC.
W. Cerny, Joshua D. Simon, TIANRU LI, A Drlica-Wagner, A.B Pace, C. E Martınez-Vazquez, A. H. Riley, B. Mutlu-Pakdil, S Mau, P. S. Ferguson, DENIS ERKAL, Ricardo R. Muñoz, C. R Bom, J. L. Carlin, D. Carollo, Y. Choi, A. P. Ji, D Martınez-Delgado, V Manwadkar, A. E Miller, NOELIA ESTELLA DONATA NOEL, Joanna D Sakowska, D. J. Sand, G. S. Stringfellow, E. J. Tollerud, AK Vivas, Julio A. Carballo-Bello, D. Hernandez-Lang, DEBORAH JAMES, J. L. Nilo Castellon, K. A. G Olsen, A. Zenteno (2022)Pegasus IV: Discovery and Spectroscopic Confirmation of an Ultra-Faint Dwarf Galaxy in the Constellation Pegasus
DOI: 10.48550/arXiv.2203.11788
We report the discovery of Pegasus IV, an ultra-faint dwarf galaxy found in archival data from the Dark Energy Camera processed by the DECam Local Volume Exploration Survey. Pegasus IV is a compact, ultra-faint stellar system ($r_{1/2} = 41^{+8}_{-6}$ pc; $M_V = -4.25 \pm 0.2$ mag) located at a heliocentric distance of $90^{+4}_{-6}$ kpc. Based on spectra of seven non-variable member stars observed with Magellan/IMACS, we confidently resolve Pegasus IV's velocity dispersion, measuring $\sigma_{v} = 3.3^{+1.7}_{-1.1} \text{ km s}^{-1}$ (after excluding three velocity outliers); this implies a mass-to-light ratio of $M_{1/2}/L_{V,1/2} = 167^{+224}_{-99} M_{\odot}/L_{\odot}$ for the system. From the five stars with the highest signal-to-noise spectra, we also measure a systemic metallicity of $\rm [Fe/H] = -2.67^{+0.25}_{-0.29}$ dex, making Pegasus IV one of the most metal-poor ultra-faint dwarfs. We tentatively resolve a non-zero metallicity dispersion for the system. These measurements provide strong evidence that Pegasus IV is a dark-matter-dominated dwarf galaxy, rather than a star cluster. We measure Pegasus IV's proper motion using data from Gaia Early Data Release 3, finding ($\mu_{\alpha*}, \mu_{\delta}) = (0.33\pm 0.07, -0.21 \pm 0.08) \text{ mas yr}^{-1}$. When combined with our measured systemic velocity, this proper motion suggests that Pegasus IV is on an elliptical, retrograde orbit, and is currently near its orbital apocenter. Lastly, we identify three potential RR Lyrae variable stars within Pegasus IV, including one candidate member located more than ten half-light radii away from the system's centroid. The discovery of yet another ultra-faint dwarf galaxy strongly suggests that the census of Milky Way satellites is still incomplete, even within 100 kpc.
S. Mau, W. Cerny, A.B Pace, Y. Choi, A. Drlica-Wagner, L. Santana-Silver, A.H Riley, Denis Erkal, S. Stringfellow, M. Adamow, J.L Carlin, R.A Gruendal, D. Hernandez-Lang, N. Kuropatkin, T.S Li, C.E Martinez-Vasquez, E. Morganson, B. Mutlu-Pakdil, E.H Neilsen, D.L Nidever, K.A.G Olsen, D.J Sand, E.J Tollerud, D.L Tucker, B. Yanny, A. Zenteno, S. Allam, W.A Barkhouse, K. Bechtol, E.F Bell, P. Balaji, D. Crnojevic, J. Esteves, P.S Ferguson, C. Gallart, A.K Hughes, D.J James, P. Jethwa, L.C Johnson, K. Kuehn, S. Majewski, Y-Y. Mao, P. Massana, M. McNanna, A. Monachesi, E.O Nadler, Noelia Noel, A. Palmese, F. Paz-Chinchon, A. Pieres, J. Sanchez, N. Shipp, J.D Simon, M. Soares-Santos, K. Tavangar, R.P van der Marel, A.K Vivas, A.R Walker, R.H Wechsler (2020)Two Ultra-Faint Milky Way Stellar Systems Discovered in Early Data from the DECam Local Volume Exploration Survey, In: Astrophysical Journal American Astronomical Society
We report the discovery of two ultra-faint stellar systems found in early data from the DECam Local Volume Exploration survey (DELVE). The first system, Centaurus I (DELVE J1238
Alexander P. Ji, Sergey E. Koposov, Ting S Li, DENIS ERKAL, Andrew B. Pace, Joshua D. Simon, Vasily Belokurov, Lara Cullinane, Gary S. Da Costa, Kyler Kuehn, Geraint F. Lewis, Dougal Mackey, Nora Shipp, Jeffrey D. Simpson, Daniel B. Zucker, Terese T. Hansen, Joss Bland-Hawthorn (2021)Kinematics of Antlia 2 and Crater 2 from The Southern Stellar Stream Spectroscopic Survey (S5), In: The Astrophysical Journal IOP Publishing
We present new spectroscopic observations of the diffuse Milky Way satellite galaxies Antlia 2 and Crater 2, taken as part of the Southern Stellar Stream Spectroscopic Survey (S5). The new observations approximately double the number of confirmed member stars in each galaxy and more than double the spatial extent of spectroscopic observations in Antlia 2. A full kinematic analysis, including Gaia EDR3 proper motions, detects a clear velocity gradient in Antlia 2 and a tentative velocity gradient in Crater 2. The velocity gradient magnitudes and directions are consistent with particle stream simulations of tidal disruption. Furthermore, the orbit and kinematics of Antlia 2 require a model that includes the reflex motion of the Milky Way induced by the Large Magellanic Cloud. We also find that Antlia 2's metallicity was previously overestimated, so it lies on the empirical luminosity-metallicity relation and is likely only now experiencing substantial stellar mass loss. This low stellar mass loss contrasts with current dynamical models of Antlia 2's size and velocity dispersion, which require it to have lost more than 90% of its stars to tides. Overall, the new kinematic measurements support a tidal disruption scenario for the origin of these large and extended dwarf spheroidal galaxies.
Sergey E Koposov, Douglas Boubert, Ting S Li, Denis Erkal, Gary S Da Costa, Daniel B Zucker, Alexander P Ji, Kyler Kuehn, Geraint F Lewis, Dougal Mackey, Jeffrey D Simpson, Nora Shipp, Zhen Wan, Vasily Belokurov, Joss Bland-Hawthorn, Sarah L Martell, Thomas Nordlander, Andrew B Pace, Gayandhi M De Silva, Mei-Yu Wang (2020)Discovery of a nearby 1700 km/s star ejected from the Milky Way by Sgr A*, In: Monthly Notices of the Royal Astronomical Society491(2)pp. 2465-2480 Oxford University Press
DOI: 10.1093/mnras/stz3081
We present the serendipitous discovery of the fastest main-sequence hyper-velocity star (HVS) by the Southern Stellar Stream Spectroscopic Survey (S5). The star S5-HVS1 is a ∼2.35 M⊙ A-type star located at a distance of ∼9 kpc from the Sun and has a heliocentric radial velocity of 1017 ± 2.7 km s−1 without any signature of velocity variability. The current 3D velocity of the star in the Galactic frame is 1755 ± 50 km s−1. When integrated backwards in time, the orbit of the star points unambiguously to the Galactic Centre, implying that S5-HVS1 was kicked away from Sgr A* with a velocity of ∼1800 km s−1 and travelled for 4.8 Myr to its current location. This is so far the only HVS confidently associated with the Galactic Centre. S5-HVS1 is also the first hyper-velocity star to provide constraints on the geometry and kinematics of the Galaxy, such as the Solar motion Vy,⊙ = 246.1 ± 5.3 km s−1 or position R0 = 8.12 ± 0.23 kpc. The ejection trajectory and transit time of S5-HVS1 coincide with the orbital plane and age of the annular disc of young stars at the Galactic Centre, and thus may be linked to its formation. With the S5-HVS1 ejection velocity being almost twice the velocity of other hyper-velocity stars previously associated with the Galactic Centre, we question whether they have been generated by the same mechanism or whether the ejection velocity distribution has been constant over time.
D Erkal, V Belokurov, C F P Laporte, S E Koposov, T S Li, C J Grillmair, N Kallivayalil, A M Price-Whelan, N W Evans, K Hawkins, D Hendel, C Mateu, J F Navarro, A del Pino, C T Slater, S T Sohn (2019)The total mass of the Large Magellanic Cloud from its perturbation on the Orphan stream, In: Monthly Notices of the Royal Astronomical Society Oxford University Press (OUP)
In a companion paper by Koposov et al., RR Lyrae from Gaia Data Release 2 are used to demonstrate that stars in the Orphan stream have velocity vectors significantly misaligned with the stream track, suggesting that it has received a large gravitational perturbation from a satellite of the Milky Way. We argue that such a mismatch cannot arise due to any realistic static Milky Way potential and then explore the perturbative effects of the Large Magellanic Cloud (LMC). We find that the LMC can produce precisely the observed motion-track mismatch and we therefore use the Orphan stream to measure the mass of the Cloud. We simultaneously fit the Milky Way and LMC potentials and infer that a total LMC mass of 1.38^{+0.27}_{-0.24} × 10^{11} M⊙ is required to bend the Orphan Stream, showing for the first time that the LMC has a large and measurable effect on structures orbiting the Milky Way. This has far-reaching consequences for any technique which assumes that tracers are orbiting a static Milky Way. Furthermore, we measure the Milky Way mass within 50 kpc to be 3.80^{+0.14}_{-0.11} × 10^{11} M⊙. Finally, we use these results to predict that, due to the reflex motion of the Milky Way in response to the LMC, the outskirts of the Milky Way's stellar halo should exhibit a bulk, upwards motion.
D Boubert, V Belokurov, Denis Erkal, G Iorio (2018) A Magellanic origin for the Virgo substructure, In: Monthly Notices of the Royal Astronomical Society, sty3014. Oxford University Press (OUP)
DOI: 10.1093/mnras/sty3014
The Milky Way halo has been mapped out in recent work using a sample of RR Lyrae stars drawn from a cross-match of Gaia with 2MASS. We investigate the significant residual in this map which we constrain to lie at Galactocentric radii 12 < R < 27 kpc and extend over 2600 deg2 of the sky. A counterpart of this structure exists in both the Catalina Real Time Survey and the sample of RR Lyrae variables identified in Pan-STARRS, demonstrating that this structure is not caused by the spatial inhomogeneity of Gaia. The structure is likely the Virgo Stellar Stream and/or Virgo Over-Density. We show the structure is aligned with the Magellanic Stream and suggest that it is either debris from a disrupted dwarf galaxy that was a member of the Vast Polar Structure or that it is SMC debris from a tidal interaction of the SMC and LMC 3 Gyr ago. If the latter then the sub-structure in Virgo may have a Magellanic origin.
T S Li, S E Koposov, D B Zucker, G F Lewis, K Kuehn, J D Simpson, A P Ji, N Shipp, Y-Y Mao, M Geha, A B Pace, A D Mackey, S Allam, D L Tucker, G S Da Costa, Denis Erkal, J D Simon, J R Mould, S L Martell, Z Wan, G M De Silva, K Bechtol, E Balbinot, V Belokurov, J Bland-Hawthorn, A R Casey, L Cullinane, A Drlica-Wagner, S Sharma, A K Vivas, R H Wechsler, B Yanny (2019)The Southern Stellar Stream Spectroscopic Survey (S⁵): Overview, Target Selection, Data Reduction, Validation, and Early Science, In: Monthly Notices of the Royal Astronomical Society Oxford University Press (OUP)
We introduce the Southern Stellar Stream Spectroscopic Survey (S⁵), an on-going program to map the kinematics and chemistry of stellar streams in the Southern Hemisphere. The initial focus of S⁵ has been spectroscopic observations of recently identified streams within the footprint of the Dark Energy Survey (DES), with the eventual goal of surveying streams across the entire southern sky. Stellar streams are composed of material that has been tidally stripped from dwarf galaxies and globular clusters and hence are excellent dynamical probes of the gravitational potential of the Milky Way, as well as providing a detailed snapshot of its accretion history. Observing with the 3.9-m Anglo-Australian Telescope's 2-degree-Field fibre positioner and AAOmega spectrograph, and combining the precise photometry of DES DR1 with the superb proper motions from Gaia DR2, allows us to conduct an efficient spectroscopic survey to map these stellar streams. So far S⁵ has mapped 9 DES streams and 3 streams outside of DES; the former are the first spectroscopic observations of these recently discovered streams. In addition to the stream survey, we use spare fibres to undertake a Milky Way halo survey and a low-redshift galaxy survey. This paper presents an overview of the S⁵ program, describing the scientific motivation for the survey, target selection, observation strategy, data reduction and survey validation. Finally, we describe early science results on stellar streams and Milky Way halo stars drawn from the survey. Updates on S⁵, including future public data releases, can be found at http://s5collab.github.io.
D. Boubert, D. Erkal, A. Gualandris (2020)Deflection of the hypervelocity stars by the pull of the Large Magellanic Cloud on the Milky Way, In: Monthly Notices of the Royal Astronomical Society Oxford University Press
Stars slingshotted by the supermassive black hole at the Galactic centre escape from the Milky Way so quickly that their trajectories are almost straight lines. Previous works have shown how these `hypervelocity stars' (stars moving faster than the local Galactic escape speed) are subsequently deflected by the gravitational field of the Milky Way and the Large Magellanic Cloud (LMC), but have neglected to account for the reflex motion of the Milky Way in response to the fly-by of the LMC. A consequence of this motion is that the hypervelocity stars we see in the outskirts of the Milky Way today were ejected from where the Milky Way centre was hundreds of millions of years ago. This change in perspective causes large apparent deflections of several degrees in the trajectories of the hypervelocity stars. We quantify these deflections by simulating the ejection of hypervelocity stars from an isolated Milky Way (with a spherical or flattened dark matter halo), from a fixed-in-place Milky Way with a passing LMC, and from a Milky Way which responds to the passage of the LMC, finding that LMC passage causes larger deflections than can be caused by a flattened Galactic dark matter halo in ΛCDM. The 10 as yr
J. I. Read, D. Erkal (2019)Abundance matching with the mean star formation rate: there is no missing satellites problem in the Milky Way above M200 ∼ 10⁹ M⊙, In: Monthly Notices of the Royal Astronomical Society Oxford University Press (OUP)
We introduce a novel abundance matching technique that produces a more accurate estimate of the pre-infall halo mass, M200, for satellite galaxies. To achieve this, we abundance match with the mean star formation rate, averaged over the time when a galaxy was forming stars, ⟨SFR⟩, instead of the stellar mass, M∗. Using data from the Sloan Digital Sky Survey, the GAMA survey and the Bolshoi simulation, we obtain a statistical ⟨SFR⟩−M200 relation in ΛCDM. We then compare the pre-infall halo mass, Mabund200, derived from this relation with the pre-infall dynamical mass, Mdyn200, for 21 nearby dSph and dIrr galaxies, finding a good agreement between the two. As a first application, we use our new ⟨SFR⟩−M200 relation to empirically measure the cumulative mass function of a volume-complete sample of bright Milky Way satellites within 280 kpc of the Galactic centre. Comparing this with a suite of cosmological 'zoom' simulations of Milky Way-mass halos that account for subhalo depletion by the Milky Way disc, we find no missing satellites problem above M200 ∼ 10⁹ M⊙ in the Milky Way. We discuss how this empirical method can be applied to a larger sample of nearby spiral galaxies.
Andrew R. Casey, Alexander P. Ji, Terese T. Hansen, Ting S Li, Sergey E. Koposov, Gary Da Costa, Joss Bland-Hawthorn, Lara Cullinane, DENIS ERKAL, Geraint F. Lewis, Kyler Kuehn, Dougal Mackey, Sarah L. Martell, Andrew B. Pace, Jeffrey D. Simpson, Daniel B. Zucker (2021)Signature of a massive rotating metal-poor star imprinted in the Phoenix stellar stream *, In: The Astrophysical journal IOP Publishing
The Phoenix stellar stream has a low intrinsic dispersion in velocity and metallicity that implies the progenitor was probably a low mass globular cluster. In this work we use Magellan/MIKE high-dispersion spectroscopy of eight Phoenix stream red giants to confirm this scenario. In particular, we find negligible intrinsic scatter in metallicity (σ([Fe II/H]) = 0.04 +0.11 −0.03) and a large peak-to-peak range in [Na/Fe] and [Al/Fe] abundance ratios, consistent with the light element abundance patterns seen in the most metal-poor globular clusters. However, unlike any other globular cluster, we also find an intrinsic spread in [Sr II/Fe] spanning ∼1 dex, while [Ba II/Fe] shows nearly no intrinsic spread (σ([Ba II/H]) = 0.03 +0.10 −0.02). This abundance signature is best interpreted as slow neutron capture element production from a massive fast-rotating metal-poor star (15−20 M⊙, v_ini/v_crit = 0.4, [Fe/H] = −3.8). The low inferred cluster mass suggests the system would have been unable to retain supernovae ejecta, implying that any massive fast-rotating metal-poor star that enriched the interstellar medium must have formed and evolved before the globular cluster formed. Neutron capture element production from asymptotic giant branch stars or magneto-rotational instabilities in core-collapse supernovae provide poor fits to the observations. We also report one Phoenix stream star to be a lithium-rich giant (A(Li) = 3.1 ± 0.1). At [Fe/H] = −2.93 it is among the most metal-poor lithium-rich giants known.
W. Cerny, A.B Pace, A Drlica-Wagner, S. E. Koposov, A. K. Vivas, S Mau, A. H. Riley, C. R Bom, J. L. Carlin, Y. Choi, DENIS ERKAL, P. S. Ferguson, D. J. James, T. S. Li, D Martinez-Delgado, C. E. Martínez-Vázquez, Ricardo R. Muñoz, B. Mutlu-Pakdil, K. A. G. Olsen, A Pieres, Joanna D Sakowska, D. J. Sand, J. D. Simon, A Smercina, G. S. Stringfellow, E. J. Tollerud, M. Adamów, D. Hernandez-Lang, N Kuropatkin, L. Santana-Silva, D. L. Tucker, A. Zenteno, (2021)Eridanus IV: an Ultra-Faint Dwarf Galaxy Candidate Discovered in the DECam Local Volume Exploration Survey, In: The Astrophysical Journal Letters920(2) IOP
DOI: 10.3847/2041-8213/ac2d9a
We present the discovery of a candidate ultra-faint Milky Way satellite, Eridanus IV (DELVE J0505$-$0931), detected in photometric data from the DECam Local Volume Exploration survey (DELVE). Eridanus IV is a faint ($M_V = -4.7 \pm 0.2$), extended ($r_{1/2} = 75^{+16}_{-13}$ pc), and elliptical ($\epsilon = 0.54 \pm 0.1$) system at a heliocentric distance of $76.7^{+4.0}_{-6.1}$ kpc, with a stellar population that is well-described by an old, metal-poor isochrone (age of $\tau \sim 13.0$ Gyr and metallicity of ${\rm [Fe/H] \lesssim -2.1}$ dex). These properties are consistent with the known population of ultra-faint Milky Way satellite galaxies. Eridanus IV is also prominently detected using proper motion measurements from Gaia Early Data Release 3, with a systemic proper motion of $(\mu_{\alpha} \cos \delta, \mu_{\delta}) = (+0.25 \pm 0.06, -0.10 \pm 0.05)$ mas yr$^{-1}$ measured from its horizontal branch and red giant branch member stars. We find that the spatial distribution of likely member stars hints at the possibility that the system is undergoing tidal disruption.
Denis Erkal, Vasily A Belokurov, Daniel L Parkin (2020)Equilibrium models of the Milky Way mass are biased high by the LMC, In: Monthly Notices of the Royal Astronomical Society498(4)pp. 5574-5580 Oxford University Press (OUP)
DOI: 10.1093/mnras/staa2840
Recent measurements suggest that the Large Magellanic Cloud (LMC) may weigh as much as 25 per cent of the Milky Way (MW). In this work, we explore how such a large satellite affects mass estimates of the MW based on equilibrium modelling of the stellar halo or other tracers. In particular, we show that if the LMC is ignored, the MW mass within 200 kpc is overestimated by as much as 50 per cent. This bias is due to the bulk motion in the outskirts of the Galaxy's halo and can be, at least in part, accounted for with a simple modification to the equilibrium modelling. Finally, we show that the LMC has a substantial effect on the orbit of Leo I which acts to increase its present-day speed relative to the MW. We estimate that accounting for a 1.5×10¹¹ M⊙ LMC would lower the inferred MW mass to ∼10¹² M⊙.
Denis Erkal, T S Li, S E Koposov, V Belokurov, E Balbinot, K Bechtol, B Buncher, A Drlica-Wagner, K Kuehn, J L Marshall, C E Martínez-Vázquez, A B Pace, N Shipp, J D Simon, K M Stringer, A K Vivas, R H Wechsler, B Yanny, F B Abdalla, S Allam, J Annis, S Avila, E Bertin, D Brooks, E Buckley-Geer, D L Burke, A Carnero Rosell, M Carrasco Kind, J Carretero, C B D'Andrea, L N da Costa, C Davis, J De Vicente, P Doel, T F Eifler, A E Evrard, B Flaugher, J Frieman, J García-Bellido, E Gaztanaga, D W Gerdes, D Gruen, R A Gruendl, J Gschwend, G Gutierrez, W G Hartley, D L Hollowood, K Honscheid, D J James, E Krause, M A G Maia, M March, F Menanteau, R Miquel, R L C Ogando, A A Plazas, E Sanchez, B Santiago, V Scarpine, R Schindler, I Sevilla-Noarbe, M Smith, R C Smith, M Soares-Santos, F Sobreira, E Suchyta, M E C Swanson, G Tarle, D L Tucker, A R Walker (2018)Modelling the Tucana III stream - a close passage with the LMC, In: Monthly Notices of the Royal Astronomical Society481(3)pp. 3148-3159 Oxford University Press (OUP)
We present results of the first dynamical stream fits to the recently discovered Tucana III stream. These fits assume a fixed Milky Way potential and give proper motion predictions, which can be tested with the upcoming Gaia Data Release 2. These fits reveal that Tucana III is on an eccentric orbit around the Milky Way and, more interestingly, that Tucana III passed within 15 kpc of the Large Magellanic Cloud (LMC) approximately 75 Myr ago. Given this close passage, we fit the Tucana III stream in the combined presence of the Milky Way and the LMC. We find that the predicted proper motions depend on the assumed mass of the LMC and that the LMC can induce a substantial proper motion perpendicular to the stream track. A detection of this misalignment will directly probe the extent of the LMC's influence on our Galaxy, and has implications for nearly all methods which attempt to constrain the Milky Way potential. Such a measurement will be possible with the upcoming Gaia DR2, allowing for a measurement of the LMC's mass.
Jason L Sanders, Edward J Lilley, Eugene Vasiliev, N Wyn Evans, Denis Erkal (2020)Models of distorted and evolving dark matter haloes, In: Monthly Notices of the Royal Astronomical Society499(4)pp. 4793-4813 Oxford University Press
We investigate the ability of basis function expansions to reproduce the evolution of a Milky Way-like dark matter halo, extracted from a cosmological zoom-in simulation. For each snapshot, the density of the halo is reduced to a basis function expansion, with interpolation used to recreate the evolution between snapshots. The angular variation of the halo density is described by spherical harmonics, and the radial variation either by biorthonormal basis functions adapted to handle truncated haloes or by splines. High fidelity orbit reconstructions are attainable using either method with similar computational expense. We quantify how the error in the reconstructed orbits varies with expansion order and snapshot spacing. Despite the many possible biorthonormal expansions, it is hard to beat a conventional Hernquist–Ostriker expansion with a moderate number of terms (≳15 radial and ≳6 angular). As two applications of the developed machinery, we assess the impact of the time-dependence of the potential on (i) the orbits of Milky Way satellites and (ii) planes of satellites as observed in the Milky Way and other nearby galaxies. Time evolution over the last 5 Gyr introduces an uncertainty in the Milky Way satellites' orbital parameters of $\sim 15 \, \mathrm{per\, cent}$, comparable to that induced by the observational errors or the uncertainty in the present-day Milky Way potential. On average, planes of satellites grow at similar rates in evolving and time-independent potentials. There can be more, or less, growth in the plane's thickness, if the plane becomes less, or more, aligned with the major or minor axis of the evolving halo.
Sergey E Koposov, Matthew G Walker, Vasily Belokurov, Andrew R Casey, Alex Geringer-Sameth, Dougal Mackey, Gary Da Costa, Denis Erkal, Prashin Jethwa, Mario Mateo, Edward W Olszewski, John I Bailey III (2018)Snake in the Clouds: A new nearby dwarf galaxy in the Magellanic bridge, In: Monthly Notices of the Royal Astronomical Society Oxford University Press
We report the discovery of a nearby dwarf galaxy in the constellation of Hydrus, between the Large and the Small Magellanic Clouds. Hydrus 1 is a mildly elliptical ultra-faint system with luminosity MV ∼ −4.7 and size 53 ± 3 pc, located 28 kpc from the Sun and 24 kpc from the LMC. From spectroscopy of ∼ 30 member stars, we measure a velocity dispersion of 2.7±0.5 km s−1 and find tentative evidence for a radial velocity gradient consistent with 3 km s−1 rotation. Hydrus 1's velocity dispersion indicates that the system is dark matter dominated, but its dynamical mass-to-light ratio M/L=66+29 −20 is significantly smaller than typical for ultra-faint dwarfs at similar luminosity. The kinematics and spatial position of Hydrus 1 make it a very plausible member of the family of satellites brought into the Milky Way by the Magellanic Clouds. While Hydrus 1's proximity and well-measured kinematics make it a promising target for dark matter annihilation searches, we find no evidence for significant gamma-ray emission from Hydrus 1. The new dwarf is a metal-poor galaxy with a mean metallicity [Fe/H]=−2.5 and [Fe/H] standard deviation of 0.4 dex, similar to other systems of similar luminosity. Alpha-abundances of Hyi 1 members indicate that star-formation was extended, lasting between 0.1 and 1 Gyr, with self-enrichment dominated by SN Ia. The dwarf also hosts a highly carbon-enhanced extremely metal-poor star with [Fe/H]∼ −3.2 and [C/Fe] ∼ +3.0.
P. S. Ferguson, J. D. Sakowska, N. Shipp, A Drlica-Wagner, T. S. Li, W. Cerny, K. Tavangar, A. B. Pace, J. L. Marshall, A. H. Riley, M. Adamów, J. L. Carlin, Y. Choi, D. Erkal, D. J. James, Sergey E. Koposov, N Kuropatkin, C. E. Martínez-Vázquez, S Mau, B. Mutlu-Pakdil, K. A. G. Olsen, G. S. Stringfellow, B Yanny (2021)DELVE-ing into the Jet: a thin stellar stream on a retrograde orbit at 30 kpc, In: Astrophysical Journal American Astronomical Society
We perform a detailed photometric and astrometric analysis of stars in the Jet stream using data from the first data release of the DECam Local Volume Exploration Survey (DELVE) DR1 and Gaia EDR3. We discover that the stream extends over ∼29° on the sky (increasing the known length by 18°), which is comparable to the kinematically cold Phoenix, ATLAS, and GD-1 streams. Using blue horizontal branch stars, we resolve a distance gradient along the Jet stream of 0.2 kpc/deg, with distances ranging from D ∼ 27−34 kpc. We use natural splines to simultaneously fit the stream track, width, and intensity to quantitatively characterize density variations in the Jet stream, including a large gap, and identify substructure off the main track of the stream. Furthermore, we report the first measurement of the proper motion of the Jet stream and find that it is well-aligned with the stream track suggesting the stream has likely not been significantly perturbed perpendicular to the line of sight. Finally, we fit the stream with a dynamical model and find that the stream is on a retrograde orbit, and is well fit by a gravitational potential including the Milky Way and Large Magellanic Cloud. These results indicate the Jet stream is an excellent candidate for future studies with deeper photometry, astrometry, and spectroscopy to study the potential of the Milky Way and probe perturbations from baryonic and dark matter substructure.
P Jethwa, G Torrealba, C Navarrete, J.A. Carballo-Bello, Thomas de Boer, Denis Erkal, S.E. Koposov, S Duffau, D Geisler, M Catelan, V Belokurov (2018)Discovery of a thin stellar stream in the SLAMS survey, In: Monthly Notices of the Royal Astronomical Society480(4)pp. 5342-5351 Oxford University Press
We report the discovery of a thin stellar stream - which we name the Jet stream - crossing the constellations of Hydra and Pyxis. The discovery was made in data from the SLAMS survey, which comprises deep g and r imaging for a 650 square degree region above the Galactic disc performed by the CTIO Blanco + DECam. SLAMS photometric catalogues have been made publicly available. The stream is approximately 0.18 degrees wide and 10 degrees long, though it is truncated by the survey footprint. Its colour-magnitude diagram is consistent with an old, metal-poor stellar population at a heliocentric distance of approximately 29 kpc. We corroborate this measurement by identifying a spatially coincident overdensity of likely blue horizontal branch stars at the same distance. There is no obvious candidate for a surviving stream progenitor.
Nilanjan Banik, Jo Bovy, Gianfranco Bertone, Denis Erkal, T. J. L de Boer (2021)Novel constraints on the particle nature of dark matter from stellar streams, In: Journal of Cosmology and Astroparticle Physics
We analyze the distribution of stars along the GD-1 stream with a combination of data from the ${\it Gaia}$ satellite and the Pan-STARRS survey, and we show that the population of subhalos predicted by the cold dark matter paradigm are necessary and sufficient to explain the perturbations observed in the linear density of stars. This allows us to set novel constraints on alternative dark matter scenarios that predict a suppression of the subhalo mass function on scales smaller than the mass of dwarf galaxies. A combined analysis of the density perturbations in the GD-1 and Pal 5 streams leads to a $95\%$ lower limit on the mass of warm dark matter thermal relics $m_{\rm WDM}>4.6$ keV; adding dwarf satellite counts strengthens this to $m_{\rm WDM}>6.3$ keV.
Eugene Vasiliev, Vasily Belokurov, Denis Erkal (2020)Tango for three: Sagittarius, LMC, and the Milky Way, In: Monthly Notices of the Royal Astronomical Society501(2)pp. 2279-2304 Oxford University Press
We assemble a catalogue of candidate Sagittarius stream members with 5D and 6D phase-space information, using astrometric data from Gaia DR2, distances estimated from RR Lyrae stars, and line-of-sight velocities from various spectroscopic surveys. We find a clear misalignment between the stream track and the direction of the reflex-corrected proper motions in the leading arm of the stream, which we interpret as a signature of a time-dependent perturbation of the gravitational potential. A likely cause of this perturbation is the recent passage of the most massive Milky Way satellite – the Large Magellanic Cloud (LMC). We develop novel methods for simulating the Sagittarius stream in the presence of the LMC, using specially tailored N-body simulations and a flexible parametrization of the Milky Way halo density profile. We find that while models without the LMC can fit most stream features rather well, they fail to reproduce the misalignment and overestimate the distance to the leading arm apocentre. On the other hand, models with an LMC mass in the range (1.3±0.3)×10¹¹ M⊙ rectify these deficiencies. We demonstrate that the stream cannot be modelled adequately in a static Milky Way. Instead, our Galaxy is required to lurch toward the massive in-falling Cloud, giving the Sgr stream its peculiar shape and kinematics. By exploring the parameter space of Milky Way potentials, we determine the enclosed mass within 100 kpc to be (5.6±0.4)×10¹¹ M⊙, and the virial mass to be (9.0±1.3)×10¹¹ M⊙, and find tentative evidence for a radially-varying shape and orientation of the Galactic halo.
A D Mackey, L R Cullinane, G S Da Costa, D Erkal, S E Koposov, V Belokurov (2022)The Magellanic Edges Survey III. Kinematics of the disturbed LMC outskirts, In: Monthly Notices of the Royal Astronomical Society Oxford University Press
We explore the structural and kinematic properties of the outskirts of the Large Magellanic Cloud (LMC) using data from the Magellanic Edges Survey (MagES) and Gaia EDR3. Even at large galactocentric radii (8° < R < 11°), we find the north-eastern LMC disk is relatively unperturbed: its kinematics are consistent with a disk of inclination ∼36.5° and line-of-nodes position angle ∼145° east of north. In contrast, fields at similar radii in the southern and western disk are significantly perturbed from equilibrium, with non-zero radial and vertical velocities, and distances significantly in front of the disk plane implied by our north-eastern fields. We compare our observations to simple dynamical models of the Magellanic/Milky Way system which describe the LMC as a collection of tracer particles within a rigid potential, and the Small Magellanic Cloud (SMC) as a rigid Hernquist potential. A possible SMC crossing of the LMC disk plane ∼400 Myr ago, in combination with the LMC's infall to the Milky Way potential, can qualitatively explain many of the perturbations in the outer disk. Additionally, we find the claw-like and arm-like structures south of the LMC have similar metallicities to the outer LMC disk ([Fe/H] ∼ −1), and are likely comprised of perturbed LMC disk material. The claw-like substructure is particularly disturbed, with out-of-plane velocities >60 km s−1 and apparent counter-rotation relative to the LMC's disk motion. More detailed N-body models are necessary to elucidate the origin of these southern features, potentially requiring repeated interactions with the SMC prior to ∼1 Gyr ago.
S. Mau, W. Cerny, A. B. Pace, Y. Choi, A. Drlica-Wagner, L. Santana-Silva, A. H. Riley, D. Erkal, G. S. Stringfellow, M. Adamów, J. L. Carlin, R. A. Gruendl, D. Hernandez-Lang, N. Kuropatkin, T. S. Li, C. E. Martínez-Vázquez, E. Morganson, B. Mutlu-Pakdil, E. H. Neilsen, D. L. Nidever, K. A. G. Olsen, D. J. Sand, E. J. Tollerud, D. L. Tucker, B. Yanny, A. Zenteno, S. Allam, W. A. Barkhouse, K. Bechtol, E. F. Bell, P. Balaji, D. Crnojević, J. Esteves, P. S. Ferguson, C. Gallart, A. K. Hughes, D. J. James, P. Jethwa, L. C. Johnson, K. Kuehn, S. Majewski, Y.-Y. Mao, P. Massana, M. McNanna, A. Monachesi, E. O. Nadler, N.E.D Noel, A. Palmese, F. Paz-Chinchon, A. Pieres, J. Sanchez, N. Shipp, J. D. Simon, M. Soares-Santos, K. Tavangar, R. P. van der Marel, A. K. Vivas, A. R. Walker, R. H. Wechsler (2020)Two Ultra-faint Milky Way Stellar Systems Discovered in Early Data from the DECam Local Volume Exploration Survey, In: The Astrophysical Journal890(2)136 IOP Publishing
DOI: 10.3847/1538-4357/ab6c67
We report the discovery of two ultra-faint stellar systems found in early data from the DECam Local Volume Exploration survey (DELVE). The first system, Centaurus I (DELVE J1238–4054), is identified as a resolved overdensity of old and metal-poor stars with a heliocentric distance of $D_{\odot} = 116.3^{+0.6}_{-0.6}\,\mathrm{kpc}$, a half-light radius of $r_{h} = 2.3^{+0.4}_{-0.3}\,\mathrm{arcmin}$, an age of $\tau > 12.85\,\mathrm{Gyr}$, a metallicity of $Z = 0.0002^{+0.0001}_{-0.0002}$, and an absolute magnitude of $M_{V} = -5.55^{+0.11}_{-0.11}\,\mathrm{mag}$. This characterization is consistent with the population of ultra-faint satellites and confirmation of this system would make Centaurus I one of the brightest recently discovered ultra-faint dwarf galaxies. Centaurus I is detected in Gaia DR2 with a clear and distinct proper motion signal, confirming that it is a real association of stars distinct from the Milky Way foreground; this is further supported by the clustering of blue horizontal branch stars near the centroid of the system. The second system, DELVE 1 (DELVE J1630–0058), is identified as a resolved overdensity of stars with a heliocentric distance of $D_{\odot} = 19.0^{+0.5}_{-0.6}\,\mathrm{kpc}$, a half-light radius of $r_{h} = 0.97^{+0.24}_{-0.17}\,\mathrm{arcmin}$, an age of $\tau = 12.5^{+1.0}_{-0.7}\,\mathrm{Gyr}$, a metallicity of $Z = 0.0005^{+0.0002}_{-0.0001}$, and an absolute magnitude of $M_{V} = -0.2^{+0.8}_{-0.6}\,\mathrm{mag}$, consistent with the known population of faint halo star clusters. Given the low number of probable member stars at magnitudes accessible with Gaia DR2, a proper motion signal for DELVE 1 is only marginally detected. We compare the spatial position and proper motion of both Centaurus I and DELVE 1 with simulations of the accreted satellite population of the Large Magellanic Cloud (LMC) and find that neither is likely to be associated with the LMC.
Mark Gieles, Denis Erkal, Fabio Antonini, Eduardo Balbinot, Jorge Peñarrubia (2021)A supra-massive population of stellar-mass black holes in the globular cluster Palomar 5, In: Nature Astronomy Nature Research
Palomar 5 is one of the sparsest star clusters in the Galactic halo and is best-known for its spectacular tidal tails, spanning over 20 degrees across the sky. With N-body simulations we show that both distinguishing features can result from a stellar-mass black hole population, comprising ~20% of the present-day cluster mass. In this scenario, Palomar 5 formed with a `normal' black hole mass fraction of a few per cent, but stars were lost at a higher rate than black holes, such that the black hole fraction gradually increased. This inflated the cluster, enhancing tidal stripping and tail formation. A gigayear from now, the cluster will dissolve as a 100% black hole cluster. Initially denser clusters end up with lower black hole fractions, smaller sizes, and no observable tails. Black hole-dominated, extended star clusters are therefore the likely progenitors of the recently discovered thin stellar streams in the Galactic halo.
Michele De Leo, Ricardo Carrera, Noelia E.D Noel, Justin I. Read, Denis Erkal, Carme Gallart (2020)Revealing the tidal scars of the Small Magellanic Cloud, In: Monthly Notices of the Royal Astronomical Society495(1)pp. 98-113 Oxford University Press (OUP)
Due to their close proximity, the Large and Small Magellanic Clouds (LMC/SMC) provide natural laboratories for understanding how galaxies form and evolve. With the goal of determining the structure and dynamical state of the SMC, we present new spectroscopic data for ∼3000 SMC red giant branch stars observed using the AAOmega spectrograph at the Anglo-Australian Telescope. We complement our data with further spectroscopic measurements from previous studies that used the same instrumental configuration as well as proper motions from the Gaia Data Release 2 catalogue. Analysing the photometric and stellar kinematic data, we find that the SMC centre of mass presents a conspicuous offset from the velocity centre of its associated H i gas, suggesting that the SMC gas is likely to be far from dynamical equilibrium. Furthermore, we find evidence that the SMC is currently undergoing tidal disruption by the LMC within 2 kpc of the centre of the SMC, and possibly all the way into the very core. This is revealed by a net outward motion of stars from the SMC centre along the direction towards the LMC and an apparent tangential anisotropy at all radii. The latter is expected if the SMC is undergoing significant tidal stripping, as we demonstrate using a suite of N-body simulations of the SMC/LMC system disrupting around the Milky Way. Our results suggest that dynamical models for the SMC that assume a steady state will need to be revisited.
A Drlica-Wagner, Jeffrey L Carlin, David L Nidever, P. S. Ferguson, N Kuropatkin, M. Adamów, W. Cerny, Yumi Choi, J Esteves, C. E. Martínez-Vázquez, S Mau, A. E Miller, B. Mutlu-Pakdil, E Neilsen, K Olsen, A.B Pace, A. H. Riley, Joanna D Sakowska, DJ Sand, L. Santana-Silva, EJ Tollerud, D Tucker, AK Vivas, E Zaborowski, A. Zenteno, T Abbott, S Allam, K Bechtol, C Bell, Eric F. Bell, P Bilaji, C. R Bom, Julio A. Carballo-Bello, D. Crnojević, Maria-Rosa L. Cioni, A Diaz-Ocampo, T. L de Boer, DENIS ERKAL, RA Gruendl, D. Hernandez-Lang, ASHLEY KATE HUGHES, DEBORAH JAMES, L Johnson, TIANRU LI, Y.-Y. Mao, D Martinez-Delgado, POL MASSANA ZAPATA, M. McNanna, R Morgan, E. O. Nadler, NED Noël, A. Palmese, AHG Peter, ES Rykoff, JLG Sanchez, N. Shipp, Joshua D. Simon, A Smercina, M Soares-Santos, G Stringfellow, K. Tavangar, Roeland van der Marel, Alistair Walker, RH Wechsler, J. Wu, B Yanny, MF Fitzpatrick, Luan Huang, A Jacques, R Nikutta, AMY SCOTT (2021)The DECam Local Volume Exploration Survey: Overview and First Data Release, In: The Astrophysical journal. Supplement series256(2) IOP Publishing
DOI: 10.3847/1538-4365/ac079d
The DECam Local Volume Exploration survey (DELVE) is a 126-night survey program on the 4 m Blanco Telescope at the Cerro Tololo Inter-American Observatory in Chile. DELVE seeks to understand the characteristics of faint satellite galaxies and other resolved stellar substructures over a range of environments in the Local Volume. DELVE will combine new DECam observations with archival DECam data to cover ~15,000 deg2 of high Galactic latitude (|b| > 10°) southern sky to a 5σ depth of g, r, i, z ~ 23.5 mag. In addition, DELVE will cover a region of ~2200 deg2 around the Magellanic Clouds to a depth of g, r, i ~ 24.5 mag and an area of ~135 deg2 around four Magellanic analogs to a depth of g, i ~ 25.5 mag. Here, we present an overview of the DELVE program and progress to date. Furthermore, we also summarize the first DELVE public data release (DELVE DR1), which provides point-source and automatic aperture photometry for ~520 million astronomical sources covering ~5000 deg2 of the southern sky to a 5σ point-source depth of g = 24.3 mag, r = 23.9 mag, i = 23.3 mag, and z = 22.8 mag. DELVE DR1 is publicly available via the NOIRLab Astro Data Lab science platform.
Vasily A. Belokurov, Denis Erkal (2018)Clouds in Arms, In: Monthly Notices of the Royal Astronomical Society Letters Oxford University Press
We use astrometry and broad-band photometry from Data Release 2 of the ESA's Gaia mission to map out low surface-brightness features in the stellar density distribution around the Large and Small Magellanic Clouds. The LMC appears to have grown two thin and long stellar streams in its Northern and Southern regions, highly reminiscent of spiral arms. We use computer simulations of the Magellanic Clouds' in-fall to demonstrate that these arms were likely pulled out of the LMC's disc due to the combined influence of the SMC's most recent fly-by and the tidal field of the Milky Way.
N. Shipp, T. S. Li, A. B. Pace, D. Erkal, A. Drlica-Wagner, B. Yanny, V. Belokurov, W. Wester, S. E. Koposov, K. Kuehn, G. F. Lewis, J. D. Simpson, Z. Wan, D. B. Zucker, S. L. Martell, M. Y. Wang (2019)Proper Motions of Stellar Streams Discovered in the Dark Energy Survey, In: The Astrophysical Journal885(3) The American Astronomical Society
DOI: 10.3847/1538-4357/ab44bf
We cross-match high-precision astrometric data from Gaia DR2 with accurate multi-band photometry from the Dark Energy Survey (DES) DR1 to confidently measure proper motions for nine stellar streams in the DES footprint: Aliqa Uma, ATLAS, Chenab, Elqui, Indus, Jhelum, Phoenix, Tucana III, and Turranburra. We determine low-confidence proper motion measurements for four additional stellar streams: Ravi, Wambelong, Willka Yaku, and Turbio. We find evidence for a misalignment between stream tracks and the systemic proper motion of streams that may suggest a systematic gravitational influence from the Large Magellanic Cloud. These proper motions, when combined with radial velocity measurements, will allow for detailed orbit modeling which can be used to constrain properties of the LMC and its influence on nearby streams, as well as global properties of the Milky Way's gravitational potential.
D. Boubert, D. Erkal, N. W. Evans, Robert Izzard (2017)Hypervelocity runaways from the Large Magellanic Cloud, In: Monthly Notices of the Royal Astronomical Society469(2)pp. 2151-2162
DOI: 10.1093/mnras/stx848
We explore the possibility that the observed population of Galactic hypervelocity stars (HVSs) originate as runaway stars from the Large Magellanic Cloud (LMC). Pairing a binary evolution code with an N-body simulation of the interaction of the LMC with the Milky Way, we predict the spatial distribution and kinematics of an LMC runaway population. We find that runaway stars from the LMC can contribute Galactic HVSs at a rate of 3 × 10−6 yr−1. This is composed of stars at different points of stellar evolution, ranging from the main sequence to those at the tip of the asymptotic giant branch. We find that the known B-type HVSs have kinematics that are consistent with an LMC origin. There is an additional population of hypervelocity white dwarfs whose progenitors were massive runaway stars. Runaways that are even more massive will themselves go supernova, producing a remnant whose velocity will be modulated by a supernova kick. This latter scenario has some exotic consequences, such as pulsars and supernovae far from star-forming regions, and a small rate of microlensing from compact sources around the halo of the LMC.
V Belokurov, Denis Erkal, NW Evans, SE Koposov, AJ Deason (2018)Co-formation of the disc and the stellar halo, In: Monthly Notices of the Royal Astronomical Society478(1)pp. 611-619 Oxford University Press
DOI: 10.1093/mnras/sty982
Using a large sample of Main Sequence stars with 7-D measurements supplied by Gaia and SDSS, we study the kinematic properties of the local (within ∼10 kpc from the Sun) stellar halo. We demonstrate that the halo's velocity ellipsoid evolves strongly with metallicity. At the low [Fe/H] end, the orbital anisotropy (the amount of motion in the radial direction compared to the tangential one) is mildly radial with 0.2 < β < 0.4. However, for stars with [Fe/H]> −1.7 we measure extreme values of β ∼ 0.9. Across the metallicity range considered, i.e. −3 10¹⁰ M⊙ around the epoch of the Galactic disc formation, i.e. between 8 and 11 Gyr ago. The radical halo anisotropy is the result of the dramatic radialisation of the massive progenitor's orbit, amplified by the action of the growing disc.
Manuel Arca Sedda, Alessia Gualandris, Tuan Do, Anja Feldmeier-Krause, Nadine Neumayer, Denis Erkal (2020)On the origin of a rotating metal-poor stellar population in the Milky Way Nuclear Cluster, In: Astrophysical Journal Letters IOP Publishing
We explore the origin of a population of stars recently detected in the inner parsec of the Milky Way Nuclear Cluster (NC), which exhibit sub-solar metallicity and a higher rotation compared to the dominant population. Using state-of-the-art N-body simulations, we model the infall of massive stellar systems into the Galactic center, both of Galactic and extra-galactic origin. We show that the newly discovered population can either be the remnant of a massive star cluster formed a few kpc away from the Galactic center (Galactic scenario) or be accreted from a dwarf galaxy originally located at 10-100 kpc (extragalactic scenario) and that reached the Galactic center 3
T. S. Li, J. D. Simon, K. Kuehn, A. B. Pace, D. Erkal, K. Bechtol, B. Yanny, A. Drlica-Wagner, J. L. Marshall, C. Lidman, E. Balbinot, D. Carollo, S. Jenkins, C. E. Martínez-Vázquez, N. Shipp, K. M. Stringer, A. K. Vivas, A. R. Walker, R. H. Wechsler, F. B. Abdalla, S. Allam, J. Annis, S. Avila, E. Bertin, D. Brooks, E. Buckley-Geer, D. L. Burke, A. Carnero Rosell, M. Carrasco Kind, J. Carretero, C. E. Cunha, C. B. D'Andrea, L. N. da Costa, C. Davis, J. De Vicente, P. Doel, T. F. Eifler, A. E. Evrard, B. Flaugher, J. Frieman, J. García-Bellido, E. Gaztanaga, D. W. Gerdes, D. Gruen, R. A. Gruendl, J. Gschwend, G. Gutierrez, W. G. Hartley, D. L. Hollowood, K. Honscheid, D. J. James, E. Krause, M. A. G. Maia, M. March, F. Menanteau, R. Miquel, A. A. Plazas, E. Sanchez, B. Santiago, V. Scarpine, R. Schindler, M. Schubnell, I. Sevilla-Noarbe, M. Smith, R. C. Smith, M. Soares-Santos, F. Sobreira, E. Suchyta, M. E. C. Swanson, G. Tarle, D. L. Tucker (2018)The First Tidally Disrupted Ultra-faint Dwarf Galaxy?: A Spectroscopic Analysis of the Tucana III Stream, In: The Astrophysical Journal866(1) IOP Publishing / The American Astronomical Society
DOI: 10.3847/1538-4357/aadf91
We present a spectroscopic study of the tidal tails and core of the Milky Way satellite Tucana III, collectively referred to as the Tucana III stream, using the 2dF+AAOmega spectrograph on the Anglo-Australian Telescope and the IMACS spectrograph on the Magellan Baade Telescope. In addition to recovering the brightest nine previously known member stars in the Tucana III core, we identify 22 members in the tidal tails. We observe strong evidence for a velocity gradient of 8.0 ± 0.4 km s−1 deg−1 over at least 3° on the sky. Based on the continuity in velocity, we confirm that the Tucana III tails are real tidal extensions of Tucana III. The large velocity gradient of the stream implies that Tucana III is likely on a radial orbit. We successfully obtain metallicities for four members in the core and 12 members in the tails. We find that members close to the ends of the stream tend to be more metal-poor than members in the core, indicating a possible metallicity gradient between the center of the progenitor halo and its edge. The spread in metallicity suggests that the progenitor of the Tucana III stream is likely a dwarf galaxy rather than a star cluster. Furthermore, we find that with the precise photometry of the Dark Energy Survey data, there is a discernible color offset between metal-rich disk stars and metal-poor stream members. This metallicity-dependent color offers a more efficient method to recognize metal-poor targets and will increase the selection efficiency of stream members for future spectroscopic follow-up programs on stellar streams.
Alis J Deason, Denis Erkal, Vasily Belokurov, Azadeh Fattahi, Facundo A Gómez, Robert J. J Grand, Rüdiger Pakmor, Xiang-Xiang Xue, Chao Liu, Chengqun Yang, Lan Zhang, Gang Zhao (2020)The mass of the Milky Way out to 100 kpc using halo stars, In: Monthly Notices of the Royal Astronomical Society501(4)pp. 5964-5972 Royal Astronomical Society
We use a distribution function analysis to estimate the mass of the Milky Way out to 100 kpc using a large sample of halo stars. These stars are compiled from the literature, and the vast majority (~98%) have 6D phase-space information. We pay particular attention to systematic effects, such as the dynamical influence of the Large Magellanic Cloud (LMC), and the effect of unrelaxed substructure. The LMC biases the (pre-LMC infall) halo mass estimates towards higher values, while realistic stellar halos from cosmological simulations tend to underestimate the true halo mass. After applying our method to the Milky Way data we find a mass within 100 kpc of M(< 100 kpc) = 6.07 +/- 0.29 (stat.) +/- 1.21 (sys.) x 10^11 M_Sun. For this estimate, we have approximately corrected for the reflex motion induced by the LMC using the Erkal et al. model, which assumes a rigid potential for the LMC and MW. Furthermore, stars that likely belong to the Sagittarius stream are removed, and we include a 5% systematic bias, and a 20% systematic uncertainty based on our tests with cosmological simulations. Assuming the mass-concentration relation for Navarro-Frenk-White haloes, our mass estimate favours a total (pre-LMC infall) Milky Way mass of M_200c = 1.01 +/- 0.24 x 10^12 M_Sun, or (post-LMC infall) mass of M_200c = 1.16 +/- 0.24 x 10^12 M_Sun when a 1.5 x 10^11 M_Sun mass of a rigid LMC is included.
L R Cullinane, A D Mackey, G S Da Costa, D Erkal, S E Koposov, V Belokurov (2021)The Magellanic Edges Survey II. Formation of the LMC's northern arm, In: Monthly Notices of the Royal Astronomical Societypp. 1-25 Oxford University Press
DOI: 10.1093/mnras/stab3350
The highly-substructured outskirts of the Magellanic Clouds provide ideal locations for studying the complex interaction history between both Clouds and the Milky Way (MW). In this paper, we investigate the origin of a >20° long arm-like feature in the northern outskirts of the Large Magellanic Cloud (LMC) using data from the Magellanic Edges Survey (MagES) and Gaia EDR3. We find that the arm has a similar geometry and metallicity to the nearby outer LMC disk, indicating that it is comprised of perturbed disk material. Whilst the azimuthal velocity and velocity dispersions along the arm are consistent with those in the outer LMC, the in-plane radial velocity and out-of-plane vertical velocity are significantly perturbed from equilibrium disk kinematics. We compare these observations to a new suite of dynamical models of the Magellanic/MW system, which describe the LMC as a collection of tracer particles within a rigid potential, and the SMC as a rigid Hernquist potential. Our models indicate the tidal force of the MW during the LMC's infall is likely responsible for the observed increasing out-of-plane velocity along the arm. Our models also suggest close LMC/SMC interactions within the past Gyr, particularly the SMC's pericentric passage ~150 Myr ago and a possible SMC crossing of the LMC disk plane ~400 Myr ago, likely do not perturb stars that today comprise the arm. Historical interactions with the SMC prior to ~1 Gyr ago may be required to explain some of the observed kinematic properties of the arm, in particular its strongly negative in-plane radial velocity.
Vasily A Belokurov, Denis Erkal (2020)Limit on the LMC mass from a census of its satellites, In: Monthly Notices of the Royal Astronomical Society495(3)pp. 2554-2563 Oxford University Press (OUP)
We study the orbits of dwarf galaxies in the combined presence of the Milky Way and Large Magellanic Cloud (LMC) and find six dwarfs that were likely accreted with the LMC (Car 2, Car 3, Hor 1, Hyi 1, Phe 2, and Ret 2), in addition to the Small Magellanic Cloud (SMC), representing strong evidence of dwarf galaxy group infall. This procedure depends on the gravitational pull of the LMC, allowing us to place a lower bound on the Cloud's mass of MLMC > 1.24×10¹¹ M⊙ if we assume that these are LMC satellites. This mass estimate is validated by applying the technique to a cosmological zoom-in simulation of a Milky Way-like galaxy with an LMC analogue where we find that while this lower bound may be overestimated, it will improve in the future with smaller observational errors. We apply this technique to dwarf galaxies lacking radial velocities and find that Eri 3 has a broad range of radial velocities for which it has a significant chance (>0.4) of having been bound to the Cloud. We study the non-Magellanic classical satellites and find that Fornax has an appreciable probability of being an LMC satellite if the LMC is sufficiently massive (∼2.5×10¹¹ M⊙). In addition, we explore how the orbits of Milky Way satellites change in the presence of the LMC and find a significant change for several objects. Finally, we find that the dwarf galaxies likely to be LMC satellites are slightly smaller than Milky Way satellites at a fixed luminosity, possibly due to the different tidal environments they have experienced.
Alexandra L. Gregory, Michelle L. M. Collins, Denis Erkal, Erik Tollerud, Maxime Delorme, Lewis Hill, David J. Sand, Jay Strader, Beth Willman (2020)Uncovering the Orbit of the Hercules Dwarf Galaxy, In: Monthly Notices of the Royal Astronomical Society Oxford University Press (OUP)
We present new chemo-kinematics of the Hercules dwarf galaxy based on Keck II/DEIMOS spectroscopy. Our 21 confirmed members, including 9 newly confirmed members, have a systemic velocity of vHerc = 46.4 ± 1.3 km s−1
Denis Erkal, Douglas Boubert, Alessia Gualandris, N. Wyn Evans, Fabio Antonini (2018)A hypervelocity star with a Magellanic origin, In: Monthly Notices of the Royal Astronomical Society Oxford University Press (OUP)
Using proper motion measurements from Gaia DR2, we probe the origin of 26 previously known hypervelocity stars (HVSs) around the Milky Way. We find that a significant fraction of these stars have a high probability of originating close to the Milky Way centre, but there is one obvious outlier. HVS3 is highly likely to be coming almost from the centre of the Large Magellanic Cloud (LMC). During its closest approach, 21.1 +6.1 −4.6 Myr ago, it had a relative velocity of 870 +69 −66 km s−1 with respect to the LMC. This large kick velocity is only consistent with the Hills mechanism, requiring a massive black hole at the centre of the LMC. This provides strong direct evidence that the LMC itself harbours a massive black hole of at least 4×10³−10⁴ M⊙.
Anirudh Chiti, Anna Frebel, Joshua D Simon, Denis Erkal, Laura J Chang, Lina Necib, Alexander P Ji, Helmut Jerjen, Dongwon Kim, John E Norris (2021)An extended halo around an ancient dwarf galaxy, In: NATURE ASTRONOMY Springer Nature
DOI: 10.1038/s41550-020-01285-w
The Milky Way is surrounded by dozens of ultra-faint (< $10^5$ solar luminosities) dwarf satellite galaxies. They are the surviving remnants of the earliest galaxies, as confirmed by their ancient (~13 billion years old) and chemically primitive stars. Simulations suggest that these systems formed within extended dark matter halos and experienced early galaxy mergers and supernova feedback. However, the signatures of these events would lie outside their core regions (>2 half-light radii), which are spectroscopically unstudied due to the sparseness of their distant stars. Here we identify members of the Tucana II ultra-faint dwarf galaxy in its outer region (up to 9 half-light radii), demonstrating the system to be dramatically more spatially extended and chemically primitive than previously found. These distant stars are extremely metal-poor ([Fe/H] = -3.02; less than ~1/1000th of the solar iron abundance), affirming Tucana II as the most metal-poor known galaxy. We observationally establish, for the first time, an extended dark matter halo surrounding an ultra-faint dwarf galaxy out to one kiloparsec, with a total mass of >$10^7$ solar masses. This measurement is consistent with the expected ~2x$10^7$ solar masses using a generalized NFW density profile. The extended nature of Tucana II suggests that it may have undergone strong bursty feedback or been the product of an early galactic merger. We demonstrate that spatially extended stellar populations, which other ultra-faint dwarfs hint at hosting as well, are observable in principle and open the possibility for detailed studies of the stellar halos of relic galaxies.
Kathy Vivas, Clara Martínez-Vázquez, Alistair Walker, Vasily Belokurov, Ting Li, Denis Erkal (2021)Variable Stars in the giant satellite galaxy Antlia 2, In: The Astrophysical Journal American Astronomical Society
We report 350 pulsating variable stars found in four DECam fields (∼ 12 sq. deg.) covering the Antlia 2 satellite galaxy. The sample of variables includes 318 RR Lyrae stars and eight anomalous Cepheids in the galaxy. Reclassification of several objects designated previously to be RR Lyrae as Anomalous Cepheids gets rid of the satellite's stars intervening along the line of sight. This in turn removes the need for prolific tidal disruption of the dwarf, in agreement with the recently updated proper motion and peri-centre measurements based on Gaia EDR3. There are also several bright foreground RR Lyrae stars in the field, and two distant background variables located ∼ 45 kpc behind Antlia 2. We found RR Lyrae stars over the full search area, suggesting that the galaxy is very large and likely extends beyond our observed area. The mean period of the RRab in Antlia 2 is 0.599 days, while the RRc have a mean period of 0.368 days, indicating the galaxy is an Oosterhoff-intermediate system. The distance to Antlia 2 based on the RR Lyrae stars is 124.1 kpc (µ 0 = 20.47) with a dispersion of 5.4 kpc. We measured a clear distance gradient along the semi-major axis of the galaxy, with the SouthEast side of Antlia 2 being ∼ 13 kpc farther away from the NorthWest side. This elongation along the line of sight is likely due to the ongoing tidal disruption of Ant 2.
Pol Massana, Noelia E D Noël, David L Nidever, Denis Erkal, Thomas J L de Boer, Yumi Choi, Steven R Majewski, Knut Olsen, Antonela Monachesi, Carme Gallart, Roeland P van der Marel, Tomás Ruiz-Lara, Dennis Zaritsky, Nicolas F Martin, Ricardo R Muñoz, Maria-Rosa L Cioni, Cameron P M Bell, Eric F Bell, Guy S Stringfellow, Vasily Belokurov, Matteo Monelli, Alistair R Walker, David Martínez-Delgado, A Katherina Vivas, Blair C Conn (2020)SMASHing the low surface brightness SMC, In: Monthly Notices of the Royal Astronomical Society498(1)pp. 1034-1049 Oxford University Press
The periphery of the Small Magellanic Cloud (SMC) can unlock important information regarding galaxy formation and evolution in interacting systems. Here, we present a detailed study of the extended stellar structure of the SMC using deep colour–magnitude diagrams, obtained as part of the Survey of the MAgellanic Stellar History (SMASH). Special care was taken in the decontamination of our data from Milky Way (MW) foreground stars, including from foreground globular clusters NGC 362 and 47 Tuc. We derived the SMC surface brightness using a 'conservative' approach from which we calculated the general parameters of the SMC, finding a staggered surface brightness profile. We also traced the fainter outskirts by constructing a stellar density profile. This approach, based on stellar counts of the oldest main-sequence turn-off stars, uncovered a tidally disrupted stellar feature that reaches as far out as 12 deg from the SMC centre. We also serendipitously found a faint feature of unknown origin located at ∼14 deg from the centre of the SMC and that we tentatively associated with a more distant structure. We compared our results to in-house simulations of a 1 × 10⁹ M⊙ SMC, finding that its elliptical shape can be explained by its tidal disruption under the combined presence of the MW and the Large Magellanic Cloud. Finally, we found that the older stellar populations show a smooth profile while the younger component presents a jump in the density followed by a flat profile, confirming the heavily disturbed nature of the SMC.
T.J.L de Boer, Denis Erkal, M. Gieles (2020)A closer look at the spur, blob, wiggle, and gaps in GD-1, In: Monthly Notices of the Royal Astronomical Society Royal Astronomical Society
The GD-1 stream is one of the longest and coldest stellar streams discovered to date, and one of the best objects for constraining the dark matter properties of the Milky Way. Using data from Gaia DR2 we study the proper motions, distance, morphology and density of the stream to uncover small scale perturbations. The proper motion cleaned data shows a clear distance gradient across the stream, ranging from 7 to 12 kpc. However, unlike earlier studies that found a continuous gradient, we uncover a distance minimum at φ1 ≈ −50 deg, after which the distance increases again. We can reliably trace the stream between −85 < φ1
J.D Simon, T.S Li, D. Erkal, A.B Pace, A. Drlica-Wagner, D.J James, J.L Marshall, K. Bechtol, T. Hansen, K. Kuehn, C. Lidman, S. Allam, J. Annis, S. Avila, E. Bertin, D. Brooks, D.L Burke, A. Carnero Rosell, M. Carrosco Kind, J. Carretero, L.N da Costa, J. De Vicente, S. Desai, P. Doel, T.F Eifler, S. Everett, P. Fosalba, J. Frieman, J. Garcia-Bellido, E. Gaztanaga, D.W Gerdes, D. Gruen, R.A Gruendi, J. Gschwend, G. Gutierrez, D.L Hollowood, K. Honscheid, E. Krause, N. Kuropatkin, N. MacCrann, M.A.G Maia, M. March, R. Miquel, A. Palmese, F. Paz-Chinchon, A.A Plazas, K. Reil, A. Roodman, E. Sanchez, B. Santiago, V. Scarpine, M. Schubnell, S. Serrano, M. Smith, E. Suchyta, G. Tarle, A.R Walker (2020)BIRDS OF A FEATHER? MAGELLAN/IMACS SPECTROSCOPY OF THE ULTRA-FAINT SATELLITES GRUS II, TUCANA IV, AND TUCANA V, In: The Astrophysical Journal IOP Publishing
We present Magellan/IMACS spectroscopy of three recently discovered ultra-faint Milky Way satellites, Grus II, Tucana IV, and Tucana V. We measure systemic velocities of vhel = −110.0±0.5 km s−1, vhel = 15.9+1.8 −1.7 km s−1, and vhel = −36.2+2.5 −2.2 km s−1 for the three objects, respectively. Their large relative velocities demonstrate that the satellites are unrelated despite their close physical proximity. We determine a velocity dispersion for Tuc IV of σ = 4.3+1.7 −1.0 km s−1, but we cannot resolve the velocity dispersions of the other two systems. For Gru II we place an upper limit (90% confidence) on the dispersion of σ < 1.9 km s−1, and for Tuc V we do not obtain any useful limits. All three satellites have metallicities below [Fe/H] = −2.1, but none has a detectable metallicity spread. We determine proper motions for each satellite based on Gaia astrometry and compute their orbits around the Milky Way. Gru II is on a tightly bound orbit with a pericenter of 25+6 −7 kpc and orbital eccentricity of 0.45+0.08 −0.05. Tuc V likely has an apocenter beyond 100 kpc, and could be approaching the Milky Way for the first time. The current orbit of Tuc IV is similar to that of Gru II, with a pericenter of 25+11 −8 kpc and an eccentricity of 0.36+0.13 −0.06. However, a backward integration of the position of Tuc IV demonstrates that it collided with the Large Magellanic Cloud at an impact parameter of 4 kpc ∼ 120 Myr ago, deflecting its trajectory and possibly altering its internal kinematics. Based on their sizes, masses, and metallicities, we classify Gru II and Tuc IV as likely dwarf galaxies, but the nature of Tuc V remains uncertain.
DENIS ERKAL, Alis J. Deason, Vasily Belokurov, Xiang-Xiang Xue, Sergey E. Koposov, Sarah A. Bird, Chao Liu, Iulia T. Simion, Chengqun Yang, Lan Zhang, Gang Zhao (2021)Detection of the LMC-induced sloshing of the Galactic halo, In: Monthly Notices of the Royal Astronomical Society Oxford University Press
A wealth of recent studies have shown that the LMC is likely massive, with a halo mass > 10¹¹ M⊙. One consequence of having such a nearby and massive neighbour is that the inner Milky Way is expected to be accelerated with respect to our Galaxy's outskirts (beyond ~ 30 kpc). In this work we compile a sample of ~ 500 stars with radial velocities in the distant stellar halo, rGC > 50 kpc, to test this hypothesis. These stars span a large fraction of the sky and thus give a global view of the stellar halo. We find that stars in the Southern hemisphere are on average blueshifted, while stars in the North are redshifted, consistent with the expected, mostly downwards acceleration of the inner halo due to the LMC. We compare these results with simulations and find the signal is consistent with the infall of a 1.5 × 10¹¹ M⊙ LMC. We cross-match our stellar sample with Gaia DR2 and find that the mean proper motions are not yet precise enough to discern the LMC's effect. Our results show that the Milky Way is significantly out of equilibrium and that the LMC has a substantial effect on our Galaxy.
BMC Chemistry
Optimal partner wavelength combination method applied to NIR spectroscopic analysis of human serum globulin
Yun Han1,
Yun Zhong2,
Huihui Zhou1 &
Xuesong Kuang1
BMC Chemistry volume 14, Article number: 37 (2020)
Human serum globulin (GLB), which contains various antibodies in healthy human serum, is of great significance for clinical trials and disease diagnosis. In this study, GLB in human serum was rapidly analyzed by near infrared (NIR) spectroscopy without chemical reagents. The optimal partner wavelength combination (OPWC) method was employed to select discrete informative wavelengths. In the OPWC, redundant wavelengths are removed by a repeated projection iteration based on binary linear regression, and the result converges to a stable number of wavelengths; the convergence of the algorithm was also proved theoretically. Moving window partial least squares (MW-PLS) and Monte Carlo uninformative variable elimination PLS (MC-UVE-PLS), two well-performing wavelength selection methods, were carried out for comparison. The optimal models obtained with the three methods gave root-mean-square errors of cross validation and correlation coefficients of prediction (SECV, RP,CV) of 0.813 g L−1 and 0.978 for OPWC combined with PLS (OPWC-PLS), 0.804 g L−1 and 0.979 for MW-PLS, and 1.153 g L−1 and 0.948 for MC-UVE-PLS, respectively. OPWC-PLS and MW-PLS achieved almost equally good results; however, the OPWC model contained only 28 wavelengths and therefore had markedly lower model complexity. OPWC-PLS thus provides excellent prediction performance for GLB, and its algorithm is convergent and fast. The results provide important technical support for the rapid analysis of serum.
Near infrared (NIR) spectroscopy is a green and rapidly developing analytical technique that has been widely used in the life sciences [1,2,3,4,5,6,7], agricultural products and food [8,9,10,11], soil [12,13,14], and other fields [15, 16]. For NIR spectroscopic analysis of complex systems, wavelength selection is both necessary and difficult. Many wavelength selection methods, in both continuous and discrete modes, have been applied successfully in NIR spectroscopic analysis, but no generally effective method has yet been established. Moving window partial least squares (MW-PLS) is a widely used and well-performing wavelength selection method: a moving window whose position and size can be varied is used to identify and select continuous wavebands according to their prediction performance, and such wavebands can correspond to the absorption of specific functional groups [13, 15, 16]. MW-PLS achieves high prediction accuracy on most spectral data sets, so it is often used as a benchmark against which new methods are evaluated. However, as previous studies show [16,17,18], it is a traversal algorithm that screens all possible continuous wavebands and is therefore time-consuming on large datasets. Monte Carlo uninformative variable elimination by PLS (MC-UVE-PLS) is a popular method for discrete wavelength selection [19]; it introduces artificial noise variables to identify and eliminate uninformative variables, but it does not achieve satisfactory prediction results for some data sets.
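To make the waveband traversal concrete, the following is a minimal sketch of an MW-PLS-style search; it is not the implementation used in the cited studies. It assumes a spectral matrix X (samples × wavelengths), a reference-value vector y, and scikit-learn's PLSRegression; the window widths and the cap on the number of PLS factors are illustrative choices.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

def mw_pls_search(X, y, window_widths, max_factors=10):
    """Scan every contiguous waveband (window) and return the window whose
    leave-one-out PLS predictions give the lowest RMSE (i.e. the SECV)."""
    n_wavelengths = X.shape[1]
    best_secv, best_window = np.inf, None
    for width in window_widths:
        for start in range(n_wavelengths - width + 1):
            X_win = X[:, start:start + width]
            k = min(max_factors, width)          # factors cannot exceed window size
            pred = cross_val_predict(PLSRegression(n_components=k),
                                     X_win, y, cv=LeaveOneOut())
            secv = np.sqrt(np.mean((pred.ravel() - y) ** 2))
            if secv < best_secv:
                best_secv, best_window = secv, (start, width)
    return best_window, best_secv
```

The exhaustive double loop over window position and width is what makes this approach accurate but slow on large wavelength grids; in practice the number of PLS factors is usually scanned as well, which multiplies the cost further.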
Serum globulin (GLB), which is synthesized by the human monocyte–phagocyte system, contains the various antibodies present in the serum of healthy people and therefore enhances the body's resistance to infection. It is mainly used for immunodeficiency diseases and for the prevention and treatment of viral and bacterial infections such as infectious hepatitis, measles, chickenpox, mumps and herpes zoster. It can also be used in asthma, allergic rhinitis, eczema and other endogenous allergic diseases. The GLB level in human serum is therefore very important for clinical trials and disease diagnosis. In previous studies [20, 21], FTIR/ATR spectroscopy was used for the determination of GLB; for blood indices, however, NIR spectroscopy has been found to give higher quantitative accuracy than FTIR/ATR spectroscopy [6, 22]. The experimental results show that the molecular absorption information of GLB can be captured by NIR spectroscopy without reagents.
Optimal partner wavelength combination (OPWC) is an iterative method for selecting discrete informative wavelengths. In this method, the best partner of each wavelength in a predetermined wavelength region is determined based on binary linear regression (BLR), giving a partner wavelength subset (PWS); the best partner of each wavelength in the PWS is then obtained in the same way. The iterative process is continued until convergence, and the final wavelength subset is called the OPWC. A PLS model is then established on the OPWC. In order to make full use of the samples, leave-one-out cross validation (LOOCV) was adopted.
Because human serum is a complex multi-component system and the absorption interference from other components is severe, it is difficult to extract the characteristic information of GLB. Therefore, the OPWC-PLS method was employed to remove redundant wavelengths and establish a high-precision quantitative model. The MW-PLS and MC-UVE-PLS methods were also applied for comparison. Experimental results showed that OPWC-PLS has excellent prediction performance and that the algorithm is convergent and fast.
A total of 230 human serum samples were collected in this experiment, and their GLB values were determined using routine clinical biochemical tests. All individual participants provided written informed consent, and the study protocol was performed in accordance with relevant laws and institutional guidelines and was approved by the local medical institutions and ethics committee. The measured values were used as reference values in the NIR spectroscopic analysis. The statistical analysis of the measured GLB values of the 230 samples is given in Table 1.
Table 1 Statistical analysis of measured GLB values of 230 samples
The spectroscopy instrument was an XDS Rapid Content™ Liquid Grating Spectrometer (FOSS, Denmark) equipped with a transmission accessory and a 2 mm cuvette. The spectral scanning range was 780–2498 nm with a 2 nm wavelength gap; the detectors were Si (780–1100 nm) and PbS (1100–2498 nm). The temperature and relative humidity of the laboratory were 25 ± 1 °C and 46 ± 1% RH, respectively. Each sample was measured three times, and the mean of the three measurements was used for modeling.
Modeling process
Leave-one-out cross validation (LOOCV) is commonly used as the objective function for model selection, as it makes full use of the sample information. In this study, LOOCV was used throughout the modeling process, as described below. One sample at a time was left out of the modeling set for prediction, and the remaining samples were used as the calibration set; this process was repeated until a prediction value was obtained for every modeling sample. The measured and predicted values of the ith sample in the modeling set were denoted as \( C_{\text{M},i} \) and \( \tilde{C}_{\text{M},i} \), \( i = 1, 2, \ldots, n_{\text{M}} \), where \( n_{\text{M}} \) is the number of modeling samples. For all samples, the mean measured value was denoted as \( C_{\text{M,Ave}} \) and the mean predicted value as \( \tilde{C}_{\text{M,Ave}} \). The prediction accuracy was evaluated by the root-mean-square error of cross validation and the predicted correlation coefficient, denoted SECV and RP,CV, respectively. The calculation formulas are as follows:
$$ \text{SECV} = \sqrt{\frac{\sum\nolimits_{i = 1}^{n_{\text{M}}} (\tilde{C}_{\text{M},i} - C_{\text{M},i})^{2}}{n_{\text{M}}}}, $$
$$ R_{\text{P,CV}} = \frac{\sum\nolimits_{i = 1}^{n_{\text{M}}} (C_{\text{M},i} - C_{\text{M,Ave}})(\tilde{C}_{\text{M},i} - \tilde{C}_{\text{M,Ave}})}{\sqrt{\sum\nolimits_{i = 1}^{n_{\text{M}}} (C_{\text{M},i} - C_{\text{M,Ave}})^{2} \sum\nolimits_{i = 1}^{n_{\text{M}}} (\tilde{C}_{\text{M},i} - \tilde{C}_{\text{M,Ave}})^{2}}} $$
The model parameters were selected to achieve minimum SECV.
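As a minimal sketch (not the authors' MATLAB implementation), the LOOCV evaluation described above can be written as follows, assuming `X` is an n_samples × n_wavelengths array of NIR spectra and `y` the vector of reference GLB values; the function name is illustrative only.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut

def loocv_secv_rpcv(X, y, n_factors):
    """Return (SECV, R_P,CV) of a PLS model with n_factors latent variables under LOOCV."""
    y_pred = np.empty_like(y, dtype=float)
    for train_idx, test_idx in LeaveOneOut().split(X):
        pls = PLSRegression(n_components=n_factors)
        pls.fit(X[train_idx], y[train_idx])
        y_pred[test_idx] = pls.predict(X[test_idx]).ravel()
    secv = np.sqrt(np.mean((y_pred - y) ** 2))   # SECV formula above
    rpcv = np.corrcoef(y, y_pred)[0, 1]          # Pearson correlation R_P,CV
    return secv, rpcv
```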
MW-PLS method
MW-PLS is a time-tested and popular method for screening continuous wavebands. This method uses several continuous wavelengths as a window, whose size and position can be changed, and PLS models are established for all possible windows in a predetermined search region of the spectrum. The informative waveband is selected according to the minimum SECV. In this study, the search range of MW-PLS was the full spectrum region (780–2498 nm) with 860 wavelengths, and the initial wavelength (I) and number of wavelengths (N) of the window, as well as the number of PLS factors (F), were set as \( I \in \{ 780,\;782, \ldots ,\;2498\} \), \( N \in \{ 1,\;2, \ldots ,\;200\} \cup \{ 210,\;220, \ldots ,\;860\} \), and \( F \in \{ 1,\;2, \ldots ,20\} \). LOOCV of the PLS model was performed for each combination of (I, N, F), and the corresponding SECV and RP,CV were calculated. The optimal waveband with minimum SECV was selected to achieve the best prediction accuracy.
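A minimal sketch of this grid search over (I, N, F), assuming spectra sampled every 2 nm starting at 780 nm (so column 0 corresponds to 780 nm) and reusing the hypothetical `loocv_secv_rpcv` helper from the previous sketch; it is not the authors' implementation and ignores any runtime optimization.

```python
import numpy as np

def mwpls_search(X, y, window_sizes, max_factors=20):
    """Select the window position, window size and factor number with minimum SECV."""
    n_wl = X.shape[1]
    best = {"secv": np.inf}
    for n in window_sizes:                        # window size N
        for start in range(n_wl - n + 1):         # window position (initial wavelength I)
            Xw = X[:, start:start + n]
            for f in range(1, min(max_factors, n) + 1):   # number of PLS factors F
                secv, rpcv = loocv_secv_rpcv(Xw, y, f)
                if secv < best["secv"]:
                    best = {"secv": secv, "rpcv": rpcv, "factors": f,
                            "start_nm": 780 + 2 * start, "n_wavelengths": n}
    return best
```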
MC-UVE-PLS method
MC-UVE-PLS is a representative method for screening discrete wavelengths. In this method, many models are established with randomly selected calibration samples, the stability of the regression coefficients across these models is calculated, and each variable is evaluated by the stability of its corresponding coefficient [19]. In this study, the MC-UVE method was performed on the full spectrum region with 500 Monte Carlo sampling runs. The number of variables was determined using the method in Ref. [19]. MC-UVE-PLS was rerun 50 times and the best result was recorded for further analysis. The number of PLS factors F was set to \( F \in \{ 1,\;2, \ldots ,30\} \).
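A minimal sketch of the MC-UVE stability calculation described above; the variable-cutoff rule based on artificial noise variables in Ref. [19] is omitted, and the sampling fraction is an assumption rather than a value reported in the paper.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def mc_uve_stability(X, y, n_factors, n_runs=500, frac=0.8, seed=0):
    """Per-wavelength stability = mean(coefficient) / std(coefficient) over MC runs."""
    rng = np.random.default_rng(seed)
    n_samples = X.shape[0]
    coefs = np.empty((n_runs, X.shape[1]))
    for r in range(n_runs):
        # random calibration subset for this Monte Carlo run
        idx = rng.choice(n_samples, size=int(frac * n_samples), replace=False)
        pls = PLSRegression(n_components=n_factors).fit(X[idx], y[idx])
        coefs[r] = pls.coef_.ravel()
    return coefs.mean(axis=0) / coefs.std(axis=0)

# Wavelengths are then ranked by |stability|; the least stable ones are treated
# as uninformative and eliminated before building the final PLS model.
```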
OPWC-PLS method
Based on BLR, the best partner of each wavelength in the entire scanning region was screened and a partner wavelength subset (PWS) was determined. Then, a new PWS was determined from all wavelengths in the PWS according to the correspondence obtained above. The same procedure was performed repeatedly until the results converged to the optimal partner wavelength combination (OPWC). The specific steps are as follows:
Step 1 Assume that there are N wavelengths in the wavelength screening area \( \Delta \), namely, \( \Delta = \{ \lambda_{1}, \lambda_{2}, \ldots, \lambda_{N} \} \). For any fixed \( \lambda_{i} \in \Delta \) and \( \forall \lambda_{k} \in \Delta, k \ne i \), LOOCV was performed based on binary linear regression of the wavelength combination \( (\lambda_{i}, \lambda_{k}) \). The best partner of \( \lambda_{i} \), denoted \( f(\lambda_{i}) \), was identified by the minimum \( \text{SECV}(\lambda_{i}, \lambda_{k}) \). The formula is as follows,
$$ \text{SECV}(\lambda_{i}, f(\lambda_{i})) = \min_{\begin{subarray}{l} k = 1,2, \ldots ,N \\ k \ne i \end{subarray}} \text{SECV}(\lambda_{i}, \lambda_{k}) $$
The set \( f(\Delta) \) was the partner wavelength subset (PWS(1)) of \( \Delta \), and its number of wavelengths was denoted by N(1). Theoretically, the best partner \( f(\lambda_{i}) \) of each wavelength \( \lambda_{i} \) is unique, but several different wavelengths may have the same best partner. If some \( \lambda \) was not the best partner of any wavelength, then \( \lambda \notin \) PWS(1), and N(1) < N.
Step 2 According to the projection \( f \) defined above, the partner wavelength subset (PWS(2)) of PWS(1) could be obtained. It will be proved below that the PWS converges to a stable number of wavelengths after a finite number of projection iterations. Suppose that the PWS converges after s iterations, i.e. N(s) = N(s+1). The PWS(s) was called the optimal partner wavelength combination (OPWC). In the OPWC, each wavelength was the best partner of some other wavelength.
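A minimal sketch of these two steps (not the authors' MATLAB code): `secv_blr` evaluates the LOOCV SECV of a binary linear regression on one wavelength pair, and `opwc` computes the best-partner map f once and then iterates its image until the subset stops shrinking. The implementation is deliberately naive (O(N²) LOOCV fits) and the names are illustrative.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut

def secv_blr(X2, y):
    """LOOCV SECV of a binary linear regression on a two-wavelength matrix X2."""
    y_pred = np.empty_like(y, dtype=float)
    for tr, te in LeaveOneOut().split(X2):
        y_pred[te] = LinearRegression().fit(X2[tr], y[tr]).predict(X2[te])
    return np.sqrt(np.mean((y_pred - y) ** 2))

def opwc(X, y):
    """Return the OPWC column indices by iterating the best-partner projection f."""
    n_wl = X.shape[1]
    # Step 1: best partner f(i) of every wavelength i, chosen by minimum SECV(i, k)
    f = np.empty(n_wl, dtype=int)
    for i in range(n_wl):
        others = [k for k in range(n_wl) if k != i]
        secvs = [secv_blr(X[:, [i, k]], y) for k in others]
        f[i] = others[int(np.argmin(secvs))]
    # Step 2: iterate PWS(s+1) = f(PWS(s)) until N(s+1) == N(s)
    current = set(range(n_wl))
    while True:
        new = {int(f[i]) for i in current}
        if len(new) == len(current):   # converged: this subset is the OPWC
            return sorted(new)
        current = new
```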
Proof of convergence of the algorithm
(1) If \( \forall i, j,\; i \ne j,\; \lambda_{i} \ne \lambda_{j} \), we have \( f(\lambda_{i}) \ne f(\lambda_{j}) \), then the projection \( f \) is a one-to-one mapping defined on \( \Delta \) and \( f(\Delta) = \Delta \), i.e. the PWS stops shrinking after this projection.
(2) If \( \exists i, j,\; i \ne j,\; \lambda_{i} \ne \lambda_{j} \) with \( f(\lambda_{i}) = f(\lambda_{j}) \), then \( f(\Delta) \) is a proper subset of \( \Delta \), written as \( f(\Delta) = \{ f(\lambda_{i}) \mid \lambda_{i} \in \Delta \} = \{ \lambda_{1}^{(1)}, \lambda_{2}^{(1)}, \ldots, \lambda_{N^{(1)}}^{(1)} \} \), with N(1) < N. Next consider the projection of \( f(\Delta) \), i.e. \( f^{(2)}(\Delta) \): (a) If \( \forall i, j,\; i \ne j,\; \lambda_{i}^{(1)} \ne \lambda_{j}^{(1)} \), we have \( f(\lambda_{i}^{(1)}) \ne f(\lambda_{j}^{(1)}) \), then \( f \) is a one-to-one mapping defined on \( f(\Delta) \) and \( f^{(2)}(\Delta) = f(\Delta) \), i.e. the PWS stops shrinking after this projection. (b) If \( \exists i, j,\; i \ne j,\; \lambda_{i}^{(1)} \ne \lambda_{j}^{(1)} \) with \( f(\lambda_{i}^{(1)}) = f(\lambda_{j}^{(1)}) \), then \( f^{(2)}(\Delta) \) is a proper subset of \( f(\Delta) \), written as \( f^{(2)}(\Delta) = \{ f(\lambda_{i}^{(1)}) \mid \lambda_{i}^{(1)} \in f(\Delta) \} = \{ \lambda_{1}^{(2)}, \lambda_{2}^{(2)}, \ldots, \lambda_{N^{(2)}}^{(2)} \} \), with N(2) < N(1) < N.
Similarly, consider the projection of \( f^{(s-1)}(\Delta) \), i.e. \( f^{(s)}(\Delta) \): (a) If \( \forall i, j,\; i \ne j,\; \lambda_{i}^{(s-1)} \ne \lambda_{j}^{(s-1)} \), we have \( f(\lambda_{i}^{(s-1)}) \ne f(\lambda_{j}^{(s-1)}) \), then \( f \) is a one-to-one mapping defined on \( f^{(s-1)}(\Delta) \) and \( f^{(s)}(\Delta) = f^{(s-1)}(\Delta) \), i.e. the PWS stops shrinking after this projection. (b) If \( \exists i, j,\; i \ne j,\; \lambda_{i}^{(s-1)} \ne \lambda_{j}^{(s-1)} \) with \( f(\lambda_{i}^{(s-1)}) = f(\lambda_{j}^{(s-1)}) \), then \( f^{(s)}(\Delta) \) is a proper subset of \( f^{(s-1)}(\Delta) \), written as \( f^{(s)}(\Delta) = \{ f(\lambda_{i}^{(s-1)}) \mid \lambda_{i}^{(s-1)} \in f^{(s-1)}(\Delta) \} = \{ \lambda_{1}^{(s)}, \lambda_{2}^{(s)}, \ldots, \lambda_{N^{(s)}}^{(s)} \} \), with \( N^{(s)} < N^{(s-1)} < \cdots < N \). Because the total number of wavelengths (N) is finite, the number of projections needed is finite.
In this study, the wavelength screening region for GLB spanned the entire scanning region (780–2498 nm), i.e. \( \Delta = \{ 780, 782, \ldots, 2498 \} \), with 860 wavelengths. The number of PLS factors F was set to \( F \in \{ 1, 2, \ldots, 20 \} \).
The computer algorithms for the three methods discussed above were designed using MATLAB version 7.6.
Results with MW-PLS
The NIR spectra of the 230 human serum samples in the scanning region (780–2498 nm) are shown in Fig. 1. As can be seen from the figure, the absorption around 2000 nm and 2400 nm shows obviously strong noise. In order to obtain satisfactory results, wavelength selection must be carried out to overcome the noise interference. For comparison, a PLS model of the full spectrum region was first established; the corresponding SECV and RP,CV were 1.423 g L−1 and 0.935, respectively.
NIR spectra of 230 human serum samples in the scanning area (780–2498 nm)
The MW-PLS method was performed to optimize the waveband and improve prediction accuracy. According to the minimum SECV value, the optimal MW-PLS model was selected. The corresponding waveband was 1504 to 1820 nm, located in the long-wave NIR region (1100 to 2498 nm). The prediction effects (SECV and RP,CV) and parameters of the above two methods are summarized in Table 2. The results showed that the predicted values were highly correlated with the clinical measurements for both methods, and, compared with the optimal PLS model in the full spectrum region, the optimal MW-PLS model achieved a better prediction effect with fewer wavelengths.
Table 2 Prediction effects of three methods
Results with MC-UVE-PLS
The MC-UVE method was performed to eliminate uninformative variables. Based on the parameter settings in section "MC-UVE-PLS method", 180 wavelengths were selected, and the SECV and RP,CV of the corresponding PLS model were 1.153 g L−1 and 0.948, respectively. Compared with the result of PLS in the full spectrum range, the prediction ability of this method was not significantly improved, which may be because it only eliminates uninformative variables without considering the influence of interfering variables, while serum is a complex system containing multiple interfering components.
Results with OPWC-PLS
The OPWC method was performed to screen informative wavelengths following the steps described in section "OPWC-PLS method". First, the best partners of all 860 wavelengths were determined according to the results of the LOOCV-BLR analysis, yielding 104 distinct best partners, and PWS(1) with 104 wavelengths was obtained. Thus, the number of wavelengths was greatly reduced after the first projection. The correspondence between all 860 wavelengths and their best partners is shown in Fig. 2. As shown in the figure, some wavelengths had the same best partner; for example, 2156 nm and 2190 nm appeared as the best partners of other wavelengths 3 and 8 times, respectively, so the projection \( f \) was not a one-to-one mapping in the whole spectral region \( \Delta \). Obviously, \( f(\Delta) \) was a proper subset of \( \Delta \) and the projection continued.
Best partners of 860 wavelengths in the full spectrum region
Based on the correspondence determined above, the best partner of each \( \lambda_{i}^{(1)} \) was easily selected, and PWS(2) was obtained. The same process was repeated for PWS(2) to obtain PWS(3). As the projection proceeded, the number of wavelengths decreased gradually until the number of wavelengths in PWS(6) no longer changed. PWS(6) was the OPWC and contained only 28 wavelengths. Figure 3 shows the 28 wavelengths and their best partners. As the figure shows, the 28 wavelengths are divided into 14 groups, and the two wavelengths in each group are the best partners of each other.
Best partners of the selected 28 wavelengths
Based on PLS, LOOCV was performed for every PWS, and the corresponding minimum SECV value and number of wavelengths (N(s)) are shown in Fig. 4. As shown in the figure, N(s) and the minimum SECV value follow almost the same trend: after the first projection both decrease rapidly, and, because the remaining wavelengths are more informative, they decrease only slowly as the number of projections increases further. This may be because a large amount of noise and background information was removed from the original spectrum after the first projection, so both N(s) and the minimum SECV value decreased rapidly, whereas the partner wavelength subset of the original spectrum contained little redundant information, so N(s) and the minimum SECV value decreased slowly in the later projection iterations.
Number of wavelengths and minimum SECV value for each projection
Comparison of OPWC-PLS and MW-PLS methods
Screening the informative wavelengths of GLB in human serum, a multi-component complex system, is difficult and complicated. The wavelengths selected by the OPWC-PLS and MW-PLS methods, which correspond to the information of GLB, are shown in Fig. 5. As indicated in Fig. 5, the wavelengths selected by the OPWC method have a wider distribution range and partially coincide with the wavelengths selected by MW-PLS. This may be because the local character of the MW-PLS window prevents some wavelengths from being detected, which reflects the complexity of NIR model optimization as well as the commonalities and differences between the methods.
Position of the wavelengths selected by MW-PLS and OPWC-PLS, located on the average spectrum
Figure 6 shows the relationship between the predicted and measured GLB values based on the MW-PLS and OPWC-PLS methods, respectively. The prediction effects and the corresponding parameters N and F are summarized in Table 2. The SECV and RP,CV were 0.813 g L−1 and 0.978 with OPWC-PLS, and 0.804 g L−1 and 0.979 with MW-PLS, respectively. The results show that, like MW-PLS, the prediction effect of OPWC-PLS was obviously better than that of the full-spectrum PLS, and that OPWC is an effective wavelength screening method. This demonstrates that better prediction results can be achieved with fewer wavelengths, and one can conclude that it is necessary to perform wavelength selection before building a calibration model. The two methods achieved almost the same good prediction results (SECV and RP,CV); however, the optimal OPWC-PLS model adopted only 28 wavelengths, while the MW-PLS model adopted 159. Therefore, the OPWC method has great performance for wavelength selection.
Relationship between the predicted values and measured values of GLB based on a MW-PLS and b OPWC-PLS methods
The differences between the OPWC-PLS and MW-PLS predictions for GLB illustrate that MW-PLS can achieve slightly higher prediction accuracy, but it is time-consuming and employs more wavelengths, whereas OPWC-PLS can achieve prediction results similar to MW-PLS in less time. In addition, MW-PLS, as a continuous wavelength screening method, is more suitable for analytes whose molecular absorption bands are relatively concentrated, while OPWC-PLS, as a discrete wavelength screening method, may be more suitable for analytes whose molecular absorption bands are relatively fragmented.
The GLB content in human serum has important reference value for clinical trials and disease diagnosis. In this study, the OPWC-PLS method was employed for rapid analysis of GLB based on NIR spectroscopy, with the MW-PLS and MC-UVE-PLS methods used for comparison. The results indicate that the OPWC-PLS and MW-PLS methods achieved satisfactory prediction results, while the MC-UVE-PLS method was not suitable for the data set of this study, as the prediction effect of its model was not significantly improved. The optimal OPWC-PLS model adopted 28 wavelengths, and the corresponding SECV and RP,CV were 0.813 g L−1 and 0.978, respectively. The optimal MW-PLS model adopted 159 wavelengths, and the corresponding SECV and RP,CV were 0.804 g L−1 and 0.979, respectively. OPWC-PLS achieved almost the same prediction effect as MW-PLS with faster speed and fewer wavelengths. Therefore, OPWC is an efficient approach for informative wavelength selection.
The predicted GLB values obtained by MW-PLS and OPWC-PLS were highly correlated with the reference values. Compared with traditional methods, the NIR spectroscopy-based method has the merits of rapidity, simplicity, and requiring no chemical reagents. Therefore, the results have important reference value for the rapid determination of GLB. In addition, the wavelengths selected by the two methods partially overlap, reflecting the commonalities and differences between the methods.
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
GLB: Globulin
NIR: Near infrared
OPWC: Optimal partner wavelength combination
MW-PLS: Moving window partial least squares
MC-UVE: Monte Carlo uninformative variable elimination
SECV: Root-mean-square error of cross validation
RP,CV: Correlation coefficient of prediction
BLR: Binary linear regression
PWS: Partner wavelength subset
LOOCV: Leave-one-out cross validation
SD: Standard deviation
Chen JM, Peng LJ, Han Y et al (2018) A rapid quantification method for the screening indicator for β-thalassemia with near-infrared spectroscopy. Spectrochim Acta A. 193:499–506
Han Y, Pan T, Zhou HH, Yuan R (2018) ATR-FTIR spectroscopy with equidistant combination PLS method applied for rapid determination of glycated hemoglobin. Anal Methods 10:3455–3461
Yao LJ, Tang Y, Yin ZW et al (2017) Repetition rate priority combination method based on equidistant wavelengths screening with application to NIR analysis of serum albumin. Chemom Intell Lab Syst. 162:191–196
Han Y, Chen JM, Pan T, Liu GS (2015) Determination of glycated hemoglobin using near-infrared spectroscopy combined with equidistant combination partial least squares. Chemom Intell Lab Syst. 145:84–92
Lee Y, Lee S, In JY et al (2008) Prediction of plasma hemoglobin concentration by near-infrared spectroscopy. J Korean Med Sci 23:674–677
Pan T, Liu JM, Chen JM et al (2013) Rapid determination of preliminary thalassaemia screening indicators based on near-infrared spectroscopy with wavelength selection stability. Anal Methods 5(17):4355–4362
Yao LJ, Lyu N, Chen JM et al (2016) Joint analyses model for total cholesterol and triglyceride in human serum with near-infrared spectroscopy. Spectrochim Acta A. 159:53–59
Lyu N, Chen JM, Pan T et al (2016) Near-infrared spectroscopy combined with equidistant combination partial least squares applied to multi-index analysis of corn. Infrared Phys Technol 76:648–654
Guo HS, Chen JM, Pan T et al (2014) Vis-NIR wavelength selection for non-destructive discriminant analysis of breed screening of transgenic sugarcane. Anal Methods 6(10):8810–8816
Chen JY, Iyo C, Kawano S (2002) Effect of multiplicative scatter correction on wavelength selection for near infrared calibration to determine fat content in raw milk. J Near Infrared Spec. 10(4):301–307
Liu ZY, Liu B, Pan T et al (2013) Determination of amino acid nitrogen in tuber mustard using near-infrared spectroscopy with waveband selection stability. Spectrochim Acta A. 102:269–274
Pan T, Li MM, Chen JM (2014) Selection method of quasi-continuous wavelength combination with applications to the near-infrared spectroscopic analysis of soil organic matter. Appl Spectrosc 68(3):263–271
Pan T, Han Y, Chen JM et al (2016) Optimal partner wavelength combination method with application to near-infrared spectroscopic analysis. Chemom Intell Lab Syst. 156:217–223
Chen JM, Pan T, Liu GS et al (2014) Selection of stable equivalent wavebands for near-infrared spectroscopic analysis of total nitrogen in soil. J Innov Opt Health Sci. 7(4):1–9
Pan T, Chen ZH, Chen JM et al (2012) Near-infrared spectroscopy with waveband selection stability for the determination of COD in sugar refinery wastewater. Anal Methods 4(4):1046–1052
Li HD, Liang YZ, Xu QS et al (2009) Key wavelengths screening using competitive adaptive reweighted sampling method for multivariate calibration. Anal Chim Acta 648:77–84
Jiang JH, Berry RJ, Siesler HW et al (2002) Wavelength interval selection in multicomponent spectral analysis by moving window partial least-squares regression with applications to mid-infrared and near-infrared spectroscopic data. Anal Chem 74:3555–3565
Du YP, Liang YZ, Jiang JH et al (2004) Spectral regions selection to improve prediction ability of PLS models by changeable size moving window partial least squares and searching combination moving window partial least squares. Anal Chim Acta 501(2):183–191
Cai WS, Li YK, Shao XG (2008) A variable selection method based on uninformative variable elimination for multivariate calibration of near-infrared spectra. Chemometr Intell Lab. 90:188–194
Chen YF, Chen JM, Pan T et al (2015) Correlation coefficient optimization in partial least squares regression with application to ATR-FTIR spectroscopic analysis. Anal Methods 7:5780–5786
Kim YJ, Yoon G (2002) Multicomponent assay for human serum using mid-infrared transmission spectroscopy based on component-optimized spectral region selected by a first loading vector analysis in partial least-squares regression. Appl Spectrosc 56(5):625–632
Long XL, Liu GS, Pan T et al (2014) Waveband selection of reagent-free determination for thalassemia screening indicators using Fourier transform infrared spectroscopy with attenuated total reflection. J Biomed Opt 19(8):087004
This work was supported by Youth Innovation Talents Project of Colleges and Universities in Guangdong Province (No. Q18285) and Guangdong Ocean University Scientific Research Start-up Funding for the Doctoral Program (No. R17057).
Department of Data Science, Guangdong Ocean University, Haida Road 1, Mazhang District, Zhanjiang, 524088, China
Yun Han, Huihui Zhou & Xuesong Kuang
Zhanjiang No. 2 High School Hai Dong, Potou District, Zhanjiang, 524057, China
Yun Zhong
YH analyzed the spectral data of human serum samples and optimized the wavelength model, and was a major contributor in writing the manuscript. YZ and HZ carried out the spectrum experiment. XK performed model validation. All authors read and approved the final manuscript.
Correspondence to Xuesong Kuang.
Consent statement
This study was approved by Experimental Animal Management Committee of Guangdong Ocean University, and every individual participant provided written informed consent. All individual participants were voluntary and their all information is confidential. The study protocol was performed in accordance with relevant laws and institutional guidelines.
Han, Y., Zhong, Y., Zhou, H. et al. Optimal partner wavelength combination method applied to NIR spectroscopic analysis of human serum globulin. BMC Chemistry 14, 37 (2020). https://0-doi-org.brum.beds.ac.uk/10.1186/s13065-020-00689-z
Keywords: Near-infrared spectroscopy; Human serum globulin
Disproving continuity using epsilon delta
Suppose I wanted to show that $\ln(x)$ is not uniformly continuous on $\mathbb{R}_{>0}$.
I know if I negate the definition of uniform continuity, that I have to find an $\varepsilon$ such that
$ \forall \delta > 0 \quad \exists x $ :
$$ |x-y|<\delta \implies |f(x)-f(y)|\geq \varepsilon $$
Am I correct in saying I can choose epsilon without any restrictions? And for $x,y$ I have to make sure that their absolute difference remains smaller than delta, for any $\delta$ chosen, no matter how small?
Would it then be correct to state that $\varepsilon:=\frac{\ln(2)}{2}$ and $x:= \delta $ and $y:= \frac{\delta}{2}$ work, but I couldn't for example choose $x:= e $ and $y:= 1$ ?
calculus real-analysis continuity
GNUSupporter 8964民主女神 地下教會
$\begingroup$ $\forall\,\delta>0, x_\delta := \delta/2, y_\delta := \delta$. $$|f(x)-f(y)| = |\ln(x) - \ln(y)| = |\ln(x/y)| = \ln2 = 2\varepsilon > \varepsilon$$ $\endgroup$ – GNUSupporter 8964民主女神 地下教會 Feb 17 '17 at 15:37
The definition of being uniformly continuous on $I\subseteq \mathbb{R}$ is $$\forall \varepsilon > 0 \; \exists \delta > 0 \quad \forall x,y \in {I}, \; |x-y|<\delta \Rightarrow |f(x)-f(y)|<\varepsilon$$
So its negation (remembering that "not $\forall$" = "$\exists$") is $$\exists \varepsilon > 0 \; \forall \delta > 0 \quad \exists x,y \in {I}, \; |x-y|<\delta \text { and } |f(x)-f(y)|\geq\varepsilon$$
So yes, you would be correct. Of course there are some $x$ and $y$ such that $|f(x)-f(y)|$ is still going to be less than $\varepsilon$, but you just need a pair for which it's not, and whose distance is less than $\delta$, to provide the counterexample.
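For a concrete check of the choice proposed in the question, take $\varepsilon := \frac{\ln 2}{2}$ and, for every $\delta > 0$, $x := \delta$ and $y := \frac{\delta}{2}$. Then
$$ |x-y| = \frac{\delta}{2} < \delta \qquad\text{and}\qquad |f(x)-f(y)| = \left|\ln\delta - \ln\tfrac{\delta}{2}\right| = \ln 2 \geq \frac{\ln 2}{2} = \varepsilon, $$
so the required pair exists for every $\delta > 0$. The fixed pair $x := e$, $y := 1$ does not work, because $|x-y| = e-1$ fails to be smaller than $\delta$ as soon as $\delta \leq e-1$.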
AnalysisStudent0414
$\begingroup$ So as a kind of "strategy", you'd often try to construct some x and y so that they cancel out? As in, I try to construct something that makes $|f(x)-f(y)|$ greater than a certain value but still has the property of $|x-y|< \delta $ for all deltas and then choose epsilon accordingly? $\endgroup$ – Jonathan Feb 17 '17 at 15:41
$\begingroup$ Well, you can usually see where things go wrong by trying to prove that $f$ $\textit{is}$ uniformly continuous, like, in your example you would try to derive some inequality for $h$ from $$\ln(x+h)-\ln(x) < \varepsilon$$ $\endgroup$ – AnalysisStudent0414 Feb 17 '17 at 15:44
Characterisation of early responses in lead accumulation and localization of Salix babylonica L. roots
Wenxiu Xue1,
Yi Jiang1,
Xiaoshuo Shang1 &
Jinhua Zou ORCID: orcid.org/0000-0002-2740-49671
Lead (Pb) is a harmful pollutant that disrupts normal functions from the cell to the organ level. Salix babylonica is characterized by high biomass productivity, high transpiration rates, and species-specific Pb accumulation. A better understanding of the capacity of S. babylonica shoots and roots to accumulate and transport Pb, of the toxic effects of Pb, and of the subcellular distribution of Pb is therefore very important.
Pb exerted inhibitory effects on root and shoot growth at all Pb concentrations. According to the results obtained by inductively coupled plasma atomic emission spectrometry (ICP-AES), S. babylonica can be considered a plant with great phytoextraction potential, as a translocation factor (TF) value > 1 was observed in all treatment groups throughout the experiment. The Leadmium™ Green AM dye test results indicated that Pb ions initially entered elongation zone cells and accumulated in this area, and were then gradually accumulated in the meristem zone, where Pb had accumulated after 24 h of exposure. The scanning electron microscopy (SEM) and energy-dispersive X-ray analysis (EDXA) results confirmed the fluorescent probe observations and indicated that Pb was localized to the cell wall and cytoplasm. In transverse sections of the mature zone, Pb levels in the cell wall and cytoplasm of epidermal cells were the lowest compared with cortical and vessel cells, and an increasing trend in Pb content was detected in cortical cells from the epidermis towards the vascular cylinder; similar results were found for the cell wall and cytoplasm in transverse sections of the meristem. Cell damage in Pb-exposed roots was detected by propidium iodide (PI) staining, in agreement with the findings on Pb absorption in the different zones of S. babylonica roots under Pb stress.
S. babylonica L. shows great potential for Pb accumulation and Pb tolerance. The information on Pb accumulation and localization in S. babylonica roots obtained here furthers our understanding of Pb-induced toxicity and its tolerance mechanisms, and provides valuable scientific information for phytoremediation investigations of other woody plants under Pb stress.
Nowadays, the rapid growth of industrialization and human activities, such as the mining and smelting of lead (Pb) ores, can result in environmental Pb pollution and Pb entering the food chain, which poses a great risk to the health of both plants and human beings [1, 2]. Pb pollution has therefore become a critical social problem [3]. Among the heavy metals, only As exceeds Pb in toxicity [4]. Pb is not only non-essential for plant growth but often toxic to plant metabolism. It disrupts normal functions from the cell to the organ level, for example by damaging root tip meristematic cells and leaf guard cells and by disturbing the function of chloroplasts, mitochondria, nucleoli, and vacuoles [5,6,7,8]. Evidence has demonstrated that Pb can be easily absorbed, transformed, and accumulated in plant tissues, in which the roots are the primary sites of accumulation [9,10,11,12]. A high percentage of accumulated Pb is restricted to the roots, while only a small fraction is transported to the aerial parts of plants [13, 14]. Previous reports demonstrated that Pb toxicity is associated with the induction of a low mitotic index, disturbance of mitosis, visible chloroplast alterations, plant cell malformations, mitochondrial abnormalities, inward invagination of cell walls, plasma membrane distortions, oversized vacuoles, and irregular plastoglobuli formation [15,16,17,18,19,20,21].
The Salicaceae family contains the genera Salix (willows) and Populus (poplars), which comprise several woody species and hybrids. Many of them have adapted to particular ecological niches, for example sites that are nutrient-poor, dry, wet, or metal-contaminated [22, 23]. Willows, with their high biomass and their ability to readily accumulate heavy metals and translocate them to the shoots, are considered excellent phytoremediation species [24,25,26,27]. S. babylonica is a willow species that grows under a wide range of climatic conditions and is one of the most widely cultivated willow species in China [28]. This species is preferred for the phytoremediation of trace metal-contaminated land because of its easy propagation and cultivation, fast growth, large biomass, and deep root system [23, 28]. An early study reported that S. babylonica tolerates and accumulates Pb, suggesting that it has considerable potential for remediating Pb pollution [29]. However, little else is known in this respect.
Phytoremediation is an economical approach that utilizes the potential of plants to transform or eliminate environmental contaminants by accumulating them in their tissues; it is a cheap alternative that complements common, conventional methods [30, 31]. The success of phytoextraction is mainly determined by the identification of native, high-biomass-yielding species, the capability of the plants for heavy metal accumulation and translocation, and their tolerance of high heavy metal concentrations [32]. A better understanding of the uptake, accumulation, transport, and distribution of heavy metals in plants and of their toxic effects on tissues, organs, and cells is therefore needed.
In the present investigation, the early responses of S. babylonica roots exposed to different Pb concentrations, with respect to Pb uptake and accumulation, subcellular distribution, and the toxic effects of Pb on plant growth and cell damage, were studied using inductively coupled plasma atomic emission spectrometry (ICP-AES), fluorescence labeling, propidium iodide (PI) staining, scanning electron microscopy (SEM), and energy-dispersive X-ray analysis (EDXA). The data will be very valuable in improving our understanding of Pb-induced toxicity and the associated tolerance mechanisms in woody plants under Pb stress.
Effects of Pb on seedling growth
The Pb effects on S. babylonica root growth varied with different Pb concentrations after 7 d (Fig. 1). Compared to the control, Pb exerted significant inhibitory effects on roots and shoots (p < 0.05). The data also revealed that both root and shoot length decreased significantly (p < 0.05) as Pb concentrations increased.
Effects of Pb on S. babylonica root and shoot length exposed to 0, 1, 10, 50, or 100 μmol/L Pb for 7 d. Vertical bars denote the SE. Different letters indicate significant differences (n = 10, p < 0.05)
Pb accumulation
The ICP-AES data showed that the Pb levels in S. babylonica roots exposed to Pb solution for 7 d increased significantly (p < 0.05) when compared to the control and exhibited a gradually increasing trend as Pb concentrations increased (1, 10, 50 and 100 μmol/L) (Table 1). The Pb content in the stems and leaves exhibited the same trend as the roots. Pb accumulation in shoots was higher than that in the roots. At 100 μmol/L Pb treatment for 7 d, the root Pb accumulation was 78.78 ± 0.34 μg/g dry weight of the tissue, and the shoot Pb accumulation was 151.37 ± 0.16 μg/g dry weight of the tissue.
Table 1 Pb levels of different S. babylonica organs exposed to different Pb concentrations for 7 d
Translocation factors (TF) were also calculated for all treatments, and the data further confirmed the Pb accumulation in S. babylonica shoots and roots under Pb stress. The highest TF was found at the 1 and 10 μmol/L Pb treatments, and the TF values showed a decreasing trend as Pb concentrations increased (Table 1). Nevertheless, a TF value > 1 was observed in all treatment groups throughout the experiment (Table 1).
Effects of Pb on cell damage
In order to investigate the toxic effects of Pb on root tip cells, PI staining was used to visualize dead cells in longitudinal sections of S. babylonica root tips exposed to 0, 1, 10, 50, or 100 μmol/L Pb for 3, 6, 12, and 24 h (Fig. 2a–d). Red fluorescence is an indicator of cell damage. In S. babylonica root tips, the degree of Pb-induced cell damage depended on the Pb concentration and treatment time (Fig. 2a–d). No significant red fluorescence signal was observed in the control roots (Fig. 2A1–D1). Weak red fluorescence labeling gradually appeared in root tip cells exposed to 1 or 10 μmol/L Pb for 12 h (Fig. 2C2, C3), 50 μmol/L Pb for 6 h (Fig. 2B4), and 100 μmol/L Pb for 3 h (Fig. 2A5), indicating that Pb could induce cell damage as soon as 3 h after exposure. Fluorescence intensity became more pronounced as Pb concentrations increased and exposure was prolonged, and the strongest fluorescence was observed in root tip cells treated with 100 μmol/L Pb. Data from the fluorescence density analysis confirmed these findings (Fig. 3): cell damage increased significantly as Pb concentrations and treatment time increased (p < 0.05). In S. babylonica root tips, the distributions of Pb accumulation and of cell death were almost the same. The distribution of cell damage in the different root zones after exposure to 100 μmol/L Pb for 3, 6, 12, and 24 h is shown in Fig. 4. Compared with the meristem and mature zones, the proportion of necrotic cells in the elongation zone was markedly higher after 3–12 h of Pb exposure (p < 0.05). After exposure to 100 μmol/L Pb for 24 h, the level of necrotic cells in the meristem zone was significantly higher (p < 0.05) than in the elongation and mature zones.
Micrographs of S. babylonica roots using PI dye at longitudinal root tips exposed to different Pb concentrations (0, 1, 10, 50, or 100 μmol/L) for different treatment times (0, 3, 6, 12, and 24 h). A1–D1: Control without Pb for 3, 6, 12, and 24 h; A2–D2: 1 μmol/L Pb for 3, 6, 12, and 24 h; A3–D3: 10 μmol/L Pb for 3, 6, 12, and 24 h; A4–D4: 50 μmol/L Pb for 3, 6, 12, and 24 h; A5–D5: 100 μmol/L Pb for 3, 6, 12, and 24 h. Scale bar = 1 mm
Analysis of PI fluorescence density detected by Image J at longitudinal sections of roots exposed to 1, 10, 50, or 100 μmol/L Pb for 3, 6, 12, and 24 h. Vertical bars denote the SE. Different letters indicate significant differences (p < 0.05)
Distribution of PI fluorescence density detected by Image J in different zones of S. babylonica roots treated with 100 μmol/L Pb for 3, 6, 12, and 24 h. Vertical bars denote the SE. Different letters indicate significant differences (p < 0.05)
Cell damage was also observed in transverse sections of the mature zone of S. babylonica roots exposed to 0, 1, 10, 50, or 100 μmol/L Pb for 24 h (Fig. 5a–e). In control root tip cells, only very weak red fluorescence was observed, indicating little cell damage during normal plant growth (Fig. 5a). After 24 h of 1 μmol/L Pb treatment, stronger red fluorescence than in the control was detected, indicating that Pb induced more dead cells in S. babylonica roots (Fig. 5b); this toxic effect increased as Pb concentrations increased (Fig. 5c–e). At the low Pb concentration (1 μmol/L), the weak fluorescence was mainly concentrated in epidermal cells and cortical cells near the epidermis (Fig. 5b). As Pb concentrations increased, strong red fluorescence was observed: Pb-induced cell damage appeared in cortical cells near the vascular column of roots exposed to 10 μmol/L Pb (Fig. 5c), and increasing fluorescence intensity appeared in all root cortical cells treated with 50 or 100 μmol/L Pb (Fig. 5d, e). The fluorescence density analysis revealed significant Pb-induced cell damage in transverse sections of S. babylonica roots under Pb stress (p < 0.05), and this toxic effect increased with Pb concentration (Fig. 6).
Micrographs of S. babylonica roots using PI dye at transverse sections of root mature zone exposed to different Pb concentrations (0, 1, 10, 50, or 100 μmol/L) for 24 h. Scale bar = 200 μm. a: Control; b: 1 μmol/L Pb; c: 10 μmol/L Pb; d: 50 μmol/L Pb; e: 100 μmol/L Pb
Analysis of PI fluorescence density at transverse sections of root mature zone exposed to 1, 10, 50, or 100 μmol/L Pb for 24 h. Vertical bars denote the SE. Different letters indicate significant differences (p < 0.05)
Pb distribution in root tips
The Pb distribution in S. babylonica root tips exposed to 0, 1, 10, 50, or 100 μmol/L Pb for 3, 6, 12, and 24 h was examined using the Pb-specific Leadmium™ Green AM dye probe. The dye produced a bright, clear green fluorescence in the root tip cells of Pb-treated roots (Fig. 7a–d), whereas no significant green fluorescence signal was detected in control root tips (Fig. 7A1–D1). Compared with the control, a weak green fluorescence signal first appeared in root tip cells exposed to 10, 50, or 100 μmol/L Pb for 3 h (Fig. 7A3–A5), and in roots exposed to 1 μmol/L Pb for 6 h (Fig. 7B2). These data revealed that Pb could enter root cells within 3 h. The labeling of root tip cells increased with Pb concentration and exposure time, and the strongest fluorescence was observed at 100 μmol/L Pb (Fig. 7A5–D5). The fluorescence density analysis confirmed these observations (Fig. 8). The Pb distribution in the three zones of S. babylonica roots treated with 100 μmol/L Pb for 3, 6, 12, and 24 h was analyzed by Image J (Fig. 9). The Pb levels in the three zones of roots exposed to Pb for 12 h were significantly different (p < 0.05) and ordered as follows: elongation zone > meristem zone > mature zone. After 24 h of exposure, the order was: meristem zone > elongation zone > mature zone. These results show that Pb was absorbed and accumulated mainly in the meristem and elongation zones, following the same trend as the Pb-induced cell damage (Fig. 4).
Micrographs of S. babylonica roots using Leadmium Green AM dye at longitudinal sections of roots exposed to different Pb concentrations (0, 1, 10, 50, or 100 μmol/L) for different treatment times (3, 6, 12, and 24 h). a1–d1: Control without Pb for 3, 6, 12, and 24 h; a2–d2: 1 μmol/L Pb for 3, 6, 12, and 24 h; a3–d3: 10 μmol/L Pb for 3, 6, 12, and 24 h; a4–d4: 50 μmol/L Pb for 3, 6, 12, and 24 h; a5–d5: 100 μmol/L Pb for 3, 6, 12, and 24 h. Scale bar = 1 mm
Analysis of Leadamium™ Green AM dye fluorescence density detected by Image J at longitudinal sections of roots pretreated with 0, 1, 10, 50, or 100 μmol/L Pb for 3, 6, 12, and 24 h. Vertical bars denote the SE. Different letters indicate significant differences (p < 0.05)
Distribution of Leadamium™ Green AM dye fluorescence density in the meristem, elongation, and mature zones at longitudinal sections of root tips treated with 100 μmol/L Pb for 3, 6, 12, and 24 h
Subcellular localization of Pb
The SEM and EDXA results revealed the cellular localization of Pb in S. babylonica root tip cells exposed to 50 μmol/L Pb for 24 h, as well as the wt% of Pb at specific sites. In longitudinal sections, Pb ions were observed in the root cap and in the meristem, elongation, and mature zones of Pb-exposed roots. Pb levels were ordered as follows: meristem zone (2.38 wt%) > elongation zone (1.37 wt%) > mature zone (1.10 wt%) > root cap (1.05 wt%) (Fig. 10). These findings exhibited the same trend as the Leadmium™ Green AM dye probe observations. In transverse sections of the mature zone, the epidermis, cortex, and vascular cylinder were easily distinguishable (Fig. 11a). The EDXA spectra revealed that Pb was located in the epidermis, cortex, and vascular cylinder after Pb stress, and within these tissues it was detected in both the cytoplasm and the cell wall. Pb levels in the cell wall and cytoplasm of epidermal cells were the lowest compared with cortical and vessel cells; the Pb content was ordered as follows: epidermal cells < cortical cells < vessel cells (Fig. 11b–f). Notably, an increasing trend in Pb content was detected in cortical cells from the epidermis towards the vascular cylinder: cortical cells near the vascular bundle > cortical cells between the epidermis and vascular cylinder > cortical cells near the epidermis (Fig. 11c–e). The Pb distribution in the protoderm, ground meristem, and procambium of the transverse section of the meristem was similar to that of the apical meristem (Fig. 12a). The Pb content in the cell wall and cytoplasm was ordered as follows: procambium > ground meristem > protoderm (Fig. 12b–f), and in the ground meristem it gradually increased from the protoderm towards the procambium: ground meristem cells near the protoderm < ground meristem cells between the protoderm and procambium < ground meristem cells near the procambium (Fig. 12c–e). These results demonstrated that the cell wall was the main Pb storage site in the ground meristem and protoderm; moreover, Pb levels in the procambium cytoplasm were higher than in the ground meristem and protoderm. Based on the transverse sections, the Pb levels in the mature zone were lower than in the meristem zone.
SEM micrographs and Pb localization in different zones of root tip cells exposed to 50 μmol/L Pb for 24 h. a: Intact root (scale bar = 1 mm), b: Root cap (scale bar = 200 μm), c: Meristem zone (scale bar = 200 μm), d: Elongation zone (scale bar = 500 μm), e: Mature zone (scale bar = 500 μm). site of the analysis; x-axis energy [keV]
SEM micrographs and Pb localization at transverse sections in root mature zone cells exposed to 50 μmol/L Pb for 24 h. a: Transverse sections of the mature zone (scale bar = 500 μm), b: Epidermal cells (scale bar = 20 μm), c–e: Cortical cells (scale bar = 20 μm), f: Vessel cells (scale bar = 20 μm). *cell wall, cytoplasm, E: epidermis, C: cortex, and V: vascular cylinder
SEM micrographs and Pb localization at transverse sections in root meristem zone cells exposed to 50 μmol/L Pb for 24 h. a: Transverse section of meristem zone (scale bar = 200 μm), b: Protoderm (scale bar = 20 μm), c–e: Ground meristem (scale bar = 20 μm), f: Procambium (scale bar = 20 μm). *cell wall, cytoplasm, P: protoderm, G: ground meristem, and Pc: procambium
According to Buscaroli [33], the transport of Pb from the roots to the shoots is a critical step in Pb phytoextraction. The most widely recognized criteria are based on the BCF (bio-concentration factor) or TF as indicators of the phytoremediation potential of different plant species: plants exhibiting BCF or TF values ≥ 1 are considered hyperaccumulators and potential candidates for phytoextraction, whereas plants with values < 1 constitute metal excluders and are not suitable for phytoextraction [11, 33]. Notably, a TF value > 1 for Pb transport from S. babylonica roots to shoots was observed in all treatment groups throughout the experiment, so S. babylonica can be considered a plant with great phytoextraction potential. The data here show that S. babylonica is able to take up and accumulate Pb with TF > 1; that is, large amounts of Pb are transported from the roots to the shoots. This differs from other plants, including Allium sativum, Ricinus communis, Brassica juncea, Neyraudia reynaudiana, and some other willow clones, in which large amounts of Pb ions accumulate in the roots and only small amounts are transported to the shoots under Pb stress [13, 14, 25, 34, 35]. Nevertheless, several studies have reported the use of crop plants and forest plants, including poplar and willows, to remove heavy metals from contaminated soils [22, 23, 26, 36,37,38,39,40]. Other investigations indicated that hyperaccumulators, plants that accumulate heavy metals from the soil into their shoots, are immensely useful in phytoextraction [23, 36]. In this study, the TF > 1 observed for S. babylonica demonstrates that it can be considered a plant with great Pb-accumulation potential, in accordance with the findings of Chandrasekhar and Ray [11]. Pb accumulation in S. babylonica here was much lower than in the three plants (Eclipta prostrata (L.) L., Scoparia dulcis L. and Phyllanthus niruri L.) reported by Chandrasekhar and Ray [41], which may be due to the different Pb treatment method, Pb concentration, and treatment time. Wang et al. [42] indicated that the transport from belowground to aboveground parts could be explained by plants pre-adapting and improving their tolerance to heavy metals by accumulating heavy metals in the initial cuttings before rooting. In addition, more investigations are needed to further confirm the ability of S. babylonica in the phytoremediation of metal contamination.
Roots contact lead and other heavy metals directly in the soil system and are sensitive to environmental stress. The root apical meristem is crucial in the immediate stress response through the activation of signal cascades in other plant organs [15]. Excess Pb often results in environmental contamination and inhibited plant growth. Therefore, understanding Pb uptake and accumulation at root sites, as well as evaluating the mechanisms of Pb toxicity in root tip cells and their consequences for root growth and cell damage, is very important. In this study, after a short exposure period, the results demonstrated that, compared with the control, Pb restrained the growth of S. babylonica roots, and the inhibition increased with Pb concentration, in accordance with the results reported by Jiang et al. [15], Khan et al. [18], Liu et al. [34], Wierzbicka [43], and Jiang and Liu [44]. However, further research is needed to assess the long-term ecological risks of Pb contamination under field conditions.
Fluorescent Pb reagents are rarely used in plant studies; however, Leadmium™ Green AM dye has been successfully used to detect Pb in plant roots [15]. The uptake of Pb in S. babylonica root cells was investigated using the Pb-sensitive Leadmium™ Green AM dye in this study. Green fluorescence, which represents the binding of the dye to Pb, was observed in the meristem zone of S. babylonica under Pb stress. An early study indicated that the root meristem is one of the sites most sensitive to Pb toxicity [43]. The results of this study demonstrated that Pb ions first accumulated in the elongation zone of root tips after exposure to Pb and were gradually transported to the meristem zone with prolonged exposure, suggesting that the meristem of plant root tips is a target of Pb. In S. babylonica roots under Pb stress, the Leadmium Green AM data are very similar to the PI staining data, demonstrating a close relationship between Pb accumulation and cell death in Pb-exposed roots. These results support previous observations in which Pb was absorbed within hours in A. cepa root cells exposed to Pb [15, 43]. Rucińska-Sobkowiak et al. [45] demonstrated that accumulated Pb caused enlargement of the apical meristems adjacent to root caps, leading to cell wall thickening and vacuole enlargement, which helps explain plant tolerance to Pb stress.
PI is an intercalating agent and fluorescent molecule used to stain DNA and to study cell membrane damage in plant roots after exposure to heavy metals. The extent of cell membrane damage and changes in membrane integrity can be reflected by the quantity of PI that enters the cells [46,47,48,49,50]. In this study, the toxic effect of Pb on the cell membranes of S. babylonica root tip cells was confirmed by PI staining. The observed cell damage occurred mainly in the meristem and elongation zones of root tips exposed to 100 μmol/L Pb for 3 h (Fig. 4). The PI staining data are in agreement with the observations of Pb absorption in the different zones of S. babylonica roots under Pb stress. Under heavy metal stress, reactive oxygen species (ROS) production also increases. ROS interact with various cellular components and cause oxidative damage to nucleic acids, proteins, sugars, and lipids, which in turn imposes oxidative stress on intracellular membranes [50]. Under Pb stress, ROS-induced oxidative stress leads to lipid peroxidation of cell membranes, producing malondialdehyde (MDA) [17, 51]. Cell damage in Pb-exposed roots may therefore be explained by Pb causing lipid peroxidation of membranes and oxidative damage, leading to changes in the permeability and fluidity of the membrane lipid bilayer and altering cell integrity. Consequently, ROS-induced cellular damage triggers local programmed cell death, which generally affects plant growth and development.
EDXA is an analytical technique used for analyzing the localization of elements in biological specimens at the subcellular level [48, 52]. In this study, EDXA of longitudinal sections showed that Pb ions accumulated in the meristem, elongation, and mature zones of S. babylonica root tips exposed to Pb, and the accumulation and distribution of Pb exhibited the same trend as the fluorescent probe results. Additionally, the cell wall was the primary Pb accumulation site, followed by the cytoplasm. The cell wall is the first barrier against heavy metal entry into cells; by binding Pb to the cell wall, the plant reduces Pb toxicity, which is one of the mechanisms of plant tolerance. In transverse sections, Pb levels in the meristem zone were high compared with the mature zone, supporting the findings of Eun et al. [53], who demonstrated that Pb accumulation occurred in both the apoplast and the symplast and that Pb levels in the root meristem were the highest. Root hairs are located only in the root mature zone and greatly increase the absorption surface area, making the uptake of water and minerals more efficient. After Pb ions enter the roots, they penetrate the cortical tissues and are translocated to aboveground tissues; this explains why Pb levels in the mature zone are low compared with the meristem zone. Moreover, the results showed that Pb levels were ordered as follows: epidermal cells < cortical cells < vessel cells (Fig. 11b–f), indicating that Pb is readily translocated from the roots to the aboveground parts through the vascular tissue. The higher Pb levels in the meristem zone compared with the mature zone indicate that the S. babylonica root tip meristem is a target of Pb accumulation and toxicity. Meristem cells are small, have thin walls, and are undifferentiated; because they have not yet differentiated, they have a poor transport capacity. Thus, excess Pb can easily damage cell structure, inducing a low mitotic index and the production of large, damaged cells that inhibit the growth of S. babylonica seedlings.
Based on the results of this investigation, the following conclusions can be drawn. Under Pb stress, Pb ions initially entered the cells of the elongation zone and gradually accumulated in the meristem zone, and they were localized primarily in the cell wall and secondarily in the cytoplasm. Pb levels were lowest in the epidermal cells compared with cortical and vessel cells, and increased across the cortical cells from the epidermis towards the vascular cylinder. The cell damage detected by PI staining in Pb-exposed roots agreed with the Pb absorption observed in the different root zones by SEM with EDXA. After treatment with 100 μmol/L Pb for 7 d, Pb accumulation was 78.78 ± 0.34 μg/g dry weight in the roots and 151.37 ± 0.16 μg/g dry weight in the shoots. The TF was > 1 in all treatment groups, although Pb inhibited root and shoot growth. Based on these characteristics and this short-term investigation, S. babylonica appears to have great potential for the phytoextraction of Pb. The information obtained here leads to a better understanding of Pb resistance and tolerance mechanisms and provides valuable scientific input for phytoremediation studies of other woody plants under Pb stress. However, further investigation of long-term Pb accumulation and distribution at even higher Pb concentrations is still required.
Plant materials and growth conditions
The S. babylonica used in this experiment was identified and provided by Professor Wenhui Zhang of Northwest A&F University, China. The collection of the experimental materials conforms to institutional, national, and international guidelines. Healthy woody cuttings (25 cm long) from 1-year-old S. babylonica shoots grown on the campus of Tianjin Normal University, China, were collected and rooted in plastic buckets containing distilled water. Seven-day-old woody cuttings with new roots were transferred to half-strength Hoagland nutrient solution containing 0, 1, 10, 50, or 100 μmol/L Pb and grown for 7 d. The nutrient solution consisted of 5 mM Ca(NO3)2, 5 mM KNO3, 1 mM KH2PO4, 1 mM MgSO4, 50 μM H3BO3, 10 μM FeEDTA, 4.5 μM MnCl2, 3.8 μM ZnSO4, 0.3 μM CuSO4, and 0.1 μM (NH4)6Mo7O24, adjusted to pH 5.5. Control seedlings were grown in the nutrient solution alone. Solutions were continuously aerated with an aquarium air pump. Experiments were conducted in a greenhouse under a 14/10 h light/dark photoperiod at 26/18 °C (day/night) and 65–75% humidity. The roots were protected from direct sunlight. Pb was supplied as lead nitrate [Pb(NO3)2]. All treatments were performed in triplicate. The TF was calculated as follows [41]:
$$ \mathrm{TF}=\frac{\text{Metal concentration in shoot}}{\text{Metal concentration in root}} $$
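For illustration, substituting the accumulation values reported above for the 100 μmol/L treatment (151.37 μg/g dry weight in shoots and 78.78 μg/g dry weight in roots) gives TF = 151.37 / 78.78 ≈ 1.92 > 1, consistent with the conclusion that Pb is readily translocated to the aboveground parts.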
Determination of Pb
Control and experimental plants stressed with 0, 1, 10, 50, or 100 μmol/L Pb for 7 d were harvested randomly. To remove traces of nutrients and Pb ions from their surfaces, the root samples were washed sequentially with running tap water for 30 min, 20 mM disodium ethylenediaminetetraacetic acid (Na2-EDTA) for 10 min, and deionized water for 3 min. Plant tissues were divided into roots and shoots (i.e., leaves, new stems, and old stems). Roots were dried at 45 °C for 72 h, 80 °C for 24 h, and 105 °C for 12 h, and then ground with a cutting mill (IKA-Werke GmbH & Co. KG, Staufen, Germany). After weighing, dried root material (0.2 g) was digested with a mixture of HNO3 and HClO4 (4:1, v/v) at 160 °C, following the wet-digestion method [34]. Pb concentrations in the digests were then analyzed using ICP-AES (Leeman Labs Inc., Hudson, NH, USA).
PI staining
S. babylonica root tips exposed to different Pb concentrations (0, 1, 10, 50, or 100 μmol/L) for 3, 6, 12, and 24 h were washed three times with phosphate-buffered saline (PBS, pH 7.0). Samples were soaked in 1 mmol/L PI (Sigma-Aldrich, Buchs, Switzerland) at 25 °C for 8 min in the dark and then thoroughly washed with phosphate buffer (50 mmol/L, pH 7.8). According to the methods of Zou et al. [54], an Eclipse 90i laser confocal scanning microscope (Nikon Corp., Tokyo, Japan) was used to examine the samples, with the excitation maximum set at 535 nm and the fluorescence emission maximum at 617 nm. Because PI barely penetrates intact membranes, PI-triggered red fluorescence is observed only in the nuclei of damaged cells; red fluorescence is therefore an indicator of cell damage [50, 54]. Fluorescence density was analyzed using the "Analyze and Measure" function of ImageJ software (NIH, Bethesda, MD, USA).
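For readers who want to reproduce this kind of quantification outside ImageJ, the short Python sketch below computes a mean and an integrated fluorescence density from an exported confocal image. It is only an illustrative alternative to the "Analyze and Measure" workflow described above, not the authors' procedure; the file name, channel choice, and background handling are assumptions.

```python
# Minimal sketch: quantify PI fluorescence density from an exported confocal image.
# Illustrative only; the file name and background estimate are assumptions.
import numpy as np
from skimage import io

img = io.imread("root_tip_PI_100umol_3h.tif")          # hypothetical exported image
red = img[..., 0].astype(float) if img.ndim == 3 else img.astype(float)

background = np.percentile(red, 10)                     # rough background level
signal = np.clip(red - background, 0, None)

mean_density = signal.mean()                            # mean fluorescence density (a.u.)
integrated_density = signal.sum()                       # integrated density over the frame (a.u.)
print(f"mean fluorescence density: {mean_density:.2f}")
print(f"integrated density: {integrated_density:.3e}")
```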
Fluorescence labelling of Pb
S. babylonica root tips exposed to different Pb concentrations (0, 1, 10, 50, or 100 μmol/L) for 3, 6, 12, and 24 h were soaked in EDTA solution (Na2-EDTA, 20 mmol/L) and washed with running water for 15 min. The root tips were then washed three times with deionized water. Afterwards, the experimental and control roots were stained with the Pb-specific probe Leadmium™ Green AM (Molecular Probes, Invitrogen, Carlsbad, CA, USA) for 90 min at 40 °C in the dark, following the manufacturer's instructions, to visualize Pb absorption and distribution [55]; intact cells containing Pb exhibit green fluorescence from this probe. Fluorescence density was analyzed using the "Analyze and Measure" function of ImageJ software to evaluate the Pb distribution in intact roots. The prepared samples were observed with a Nikon Eclipse 90i confocal laser scanning microscope using 488 nm excitation and a 590/50 nm barrier filter.
Sample preparation for SEM and EDXA
The elemental distribution and composition of the experimental plants were determined from freeze-dried root material. S. babylonica roots treated with 50 μmol/L Pb for 24 h were removed from the Pb(NO3)2 solution and washed thoroughly. Samples (1 cm long) were cut from the root tips, soaked in 20 mM Na2-EDTA solution for 15 min, and washed three times with ddH2O for 10 min. The materials were then washed three times with PBS (pH 7.2) for 10 min. Following the methods of Shi et al. [52], the root tips were quickly frozen in liquid nitrogen and lyophilized under vacuum. The root samples were gold-coated with an Emitech K550X sputter coater (Quorum Group, London, England). EDXA was carried out with a FEI Nova NanoSEM 230 (FEI Company, Oregon, USA) equipped with a Genesis Apollo 10 EDXA system (FEI Company, Oregon, USA). Spectra were collected at 20 kV for 30–40 s using an X-ray detector with a super-ultra-thin window. Pb contents were calculated as weight percent (Wt%), i.e., the weight (mass) percentage of a given element relative to the total element weight (mass).
Each treatment included fifteen seedlings and was repeated five times to ensure statistical validity. SPSS v17.0 (SPSS Inc., Illinois, USA) and SigmaPlot v8.0 (Systat Software Inc., San Jose, CA) were used to analyze the results. The data are expressed as the mean ± standard error (SE). One-way analysis of variance (ANOVA) was applied to determine differences between treatments, and results with p < 0.05 were considered statistically significant.
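As an illustration of this statistical treatment, the minimal Python sketch below computes mean ± SE per treatment group and a one-way ANOVA at the p < 0.05 level. The numbers are placeholders rather than measured values, and the sketch is a generic alternative to the SPSS workflow, not the authors' analysis.

```python
# Minimal sketch of the statistics described above: mean ± SE and one-way ANOVA (p < 0.05).
# The replicate values are placeholders, not measured data.
import numpy as np
from scipy import stats

root_length_cm = {                      # hypothetical replicate measurements per Pb treatment
    "0 umol/L":   [12.1, 11.8, 12.4, 12.0, 11.9],
    "50 umol/L":  [9.7, 10.1, 9.5, 9.9, 9.8],
    "100 umol/L": [7.2, 7.6, 7.0, 7.4, 7.3],
}

for name, values in root_length_cm.items():
    arr = np.asarray(values, dtype=float)
    se = arr.std(ddof=1) / np.sqrt(arr.size)
    print(f"{name}: {arr.mean():.2f} ± {se:.2f} (mean ± SE)")

f_stat, p_value = stats.f_oneway(*root_length_cm.values())
verdict = "significant" if p_value < 0.05 else "not significant"
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f} ({verdict})")
```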
Pb: Lead
ICP-AES: Inductively coupled plasma atomic emission spectrometry
TF: Translocation factor
BCF: Bio-concentration factor
PI: Propidium iodide
SEM: Scanning electron microscopy
EDXA: Energy-dispersive X-ray analyses
ROS: Reactive oxygen species
MDA: Malondialdehyde
PBS: Phosphate-buffered saline
SE: Standard error
ANOVA: Analysis of variance
Obiora SC, Chukwu A, Toteu SF, Davies TC. Assessment of heavy metal contamination in soils around lead (Pb)-zinc (Zn) mining areas in Enyigba, southeastern Nigeria. J Geol Soc. 2016;87(4):453–62.
Kumar A, Kumar A, Cabral-Pinto MMS, Chaturvedi AK, Shabnam AA, Subrahmanyam G, Mondal R, Gupta DK, Malyan SK, S Kumar S, A Khan S, Yadav KK. Lead toxicity: Health hazards, influence on food chain, and sustainable remediation approaches. Int J Env Res Pub He. 2020;17(7):2179.
Khan I, Iqbal M, Shafiq F. Phytomanagement of lead-contaminated soils: critical review of new trends and future prospects. Int J Environ Sci Te. 2019;16(10):6473–88.
Zulfiqar U, Farooq M, Hussain S, Maqsood M, Hussain M, Ishfaq M, Ahmad M, Anjum MZ. Lead toxicity in plants: Impacts and remediation. J Environ Manag. 2019;250:UNSP 109557.
Leal-Alvarado DA, Espadas-Gil F, Saenz-Carbonell L, Talavera-May C, Santamaria JM. Lead accumulation reduces photosynthesis in the lead hyper-accumulator Salvinia minima baker by affecting the cell membrane and inducing stomatal closure. Aquat Toxicol. 2016;171:37–47.
Jiang Z, Zhang HN, Qin R, Zou JH, Wang JR, Shi QY, Jiang WS, Liu DH. Effects of lead on the morphology and structure of the nucleolus in the root tip meristematic cells of Allium cepa L. Int J Mol Sci. 2014;15(8):13406–23.
Ding GH, Li CY, Han X, Chi CY, Zhang DW, Liu BD. Effects of lead on ultrastructure of Isoetes sinensis palmer (Isoetaceae), a critically endangered species in China. PLoS One. 2015;10(9):e0139231.
Ferreyroa GV, Lagorio MG, Trinelli MA, Lavado RS, Molina FV. Lead effects on Brassica napus photosynthetic organs. Ecotox Environ Safe. 2017;140:123–30.
Kumar A, Prasad MNV, Sytar O. Lead toxicity, defense strategies and associated indicative biomarkers in Talinum triangulare grown hydroponically. Chemosphere. 2012;89(9):1056–65.
Saleem M, Asghar HN, Zahir ZA, Shahid M. Impact of lead tolerant plant growth promoting rhizobacteria on growth, physiology, antioxidant activities, yield and lead content in sunflower in lead contaminated soil. Chemosphere. 2018;195:606–14.
Chandrasekhar C, Ray JG. Copper accumulation, localization and antioxidant response in Eclipta alba L. in relation to quantitative variation of the metal in soil. Acta Physiol Plant. 2017;39(9):205.
Wang J, Ye S, Xue SG, Hartley W, Wu H, Shi LZ. The physiological response of Mirabilis jalapa Linn. to lead stress and accumulation. Int Biodeterior Biodegradation. 2018;128:11–4.
Liu DH, Jiang WS, Liu CJ, Xin CH, Hou WQ. Uptake and accumulation of lead by roots, hypocotyls and shoots of Indian mustard (Brassica juncea L.). Bioresource Tech. 2000;71(3):273–7.
Kiran BR, Prasad MNV. Responses of Ricinus communis L. (castor bean, phytoremediation crop) seedlings to lead (Pb) toxicity in hydroponics. Selcuk J Agri Food Sci. 2017;31(1):73–80.
Jiang Z, Qin R, Zhang HH, Zou JH, Shi QY, Wang JR, Jiang WS, Liu DH. Determination of Pb genotoxic effects in Allium cepa root cells by fluorescent probe, microtubular immunofluorescence and comet assay. Plant Soil. 2014;383(1–2):357–72.
Liu XJ, Shi QY, Zou JH, Wang JR, Wu HF, Wang JY, Jiang WS, Liu DH. Chromosome and nucleolus morphological characteristics in root tip cells of plants under metal stress. Fresenius Environ Bull. 2016;25(7):2419–26.
Hattab S, Hattab S, Flores-Casseres ML, Boussetta H, Doumas P, Hernandez LE, Banni M. Characterisation of lead-induced stress molecular biomarkers in Medicago sativa plants. Environ Exp Bot. 2016;123:1–12.
Khan MM, Islam E, Irem S, Akhtar K, Ashraf MY, Iqbal J, Liu D. Pb induced phytotoxicity in para grass (Brachiaria mutica) and castor bean (Ricinus communis l.): Antioxidant and ultrastructural studies. Chemosphere. 2018;200:257–65.
Kumar A, Prasad MNV. Plant-lead interactions: transport, toxicity, tolerance, and detoxification mechanisms. Ecotox Environ Safe. 2018;166:401–18.
El-Banna MF, Mosa A, Gao B, Yin XQ, Wang HY, Ahmad Z. Scavenging effect of oxidized biochar against the phytotoxicity of lead ions on hydroponically grown chicory: an anatomical and ultrastructural investigation. Ecotox Environ Safe. 2019;170:363–74.
Sha S, Cheng MH, Hu KJ, Zhang W, Yang YR, Xu QS. Toxic effects of Pb on Spirodela polyrhiza (L.): subcellular distribution, chemical forms, morphological and physiological disorders. Ecotox Environ Safe. 2019;181:146–54.
Zou JH, Wang G, Ji J, Wang JY, Wu HF, Ouyang YJ, Li BB. Transcriptional, physiological and cytological analysis validated the roles of some key genes linked cd stress in Salix matsudana Koidz. Environ Exp Bot. 2017;134:116–29.
Ouyang J, Li BB, Li CH, Shang XS, Zou JH. Cadmium effects on mineral accumulation and selected physiological and biochemical characters of Salix babylonica L. Pol J Environ Stud. 2017;26(6):2667–76.
Zhivotovsky OP, Kuzovkina JA, Schulthess CP, Morris T, Pettinelli D, Ge M. Hydroponic screening of willows (Salix L.) for lead tolerance and accumulation. Int J Phytoremediat. 2011;13(1):75–94.
Zhivotovsky OP, Kuzovkina YA, Schulthess CP, Morris T, Pettinelli D. Lead uptake and translocation by willows in pot and field experiments. Int J Phytoremediat. 2011;13(8):731–49.
Kersten G, Majestic B, Quigley M. Phytoremediation of cadmium and lead-polluted watersheds. Ecotox Environ Safe. 2017;137:225–32.
Zhao FL, Yang WD. Review on application of willows (Salix spp.) in remediation of contaminated environment. Acta Agriculturae Zhejiangensis. 2017;29(2):300–6.
Li H, Zhang GC, Xie HC, Li K, Zhang SY. The effects of the phenol concentrations on photosynthetic parameters of Salix babylonica L. Photosynthetica. 2015;53(3):430–5.
Wang QB, Chen GC, Fang J, Lou C, Zhang JF. Characteristics of soil lead tolerance, accumulation and distribution in Salix babylonica Linn. and Salix jiangsuensis J172. Bulletin Botanical Res. 2014;34(5):626–33.
Salazar MJ, Pignata ML. Lead accumulation in plants grown in polluted soils. Screening of native species for phytoremediation. J Geochem Explor. 2014;137:29–36.
Bernardino CAR, Mahler CF, Preussler KH, Novo LAB. State of the art of phytoremediation in Brazil-review and perspectives. Water Air Soil Pollut. 2016;227(8):272.
Koptsik GN. Problems and prospects concerning the phytoremediation of heavy metal polluted soils: a review. Eurasian Soil Sci. 2014;47(9):923–39.
Buscaroli A. An overview of indexes to evaluate terrestrial plants for phytoremediation purposes (review). Ecol Indic. 2017;82:367–80.
Liu DH, Zou J, Meng QM, Zou JH, Jiang WS. Uptake and accumulation and oxidative stress in garlic (Allium sativum L.) under lead phytotoxicity. Ecotoxicology. 2009;18(1):134–43.
Zhou CF, Huang MY, Li Y, Luo JW, Cai LP. Changes in subcellular distribution and antioxidant compounds involved in Pb accumulation and detoxification in Neyraudia reynaudiana. Environ Sci Pollut R. 2016;23(21):21794–804.
Wu HF, Wang JY, Li BB, Ouyang J, Wang JR, Shi QY, Jiang WS, Liu DH, Zou JH. Salix matsudana Koidz tolerance mechanisms to cadmium: uptake and accumulation, subcellular distribution, and chemical forms. Pol J Environ Stud. 2016;25(4):1739–47.
Courchesne F, Turmel MC, Cloutier-Hurteau B, Constantineau S, Munro L, Labrecque M. Phytoextraction of soil trace elements by willow during a phytoremediation trial in southern Quebec. Canada Int J Phytoremediat. 2017;19(6):545–54.
Ouyang J, Li BB, Xue WX, Jiang Y, Li CH, Shang XS, Zou JH. Cadmium uptake and accumulation, subcellular distribution and chemical forms in young seedlings of Salix babylonica L. Fresenius Environ Bull. 2019;28(5):3637–48.
Zou JH, Shang XS, Li CH, Ouyang J, Li BB, Liu XJ. Effects of cadmium on mineral metabolism and antioxidant enzyme activities in Salix matsudana Koidz. Pol J Environ Stud. 2019;28(2):989–99.
Shang XS, Xue WX, Jiang Y. Effects of calcium on the alleviation of cadmium toxicity in salix matsudana and its effects on other minerals. Pol J Environ Stud. 2020;29(2):2001–10.
Chandrasekhar C, Ray JG. Lead accumulation, growth responses and biochemical changes of three plant species exposed to soil amended with different concentrations of lead nitrate. Ecotox Environ Safe. 2019;171:26–36.
Wang WW, Cheng LK, Hao JW, Guan X, Tian XJ. Phytoextraction of initial cutting of Salix matsudana for Cd and Cu. Int J Phytoremediation. 2019;21(2):84–91.
Wierzbicka M. Resumption of mitotic activity in Allium cepa L. root tips during treatment with lead salts. Environ Exp Bot. 1994;34(2):173–80.
Jiang WS, Liu DH. Effects of Pb2+ on root growth, cell division and nucleolus of Brassica juncea L. Isr J Plant Sci. 1999;47(3):153–6.
Rucińska-Sobkowiak R, Nowaczyk G, Krzesłowska M, Rabęda I, Jurga S. Water status and water diffusion transport in lupine roots exposed to lead. Environ Exp Bot. 2013;87:100–9.
Liao TT, Shi YL, Jia JW, Wang L. Sensitivity of different cytotoxic responses of vero cells exposed to organic chemical pollutants and their reliability in the bio-toxicity test of trace chemical pollutants. Biomed Environ Sci. 2010;23(3):219–29.
Liao TT, Jia RW, Shi YL, Jia JW, Wang L, Chua H. Propidium iodide staining method for testing the cytotoxicity of 2,4,6-trichlorophenol and perfluorooctane sulfonate at low concentrations with vero cells. J Environ Sci Heal A. 2011;46(14):1769–75.
Wang JR, Shi QY, Zou JH, Jiang Z, Wang JY, Wu HF, Jiang WS, Liu DH. Cellular localization of copper and its toxicity on root tips of Hordeum vulgare. Fresenius Environ Bull. 2015;24(7):2394–405.
Shi QY, Wang JR, Zou JH, Jiang Z, Wu HF, Wang JY, Jiang WS, Liu DH. Cadmium localization and its toxic effects on root tips of barley. Zemdirbyste. 2016;103(2):151–8.
Wu HF, Wang JY, Ouyang J, Li BB, Jiang WS, Liu DH, Zou JH. Characterisation of early responses to cadmium in roots of Salix matsudana Koidz. Environ Toxicol Chem. 2017;99(5–6):913–25.
Ashraf U, Tang XR. Yield and quality responses, plant metabolism and metal distribution pattern in aromatic rice under lead (Pb) toxicity. Chemosphere. 2017;176:141–55.
Shi QY, Wang JR, Zou JH, Jiang Z, Wang JY, Wu HF, Jiang WS, Liu DH. Cd subcellular localization in root tips of Hordeum vulgare. Pol J Environ Stud. 2016;25(2):903–8.
Eun SO, Youn HS, Lee Y. Lead disturbs microtubule organization in root meristem of Zea mays. Physiol Plant. 2000;110:357–65.
Zou JH, Wang G, Ji J, Wang JY, Ouyang J, Li BB. Cadmium's effect on the organization of microtubular cytoskeleton in root tip cells of Salix matsudana Koidz. Pol J Environ Stud. 2018;27(2):939–46.
Piper CS. Soil and plant analysis. Australia: Monograph, Waite Agric Res Inst. The University of Adelaide; 1942.
We thank LetPub (www.letpub.com) for its linguistic assistance during the preparation of this manuscript.
This project was supported by Natural Science Foundation of China (grant No. 31901184) and Doctor Foundation of Tianjin Normal University (grant No. 52XB1914). The funding body supported the study, analysis of data and writing the manuscript.
Tianjin Key Laboratory of Animal and Plant Resistance, College of Life Science, Tianjin Normal University, Tianjin, 300387, China
Wenxiu Xue, Yi Jiang, Xiaoshuo Shang & Jinhua Zou
WX participated in plant cultivation, performed the experiments, and collected the materials. YJ participated in the data analysis and helped draft the manuscript. XS carried out material collection. JZ designed the study and drafted the manuscript. All authors have read and approved this manuscript.
Correspondence to Jinhua Zou.
Xue, W., Jiang, Y., Shang, X. et al. Characterisation of early responses in lead accumulation and localization of Salix babylonica L. roots. BMC Plant Biol 20, 296 (2020). https://doi.org/10.1186/s12870-020-02500-6
Energy-dispersive X-ray analyses (EDXA)
Fluorescence labeling
Lead (Pb)
Propidium iodide (PI)
Salix babylonica L.
Observation of ionospheric disturbances induced by the 2011 Tohoku tsunami using far-field GPS data in Hawaii
Long Tang1,
Xiaohong Zhang1 &
Zhe Li1
In this study, we employ far-field GPS total electron content (TEC) observations from Hawaii to detect the ionospheric disturbances induced by the 2011 Tohoku tsunami. We observed tsunami-driven traveling ionospheric disturbances (TIDs) at two different times: at about 12:40 UT, TIDs appeared in the disturbance series propagating at approximately 260 m/s outward from the tsunami's source, and the signals then weakened and gradually disappeared after 14:00 UT; at about 17:30 UT, however, TIDs appeared again in the disturbance series with similar propagation characteristics. Based on their observation times, the former TIDs can be attributed to the tsunami arriving directly from the mainshock, while the latter TIDs were most likely driven by tsunami waves from aftershocks. Furthermore, we also observed tsunami-like TIDs at about 11:50 UT with horizontal velocity and direction similar to the tsunami waves. However, their arrival time was about 1.5 h earlier than that of the tsunami waves at sea level, so they were probably induced by other sources.
A tsunami is generated when a large oceanic earthquake or volcanic eruption causes a rapid displacement of the ocean floor. Tsunami detection by ionospheric monitoring originates from ideas put forward by Hines (1972) and Peltier and Hines (1976). The atmospheric gravity waves (AGWs) produced by a tsunami can propagate obliquely in the atmosphere. During their upward propagation, the exponential decrease of atmospheric density leads to a significant increase of the gravity wave amplitude, as required by energy conservation. The AGWs interact with the plasma at ionospheric heights, generating traveling ionospheric disturbances (TIDs). Tsunami-driven TIDs have propagation characteristics (horizontal velocity, direction, period, and observation time) similar to those of the tsunami waves causing them (Rolland et al. 2010).
After the devastating 2004 Indian Ocean tsunami, the scientific community has shown great interest in observing tsunamis by ionospheric sounding (Artru et al. 2005; Liu et al. 2006; Occhipinti et al. 2006; Mai and Kiang 2009; Hickey et al. 2010; Rolland et al. 2010; Galvan et al. 2011; Liu et al. 2011; Makela et al. 2011; Iyemori et al. 2013; Occhipinti et al. 2013). Owing to its high spatial and temporal resolution, total electron content (TEC) derived from ground-based GPS has been widely used as the observable in these studies. Tsunami-driven TIDs have been observed in ionospheric TEC after many tsunami events using ground-based GPS stations, suggesting that the ionosphere is sensitive to tsunami waves and that ionospheric sounding has potential applications in tsunami warning. Although a wealth of results has been obtained on GPS ionospheric tsunami sounding, further research is necessary for a more comprehensive understanding of the issue, given the complexity of real tsunami propagation.
The Tohoku (Japan) earthquake (Mw = 9.0) occurred at 05:46 UT on 11 March 2011 and triggered a powerful tsunami. Several scholars have used near-field GEONET GPS data in Japan to analyze this event and observed tsunami-induced TIDs in ionospheric TEC (Liu et al. 2011; Rolland et al. 2011; Tsugawa et al. 2011; Occhipinti et al. 2013). In this study, we apply GPS TEC observations from Hawaii, far from the epicenter, to detect the tsunami-driven TIDs after the 2011 Tohoku tsunami. Hawaii is located at the center of the Pacific Ocean and is thus very suitable for tsunami monitoring in the open sea.
The slant ionospheric TEC s can be calculated from the geometry-free combination of the GPS $L_1$ and $L_2$ carrier phases ($L_4 = L_1 - L_2$) for each satellite-receiver pair:
$$ s = L_4/k + b $$
where k is the conversion factor between TEC and the observation (k ≈ 0.105 m/TECU), and b is an unknown constant bias. Although Equation 1 cannot give the absolute value of TEC at a particular time, it captures the TEC variation over time with high precision, which is what matters for TID detection.
In this paper, we employ a second-order numerical difference method to eliminate the diurnal variation and the bias in TEC (Tang and Zhang 2014). This method is very simple and well suited to real-time application. Compared with the first-order numerical difference applied in Hernandez-Pajares et al. (2006), the second-order numerical difference can also effectively detrend TEC series with lower satellite elevation angles, which is meaningful for tsunami monitoring. The difference process is as follows:
$$ \begin{aligned} \varDelta s(t) &= s(t) - 0.5\left(s(t-\tau) + s(t+\tau)\right) \\ {\varDelta}^2 s(t) &= \varDelta s(t) - 0.5\left(\varDelta s(t-\tau) + \varDelta s(t+\tau)\right) \end{aligned} $$
where t is the observation epoch; τ is the time step; Δs(t) and Δ2 s(t) are the first-order and second-order difference series, respectively.
The second-order difference series and the primitive TID signal have the same period (T), and the amplitude ratio is $4\sin^{4}(\pi \tau /T)$ (Tang and Zhang 2014). Based on this expression, the relationship between the amplitude ratio and the TID period is plotted in Figure 1 for τ = 300 s. As shown in Figure 1, the sensitive period range is 6 to 25 min, over which the ratio exceeds 0.5; this is suitable for detecting the TIDs induced by tsunamis.
The amplitude ratio as a function of the TID period. The time step is 300 s, and the maximum ratio of 4 occurs at a period of 10 min. TID, traveling ionospheric disturbance.
The GPS observations in Hawaii are collected from the public website of UNAVCO (http://www.unavco.org/), and the locations of the stations are shown in Figure 2. There are about 60 ground-based stations with a data sampling rate of 30 s, which is sufficient to observe the TIDs induced by AGWs. A single-layer model with a height of 350 km is used to obtain the vertical TEC (vTEC) values and the positions of the ionospheric pierce points (IPPs). We then employ the second-order numerical difference method to extract the vTEC variation series.
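For illustration, a common thin-shell conversion from slant to vertical TEC with the 350 km shell height mentioned above can be sketched as follows. The exact mapping function and IPP computation used in the paper are not given in this excerpt, so this standard form is only an assumption.

```python
# Minimal sketch of a single-layer (thin-shell) slant-to-vertical TEC mapping.
import numpy as np

R_E = 6371e3      # mean Earth radius (m)
H_SHELL = 350e3   # assumed single-layer height (m), as quoted in the text

def slant_to_vertical(stec, elevation_deg):
    """Map slant TEC to vertical TEC at the ionospheric pierce point."""
    el = np.radians(elevation_deg)
    sin_zp = (R_E / (R_E + H_SHELL)) * np.cos(el)   # sine of the zenith angle at the IPP
    return stec * np.sqrt(1.0 - sin_zp ** 2)        # vTEC = sTEC * cos(z')

print(slant_to_vertical(50.0, 30.0))                # e.g. 50 TECU slant at 30 deg elevation
```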
The locations of GPS ground stations and tsunami stations in Hawaii. The green dots indicate the GPS ground stations, the blue triangle notes the tide-gauge station, and the yellow square marks the DART station.
In order to distinguish the tsunami-driven TIDs in the vTEC variation series, we plot the vTEC variations as a function of distance and time, namely a time-distance map: the origin is the epicenter of the earthquake, the X-axis is the observation time, and the Y-axis is the distance between the IPP and the epicenter. Figure 3 shows the time-distance maps of vTEC variations during different observation periods. Considering that tsunami-driven TIDs have propagation characteristics similar to those of the tsunami causing them, an effective way to identify ionospheric signals associated with the tsunami is to search for perturbations with a horizontal velocity of around 200 m/s (the approximate speed of the tsunami) propagating outward from the tsunami's source (Galvan et al. 2011).
The time-distance maps of vTEC variations during different observation times. Panels (a-c) are for all the visible satellites and panel (d) for satellite PRN 22. The black lines in panel (a) and panel (d) indicate the propagation velocity, and the ellipse in panel (c) marks the position of tsunami-driven TIDs. The black line in panel (a) approximately separates TIDs in different times.
As shown in Figure 3a, obvious tsunami-like TIDs with a horizontal velocity of approximately 260 m/s, propagating outward from the epicenter, appeared at about 11:50 UT and were observed by GPS PRN 29. This horizontal velocity is similar to the tsunami speed over the adjacent area, which can reach \( v = \sqrt{gh} = 242 \) m/s for a depth (h) of 6 km and a gravitational acceleration (g) of 9.8 m/s². To examine the signal in the ionosphere, the sea-level tsunami measurements recorded by a coastal tide gauge in Nawiliwili and a DART buoy (51407) in the adjacent area are plotted in Figure 4. As can be seen in Figure 4, the tide gauge and DART buoy first observed the tsunami waves at about 13:10 and 13:20 UT, respectively. As shown in Figure 2, the trajectory of GPS satellite PRN 21 is very close to the position of DART buoy 51407. Figure 5a shows the vTEC variation series derived from satellite PRN 21 at ground station AHUP. Comparing Figures 4b and 5a, we can see that the arrival time of the TIDs was about 1.5 h earlier than that of the tsunami waves at sea level. Considering that the tsunami signals in the ionosphere and at sea level should have similar arrival times, the TIDs observed at about 11:50 UT were not triggered by the tsunami from the mainshock.
The sea-level tsunami series and time-frequency diagrams. Panel (a) is the coastal tide gauge in Nawiliwili, and panel (b) is the DART 51407 buoy. Panels (c) and (d) are corresponding time-frequency diagrams.
The vTEC variation series and time-frequency diagrams. The results are derived from satellite PRN 21 (panel (a)) and satellite PRN 22 (panel (b)) using the observations in station AHUP. Panels (c) and (d) are corresponding time-frequency diagrams.
A careful examination of Figure 3a also reveals tsunami-like TIDs with a horizontal velocity of approximately 260 m/s, propagating outward from the epicenter, that occurred at about 12:40 UT and were observed by GPS PRN 29. The TIDs were then detected by GPS PRN 21 at about 13:00 UT. Unlike the TIDs that appeared at about 11:50 UT, the arrival time of these TIDs is consistent with that of the tsunami waves at sea level, with an interval of only about 20 min (see Figures 4b and 5a). Figures 4c, d and 5c also present the corresponding time-frequency diagrams for the observed tsunami waves and ionospheric signals. The frequencies of the tsunami waves during 13:00 to 15:00 UT are 1~2 mHz, centered at 1.42 mHz (period of about 12 min). The center frequency of the vTEC variation series during this period is also 1.42 mHz, indicating that the period of the TIDs is similar to that of the tsunami waves as well. Based on the similar horizontal velocity, direction, arrival time, and period, the TIDs that appeared at about 12:40 UT in the ionosphere are confirmed to have been induced by tsunami waves.
As can be seen in Figure 3, the TID signals began to weaken and gradually disappeared after 14:00 UT. Furthermore, the water level of the tsunami waves also decreased after 15:00 UT, as shown in Figure 4. This indicates that the tsunami waves had passed the Hawaii area and propagated to more remote regions. However, the amplitudes of the vTEC variations began to increase at about 16:30 UT (see Figures 3c and 5b). This can be attributed to the diurnal variation of plasma density in the ionosphere, which increases from night to day (the local time was about 06:00). During sunrise or sunset, the energy in the atmosphere varies drastically and destabilizes the atmosphere, which can induce ionospheric disturbances (Somsikov 1995).
As shown in Figure 3c, tsunami-like TIDs with a horizontal velocity of approximately 250 m/s, propagating outward from the epicenter, appeared again at about 17:30 UT. Due to the superposition of vTEC variations, the signals are not very clear. For clarity, the position of the tsunami-driven TIDs is marked with an ellipse in Figure 3c, and the time-distance map for satellite PRN 22 is plotted separately in Figure 3d. As discussed above, the horizontal velocity of these tsunami-driven TIDs is also similar to the tsunami speed over the adjacent area. Furthermore, the water level recorded by the tide gauge increased significantly at about 17:30 UT (see Figure 4a), indicating the arrival of tsunami waves. The observation times of the tsunami-like TIDs and the tsunami waves are therefore consistent. It should be noted that the DART buoy did not record an increased water level at a similar time, possibly because the tsunami waves did not pass the region near the buoy.
Similarly, to compare the periods of the detected TIDs and the tsunami waves, the vTEC variation series derived from satellite PRN 22 at station AHUP and the corresponding time-frequency diagram are also plotted in Figure 5. As can be seen in Figures 4c and 5d, the TIDs and the tsunami waves have center frequencies of 1.42 and 1.75 mHz (the latter corresponding to a period of about 9.5 min), respectively, during 17:00 to 19:00 UT, indicating that they have similar periods. To rule out possible recurrent TIDs, we also processed the data at the same time on the days before and after the event day and did not observe similar results. As shown in Figure 6, the disturbance velocities during 17:30~18:00 UT are about 1,500 and 900 m/s on the days before and after the event day, respectively, which is far larger than the tsunami velocity. It can therefore also be confirmed that the TIDs observed at about 17:30 UT on the event day were triggered by the tsunami, in view of their similar propagation characteristics in terms of horizontal velocity, direction, period, and observation time.
The time-distance maps of vTEC variations for GPS PRN 22. The left panel is the day before the event day, and the right panel is the day after the event day.
To obtain more reliable results, the propagation characteristics of all observed TIDs were estimated (Wang et al. 2007; Zhang et al. 2013), and the results are listed in Table 1. The basic steps of the estimation method are as follows: 1) extract the complex phase differences of the Fourier coefficients (time delays in the frequency domain) between the TEC variation series observed at no fewer than three stations by Fourier transform; 2) assuming the TIDs are planar waves, solve for the wave numbers along the X-axis (pointing east) and Y-axis (pointing north) from the station coordinates and the phase differences; and 3) estimate the horizontal velocity and azimuth of the TIDs from the horizontal wave numbers (a minimal numerical sketch of these steps is given below). As seen from the table, the horizontal velocities of the TIDs that appeared at about 12:40 and 17:30 UT are all about 244 m/s, which is very close to the tsunami velocity in the adjacent area, confirming that they were triggered by the tsunami waves.
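The sketch below illustrates steps 1)-3). The station geometry, frequency, and phase delays are synthetic and chosen only so that the result is self-consistent; real processing would take the Fourier phases of the detrended vTEC series at each station.

```python
# Minimal sketch of the plane-wave estimation (steps 1-3); all inputs are synthetic.
import numpy as np

f = 1.42e-3                                   # Hz, dominant TID frequency quoted in the text
stations_xy = np.array([[0.0, 0.0],           # reference station (east, north) in metres
                        [30e3, 5e3],
                        [8e3, 25e3]])

# Synthetic "truth" used only to build consistent test phases: 244 m/s towards azimuth 60 deg.
v_true, az_true = 244.0, np.deg2rad(60.0)
k_true = (2 * np.pi * f / v_true) * np.array([np.sin(az_true), np.cos(az_true)])

# Step 1: phase differences relative to the reference station (normally from an FFT).
baselines = stations_xy[1:] - stations_xy[0]
dphi = -baselines @ k_true

# Step 2: assuming a planar wave, solve the linear system for (kx, ky).
kx, ky = np.linalg.lstsq(-baselines, dphi, rcond=None)[0]

# Step 3: horizontal velocity and azimuth (clockwise from north).
speed = 2 * np.pi * f / np.hypot(kx, ky)
azimuth = np.degrees(np.arctan2(kx, ky)) % 360
print(f"horizontal velocity ~ {speed:.0f} m/s, azimuth ~ {azimuth:.0f} deg")
```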
Table 1. The propagation characteristics of the observed TIDs
The above observations show that the tsunami-driven TIDs appeared at two different times: the former TIDs were observed at about 12:40 UT and the latter at about 17:30 UT. Based on the observation time, the tsunami waves that induced the former TIDs can be attributed to the mainshock. As mentioned above, the tide gauge and DART buoy recorded significant tsunami wave amplitudes beginning at about 13:00 UT, which then gradually decreased after 15:00 UT, suggesting a loss of energy. However, another group of tsunami waves with larger amplitudes, which triggered the latter TIDs, appeared again at about 17:30 UT. Considering that the TIDs observed at about 17:30 UT had a propagation direction similar to that of the tsunami waves, the likely cause of the latter tsunami waves is the aftershocks. According to the records, several aftershocks (Mw > 6.5) occurred near the epicenter of the mainshock within 5 h and might have triggered new tsunami waves.
In this paper, we use far-field ionospheric TEC derived from ground-based GPS observations in Hawaii to detect the TIDs triggered by the 2011 Tohoku tsunami. The tsunami-driven TIDs had propagation characteristics (horizontal velocity, direction, period, and observation time) similar to those of the tsunami waves causing them, confirming again that the ionosphere is sensitive to tsunami waves. These tsunami-driven TIDs appeared at different times: the former TIDs were observed at about 12:40 UT and disappeared at about 14:00 UT, and the latter TIDs appeared at about 17:30 UT. This is the first time this phenomenon has been observed. The former TIDs can be attributed to the tsunami arriving directly from the mainshock, as in previous studies, while the latter TIDs were most likely driven by tsunami waves from aftershocks. Furthermore, we also observed tsunami-like TIDs at about 11:50 UT with similar horizontal velocity and direction but a different arrival time compared with the tsunami waves at sea level, suggesting that they may have been induced by other sources.
The results provide a new case showing that tsunamis can trigger TIDs. More importantly, they indicate that the tsunami arriving directly from the mainshock may not be the only source of such disturbances. This study provides new insight into tsunami-driven TIDs and supports the development of future tsunami-warning systems.
Artru J, Ducic V, Kanamori H, Lognonné P, Murakami M (2005) Ionospheric detection of gravity waves induced by tsunamis. Geophys J Int 160:840–848
Galvan DA, Komjathy A, Hickey MP, Mannucci AJ (2011) The 2009 Samoa and 2010 Chile tsunamis as observed in the ionosphere using GPS total electron content. J Geophys Res 116:A06318, doi:10.1029/2010JA016204
Hernandez-Pajares M, Juan JM, Sanz J (2006) Medium-scale traveling ionospheric disturbances affecting GPS measurements: spatial and temporal analysis. J Geophys Res 111:A07S11, doi:10.1029/2005JA011474
Hickey MP, Schubert G, Walterscheid RL (2010) Atmospheric airglow fluctuations due to a tsunami-driven gravity wave disturbance. J Geophys Res 115:A06308, doi:10.1029/2009JA014977
Hines CO (1972) Gravity waves in the atmosphere. Nature 239:73–78, doi:10.1038/239073a0
Iyemori T, Tanaka Y, Odagi Y, Yasuharu S, Masahiko T, Masahito N, Mitsuru U, Domingo R, Edwin C, Jose I, Sadato Y, Kunihito N, Mitsuru M, Hiroyuki S (2013) Barometric and magnetic observations of vertical acoustic resonance and resultant generation of field-aligned current associated with earthquakes. Earth Planets Space 65:901–909
Liu JY, Tsai YB, Ma KF, Chen YK, Tsai HF, Lin CH, Kamogawa M, Lee CP (2006) Ionospheric GPS total electron content (TEC) disturbances triggered by the 26 December 2004 Indian Ocean tsunami. J Geophys Res 111:A05303, doi:10.1029/2005JA011200
Liu JY, Chen CH, Lin CH, Tsai HF, Chen CF, Kamogawa M (2011) Ionospheric disturbances triggered by the 11 March 2011 M9.0 Tohoku earthquake. J Geophys Res 116:A06319, doi:10.1029/2011JA016761
Mai CL, Kiang JF (2009) Modeling of ionospheric perturbation by 2004 Sumatra tsunami. Radio Sci 44:RS3011, doi:10.1029/2008RS004060
Makela JJ, Lognonné P, Hébert H, Gehrels T, Rolland L, Allgeyer S, Kherani A, Occhipinti G, Astafyeva E, Coïsson P, Loevenbruck A, Clévédé E, Kelley MC, Lamouroux J (2011) Imaging and modeling the ionospheric airglow response over Hawaii to the tsunami generated by the Tohoku earthquake of 11 March 2011. Geophys Res Lett 38:L00G02, doi:10.1029/2011GL047860
Occhipinti G, Lognonné KEA, Hebert H (2006) Three-dimensional waveform modeling of ionospheric signature induced by the 2004 Sumatra tsunami. Geophys Res Lett 33:L20104, doi:10.1029/2006GL026865
Occhipinti G, Rolland L, Lognonné P, Watada S (2013) From Sumatra 2004 to Tohoku-Oki 2011: the systematic GPS detection of the ionospheric signature induced by tsunamigenic earthquakes. J Geophys Res 118:3626–3636, doi:10.1002/jgra.50322
Peltier WR, Hines CO (1976) On the possible detection of tsunamis by a monitoring of the ionosphere. J Geophys Res 81:1995–2000, doi:10.1029/JC081i012p01995
Rolland LM, Occhipinti G, Lognonné P, Loevenbruck A (2010) Ionospheric gravity waves detected offshore Hawaii after tsunamis. Geophys Res Lett 37:L17101, doi:10.1029/2010GL044479
Rolland LM, Lognonné P, Astafyeva E, Kherani EA, Kobayashi N, Mann M, Munekane H (2011) The resonant response of the ionosphere imaged after the 2011 off the Pacific coast of Tohoku Earthquake. Earth Planets Space 63:853–857
Somsikov VM (1995) On mechanisms for the formation of atmospheric irregularities in the solar terminator region. J Atmos Terr Phys 57:75–83
Tang L, Zhang X-H (2014) A multi-step multi-order numerical difference method for traveling ionospheric disturbances detection. In: China Satellite Navigation Conference (CSNC) 2014 Proceedings: Volume II. Springer, Berlin Heidelberg, pp 331–340, doi:10.1007/978-3-642-54743-0_27
Tsugawa T, Saito A, Otsuka Y, Nishioka M, Maruyama T, Kato H, Nagatsuma T, Murata KT (2011) Ionospheric disturbances detected by GPS total electron content observation after the 2011 off the Pacific coast of Tohoku Earthquake. Earth Planets Space 63:875–879
Wang M, Ding F, Wan W-X, Ning B-Q, Zhao B-Q (2007) Monitoring global traveling ionospheric disturbances using the worldwide GPS network during the October 2003 storms. Earth Planets Space 59:407–419
Zhang X-H, Tang L, Guo B-F (2013) Research on medium-scale traveling ionospheric disturbances using a modified SRTI method. Chin J Geophys 56:3953–3959, doi:10.6038/cjg20131201
The GPS data used in this study are provided by the Plate Boundary Observatory operated by UNAVCO for EarthScope (www.unavco.org/). The DART data and tide data are provided by the National Data Buoy Center (NDBC) and Center for Operational Oceanographic Products and Services (CO-OPS) of the National Oceanic and Atmospheric Administration. This study was supported by the National Natural Science Foundation of China (Grant No. 41474025, No. 41204030), Fundamental Research Funds for the Central Universities (Grant No. 2014214020201), and the Surveying and Mapping Foundation Research Fund Program, National Administration of Surveying, Mapping and Geoinformation (13-02-07).
School of Geodesy and Geomatics, Wuhan University, Wuhan, 430079, China
Correspondence to Xiaohong Zhang.
LT and XZ conceived and designed the experiments; LT and ZL performed the experiments and analyzed the data; and LT wrote the manuscript. All authors read and approved the final manuscript.
Tang, L., Zhang, X. & Li, Z. Observation of ionospheric disturbances induced by the 2011 Tohoku tsunami using far-field GPS data in Hawaii. Earth Planet Sp 67, 88 (2015) doi:10.1186/s40623-015-0240-0
Total electron content
Traveling ionospheric disturbances
Coupling of the High and Mid Latitude Ionosphere and Its Relation to Geospace Dynamics
EMF induced due to moving rod in magnetic field
When a conducting rod moves in a uniform magnetic field as shown.
Using the Lorentz force, it is easy to show that the induced EMF is BvL, with the upper end positive and the lower end negative.
But in books this is also explained by Faraday's law of electromagnetic induction: the area swept by the conductor changes, so an EMF is induced. Why do we take the swept area into account?
I think that the magnetic flux through the conductor remains constant as B is constant. I am not able to justify this result using Faraday's law (via the swept area). Why is the swept-area method used? Please help.
electromagnetism magnetic-fields electromagnetic-induction
Manu
Could you please mention the name of the book or an excerpt of the part where you read that? – SarGe Jun 17 '20 at 6:06
Generally, in all high-school physics books, in the chapter "Electromagnetic induction", the induced EMF of a rod moving in a uniform magnetic field is analysed using Faraday's law. – Manu Jun 17 '20 at 6:12
vL can be interpreted as the "area mapped out". – my2cts Jun 17 '20 at 9:29
I want to know the reason behind this interpretation. – Manu Jun 17 '20 at 13:42
"I think that magnetic flux through conductor remains constant as B is constant."
It's not the flux "through the conductor" that matters. It's the flux through the area swept out by the conductor. Imagine that the straight conductor (length $\ell$) is lying on a table, and that there is a uniform magnetic field acting downwards. (Actually there is : the vertical component of the Earth's field.) You then move the conductor across the table at speed v in a direction at right angles to itself. In time $\Delta t$ it sweeps out an area $\ell v \Delta t$
The flux through the swept out area is $$\Delta \Phi = (\ell v \Delta t)B$$
So according to Faraday's law, the induced emf is $$\mathscr E=\frac {\Delta \Phi}{\Delta t}=\frac {(\ell v \Delta t)B}{\Delta t}=B\ell v$$ So we have recovered the result that you obtained from the magnetic Lorentz force. In my opinion the magnetic Lorentz force is more fundamental than Faraday's law when the emf is due to movement of conductors. However Faraday's law has the merit of spanning two types of electromagnetic induction: this one and the type due to changing flux through a stationary circuit, which depends on the electric field part of the Lorentz force.
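As a quick numerical illustration (the values are chosen only for concreteness): with $B = 5\times10^{-5}\ \textrm{T}$, roughly the magnitude of the Earth's field, $\ell = 0.5\ \textrm{m}$ and $v = 2\ \textrm{m/s}$, the induced emf is $\mathscr E = B\ell v = 5\times10^{-5}\ \textrm{V} = 50\ \mu\textrm{V}$, which is why such table-top demonstrations need either a sensitive meter or a much stronger field.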
Philip Wood
Here I show that the area-sweeping technique gives the same result as the Lorentz force method. Using a battery across the two rods in parallel doesn't change the idea, as we shall see.
Magnetic Flux $\phi=\int_A \mathbf{B}.d\mathbf{A}$
Faraday's law of electromagnetic induction transforms as follows: \begin{align*} \text{EMF }\varepsilon&=-\frac{d\phi}{dt}\\ \varepsilon&=-\frac{d}{dt}\left(\int_A \mathbf{B}.d\mathbf{A}\right)\\ \varepsilon&=-\mathbf{B}.\frac{d}{dt}\left(\int_A d\mathbf{A}\right)&(\because \mathbf{B}\text{ is uniform})\\ \varepsilon&=-\mathbf{B}.\frac{d\mathbf{A}}{dt}&(\because \mathbf{A}\text{ is unidirectional})\tag{1}\\ \varepsilon&=-\mathbf{B}.\frac{d(\mathbf{l}\times\mathbf{L})}{dt}\\ \varepsilon&=-\mathbf{B}.\left(\frac{d\mathbf{l}}{dt}\times\mathbf{L}\right)&(\because \mathbf{L}\text{ is constant})\\ \varepsilon&=-\mathbf{B}.\left(\mathbf{v}\times\mathbf{L}\right)&(\because \mathbf{v}dt=d\mathbf{l})\\ \varepsilon&=-\mathbf{L}.\left(\mathbf{B}\times\mathbf{v}\right)&(\because \mathbf{B}.(\mathbf{C}\times\mathbf{A})=\mathbf{A}.(\mathbf{B}\times\mathbf{C}))\\ \varepsilon&=\left(\mathbf{v}\times\mathbf{B}\right).\mathbf{L}&(\because \mathbf{A}\times\mathbf{B}=-\mathbf{B}\times\mathbf{A})\tag{2} \end{align*} In the figure, $\mathbf{F}_{\text{Lorentz}}=q(\mathbf{E}+\mathbf{v}\times\mathbf{B})=q(\mathbf{v}\times\mathbf{B})$
Work along the moving rod $=q(\mathbf{v}\times\mathbf{B}).\mathbf{L}\Rightarrow \varepsilon = (\mathbf{v}\times\mathbf{B}).\mathbf{L}\tag{3}$
So, the area-sweeping technique $(1)$ produces $(2)$ for the shown configuration. The trick also works for a single rod without a circuit even though there is no real area changing that changes the flux in turn inducing an EMF. $\mathbf{F}_{\text{Lorentz}}$ is along the rod in this latter case. Regardless, the EMF produced is the same due to taking the dot product of $\mathbf{F}_{\text{Lorentz}}\equiv q(\mathbf{v}\times\mathbf{B})$ with $\mathbf{L}$ in $(3)$. The difference is only that there is force required to move the rod towards the right in the former case because $v_{e^-}$ gives a component of Lorentz force on the rod towards the left.
Sameer Baheti
Visualization of Foundation Evaluation for Urban Rail Transit Based on CGB Technology Integration
Lili Dong | Jin Wu | Wei Wang* | Yu Zhou
School of Architecture and Urban Planning, Chongqing University, Chongqing 400030, China
School of Architecture and Urban Planning, Chongqing Jiaotong University, Chongqing 400074, China
School of Art, Design and Architecture, University of Huddersfield, HD1 3DH, West Yorkshire, United Kingdom
[email protected]
In urban rail transit projects, the traditional method of foundation evaluation faces problems like vague description, unclear process, and fuzzy evaluation system. To solve these problems, this paper sets up a scientific evaluation system based on CGB technology integration and analytic hierarchy process (AHP). The CGB technology integration refers to the integrated application of computer-aided design (CAD), geographic information system (GIS), and building information modeling (BIM). Taking Qingxihe Station, Line 6 of Chongqing Rail Transit (CRT) as the object, the authors constructed a 3D geological model of the construction site, created a novel three-layer system through data analysis, and evaluated and compared the suitability of each layer as the supporting layer of the foundation. Finally, effective suggestions were put forward on the selection of the supporting layer. Our research successfully visualizes the whole process of foundation evaluation, and enhances the accuracy of the evaluation results. The research findings provide a good reference for the selection of the supporting layer of foundations in urban rail transit projects.
CGB technology integration, foundation evaluation, analytic hierarchy process (AHP), visualization
The CGB technology integration refers to the integrated application of computer-aided design (CAD), geographic information system (GIS), and building information modeling (BIM). To meet the various needs of urban digitalization, the CGB technology integration jointly utilizes macro-geographic information and micro-building information, facilitating queries and analyses [1].
Compared with the CAD data, the BIM data and GIS data are not highly compatible. To share and merge these data, it is necessary to explore the industry foundation classes (IFC) model of the BIM and the CityGML model of the GIS, and develop a method capable of automatically extracting the GIS surface model with multiple levels of details (LODs) from the BIM entity model. In this way, the LOD 100-400 models could be obtained from the IFC and CityGML model, overcoming the difficulty in merging the BIM with the GIS and paving the way for the CGB technology integration [2-14].
The BIM and GIS are the two most popular digital technologies in the research of urban rail transit. For instance, D'Amico et al. [15] integrated the BIM with the GIS into the design of transport infrastructure, and suggested that the interoperable sharing models supplemented by the GIS data could minimize or eliminate the possible conflicts between infrastructure design and environmental constraints. Liu et al. [16] fully utilized the advantages of the BIM (e.g. 3D visualization, parametrization, and virtual simulation) to solve foundation engineering, a basic problem in rail transit, and thus improved the quality and efficiency of metro construction. Chen et al. [17] realized the conversion between geometric and semantic information through the BIM and 3D GIS data exchange method of rail transit, and defined an integrated 3D spatial data model, achieving unified management and seamless expression of the data on rail transit and its surroundings. He et al. [18] displayed and analyzed the spatial distribution of unfavorable geological bodies in the GIS, evaluated the karst collapse risk in the area crossed by the tunnel, and assessed the safety risk of metro tunnel on the BIM platform based on construction and monitoring information, laying the basis for tunnel safety prewarning.
Based on CGB technology integration and analytic hierarchy process (AHP), this paper develops a scientific method to visualize the foundation evaluation in urban rail transit projects. Qingxihe Station, Line 6 of Chongqing Rail Transit (CRT) was taken as the research case to evaluate the suitability of different layers to serve as the support layer of the foundation. The authors detailed the selection of evaluation method and the establishment of evaluation system, and verified the proposed method through case analysis, shedding new light on the visualization of the results of engineering geological investigation.
2. CGB-Based Visualization of Foundation Evaluation
In engineering geological investigation, engineering geological evaluation is one of the key contents. The utmost goal of engineering geological investigation lies in foundation evaluation, a part of engineering geological evaluation. Through foundation evaluation, the suitability of each layer as the supporting layer of the foundation could be quantified. The CGB technology integration provides a desirable tool to visualize the foundation evaluation. As shown in Figure 1, the CGB-based visualization of foundation evaluation mainly acquires the spatial distribution of the geological information in the construction site through investigation and field survey, models the spatial situation with discrete data points, and then analyzes the geological data using the information retrieval and processing functions of CGB technology integration [19-26].
Figure 1. The roadmap of CGB-based visualization of foundation evaluation
2.1 Selection of evaluation method
According to the response from construction parties, there are several problems with the current method for foundation evaluation: the foundation quality is described vaguely by qualitative words (e.g. general, good, and poor), without any quantitative comparison; the evaluation items are complex and not weighted; the evaluation process and results derivation are not visible. To solve these problems, the AHP was introduced to provide a multi-factor evaluation system for foundation evaluation, and fully integrate qualitative analysis with quantitative analysis.
2.2 Establishment of evaluation system
(1) Level division
The contents of foundation evaluation were divided as per the requirements of relevant codes. According to the goal, items, and objects of foundation evaluation, a multi-layer evaluation model was established, in which each layer controls and is controlled by its upper and lower layers. As shown in Figure 2, the established model consists of the goal layer (evaluation goal), the criteria layer (evaluation items), and the alternative layer (evaluation objects).
(2) Construction of judgment matrix for pairwise comparison
Once the evaluation model is established, it is necessary to determine the judgment matrix of each layer, that is, to judge the relative importance of each factor on each layer and express it as a numerical value. Based on the hierarchy of the three layers, it is also necessary to determine the importance of each factor on the lower layer relative to each relevant factor on the upper layer (goal A or criterion Z). Suppose factor Ak on layer A is correlated with factors B1, B2, …, Bn on the lower layer. Then, the judgement matrix for Ak can be constructed as in Table 1, where bij is the numerical value of the importance of Bi relative to Bj. The relative importance is usually rated against a nine-point scale (Table 2).
Table 1. The judgement matrix
$$B=\left( \begin{matrix} b_{11} & b_{12} & \cdots & b_{1n} \\ b_{21} & b_{22} & \cdots & b_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ b_{n1} & b_{n2} & \cdots & b_{nn} \end{matrix} \right)$$
Table 2. The scale of importance
1: Two factors are equally important.
3: The former factor is slightly more important than the latter.
5: The former factor is strongly more important than the latter.
7: The former factor is very strongly more important than the latter.
9: The former factor is extremely more important than the latter.
2, 4, 6, 8: The relative importance falls between two of the above levels.
Reciprocals of the above: If the importance of factor i relative to factor j is bij, then the importance of factor j relative to factor i is bji = 1/bij.
Figure 2. The multi-layer evaluation model
The judgment matrix must also satisfy:
$b_{i j}>0 ; b_{j i}=1 / b_{i j} ; b_{i i}=1(i=1,2, \cdots, n)$ (1)
Formula (1) shows that the judgement matrix is a positive reciprocal matrix. In the special case of a fully consistent matrix, it is also transitive:
${{b}_{ij}}\cdot {{b}_{jk}}={{b}_{ik}}$ (2)
(3) Single ranking
Single ranking is to sort the factors on a layer by the importance relative to each relevant factor on the upper layer. The single ranking is equivalent to the calculation of the characteristic roots and eigenvectors of the judgement matrix. In other words, judgement matrix B should satisfy:
$BW={{\lambda }_{\max }}W$ (3)
where, $\lambda_{\max }$ is the maximum characteristic root of B; W is the normalized eigenvector corresponding to $\lambda_{\max }$; Wi, a component of W, is the weight of factor i in single ranking.
Besides, the consistency of the judgment matrix should be verified by computing its consistency index CI:
$CI=\frac{{{\lambda }_{\max }}-n}{n-1}$ (4)
If the judgement matrix is fully consistent, CI=0; the greater $\lambda_{\max }-n$, the larger the CI, and the less consistent the judgement matrix. Since the sum of the n eigenvalues of B equals n, CI equals the negative of the mean of the n-1 characteristic roots other than $\lambda_{\max }$.
Table 3. The mean random consistency index (RI)
When the order of the judgment matrix is greater than 2 (Table 3), the ratio of CI to RI (the mean random consistency index of the same order) is defined as the random consistency ratio (CR) of the matrix. If CR≤0.10, the judgement matrix has satisfactory consistency; otherwise, the judgment matrix needs to be adjusted.
The consistency of overall ranking results should be verified in a similar manner. From top to bottom, the consistency needs to be checked layer by layer. Let CIj(k) and RIj(k) be the CI and RI of a factor on layer k relative to factor j on layer k-1 in single ranking, respectively. Then, the CR of layer k in overall ranking can be expressed as:
$C{{R}^{(k)}}=\frac{\sum\limits_{j=1}^{n}{w_{j}^{(k-1)}CI_{j}^{(k)}}}{\sum\limits_{j=1}^{n}{w_{j}^{(k-1)}RI_{j}^{(k)}}}$ (5)
Similarly, if CR(k)≤0.10, the overall ranking results have satisfactory consistency.
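A minimal sketch of the single ranking and consistency check described by formulas (3) and (4) is given below. The RI values are the commonly tabulated random consistency indices and are assumptions here, since the values of Table 3 are not reproduced in the text.

import numpy as np

RI_TABLE = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def single_ranking(B):
    # The largest characteristic root and its normalized eigenvector give the
    # single-ranking weights (formula (3)); CI and CR check consistency (formula (4)).
    eigvals, eigvecs = np.linalg.eig(B)
    k = int(np.argmax(eigvals.real))
    lam_max = eigvals.real[k]
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()
    n = B.shape[0]
    CI = (lam_max - n) / (n - 1)
    CR = CI / RI_TABLE[n] if n > 2 else 0.0
    return w, CI, CR

# Hypothetical, roughly consistent 3x3 judgement matrix.
B = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w, CI, CR = single_ranking(B)
print(w, CI, CR <= 0.10)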
(4) Overall ranking
Overall ranking sorts all factors on the current layer by their importance relative to the goal layer, based on the single ranking results of the current layer relative to the factors on the layer above. The overall ranking is performed layer by layer from top to bottom. Suppose the importance weights of the n factors on layer k-1 relative to the goal layer are:
${{w}^{(k-1)}}={{(w_{1}^{(k-1)},\cdots ,w_{n}^{(k-1)})}^{T}}$ (6)
The single ranking vector of nk factors on layer k relative to criterion j on layer k-1 can be defined as:
$u_{j}^{(k)}={{(u_{1j}^{(k)},u_{2j}^{(k)},\cdots ,u_{{{n}_{k}}j}^{(k)})}^{T}}$, j=1, 2, ···, n (7)
The importance of factors not linked to criterion j was set to zero. Then, a matrix of order nk×n can be obtained:
${{U}^{(k)}}=(u_{1}^{(k)},u_{2}^{(k)},\cdots ,u_{n}^{(k)})=\left( \begin{matrix} u_{11}^{(k)} & u_{12}^{(k)} & \cdots & u_{1n}^{(k)} \\ u_{21}^{(k)} & u_{22}^{(k)} & \cdots & u_{2n}^{(k)} \\ \vdots & \vdots & \ddots & \vdots \\ u_{{{n}_{k}}1}^{(k)} & u_{{{n}_{k}}2}^{(k)} & \cdots & u_{{{n}_{k}}n}^{(k)} \\ \end{matrix} \right)$ (8)
where, column j in U(k) is the single ranking vector of nk factors on layer k relative to criterion j on layer k-1. Then, the overall ranking of all factors on layer k can be expressed as:
${{w}^{(k)}}={{(w_{1}^{(k)},\cdots ,w_{{{n}_{k}}}^{(k)})}^{T}}$ (9)
Then,
${{w}^{(k)}}={{U}^{(k)}}{{w}^{(k-1)}}=\left( \begin{matrix} u_{11}^{(k)} & u_{12}^{(k)} & \cdots & u_{1n}^{(k)} \\ u_{21}^{(k)} & u_{22}^{(k)} & \cdots & u_{2n}^{(k)} \\ \vdots & \vdots & \ddots & \vdots \\ u_{{{n}_{k}}1}^{(k)} & u_{{{n}_{k}}2}^{(k)} & \cdots & u_{{{n}_{k}}n}^{(k)} \\ \end{matrix} \right)\left( \begin{matrix} w_{1}^{(k-1)} \\ w_{2}^{(k-1)} \\ \vdots \\ w_{n}^{(k-1)} \\ \end{matrix} \right)=\left( \begin{matrix} \sum\limits_{j=1}^{n}{u_{1j}^{(k)}w_{j}^{(k-1)}} \\ \sum\limits_{j=1}^{n}{u_{2j}^{(k)}w_{j}^{(k-1)}} \\ \vdots \\ \sum\limits_{j=1}^{n}{u_{{{n}_{k}}j}^{(k)}w_{j}^{(k-1)}} \\ \end{matrix} \right)$ (10)
$w_{i}^{(k)}=\sum\limits_{j=1}^{n}{u_{ij}^{(k)}w_{j}^{(k-1)}}$, i=1, 2, ···, nk (11)
Through the above steps, the score of each alternative (evaluation object) can be obtained. The score ranking determines the relative importance (suitability) of each object to the goal.
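The weight propagation in formulas (10) and (11) reduces to a matrix-vector product. The short sketch below, with hypothetical numbers, shows how the overall weights of a layer follow from its single-ranking matrix and the weights of the layer above.

import numpy as np

def overall_ranking(U, w_upper):
    # Column j of U is the single-ranking vector of the current layer's factors
    # relative to criterion j on the layer above; w_upper holds that layer's weights.
    w = np.asarray(U, dtype=float) @ np.asarray(w_upper, dtype=float)
    return w / w.sum()

# Hypothetical example: three alternatives scored against two criteria weighted 0.7 and 0.3.
U = [[0.5, 0.2],
     [0.3, 0.3],
     [0.2, 0.5]]
print(overall_ranking(U, [0.7, 0.3]))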
3. Case Study
3.1 Project overview
The case project is Qingxihe Station, Line 6 of CRT. Lying below Yuegang Longitudinal Road and Yuegang Middle Road, the north-south oriented station crosses the intersection of the two roads, adopts an open-cut double-layer rectangular frame, and has a 12 m-wide island platform. The total length, maximum clear width, and maximum clear height are 290.9 m, 24.3 m, and 16.31 m, respectively. There are two air ducts and six entrances and exits (two of which are reserved), all of which adopt open-cut rectangular frames.
3.2 CGB-based foundation evaluation
The main construction project of the case is a metro station (an underground space project) and its ancillary works. The following items should be considered to evaluate the foundation of the project: uniformity of each layer, sorting of the overburden composition, thickness of each layer, mechanical properties of each layer, groundwater effect, and adverse geological phenomena.
3.2.1 Establishment of evaluation model
Based on the above items, an AHP structure was set up, consisting of a goal layer, a criteria layer, and an alternative layer (as shown in Figure 3).
According to the complexity classification of the geological environment in relevant codes, the importance of the evaluation items in the project was ranked as mechanical properties, adverse geological phenomena, groundwater effect, uniformity, thickness, and sorting. On this basis, the judgement matrix of the criteria layer was established as Table 4.
The consistency ratio of the judgement matrix was computed as 0.0156, well below the 0.10 threshold, indicating that the matrix is sufficiently consistent.
Figure 3. The AHP structure
Table 4. The judgement matrix of the criteria layer (criteria: uniformity, sorting, thickness, mechanical properties, groundwater depth, groundwater seasonality, and adverse geological phenomena)
3.2.2 Evaluation of each item
After setting up the judgement matrix of the criteria layer, it is necessary to establish a judgement matrix of each item for plain fill, silty clay, strongly weathered sandy mudstone, moderately weathered sandstone, and moderately weathered sandy mudstone.
(1) Uniformity evaluation
Taking silty clay for example, the 3D model data of uniformity were imported to ArcScene. Then, the 3D Analyst tool was called from the ArcToolBox to convert the triangulated irregular network (TIN) model (as shown in Figure 4) of the upper and lower surfaces of the silty clay layer into grids. Then, a new grid map (as shown in Figure 5) was obtained by removing the grids of the upper and lower surfaces, revealing the thickness of silty clay across the construction site.
Next, the classification parameters were configured to reclassify the grids, producing a bar chart on the thickness of silty clay across the construction site (as shown in Figure 6). The bar chart visually displays the thickness data of silty clay in any location of the site. Then, the uniformity of the silty clay layer was judged by the standard deviation (SD) and the proportion of the thickness concentration area in the total area. The uniformities of the other layers were obtained in a similar manner. The uniformities of all layers are summed up in Table 5.
The plain fill is sporadically distributed in the construction site, showing a poor uniformity. Based on Table 5, the layers could be ranked in descending order of uniformity: silty clay, moderately weathered sandy mudstone, moderately weathered sandstone, strongly weathered sandy mudstone, and plain fill. On this basis, the judgement matrix of uniformity was established as Table 6.
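The uniformity indicators mentioned above, the standard deviation of thickness and the proportion of the thickness concentration area, can be computed from a thickness grid as in the sketch below. The width of the concentration band is an assumed parameter, not a value taken from the project data.

import numpy as np

def uniformity_indicators(thickness_grid, band=1.0):
    # Standard deviation of layer thickness and proportion of grid cells whose
    # thickness lies within +/- band metres of the mean thickness.
    t = np.asarray(thickness_grid, dtype=float).ravel()
    t = t[np.isfinite(t)]                 # drop cells where the layer is absent
    sd = t.std()
    in_band = np.abs(t - t.mean()) <= band
    return sd, in_band.mean()

# Hypothetical thickness grid (m) for one layer across the site.
grid = np.array([[2.1, 2.4, np.nan], [2.0, 2.6, 3.9]])
print(uniformity_indicators(grid))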
Figure 4. The TIN model of geological information
Figure 5. The grid map
Figure 6. The bar chart of silty clay thickness
(2) Sorting evaluation
The sorting of each layer against the overburden directly affects the bearing capacity of the foundation, exerting a huge impact on the stability of buildings on the surface. The sorting quality mainly depends on the uniformity of the size of clastic particles. The more uniform the size, the better the sorting.
According to the project data, the plain fill is mainly composed of sandstone and sandy mudstone blocks (fragments). The size of the skeleton particles falls within 20-500 mm, and can surpass 1 m in local areas. The content of these particles is generally 20-30%. In relatively thick sections, the content of these blocks (fragments) increases significantly, reaching 70-80% in local areas, while the particle size also increases. In the plain fill, the skeleton particles thus have a nonuniform distribution of content and significant variations in particle size. By contrast, the silty clay contains no obvious inclusions; this layer consists mainly of clay, with few hard particles.
Through the above analysis, the layers could be ranked in descending order of sorting: moderately weathered sandy mudstone/moderately weathered sandstone, strongly weathered sandy mudstone, silty clay, and plain fill. On this basis, the judgement matrix of sorting was established as Table 7.
(3) Thickness evaluation
For construction projects, the supporting layer cannot provide stable support unless it is sufficiently thick and uniformly distributed. Besides, the thickness of the supporting layer affects the basic design of the foundation.
To clearly understand the thickness of layers beneath the construction site, the CGB technology integration was introduced to quantify the thickness of each layer through ArcGIS data analysis, and display the analysis results on 3D models. The mean and proportion of thickness concentration area in Table 5 were referenced to evaluate the thickness of each layer.
Through the above analysis, the layers could be ranked in descending order of thickness: moderately weathered sandy mudstone, silty clay, strongly weathered sandy mudstone, moderately weathered sandstone, and plain fill. On this basis, the judgement matrix of thickness was established as Table 8.
(4) Evaluation of mechanical properties
The mechanical properties of soil-rock mass must be considered in foundation evaluation and foundation design. Whether a layer is suitable as the supporting layer largely depends on the quality of its mechanical parameters.
The mechanical properties of each layer were evaluated against the standard bearing capacity of rock foundation mentioned in the project data. Then, the layers were ranked in descending order of mechanical properties as Table 9, where the figures marked with an asterisk are derived from relevant codes and empirical values of the region.
As shown in Table 9, the layers could be ranked in descending order of mechanical properties: moderately weathered sandstone, moderately weathered sandy mudstone, strongly weathered sandy mudstone, silty clay, and plain fill. On this basis, the judgement matrix of mechanical properties was established as Table 10.
Table 5. The uniformities of all layers, reporting for silty clay, strongly weathered sandy mudstone, moderately weathered sandstone, and moderately weathered sandy mudstone the minimum, maximum, and mean thickness (m), the thickness concentration area (m), and the proportion of the thickness concentration area (%)
Table 6. The judgement matrix of uniformity
Table 7. The judgement matrix of sorting
Table 8. The judgement matrix of thickness
Table 9. The mechanical parameters (index: standard bearing capacity of rock foundation, kPa)
Table 10. The judgement matrix of mechanical properties
(5) Groundwater effect evaluation
Groundwater effect was included in foundation evaluation, because most structures of the project are underground and affected by groundwater. The spatial information of survey points was imported to ArcGIS, and then projected to the geological model of the site, producing a 3D model of groundwater depth (as shown in Figure 7).
Based on project data and the empirical values of the region, the permeability coefficients of plain fill, silty clay, strongly weathered sandy mudstone, moderately weathered sandstone, and moderately weathered sandy mudstone were obtained as 5×10^-5 cm/s, 5×10^-6 cm/s, 2×10^-5 cm/s, 1.2×10^-5 cm/s, and 2×10^-6 cm/s, respectively.
Through the above analysis, the layers could be ranked in descending order of performance against groundwater effects (i.e., in ascending order of permeability): moderately weathered sandy mudstone, silty clay, moderately weathered sandstone, strongly weathered sandy mudstone, and plain fill. On this basis, the judgement matrix of groundwater effect was established as Table 11.
(6) Adverse geological phenomena evaluation
Adverse geological phenomena usually refer to the geological phenomena in and around the construction site that are not conducive to engineering construction, such as landslides, debris flows, ground collapses, and hidden karsts.
The project data show that the construction site, located on the east wing of the Yuelai syncline, has a normal stratigraphic sequence, without adverse geological effects like landslides, ground collapses, or faults. Hence, the factors in the judgement matrix of adverse geological phenomena are of equal importance.
3.2.3 Results of foundation evaluation
Based on the evaluation system and the judgment matrix of the criteria layer, the suitability of each layer as supporting layer of the foundation was calculated according to the results of each judgement matrix, yielding the results of foundation evaluation.
(1) Display of weight distribution
Through single ranking and overall ranking, the weights of all criteria of our model were obtained (as shown in Figure 8).
(2) Results of the judgement matrix of the criteria layer
The results of the judgement matrix of the criteria layer are presented in Table 12 below.
(3) Evaluation results and comparison chart
The criteria weights were coupled with the results of the judgement matrix of the criteria layer to produce the suitability of different layers (Table 13, Figure 9). Obviously, moderately weathered sandstone and moderately weathered sandy mudstone are the suitable supporting layers of the foundation.
3.2.4 Results analysis
In the survey report of the metro station, there is a complete section on foundation evaluation. This section reports that the construction site is covered with silty clay and a small amount of plain fill, sandstone, and sandy mudstone, as confirmed by drilling, geological mapping, and surveying. According to the engineering geological features of the soil-rock mass in the site, the upper fill cannot serve as the supporting layer of the foundation, due to its great variation in thickness, poor uniformity, and low strength. The silty clay in the lower part would cause differential settlement if it acted as the supporting layer: it varies greatly in thickness, exists as lenticles or thins out in local areas, and has poor strength. However, the moderately weathered rocks in the lower part are an ideal supporting layer for the foundation, thanks to their high strength and stability. The conclusion of the report agrees well with our AHP results.
Figure 7. The model of groundwater depth
Table 11. The judgement matrix of groundwater effect
Table 12. The results of the judgement matrix of the criteria layer
(For each criterion the table lists the characteristic roots, CR values, and resulting weight vectors, together with bar charts of (a) uniformity, (b) sorting, (c) thickness, (d) mechanical properties, (e) groundwater depth, (f) groundwater seasonality, and (g) adverse geological phenomena of the different layers.)
Table 13. The suitability of each layer
Figure 8. The pie chart of weight distribution
Figure 9. The suitability of each layer
This paper successfully visualizes foundation evaluation based on CGB technology integration and the AHP. Multiple influencing factors were quantified, making the evaluation process and results more reliable. The evaluation process is completely visible, and the evaluation results are highly accurate, allowing every construction party to check the judgement of each factor easily and independently. The research findings provide a good reference for the selection of the supporting layer of foundations in urban rail transit projects.
This work was supported by Chongqing Social Career and People's Livelihood Guarantee Science and Technology Innovation Special Program (cstc2016shmszx30017); Chongqing Fundamental and Frontier Research (cstc2017jcyjAX0260); Chongqing Graduate Education Innovation Fund (CYS18227).
APSIPA Transactions on Signal and Information Processing
Robust and efficient content-based music retrieval system
II. RELATED WORKS
A) Music content representation
B) Noise reduction
III. THE SYSTEM ARCHITECTURE OVERVIEW
IV. MUSIC RETRIEVAL PROCESS
A) MFCCs
B) Chroma feature
C) PAA and the adapted SAX
V. THE STORAGE STRUCTURES AND THE SCORING MECHANISM OF THE DATABASE
A) AFPI structure
B) Entropy and the mechanism for merging the 51 partial scores given by each AFPI tree
C) Mechanism for merging the 51 partial scores given by each AFPI tree
D) Noise suppression based on AKLT
VI. EVALUATING THE PERFORMANCE BY EXPERIMENTS
A) Experimental data and measures
B) Experimental results
APSIPA Transactions on Signal and Information Processing, Volume 5
2016 , e4
Yuan-Shan Lee (a1), Yen-Lin Chiang (a1), Pei-Rung Lin (a1), Chang-Hung Lin (a1) and Tzu-Chiang Tai (a2)
1 Department of Computer Science and Information Engineering, National Central University, Jhongli, Taiwan
2 Department of Computer Science and Information Engineering, Providence University, Taichung, Taiwan
Copyright: © The Authors, 2016
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
DOI: https://doi.org/10.1017/ATSIP.2016.4
Published online by Cambridge University Press: 28 March 2016
Fig. 1. The main structure of the proposed music retrieval system.
Fig. 2. A diagram for the music retrieving process in our system.
Fig. 3. Shepard helix of pitch perception. The vertical dimension is tone height, and the angular dimension is chroma.
Fig. 4. A 128-dimensional time series vector is reduced to an 8-dimensional vector of PAA representation [11].
Fig. 5. An illustration of a symbolic sequence. The PAA representation shown in Fig. 4 is converted into a symbolic sequence of three distinct symbols: a, b, c, via the original SAX method.
Fig. 6. The pdf curves for the standard Cauchy and the standard Gaussian distributions. The curve exactly on the solid-colored area is the Cauchy distribution curve.
Fig. 7. An example of a cumulative distribution function (CDF) P of the time series variable x. When n=5, the breakpoints are located in the positions of P(1/5), P(2/5), P(3/5), and P(4/5), respectively. To find x from P(x), the inverse function of a CDF is required.
Fig. 8. Example of AFPI tree structure, where n=6 and K=3. This figure is redrafted from [2].
Table 1. Two examples of pattern relations.
Table 2. The common bits.
Fig. 9. The flowchart of AKLT.
Table 3. The example of how to calculate accuracy.
Fig. 10. Comparison of the proposed system and baseline system.
Fig. 11. Comparison between different dimensions of features.
Fig. 12. Comparison between noised music clips and enhanced music clips.
This work proposes a query-by-singing (QBS) content-based music retrieval (CBMR) system that uses the Approximate Karhunen–Loève transform for noise reduction. The proposed QBS-CBMR system uses a music clip as a search key. First, a 51-dimensional feature matrix containing 39 Mel-frequency cepstral coefficient (MFCC) features and 12 Chroma features is extracted from an input music clip. Next, adapted symbolic aggregate approximation (adapted SAX) is used to transform each dimension of features into a symbolic sequence. Each symbolic sequence corresponding to each feature dimension is then converted into a structure called the advanced fast pattern index (AFPI) tree. The similarity between the query music clip and the songs in the database is evaluated by calculating a partial score for each AFPI tree. The final score is obtained by calculating the weighted sum of all partial scores, where the weighting of each partial score is determined by its entropy. Experimental results show that the proposed music retrieval system performs robustly and accurately with the entropy weighting mechanism.
Digital music data on the Internet are explosively growing. Therefore, applications of content-based music retrieval (CBMR) system are more and more popular. Searching music by a particular melody of a song directly is more convenient than by a name of a song for people. Moreover, according to the survey from the United Nations [1], the 21st century will witness even more rapid population ageing than did the century just past; therefore, it is important to develop an efficient and accurate way to retrieve the music data.
A CBMR method is a more effective approach for a music retrieval system than the text-based method. A CBMR system aims to retrieve and query music by acoustic features of music, while a text-based music retrieval system only takes names, lyrics, and ID3 tags of songs into consideration.
Query-by-singing (QBS) is a popular method in CBMR, and many QBS-based approaches have been developed to date. Huang [2] proposed a QBS system that extracts the pitches and volumes of the music. These data are used to build an index structure via the advanced fast pattern index (AFPI), with Alignment [3] as the searching technique.
Lu et al. [4] proposed an extraction mechanism that regards audio data as a sequence of music notes, on which a hierarchical matching algorithm is performed. Finally, the similarity scores of each song to the query were combined with respect to the pitches and the rhythm by a linear ranking formula. This approach is accurate when an instrumental clip is given as the search key; however, the accuracy decreases when the input is a human humming voice. Cui et al. [5] introduced a music database that applies both text-based and content-based techniques simultaneously. Various acoustic features are regarded as indices and trained by a neural network mechanism. Such a design provides high efficiency and accuracy thanks to its algorithms, but it lacks portability because the implementation is complicated. Inspired by Huang [2], a QBS-CBMR system was proposed in our previous work [6], where an entropy-weighting mechanism was developed to determine the final similarity.
Presently, state-of-the-art QBS-CBMR systems can achieve high accuracy under clean conditions. However, under noisy conditions, the performance may degrade due to the mismatch between the noisy features and the clean-trained model. Motivated by this concern, this paper extends the previous work [6]. Noise effects are further reduced by applying the Approximate Karhunen–Loève transform (AKLT) [7] for preprocessing. The proposed robust QBS-CBMR system has four stages:
(1) Noise reduction: Considering the real case of music retrieval, the noise in the music clips may impact the results. Therefore, we use AKLT as preprocess to reduce the influence of the noise for all music clips.
(2) Feature extraction: The input music clip is first converted into 39-dimensional Mel-frequency cepstral coefficients (MFCCs) [8,9] and 12-dimensional Chroma features [10]. For each music clip, there are thus 51 feature dimensions in total. Second, each dimension of features is transformed into a symbolic sequence using the adapted symbolic aggregate approximation (adapted SAX) method [11] proposed in this work. These symbolic sequences are also called the SAX representation.
(3) The AFPI tree structure: Following feature extraction stage, the input music is transformed into 51 symbolic sequences with respect to 51 features. In the proposed QBS-CBMR system, symbolic sequences are regarded as a search key. Finally, these symbolic sequences are stored by a tree structure called the AFPI tree [2] due to high efficiency for the retrieval task.
(4) Score calculation: The results of the music retrieval task are determined by the "scores". After the two stages mentioned above, music clips are transformed into 51 AFPI trees. A partial score is calculated for each AFPI tree first. The final score is then obtained by the weighted summation of all partial scores, where the weighting of each partial score is determined by its entropy [12]. The higher scores denote the higher similarity between the query music clip and the songs in the database.
The rest of this paper contains the following sections: Section II briefly reviews related works. Section III briefs the overview of the proposed music retrieval system. The details of the feature extraction stage are discussed in Section IV. Section V describes how the database works to search the music clip input in detail. In Section VI, we present the performance of the proposed system through some experiments. Finally, Section VII concludes the paper.
The MFCCs were first proposed by Davis and Mermelstein in 1980 [13]. The MFCCs are non-parametric representations of audio signals and are used to model the human auditory perception system [9]. Therefore, MFCCs are useful for audio recognition [14]. This method has made important contributions to music retrieval to date. Tao et al. [8] developed a QBS system using the MFCC matrix. For improved system efficiency, a two-stage clustering scheme was used to re-organize the database.
On the other hand, the Chroma feature proposed by Shepard [10] has been applied in studies of music retrieval with great effectiveness. Xiong et al. [15] proposed a music retrieval system that uses the Chroma feature and note detection technology. The main concept of this system is to extract a music fingerprint from the Chroma feature. Sumi et al. [16] proposed a symbol-based retrieval system that uses Chroma and pitch features to build queries. Moreover, to achieve high precision, conditional random fields have been used to enhance the features.
Chroma features work well when queries and reference data are played from different music scores. It has been found that Chroma features can identify songs in different versions. Hence, Chroma features can be used to identify all kinds of songs, even cover versions [17]. This research extends our previous work [6]. Compared with [6], a new feature vector containing 39 MFCC features and 12 Chroma features is extracted.
A practical application must cope with environmental noise; otherwise, the accuracy of the music retrieval results decreases. Shen et al. [18] proposed a two-layer Hybrid Singer Identifier, including a preprocessing module and a singer modeling module. In the preprocessing module, the given music clip is separated into vocal and non-vocal segments. After the audio features are extracted, vocal features are fed into Vocal Timbre and Vocal Pitch models, and non-vocal features are fed into Instrument and Genre models. The work of [18] has been shown to be robust against different kinds of audio noise. However, the noise itself is not removed, so the performance is still affected by noise.
Mittal and Phamdo [19] proposed a Karhunen–Loeve transform (KLT)-based approach for speech enhancement. The basic principle is to decompose the vector space of the noisy speech into two subspaces, one is speech-plus-noise subspace and the other is a noise subspace. The signal is enhanced by removing the noise subspace from the speech-plus-noise subspace [20]. The KLT can perform the decomposition of noisy speech. Since the computational complexity of KLT is very high, the proposed system uses AKLT with wavelet packet expansion [7] to process the noise reduction of input music clips.
The structure of the proposed system is developed based on the work of Huang [2]. Figure 1 demonstrates the proposed system. The feature extraction stage converts music files into 51 symbolic sequences, which are stored using tree structures. The methods used in the feature extraction stage are discussed in detail in Section IV.
The feature extraction stage mainly contains two steps: (1) transform music files into 39 of the MFCCs features [8,9] and 12 of the Chroma features [10]; (2) convert each dimension of MFCCs into a symbolic sequence by the piecewise aggregate approximation (PAA) method [11] and the adapted SAX [11].
After feature extraction, each of the 51 symbolic sequences is then stored using a tree structure called the AFPI tree. Next, the 51 AFPI trees are used to generate a final score to evaluate the similarity between the query music clip and the songs in the database.
Two components stored in the database for each song are:
(1) 51 AFPI tree structures.
(2) Music IDs and other information.
The searching process only accesses these components instead of the original audio files, so that the proposed music retrieval system is portable.
In the proposed implementation, two music retrieval-related operations are performed: adding a complete music file into the database (the ADD operation), and searching from the database with a music clip file (the SEARCH operation). Both operations run the music retrieving process and access the tree structures in the database. However, this study is focused on the SEARCH operation for the following reasons:
• For a user, searching a database to find a song is more desirable than simply "donating" (adding) a song to the database.
• The only two differences between ADD and SEARCH are: (1) ADD builds the structure, while SEARCH searches the structure; (2) SEARCH analyzes the result from the database structures, while ADD does not.
Figure 2 shows the feature extraction stage, which is performed as follows:
MFCCs are non-parametric representations of audio signals, which model the human auditory perception system [9,13]. Therefore, MFCCs are regarded as a useful feature for audio recognition.
The derivation of MFCCs is based on the powers of the Mel windows. Let $X_\omega$ denote the ωth power spectral component of an audio signal, $S^k$ the power in the kth Mel window, and M the number of Mel windows, usually ranging from 20 to 24. Then $S^k$ can be calculated by:
(1) $$S^k = \sum\limits_{\omega = 0}^{F/2 - 1} W_{\omega}^k \cdot X_{\omega}, \quad k = 1,2, \ldots, M,$$
where $W_\omega^k$ is the ωth coefficient of the kth Mel window, and F is the number of samples in a frame, which must be a power of 2 and is usually set to 256 or 512 so that each frame spans approximately 20 to 30 ms.
Let L denote the desired order of the MFCCs. Then, we can calculate the MFCCs from logarithm and cosine transforms.
(2) $$c_n = \sum\limits_{k=1}^M \log \lpar S^k \rpar \cos \left[\lpar k - 0.5 \rpar {n\pi \over M} \right], \quad n = 1,2, \ldots, L.$$
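A minimal Python sketch of formulas (1) and (2) is given below; it assumes the Mel filterbank matrix W is already available and computes L cepstral coefficients from the power spectrum of a single frame. The toy filterbank and spectrum are hypothetical placeholders.

import numpy as np

def mfcc_from_power_spectrum(X, W, L=13):
    # X: one-sided power spectrum of a frame (length F/2).
    # W: M x (F/2) matrix of Mel window weights.
    S = W @ X                                   # power in each Mel window, eq. (1)
    M = S.shape[0]
    n = np.arange(1, L + 1)[:, None]
    k = np.arange(1, M + 1)[None, :]
    # eq. (2): cosine transform of the log Mel-window powers.
    return (np.log(S)[None, :] * np.cos((k - 0.5) * n * np.pi / M)).sum(axis=1)

# Hypothetical toy example: 4 Mel windows over an 8-point power spectrum.
W = np.abs(np.random.rand(4, 8))
X = np.abs(np.random.rand(8)) + 1e-6
print(mfcc_from_power_spectrum(X, W, L=3))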
Shepard proposed the use of tone height and Chroma to perform the perception of pitch [10]. The Chroma vector can be divided into 12 semi-tone families, between 0 and 1 into 12 equal parts. Additionally, 12 semi-tones constitute an octave. Shepard conceptualized pitch perceived by humans as a helix with a 1D line. Figure 3 illustrates this helix with its two dimensions. The vertical dimension is the continuous tone height, and the angular dimension is the Chroma. The Shepard decomposition of pitch can be expressed as
(3) $$f_p = 2^{h + c},$$
where p is the pitch, $f_p$ is the corresponding frequency, h is the tone height, $c \in [0,1)$ is the chroma, and $h \in \mathbb{Z}$.
The Chroma for a given frequency can then be calculated as follows:
(4) $$c = \log_2 f_p - \lfloor \log_2 f_p \rfloor,$$
where $\lfloor\cdot \rfloor$ denotes the greatest integer function. Chroma is the fractional part of the base-2 logarithm of frequency. As with pitch classes, frequencies an octave apart are mapped to the same chroma class.
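Following formulas (3) and (4), the chroma of a frequency can be extracted and quantised into one of the 12 semitone families, as in the short sketch below.

import numpy as np

def chroma_class(frequency, n_bins=12):
    # Chroma is the fractional part of log2(frequency); quantising it into
    # n_bins bins gives the semitone family (eq. (4)).
    c = np.log2(frequency) - np.floor(np.log2(frequency))
    return int(np.floor(c * n_bins)) % n_bins

# 440 Hz (A4) and 880 Hz (A5) are an octave apart and share the same chroma class.
print(chroma_class(440.0), chroma_class(880.0))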
The PAA method [11] reduces an n-dimensional vector into a w-dimensional one. The vector generated by the PAA method is called the PAA representation. An example of the PAA representation is demonstrated in Fig. 4.
After converting each row of the feature matrix into the PAA representations, the adapted SAX method is applied to the PAA representations to construct symbolic sequences shown in Fig. 5. The adapted SAX method is developed based on the work of Lin et al. [11]. The difference between these two methods is that the original SAX uses the Gaussian distribution curves, while the adapted SAX uses Cauchy distribution, whose probability density function (PDF) and cumulative distribution function (CDF) curves both look similar to the Gaussian curves. Figure 6 illustrates the difference.
When implementing both of the original SAX or the adapted SAX method, the inverse function of the CDF of the specified distribution is required in order to determine, where the breakpoints are located (see Fig. 7).
The CDF of the Gaussian distribution is represented as the following equation:
(5) $${1 \over 2} \left[1 + \hbox{erf} \left({x - \mu \over \sqrt{2\sigma^2}} \right) \right],$$
where erf(·) denotes the Gaussian error function.
Based on the work of Huang [2], this paper proposes the adapted SAX by applying Cauchy distribution. The CDF of the Cauchy distribution is represented as:
(6) $${1 \over \pi} \arctan \left({x - x_0 \over \gamma} \right) + {1 \over 2},$$
where $x_0$ is the location parameter, and $\gamma$ represents half of the interquartile range.
The adapted SAX with (6) is more feasible and convenient than the original SAX with (5). Both (5) and (6) are increasing functions for all real numbers x, but (5) contains the Gaussian error function (erf), whose inverse is not commonly available in the standard libraries of popular programming languages such as C++ and Java. In contrast, (6) involves the arctan function, whose inverse is the tangent function, available in almost all standard libraries of the major programming languages. Hence, implementing the adapted SAX with (6) is more feasible and convenient than implementing the original SAX with (5).
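A minimal sketch of the PAA step and the adapted SAX symbolisation is shown below. The breakpoints are obtained from the inverse of the Cauchy CDF in (6); the location and scale parameters used here are illustrative defaults, not the values used by the authors.

import numpy as np

def paa(series, w):
    # Piecewise aggregate approximation: reduce a series to w segment means.
    return np.array([seg.mean() for seg in np.array_split(np.asarray(series, float), w)])

def cauchy_breakpoints(n_symbols, x0=0.0, gamma=1.0):
    # Inverse of the Cauchy CDF in (6), evaluated at i/n_symbols for i = 1..n_symbols-1.
    p = np.arange(1, n_symbols) / n_symbols
    return x0 + gamma * np.tan(np.pi * (p - 0.5))

def adapted_sax(series, w, n_symbols=3, alphabet="abcdefghij"):
    segments = paa(series, w)
    breaks = cauchy_breakpoints(n_symbols)
    return "".join(alphabet[int(np.searchsorted(breaks, v))] for v in segments)

print(adapted_sax(np.sin(np.linspace(0, 6.28, 128)), w=8))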
The storage of the database contains 51 AFPI tree structures [2]. Each tree is constructed by the SAX representation of the corresponding MFCCs feature. Firstly, the partial score for the similarity between the query music clip and the songs in the database is calculated from each AFPI tree. Secondly, the final score for the overall similarity is determined by the proposed weighted summation of the partial scores, with respect to the entropy [12] of the SAX representation in each feature of the query.
The proposed QBS-CBMR system only stores the AFPI trees and the information of the songs in the database. Therefore, the proposed system is portable. The two methods used to build the storage structure in the proposed system are described in detail as follows.
Figure 8 shows an AFPI tree. The root of the tree consists of n pointers, one for each distinct symbol that can possibly be generated in the SAX representation [2,3]. Each pointer in the root for the symbol S_i points to a child node, which contains n sets of pattern relation records, one for each symbol S_j, which can be either the same as or different from S_i. Each record of a pattern relation contains an integer for the music ID and a K-bit binary code for storing relation states.
The kth bit from the right in the K-bit binary code determines whether there exists a subsequence of the SAX representation from a particular music ID, in which the subsequence has a length of k+1 beginning with S i and ending with S j .
For example, in Fig. 8, the pointer for the symbol "b" at the root points to a child node, in which the pattern relation set for "a" contains a binary code of 010 for song 1 and a binary code of 011 for song 2. The binary code just mentioned for song 2 means that there are subsequences "ba" and "b*a", but not "b**a", of the SAX representation for the song. In this example, each asterisk can be replaced with any symbol.
When searching for a song with a music clip, the whole SAX representation of each MFCCs feature from the music clip is parsed into a set of pattern relations, which is then compared with the respective AFPI tree. Table 1 shows two examples of a pattern relation set for a SAX representation.
After a feature from the music clip is parsed into a pattern relation set, it is compared with all pattern relation sets stored in the AFPI tree. This step obtains a partial score for the similarity of the music clip to each song in the database. Pattern relation sets can be compared in the following two steps:
(1) Find the intersection of the two sets of (S i , S j ) from each pattern relation set.
(2) For every (S i , S j ) in the intersection set, find the number of common bits in the binary codes of the two pattern relation sets. This step is illustrated in Table 2 as an example.
*Common bits and the two binary codes must be in the same position.
†"Yes" and "No" are marked in the order of the content of the two binary codes, where "Yes" means that a pair of the common bits at the specified position exists.
Table 2 shows the common bits of the binary codes from song 2 and the music clip in the example in Table 1. Since the summation of the number of common bits is 1+2=3, the partial score given by the AFPI tree is thus 3 points.
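The partial-score computation can be sketched as below, with each pattern relation set represented as a mapping from a symbol pair (S_i, S_j) to its K-bit relation code. The codes in the example are hypothetical; they merely reproduce a 1 + 2 = 3 common-bit count like the one in Table 2.

def common_bit_score(query_relations, stored_relations):
    # For every (Si, Sj) pair present in both sets, count the positions at which
    # both K-bit relation codes carry a 1, and sum these counts.
    score = 0
    for pair in query_relations.keys() & stored_relations.keys():
        score += bin(query_relations[pair] & stored_relations[pair]).count("1")
    return score

# Hypothetical 3-bit relation codes for one stored song and one query clip.
stored = {("b", "a"): 0b011, ("a", "c"): 0b111, ("c", "b"): 0b001}
query = {("b", "a"): 0b010, ("a", "c"): 0b110}
print(common_bit_score(query, stored))   # 1 + 2 = 3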
The concept of information entropy was first introduced by Claude E. Shannon in 1948 [12]. Information entropy methods are still widely used in many fields of computer science, and an entropy method was applied in this study.
The information entropy [12] H(X) of a discrete random variable X can be calculated by the following formula:
(7) $$H \lpar X \rpar = -\sum\limits_i P \lpar x_i \rpar \log_b P \lpar x_i \rpar ,$$
where x i denotes each possible event of X, and P(x i ) is the probability that the event x i occurs, while b is the logarithmic base used, usually set to 2 for binary logarithm.
In the proposed system, the entropy in (7) is applied to the SAX representation sequence {a_1, a_2, …, a_n}. We set x_i as each distinct symbol that can be found in the sequence, and P(x_i) represents the probability that a_j equals x_i, where j is a uniformly distributed random integer between 1 and n (inclusive).
To search for a music clip, each of the 51 AFPI trees gives a particular partial score for the similarity to the music clip, and the 51 scores are combined. Next, the combined score is the "final score", which is then ranked as the final ranking result.
The final score is obtained by calculating the weighted summation of all partial scores with the corresponding entropy, which can be evaluated as the following equation:
(8) $$R_{final} = \sum\limits_{k = 1}^{51} R_k \cdot H \lpar X_k \rpar ,$$
where R_k is the partial score given with respect to the kth feature, and H(X_k) represents the entropy of the random variable X_k from the SAX representation sequence of the search key in the kth feature.
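Equations (7) and (8) can be combined as in the sketch below, where the entropy of each feature's SAX sequence in the query weights that feature's partial score. The partial scores and sequences in the example are hypothetical.

import math
from collections import Counter

def entropy(symbols, base=2):
    # Shannon entropy of a symbol sequence, eq. (7).
    counts = Counter(symbols)
    n = sum(counts.values())
    return -sum((c / n) * math.log(c / n, base) for c in counts.values())

def final_score(partial_scores, query_sequences):
    # Eq. (8): entropy-weighted sum of the partial scores over all features.
    return sum(r * entropy(seq) for r, seq in zip(partial_scores, query_sequences))

# Hypothetical partial scores and query SAX sequences for three features.
print(final_score([3, 5, 2], ["abcabc", "aaaaab", "cbacba"]))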
Walter and Zhang [21] proposed that, for N random vectors $\{{\bf x}_{n} \in R^{d} \vert n = 1,2, \ldots, N\}$, the following steps find the approximate KL basis. First, expand the N vectors into complete wavelet packet coefficients. Then calculate the variance at each node and search this variance tree for the best basis. Sort the best basis vectors in decreasing order of variance, and select the top m best basis vectors to form a matrix U. Finally, transform the N random vectors using the matrix U and diagonalize the covariance matrix R_N of these vectors to obtain the eigenvectors. Since m≪d is expected, the computational load is reduced considerably. Figure 9 illustrates the flowchart.
A linear estimator is used for noise reduction [19]. Let z, y, and w be K-dimensional vectors denoting noisy speech, clean speech, and noise, respectively. Transform z and w into the wavelet packet domain. Then, use z and w to build the variance trees T_z and T_w, respectively. Because clean speech and noise are independent, subtract T_w from T_z node by node, and obtain the eigen-decomposition of the clean speech by transforming the subtracted variance tree with the AKLT [7].
Let $\tilde{\bf R}_{y} = {\bf U}_{y} {\bf \Lambda}_{y} {\bf U}_{y}^{H}$ denote the eigen-decomposition. Finally, let M be the number of eigenvalues of $\tilde{\bf R}_{y}$ greater than zero, and let ${\bf U}_{y} = \lsqb {\bf U}_{1}, {\bf U}_{2} \rsqb$, where ${\bf U}_1$ denotes the $K\,{\times}\,M$ matrix of eigenvectors with positive eigenvalues,
(9) $${\bf U}_1 = \{u_{yk} \vert \lambda_y \lpar k \rpar \gt 0 \}.$$
Let ${\bf z}^{T} = {\bf U}_{y}^{H} {\bf z} = {\bf U}_{y}^{H} {\bf y} + {\bf U}_{y}^{H} {\bf w} = {\bf y}^{T} + {\bf w}^{T}$ . The covariance matrix ${\bf R}_{w^{T}}$ of w T is ${\bf U}_{y}^{H} {\bf R}_{w} {\bf U}_{y}$ . Let $\sigma_{wT}^{2} (k)$ be the kth diagonal element of ${\bf R}_{w_{T}}$ . The obtained estimate of $\tilde{\bf y}$ is
(10) $$\tilde{\bf y} = {\bf Hz}, {\bf H} = {\bf U}_y {\bf QU}_y^H,$$
where Q is a diagonal matrix.
(11) $${\bf Q} = diag \lpar q_{kk} \rpar , q_{kk} = \left\{\matrix{a_k^{1/2}, \hfill &k = 1,2, \ldots, M \hfill \cr 0, \hfill &\hbox{otherwise}, \hfill} \right.$$
(12) $$a_k = \left\{\matrix{\exp \left({-v\sigma_{wT}^2 \lpar k \rpar \over \lambda_y \lpar k \rpar} \right), \hfill & {k = 1, 2, \ldots, M} \hfill \cr 0, \hfill &\hbox{otherwise}, \hfill} \right.$$
where v (=0.5) is a predetermined constant.
The assumptions are that the noise is stationary and that the pure noise vector w is known. The pure noise vector w can be obtained from the noise only frame.
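Assuming the eigenvectors and eigenvalues of the approximate clean-signal covariance and the noise covariance are already available, the gain matrix of (10)-(12) can be sketched as follows; the example input is a hypothetical 4-dimensional signal with unit-variance white noise.

import numpy as np

def klt_gain_matrix(U_y, lambda_y, R_w, v=0.5):
    # sigma_wT^2(k): diagonal of the noise covariance in the KLT domain.
    sigma_wT2 = np.real(np.diag(U_y.conj().T @ R_w @ U_y))
    # a_k from eq. (12): exponential gain for components with positive eigenvalues.
    a = np.where(lambda_y > 0,
                 np.exp(-v * sigma_wT2 / np.maximum(lambda_y, 1e-12)),
                 0.0)
    Q = np.diag(np.sqrt(a))                    # q_kk = a_k^{1/2}, eq. (11)
    return U_y @ Q @ U_y.conj().T              # H of eq. (10); the clean estimate is H @ z

# Hypothetical 4-dimensional example with unit-variance white noise.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
lam, U = np.linalg.eigh(A @ A.T)
H = klt_gain_matrix(U, lam, np.eye(4))
print(H.shape)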
In this section, we evaluate the performance in accuracy and efficiency of the proposed QBS-CBMR system.
The database consists of 200 songs in various languages: 44 in Chinese, 51 in English, and 105 in Japanese. A total of 110 songs in the database are sung by male artists, 62 by female artists, and the remaining 28 are choral works. In all, 67 different artists are involved, and the songs are of various tempos and lengths so that the database is as diversified as possible. All songs in our database are sampled at 22 050 Hz and 8 bits per sample.
The proposed system is implemented in Java SE 7, and the experiments are run on Windows 8.1 with a 3.4 GHz i7-3770 CPU and 4 GB of RAM.
The performance was measured in terms of Accuracy, which was calculated as follows:
(13) $$\hbox{Accuracy} = {\sum\nolimits_{i=1}^N R(i) \over N},$$
where N is the total number of tests. In each test, R(i) returns 1 or 0 depending on whether a given music clip can or cannot be searched in the top n ranking, respectively. Therefore, different Accuracy rankings are calculated with different n.
Table 3 shows the results obtained when using this method to compute accuracy. The first row is the name of input music clips. When n=2, there are two correct results of music retrieval that can be found in all four songs. Hence, the Accuracy is 50%.
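Equation (13) amounts to counting how many query clips place the correct song within the top n ranks, as in the brief sketch below with hypothetical ranks.

def top_n_accuracy(rank_positions, n):
    # rank_positions[i] is the rank of the correct song for the i-th query clip;
    # R(i) = 1 when that rank falls within the top n.
    return sum(1 for r in rank_positions if r <= n) / len(rank_positions)

# Hypothetical ranks for four query clips: two of them fall within the top 2.
print(top_n_accuracy([1, 2, 5, 7], n=2))   # 0.5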
Twenty songs were randomly chosen from the 200 songs in the database. In the prelude of a song, there are usually no obvious melodic changes until the chorus. Therefore, accuracy is higher when using music clips from the chorus than from the prelude. Music clips from the chorus are used in this experiment.
The database contains N songs and M music clips. Each music clip is of t seconds. We test the total amount of the songs within the top n ranks among M search keys (M≤N). In the following experiments, we set N=200, M=20, and t=20±1 s.
The system that uses the AFPI tree and the Fusion of AFPI and Alignment (FAA) tree without the entropy-weighting mechanism was selected as the baseline system [2]. Figure 10 shows the comparison of the proposed system and the baseline system. The experimental results show that the proposed method with entropy weighting outperforms the baseline system.
The first experiment shows that using AFPI with entropy weighting is better than using AFPI or FAA alone. The proposed music retrieval system uses the AFPI method for the database and the entropy-weighted summation mechanism for determining the final ranking, whereas [2] claims that the FAA method performs better than AFPI alone; the methods used in the music retrieval processes differ. The first experiment therefore compared the accuracy of AFPI and FAA with and without entropy weighting, to determine whether entropy weighting should be used in the summation mechanism for the final result.
The second experiment used MFCCs and Chroma as features for music retrieval. Every music clip is a 20-second excerpt of the chorus part. Figure 11 shows the comparison between different dimensions of features. The second experiment shows that the accuracy of using both MFCC and Chroma features is clearly better than using the MFCC feature alone.
In the third experiment, 5 dB white noise and pink noise were added to each music clip, and AKLT was used to suppress the effect of the noise. Figure 12 shows the retrieval results for noisy and enhanced music clips. The features are the same as in the previous experiments, namely MFCC and Chroma features. After using AKLT to enhance the music clips, the music retrieval results are clearly better than without AKLT.
The robust QBS music retrieval system proposed in this study first converts the 51-dimensional features, which include MFCCs and Chroma features, into symbolic sequences by applying adapted SAX methods. The symbolic sequence is then used to construct the AFPI tree. Finally, the entropy-weighting mechanism is proposed to determine the final ranking. Noise effects are further reduced by applying AKLT preprocessing. The experimental results show that the proposed QBS-CBMR system outperforms the baseline system. Future studies will optimize the parameters of the proposed method such as the length of symbolic sequence and the dimension of the PAA representation. Moreover, a large database will be used to demonstrate the efficiency of the system.
This research was supported in part by Ministry of Science and Technology, Taiwan, R.O.C., under Grant MOST 104-2218-E-126-004.
Yuan-Shan Lee received his M.S. degree in Applied Mathematics from National Dong Hwa University, Hualien, Taiwan, in 2013, and is presently working toward the Ph.D. degree in Computer Science and Information Engineering at National Central University, Taoyuan, Taiwan. His current research interests include machine learning, blind source separation, neural networks, and signal processing.
Yen-Lin Chiang received his B.S. degree in Computer Science from National Central University, Taiwan, in 2015. He is presently pursuing his M.S. degree at National Tsing Hua University, Taiwan. His general research interests include machine learning and pattern recognition for multimedia processing.
Pei-Rung Lin is presently working toward the B.S. degree at National Central University, Taiwan. Her recent work has been in the areas of music retrieval and score following. Her general research interests include machine learning and pattern recognition.
Chang-Hung Lin received his B.S. degree in Computer Science and Information Engineering from National Central University, Taoyuan, Taiwan. His general research interests include singing evaluation for karaoke applications, machine learning, and pattern recognition.
Tzu-Chiang Tai received his M.S. and Ph.D. degrees in Electrical Engineering from National Cheng Kung University, Tainan, Taiwan, in 1997 and 2010, respectively. He was with Philips Semiconductors and United Microelectronics Corporation (UMC) from 1999 to 2003. Presently, he is an Assistant Professor in the Department of Computer Science and Information Engineering, Providence University, Taichung, Taiwan. His research interests include signal processing, reconfigurable computing, VLSI design automation, and VLSI architecture design.
Bibliometric analysis of global Lassa fever research (1970–2017): a 47-year study
Henshaw Uchechi Okoroiwu, Francisco López-Muñoz & F. Javier Povedano-Montero
Lassa fever has been a public health concern in the West African sub-region, where it is endemic, and a latent threat to the world at large. We investigated the trend in Lassa fever research using a bibliometric approach.
We used the SCOPUS database, employing "Lassa fever" as the search descriptor. The most common bibliometric indicators were applied to the selected publications.
The number of scientific research articles retrieved for Lassa fever research from 1970 to 2017 was 1101. The growth of publications was more linear (r = 0.67) than exponential (r = 0.53). The duplication time of the scientific articles was 9.19 years. A small number of authors was responsible for the bulk of article production (transience index of 78.89%). The collaboration index was 4.59 per paper. The Bradford core consisted of 19 journals, with the Journal of Virology at the top (4.6%). The majority of the output came from US government agencies. The United States was the most productive country. Joseph B. McCormick was the most productive author, while the New England Journal of Medicine published the two most cited articles.
The growth of the scientific literature on Lassa fever followed a linear pattern, with a high proportion of transient authors, indicating low productivity and non-specialized authors from related areas publishing sporadically. This study provides a helpful reference for medical virologists, epidemiologists, policy decision makers, academics and Lassa fever researchers.
Lassa fever is a viral hemorrhagic fever that was first described in the town of Lassa in north-eastern Nigeria [1, 2]. Lassa virus (LASV), the causative agent of Lassa fever, is a negative-strand RNA virus belonging to the Old World complex of the family Arenaviridae, characterized by the appearance of "sandy" ribosomes encapsulated in the virion as seen under the electron microscope [3,4,5]. The reservoir/natural host of the virus is the multimammate rat Mastomys natalensis, which lives close to human settlements [4]. The virus may be transmitted from human to human, giving rise to nosocomial or community-based outbreaks [5]. Mastomys natalensis sheds the virus in urine [6], and contamination of human food is a likely mode of transmission. Clinical manifestations of Lassa fever range from asymptomatic infection to hemorrhagic fever [7]. Approximately one-third of Lassa fever survivors develop bilateral or unilateral sudden-onset sensorineural hearing loss (SNHL), from which some patients fully recover [3, 8]. Lassa fever is endemic in West Africa, causing an estimated 500,000 cases and 5000 deaths per year [3]. The areas implicated in epidemics are Panguma and Kenema in Sierra Leone; Zorzor, Phebe and Gianta in Liberia; and Jos, Onitsha, Zonkwa, Vom, Imo, Laffia, Irrua and Abakiliki in Nigeria [4, 9]. More recently, a cross-border imported infection was reported in Germany in 2016 [10]. Lassa virus is a class 1A infective agent, requiring a high-level containment (biosafety level 4) facility for diagnosis or research [11].
Bibliometric studies are important tools for evaluating the social and scientific relevance of a given discipline within a specified time frame. The term "bibliometrics" was introduced in 1969 by Allan Pritchard to define the application of mathematical and statistical procedures to the process of propagation of written communication within scientific disciplines, via quantitative study of the varying aspects of this type of communication [12]. Despite their methodological limitations, bibliometric studies remain useful tools for evaluating the social and scientific importance of a selected discipline [13], considering that they give an insight into the growth, size and distribution of the scientific literature in the field of interest within a specified time frame [14]. To facilitate understanding of ongoing Lassa fever research output, social network analysis (SNA) of bibliometric data is usually used. Evaluation of research collaboration and its evolution over time is usually done using co-authorship-based SNA, since co-authorship is the most apparent and assessable indicator of collaboration [15, 16]. SNA metrics can highlight network patterns and identify the most influential participants [17]. Results of bibliometric analyses play a major role as proxy indicators of research and development as well as in strategic planning [18].
This study aimed to identify Lassa fever research activities, to analyse the structure of the evolving Lassa fever research community network over time, and to identify existing research collaborations and influential actors. It will be of value to researchers, clinicians, research funders and health policy makers seeking to adopt stringent policies regarding infectious diseases such as Lassa fever.
The SCOPUS database was used for this study. Scopus was selected because it is the largest abstract and citation database of peer-reviewed literature, including scientific journals, books and conference proceedings. Scopus indexes nearly 22,000 titles from over 5000 publishers, of which 20,000 are peer-reviewed journals in the scientific, medical, technical and social sciences (including the arts and humanities). Scopus is more comprehensive and more user-friendly for biomedical literature research than other bibliometric databases, and it is well documented as the world's largest database of abstract and citation information used in bibliometric studies [19, 20]. We retrieved articles published from 1970 (the year of the first record) to 2017 containing the descriptor "Lassa fever" limited to three fields (title, keywords and/or abstract), using remote-downloading techniques. This study took into account all document types: original articles, reviews, brief reports, letters to the editor, editorials, and others.
The bibliometric indicators used in this study include Price's productivity index, Price's law, duplication time and annual growth rate, Lotka's productivity level (PL), Price's transience index, Bradford zones, impact factor and the co-authorship index.
Price's law [21] is a broadly used indicator for assessing the productivity of a particular discipline or country. It relies on exponential growth, an important feature of scientific productivity. To assess whether scientific production in Lassa fever research follows Price's law of exponential growth, the data were fitted with a linear adjustment according to the equation y = 0.6676x − 1308 and an exponential adjustment according to the equation y = 8E-32e^(0.0373x). Price's law is said to be fulfilled when the coefficient of determination of the exponential fit is greater than that of the linear fit.
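To make the fitting procedure concrete, the sketch below fits both a linear and an exponential model to a toy series of yearly publication counts and compares their coefficients of determination, as Price's law requires. The counts and fitted coefficients are illustrative assumptions, not the study's data.

```python
# Minimal sketch: compare linear vs exponential fits to yearly output (toy data).
import numpy as np

years = np.arange(1970, 2018)
counts = np.maximum(1, (0.67 * (years - 1969)).astype(int))  # toy, roughly linear counts

# Linear fit y = a*x + b and its R^2
a, b = np.polyfit(years, counts, 1)
r2_linear = np.corrcoef(years, counts)[0, 1] ** 2

# Exponential fit y = A*exp(B*x), estimated on the log scale
B, logA = np.polyfit(years, np.log(counts), 1)
r2_exponential = np.corrcoef(years, np.log(counts))[0, 1] ** 2

# Price's law is considered fulfilled when the exponential fit explains the data better
print(f"linear R^2 = {r2_linear:.3f}, exponential R^2 = {r2_exponential:.3f}")
```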
Bradford's law [22] was used to determine the dispersion of the scientific literature on Lassa fever in this study. Bradford's law is a bibliometric indicator of the dispersion of scientific literature. Bradford proposed concentric zones of productivity (Bradford zones) with decreasing density of information, hypothesizing that each zone contains a similar number of documents while the number of journals increases from one zone to the next. Bradford zones help to identify the journals that are most heavily used in a specified discipline. The stratification of journals across the Bradford zones follows the proportions 1 : n : n² ... The articles are stratified into three groups of approximately equal size, of which one is the core zone and the other two are peripheral zones.
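The zoning itself can be illustrated with a short sketch: journals are ranked by the number of articles they contributed and the ranked list is cut into three blocks holding roughly equal numbers of articles, so that the few journals in the first block form the core zone. The article counts below are invented for illustration only.

```python
# Minimal sketch of Bradford zoning on toy per-journal article counts.
article_counts = sorted([51, 40, 33, 25, 20, 15, 12, 10, 8, 8, 7, 6, 5, 5,
                         4, 4, 3, 3, 2, 2, 2, 1, 1, 1, 1, 1], reverse=True)
total = sum(article_counts)
zone_target = total / 3

zones, cumulative, zone = [[], [], []], 0, 0
for c in article_counts:
    cumulative += c
    zones[zone].append(c)
    if cumulative >= zone_target * (zone + 1) and zone < 2:
        zone += 1          # move to the next (larger, less productive) zone

for i, z in enumerate(zones):
    print(f"zone {i}: {len(z)} journals, {sum(z)} articles")
```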
We also used duplication time and annual growth rate as indicators of the productivity of the scientific literature. Both are measures of growth. Duplication time refers to the time (in years) it takes a subject to double its production, while the annual growth rate compares the current year's output with that of the previous year. The equation for duplication time is:
$$ D=\frac{\ln 2}{b} $$
where b is the constant that relates the growth rate with the already acquired size of the discipline. The annual growth rate was calculated using the equation:
$$ \mathrm{R}=100\left({\mathrm{e}}^{\mathrm{b}}-1\right) $$
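As a worked example of these two indicators, the snippet below plugs in the growth constant reported later in the paper (b = 0.0751) to recover the duplication time and the implied annual growth rate; only b is taken from the paper, the rest is arithmetic.

```python
# Duplication (doubling) time and annual growth rate from the growth constant b.
import math

b = 0.0751                                  # growth constant of the exponential fit
doubling_time = math.log(2) / b             # ~9.2 years, close to the reported 9.19
annual_growth = 100 * (math.exp(b) - 1)     # annual growth rate implied by this b (~7.8%)
print(f"doubling time = {doubling_time:.2f} years, annual growth = {annual_growth:.2f}%")
```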
Lotka's productivity index (PL) was used to assess the productivity of authors. Lotka's law of author distribution was proposed on the basis of the number of published articles and is known as the "quadratic inverse law of scientific production" [23]. Lotka carried out a quantitative evaluation of authors' publications and found that far more authors publish a few articles than publish many. The law states that, within a scientific community, the number of authors A(n) that have published a specific number of articles (n) within a given period equals the number of authors that have published a single article, A(1), within the same time, divided by the square of n. It is represented mathematically as:
$$ A(n)=\frac{A(1)}{n^2} $$
According to Lotka's index, authors are divided into three categories of productivity: small producers, who published a single paper; mid-range producers, who published between 2 and 9 papers; and large producers, who published 10 or more papers.
Price's transience index was used to assess the proportion of authors with a single publication. It is calculated as the percentage of authors with a single publication among all authors and is expressed mathematically as:
$$ IT=\frac{authors\ with\ a\ single\ publication}{all\ authors}\times 100 $$
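A small sketch shows how both author-productivity indicators follow from a simple count of papers per author; the toy author-to-paper mapping is purely illustrative.

```python
# Classify authors into Lotka productivity bands and compute the transience index.
from collections import Counter

papers_per_author = {"A": 1, "B": 1, "C": 3, "D": 12, "E": 1, "F": 2}  # toy data

bands = Counter()
for n in papers_per_author.values():
    if n == 1:
        bands["small (1 paper)"] += 1
    elif n < 10:
        bands["mid-range (2-9 papers)"] += 1
    else:
        bands["large (>=10 papers)"] += 1

transience_index = 100 * bands["small (1 paper)"] / len(papers_per_author)
print(dict(bands), f"transience index = {transience_index:.1f}%")
```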
Impact factor (IF) was used as an indicator of publication repercussion. The impact factor as a bibliometric indicator was developed by the Institute for Scientific Information (Philadelphia, PA, USA). It is published yearly in the Journal Citation Report (JCR) section of the Science Citation Index Expanded (SCI). The calculation takes into account the number of times a journal was cited in the SCI source journals within the two preceding years. The 2017 JCR impact factor data were used for this study.
Co-authorship index was used to determine level of collaboration in the publication of Lassa fever related documents.
The last indicator used in this study is the national participation index (PI) in overall scientific publication on Lassa fever and in the field of infectious diseases for the world's ten most productive countries in biomedical and health sciences during the period 1970–2017. The participation index is the quotient between the number of documents produced by a given country and the total number of documents in the repertoire.
Evaluation of global publication
Using the search criteria, we retrieved 1101 research publications over the 47-year period (1970–2017). Of these, 67.67% (n = 745) were original articles, while 17.35% (n = 191), 4.90% (n = 54), 2.27% (n = 25), 2.09% (n = 23), 1.82% (n = 20), 1.73% (n = 19), 1.18% (n = 13), 0.45% (n = 5), 0.36% (n = 4) and 0.18% (n = 2) were reviews, letters, editorials, notes, short surveys, book chapters, conference papers, articles in press, errata and books, respectively (Table 1).
Table 1 Contributing literature type
The chronological distribution of the publications showed a notable increase in the number of articles generated in the area of Lassa fever research (Fig. 1). To determine whether the increase in scientific literature followed Price's law, the data were adjusted linearly according to the equation y = 0.6676x − 1308 and exponentially according to the equation y = 8E-32e^(0.0373x). Price's law is not fulfilled (r = 0.6707 for the linear adjustment versus r = 0.5270 for the exponential adjustment), showing that the growth of scientific literature in the area of Lassa fever research is in the linear growth stage.
Fig. 1 Chronological distribution of scientific literature on Lassa fever within the study period (a exponential trendline; b linear trendline)
Figure 2 shows the temporal evolution of literature production. To calculate the duplication time, the dispersion graph was adjusted to the equation y = 45.365e^(0.0751x), with a determination coefficient of 0.87. The production covered 47 years. Applying the equation for duplication time gives 9.19 years; that is, the production of scientific literature in the area of Lassa fever doubles every 9.19 years.
Temporal evolution of publication in Lassa fever. \( D=\frac{Ln2}{b}=\frac{0.68904}{0.0751}=9.19 \). Production doubles every 9.19 years
Table 2 shows the stratification of the authors into groups according to their productivity level (PL). The largest group is made up of authors with a single publication (PL = 0), accounting for 78.89%, whereas large producers (PL ≥ 1) with 10 or more published papers accounted for 1.20%, the smallest fraction. Hence, Price's transience index, which corresponds to occasional authors who have produced one paper, is 78.89%.
Table 2 Classification of authors based on productivity
Table 3 shows the distribution of journals per Bradford zone. Nineteen journals (4.34%) made up the core zone, while 74 (16.89%) and 345 (78.77%) made up zones 1 and 2, respectively.
Table 3 Bradford division of journals
Figure 3 shows the Bradford distribution for the global data. This is a semi-logarithmic plot of the aggregate number of articles versus the aggregate number of journals (r). The straight zone was taken from r = 19 to r = 93 and adjusted to the equation y = 4.3723x + 321.11, with a high value of the determination coefficient (0.9919).
Bradford distribution, global data
Characteristics of collaboration
The 1101 published articles recorded in this study were produced by 3179 authors, with a mean co-authorship of 4.59. The document with the most authors had 84 signatures, and the most frequent number of signatures was 1 (Table 4).
Table 4 Analysis of collaboration among authors
Analysis of sources with highest publication
Table 5 shows the top 10 journals used for the dissemination of research publications on Lassa fever, with their corresponding impact factors (IFs) according to the 2016 JCR and the participation index (PI) of each journal in the total database over the analysed period. All of the journals have an impact factor; 6 of them have impact factors greater than 4 and 3 have impact factors greater than 2.
Table 5 Analysis of the top 10 sources with the largest number of publicationsa
Productivity of countries
The countries most productive in publishing documents on Lassa fever are listed in Table 6. The United States was the most productive (n = 450; PI = 40.87), followed by the United Kingdom (n = 117; PI = 10.63), Germany (n = 96; PI = 8.72), Nigeria (n = 65; PI = 4.90) and Sierra Leone (n = 54; PI = 4.90).
Table 6 Top 20 most productive countries in Lassa fever research
However, if we consider the productivity of the most productive countries in Lassa fever research in relation to their overall production in the field of infectious diseases, only the United States, United Kingdom, Germany, France and Canada among the top 10 most productive countries in the field of infectious diseases were also active (among the top 10 in biomedical research) in the production of scientific literature on Lassa fever. The United States and United Kingdom led consistently in all areas of infectious disease examined (Lassa fever, tuberculosis, AIDS) and in medicine (Table 7).
Table 7 Relationship between production of scientific literature on Lassa fever and total production in some fields of infectious diseases in world 10 most productive countries in biomedical and health sciences
Productivity of institutions
Table 8 shows the top 20 most productive institutions in Lassa fever research. The Centers for Disease Control and Prevention (USA) was the most productive institution (10.26%; n = 13), followed by the Bernhard Nocht Institute for Tropical Medicine (Bernhard-Nocht-Institut für Tropenmedizin), Hamburg, Germany (5.54%; n = 16), the U.S. Army Medical Research Institute of Infectious Diseases (4.81%; n = 53), the Scripps Research Institute, USA (4.72%; n = 52) and others. Sixty-five percent (65%) of the top 20 most productive institutions were in the United States, whereas the rest were from Germany, the United Kingdom, Sierra Leone, Nigeria and France. Seven of the institutions (35%) are government agencies, while 4, 3, 3, 2 and 1 are public/government universities, nonprofit/non-governmental organizations, private universities, private laboratories and a government hospital, respectively.
Table 8 Top 20 most productive institutions in Lassa fever research
Productivity of authors
Table 9 shows the top 6 most productive authors in Lassa fever research. Five of the top 6 are from the United States and were affiliated with the University of Texas Health Science Center, Zalgen Labs, the University of Maryland and the Center for Predictive Medicine for Biodefense and Emerging Infectious Diseases. The other is from Germany and is affiliated with the Bernhard Nocht Institute for Tropical Medicine.
Table 9 Top 6 authors with the most publications related to Lassa fever
Citation analysis of articles
Table 10 shows the top 10 most cited articles. Manual analysis of these articles showed that the top cited article mentioned Lassa fever only in passing, while the other publications dealt with the public health concern, description, diagnosis and management of Lassa fever.
Table 10 Top 10 most cited articles
The document type most used by authors in the studied repertoire was the original article, accounting for approximately 67.67% of all publications. This indicates that the subject matter is largely clinical or experimental research.
This study observed linear growth of the scientific literature on Lassa fever, with an average annual increase of 15.73%, showing non-fulfilment of Price's law of growth. This pattern of growth is at variance with previous reports in other areas of medical research [13, 24, 25] and indicates poor growth and low interest. The poor research output could be due to the limitation imposed by the biosafety level 4 requirement for Lassa fever research [26]. Moreover, the escalating funding for infectious diseases such as HIV/AIDS may have created a shortage of funding for other diseases of regional burden such as Lassa fever [27]. Funding has been reported to have a positive influence on research output and citations for a particular disease [28].
This study also showed a high transience rate of 78.89%, indicating that the authors were mainly occasional publishers. This could be interpreted as low productivity or as an indication of researchers from other related specialties publishing sporadically in this field [29]. Only 1.20% of the authors were large producers, showing that a large part of the scientific literature emanated from a small number of researchers.
The proportion of papers signed by more than one author was 69.57%, showing a high level of cooperation among researchers. The mean co-authorship index was 2.88. Collaboration between authors indicates teamwork, which is essential considering the multifaceted nature of contemporary research as well as its cost implications.
Bradford analysis of the studied repertoire placed 19 journals in the core zone, meaning that only 19 journals produced 33.79% of the published literature. This shows a high concentration of publications in a small number of journals. Of these, the Journal of Virology amassed the highest number of publications (51), accounting for 4.63% of all publications. The top 10 journals were mainly those dedicated to virology, infectious diseases or tropical diseases; only the British Medical Journal and The Lancet were multidisciplinary journals.
The United States of America and the United Kingdom topped the ranking of research publications on Lassa fever. Together they dominated research output on Lassa fever, accounting for 64.57% of the total. This observation is consistent with previous findings in other biomedical research [13, 24, 30]. The United States alone accounted for 40.87% of all research output in this area. This may not be unrelated to the fact that these two countries are home to the pharmaceutical companies that manufacture ribavirin, an agent approved for the management of Lassa fever: Copegus, by Genentech Laboratories (USA, UK), a member of the Roche group; Rebetol, by Merck Sharp & Dohme (USA, UK); and Ribasphere, by Kadmon Corporation (USA). The United States also houses Zalgen Labs, which deals with diagnostic logistics for Lassa fever and produces the ReLASV rapid test kit. Nigeria and Sierra Leone were the only African countries on the list; both countries have been reported to be endemic for Lassa fever [4, 9]. There is likely a relationship between this endemic disease, the public health concern it raises and the resulting scientific production [31]. Moreover, the fact that Nigeria was the first country where the disease was identified [2] may be a contributing factor to Nigeria's commitment. Among the top 10 countries most productive in biomedical sciences, the interest of China in AIDS and tuberculosis, and its lack of interest in Lassa fever, was remarkable; a similar trend was found for Australia and Spain. Institutions in the United States were the leading organizations in Lassa fever research, which further corroborates the finding that the USA tops global publication on Lassa fever. More than half (60%) of the institutions were located in the United States; the remaining slots went to other countries such as Germany, the United Kingdom, Sierra Leone, Nigeria and France. This suggests that creating first-class research institutes is fundamental to improving the academic level of a country. Sierra Leone and Nigeria were the only African countries housing institutions ranked among the top 20 (10/20). Kenema Government Hospital, Sierra Leone, is a centre of international efforts to combat Lassa fever, with support from the World Health Organization and UNAMSIL (United Nations Mission in Sierra Leone). Sierra Leone is also one of the three members (Guinea, Liberia, Sierra Leone) of the Mano River Union Lassa Fever Network established in partnership with the World Health Organization, United States Foreign Disaster Assistance and the United Nations [32]. The Institute of Human Virology Nigeria is a leading local non-governmental organization addressing the HIV/AIDS crisis in Nigeria that has expanded its services to other infectious diseases [33]. The University of Ibadan is a government-owned Nigerian university whose mission is to expand the frontiers of knowledge through the provision of excellent conditions for learning and research.
McCormick JB, Gunther S, Garry RF, Salvato MS, Lukashevich IS and Grant DS were the top 6 authors who published the most studies in Lassa fever research. McCormick JB focused on treatment, epidemiology, general characterization of the disease and diagnosis, while Gunther S focused on molecular characterization of the disease and diagnosis. Garry RF focused on treatment and diagnosis of the disease, while Salvato MS dedicated her efforts to molecular characterization of the disease, diagnosis and vaccine prototype production. Grant DS made an impact in the areas of diagnosis, the origin and evolution of the disease, and its epidemiology.
The total number of citations of the articles within the studied repertoire was 23,125, giving an average citation count of 21 per article. The article "Legionnaires' disease: description of an epidemic of pneumonia", published in the New England Journal of Medicine, was the most cited article. On critical analysis, it is somewhat ironic that an article devoted to the epidemiology of Legionnaires' disease attracted the highest number of citations in an analysis of Lassa fever research. These citations stem from a two-line sentence in the article's summary referring to the discovery of Lassa fever and Ebola made possible via epidemiological investigations. This scenario illustrates one of the limitations of citation counts as a bibliometric impact parameter: the wide circulation of the journal may have leveraged the article at the expense of content related to the topic in question. The second most cited article was a clinical trial of the effectiveness of ribavirin in preventing mortality in Lassa fever, also published in the New England Journal of Medicine. This further underlines how active the journal is and is consistent with its high impact factor (55.5). The third and fourth articles dealt with cellular receptors of Lassa virus and with social and environmental risk factors in the emergence of infectious diseases. It is worth noting that McCormick is an author of the second and tenth articles as well as the most productive author. Only one of the journals (New England Journal of Medicine) is an open access journal (with a 6-month embargo). The others (Science, Nature Medicine, Critical Reviews in Microbiology, Journal of Infectious Diseases, Proceedings of the National Academy of Sciences of the United States of America and Current Topics in Microbiology and Immunology) are toll-access journals, while Reviews of Infectious Diseases is hybrid. These are all well-established journals. This suggests that open access and toll access are uniformly favoured in citations for well-established journals, but that impact tends to peak when such an established journal is open access, as in the case of the New England Journal of Medicine.
However, this study has some limitations that are inherent in bibliometric analysis. It includes papers from the SCOPUS database only, and the criteria set by the database itself determine the material available for study [34]. We might have missed papers on Lassa fever if the authors did not include our inclusion descriptor in the title, abstract or keywords. Moreover, local journals not indexed in SCOPUS during the study period were not included in our study.
Despite the limitations named above, this study has been able to illuminate the characteristics of Lassa fever research output from 1970 to 2017. There was slow growth in research activity related to Lassa fever over this period, and the growth of the Lassa fever-related literature follows a linear rather than an exponential path, unlike other biomedical fields. The bulk of publications in the field of Lassa fever research are published by high-income countries such as the United States of America and the United Kingdom, and the majority of the most productive institutions reside in the USA. The bulk of the articles were produced by very few of the participating authors. Thus, this study provides a helpful reference for medical virologists, epidemiologists, policy decision makers, academics and Lassa fever researchers.
JCR: Journal Citation Report
LASV: Lassa virus
PI: Participation index
PL: Productivity level
SCI: Science Citation Index
SNA: Social network analysis
SNHL: Sensorineural hearing loss
Asogun A, Adomeh D, Ehimuan J, Odia I, Hass M, Gabriel M, et al. Molecular diagnostics for Lassa fever at Irrua Specialist Teaching Hospital, Nigeria: Lessons learnt from two years of laboratory operation. Plos Negl Trop Dis. 2012;6(9):el839. https://doi.org/10.1371/journal.pntd.0001839.
Frame JD, Baldwin JM Jr, Gocke DJ, Troup JM. Lassa fever, a new virus disease of man from West Africa: clinical description and pathological findings. Am J Trop Med Hygiene. 1970;19:670–6.
Mateer EJ, Huang C, Shehu NY, Paessler S. Lassa fever-induced sensorineural hearing loss: a neglected public health and social burden. PLoS Negl Trop Dis. 2018;12(2):e0006187. https://doi.org/10.1371/journal.pntd.0006187.
Okoroiwu HU, Akpotuzor JO. Lassa -a latent threat to West Africa: how ready are we? J Global Infect Dis. 2018;10(3):169–70. https://doi.org/10.4103/jgid.jgid_42_18.
Olschlager S, Lelke M, Emmerich P, Panning M, Drosten C, Hass M, et al. Improved detection of Lassa virus by reverse transcription-PCR targeting the 5'region of S RNA. J Clin Microbiol. 2010;48(6):2009–13.
Walker DH, Wulff H, Lange JV, Murphy FA. Comparative pathology of Lassa virus infection in monkeys, Guinea-pigs, and mastomys natalensis. Bull World Health Organ. 1975;52:533–4.
Yun NE, Walker DH. Pathogenesis of Lassa fever. Viruses. 2012;4(10):2031–48.
Cummins D, McCormick JB, Bennett D, Samba JA, Farrar B, Machin SJ, Fisher-Hoch SP. Acute sensorineural deafness in Lassa fever. JAMA. 1990;264(16):2093–6.
Fitchet-Calvet E, Rogers DJ. Risk map of Lassa fever in West Africa. PLoS Negl Trop Dis. 2009;3:e388.
World Health Organization. Emergency preparedness response. Lassa fever. 2017. Available from: https://www.who.int/csr/don/archive/disease/lassa_fever/en/. Accessed 13 Apr 2018.
Buchmeier MJ, de la Torre JC, Peters CJ. Arenaviridae: the viruses and their replication. In: Knipe DM, Howley PM, (eds), Fields virology, 5th edition, volume 2. Philadelphia: Lippincott Williams & Wilkins; 2006. p. 1791–1851.
López-Muñoz F, Shen WW, Pae C, Moreno R, Rubio G, Molina JD, et al. Trends on literature on atypical antipsychotics in South Korea: a bibliometric study. Psychiatry Invest. 2013;10:8–16.
López-Muñoz F, Vieta E, Rubio G, Garcia-Garcia P, Alamo C. Bipolar disorder as an emerging pathology in the scientific literature: a bibliometric approach. J Affect Disord. 2006;92:161–70.
López-Pinero JM, Terrada ML. Los indicadores bibliometricos y la eveluacion de la actividad medico-cientifica III. Los indicadores de produccion, circulacion y dispersion, consumo de information y repercusion. Med Clin (Barc). 1992;98:142–8.
Hagel C, Weidemann F, Gauch S, Edwards S, Tinnemann P. Analysing published global Ebola virus disease research using social network analysis. PLoS Negl Trop Dis. 2017;11(10):e0005747. https://doi.org/10.1371/journal.pntd.0005747.
Abassi A, Hossain L, Leydesdorf L. Betweenness centrality as a driver of preferential attachment in the evolution of research collaboration networks. J Informetrics. 2012;6:403–12. https://doi.org/10.1016/j.joi.2012.01.002.
Knoke D, Burt RS. Prominence. Applied network analysis; 1983. p. 195–222.
Meo SA, Al Masri AA, Usmani AM, Memon AN, Zaidi SZ. Impact of GDP, spending on R&D, Number of Universities and scientific journals on research publication among Asian countries. PLoS One. 2013;8:e66449. https://doi.org/10.1371/journal.pone.0066449 PMID:23840471. Pries T, editor.
Falagas ME, Pitsouni EI, Malietzis GA, Pappas G. Comparison of PubMed, Scopus, Web of Science and Google Scholar: strengths and weaknesses. FASEB J. 2008;22(2):338–42.
Kulkarni AV, Aziz B, Shams I, Busse JW. Comparisons of citations in web of science, Scopus, and Google scholar for article published in general medical journals. JAMA. 2009;30(10):1092–6.
Price DJS. Little science, big science. New York: Columbia University Press; 1963.
Bradford SC. Documentation. London: Crosby Lockwood; 1948.
Lotka AJ. The frequency distribution of scientific productivity. J Wash Acad Sci. 1926;12:317–23.
López-Muñoz F, Rubio G, Molina JD, Shen WW, Perez-Nieto, Moreno R. Mapping the scientific research in atypical antipsychotic drugs in Spain: a bibliometric assessment. Actas Esp Psiquiatr. 2013;41(6):349–60.
Garcia-Garcia P, López-Muñoz F, Callejo J, Martin-Agueda B, Alamo C. Evolution of Spanish Scientific production in international obstetrics and gynecology journals during the period 1986–2002. Eur J Obstetr Gynecol. 2005;123:150–6.
Centers for Disease Control and Prevention. Management of patients with suspected viral hemorrhagic fever. Atlanta, GA: Centers for Disease Control and Prevention. Available at: https://www.cdc.gov/MMWR/preview/mmwrhtml/00037085.htm. Accessed 22 Apr 2018.
Fleischer T, Kevany S, Benatar SR. Will escalating spending on HIV treatment displace funding for treatment of other disease? Afr Med J. 2010;100(1):32–4.
Head MG, Fitchett JR, Derrick G, Wurie FB, Meldrum J, Kumari N, et al. Comparing research investment to United Kingdom institutions and published outputs for tuberculosis, HIV and Malaria a systematic analysis across 1997–2013. Health Res Policy Syst. 2015;13(1):63.
Povedano-Montero FJ, López-Muñoz F, Hidalgo Santa Cruz F. Bibliometric analysis of the scientific production in the area of optometry. Arch Soc Esp Oftalmol. 2016;91(4):160–9.
Sweileh WM, Al-Jabi SW, Sawallia AF, Abu-Taha AS, Zyoud SH. Bibliometric analysis of publications on campylobacter: (2000–2015). J Health Popul Nutr. 2016;35:36.
Culquichicon C, Hernandez-Pacherres A, Laban-Seminario LM, Cardona-Ospina JA, Rodriguez-Morales AJ. Where are we 60 years of paragonimiasis research? A bibliometric assessment. Le Infezioni in Medicina. 2017;2:142–9.
World Health Organization (WHO). Lassa fever. 2017. Available at: http://www.who.int/en/news-room/fact-sheets/detail/Lassa-fever. Accessed 24 Apr 2018.
Daily Trust. Virology institute establishes research centre in Abuja. This Day Newspaper Nigeria, July 24, 2018. Available at: https://www.dailytrust.com.ng/virology-institute-establishes-research-centre-in-abuja-262358.html. Accessed 28 Nov 2018.
Gómez I, Bordons M. Limitaciones en el uso de los indicadores bibliometricos para la evaluacion cientifica. Politica Cientifica. 1996;46:21–6.
The authors declare that they did not receive funding for this research from any source.
Datasets generated and analysed in this study are within the article. The primary source of data, SCOPUS is publicly available.
Haematology Unit, Department of Medical Laboratory Science, University of Calabar, Calabar, Nigeria
Henshaw Uchechi Okoroiwu
Faculty of Health Sciences, University Camilo José Cela, Madrid, Spain
Francisco López-Muñoz & F. Javier Povedano-Montero
Neuropsychopharmacology Unit, Hospital 12 de Octubre Research Institute (i+12), Madrid, Spain
Francisco López-Muñoz
Portucalense Institute of Neuropsychology and Cognitive and Behavioural Neurosciences (INPP), Portucalense University, Porto, Portugal
Thematic Network for Cooperative Health Research (RETICS), Addictive Disorders Network, Health Institute Carlos III, MICINN and FEDER, Madrid, Spain
Faculty of Biomedical Sciences and Health, European University of Madrid, Madrid, Spain
F. Javier Povedano-Montero
HUO conceived the study, analysed data, performed literature search and prepared the manuscript; FLM performed database analysis and data curation, analysed data and edited the initial manuscript; FJP performed database analysis and data curation, analysed data and edited the initial manuscript; All authors read and approved the final manuscript.
Correspondence to Henshaw Uchechi Okoroiwu.
This study is based on analysis of secondary data and thus did not require ethical clearance.
Okoroiwu, H.U., López-Muñoz, F. & Povedano-Montero, F.J. Bibliometric analysis of global Lassa fever research (1970–2017): a 47-year study. BMC Infect Dis 18, 639 (2018). https://doi.org/10.1186/s12879-018-3526-6
Lassa fever
Lassa research
Bibliometric analysis
Taylor Expansion Policy Optimization
Introduced by Tang et al. in Taylor Expansion Policy Optimization
TayPO, or Taylor Expansion Policy Optimization, refers to a set of algorithms that apply $k$-th order Taylor expansions for policy optimization. This generalizes prior work, including TRPO as a special case, and can be thought of as unifying ideas from trust-region policy optimization and off-policy corrections. Taylor expansions share high-level similarities with both trust-region policy search and off-policy corrections. To get high-level intuitions of such similarities, consider a simple 1D example of Taylor expansions. Given a sufficiently smooth real-valued function on the real line $f : \mathbb{R} \rightarrow \mathbb{R}$, the $k$-th order Taylor expansion of $f\left(x\right)$ at $x_{0}$ is
$$f_{k}\left(x\right) = f\left(x_{0}\right)+\sum^{k}_{i=1}\left[f^{(i)}\left(x_{0}\right)/i!\right]\left(x−x_{0}\right)^{i}$$
where $f^{(i)}\left(x_{0}\right)$ are the $i$-th order derivatives at $x_{0}$. First, a common feature shared by Taylor expansions and trust-region policy search is the inherent notion of a trust region constraint. Indeed, in order for convergence to take place, a trust-region constraint is required: $|x − x_{0}| < R\left(f, x_{0}\right)$. Second, when using the truncation as an approximation to the original function, $f_{K}\left(x\right) \approx f\left(x\right)$, Taylor expansions satisfy the requirement of off-policy evaluations: evaluate the target policy with behavior data. Indeed, to evaluate the truncation $f_{K}\left(x\right)$ at any $x$ (target policy), we only require the behavior policy "data" at $x_{0}$ (i.e., derivatives $f^{(i)}\left(x_{0}\right)$).
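A short numerical sketch of the 1D intuition above: a $k$-th order truncation built from derivatives at $x_{0}$ tracks $f(x)$ well only while $x$ stays near $x_{0}$, which is the trust-region flavour of the argument. The choice $f = \exp$ is purely illustrative and is not the objective used in the TayPO paper.

```python
# k-th order Taylor truncation f_k(x) from derivatives at x0 (illustrative f = exp).
import math

def taylor_truncation(derivs_at_x0, x0, x):
    """Evaluate f_k(x) = sum_i f^(i)(x0)/i! * (x - x0)^i from a list of derivatives."""
    return sum(d / math.factorial(i) * (x - x0) ** i for i, d in enumerate(derivs_at_x0))

x0, k = 0.0, 3
derivs = [math.exp(x0)] * (k + 1)          # every derivative of exp at x0 equals exp(x0)

for x in (0.1, 0.5, 2.0):                  # small vs large deviations from x0
    approx, exact = taylor_truncation(derivs, x0, x), math.exp(x)
    print(f"x={x}: f_k={approx:.4f}, f={exact:.4f}, error={abs(approx - exact):.4f}")
```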
Source: Taylor Expansion Policy Optimization
Are high nurse workload/staffing ratios associated with decreased survival in critically ill patients? A cohort study
Anna Lee, Yip Sing Leo Cheung, Gavin Matthew Joynt, Czarina Chi Hung Leung, Wai-Tat Wong and Charles David Gomersall
Received: 4 March 2017
Published: 2 May 2017
Despite the central role of nurses in intensive care, a relationship between intensive care nurse workload/staffing ratios and survival has not been clearly established. We determined whether there is a threshold workload/staffing ratio above which the probability of hospital survival is reduced and then modeled the relationship between exposure to inadequate staffing at any stage of a patient's ICU stay and risk-adjusted hospital survival.
Retrospective analysis of prospectively collected data from a cohort of adult patients admitted to two multi-disciplinary Intensive Care Units was performed. The ratio of nursing workload [measured using the Therapeutic Intervention Scoring System (TISS-76)] for all patients in the ICU on each day to the average number of bedside nurses per shift on that day (workload/nurse ratio), severity of illness (using Acute Physiology and Chronic Health Evaluation III) and hospital survival were analysed using net-benefit regression methodology and logistic regression.
A total of 894 separate admissions, representing 845 patients, were analysed. Our analysis shows that there was a 95% probability that survival to hospital discharge was more likely to occur when the maximum workload-to-nurse ratio was <40 and a more than 95% chance that death was more likely to occur when the ratio was >52. Patients exposed to a high workload/nurse ratio (≥52) for ≥1 day during their ICU stay had lower risk-adjusted odds of survival to hospital discharge compared to patients never exposed to a high ratio (odds ratio 0.35, 95% CI 0.16–0.79).
Exposing critically ill patients to high workload/staffing ratios is associated with a substantial reduction in the odds of survival.
Personnel staffing
Staffing costs are the major contributor to the high cost of intensive care, with the largest component being nursing staff. Although recent studies have demonstrated that decreased nurse/patient or nurse/bed ratios are associated with worse outcomes [1–6], this finding is not universal [7–10], and the relationship between nurse workload-to-staffing ratios and patient outcome is unclear [11, 12]. This is reflected in the variability of nursing staff levels in different countries. In the UK, it is recommended that nurse/patient ratios are at least one nurse to two patients [13]. In the USA, ratios range from 1.29 to 3.8 [10].
Most studies have examined the association between average nursing staff to patient ratios and outcome. However, the average nursing staff to patient ratio may be an insensitive measure. Firstly, it does not take into account case mix variability. In a study of 396 patients in one ICU, the daily Therapeutic Intervention Scoring System-28 (TISS-28) points varied from 13 to 58 [12]. Secondly, the use of average values implies that days of low nursing staffing can be compensated for by days of high staffing. This seems unlikely particularly as surveillance is a key role of intensive care nurses [10]: having two nurses detect an abnormality on one day will not compensate for an undetected abnormality on another.
A study examining the relationship between nursing shifts with and without a death found that a death was more likely when the nurse-to-patient ratio was lower, the patient turnover was higher, and the number of life-sustaining procedures was higher [14]. The number of life-sustaining procedures was used as a measure of workload, but this is not a validated measure of workload. Thus, the important question of whether the nurse workload/staffing ratio is associated with patient outcome remains unanswered.
Due to the complexities outlined above, the relationship between workload/staffing ratio and survival may be neither linear nor logistic. In particular, it is likely that there is a ceiling effect such that decreasing workload/staffing ratios improves survival, up to a certain point, and thereafter further decreases in workload/staffing have no effect. However, increasing workload/staffing ratios is less likely to be subject to a ceiling effect. Thus, a linear or logistic analysis is unlikely to be helpful. A more useful approach may be to dichotomize the workload/staffing ratios into understaffed and adequately staffed.
Unfortunately, there is no consensus on what constitutes understaffing. Although it has been suggested that an "accomplished" critical care nurse should be capable of managing 40–50 Therapeutic Intervention Scoring System-76 (TISS-76) points [15], this workload threshold has not been validated. Similarly, although 46 Nine Equivalents of Nursing Manpower Use Score (NEMS) points were assumed to be equivalent to the nursing activities of one nurse per day, the mean NEMS was only 26.5 in a large sample of ICUs [16]. However, if the threshold at which increasing staffing no longer produces improvements in survival can be identified, workload/staffing ratios above this can be defined as inadequate staffing and ratios below this, adequate staffing.
We hypothesized that exposure to inadequate workload/staffing ratio during a patient's ICU stay is independently associated with increased mortality. We therefore carried out a cohort study to first identify a threshold workload/staffing ratio above which staffing was considered inadequate and then determined whether exposure to inadequate staffing at any stage of a patient's ICU stay was associated with increased mortality after risk adjustment.
Design and participants
Approval to carry out the study was obtained from the Clinical Research Ethics Committee of The Chinese University of Hong Kong, which waived the need for patient consent. The study was a retrospective analysis of prospectively collected audit data from a cohort of consecutive patients admitted to two ICUs in Hong Kong over a period of 5 months. Patients were followed up until hospital discharge. Patients who had been admitted for <4 h, <16 years of age, with the diagnosis of burns or transferred to an ICU in another hospital were excluded.
Both ICUs were the sole adult ICU in their respective hospitals, providing care for both medical and surgical patients. ICU1 was a 12-bed ICU in a 600-bed district hospital, while ICU2 was a 22-bed ICU in a 1400-bed tertiary referral university teaching hospital. Patients were largely cared for in open ward areas. Single rooms were only used for patients requiring isolation or when there were no other beds available. There were no student nurses, and nursing assistants' role was restricted to turning, washing, applying pressure for haemostasis, collection of disposable equipment for procedures, cleaning of re-usable equipment and maintenance of stocks of disposable equipment. Typically, 2–3 assistants were rostered for every 10 patients. Respiratory therapists were not employed, but patients were assessed and treated by trained physiotherapists 1–2 times per day.
Nursing shifts were from 0700 to 1400, 1400 to 2100 and 2100 to 0700. Typically, night shifts were staffed with 75% of the number of staff on day shifts. There was no difference in staffing between weekdays and weekends. However, the two ICUs did not make day-to-day changes to their maximum patient capacity in response to the number of staff actually available (e.g. reduced numbers due to sickness), resulting in shift-to-shift variation in nurse/patient ratios. No non-regular staff were employed to make up any shortfall in nursing numbers. This gave us the opportunity to observe a natural experiment on the relationship between workload/staffing ratios and outcome.
In ICU1, the physician staffing during office hours consisted of a single on-site Intensivist supported by internal medicine residents. Night-time and weekend staffing consisted of internal medicine residents supervised by an Intensivist or Internal Medicine specialist, who was largely off-site but conducted daily ward rounds and was available to return to the ICU if required. In ICU2, 2–3 Intensivists provided on-site care during office hours. The exact number of Intensivists rostered was based purely on availability. A sole Intensivist provided on-site care from 0800 to 1500 at weekends. At all other times, a sole Intensivist was available to provide advice and, if necessary, to return to the ICU to provide direct patient care. At all times, the Intensivists were supported by intensive care and/or anaesthesia residents. If a doctor was unable to work, a replacement was found.
The primary outcome was survival to hospital discharge. This was collected from the electronic Hospital Authority clinical information system by one of the investigators (YSLC).
The exposure variable was the workload/staffing ratio threshold. Data required to calculate TISS-76 [15], Acute Physiology and Chronic Health Evaluation III (APACHE III) and the average number of bedside nurses working on each day were collected. TISS-76 data were collected by a dedicated audit nurse. For each day, the total TISS-76 and average total number of bedside nurses in the ICU were collected. A unit nursing workload/staffing ratio was calculated from the total TISS-76 divided by the total number of bedside nurses. The average number of nurses working on each day was calculated to allow for the fact that different shifts might have different numbers of nurses. A bedside nurse was classified as a nurse whose primary responsibility on that shift was direct patient care. Those nurses whose primary role was administrative or who were rostered to educational activities were excluded from the calculation. For each patient, the unit workload/staffing ratio that patient was exposed to was recorded each day during the patient's ICU stay. Data on each individual nurse working in the ICU were also collected: years of ICU nurse experience and post-registration qualification in intensive care nursing.
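The daily unit ratio can be computed exactly as described; the sketch below uses invented numbers (not study data) to show how a busy day with a reduced night shift can push the ratio above the thresholds identified later.

```python
# Daily unit workload/staffing ratio = total TISS-76 points / average bedside nurses.
def daily_workload_ratio(patient_tiss_scores, nurses_per_shift):
    total_tiss = sum(patient_tiss_scores)
    avg_nurses = sum(nurses_per_shift) / len(nurses_per_shift)
    return total_tiss / avg_nurses

# e.g. 20 patients averaging 45 TISS points each, 18/18/13 nurses on the three shifts
ratio = daily_workload_ratio([45] * 20, [18, 18, 13])
print(f"unit workload/nurse ratio = {ratio:.1f}")   # about 55 here
```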
Data were checked for normality using Shapiro–Wilk's test. Median and interquartile ranges are reported. The Mann–Whitney U test was performed to compare differences in medians between the two ICUs. The Chi-square test was used to test for differences in proportions. For the three patients with missing workload/staffing ratios, we assumed that these patients always had workload/staffing ratio <40. Pearson's r was used to test for co-linearity between TISS and workload-to-staffing ratio.
The initial analysis was to determine the threshold workload/staffing ratio above which the probability of hospital survival is reduced. This, in effect, is the same as examining the incremental cost-effectiveness ratio of drug treatments (standard vs new) [17, 18]. We estimated the trade-off between the extra 'cost' of workload (∆W) and the extra 'effect' of staffing (∆S) on patient outcome. The comparison of workload and staffing in ratio form between survivors and nonsurvivors was transformed to linear values using the net-benefit analysis framework [17, 19]. Thus, the incremental net benefit (INB) was estimated by the following formula:
$${\text{INB}} = \left( {\Delta S \times \lambda } \right) - \Delta W$$
where λ is the maximum acceptable workload/staffing ratio threshold.
We used a series of mixed-effects regressions, with random intercepts by ICU location and patient, to estimate the INB. Using each patient's net benefit (NB_i) [defined as (S × λ) − W] as the dependent variable, we ran the following mixed-effects regression:
$${\text{NB}}_{i} = \beta_{0} + \beta_{\text{outcome}} {\text{Outcome}}_{i} + {\text{covariates}} + \varepsilon_{i}$$
where Outcome_i is the ith patient's hospital discharge status (Outcome_i = 1 for dead and 0 for alive) and ε_i is a stochastic error term. This regression was fitted repeatedly with different values of the maximum acceptable workload/staffing ratio threshold (λ). The following covariates, included in the mixed-effects regressions, were selected for confounding adjustment on the basis of a causal directed acyclic graph (DAG) approach [20]: age, APACHE III score, readmission, urgency of admission (elective or emergency), type of admission (medical or surgical), acute renal failure and ICU location (Additional file 1: Figure S1). We estimated the 90% confidence intervals (providing a 5%, one-sided test of hypotheses) around the INB from the regression results to determine the threshold at which decreasing the workload/staffing ratio was associated with increased survival to hospital discharge and the threshold at which increasing the workload/staffing ratio was associated with decreased survival. The analysis was at the patient-day level.
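The mechanics of the threshold search can be sketched as follows: for each candidate λ the per-record net benefit NB = S × λ − W is recomputed and regressed on the outcome indicator, and the behaviour of the outcome coefficient across the λ grid locates the thresholds. The sketch below uses simulated data and a plain least-squares fit; the actual analysis used mixed-effects models with the confounders listed above.

```python
# Toy net-benefit regression over a grid of candidate thresholds (lambda).
import numpy as np

rng = np.random.default_rng(1)
n = 500
staffing = rng.uniform(10, 25, n)                    # average bedside nurses (S)
workload = rng.uniform(400, 1100, n)                 # total unit TISS points (W)
ratio = workload / staffing
died = (rng.uniform(0, 1, n) < np.clip((ratio - 30) / 40, 0.05, 0.9)).astype(float)

for lam in (30, 40, 52, 60):
    nb = staffing * lam - workload                   # net benefit at this threshold
    X = np.column_stack([np.ones(n), died])          # intercept + outcome indicator
    beta, *_ = np.linalg.lstsq(X, nb, rcond=None)
    print(f"lambda={lam}: outcome coefficient (INB estimate) = {beta[1]:.1f}")
```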
Once the thresholds were estimated, we internally validated our thresholds by performing multivariate logistic regression on a bootstrapped sample (1000 repetitions). The second part of the analysis was to model the relationship between a day or more of exposure to workload/staffing ratios above or below the identified threshold and changes in survival, adjusting for the same confounders as in the net-benefit regression analysis. An interaction between maximum workload-to-nurse ratio threshold and APACHE III score was included in the multivariate logistic regression models. We also included an intragroup correlation in the model to adjust for multiple admissions by the same patient.
Model calibration was assessed using the Hosmer–Lemeshow (HL) goodness-of-fit statistics with 8 degrees of freedom and plotting a calibration belt [21]. The calibration belt is a fitted polynomial logistic function curve between the logit transformation of the predicted probability and outcome with surrounding 80% CI (light grey area) and 95% CI (dark grey area) [21]. The calibration belt is more useful than the HL test as it highlights ranges of significant miscalibration [21]. To assess the discrimination performance, an area under the receiver operating characteristic (AUROC) curve was constructed and a c-statistic was estimated. The Nagelkerke's R 2 was used to estimate the overall performance of the logistic regression model. Statistical analysis was performed using STATA version 14 (StataCorp, College Station, TX) and the calibration belt was plotted using R version 3.2.5 (R Foundation for Statistical Computing, Vienna, Austria).
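For the discrimination check, the c-statistic is simply the area under the ROC curve computed from the model's predicted survival probabilities and the observed outcomes. A minimal sketch with simulated predictions is shown below; scikit-learn's roc_auc_score stands in for the full calibration-belt and Hosmer–Lemeshow workflow, which the authors ran in STATA and R.

```python
# Minimal AUROC (c-statistic) check on simulated predictions and outcomes.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
p_survive = rng.uniform(0, 1, 200)                           # toy predicted probabilities
survived = (rng.uniform(0, 1, 200) < p_survive).astype(int)  # toy observed outcomes
print(f"AUROC (c-statistic) = {roc_auc_score(survived, p_survive):.2f}")
```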
There were 925 admissions during the study period. Thirty-one were excluded: 11 admitted for less than 4 h; 9 transferred to an ICU in another hospital; 4 aged less than 16 years, and 7 had burns as the primary diagnosis. Thus, there were 894 separate admissions in the cohort (Table 1), representing 845 patients. Among the 894 episodes, there were 98 deaths in ICU and 166 deaths before hospital discharge. There were 242 and 652 episodes in ICU1 and ICU2, respectively. Characteristics of patients by ICU are shown in Table 1 and at hospital discharge in Table 2. Nurses in ICU1 had a median of 2 (IQR 1–5) years intensive care nursing experience. Those in ICU2 had a median of 5 (IQR 4–7) years of experience (P < 0.001). In ICU1, 28% and in ICU2 67% of nurses had a post-registration qualification in intensive care nursing (P < 0.001). This qualification involves an additional year of part-time training, placement in a training ICU, coursework and formal examination.
Table 1. Characteristics of patients by Intensive Care Unit (ICU). Columns: ICU1 (242 episodes) and ICU2 (652 episodes). Rows: median (IQR) age (years); number of males/females; median (IQR) APACHE III; number of elective ICU admissions (%); number of patients with acute renal failure (%); number of surgical patients (%); number of cardiac surgical patients (%); median (IQR) total TISS-76 score per episode; median (IQR) bedside nurses per patient per day during ICU stay; median (IQR) total TISS/nurse ratio; number of patients with more than one ICU admission (%); median (IQR) length of stay in ICU; number of in-ICU deaths (%); number of in-hospital deaths (%); median (IQR) APACHE III-predicted risk of death (%). IQR, interquartile range; APACHE III, Acute Physiology and Chronic Health Evaluation III; TISS-76, 76-item Therapeutic Intervention Scoring System.
Table 2. Characteristics of patients at hospital discharge, comparing episodes ending in death before discharge (166) with episodes surviving to discharge (728).
The results of the mixed-effects regression analysis to identify threshold values are shown in Fig. 1. The lower bound of the 90% confidence interval crosses zero when the workload/staffing ratio is 40, indicating that there is a more than 95% probability that survival to hospital discharge is more likely when the maximum workload-to-nurse ratio is <40. The upper bound of the 90% confidence interval crosses zero when the ratio is 52, indicating that there is a more than 95% probability that death is more likely when the ratio is >52.
Fig. 1 Maximum workload/nurse ratio with 90% confidence intervals from the net-benefit regression analysis
There were 275 admissions in which the workload/staffing ratio was always <40 during the ICU stay. In the model examining the relationship between survival and workload/staffing in these patients, there was a significant interaction between workload/staffing ratio and APACHE III score (P < 0.01). The calibration belt was acceptable (Fig. 2) despite a significant HL test (P = 0.002), and the AUROC was 0.88 (95% CI 0.85–0.90). The overall performance of the model was satisfactory (R² = 0.45). Among patients with an APACHE III score of 60, those with a workload/staffing ratio always <40 during the ICU stay were twice as likely to survive to hospital discharge (odds ratio 2.28, 95% CI 1.07–4.80) as patients with the same severity of illness but exposed to higher workload/staffing ratios (i.e. a ratio of ≥40 on one or more days). When the APACHE III score was between 70 and 130, a workload/staffing ratio always less than 40 was not significantly associated with survival. Survival was less likely when the APACHE III score was above 130, even when the workload/staffing ratio was always less than 40 (odds ratio 0.24, 95% CI 0.09–1.01 at APACHE III score = 130). Crude mortality was 10.5% (ICU mortality) and 15.6% (hospital mortality) when the workload/staffing ratio was always less than 40 during the patients' ICU stay.
Fig. 2 Calibration belt for the logistic regression model examining the effect of a workload/staffing ratio threshold set at 40 on survival
There were 27 admissions with a workload/staffing ratio of ≥52 during the ICU stay (10 admissions had a workload/staffing ratio of ≥52 for 50–100% of the time). In ICU2, there was one day on which the workload/staffing ratio reached 55. On the two days when the workload/staffing ratio was ≥52, both of which occurred in ICU2 (22 beds), the mean number of nurses was 17 and the mean total unit TISS was 911. In the model examining the relationship between survival and a workload/staffing ratio threshold of 52, there was no significant interaction between workload/staffing ratio and APACHE III score (P = 0.94), and thus the final model without an interaction term was selected. Calibration was acceptable [HL test P = 0.21; the calibration belt showed no intervals of over- or under-prediction (not shown)] and the AUROC was 0.88 (95% CI 0.85–0.90). The overall performance of the model was satisfactory (R² = 0.44). After adjusting for confounders, patients with a high workload/staffing ratio (≥52) for one or more days during their ICU stay were less likely to survive to hospital discharge (OR 0.35, 95% CI 0.15–0.84) than patients with a lower workload/staffing ratio (always <52). Crude mortality was 18.5% (ICU mortality) and 29.6% (hospital mortality) when the workload/staffing ratio was ≥52 for at least one day during the patients' ICU stay.
There was no correlation between average patient TISS and the workload/nurse ratio (R² = 0.04).
Our data demonstrate that there is an association between nurse workload/staffing ratios and hospital survival, above certain thresholds and depending on severity of illness. In our setting, a TISS per nurse of 52 or more was associated with an adjusted odds ratio of 0.35 for survival. This applied to all patients regardless of severity of illness. A TISS per nurse of <40 for the patient's entire ICU stay was not associated with an increased chance of survival for extremely sick patients but was associated with increased survival among those who were only moderately severely ill (APACHE III ≤60). Our finding that a threshold of 52 TISS points was the workload/staffing ratio above which all patients (in our two ICUs) had an increased risk of death corresponds closely to the previous suggestion that a highly "accomplished" critical care nurse should be capable of managing up to 50 TISS points [15]. This relationship between workload/staffing ratios and outcome may explain our finding that both ICUs had similar crude mortalities despite the fact that the severity of illness, measured by APACHE III, was higher in ICU1. In ICU1 the median TISS/nurse ratio was significantly lower than in ICU2 (36 vs 40) (Table 1).
TISS-76 is a revised form of the original TISS score and uses 76 possible interventions to quantify nursing workload [15]. To give an example of the workload which corresponds to 52 TISS points, a paralysed mechanically ventilated patient who required emergency surgery in the past 24 h, receiving more than one vasoactive drug, amiodarone infusion, parenteral nutrition, renal replacement therapy, non-scheduled bolus and intermittent scheduled IV medication, two IV antibiotics, endotracheal suction, 6 units of blood or fresh frozen plasma transfusion in 24 h, platelet transfusion, multiple stat blood tests, hourly neurological and non-neurological observations, fluid input and output monitoring, electrocardiographic monitoring, with an arterial line, central venous catheter, nasogastric tube and urinary catheter in situ would score 52 points.
Two other studies of workload/staffing ratios have quantified workload in terms of nursing interventions rather than number of patients. Castillo-Lorente et al. [22] were unable to demonstrate a statistically significant increase in ICU mortality with increasing peak TISS-28-to-nurse ratios. However, the TISS-28-to-nurse ratio in the high workload/staffing group was >29.8 TISS-28 points per nurse [22]. TISS-28 is a simplified version of TISS-76 which was devised, in part, to reduce the time required for data collection [23]. Although TISS-28 and TISS-76 are not directly comparable, a previous study suggests that a TISS-28 of 29.8 corresponds to a TISS-76 of approximately 30 [22]. This is well below our lower threshold of 40, although for reasons discussed below the absolute values of these thresholds should be interpreted with caution. Neuraz et al. demonstrated that one or more deaths on a shift were associated with a patient-to-nurse ratio >2.5 and a greater number of life-supporting procedures per patient [14]. Although it is difficult to compare these data directly with ours, the threshold demonstrated by Neuraz et al. [14] appears high if 50 TISS points are achievable by an accomplished critical care nurse. However, this may simply reflect the nature of their analysis: for the patient-to-nurse ratio to have an impact on mortality within the same shift would likely require a major deficit in nursing care.
Our workload/staffing ratio was calculated on a unit basis rather than individual patient basis because this more closely reflects the way our nursing teams work. Our patients are largely nursed in open ward areas, and while there is one nurse with primary responsibility for each patient, the work is shared between the nurses. The choice of TISS score rather than newer scores such as the Nursing Activities Score [24] also reflects the work carried out by our nurses and the way we measured staffing. We measured staffing by counting the number of nurses involved in direct patient care, excluding those with predominantly managerial and administrative tasks. We also did not count the number of healthcare assistants (nurse assistants). Healthcare assistants are mainly responsible for carrying out tasks like washing patients, changing linen, special room cleansing procedures and patient positioning. Thus several of the additional tasks added to TISS to create the Nursing Activities Score [24] are not carried out by our bedside nurses.
Calculating workload/staffing ratios on a unit basis also minimized the risk of confounding by severity of illness. Greater severity of illness is associated with increased mortality and increased TISS for an individual patient. However, a greater severity of illness in an individual patient is unlikely to substantially alter the unit TISS/nurse ratio in a 22-bed unit. This is supported by the absence of a correlation (R² = 0.04) between patient TISS and the unit TISS/staffing ratio (our measure of workload/staffing), which indicates that the observed association between workload/staffing ratio and hospital survival is unlikely to be due to confounding by severity of illness.
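The unit-level calculation described above can be illustrated as follows: daily TISS-76 scores are summed over all patients in an ICU, divided by the number of bedside nurses on that day, and the resulting ratio is assigned back to every patient present in the unit on that day. The data frames and column names in this sketch are hypothetical.

```python
# Hypothetical sketch of the unit-level workload/staffing ratio construction.
import pandas as pd

def unit_workload_ratio(patient_days: pd.DataFrame, staffing: pd.DataFrame) -> pd.DataFrame:
    """patient_days: one row per patient per ICU day with a TISS-76 score;
    staffing: one row per ICU per day with the count of bedside nurses."""
    unit_tiss = (patient_days.groupby(["icu", "date"], as_index=False)["tiss76"]
                 .sum()
                 .rename(columns={"tiss76": "unit_tiss"}))
    daily = unit_tiss.merge(staffing, on=["icu", "date"])
    daily["workload_per_nurse"] = daily["unit_tiss"] / daily["bedside_nurses"]
    # every patient-day inherits the unit-level ratio of its ICU and day
    return patient_days.merge(daily[["icu", "date", "workload_per_nurse"]],
                              on=["icu", "date"])
```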
The magnitude of the adjusted odds ratio, and the fact that as little as one day of exposure to high workload/staffing ratios was associated with poor outcome, should raise questions about the advisability of "making do" when there is a shortage of staff. While it can be argued that it is inappropriate to deny ICU admission to a patient when bed spaces are available, it is important to recognize that admitting the patient may have a significant impact on survival for those already in the ICU if staffing levels are inadequate. Furthermore, it should raise questions about whether ICUs should be staffed on the basis of actual workload rather than the number of patients [12], whether bed capacity should conversely be determined by actual nursing workload and staffing rather than by physically empty beds, and how ICU expansion can be sustained in the face of disasters. The recent ACCP statement on surge capacity in a disaster recommended that ICU capacity be increased by up to 200% above baseline capacity [25]. Our data suggest that if an increase in physical bed capacity is not matched by an increase in staffing, there may be a very substantial decrease in survival, which will at least partially negate any beneficial effect of the additional beds.
As our study was a purely observational study, it cannot demonstrate a causal relationship between workload/staffing ratios and hospital survival. Despite our use of risk adjustment techniques, there remains a possibility of residual confounding. However, there are other studies which support our findings [1–3, 10] and there are plausible mechanisms by which increased workload/staffing ratios might decrease survival. These mechanisms include the association of lower ICU nurse staffing levels with increased morbidity such as prolonged duration of weaning [26], increased nosocomial infection [2, 27–31], inadequate nutrition [32], pressure sores [2] and critical incidents [33].
Our study has a number of other weaknesses. Firstly, the analysis was retrospective, although the data were prospectively collected. Secondly, the data were collected in only two ICUs, which may limit the generalizability of the results. Although our thresholds are considerably higher than those used by Castillo-Lorente et al. [22], we believe that the workload/staffing ratios in our ICUs are not unusual. Altafin et al. [34] reported a mean TISS-28 score that equates to a TISS-76 of 24.5, with a nurse-to-patient ratio of 1:10 and a nursing technician ratio of 1:2. Even counting the technicians as nurses, this would give a mean ratio of close to 50. Blatnik [35] reported a mean TISS-28/nurse ratio of 47, equating to a TISS-76/nurse ratio of 54. Even in high-income countries, workload/staffing ratios may be high, exceeding the equivalent of a TISS-76/staff ratio of 64 on 3% of all patient days in an ICU in the Netherlands [36]. Indirect data also support our contention: TISS-28 scores equivalent to a TISS-76 score of 35–48 [35, 37, 38] were reported from Germany, Slovenia and Colombia. Unless nurse/patient ratios (which were not reported) were greater than 1:1, this would suggest that a substantial proportion of patients were exposed to levels of staffing that are associated with worse outcome. Survey data suggest the mean nurse/patient ratio in Germany is 1:2.7 [39]. Nevertheless, we would caution against directly applying our specific thresholds of 40 and 52 in determining how many nurses other individual ICUs require. Levels of training and the roles carried out by nurses differ between ICUs. In particular, the role and number of supporting staff may affect the appropriate nurse workload/staffing ratio for a particular ICU. Our ICUs consisted predominantly of open areas with few individual rooms. Although this may be unusual in some parts of the world, it is common in Asia, where only 36.9% of ICU beds are in individual rooms and 14.5% of ICUs have no single rooms [40]. Thirdly, we did not record limitation of therapy orders. It is possible that limitation of therapy may have resulted in some confounding by increasing the risk of death while reducing measured nursing workload. This would have reduced the probability of finding an association between high workload/staffing ratios and survival, making our finding more robust. Furthermore, any such effect would have been minimized by our method of calculating workload/staffing ratios: we calculated daily unit workload/staffing ratios and then assigned that ratio to each patient present in the ICU on that day, rather than calculating daily individual patient workload/staffing ratios.
Ultimately, we believe that the optimal design to establish a causal relationship between nurse workload/staffing ratios and mortality is a cluster randomized controlled trial, or a multi-centre stepped-wedge interrupted time series, comparing bed capacity determined by workload/staffing ratios against bed capacity determined by physical bed spaces. However, considerable preparatory work will be required before such studies can be properly designed. The first step is to confirm our findings prospectively.
Our data indicate that exposure to as little as one day of high workload/staffing ratios is associated with a substantially increased risk of death in critically ill patients. This confirms a previous finding that excessive workload/staffing ratios are associated with increased mortality [14] and refutes the findings of a study involving relatively low workload/staffing ratios that suggested there is no relationship between workload/staffing ratios and outcome [22]. If confirmed, our finding has significant implications for nurse staffing in Intensive Care Units, suggesting that staffing should be based on workload, not just patient numbers, and that "making do" with fewer nurses even for a short time or temporary increases in ICU capacity without a corresponding increase in staffing may adversely affect patient outcome.
ACCP: American College of Chest Physicians
APACHE: Acute Physiology and Chronic Health Evaluation
AUROC: area under the receiver operating characteristic curve
DAG: directed acyclic graph
HL: Hosmer–Lemeshow
ICU: intensive care unit
INB: incremental net benefit
NEMS: Nine Equivalents of Nursing Manpower Use Score
OR: odds ratio
TISS: Therapeutic Intervention Scoring System
YSLC, CDG, GMJ and AL were involved in the study design. YSLC collected the data. AL and CDG analysed and interpreted the data. All authors drafted or critically revised the work for intellectual content, read and approved the final version of the manuscript, and agree to be accountable for all aspects of the work. All authors read and approved the final manuscript.
The datasets generated during and/or analysed during the current study are not publicly available as consent for publication of raw data was not obtained from study participants, but are available from the corresponding author on reasonable request.
Approval to carry out the study was obtained from the Clinical Research Ethics Committee of The Chinese University of Hong Kong, which waived the need for patient consent.
Internal funding from the Department of Anaesthesia and Intensive Care, The Chinese University of Hong Kong.
Additional file 1: Figure S1 (13613_2017_269_MOESM1_ESM.pdf). Directed acyclic graph showing the assumptions made about the relationship between the workload/staffing ratio threshold and hospital mortality, used to identify the set of variables needed for confounding adjustment.
Department of Anaesthesia and Intensive Care, The Chinese University of Hong Kong, 4th Floor, Main Clinical Block and Trauma Centre, Prince of Wales Hospital, Shatin, NT, Hong Kong
Intensive Care Unit, Prince of Wales Hospital, Shatin, Hong Kong
1. Kane RL, Shamliyan TA, Mueller C, Duval S, Wilt TJ. The association of registered nurse staffing levels and patient outcomes: systematic review and meta-analysis. Med Care. 2007;45:1195–204.
2. Stone PW, Mooney-Kane C, Larson EL, Horan T, Glance LG, Zwanziger J, et al. Nurse working conditions and patient safety outcomes. Med Care. 2007;45:571–8.
3. Cho SH, Hwang JH, Kim J. Nurse staffing and patient mortality in intensive care units. Nurs Res. 2008;57:322–30.
4. West E, Barron DN, Harrison D, Rafferty AM, Rowan K, Sanderson C. Nurse staffing, medical staffing and mortality in intensive care: an observational study. Int J Nurs Stud. 2014;51:781–94.
5. Checkley W, Martin GS, Brown SM, Chang SY, Dabbagh O, Fremont RD, et al. Structure, process, and annual ICU mortality across 69 centers: United States Critical Illness and Injury Trials Group Critical Illness Outcomes Study. Crit Care Med. 2014;42:344–56.
6. Sakr Y, Moreira CL, Rhodes A, Ferguson ND, Kleinpell R, Pickkers P, et al. The impact of hospital and ICU organizational factors on outcome in critically ill patients: results from the Extended Prevalence of Infection in Intensive Care study. Crit Care Med. 2015;43:519–26.
7. Sales A, Sharp N, Li YF, Lowy E, Greiner G, Liu CF, et al. The association between nursing factors and patient mortality in the Veterans Health Administration: the view from the nursing unit level. Med Care. 2008;46:938–45.
8. West E, Mays N, Rafferty AM, Rowan K, Sanderson C. Nursing resources and patient outcomes in intensive care: a systematic review of the literature. Int J Nurs Stud. 2009;46:993–1011.
9. Numata Y, Schulzer M, van der Wal R, Globerman J, Semeniuk P, Balka E, et al. Nurse staffing levels and hospital mortality in critical care settings: literature review and meta-analysis. J Adv Nurs. 2006;55:435–48.
10. Kelly DM, Kutney-Lee A, McHugh MD, Sloane DM, Aiken LH. Impact of critical care nursing on 30-day mortality of mechanically ventilated older adults. Crit Care Med. 2014;42:1089–95.
11. Mountain SA, Hameed SM, Ayas NT, Norena M, Chittock DR, Wong H, et al. Effect of ambient workload in the intensive care unit on mortality and time to discharge alive. Healthc Q. 2009;12 Spec No Patient:8–14.
12. Kiekkas P, Sakellaropoulos GC, Brokalaki H, Manolis E, Samios A, Skartsani C, et al. Association between nursing workload and mortality of intensive care unit patients. J Nurs Scholarsh. 2008;40:385–90.
13. Bray K, Wren I, Baldwin A, St LU, Gibson V, Goodman S, et al. Standards for nurse staffing in critical care units determined by: the British Association of Critical Care Nurses, The Critical Care Networks National Nurse Leads, Royal College of Nursing Critical Care and In-flight Forum. Nurs Crit Care. 2010;15:109–11.
14. Neuraz A, Guerin C, Payet C, Polazzi S, Aubrun F, Dailler F, et al. Patient mortality is associated with staff resources and workload in the ICU: a multicenter observational study. Crit Care Med. 2015;43:1587–94.
15. Keene AR, Cullen DJ. Therapeutic Intervention Scoring System: update 1983. Crit Care Med. 1983;11:1–3.
16. Moreno R, Reis MD. Nursing staff in intensive care in Europe: the mismatch between planning and practice. Chest. 1998;113:752–8.
17. Hoch JS, Briggs AH, Willan AR. Something old, something new, something borrowed, something blue: a framework for the marriage of health econometrics and cost–effectiveness analysis. Health Econ. 2002;11:415–30.
18. Shaffer ML, Watterberg KL. Joint distribution approaches to simultaneously quantifying benefit and risk. BMC Med Res Methodol. 2006;6:48.
19. Hoch JS, Dewa CS. A clinician's guide to correct cost–effectiveness analysis: think incremental not average. Can J Psychiatry. 2008;53:267–74.
20. Shrier I, Platt RW. Reducing bias through directed acyclic graphs. BMC Med Res Methodol. 2008;8:70.
21. Nattino G, Finazzi S, Bertolini G. A new test and graphical tool to assess the goodness of fit of logistic regression models. Stat Med. 2016;35:709–20.
22. Castillo-Lorente E, Rivera-Fernandez R, Rodriguez-Elvira M, Vazquez-Mata G. Tiss 76 and Tiss 28: correlation of two therapeutic activity indices on a Spanish multicenter ICU database. Intensive Care Med. 2000;26:57–61.
23. Miranda DR, de Rijk A, Schaufeli W. Simplified Therapeutic Intervention Scoring System: the TISS-28 items—results from a multicenter study. Crit Care Med. 1996;24:64–73.
24. Miranda DR, Nap R, de Rijk A, Schaufeli W, Iapichino G. Nursing activities score. Crit Care Med. 2003;31:374–82.
25. Hick JL, Einav S, Hanfling D, Kissoon N, Dichter JR, Devereaux AV, et al. Surge capacity principles: care of the critically ill and injured during pandemics and disasters: CHEST consensus statement. Chest. 2014;146:e1S–16S.
26. Thorens JB, Kaelin RM, Jolliet P, Chevrolet JC. Influence of the quality of nursing on the duration of weaning from mechanical ventilation in patients with chronic obstructive pulmonary disease. Crit Care Med. 1995;23:1807–15.
27. Vicca AF. Nursing staff workload as a determinant of methicillin-resistant Staphylococcus aureus spread in an adult intensive therapy unit. J Hosp Infect. 1999;43:109–13.
28. Fridkin SK, Pear SM, Williamson TH, Galgiani JN, Jarvis WR. The role of understaffing in central venous catheter-associated bloodstream infections. Infect Control Hosp Epidemiol. 1996;17:150–8.
29. Hugonnet S, Chevrolet JC, Pittet D. The effect of workload on infection risk in critically ill patients. Crit Care Med. 2007;35:76–81.
30. Hugonnet S, Uckay I, Pittet D. Staffing level: a determinant of late-onset ventilator-associated pneumonia. Crit Care. 2007;11:R80.
31. Kelly D, Kutney-Lee A, Lake ET, Aiken LH. The critical care work environment and nurse-reported health care-associated infections. Am J Crit Care. 2013;22:482–8.
32. Honda CK, Freitas FG, Stanich P, Mazza BF, Castro I, Nascente AP, et al. Nurse to bed ratio and nutrition support in critically ill patients. Am J Crit Care. 2013;22:e71–8.
33. Beckmann U, Baldwin I, Durie M, Morrison A, Shaw L. Problems associated with nursing staff shortage: an analysis of the first 3600 incident reports submitted to the Australian Incident Monitoring Study (AIMS-ICU). Anaesth Intensive Care. 1998;26:396–400.
34. Altafin JAM, Grion CMC, Tanita MT, Festti J, Cardoso LTQ, Veiga CF, et al. Nursing Activities Score and workload in the intensive care unit of a university hospital. Rev Bras Ter Intensiva. 2014;26:292–8.
35. Blatnik J, Lesnicar G. Propagation of methicillin-resistant Staphylococcus aureus due to the overloading of medical nurses in intensive care units. J Hosp Infect. 2006;63:162–6.
36. Seynaeve S, Verbrugghe W, Claes B, Vandenplas D, Reyntiens D, Jorens PG. Adverse drug events in intensive care units: a cross-sectional study of prevalence and risk factors. Am J Crit Care. 2011;20:e131–40.
37. Muehler N, Oishi J, Specht M, Rissner F, Reinhart K, Sakr Y. Serial measurement of Therapeutic Intervention Scoring System-28 (TISS-28) in a surgical intensive care unit. J Crit Care. 2010;25:620–7.
38. Rubiano S, Gil F, Celis-Rodriguez E, Oliveros H, Carrasquilla G. Critical care in Colombia: differences between teaching and nonteaching intensive care units. A prospective cohort observational study. J Crit Care. 2012;27:104.e9–17.
39. Graf J, Reinhold A, Brunkhorst FM, Ragaller M, Reinhart K, Loeffler M, et al. Variability of structures in German intensive care units: a representative, nationwide analysis. Wien Klin Wochenschr. 2010;122:572–8.
40. Arabi YM, Phua J, Koh Y, Du B, Faruq MO, Nishimura M, et al. Structure, organization, and delivery of critical care in Asian ICUs. Crit Care Med. 2016;44:e940–8.
Exploring Approaches for Detecting Protein Functional Similarity within an Orthology-based Framework
Christian X. Weichenberger (ORCID: orcid.org/0000-0002-2176-0274), Antonia Palermo, Peter P. Pramstaller & Francisco S. Domingues
Scientific Reports volume 7, Article number: 381 (2017)
Subject terms: Protein function predictions
Protein functional similarity based on gene ontology (GO) annotations serves as a powerful tool when comparing proteins on a functional level in applications such as protein-protein interaction prediction, gene prioritization, and disease gene discovery. Functional similarity (FS) is usually quantified by combining the GO hierarchy with an annotation corpus that links genes and gene products to GO terms. One large group of algorithms involves calculation of GO term semantic similarity (SS) between all the terms annotating the two proteins, followed by a second step, described as "mixing strategy", which involves combining the SS values to yield the final FS value. Due to the variability of protein annotation caused e.g. by annotation bias, this value cannot be reliably compared on an absolute scale. We therefore introduce a similarity z-score that takes into account the FS background distribution of each protein. For a selection of popular SS measures and mixing strategies we demonstrate moderate accuracy improvement when using z-scores in a benchmark that aims to separate orthologous cases from random gene pairs and discuss in this context the impact of annotation corpus choice. The approach has been implemented in Frela, a fast high-throughput public web server for protein FS calculation and interpretation.
Gene products can be compared in many different ways, researchers for example have been performing comparisons between proteins based on the amino acid similarity1 for many years. Proteins can further be computationally compared regarding their molecular function and biological role in the cell, provided there is a set of standardized annotations that can be used to calculate measures of semantic similarity2. The Gene Ontology (GO) Consortium has been making major contributions in this area by establishing a controlled vocabulary of defined terms for annotating gene and gene product properties across species3. More specifically, GO provides three independent ontologies to describe biological molecules: Molecular Function (MF), Biological Process (BP), and Cellular Component (CC). Starting as a collaborative annotation project between fruit fly, yeast, and mouse model organism databases, GO now contains a growing collection of ontology terms as well as a large amount of annotations from all kingdoms of life. GO terms are organized in a directed acyclic graph (DAG) with the so-called root term being the most common term with only child terms. More specific terms are further away from the root and any term (other than the root) may have multiple child and parent terms.
Soon after the introduction of GO, researchers started to apply semantic similarity concepts from natural language ontologies to GO-annotated gene products4, defining a function that maps a pair of GO terms to a numeric value expressing the semantic similarity (SS) of this pair. Now, more than a decade later, roughly 100 different methods have been suggested to compute SS for pairs or sets of GO terms5. Several reviews are available6,7,8,9, which attempt to classify the different methods into groups with related SS measures. Generally, these measures make use of the structure of the DAG and/or combine this with information from a GO annotation corpus, which provides the mapping between GO terms and gene products. Following the classification suggested by Guzzi et al.7, measures may be distinguished by their use of ancestor terms, whether they utilize DAG structural properties such as term depths or path lengths, or if they consider the term information content, which relates to negative logarithm of the probability to observe a GO term or a more specific term in the annotation corpus given the DAG. The SS measures have also been classified as pairwise or group-wise, according to how the multiple GO terms annotated to each protein are taken into account. In pairwise measures two proteins are compared semantically by either computing all possible pairwise semantic similarities between the GO terms annotated to each of the two proteins and determining a suitable combined score (often referred to as mixing strategy), for example by taking the maximum of all SS values. Alternatively, group-wise measures perform a direct comparison of sets of GO terms. Ultimately, some measures apply vector space models, where the GO terms annotated to a protein are encoded as a vector, and two vectors are then used to compute the SS of the respective proteins.
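As a point of reference for the measures discussed below, term information content is typically derived from the annotation corpus as IC(t) = −log p(t), where p(t) is the probability of observing t or a more specific term. A minimal sketch with hypothetical input structures follows.

```python
# Minimal sketch of information content from an annotation corpus.
import math
from collections import defaultdict

def information_content(annotations, descendants):
    """annotations: dict protein -> set of GO terms (direct annotations);
    descendants: dict term -> set of terms it subsumes (including itself)."""
    term_count = defaultdict(int)
    total = 0
    for terms in annotations.values():
        for t in terms:
            term_count[t] += 1
            total += 1
    ic = {}
    for term, subsumed in descendants.items():
        hits = sum(term_count[d] for d in subsumed)  # term or more specific term
        ic[term] = -math.log(hits / total) if hits else float("inf")
    return ic
```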
Protein semantic similarity has been applied to answer various biological questions, such as protein-protein interaction (PPI) prediction10, and conversely, prediction of false positives in PPI networks11, prediction of pathways from PPI genome-wide data12, imputation of missing values in expression data13, construction of SS networks to discover disease genes14, protein function prediction15, just to name some recent applications, see Pesquita et al. for a more complete overview9. Performance of SS measures was also benchmarked on PPI data and on their relationship to sequence similarity and gene expression data, summarized in the CESSM tool for assessment of SS measures16.
Over time, several groups identified and discussed drawbacks of individual SS measures and the annotation process as a whole. Evidently, protein annotation is biased and is influenced by different research interests, with model organisms of human disease for example being better annotated17 and promising gene products (e.g. disease associated genes) or specific gene families having a higher number of annotations. These biases have been analysed over time18 and lead to correlations between FS scores and the number of GO terms a protein is annotated with, which in turn affect applications that involve SS measures19. Protein annotations with GO terms include evidence codes, which inform about the source of the annotations and which are frequently interpreted as a quality index. In particular, use of automated annotations (inferred from electronic annotation, IEA), which form the majority of all available annotations, has been investigated in most SS assessments. In general, these assessments report that including annotations with IEA evidence codes improves performance most of the time7, and it is encouraging to see electronic annotations improve over time20. Finally, certain SS measures are affected by shallow annotations, which are very generic/unspecific annotations close to the root, in the sense that two proteins furnished with only shallow annotations receive high SS scores14, 21.
Considerable effort has been put into developing software packages and web servers to compute SS, see reviews by Guzzi et al.7 and Gan et al.6, with subsequent work published as web services or extensions to existing web resources22,23,24,25 or as downloadable source code26,27,28,29. Notably, the GOssTo framework allows calculation of several popular SS measures both as a standalone Java program or through a web service30, whereas the Semantic Measures Library offers about 50 SS measures in a command line Java toolkit. Ultimately, Mazandu et al. provide the most complete set of SS measures both for online calculation31 and as a high-performance Python command line tool5.
Here, we present an orthology-based evaluation of different functional similarity (FS) measures, resulting from the combination of six popular SS measures with five mixing strategies. We tackle annotation bias by introducing protein-specific z-scores and discuss their performance when compared to conventional FS scores. This work also reinvestigates the influence of electronic annotations and annotation corpus choice on the performance of the FS measures with respect to all three available gene ontologies. We furthermore combine FS scores from the different ontologies and compare their performance. The methods can be run on a new web server, http://frela.eurac.edu, which allows high-throughput calculations on an average of several ten thousand protein pairs per minute. The server facilitates biological interpretation of computed FS scores. The software, including the source code for the web server, is available for download from our web server.
In this work, we have compared the performance of selected functional similarity measures in discriminating pairs of orthologous genes from random pairs and utilized protein-based z-scores as a means to improve the discriminatory power of these FS measures. To summarize, we have chosen the following six pairwise SS measures: Resnik (simRes), Lin (simLin), Schlicker (simRel), information coefficient (simIC), Jiang and Conrath (simJC), and graph information content (simGIC). We combined these SS measures with each of the following five mixing strategies: average (fsAvg), maximum (fsMax), maximum of best matches (fsBMM), best matches averaged (fsBMA), and mean of best matches (fsABM), which gives a total of 30 FS measures that were investigated in our study.
All SS measures (except simGIC) discussed here make use of the most informative common ancestor (MICA), that is, the common ancestor of the two compared GO terms that has maximal information content. The earliest SS measure is simRes, which simply selects the information content of the MICA. This measure is not normalized and neglects any information about the contributing GO terms. Since simRes is not bounded from above, any normalization procedure (such as division by the highest information content value32) depends on the annotation corpus itself, and therefore hampers score comparison between corpora. To limit these drawbacks, simLin normalizes simRes by the sum of the contributing GO terms' information contents, such that MICAs that are close to their GO terms receive a higher score than those that are higher up in the GO graph. In simRel, the level of annotation detail is addressed by weighting simLin with the counter-probability of the MICA. By this, shallow annotations receive less relevance than MICAs further away from the root. A criticism of this approach was that annotations close to the root or to the leaves receive too little weight, which was addressed by an alternative weighting scheme in the simIC measure. Initially designed as a metric33, then transformed into a SS measure34, simJC differs little from simLin35. Finally, simGIC takes into account all ancestor terms and, most notably, does not make use of MICAs.
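The formulas behind these measures can be summarised in a few lines. The sketch below assumes a precomputed information content dictionary ic, a term probability dictionary p, and an ancestors mapping in which each term is included among its own ancestors; it illustrates the published definitions rather than reproducing the Frela implementation (simIC and simJC follow analogous patterns and are omitted).

```python
# Sketch of selected semantic similarity measures (not the Frela code).
def mica(t1, t2, ancestors, ic):
    """Most informative common ancestor of two GO terms, or None."""
    common = ancestors[t1] & ancestors[t2]
    return max(common, key=lambda a: ic[a]) if common else None

def sim_res(t1, t2, ancestors, ic):
    m = mica(t1, t2, ancestors, ic)
    return ic[m] if m else 0.0

def sim_lin(t1, t2, ancestors, ic):
    denom = ic[t1] + ic[t2]
    return 2 * sim_res(t1, t2, ancestors, ic) / denom if denom else 0.0

def sim_rel(t1, t2, ancestors, ic, p):
    # Schlicker: Lin similarity down-weighted for shallow (frequent) MICAs
    m = mica(t1, t2, ancestors, ic)
    return sim_lin(t1, t2, ancestors, ic) * (1 - p[m]) if m else 0.0

def sim_gic(terms1, terms2, ancestors, ic):
    # simGIC is group-wise: ratio of IC sums over shared vs. all ancestors
    a1 = set().union(*(ancestors[t] for t in terms1))
    a2 = set().union(*(ancestors[t] for t in terms2))
    return sum(ic[t] for t in a1 & a2) / sum(ic[t] for t in a1 | a2)
```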
Two gene products are functionally compared by computing a semantic similarity score matrix filled with SS scores formed by pairing annotations made to one gene product versus the annotations to the other. The mixing strategies operate on the SS score matrix in order to obtain a single FS score. Each of the mixing strategies has been designed with a specific goal. When choosing fsAvg 4, a high score can only be obtained if all SS scores are high, i.e. if the two proteins are annotated with very similar terms. On the other hand, fsMax 36 highlights the largest SS score and thus points to the most similar subfunctionality. The remaining mixing strategies are variations of row and column maxima functions, which express the highest similarity of one specific annotation to all annotations of the comparison partner and vice versa. Historically, fsABM was mentioned first37, and was later discussed in detail as an improved mixing strategy38 by taking the average over all row and column maxima. Like fsAvg, this strategy works well if both proteins have many related annotations. The fsBMA 34 strategy computes the average of the averaged row and column maxima and therefore takes better into account individual high similarities, whereas fsBMM 39 puts emphasis on an asymmetric view by selecting the score obtained from the better average maximum, highlighting good partial matches in the respective protein.
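The five mixing strategies reduce the SS score matrix to a single FS value; a compact sketch of their definitions, as described above, is given below (S[i, j] compares the i-th GO term of one protein with the j-th term of the other).

```python
# Sketch of the five mixing strategies applied to an SS score matrix S.
import numpy as np

def mix(S: np.ndarray) -> dict:
    row_max = S.max(axis=1)  # best match for each term of the first protein
    col_max = S.max(axis=0)  # best match for each term of the second protein
    return {
        "fsAvg": S.mean(),
        "fsMax": S.max(),
        "fsBMM": max(row_max.mean(), col_max.mean()),
        "fsBMA": (row_max.mean() + col_max.mean()) / 2,
        "fsABM": np.concatenate([row_max, col_max]).mean(),
    }
```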
Annotation Bias and Score Distribution
Several studies have examined annotation bias in GO. Early on, Wang et al.19 have reported that heavily studied and annotated disease genes and their orthologues in model organisms are one of the sources of bias, and that this would be influencing gene prioritization results. The authors propose a correction by a power transformation for FS scores that uses parameters estimated from random background distributions of FS scores. Shallow annotations can be interpreted as the inverse of annotation bias, and Chen et al.40 suggested to overcome shallow annotations by calculating an information content overlap ratio FS score, which essentially is a reciprocal average of information content scores. This idea has been refined by Teng et al.24 by taking into account shared information content during score calculation, a concept that has been introduced earlier albeit tackled differently34. Motivated by the aforementioned findings of Wang et al.19, Schulz et al.41 introduced a method to calculate exact p-values for a fixed set of query terms and applied this concept for similarity searches in phenotype ontologies. A temporal analysis of annotation bias in GO database releases over ten years revealed an increase for human data, but also a decrease in yeast data18. Another source for annotation bias are high-throughput experiments, which contribute disproportionally large amounts of annotations by only few published studies42, being further propagated by automated methods. This huge body of electronic annotations (evidence code IEA) has a strong influence on scores. Figure 1 shows the distribution of score averages calculated from pairwise comparison of 1000 randomly chosen mouse proteins for each annotated human protein in the respective annotation corpus for the simLin/fsAvg measure. The scores are computed both when taking into account annotations with evidence code IEA (IEA(+)) and also when excluding IEA annotations (IEA(−)), see Table 1 for dataset sizes. In the distributions we find no single average score higher than 0.4, and since the averages were calculated over randomly selected protein pairs, this can conservatively be seen as a global upper limit for distinguishing random from non-random matches by simLin/fsAvg score. Each distribution exhibits a sharp peak in the lower score ranges that is absent in the case of scores calculated from non-electronic annotations. Since the scores are averages of proteins with randomly chosen partners, the peaks correspond to background similarity due to the electronic annotation process. This can be rather extreme, as for example is the case in the BP ontology (Fig. 1a), where in the presence of IEA annotations there are 13.4% scores below 0.05, but in their absence, this is reduced to 0.7%. The score distribution within the CC ontology (Fig. 1c) furthermore shows that IEA annotations can contribute substantially to obtaining higher scores, as the tail of the IEA(+) distribution is elongated by almost 30% of the length of the IEA(−) distribution. This tail accounts to proteins annotated with only IEA evidence codes.
Average simLin/fsAvg score distributions for BP, MF, and CC ontologies for human/mouse protein pairs. For a human protein P, the score average is computed by forming pairs of proteins (P, R) over 1000 randomly selected mouse proteins R, with the kernel density estimates of the respective distributions being displayed for the IEA(+) dataset (black solid lines, density computed from 93806 annotated proteins, also see Table 1) and the IEA(−) dataset (grey lines, 21212 annotated proteins). (a) BP score distribution. Manually annotated protein pairs show a clear peak at a simLin/fsAvg score of 0.15. Including IEA evidence codes in the annotation corpus generates a second peak very close to 0.0. A large portion of this peak can be attributed to the roughly 70000 human gene products that are exclusively annotated with IEA evidence codes. (b) MF-based score distribution. Unlike BP with its sharp peak for the manual annotations, this ontology is characterized by a more uniform distribution of scores, with a notable peak near 0.27, generated by approximately 1600 proteins. GO enrichment analysis of these proteins shows that they are significantly enriched in "protein binding" (GO:0005515, p < 10^−100), suggesting that gene products annotated to this term generally yield much higher than average simLin/fsAvg MF scores. (c) CC score distribution. Here, both manual and electronic annotation peaks are closer to each other than in the other two ontologies. Furthermore, electronic annotations are characterized by densities in the upper score range (>0.3), where the manual annotation scores have already tailed off.
Table 1 Number of annotated proteins in the organism data sets.
As mentioned before, genes with a higher number of GO annotations tend to receive higher FS scores. In other words, genes with annotation bias introduced by a large number of GO annotations are expected to have on average higher FS scores. To address this, Konopka et al.43 suggested using as a similarity threshold the 95% quantile derived from scores calculated from random protein pairs. Alternatively, we propose to improve the similarity scores of two proteins by taking into account their respective score background distribution and calculate a similarity z-score (see Methods) that is less affected by annotation biases of specific proteins. This requires calculation of mean and standard deviation for each protein P by evaluating FS scores from protein pairs (P, Q), where proteins Q are randomly sampled. The so derived mean score for protein P represents a baseline score that varies from protein to protein (see Fig. 1), and describes the expected FS score found for random protein pairs, which in turn reflects (but is not limited to) annotation bias. Together with a protein-specific standard deviation this allows to transform a FS score into a normalized z-score, which adjusts for the annotation baseline of the proteins compared.
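The following sketch illustrates the idea of the similarity z-score for a single query protein: the raw FS score of a pair is standardised against the distribution of scores obtained for that protein against a random sample of partners. It is a simplified, one-sided version that uses only one protein's background distribution (the exact combination of the two proteins' protein-specific means and standard deviations is described in the study's Methods), and the function names are illustrative.

```python
# Illustrative sketch of a protein-specific similarity z-score.
import numpy as np

def similarity_zscore(fs_score, protein, random_partners, fs_fun):
    """fs_fun(p, q) returns the raw FS score of two proteins;
    random_partners is a sample of unrelated proteins (e.g. 1000)."""
    background = np.array([fs_fun(protein, q) for q in random_partners])
    return (fs_score - background.mean()) / background.std(ddof=1)
```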
As a first example, we consider the orthologous genes encoding human and mouse alcohol dehydrogenase 1 (UniProt accession numbers (UPANr) P07327 and P00329, respectively), which have a simLin/fsAvg IEA(+) score of 0.22 in BP ontology. A similar score is computed for the unrelated pair of a human G-protein coupled receptor and a mouse histone protein (UPANr Q96LA9 and Q8CGP5). However, the latter protein pair has an average simLin/fsAvg score of 0.21 for 1000 randomly chosen mouse proteins, and in this context the particular score of 0.22 should no longer be considered high enough to call the pair similar. By calculating a z-score for these specific pairs of proteins, we find for the orthologous alcohol dehydrogenases z(P07327, P00329) = 2.97, and for the unrelated pair of proteins we compute z(Q96LA9, Q8CGP5) = 0.41, now giving a clear distinction on protein functional similarity.
As another example we examine the PARK2 disease gene, which encodes an E3 ubiquitin protein ligase. Mutations in this gene have been shown to be causative for various forms of Parkinson's Disease. As a disease gene it is heavily annotated with 91 GO terms in BP ontology, and can therefore be considered an annotation-biased gene that tends to receive on average higher FS scores. In fact, the mean simGIC/fsBMA IEA(+) functional similarity score between PARK2 and 1000 randomly selected mouse proteins equals 0.186, which corresponds to the 99.7th percentile of all human protein scores. In this context, comparison of the PARK2 protein (UPANr O60260) with two mouse proteins, FBXL2 (Q8BH16) and GATAD1 (Q920S3), results in similar, very high simGIC/fsBMA IEA(+) scores of 0.53. However, z(O60260, Q8BH16) = 5.58 whereas z(O60260, Q920S3) = 2.68, which helps to discriminate a protein involved in an E3 ubiquitin-ligase complex (FBXL2) from a zinc finger protein (GATAD1). (All examples can be reproduced on our Frela server).
Discriminating between Orthologues and Random Protein Pairs
In a recent study, Wu et al. used orthology relationships to quantify the ability of a SS measure to distinguish orthologues determined by phylogeny from an equally sized set of randomly paired proteins to demonstrate the superiority of their newly developed SS measure over others27. We have extended this idea and made use of high quality orthology relationships to define optimal thresholds for separating pairs of orthologues from random protein pairs for both raw FS scores and z-scores. This allowed us to directly compare FS measures under various conditions, to determine the optimal measure and additionally to investigate possible improvements achieved by applying a z-score calculated on top of the conventional FS scores. Since each measure is evaluated on the same set of cases and controls, all measures considered were equally exposed to any existing annotations bias, shallow annotations, or any other flaws in the GO ontology. It is furthermore known that GO annotations are incomplete and erroneous44, 45, and it is not guaranteed that no true positive was selected as a control by random sampling or conversely, that orthologues are not recognized due to very low function similarity described by GO itself. These are challenges to the orthology-based evaluation framework we have applied. However, all measures are tested on the same set of cases and controls, and therefore within this testing framework these effects cancel out when comparing measure performance on a relative basis9. Therefore, this allows ranking the tested measures; yet, we discourage to draw conclusions on an absolute scale such as transferring the error rates to other applications.
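The benchmark logic amounts to choosing, for each measure, the score cut-off that minimises misclassification between orthologous pairs (cases) and random pairs (controls). A compact sketch with illustrative variable names follows.

```python
# Sketch of selecting an optimal threshold separating cases from controls.
import numpy as np

def best_threshold(case_scores, control_scores):
    scores = np.concatenate([case_scores, control_scores])
    labels = np.concatenate([np.ones(len(case_scores)),
                             np.zeros(len(control_scores))])
    best_t, best_err = None, len(labels)
    for t in np.unique(scores):
        errors = np.sum((scores >= t) != labels)  # misclassified pairs
        if errors < best_err:
            best_t, best_err = t, errors
    return best_t, best_err / len(labels)  # threshold and error rate
```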
In Fig. 2 we present scatter plots for the percentages of correctly assigned protein pairs after selecting an optimal score threshold for different functional similarity measures using annotations from BP ontology (see paragraph "Benchmarking" in Methods section). These plots therefore show the ideal case where each measure is applying an optimal threshold, which separates best the cases (orthologues) from the controls (random pairs) based on a FS raw score (x-axis) and respective z-score (y-axis). In general, application of z-scores mildly improved accuracy; FS scores based on fsMax are the only exception. The overall lowest error rates were observed for the closely related and well annotated human/mouse orthologues: We found an error rate of approximately 2% (Fig. 2a), when including annotations with IEA evidence codes, and the best measures are simJC/fsBMA and simGIC/fsBMA. Interestingly, when excluding IEA evidence codes from the annotation corpus, we saw an increase in error rate to approximately 6%, but the best performing measures were still simJC and simGIC in combination with either fsBMA or fsABM (Fig. 2b).
Error rate scatter plots comparing raw and z-scores for different FS scores in BP ontology. Each panel plots the error rates of raw scores (x-axis) versus z-scores for pairs of orthologues and controls from selected organisms, with a smaller inset panel in the upper left corner focusing on the best scoring measures. The thick grey diagonal line serves as a measure performance indicator, as any point on this line corresponds to a measure that achieves exactly the same error rate with raw and with z-scores. Any deviation from this line therefore indicates an error rate improvement by either using the raw scores or the z-scores. The first column (panels a, c, and e) presents results from an annotation corpus including electronic annotations, whereas the second column shows the outcome where electronic annotations have been excluded. The rows correspond to the organism pairs used for scoring by orthology: first row, human/mouse; second row, human/fly; third row, mouse/fly. A legend in each lower right corner explains the colour encoding of SS measures and the symbols encoding mixing strategies.
The much more remotely related organisms human and fly posed a greater challenge to functional similarity characterization, reflected by an error rate of about 10.5% for the IEA(+) corpus (Fig. 2c), and simGIC/fsBMA being again the best performing measure. Here, we found some fsMax based scores yielding very low error rates, which became the best performing measure when using IEA(−) data: simIC/fsMax, simRel/fsMax, and simJC/fsMax (Fig. 2d). The good performance can be attributed to the fact that for IEA(−) data fsMax behaves similar to the generally well-performing fsBMA mixing strategy due to small number of annotations per protein9. We have already mentioned the very low error rates for human/mouse orthologues, such that by transitivity the mouse/fly orthologues also yielded similar results as the human/fly comparison. We also found the same measures ranking top. For both IEA(+) and IEA(−), the error rate in mouse/fly orthologues was higher by one to two percent compared to the human/fly orthologue dataset (Fig. 2c,d,e and f).
Notably, introduction of z-scores improved accuracy of the best performing measures only marginally. The biggest gains were observed for fsAvg-based measures, which simply compute the average of all entries in the SS matrix. Biologically, fsAvg is a rather unsuitable measure to quantify protein functional similarity, as shown by all plots in Fig. 2, where the measure frequently is associated with the highest error rates. By definition, fsAvg will perform only well if the two compared proteins are very similar in terms of GO annotations. Since we are comparing orthologues, proteins might already have deviated in function to a certain degree, which can be better picked up by row and column maxima mixing strategies. Remarkably, fsMax stood out both positively and negatively. This measure is characterized by the lowest error rate in the human/fly IEA(−) dataset when combined with simIC, simRel, and simLin SS measures (Fig. 2d), but fsMax-based z-scores performed worst in combination with simLin and simRel for human/mouse orthologues when using the IEA(+) dataset (Fig. 2a).
In the MF ontology, z-scores calculated with the simGIC/fsBMA measure based on the IEA(+) dataset distinguished themselves with the lowest error rates for any set of orthologues we have investigated. When restricting annotations to the IEA(−) dataset, various other measures ranked top, and simGIC/fsBMA consistently ranked in the top 20% (see Supplementary Fig. S1). This is in line with the results presented by Pesquita et al.9, who found that both simGIC (albeit used as a group-wise measure) and simRes/fsBMA performed best in a sequence similarity-based benchmark using MF ontology. Since the CC ontology describes cellular locations, it is not surprising to find rather high error rates when using this ontology for protein functional similarity (see Supplementary Fig. S2). However, as described later in Section "Combining Orthologies", it increased accuracy when calculating scores from multiple orthologies for a pair of proteins.
In all three ontologies we have observed that the fsBMA mixing strategy frequently contributed to lower error rates. Furthermore, even though several other SS measures were seen in the top ranks, as a general guideline we tend to recommend simGIC as the semantic similarity measure of choice for computing functional similarity. Use of z-scores generally improved measure performance, although for simGIC/fsBMA in particular, raw scores and z-scores performed almost equally well. Nevertheless, we see value in z-score usage as an additional discriminatory tool in ranking results, as we have seen in the PARK2 example above. When looking for closely related proteins in a proteome-wide scan, fsAvg can by its nature be a good mixing strategy, and based on the results from the orthology relationships we expect z-scores to be especially beneficial. In most circumstances, fsMax should not be used when computing FS scores based on an IEA(+) annotation corpus, as this mixing strategy generally overestimates protein functional similarity. On the other hand, when looking for very good single matches of GO terms, fsMax will be a good choice, especially in an annotation corpus without electronic annotations, and has been recommended for protein-protein interaction analysis32. Broader partial matches are best detected by fsBMA. Overall, we find simGIC/fsBMA to be a recommendable all-round FS measure.
Our results confirm previous findings that inclusion of electronic annotations improves measure accuracy46, 47. In particular, Rogers and Ben-Hur47 identified a 16% increase in average accuracy, an overestimate of classifier performance, when predicting protein function using BP ontology and electronic annotations. In BP, when using the IEA(+) dataset we find an average decrease of 3.9% in error rate for both raw and z-score based measures. One reason for this improvement is orthologue pairs annotated with only IEA evidence codes, such as the pair given by UPANrs (Q9UBX7, Q9QYN3), which is electronically annotated only with the GO term GO:0006508 (proteolysis). In our human/mouse orthology dataset, we found 221 orthologous gene pairs where both genes are annotated with IEA evidence codes only, implying that GO terms have been transferred during an automated annotation process. Moreover, these 221 gene pairs already make up 39% of all pairs that are present only in the human/mouse orthologue IEA(+) set, whereas the remaining 61% are pairs where only one protein is annotated exclusively with IEA evidence codes. The improvement with IEA can also be explained by IEA annotation being partially based on orthology, resulting in similar GO annotations between orthologues, so that the similarity measures achieve better discrimination from the background of unrelated proteins. Overall, when including automated annotations, accuracy was higher, but this is also due to minimally and solely electronically annotated proteins, which should be kept in mind when deciding whether to involve automated annotations. Our web service therefore provides means to investigate FS score composition, so that such cases can be detected easily.
Choice of Annotation Corpus
The annotation corpus forms the basis for the calculation of FS scores and therefore influences the score magnitude, which needs to be kept in mind during score interpretation43, 46. It is therefore of interest whether protein functional similarity should be computed with data retrieved only from the organism(s) under investigation rather than with all available data from all annotated organisms. To examine this question within our orthology framework, we restricted the annotation corpus to include only data from the organisms that form the orthologue pairs (AOO) and compared it to our default corpora that do not impose any restrictions on organisms (ALL). For example, for human/mouse orthologues, SS measures were computed from GO graphs containing information content related to only human and mouse protein annotations (AOO), and were compared to SS measures computed from GO annotations of proteins from all organisms (ALL).
When excluding electronic annotations, we consistently observed better scoring performance for the BP ontology when using ALL corpus data (see Methods for details on the analysis). On the one hand, this is quite remarkable, as the observation holds for 29 SS/MS combinations, with simJC/fsAvg being the only exception for which the null hypothesis of equal mean ranks between the two datasets was not rejected at the significance level of α = 0.01. On the other hand, the magnitude of the difference is rather small: overall, error rate differences are mostly below 1% of the respective case/control dataset size. The measures with the lowest error rates in BP, simJC and simGIC in combination with fsBMA (Fig. 2a), showed only a very modest gain in performance when used with ALL corpus data (<0.7% error rate difference). The biggest improvement was found for simRel/fsBMM, where the (mean) error rate drops by 1.48%, from 716.73 (CI = [714.63, 718.83], AOO corpus) to 665.51 (CI = [663.51, 667.50], ALL corpus) false assignments in a dataset of 1726 mouse/fly orthologues and 1726 controls sampled 225 times (Supplementary Fig. S3). A rather similar picture is seen for the CC ontology, where all SS measures except simRes resulted in smaller error rates when computing scores based on ALL corpus data. However, the performance gain is generally larger than in the BP ontology; for example, we found a 5.95% drop in error rate for simIC/fsBMM when using ALL corpus data for human/mouse orthologues (Supplementary Fig. S4). For the MF ontology, there were significant improvements for the simLin, simRel, and simIC measures with ALL corpus data, but the decrease in error rate is below 0.5% (Supplementary Fig. S4). Finally, when including electronic annotations there was no clear improvement in using the ALL corpus of annotations versus AOO.
To summarize, when discriminating orthologues from unrelated proteins it seems beneficial to rely on a larger set of annotation corpora instead of restricting to annotations from the respective organisms. Even though the gain in accuracy is generally small and is only evident when restricting to IEA(−) annotations, it is much more convenient to compute scores based on a single database derived from all available annotation corpora than to use corpora tailored to individual organisms.
Combining Ontologies
A decade ago, Schlicker et al. suggested combining FS scores from the BP and MF ontologies into a single FS score by computing the root mean square of the two ontology scores48, which to our knowledge has never undergone any performance investigation. The orthology testing framework described here provides an appropriate environment for investigating the performance of FS measures based on BP+MF combined ontologies, as well as the effect of combining all three ontologies (BP+MF+CC) into a single FS score. Our analysis of BP+MF and BP+MF+CC combined ontology FS scores reflects an increasingly specific filtering strategy starting with biological processes: an annotation to BP is made when a gene product activity is part of, or regulates, or is upstream of but still necessary for, a biological program49. This broad context makes BP annotations especially valuable for protein functional similarity calculations. Requiring additional similarity in MF, that is, comparable biochemical or signalling activity, will generally narrow down the list of matching candidate proteins and increase the overall specificity of the query. Demanding protein colocation by utilizing annotations from the CC ontology presents the ultimate refinement in protein functional similarity that can be retrieved from GO. For completeness, the Frela server allows score calculation based on any combination of ontologies.
When looking at the results we need to keep in mind that combined ontologies require every protein to be annotated in each of the contributing ontologies, which results in a smaller number of orthologous gene pairs entering the assessment (see Table 2). For example, there are 2240 human/fly orthologues annotated in BP, including automated annotations. However, only 1868 orthologues are annotated in both the BP and MF ontologies, and this number drops to 1516 for orthologous gene pairs with annotations in all three ontologies.
Table 2 Number of orthologous gene pairs used in this study.
In general, we observed substantially lower error rates in all three orthology datasets in comparison to the scores based on a single ontology (Fig. 3). Even though FS measures involving the CC ontology showed the highest error rates of all three single ontologies, inclusion of this ontology further improved the error rates when compared to the BP+MF combined ontology, especially for the closely related human/mouse organisms (Fig. 3a). Also, use of combined ontologies tends to level out the performance of FS measures, in the sense that error rate differences become smaller.
Error rates of various FS measures computed for different ontologies. Each panel shows error rates for different measures (x-axis, measure names separated by a hyphen) for all three gene ontologies and the two combined ontologies (encoded by symbols, see legend to the right of each panel). Error rates for IEA(+) dataset-derived scores are shown in black, and the corresponding scores calculated for the IEA(−) dataset are depicted as grey symbols in a separate column to the right of the corresponding IEA(+) results. Since the error rates are given as percentages, we refer to Table 2 for dataset sizes and Supplementary Data File S1 for underlying raw data. Panels represent all orthology relationships that entered this study: (a) human/mouse orthologues; (b) human/fly orthologues; and (c) mouse/fly orthologues.
We observed that for the majority of measures applied to human/mouse orthologues, the BP+MF+CC combined ontology is superior to any other single or combined ontology within the IEA(−) dataset, with fsAvg and simLin/fsMax being the only exceptions (Fig. 3a, grey circles).
In Fig. 4 we highlight two measures that, when using electronic annotations, ranked top in the BP+MF combined ontology for human/mouse and human/fly orthologues, respectively (also see Fig. 3a and b). Utilizing simGIC/fsBMA, orthologues from the closely related human and mouse organisms are very well separated from their controls, for both the BP and MF ontologies (Fig. 4a, upper part). The corresponding two peaks in the combined BP+MF score are accordingly well separated (Fig. 4a, lower part), such that the overall error rate is only 1.24%. For the more remotely related organism pair human/fly, the densities for cases and controls calculated with the simIC/fsBMA measure overlap to some extent (Fig. 4b). Notably, there is a small fraction of orthologues that do not share any similarity in the MF ontology but have considerably high BP scores (Fig. 4b, circles on the upper half of the x-axis). One of these orthologues is the human/fly pair Q6ZYL4/B7Z018 (raw score in BP is 0.50, z-score is 5.09); however, each of the two proteins carries only a single annotation in MF, which evaluates to a raw MF score of 0.0 and therefore heavily penalizes the combined score, so this orthologous pair becomes a false negative in the BP+MF combined ontology.
Distribution of BP, MF, and BP+MF combined FS scores (IEA(+) dataset) for selected orthologies. The upper part of the figure shows a scatter plot of BP (x-axis) and MF (y-axis) scores of orthologous gene pairs (cases, displayed as circles) and randomly selected gene pairs (controls, visualized as crosses) from the respective organisms. On top of this scatter plot the two-dimensional density function of these two distributions is displayed by solid and dashed iso-lines for cases and controls, respectively. Each two-dimensional point in this scatter plot is mapped to a real value by the F BP+MF function, which is the root mean square of the two individual scores. In the bottom part of the figure we show the one-dimensional density function of the so-computed F BP+MF scores for cases and controls using the same line styles as above. The crossing point of the two density graphs defines the optimal threshold for minimizing the error rate. (a) Human/mouse orthologues and controls with their associated simGIC/fsBMA scores. The simGIC semantic similarity in conjunction with the F BP+MF function greatly separates cases from controls, with an error rate of only 1.24%. (b) Human/fly orthologues and controls with their associated simIC/fsBMA scores. On average, we find 207.28 (CI = [205.86, 208.69]) incorrectly assigned pairs out of 1868 cases and 1868 controls, which corresponds to an error rate of 5.55%.
The results indicate that there are considerable differences between established FS scores, with some measures performing better in the identification of functionally related proteins. In addition, the results suggest that combining FS scores from the three ontologies (BP, MF, and CC) tends to improve the performance of the various FS measures. Future work should address several remaining issues. In particular, problematic cases should be investigated, for example where functional similarity scores are unexpectedly low, in order to identify the limitations of current functional similarity approaches and devise new solutions. Many approaches have been proposed for measuring functional relationships, but in the current work we have only assessed a few of the most established ones. It would be desirable to extend the current analysis to other approaches, for example group-wise SS measures. Finally, the current work focused on orthologous genes as a reference set of functionally related proteins; future work should investigate FS-based approaches for concrete applications such as candidate gene prioritization.
Our work is summarized in the Frela web server located at http://frela.eurac.edu, which supports calculation of different FS measures by combining any of the six SS measures with five different mixing strategies discussed in this study. Frela can be invoked in two major modes: the interactive mode receives input directly from the user through web entry forms, while the batch mode processes uploaded files. It is possible to perform calculations on either the IEA(+) or the IEA(−) dataset for any of the BP, MF, and CC ontologies or combinations thereof (Fig. 5a). For similarity computations involving proteins from human, mouse, or fruit fly organisms, the server accepts UniProt/SwissProt accession numbers as protein identifiers. Since internally we are using the MySQL database provided by the GO consortium, which uses gene or protein identifiers as supplied by contributors, other organisms' identifiers must match those stored in the database. For example, the Arabidopsis thaliana HSK gene, which codes for a homoserine kinase, is deposited under TAIR accession code "locus:2827533" in the GO database and as such can be used as query input to Frela. For further information, we refer to the GO web page, which offers all necessary details on stored gene and protein identifier systems.
Screenshot of Frela web interface. (a) Main input web form. This form provides all necessary elements to run a protein functional similarity calculation with Frela. The user can choose between manual data entry (interactive protein-protein comparison) and file upload (batch mode). We support computation of pairwise protein functional similarities for any organism or a scan of a single protein versus all annotated human, fly, or mouse proteins. In addition, it is possible to specify use of either the IEA(+) or the IEA(−) dataset with the "Use electronic annotations (IEA)" option. In this panel we show the parameters used in the example given in the "Web Server" section, which is a scan of the human STX1B protein (UPANr P61266) versus all annotated proteins from the fly BP ontology, calculating simGIC/fsBMA scores and including IEA evidence codes. (b) Semantic similarity score matrix for the pair of syntaxin proteins given by UPANrs P61266 and Q7KVY7 with the same parameters as in panel (a). Since the FS score is computed from pairwise SS scores derived from the GO annotations from the respective proteins being compared, this SS score matrix provides important insight into FS score calculation and interpretation. The background of a cell is a colour gradient, which corresponds to the SS score and ranges from white (no similarity) to dark red (highest similarity). Cells involved in FS score composition are surrounded by thick black lines, and the line style informs about the type of maximum the cell contains: row maximum, dotted line; column maximum, dashed line; row and column maximum, solid line. The fsBMA score is then calculated from the sum of row maxima (6.279957) and column maxima (2.837199) as 1/2× (6.279957/15 + 2.837199/5) = 0.4930518, since in BP, P61266 is annotated with 15 GO terms (number of rows) and Q7KVY7 is annotated with 5 terms (number of columns). The third column of this SS matrix corresponds to GO term GO:0002121, "inter-male aggressive behavior", which does not share any similarity with any of the GO terms from the human protein. Dropping this term from the annotation of the fly protein results in an improved fsBMA score of 0.563981775.
Both the interactive and the batch mode accept protein pairs as input. Any identifier entered in the interactive mode activates organism auto-detection and conveniently preselects an organism name in the corresponding dropdown menu. If no matching UniProt protein accession code is found in the server's list of annotated human, mouse, or fly accession codes, we assume that an identifier from a different organism has been entered. At this point it is important to underline that, due to expensive precalculations, the web server computes protein FS z-scores exclusively for comparisons between human, mouse, and fly with protein identifiers provided as UniProt accession codes.
In interactive mode, we furthermore support scanning a protein against all annotated proteins for a chosen ontology. Execution time depends on the chosen parameters and especially on the number of annotations of the query protein. For example, a simRel/fsBMA run of a human protein annotated with 15 GO terms in BP against all BP-annotated fly proteins, including IEA evidence codes, requires roughly 9000 pairwise score calculations and completes in about 15 seconds.
In addition to score computation for protein pairs, batch mode conveniently offers all-versus-all score calculation between two uploaded files containing protein identifiers. This is especially useful when scoring a list of proteins against a panel of proteins of interest, such as a disease-related set of proteins.
After the calculations are finished, a downloadable result table is displayed, sorted by z-score or raw score depending on the input parameters. For each row in this table, the semantic similarity matrix can be displayed by clicking on the corresponding score. The matrix is composed of GO term names linked to the GO website and colour-codes term SS scores from white (no similarity) to red (high similarity). Depending on the chosen FS measure, the various types of maxima are denoted by the line styles of the affected cells, which greatly simplifies understanding of the score calculation and further score interpretation.
As an example we selected the human syntaxin gene STX1B (UPANr P61266), which has been associated with Parkinson's disease50. When evaluating simGIC/fsBMA scores for the BP IEA(+) ontology, its fly orthologue Syx1A (UPANr Q7KVY7) ranks at position 50 of the hit list sorted by z-score (and position 107 when sorted by raw score). One reason for this rather low rank is that the fly protein is annotated with GO term GO:0002121, "inter-male aggressive behavior", a term which is missing from the annotation of the human protein (Fig. 5b). This lowers the fsBMA score and therefore worsens its rank in the similarity hit list. On the other hand, this pair receives the highest z-score when running the scan for the MF ontology. Most importantly, calculating combined BP+MF scores ranks the pair of orthologues in the top position, underlining the power of combining scores from different ontologies.
The abovementioned examples can easily be reproduced using the web server located at http://frela.eurac.edu, where each calculation takes less than a minute. Furthermore, the web server provides interactive versions of the graphs presented in Fig. 2, facilitating choice of an appropriate functional similarity score. The web server is freely available, and the source code can be downloaded from our web site.
An annotation corpus A establishes a relationship between a GO term t and gene G (or gene product). We have retrieved GO graphs and annotations as of September 2015 as MySQL dumps from the GO website and extracted data for human and the model organisms Drosophila melanogaster (fly) and Mus musculus (mouse), which form the basis for all of our comparisons. For each of the BP, MF, and CC ontologies we compute the term probabilities P(t) = N(t)/N(root), where N(t) denotes the number of proteins annotated with a term t or any of its descendants, and root is the ontology's unique term without any ancestors. The GO term information content is then given by I(t) = −log(P(t)).
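As a concrete illustration of this computation, the following minimal Python sketch (our own helper code, not the Frela backend; it assumes that per-term annotation counts N(t), already propagated to ancestors, have been extracted from the GO MySQL dump) derives P(t) and I(t):

```python
import math

def information_content(annotation_counts, root):
    """Compute IC(t) = -log(P(t)) with P(t) = N(t) / N(root).

    annotation_counts: dict mapping a GO term to the number of proteins
    annotated with that term or any of its descendants (N(t)).
    root: the ontology's unique term without ancestors.
    """
    n_root = annotation_counts[root]
    ic = {}
    for term, n_term in annotation_counts.items():
        p = n_term / n_root
        ic[term] = -math.log(p) if p > 0 else float("inf")
    return ic

# Hypothetical toy counts: the root annotates 1000 proteins, a specific term 10.
counts = {"GO:0008150": 1000, "GO:0006508": 10}
ic = information_content(counts, root="GO:0008150")
# ic["GO:0006508"] == -log(10/1000) ≈ 4.6; the IC of the root is 0.
```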
Semantic and Functional Similarity Measures
In this study we concentrate on six frequently used pairwise SS measures. Let s and t be two GO terms to be compared semantically, and let S(s, t) denote the set of all common ancestors of s and t. The measure defined by Resnik51 is given by
$${simRes}(s,t)=\,\mathop{{\rm{\max }}}\limits_{c\in S(s,t)}I(c),$$
whereas Lin's measure52 takes into account the information content of the most informative common ancestor relative to the information content of the two terms,
$${simLin}(s,t)=\,\mathop{{\rm{\max }}}\limits_{c\in S(s,t)}\frac{2\cdot I(c)}{I(s)+I(t)}.$$
Schlicker weights Lin's measure by common ancestor term probability39,
$${simRel}(s,t)=\,\mathop{{\rm{\max }}}\limits_{c\in S(s,t)}\left(\frac{2\cdot I(c)}{I(s)+I(t)}\cdot (1-P(c))\right),$$
and a similar approach has been proposed independently53 by introducing the information coefficient measure as
$${simIC}(s,t)=\,\frac{2\cdot \mathop{\max }\limits_{c\in S(s,t)}I(c)}{I(s)+I(t)}\cdot \left(1-\frac{1}{1+\mathop{{\rm{\max }}}\limits_{c\in S(s,t)}I(c)}\right).$$
Jiang and Conrath's measure33 has been shown to be equivalent to Lin's measure35 but is included in our work for historical reasons. It is defined as
$${simJC}(s,t)=\frac{1}{1+I(s)+I(t)-2\cdot \mathop{{\rm{\max }}}\limits_{c\in S(s,t)}I(c)}.$$
Finally, we use the graph information content measure9, which by design is a group-wise SS measure, as a pairwise measure for consistency as suggested by Li et al.53:
$${simGIC}(s,t)=\frac{{\sum }_{c\in \{S(s,s)\cap S(t,t)\}}I(c)}{{\sum }_{c\in \{S(s,s)\cup S(t,t)\}}I(c)}$$
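For illustration, the following Python sketch (a simplified rendering of the formulas above, not the Frela backend) evaluates several of these measures; the information content dictionary ic, the term probabilities p, and the ancestor/common-ancestor lookup functions are assumed to be supplied by the caller.

```python
def sim_res(s, t, ic, common_ancestors):
    """simRes: IC of the most informative common ancestor of s and t."""
    anc = common_ancestors(s, t)
    return max((ic[c] for c in anc), default=0.0)

def sim_lin(s, t, ic, common_ancestors):
    """simLin: 2*IC(MICA) / (IC(s) + IC(t))."""
    denom = ic[s] + ic[t]
    return 2.0 * sim_res(s, t, ic, common_ancestors) / denom if denom > 0 else 0.0

def sim_rel(s, t, ic, p, common_ancestors):
    """simRel: Lin-style term weighted by (1 - P(c)) of the common ancestor."""
    anc = common_ancestors(s, t)
    denom = ic[s] + ic[t]
    if denom == 0 or not anc:
        return 0.0
    return max(2.0 * ic[c] / denom * (1.0 - p[c]) for c in anc)

def sim_jc(s, t, ic, common_ancestors):
    """simJC: 1 / (1 + I(s) + I(t) - 2*IC(MICA))."""
    return 1.0 / (1.0 + ic[s] + ic[t] - 2.0 * sim_res(s, t, ic, common_ancestors))

def sim_gic(s, t, ancestors, ic):
    """simGIC: IC-weighted Jaccard index of the ancestor sets of s and t."""
    anc_s, anc_t = ancestors(s), ancestors(t)   # each set includes the term itself
    union = anc_s | anc_t
    if not union:
        return 0.0
    return sum(ic[c] for c in anc_s & anc_t) / sum(ic[c] for c in union)
```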
Let us assume that protein P is annotated with m GO terms \(t_1, t_2, \ldots, t_m\) and protein R is annotated with n GO terms \(r_1, r_2, \ldots, r_n\); then the matrix M is given by all possible pairwise SS values \(s_{ij} = sim(t_i, r_j)\), with sim being one of the SS measures introduced above, \(i = 1, 2, \ldots, m\) and \(j = 1, 2, \ldots, n\). Functional similarity is computed from the SS entries of M according to a specific mixing strategy (MS), and in this work we investigate five different mixing strategies. A frequently used MS takes the maximum value of the matrix, \({fsMax}={{\rm{\max }}}_{i,j}{s}_{ij}\), whereas fsAvg takes the average over all entries, \({fsAvg}=\frac{1}{m\times n}\sum _{i,j}{s}_{ij}\). Furthermore, using the maximum of the averaged row and column best matches has been suggested for incomplete annotations, \({fsBMM}=\,{\rm{\max }}(\frac{1}{m}\sum _{i}{{\rm{\max }}}_{j}{s}_{ij},\frac{1}{n}\sum _{j}{{\rm{\max }}}_{i}{s}_{ij})\). Instead of taking the maximum, averaging gives the so-called best match average, \({fsBMA}=\frac{1}{2}(\frac{1}{m}\sum _{i}{{\rm{\max }}}_{j}{s}_{ij}+\frac{1}{n}\sum _{j}{{\rm{\max }}}_{i}{s}_{ij})\), and conversely, the averaged best match is defined as \({fsABM}=\frac{1}{m+n}(\sum _{i}{{\rm{\max }}}_{j}{s}_{ij}+\sum _{j}{{\rm{\max }}}_{i}{s}_{ij})\).
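A minimal NumPy sketch of the five mixing strategies applied to a precomputed SS matrix M (again our own helper code, not taken from Frela):

```python
import numpy as np

def mixing_scores(M):
    """All five mixing strategies for an m x n matrix of pairwise SS values."""
    M = np.asarray(M, dtype=float)
    row_best = M.max(axis=1)   # best match for every term of protein P
    col_best = M.max(axis=0)   # best match for every term of protein R
    m, n = M.shape
    return {
        "fsMax": M.max(),
        "fsAvg": M.mean(),
        "fsBMM": max(row_best.mean(), col_best.mean()),
        "fsBMA": 0.5 * (row_best.mean() + col_best.mean()),
        "fsABM": (row_best.sum() + col_best.sum()) / (m + n),
    }
```

As a consistency check, a 15 × 5 matrix whose row maxima sum to 6.279957 and whose column maxima sum to 2.837199, as in the syntaxin example of Fig. 5b, yields fsBMA ≈ 0.4930518.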
We additionally study the effect of combining multiple gene ontologies into a single score, as suggested previously48. We focus on pooling scores from the BP and MF ontologies and from the BP, MF, and CC ontologies. A functional similarity F is computed by combining an SS measure with any mixing strategy defined above over any of the different ontologies: biological process (\(F_{BP}\)), molecular function (\(F_{MF}\)), and cellular component (\(F_{CC}\)). We compute the combined measures as \({{F}}_{BP+MF}=\sqrt{\frac{1}{2}({{F}}_{BP}^{2}+{{F}}_{MF}^{2})}\) and \({{F}}_{BP+MF+CC}=\sqrt{\frac{1}{3}({{F}}_{BP}^{2}+{{F}}_{MF}^{2}+{{F}}_{CC}^{2})}\).
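Expressed as code, the root-mean-square combination is a one-liner (a sketch; the arguments are FS scores already computed for the single ontologies):

```python
import math

def combine(*scores):
    """Root mean square of single-ontology FS scores, e.g. combine(f_bp, f_mf)
    for F_BP+MF or combine(f_bp, f_mf, f_cc) for F_BP+MF+CC."""
    return math.sqrt(sum(s * s for s in scores) / len(scores))
```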
Z-score Calculation
Given a pair of target organisms, for each annotated protein in the respective corpus we compute a score background distribution by randomly sampling, without replacement, 1,000 annotated proteins from the corresponding second organism. From this distribution we compute the mean and standard deviation. For example, if we choose human and mouse, our dataset contains 21,212 human proteins furnished with BP annotations (excluding IEA), yielding 21,212 mean values and standard deviations, one pair for each annotated protein. Conversely, we find 9,714 manually annotated proteins for mouse (see Table 1). Guided by the Central Limit Theorem for the sum of independent random variables, we define the similarity z-score for a pair of proteins P and R as
$${z}({P},{R})=\frac{2\times {F}({P},{R})-({{\mu }}_{{P}}+{{\mu }}_{{R}})}{{({{\sigma }}_{{P}}^{2}+{{\sigma }}_{{R}}^{2})}^{1/2}},$$
where μ and σ denote the mean values and standard deviations for proteins P and R, respectively, and F(P, R) is a functional similarity measure between P and R based on a combination of any supported SS and MS. From the definitions above it follows that the mean values and standard deviations depend on the ontology, the organism, and the functional similarity measure, but for simplicity these parameters are omitted from the formula.
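The procedure can be sketched in a few lines of Python (a simplified illustration, not the precalculation code of the web server; fs stands for any FS measure F, and partners is the list of annotated proteins of the second organism):

```python
import math
import random

def background_params(protein, partners, fs, sample_size=1000):
    """Mean and standard deviation of FS scores of `protein` against a random
    sample (without replacement) of annotated proteins from the other organism."""
    sample = random.sample(partners, min(sample_size, len(partners)))
    scores = [fs(protein, r) for r in sample]
    mu = sum(scores) / len(scores)
    sigma = math.sqrt(sum((x - mu) ** 2 for x in scores) / len(scores))
    return mu, sigma

def z_score(f_pr, mu_p, sigma_p, mu_r, sigma_r):
    """z(P,R) = (2*F(P,R) - (mu_P + mu_R)) / sqrt(sigma_P^2 + sigma_R^2)."""
    return (2.0 * f_pr - (mu_p + mu_r)) / math.sqrt(sigma_p ** 2 + sigma_r ** 2)
```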
An alternative approach for scoring the protein pair (P, R) is provided by a modified z-score54, which utilizes medians instead of means and may be considered for distributions deviating heavily from a Gaussian. We have reviewed its applicability, and for most of our functional similarity measures we find only small differences between means and medians in representative sets of randomly paired proteins. We therefore retain the more traditional definition of z-scores described above. Further discussion of modified z-scores is provided in Section "Modified z-Score" in the Supplementary Material.
Annotation Corpora
We investigate the influence of various compositions of annotation corpora on FS scores (hereafter briefly called scores, or raw scores when a distinction from z-scores is needed). First, since there is much discussion about the impact of automated annotations, we utilize datasets that include data inferred from electronic annotations (IEA(+)) and datasets that do not (IEA(−)). The latter are obtained by omitting annotations with the GO evidence code IEA. Simultaneously, we examine the effects of incorporating annotation data from organisms other than those being compared. Whenever we compare proteins from our set of target organisms (human, fly, mouse), we distinguish between scores calculated with all available annotations in GO (ALL corpus) and scores that are based only on annotations made to the organisms of the proteins under investigation (AOO corpus).
GO delivers annotations to genes and gene products; however, organisms are annotated with different gene identifier systems. In our case, mouse and fly gene identifiers are internally mapped to UniProt accession codes through the respective mappings retrieved from Ensembl 82 Biomart, which was released in September 2015, matching the release date of the GO database we use. Human annotations do not undergo identifier mapping, as GO already delivers them using UniProt accession codes.
We use orthology relationships as a means to evaluate a measure's ability to discriminate between functionally related and unrelated pairs of proteins: orthologous genes should have similar functions and therefore we expect them to have higher scores than randomly selected pairs of genes. In particular, we use Ensembl 82 Biomart to retrieve one to one orthology relationships between human/mouse, human/fly, and mouse/fly organism pairs of protein coding genes55. In order to remove trivial cases, we exclude orthologous pairs with 80% sequence identity or higher (see Table 2). The list of orthologous pairs serves as cases and an equally sized matched control set is constructed by randomly replacing the orthologous gene with another gene from the same organism (without replacement and without fixed points). Given a score threshold h, we assign true and false positives (TP and FP), and true and false negatives (TN and FN) according to standard nomenclature56. We then determine the optimal threshold h *, for which the error rate is minimized. The error rate (or fraction of incorrect) is the number of incorrectly assigned pairs divided by the number of all pairs, (FP + FN)/(TP + FP + TN + FN). Since this error rate calculation involves randomly drawn controls, the outcome will differ from one random control set to another. We therefore repeat optimal threshold determination and its associated error rate computation on 225 sets of control proteins and compute 99% confidence intervals (CI) of the mean for the so-obtained 225 error rates of both raw scores and z-scores. In this manuscript, we therefore always report mean error rates of 225 individual error rates, called "error rate" for brevity. Accordingly, reported thresholds, especially those shown in the Frela web interface, refer to an average over 225 optimal thresholds h *. We refer to Supplementary Data File S1 and Supplementary Fig. S7 for a comprehensive list of optimal thresholds, error rates, and confidence intervals for all three orthology pair sets, ontologies and FS measures.
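The optimal-threshold error rate used throughout this benchmark can be computed as in the following sketch (a straightforward brute-force scan over candidate thresholds, not the original analysis script):

```python
import numpy as np

def optimal_error_rate(case_scores, control_scores):
    """Minimal fraction of misclassified pairs over all score thresholds h.
    Pairs scoring >= h are called positive (predicted orthologous)."""
    cases = np.asarray(case_scores, dtype=float)
    controls = np.asarray(control_scores, dtype=float)
    n = len(cases) + len(controls)
    best_rate, best_h = 1.0, None
    for h in np.unique(np.concatenate([cases, controls])):
        fn = np.sum(cases < h)        # orthologues falling below the threshold
        fp = np.sum(controls >= h)    # random pairs exceeding the threshold
        rate = (fp + fn) / n
        if rate < best_rate:
            best_rate, best_h = rate, h
    return best_h, best_rate
```

Repeating this computation over the 225 random control sets and averaging the resulting error rates and thresholds reproduces the mean error rates and mean optimal thresholds reported above.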
We specifically test whether there are SS/MS measures that perform significantly better (or worse) when using the AOO annotation corpus instead of the ALL corpus. For an arbitrary fixed ontology and SS/MS measure, we perform a Wilcoxon signed-rank test that compares, between the ALL and AOO corpora (with IEA evidence code annotations), the individual z-score based error rates of each of the 225 case/random control datasets, using cases from all three orthology relationships we investigate: human/mouse, human/fly, and mouse/fly. In order to detect whether the ALL or the AOO corpus results in better error rates, we perform one-sided tests. A test is considered significant at the level of α = 0.01 after Bonferroni adjustment for multiple hypothesis testing of 30 measures (six SS measures times five mixing strategies).
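A minimal sketch of this comparison for a single SS/MS combination, using SciPy's implementation of the Wilcoxon signed-rank test (variable names are ours; err_all and err_aoo are the paired per-dataset error rates):

```python
from scipy.stats import wilcoxon

def corpus_comparison(err_all, err_aoo, n_measures=30, alpha=0.01):
    """One-sided Wilcoxon signed-rank test on paired error rates.

    err_all, err_aoo: error rates of the same 225 case/control datasets computed
    with the ALL and AOO annotation corpora, respectively. alternative='less'
    asks whether the ALL corpus yields systematically smaller error rates.
    """
    stat, p = wilcoxon(err_all, err_aoo, alternative="less")
    return p, p < alpha / n_measures   # Bonferroni-adjusted significance

# Usage (hypothetical data): p_value, significant = corpus_comparison(all_rates, aoo_rates)
```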
Our software extends the Dintor framework57 for functional similarity analysis and is implemented in the Python programming language. We make use of a client/server architecture, where the computation server is decoupled from the web server. This specifically allows employing a locally installed server that is queried by users from different hosts with computationally inexpensive client software. On a 2.3 GHz AMD Opteron processor with 32GB of RAM, Frela computes BP protein functional similarity scores of 10,000 random protein pairs from the human organism in about 10 seconds and scales linearly with the number of protein pairs compared. Time doubles when utilizing the simGIC SS measure due to more complex graph algorithms used in score calculation (for a more complete set of timings, see Supplementary Table S1). The web front end is served through Apache and supports various calculation modes with an emphasis on visualization of functional similarity score derivation. The package can be downloaded from our web server, http://frela.eurac.edu.
Needleman, S. B. & Wunsch, C. D. A general method applicable to the search for similarities in the amino acid sequence of two proteins. J. Mol. Biol. 48, 443–453 (1970).
Pesquita, C. Semantic Similarity in the Gene Ontology. Methods Mol Biol 1446, 161–173, doi:10.1007/978-1-4939-3743-1_12 (2017).
Ashburner, M. et al. Gene ontology: tool for the unification of biology. The Gene Ontology Consortium. Nat Genet 25, 25–29 (2000).
Lord, P. W., Stevens, R. D., Brass, A. & Goble, C. A. Investigating semantic similarity measures across the Gene Ontology: the relationship between sequence and annotation. Bioinformatics 19, 1275–1283 (2003).
Mazandu, G. K., Chimusa, E. R., Mbiyavanga, M. & Mulder, N. J. A-DaGO-Fun: an adaptable Gene Ontology semantic similarity-based functional analysis tool. Bioinformatics 32, 477–479 (2016).
Gan, M., Dou, X. & Jiang, R. From ontology to semantic similarity: calculation of ontology-based semantic similarity. ScientificWorldJournal 2013, 793091 (2013).
Guzzi, P. H., Mina, M., Guerra, C. & Cannataro, M. Semantic similarity analysis of protein data: assessment with biological features and issues. Brief Bioinform 13, 569–585 (2012).
Pesquita, C., Faria, D., Falcao, A. O., Lord, P. & Couto, F. M. Semantic similarity in biomedical ontologies. PLoS Comput Biol 5, e1000443, doi:10.1371/journal.pcbi.1000443 (2009).
Pesquita, C. et al. Metrics for GO based protein semantic similarity: a systematic evaluation. BMC Bioinformatics 9 Suppl 5, S4 (2008).
Vafaee, F., Rosu, D., Broackes-Carter, F. & Jurisica, I. Novel semantic similarity measure improves an integrative approach to predicting gene functional associations. BMC Syst Biol 7, 22 (2013).
Montanez, G. & Cho, Y.-R. Predicting False Positives of Protein-Protein Interaction Data by Semantic Similarity Measures. Current Bioinformatics 8, 339–346 (2013).
Jaromerska, S., Praus, P. & Cho, Y.-R. Distance-wise pathway discovery from protein-protein interaction networks weighted by semantic similarity. J Bioinform Comput Biol 12, 1450004 (2014).
Yang, Y., Xu, Z. & Song, D. Missing value imputation for microRNA expression data by using a GO-based similarity measure. BMC Bioinformatics 17 Suppl 1, 10 (2016).
Jiang, R., Gan, M. & He, P. Constructing a gene semantic similarity network for the inference of disease genes. BMC Syst Biol 5 Suppl 2, S2 (2011).
Radivojac, P. et al. A large-scale evaluation of computational protein function prediction. Nat Methods 10, 221–227 (2013).
Pesquita, C., Pessoa, D., Faria, D. & Couto, F. CESSM: Collaborative Evaluation of Semantic Similarity Measures (2009).
Rhee, S. Y., Wood, V., Dolinski, K. & Draghici, S. Use and misuse of the gene ontology annotations. Nat Rev Genet 9, 509–515 (2008).
Gillis, J. & Pavlidis, P. Assessing identity, redundancy and confounds in Gene Ontology annotations over time. Bioinformatics 29, 476–482 (2013).
Wang, J., Zhou, X., Zhu, J., Zhou, C. & Guo, Z. Revealing and avoiding bias in semantic similarity scores for protein pairs. BMC Bioinformatics 11, 290 (2010).
Skunca, N., Altenhoff, A. & Dessimoz, C. Quality of computationally inferred gene ontology annotations. PLoS Comput Biol 8, e1002533, doi:10.1371/journal.pcbi.1002533 (2012).
Wang, H., Azuaje, F., Bodenreider, O. & Dopazo, J. Gene Expression Correlation and Gene Ontology-Based Similarity: An Assessment of Quantitative Relationships. Proc IEEE Symp Comput Intell Bioinforma Comput Biol 2004, 25–31 (2004).
Chicco, D. & Masseroli, M. Software Suite for Gene and Protein Annotation Prediction and Similarity Search. IEEE/ACM Trans Comput Biol Bioinform 12, 837–843 (2015).
Song, X., Li, L., Srimani, P. K., Yu, P. S. & Wang, J. Z. Measure the Semantic Similarity of GO Terms Using Aggregate Information Content. IEEE/ACM Trans Comput Biol Bioinform 11, 468–476 (2014).
Teng, Z. et al. Measuring gene functional similarity based on group-wise comparison of GO terms. Bioinformatics 29, 1424–1432 (2013).
Xu, Y., Guo, M., Shi, W., Liu, X. & Wang, C. A novel insight into Gene Ontology semantic similarity. Genomics 101, 368–375 (2013).
Peng, J. et al. Measuring semantic similarities by combining gene ontology annotations and gene co-function networks. BMC Bioinformatics 16, 44 (2015).
Wu, X., Pang, E., Lin, K. & Pei, Z.-M. Improving the measurement of semantic similarity between gene ontology terms and gene products: insights from an edge- and IC-based hybrid method. PLoS One 8, e66745 (2013).
Zhang, S.-B. & Lai, J.-H. Semantic similarity measurement between gene ontology terms based on exclusively inherited shared information. Gene 558, 108–117 (2015).
Zhang, S.-B. & Lai, J.-H. Exploring information from the topology beneath the Gene Ontology terms to improve semantic similarity measures. Gene 586, 148-157 (2016).
Caniza, H. et al. GOssTo: a stand-alone application and a web tool for calculating semantic similarities on the Gene Ontology. Bioinformatics 30, 2235–2236 (2014).
Mazandu, G. K. & Mulder, N. J. DaGO-Fun: tool for Gene Ontology-based functional analysis using term information content measures. BMC Bioinformatics 14, 284 (2013).
Jain, S. & Bader, G. D. An improved method for scoring protein-protein interactions using semantic similarity within the gene ontology. BMC Bioinformatics 11, 562 (2010).
Jiang, J. J. & Conrath, D. W. Semantic Similarity Based on Corpus Statistics and Lexical Taxonomy 19–33 (1997).
Couto, F. M., Silva, M. J. & Coutinho, P. M. Measuring semantic similarity between Gene Ontology terms. Data & Knowledge Engineering 61, 137–152, doi:10.1016/j.datak.2006.05.003 (2007).
Mazandu, G. K. & Mulder, N. J. Information content-based gene ontology semantic similarity approaches: toward a unified framework theory. Biomed Res Int 2013, 292063 (2013).
Sevilla, J. L. et al. Correlation between gene expression and GO semantic similarity. IEEE/ACM Trans Comput Biol Bioinform 2, 330–338, doi:10.1109/TCBB.2005.50 (2005).
Azuaje, F., Wang, H. & Bodenreider, O. In Proceedings of the ISMB'2005 SIG meeting on Bio-ontologies 9–10 (2005).
Wang, J. Z., Du, Z., Payattakool, R., Yu, P. S. & Chen, C.-F. A new method to measure the semantic similarity of GO terms. Bioinformatics 23, 1274–1281 (2007).
Schlicker, A., Domingues, F. S., Rahnenführer, J. & Lengauer, T. A new measure for functional similarity of gene products based on Gene Ontology. BMC Bioinformatics 7, 302 (2006).
Chen, X. et al. A sensitive method for computing GO-based functional similarities among genes with 'shallow annotation'. Gene 509, 131–135 (2012).
Schulz, M. H., Kohler, S., Bauer, S. & Robinson, P. N. Exact score distribution computation for ontological similarity searches. BMC Bioinformatics 12, 441, doi:10.1186/1471-2105-12-441 (2011).
Schnoes, A. M., Ream, D. C., Thorman, A. W., Babbitt, P. C. & Friedberg, I. Biases in the experimental annotations of protein function and their effect on our understanding of protein function space. PLoS Comput Biol 9, e1003063 (2013).
Konopka, B. M., Golda, T. & Kotulska, M. Evaluating the significance of protein functional similarity based on gene ontology. J Comput Biol 21, 809–822 (2014).
du Plessis, L., Skunca, N. & Dessimoz, C. The what, where, how and why of gene ontology–a primer for bioinformaticians. Brief Bioinform 12, 723–735, doi:10.1093/bib/bbr002 (2011).
Jones, C. E., Brown, A. L. & Baumann, U. Estimating the annotation error rate of curated GO database sequence annotations. BMC Bioinformatics 8, 170, doi:10.1186/1471-2105-8-170 (2007).
Altenhoff, A. M., Studer, R. A., Robinson-Rechavi, M. & Dessimoz, C. Resolving the ortholog conjecture: orthologs tend to be weakly, but significantly, more similar in function than paralogs. PLoS Comput Biol 8, e1002514 (2012).
Rogers, M. F. & Ben-Hur, A. The use of gene ontology evidence codes in preventing classifier assessment bias. Bioinformatics 25, 1173–1177 (2009).
Schlicker, A., Rahnenführer, J., Albrecht, M., Lengauer, T. & Domingues, F. S. GOTax: investigating biological processes and biochemical activities along the taxonomic tree. Genome Biol 8, R33 (2007).
Thomas, P. D. The Gene Ontology and the Meaning of Biological Function. Methods Mol Biol 1446, 15–24, doi:10.1007/978-1-4939-3743-1_2 (2017).
Wang, J.-Y. et al. The RIT2 and STX1B polymorphisms are associated with Parkinson's disease. Parkinsonism Relat Disord 21, 300–302 (2015).
Resnik, P. Using Information Content to Evaluate Semantic Similarity in a Taxonomy 448–453 (1995).
Lin, D. An Information-Theoretic Definition of Similarity 296–304 (1998).
Li, B., Wang, J. Z., Feltus, F. A., Zhou, J. & Luo, F. Effectively integrating information content and structural relationship to improve the GO-based similarity measure between proteins. ArXiv e-prints (2010).
Iglewicz, B. & Hoaglin, D. C. How to Detect and Handle Outliers (1993).
Vilella, A. J. et al. EnsemblCompara GeneTrees: Complete, duplication-aware phylogenetic trees in vertebrates. Genome Res 19, 327–335, doi:10.1101/gr.073585.107 (2009).
Baldi, P., Brunak, S., Chauvin, Y., Andersen, C. A. & Nielsen, H. Assessing the accuracy of prediction algorithms for classification: an overview. Bioinformatics 16, 412–424 (2000).
Weichenberger, C. X. et al. Dintor: functional annotation of genomic and proteomic data. BMC Genomics 16, 1081, doi:10.1186/s12864-015-2279-5 (2015).
The authors thank Daniele Di Domizio for support in setting up the Frela web server and Johannes Martin for consultancy in client/server web programming. The research was funded by the Department of Innovation, Research, Development and Cooperatives of the Autonomous Province of Bolzano-South Tyrol.
Center for Biomedicine, European Academy of Bozen/Bolzano (EURAC), (Affiliated to the University of Lübeck, Lübeck, Germany), Viale Druso 1, 39100, Bolzano, Italy
Christian X. Weichenberger, Antonia Palermo, Peter P. Pramstaller & Francisco S. Domingues
C.X.W. conceived of this study, performed experiments, analysed data, implemented the calculation software backend and wrote the manuscript. A.P. implemented the web server. P.P.P. supervised the work and reviewed the manuscript. F.S.D. conceived of this study, and assisted in the analysis and in writing the manuscript. All authors read and approved the final manuscript.
Correspondence to Christian X. Weichenberger.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Data File S1
This work is licensed under a Creative Commons Attribution 4.0 International License. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in the credit line; if the material is not included under the Creative Commons license, users will need to obtain permission from the license holder to reproduce the material. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/
Weichenberger, C.X., Palermo, A., Pramstaller, P.P. et al. Exploring Approaches for Detecting Protein Functional Similarity within an Orthology-based Framework. Sci Rep 7, 381 (2017). https://doi.org/10.1038/s41598-017-00465-5
https://akjournals.com/search?q=%22copper+complex%22
Search results for "copper complex"
N-containing copper complexes in wheat production
Cereal Research Communications
https://doi.org/10.1556/crc.34.2006.1.170
Authors: P. Szakál, R. Schmidt, M. Barkóczi, R. Kalocsai, D. Beke, and O. Csatai
Szakál, P. Schmidt, R. 1997 Copper fertilization of wheat with copper complex and changes in flour quality. 17. Arbeitstagung Die Bedeutung der Mengen-und Spurelemente. Jena. p. 53–64. Szentpéteri Zs., Jolánkai M
Thermal decomposition of copper complexes of 1-phenyl-3-methyl-4-acyl-5-pyrazolone in air atmosphere
Author: Y. Akama
Copper complexes of some 1-phenyl-3-methyl-4-acyl-5-pyrazolones have been prepared. The complexes were characterized by elemental analysis and thermal analysis. It was shown that the melting points decrease linearly with increasing molecular weight of the complexes.
Thermochemistry of copper complex of 6-benzylaminopurine
Authors: Y. Xu-Wu, Z. Hang-Guo, S. Wu-Juan, W. Xiao-Yan, and G. Sheng-Li
The copper(II) complex of 6-benzylaminopurine (6-BAP) has been prepared from dihydrated cupric chloride and 6-benzylaminopurine. The infrared spectrum and thermal stability of the solid complex are discussed. The constant-volume combustion energy, \(\Delta_{\mathrm{c}}U\), was determined as −12566.92 ± 6.44 kJ mol−1 with a precise rotating-bomb calorimeter at 298.15 K. From this result and other auxiliary quantities, the standard molar enthalpy of combustion, \(\Delta_{\mathrm{c}}H_{\mathrm{m}}^{\theta}\), and the standard molar enthalpy of formation of the complex, \(\Delta_{\mathrm{f}}H_{\mathrm{m}}^{\theta}\), were calculated as −12558.24 ± 6.44 and −842.50 ± 6.47 kJ mol−1, respectively.
Thermal stabilities of nanocomposites: Mono- or binuclear Cu complexes intercalated or immobilised in/on siliceous materials
Nanopages
https://doi.org/10.1556/Nano.2008.00001
Authors: I. Szilágyi, I. Labádi, and I. Pálinkó
Various copper(II) complexes as guests were immobilised among the layers of montmorillonite or on silica gel as hosts. Anchoring took place through hydrogen bonds and ion exchange for montmorillonite, while the forces of interaction were either hydrogen bonding or covalent bonds for the copper complex-silica gel nanohybrids. The thermal stabilities of these substances were studied under oxidizing atmosphere and it was found that anchoring increased the durability of the host-guest complexes relative to the host-free ones.
Cobalt, Nickel and Copper Complexes of Benzylamino-p-chlorophenylglyoxime. Thermal and thermodynamic data
https://doi.org/10.1023/a:1013178431244
Author: H. Arslan
Vic-dioxime ligands and their metal complexes are used in analytical, bio, pigment and medicinal chemistry. Complexes of nickel(II), copper(II), and cobalt(II) with benzylamino-p-chlorophenylglyoxime (BpCPG) were synthesised. The thermal behaviour of these complexes was studied in a dynamic nitrogen atmosphere by DTA, DTG and TG techniques. A combined GC-MS system was used to identify the products formed during pyrolytic decomposition, and the pyrolytic end products were identified by X-ray powder diffraction. Thermoanalytical data for these complexes are presented in this communication. Interpretation and mathematical analysis of these data, and evaluation of the order of reaction and the energy and entropy of activation based on the integral method using the Coats-Redfern equation and the approximation method using the Horowitz-Metzger equation, are also given. The metal complexes undergo decomposition in three stages, and metal oxides remain as the end products of the complexes.
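As an illustration of the Coats-Redfern evaluation mentioned above, the following Python sketch (our own simplified version, assuming a first-order decomposition model with g(α) = −ln(1 − α) and a constant heating rate) estimates the activation energy from TG data by linear regression of ln[g(α)/T²] against 1/T:

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def coats_redfern_energy(T, alpha):
    """Activation energy (J/mol) from temperatures T (K) and conversions alpha,
    using the Coats-Redfern linearization for a first-order decomposition:
    ln[-ln(1 - alpha) / T^2] = const - E / (R * T)."""
    T = np.asarray(T, dtype=float)
    alpha = np.asarray(alpha, dtype=float)
    y = np.log(-np.log(1.0 - alpha) / T**2)
    slope, intercept = np.polyfit(1.0 / T, y, 1)
    return -slope * R

# Hypothetical TG readings (temperature in K, fractional mass loss):
# E = coats_redfern_energy([500, 520, 540, 560], [0.10, 0.25, 0.45, 0.70])
```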
Conjugated oxidation of thiols and amines in the presence of copper complexes
Authors: I. Tarkhanova, M. Gantman, A. Chizhov, and V. Smirnov
The oxidation of aliphatic thiols by air oxygen under mild conditions catalyzed by copper(II) complexes with amines (e.g. benzylamine) has been investigated. It was found that the reaction proceeds as a conjugated oxidation of both amines and thiols. The products of transformation of benzylamine have been identified. Copper(I) dodecanethiolate was synthesized for the first time, and its composition was determined. The thiolate is shown to be an intermediate of the process in the systems which demonstrate low activity. The intermediate in the most active systems is a mixed-ligand copper(II) complex containing both molecules of amine and thiolate anions in the coordination sphere.
Thermal investigation of cobalt, nickel and copper complexes with 8-aminoquinoline
Correlation between thermal stability and crystal field splitting energy
Authors: A. M. Donia and H. A. El-Boraey
Complexes of Co(II), Ni(II) and Cu(II) with 8-aminoquinoline were prepared and characterized, and their thermal behaviour and decomposition pathways were studied. The thermal stabilities are discussed in terms of ionic radii, crystal field splitting energy and steric hindrance. The effective roles of the counter-ions (Cl− and NO3 −) on the decomposition temperatures and the final products were also clarified. The energies of activation (E a) and the orders of some decomposition reactions were determined. Light is shed on the nature of the interaction of the water of crystallization and the polymorphic transformation phenomenon.
Development and validation of gel-chromatographic and spectrophotometric methods for quantitative analysis of bioactive copper complexes in new antihypocupremical formulations
Acta Chromatographica
https://doi.org/10.1556/achrom.22.2010.3.3
Authors: I. Savic, G. Nikolic, I. Savic, and M. Cakic
Gel-permeation chromatographic (GPC) and visible spectrophotometric methods have been developed and validated for quantitative analysis of complexes of copper(II) with the polysaccharides pullulan and dextran, active pharmaceutical compounds in new antihypocupremical formulations. Linearity, precision, accuracy, specificity, and limits of detection (LOD) and quantification (LOQ) were determined in accordance with ICH Q2(R1) guidelines. GPC was performed isocratically with redistilled water as mobile phase at a flow rate of 1 mL min−1. Visible spectrophotometry was performed in water, using 640 nm for direct assay of the copper(II) complex with pullulan and dextran. The calculated F and t values at the 95% confidence level were less than the theoretical values, showing there were no significant differences between the performance of the methods.
Synthesis, crystal structure, and thermodynamics of a high-nitrogen copper complex with N, N-bis-(1(2)H-tetrazol-5-yl) amine
Authors: Bao-Di Xue, Qi Yang, San-Ping Chen, and Sheng-Li Gao
A new high-nitrogen complex [Cu(Hbta)2]·4H2O (H2bta = N,N-bis-(1(2)H-tetrazol-5-yl) amine) was synthesized and characterized by elemental analysis, single-crystal X-ray diffraction, and thermogravimetric analysis. X-ray structural analysis revealed that the crystal is monoclinic, space group P2(1)/c, with lattice parameters a = 14.695(3) Å, b = 6.975(2) Å, c = 18.807(3) Å, β = 126.603(1)°, Z = 4, \(D_c\) = 1.888 g cm−3, and F(000) = 892. The complex exhibits a 3D supramolecular structure built up from 1D zigzag chains. The enthalpy change of the reaction of formation of the complex was determined with an RD496-III microcalorimeter at 25 °C as −47.905 ± 0.021 kJ mol−1. In addition, the thermodynamics of the reaction of formation of the complex was investigated, and the fundamental parameters k, E, n, \(\Delta S_{\ne}^{\theta}\), \(\Delta H_{\ne}^{\theta}\), and \(\Delta G_{\ne}^{\theta}\) were obtained. The effects of the complex on the thermal decomposition behaviour of the main components of solid propellants (HMX and RDX) indicated that the complex performs well with HMX and RDX.
Synthesis, characterization and thermal decomposition of new complexes of p-methyl-, p-trifluoromethyl- and p-bromo-phenylalanine with copper
Author: Y. Q. Jia
lorentzian function excel
The Lorentzian function can be written as

$$L(\nu) = A \frac{\gamma}{(\nu - \nu_{0})^{2} + \gamma^{2}},$$

where \(\nu_0\) is the line position, A is an amplitude parameter, and \(\gamma\) specifies the width. The Lorentzian FWHM (full width at half maximum) calculation is straightforward and can be read off from the equation: the peak value is \(A/\gamma\) at \(\nu = \nu_0\), and setting

$$\frac{1}{2}\frac{A}{\gamma} = A \frac{\gamma}{(\nu - \nu_{0})^{2} + \gamma^{2}}$$

gives \((\nu - \nu_0)^2 = \gamma^2\), so the full width at half maximum is \(2\gamma\). To find the area under a Lorentzian (i.e. to integrate from \(-\infty\) to \(+\infty\)), substitute \(\nu - \nu_0 = \gamma\tan\theta\), which turns the integral into one over \(\theta\) from \(-\pi/2\) to \(+\pi/2\) and yields an area of \(\pi A\). The Lorentzian can also be used as an apodization function, and its Fourier transform is a decaying exponential. The same profile appears in statistics as the Cauchy distribution, whose probability density function is \( f(x) = \frac{1}{s\pi(1 + ((x - t)/s)^{2})} \), where t is the location parameter and s is the scale parameter; the case t = 0 and s = 1 is called the standard Cauchy distribution. The Lorentzian is also a well-used peak function in diffraction, with the form \(I(2\theta) = \frac{w^2}{w^2 + (2\theta - 2\theta_0)^2}\), where w is equal to half of the peak width (w = 0.5 H). In practice a Lorentzian is fitted to measured data with a peak-fitting routine, for example IGOR's built-in form y0 + A/[(x − x0)^2 + B] (x0 = peak position, y0 = offset), MATLAB's nlinfit or the LORENTZFIT function, or SciPy's curve_fit after defining the model function. Ideal line shapes include Lorentzian, Gaussian, and Voigt functions, whose parameters are the line position, maximum height, and half-width.
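Putting the pieces above together, a minimal SciPy sketch (synthetic data; parameter names are our own) fits the single-peak Lorentzian with curve_fit and reads off the FWHM as 2γ:

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(nu, A, nu0, gamma):
    """Single Lorentzian peak: L(nu) = A*gamma / ((nu - nu0)^2 + gamma^2)."""
    return A * gamma / ((nu - nu0) ** 2 + gamma ** 2)

# Synthetic noisy data around nu0 = 5 with half-width gamma = 0.4.
nu = np.linspace(0, 10, 400)
rng = np.random.default_rng(0)
y = lorentzian(nu, A=2.0, nu0=5.0, gamma=0.4) + rng.normal(0, 0.05, nu.size)

popt, pcov = curve_fit(lorentzian, nu, y, p0=[1.0, 4.5, 1.0])
A_fit, nu0_fit, gamma_fit = popt
fwhm = 2 * abs(gamma_fit)   # full width at half maximum
area = np.pi * A_fit        # analytic area under the fitted curve
```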
A brief review of Taylor's proposal
Magnetic fluctuations in homogeneous streaming turbulence
Velocity dependence
Implication to spacecraft data analysis
Referring to Kolmogorov's turbulence spectrum
On the applicability of Taylor's hypothesis in streaming magnetohydrodynamic turbulence
R. A. Treumann1, 2,
W. Baumjohann3 and
Y. Narita3
Earth, Planets and Space 2019, 71:41
https://doi.org/10.1186/s40623-019-1021-y
Received: 7 November 2018
Accepted: 27 March 2019
We examine the range of applicability of Taylor's hypothesis used in observations of magnetic turbulence in the solar wind. We do not refer to turbulence theory. We simply ask whether, in a turbulent magnetohydrodynamic flow, the observed magnetic frequency spectrum can be interpreted as a mapping of the wavenumber turbulence into the stationary spacecraft frame. In addition to the known restrictions on the angle of propagation with respect to the fluctuation spectrum and the question of the wavenumber dependence of the frequency in turbulence, which we briefly review, we show that another restriction concerns the inclusion or exclusion of turbulent fluctuations in the velocity field. Taylor's hypothesis in application to magnetic turbulence encounters its strongest barriers here. It is applicable to magnetic turbulence only when the turbulent velocity fluctuations can practically be neglected completely against the bulk flow speed. For low flow speeds the transformation becomes rather involved. This account makes no use of the additional scale dependence of the turbulent frequency, viz. the existence of a "turbulent dispersion relation".
MHD turbulence
Taylor's hypothesis
Velocity turbulence
Magnetic fluctuations
Taylor (1938), struggling with stationary turbulence, suggested that its wavenumber spectrum could be directly inferred, if only the turbulence were embedded into a sufficiently fast bulk flow. Stationarity implies that the total time derivative vanishes. With \(\vec {V}=\vec {V}_0+\vec {U}_0+\delta \vec {V}\) the total velocity, \(\vec {V}_0\) bulk and \(\vec {U}_0\) mean large-scale (turbulent mechanical energy-carrying) eddy velocities one trivially has
$$\begin{aligned} \frac{{\mathrm{d}}\delta \vec {V}}{{\mathrm{d}}t} = \frac{\partial \delta \vec {V}}{\partial t}+\vec {V}\cdot \nabla \delta \vec {V}=0 \end{aligned}$$
for the turbulent fluctuations \(\delta \vec {V}\) of \(\vec {V}\). Neglecting the nonlinearity in the convective term then, for any observer at rest at location \(\vec {x}_s=\vec {x}\pm \vec {V}t\), the flow maps the original turbulent wavenumber \(\vec {k}\)-spectrum onto an easily detectable stationary observer's (spacecraft) frequency \(\omega _s\)-spectrum
$$\begin{aligned} \delta \vec {V}(\vec {x}_s,t) & = {} \frac{1}{(2\pi )^4}\int {\mathrm{d}}\omega _k\,{\mathrm{d}}\vec {k}\,\delta \vec {V}_{\omega _k\vec {k}}\exp \left[ -i\omega _kt + i\vec {k}\cdot \vec {x}\right] \nonumber \\ \omega _s & = {} \omega _k\pm \vec {k}\cdot \vec {V} \end{aligned}$$
where \(\omega _k\) is the possible internal frequency of turbulence, the turbulent "dispersion relation".1 Assuming that the latter is negligible \(\omega _k\ll |\vec {k} \cdot \vec {V}|\) compared with the total speed of the flow, observation of the frequency spectrum then apparently directly reproduces the wavenumber spectrum of velocity turbulence.
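As a rough orientation, this smallness condition can be checked with a few lines of Python; the flow speed, the proxy internal phase speed, and the fiducial scale below are assumed typical fast-wind values, not numbers taken from this paper.

```python
import numpy as np

# Minimal numeric sketch (assumed values): check omega_k << |k . V| for a
# typical fast solar-wind interval, taking omega_k ~ k * V_A as a crude proxy
# for the internal turbulent frequency.
V0 = 600.0                 # bulk flow speed, km/s (assumed)
V_A = 50.0                 # Alfven speed as proxy phase speed, km/s (assumed)
theta = np.deg2rad(30.0)   # assumed angle between k and the flow

k = 2 * np.pi / 1.0e4      # wavenumber of a 10^4 km scale, rad/km
omega_k = k * V_A          # crude internal frequency estimate
omega_flow = k * V0 * np.cos(theta)   # Doppler term |k . V|

print(f"omega_k / (k.V) = {omega_k / omega_flow:.2f}")
# ~0.10 under these assumptions, so the frequency spectrum maps onto the
# wavenumber spectrum with only a ~10% internal-frequency correction.
```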
In spite of its appealing simplicity and the just mentioned logical assumptions, Taylor's hypothesis (as it is commonly called) has a number of critical implications when applied to non-mechanical fluctuations and turbulence such as magnetic power spectral densities. These are frequently overlooked or entirely ignored, which makes it sensible to check the validity and applicability of Taylor's hypothesis in these cases. These implications concern the following:
Though probably not the most important in fast flows, Taylor's hypothesis applies in this form only to time intervals when the turbulence can indeed be considered stationary.
It ignores any nonlinearity in the stationarity condition, which can, however, be justified again as a reasonable approximation to sufficiently fast flows and sufficiently small fluctuation amplitudes.
More crucially, the exponential in the Fourier representation of the velocity fluctuations depends itself on the velocity fluctuations, as is obvious from the Galilei-transformed observer's frequency. This means that it contains self-interactions of the fluctuations. These, for the turbulent velocity field, can be neglected or summed up into the large-scale energy-carrying mean eddy velocity \(\vec {U}_0\) (Tennekes 1975). Their neglect in this place for the magnetic fluctuations \(\delta \vec {B}\) must, however, be questioned for two reasons: it sensitively affects the fluctuation phase, and in addition it causes correlation between magnetic and velocity fluctuations because of their different scales. Consistency requires that the argument of the exponential in the magnetic fluctuation field be expanded up to second order in \(\delta \vec {V}\). We will account for this effect below.
The transformation is, within these limitations, justified well for the turbulent velocity fluctuations \(\delta \vec {V}\). Through the continuity equation, it can also be justified for the fluctuations of density \(\delta N\) in compressible turbulence and through that, under additional assumptions on the equation of state, also for the fluctuations \(\delta T\) of temperature.
Any straightforward application to \(\delta \vec {B}\), the turbulent magnetic field fluctuations (see, e.g. Roberts et al. 2014), can be defended only under rather severe restrictions, as will be demonstrated below. This last point requires some explanation. In application to the electromagnetic field we note the following:
It is common knowledge that the electromagnetic field in moving media is not Galilei invariant. It is Lorentz invariant for all flows, whether relativistic or nonrelativistic.
This general remark on invariance seems to disqualify (though maybe not expressed as rudely as by Saint-Jacques and Baldwin 2000) any application of Taylor's hypothesis in its nonrelativistic version to electromagnetic turbulence2 and in particular3 to magnetohydrodynamic (MHD) turbulence. However, the correct relativistic approach in ideal MHD just introduces the induction electric field, which can be considered an additional constraint to be satisfied afterwards when accounting for the turbulent spectrum of the electric field. This allows the separate consideration of \(\delta \vec {B}\) as a functional of \(\delta \vec {V}\) in ideal MHD.
The electromagnetic field in classical physics never becomes turbulent by itself. Without any exception, the electromagnetic field is secondary to turbulence, being the consequence of the formation of turbulent vortices, electric currents, and possibly even weak large-scale charge-separation fields in a conducting turbulent medium (plasma, conducting fluid \(\dots\)) as a consequence of the turbulent velocity field, as well as density and temperature gradients, i.e. inhomogeneities. Turbulence is basically mechanical. The electromagnetic field reacts passively to it. Transformation of the turbulent velocity field into the observer's (spacecraft) frame according to Taylor–Galilei can, within weak assumptions, stand up. Its effect on the electromagnetic field is by no means straightforward to account for by transformation from the turbulent source into a moving frame.
Applying Taylor's hypothesis to magnetic fluctuations in MHD turbulence is justified by the wish to infer the otherwise difficult-to-access power density spectrum as a function of wavenumber \(\vec {k}\) by interpreting the frequency spectrum as the Galilei-transformed wavenumber spectrum.
Naively speaking, for solar wind studies one needs to be in the high-speed MHD regime for Taylor's hypothesis to be applicable; in MHD turbulence the characteristic speed is the Alfvén speed, \(V_A\). If \(V_A \ll V_{{\mathrm {sw}}}\) (much smaller than the flow speed in the solar wind) and the IMF direction is not too far from radial, then one "usually" accepts the use of Taylor's hypothesis. If one is in a regime where \(V_A\) is no longer small compared with the flow speed, which sometimes happens (e.g. the period when the solar wind "disappeared"), one has to be careful. If one is not in the MHD regime and the phase speed of the ambient fluctuations exceeds \(V_A\) (e.g. whistler turbulence), the Taylor hypothesis needs to be considered carefully. Also, if the spacecraft speed is high, as happened with the Helios data (Goldstein et al. 1986) and will almost certainly happen with the Parker Solar Probe Mission and the Solar Orbiter Mission, one again needs to examine the use of the Taylor hypothesis carefully. Furthermore, one often used technique for ascertaining the degree to which the Taylor hypothesis can introduce errors is to use multiple spacecraft to quantitatively measure the effects of the propagating fluctuations (Matthaeus et al. 2010). Still, it is worth noting that Taylor's hypothesis may break down even under the condition of supersonic or super-Alfvénic flow when the mapping quality from the frequencies onto the (streamwise) wavenumbers is degraded due to various effects, for example, (1) the random sweeping effect by the flow velocity variation (Wilczek and Narita 2012), (2) counter-propagating Alfvén waves contributing as an effective sweeping (Narita 2017), and (3) formation of phase coherence or solitons (Nariyuki and Hada 2006).
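These qualitative criteria can be condensed into a rough screening helper. The following Python sketch merely encodes the reasoning of the preceding paragraph; the function name and the threshold ratios (0.1 and 0.5) are illustrative assumptions, not values given in the text.

```python
def taylor_hypothesis_caution(V_sw, V_A, v_phase=0.0, V_sc=0.0):
    """Rough screening of Taylor's hypothesis applicability (speeds in km/s).

    Compares the solar-wind flow speed with the Alfven speed, any faster
    ambient phase speed (e.g. whistlers), and the spacecraft speed. The
    thresholds below are illustrative assumptions, not established limits.
    """
    ratio = max(V_A, v_phase, V_sc) / V_sw
    if ratio < 0.1:
        return "usually acceptable", ratio
    if ratio < 0.5:
        return "use with caution", ratio
    return "questionable", ratio


# Nominal fast wind vs a low-density ("disappearing wind") interval:
print(taylor_hypothesis_caution(V_sw=650.0, V_A=50.0))   # usually acceptable
print(taylor_hypothesis_caution(V_sw=350.0, V_A=300.0))  # questionable
```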
In the following we discuss the validity of Taylor's hypothesis for the turbulent fluctuation of velocity \(\delta \vec {V}\) and magnetic field \(\delta \vec {B}\). We show that for the former its application is justified, while for the latter its application is subject to severe restrictions.
We do not refer to any turbulence theory or model equations except when taking them as an input. More is not required for the limited purpose of this note; it would introduce further unnecessary complications. Taylor's hypothesis is not intrinsic to turbulence theory. Its validity (or invalidity) can be demonstrated independently of any theory of turbulence by taking an observer's point of view and asking in which way turbulent fluctuations transform into the observer's frame, such that from observation of fluctuations in frequency space one may infer the turbulent wavenumber spectrum.
This is a simple practical question. What is meant here is, in the first place, the spectrum of fluctuations. It is not the power density spectrum. In order to, in a second step, calculate the power density spectrum, which is central to turbulence theory, one must know how the fluctuations themselves transform.
After having clarified this question for the velocity fluctuations, we then ask for their effect on any related magnetic field fluctuations. We show that the modification of the magnetic fluctuation spectrum by application of the Taylor hypothesis runs into complications. Only under strict and severe assumptions, Taylor–Galilei transformation of the magnetic field makes some approximate sense but requires caution in the interpretation of the results.
In recent years sophisticated measurements of solar wind turbulence (cf., e.g. Goldstein et al. 1995; Podesta 2010; Tu and Marsch 1995; Zhou et al. 2004, for reviews) have advanced our knowledge about the evolution of turbulence in a highly conducting (i.e. collisionless) magnetised ideal plasma, in this case the fast streaming solar wind as a paradigm of fast streaming magnetised stellar winds. Since the latter are barely accessible (and probably, on the timescale of observations and spatial scale of expansion, by no means ideally conducting and collisionless), such measurements are important for understanding their dynamics, evolution of turbulence, its contribution to dissipation, entropy generation, and possibly even generation of observable thermal and nonthermal radiation. In other fields like fluid mechanics, hydrology, and meteorology, which all deal with turbulence, information obtained comparably easily from solar wind turbulence is valuable as well.
The fast solar wind stream seen by stationary observers (spacecraft) on multiple scales (Schwartz et al. 2009) is advantageous in that it transports the frozen-in turbulence across the fixed frame. What is usually observed are temporal fluctuations, which can be transformed into frequency spectra. Taylor's hypothesis (Taylor 1938) comes in here for help (Roberts et al. 2014) when one wishes to infer the spectrum in wavenumber space.
This problem has been of interest for roughly two decades, coinciding with spectral observations of solar wind turbulence reaching down into the assumed dissipative range (cf., e.g. Alexandrova et al. 2009; Huang and Sahraoui 2015; Sahraoui et al. 2009, 2013), multi-spacecraft observations being combined to directly measure spatial spectra, observations of turbulent electric fields (Chen et al. 2011) becoming available, and turbulence spectra in both the velocity (Podesta 2009; Podesta et al. 2007) and plasma density (first in situ observations of electron density spectra, already exhibiting all the much later confirmed details, date back to Celnikier et al. 1983) being measured directly (Chen et al. 2012; Šafránková et al. 2013, 2016).
According to Taylor's hypothesis the frequency of change in the velocity fluctuations measured in the spacecraft frame is given by Eq. (2). It holds reasonably well if either \(\omega _k\) is known, or the internal turbulent variations are negligible, \(\omega _k \ll \vec {k} \cdot \vec {V}\), compared with the flow. In the dissipative regime this will not be true anymore. Molecular scales are of no interest here, but dissipation sets in at much longer scales already in the Hall regime (Alexandrova et al. 2009; Narita et al. 2006; Sahraoui et al. 2012) and on the presumable electron gyro-scales (Sahraoui et al. 2009, 2013) where anomalous dissipation takes over as the ultimate sink of turbulent magnetic energy (for a particular argument, see, Treumann and Baumjohann 2017), which is just a fraction of the mechanical energy stored in the turbulence.
The observation that the change in frequency depends on the angle between the turbulent wavenumber and the streaming velocity does not substantially violate Taylor's assumption; it, however, affects the isotropy of the observed turbulence. The angular dependence of Taylor's hypothesis tells the trivial truth that any stationary turbulent eddies which propagate at angles larger than
$$\begin{aligned} \theta _V > \cos ^{-1}(\omega _k/kV) = \cos ^{-1}(1-\omega _s/kV) \end{aligned}$$
remain unaffected by the transformation into the spacecraft system, a boundary which can easily be obtained from observations if the angles are distinguished, thus limiting the reliable wavenumber range. All turbulent fluctuations near this angular boundary become mapped to either zero frequency \(\omega _s\approx 0\) or \(\omega _s\approx 2\omega _{{k}}\), depending on the direction of the wavenumber with respect to the streaming velocity. For \(\omega _k\ne 0\) this causes a frequency-dependent deformation and directional anisotropy of the spectrum of turbulent fluctuations, which poses a problem of its own when it has to be distinguished from other anisotropies.
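To give a feeling for where this angular boundary lies, the short Python sketch below evaluates \(\theta _V=\cos ^{-1}(\omega _k/kV)\) for a few assumed internal phase speeds; all numbers are illustrative, and with \(\omega _k\propto k\) the boundary is independent of k.

```python
import numpy as np

# Illustrative evaluation of the angular boundary theta_V = arccos(omega_k/kV);
# with omega_k ~ k*V_A the boundary reduces to arccos(V_A/V0).
V0 = 600.0                     # flow speed, km/s (assumed)
k = 2 * np.pi / 1.0e4          # rad/km (assumed fiducial scale)
for V_A in (20.0, 50.0, 100.0):            # proxy internal phase speeds, km/s
    omega_k = k * V_A
    theta_bound = np.degrees(np.arccos(omega_k / (k * V0)))
    print(f"V_A = {V_A:5.1f} km/s  ->  boundary angle ~ {theta_bound:.1f} deg")
# Eddies propagating at angles beyond this (near-perpendicular) boundary stay
# effectively unaffected; fluctuations near the boundary pile up at
# omega_s ~ 0 or ~ 2*omega_k in the observed frequency spectrum.
```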
For a system of eddies each of which rotates at some mean eddy velocity \({\bar{v}}_e\) in some direction (peaked at wavenumber \(k_0\)), the frequency is an angular frequency \(\omega _k = \vec {k} \cdot \vec {{\bar{v}}}_e\) with \(\vec {k}\) the eddy wavenumber. Then Eq. (2) becomes
$$\begin{aligned} \omega _s = \vec {k} \cdot \vec {V} + \vec {k} \cdot \vec {v}_e = kV \left( \cos \theta _V + \frac{{\bar{v}}_e}{V}\cos \omega _k t\right) \end{aligned}$$
The time average over the fast eddy rotations yields
$$\begin{aligned} \omega _s = kV \left( \cos \theta _V + {\frac{1}{2}}\frac{{\bar{v}}_e}{V}\right) \end{aligned}$$
implying a velocity-dependent correction factor which, for large eddy speeds and \(0<|\cos \theta _V|<1\), dominates. This transition from flow-dominated to eddy-dominated transformation causes a break in the Taylor-transformed spectrum. Since \(v_e(k)\) depends on wavenumber, the break will be smoothed.
A more subtle observation is that the turbulent frequency \(\omega _k\) may depend on wavenumber \(\vec {k}\). This "turbulent dispersion relation" is primarily unknown and usually neglected under the assumption that the internal phase velocity of turbulence \(\omega _k/k\ll V_0\) is small. It includes quasi-modes, evanescent oscillations, which in turbulence play the role of "virtual" waves,4 not being waves in real space-time. It is unknown whether weak turbulence theory (Yoon 2007) can describe it. One way of accounting for them is assuming that internal spatial transport of small-scale eddies by large-scale eddies at fixed wavenumber \(\vec {k}\) causes spectral Doppler broadening (Fung et al. 1992; Kaneda 1993; Tennekes 1975).
Taylor's hypothesis originally referred to stationary, homogeneous, unmagnetised fluid turbulence. The solar wind, on sufficiently large scales, may be treated as a fluid. It is, however, expanding and thus inhomogeneous and nonstationary, and it is magnetised. This raises the question to what degree Taylor's hypothesis applies to it. As mentioned in the introduction, expansion requires reference to thermodynamics. Thus in applying Taylor's hypothesis one is restricted to local conditions only. In addition, the presence of the magnetic field raises the question whether it also holds in application to magnetic fluctuations.
The magnetic field need not be relativistically transformed when dealing with streaming homogeneous media. This, and its easy observation from spacecraft, is its apparent advantage over the turbulent electric field \(\vec {E}\) which in the observer's frame is given by
$$\begin{aligned} \vec {E}^\prime = \vec {E}-\vec {V} \times \vec {B} \end{aligned}$$
where \(\vec {V}\) is the full velocity vector. This well-known relation (which violates the Galilei transformation) causes severe problems in the observation of magnetic turbulence when applying it to the power spectra of turbulence. Applying Taylor's hypothesis one may consider the turbulent fluctuations \(\delta \vec {B}\) of the magnetic field, expressing them as Fourier transforms in space and time in the observer's (primed) frame
$$\begin{aligned} \delta \vec {B}(t^\prime ,\vec {x}^\prime ) = \frac{1}{16\pi ^4} \int {\mathrm{d}}\omega \,{\mathrm{d}}\vec {k}\,\delta \vec {B}_{\omega \vec {k}} {\mathrm{e}}^{-i\omega t^\prime +i\vec {k} \cdot \vec {x}^\prime } \end{aligned}$$
where \(\delta \vec {B}_{\omega \vec {k}}\) is a function of frequency \(\omega\) and wavenumber \(\vec {k}\) with unknown relation between the two.5 The magnetic field need not be transformed. However, in the exponential of its Fourier transform the time and space coordinates are subject to transformation from the turbulence into the observer's frame. Transforming these following Taylor's prescription, i.e. using the Galilei transform6
$$\begin{aligned} \vec {x} = \vec {x^\prime }-\vec {V}t^\prime , \quad t^\prime =t, \quad V\ll c \end{aligned}$$
the fluctuation becomes
$$\begin{aligned} \delta \vec {B}(t,\vec {x}) = \frac{1}{16\pi ^4}\int {\mathrm{d}}\omega \,{\mathrm{d}}\vec {k}\,\delta \vec {B}_{\omega \vec {k}} {\mathrm{e}}^{-i(\omega + \vec {k}\cdot \vec {V}) t + i\vec {k}\cdot \vec {x}} \end{aligned}$$
This suggests that one just has to shift the frequency by the amount \(\omega _s=\omega +\vec {k}\cdot \vec {V}\), which seems a simple matter, accounting for the perfect transformation of the wavenumber spectrum of fluctuations into the frequency spectrum in the observer's frame. No doubt this holds as long as the full velocity vector \(\vec {V}=\vec {V}_0+\delta \vec {V}\) is known and the above condition on the angle for the full velocity is satisfied, which implies that highly oblique turbulent fluctuations or eddies remain unaffected by Taylor's hypothesis and should be excluded from its application. As long as observations do not strictly distinguish between fluctuations with wavenumbers \(\vec {k}\Vert \vec {V}\) and those with wavenumbers \(\vec {k}\perp \vec {V}\), blind application of Taylor's hypothesis would introduce errors.
These almost trivial conditions might not seem to be severe but have to be respected (or checked) in any straightforward data analysis. In addition, however, there is another more subtle, interesting, and quite complicated condition which to our knowledge has never been discussed. This we treat separately in the next section.
The neglect of any wavenumber dependence in the turbulent frequencies is a weak though not too disturbing approximation in the application of Taylor's hypothesis. A more complicated problem is the dependence on the eddy velocities \(\delta \vec {V}\) which are the building blocks of the turbulence.
The Fourier transform of velocity fluctuations yields
$$\begin{aligned} \delta \vec {V}(t,\vec {x}) = \frac{1}{16\pi ^4}\int {\mathrm{d}}w\,{\mathrm{d}}\vec {\kappa }\,\delta \vec {V}_{w\vec {\kappa }} {\mathrm{e}}^{-i(w + \vec {\kappa }\cdot \vec {V}) t+i\vec {\kappa }\cdot \vec {x}} \end{aligned}$$
where we used \(\vec {\kappa }\) for the wavenumber of the velocity fluctuations and w for their frequency. The full velocity \(\vec {V}=\vec {V}_0+\delta \vec {V}\) appears here in the exponential, which complicates the problem. Assuming that in the exponent the turbulent fluctuations are not overwhelmingly important, we replace \(\vec {V}\rightarrow \vec {V}_0 +\vec {U}_0\), where \(\vec {U}_0\) is the mean velocity of the energy-carrying large vortices in the turbulence which affects the evolution of small-scale eddies. This effect of self-interaction in turbulence remains (Tennekes 1975). We then write
$$\begin{aligned} \delta \vec {V}(t,\vec {x}) = \frac{1}{16\pi ^4}\int {\mathrm{d}}w\,{\mathrm{d}}\vec {\kappa }\,\delta \vec {V}_{w\vec {\kappa }} {\mathrm{e}}^{-i[w(\vec {\kappa }) + \vec {\kappa }\cdot (\vec {V}_0+\vec {U}_0)] t+i\vec {\kappa }\cdot \vec {x}} \end{aligned}$$
for the turbulent fluctuations of the velocity. For the moment we will neglect \(\vec {U}_0\) but will revive it below in the context of Doppler broadening.
The replacement \(\vec {V}\rightarrow \vec {V_0}\) is, however, not allowed in the Fourier expression of the magnetic fluctuations \(\delta \vec {B}\) unless good reasons can be found for its justification. There we have to account for the full speed, the flow plus its turbulent fluctuations, in the exponent
$$\begin{aligned} \delta \vec {B}(t,\vec {x}) = \frac{1}{16\pi ^4}\int {\mathrm{d}}\omega \,{\mathrm{d}}\vec {k}\,\delta \vec {B}_{\omega \vec {k}} {\mathrm{e}}^{-i\{\omega (\vec {k}) + \vec {k} \cdot [\vec {V}_0+\delta \vec {V}(t,\vec {x})]\}t+i\vec {k}\cdot \vec {x}} \end{aligned}$$
The argument that the fluctuations \(\delta \vec {V}\) are small for high-speed flows is not a good one, because even for \(V_0=0\) in the stationary frame of the turbulence the turbulent magnetic field fluctuations are determined by the fluctuations \(\delta \vec {V}\) in the velocity field. This is another expression for the fact that the electromagnetic field never becomes turbulent by itself. Its fluctuations are always the consequence of turbulence in the electromagnetically active medium. This is taken care of by Maxwell's equations and the dynamical material equations. It is thus important to recognise that the effect of the velocity turbulence on the magnetic fluctuations has to be retained in the Fourier amplitudes even in the frame of the turbulence, in the absence of flow.
The physical reason is that the magnetic field fluctuations are long range. They correlate with velocity fluctuations over large distances and thus include a substantial part of the mechanical turbulence spectrum. Since \(\delta \vec {V}\) is a Fourier integral itself, any treatment becomes involved. One way of dealing with this expression is referring to cumulant expansions (see Fox 1976; Kubo 1962, for the theory). This procedure implies taking the logarithm of the exponential with argument \(-i\vec {k} \cdot \delta \vec {V}t\), expanding the exponential in the velocity fluctuation, ensemble averaging over the ensemble of fluctuations assuming \(\langle \delta \vec {V}\rangle =0\), when averaging term by term and rearranging,
$$\begin{aligned} \left\langle \log \exp (-i\vec {k} \cdot \delta \vec {V}t)-1\right\rangle = \sum _{m=1}^\infty \frac{(-1)^m}{(2m)!}{\left\langle (\vec {k} \cdot \delta \vec {V})^{2m}\right\rangle t^{2m}} \end{aligned}$$
This corresponds to a Gaussian distribution of the ensemble of velocity fluctuations with zero mean. Any finite mean \(\vec {U}_0\ne 0\) is added to the mean stream speed. Re-exponentiating yields the cumulant expansion
$$\begin{aligned} \delta \vec {B}(t,\vec {x}) = \frac{1}{16\pi ^4}\int {\mathrm{d}}\omega \,{\mathrm{d}}\vec {k}\,\delta \vec {B}_{\omega \vec {k}} {\mathrm{e}}^{-i[\omega (\vec {k})t -\vec {k}\cdot \vec {x}] -\frac{1}{2}{\left\langle (\vec {k} \cdot \delta \vec {V})^2\right\rangle }t^2+\cdots } \end{aligned}$$
of which only the lowest-order term in the exponent is retained. This term is negative, quadratic in the averaged fluctuations and in time t, implying a turbulent correlation of the mechanical velocity turbulence and the magnetic fluctuations when transported downstream.7 The square of the velocity fluctuations is related to the correlation function of the velocity fluctuations, which is the Fourier transform of
$$\begin{aligned} \delta \vec {V}^2(t^{\prime },\vec {x}^{\prime }) = \frac{1}{(2\pi )^8} \int {\mathrm{d}}w^{\prime }{\mathrm{d}}w^{\prime \prime }\,{\mathrm{d}}\vec {\kappa }^{\prime }{\mathrm{d}}\vec {\kappa }^{\prime \prime } \delta \vec {V}_{w^{\prime }\vec {\kappa }^{\prime }} \delta \vec {V}_{w^{\prime \prime }\vec {\kappa }^{\prime \prime }} {\mathrm{e}}^{-i(w^{\prime }+w^{\prime \prime })t^{\prime }+i(\vec {\kappa }^{\prime }+\vec {\kappa }^{\prime \prime })\cdot \vec {x}^{\prime }} \end{aligned}$$
where time \(t^{\prime }\) and space \(\vec {x}^{\prime }\) refer to the time and space dependencies of the turbulent velocity fluctuations. The ensemble averaged velocity spectrum in this case becomes an average over the primed fluctuation scales
$$\begin{aligned} \langle \delta \vec {V}^2\rangle _{w\vec {\kappa }}(t,\vec {x}) = \frac{1}{(2\pi )^4\Delta T\Delta V} \int {\mathrm{d}}w^{\prime }\,{\mathrm{d}}\vec {\kappa }^{\prime } \delta \vec {V}_{w^{\prime }\vec {\kappa }^{\prime }}(t,\vec {x}) \delta \vec {V}_{w-w^{\prime },\vec {\kappa }-\vec {\kappa }^{\prime }}(t,\vec {x}) \end{aligned}$$
the usual result, with \(\Delta T, \Delta V\) the corresponding time and volume which are averaged over. This enters the exponent in Eq. (14) through its inverse Fourier transform
$$\begin{aligned} \langle \delta \vec {V}^2\rangle (t^{\prime },\vec {x}^{\prime };t,\vec {x}) = \frac{1}{(2\pi )^8\Delta T\Delta V} \int {\mathrm{d}}w\,{\mathrm{d}}\vec {\kappa }{\mathrm{e}}^{-iwt^{\prime }+i\vec {\kappa }\cdot \vec {x}^{\prime }} \int {\mathrm{d}}w^{\prime }\,{\mathrm{d}}\vec {\kappa }^{\prime } \delta \vec {V}_{w^{\prime }\vec {\kappa }^{\prime }}(t,\vec {x}) \delta \vec {V}_{w-w^{\prime },\vec {\kappa }-\vec {\kappa }^{\prime }}(t,\vec {x}) \end{aligned}$$
Note that it may still depend on the observer time and space \(t,\vec {x}\). For stationary turbulence simplification is achieved by \(\delta \vec {V}_{-w,-\kappa }=\delta \vec {V}^*_{w\kappa }\) which allows use of its energy spectrum \({\mathcal {E}}_{\delta \vec {V}}\propto \langle |\delta \vec {V}|^2\rangle _{w^{\prime }\kappa ^{\prime }}\), in which case one has
$$\begin{aligned} \langle \delta \vec {V}^2\rangle (t^{\prime },\vec {x}^{\prime };t,\vec {x}) \propto \int {\mathrm{d}}w\,{\mathrm{d}}\vec {\kappa } {\mathrm{e}}^{-iwt^{\prime }+i\vec {\kappa }\cdot \vec {x}^{\prime }} \int {\mathrm{d}}w^{\prime }\,{\mathrm{d}}\vec {\kappa }^{\prime } {\mathcal {E}}_{\delta \vec {V},w^{\prime }\kappa ^{\prime }}(t,\vec {x}) \end{aligned}$$
These expressions demonstrate the complications introduced when taking into account the effect of mechanical turbulence on the magnetic fluctuations and ultimately on magnetic turbulence.
Complete separation of turbulence from flow implies independence of the primed and unprimed scales. This holds only approximately because the large-scale eddies in the turbulence, which contain most of the turbulent energy, cause transport of the spectrum of small-scale eddies while their own scales approach those of the unprimed flow. They stretch and deform the small-scale eddies. Simulations of pure stationary homogeneous velocity turbulence with \(V_0=0\) (Fung et al. 1992; Kaneda 1993; Yakhot et al. 1989) have demonstrated these Doppler broadening effects. We will briefly refer to them below in the context of inclusion of a model of turbulence. We will, however, not make use of the above general expression for the cumulative spectral density of the velocity turbulence in the exponential in Eq. (14). We rather restrict ourselves to Kolmogorov inertial-range power-law spectra. Indeed, turbulent velocity spectra in the solar wind have been measured (e.g. Podesta et al. 2006, 2007). They exhibit ranges of power laws, thus suggesting a continuous spectrum of eddies and vortices of substantial amplitude and energy content in some limited scale range.
Simplified case
In the "Appendix" we treat the general case. Here let us consider the strongly simplified example, when the turbulent velocity fluctuations are dominated by a narrow wavenumber interval around a dominating turbulent eddy \((w_0,\kappa _0)\).8 Then \(\delta \vec {V}_{w\vec {\kappa }}\propto 16\pi ^4 \delta (w-w_0,\vec {\kappa -\kappa }_0)\). One may think of such a situation as realised in intermittent turbulence for instance in the foreshock (Narita et al. 2006) and the magnetosheath where single mode eddies seem to be continuously present. They arise from the presence of the collisionless bow shock and flow down the magnetosheath not having had time to decay into a broad band of turbulence. In such a case the whole velocity integral reduces to some complex amplitude
$$\begin{aligned} \delta \vec {V}(\vec {x},t) = C\,\delta \vec {V}_{w_0\vec {\kappa }_0} {\mathrm{e}}^{-i[(w_0+\vec {\kappa }_0 \cdot \vec {V}_0)t - \vec {\kappa }_0 \cdot \vec {x}]} \sim C\delta \vec {V}_{w_0\vec {\kappa }_0} \{1-i[(w_0+\vec {\kappa }_0 \cdot \vec {V}_0)t -\vec {\kappa }_0 \cdot \vec {x}]\} \end{aligned}$$
where for simplicity we expanded the exponential to just demonstrate the main effect. With \(\Delta t\) the time of measurement and \(L_\perp\) a transverse dimension of measurement, the factor of proportionality is \(C\equiv V_0(L_\perp \Delta t)^2\). The exponent in the central equality can be lowered by reference to the identity
$$\begin{aligned} {\mathrm{e}}^{-i\psi }= \cos \psi -i\sin \psi \end{aligned}$$
which shows that after integration with respect to w and \(\kappa\) the velocity fluctuation will contribute an imaginary and a real part to the exponential in the magnetic fluctuation. The imaginary contribution just shifts the frequency by another amount (which in common applications of the Taylor hypothesis is ignored). The real part introduces some higher-order time dependence. As we will show, it implies that the velocity fluctuations, when transformed into the observer's frame (an unavoidable step in any observation), cause a kind of dissipation in the magnetic fluctuations and thus a deformation of the measured magnetic fluctuation spectrum
$$\begin{aligned} \delta \vec {B}(t,\vec {x}) = \frac{1}{16\pi ^4} \int {\mathrm{d}}\omega \,{\mathrm{d}}\vec {k}\,\delta \vec {B}_{\omega \vec {k}} {\mathrm{e}}^{-i[\omega + \vec {k} \cdot (\vec {V}_0 + C\delta \vec {V}_{w_0\vec {\kappa }_0} \{1-i[(w_0+\vec {\kappa }_0 \cdot \vec {V}_0)t -\vec {\kappa }_0 \cdot \vec {x}]\}) ]t+i\vec {k} \cdot \vec {x}} \end{aligned}$$
This dissipation is due to the correlations between the different velocity and magnetic scales mentioned above. To be as simple as possible here, instead of lowering the exponent, we expand the exponential as shown on the right in the last expression. This contributes real and complex terms to the exponential argument, second-order terms in the time t and a mixed term \(t\vec {x}\). The linear real contribution modifies the oscillating phase, shifting it out of the zero-order Taylorian \(k_0V_0\) contribution. The imaginary term in \(\delta \vec {V}\) causes the noted dissipation in time, an effect of de-correlating velocity and magnetic field fluctuations and a weak energy loss, which might be visible in the magnetic power spectra as small deviations of the slope from Kolmogorov towards slightly steeper values, as is frequently observed.
To show this, assume that \(\delta \vec {V}_{w_0\vec {\kappa }_0}^{\mathrm{real}}\) is a real amplitude (in reality it is complex). The exponent in the fluctuation integral becomes
$$\begin{aligned} -i\{[\omega +\vec {k} \cdot (\vec {V}_0 + C\,\delta \vec {V}^{\mathrm{real}}_{w_0\vec {\kappa }_0})]t - \vec {k} \cdot \vec {x}\} - C\,[(w_0+\vec {\kappa }_0 \cdot \vec {V}_0)t^2 - \vec {\kappa }_0 \cdot \vec {x}t]\vec {k} \cdot \delta \vec {V}^{\mathrm{real}}_{w_0\vec {\kappa }_0} \end{aligned}$$
Define \(\tau =t-\vec {\kappa }_0 \cdot \vec {x}/w_0^{\prime }\) with \({\mathrm{d}}t={\mathrm{d}}\tau\) and \(w_0^{\prime }=w_0+\vec {\kappa }_0 \cdot \vec {V}_0\). We also have \(\omega ^{\prime }=\omega + \vec {k} \cdot (\vec {V}_0+C\delta \vec {V}^{{\mathrm{real}}}_{w_0\vec {\kappa }_0})\). With these definitions we quadratically complete the exponents. The turbulent magnetic fluctuations become
$$\begin{aligned} \delta \vec {B}(t,\vec {x}) = \frac{1}{16\pi ^4}\int {\mathrm{d}}\omega \,{\mathrm{d}}\vec {k}\, \delta \vec {B}_{\omega \vec {k}} {\mathrm{e}}^{-C\vec {k} \cdot \delta \vec {V}_{w_0\vec {\kappa }_0}^{\mathrm{real}} [w_0^{\prime }\tau ^2+(\vec {\kappa }_0 \cdot \vec {x})^2/4w_0^{\prime }]} {\mathrm{e}}^{-i[\omega ^{\prime } \tau +(\omega ^{\prime }\vec {\kappa }_0/w_0^{\prime }-\vec {k}) \cdot \vec {x}]} \end{aligned}$$
which explicitly exhibits the Gaussian damping or decorrelation in the real exponent. (In the "Appendix" we perform the calculation of the integrals up to the step they can be done analytically.)
The appearance of decorrelation is solely due to the action of the turbulence. It occurs both in time and space and depends on wavenumber k and the turbulent velocity spectrum. It increases with k and cannot be eliminated when forming the power spectrum. Its presence indicates that the mechanical turbulence has some effect on the wavenumber shape of the spectrum when transformed into the observer's stationary frame.
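The decorrelation can be given a time scale by asking when the real exponent becomes of order unity, roughly \(t_c\sim 1/(k\,\delta V_{\mathrm{rms}})\). The Python sketch below evaluates this for a few assumed scales and an assumed rms turbulent speed; the numbers are purely illustrative.

```python
import numpy as np

# Sketch (assumed numbers): decorrelation time t_c ~ 1/(k * dV_rms) implied by
# the Gaussian factor exp(-<(k.dV)^2> t^2 / 2) in the magnetic fluctuation.
dV_rms = 30.0                            # rms turbulent velocity, km/s (assumed)
for scale_km in (1.0e5, 1.0e4, 1.0e3):
    k = 2 * np.pi / scale_km             # rad/km
    t_c = 1.0 / (k * dV_rms)             # seconds (km/s times rad/km gives rad/s)
    print(f"scale {scale_km:8.0f} km  ->  decorrelation time ~ {t_c:7.1f} s")
# Smaller scales (larger k) decorrelate faster, so the effect grows towards the
# short-wavelength end of the measured spectrum and cannot be removed by
# forming the power spectrum.
```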
One is not interested in the fluctuations but in their Fourier spectrum which is obtained by applying the inverse transform.
$$\begin{aligned} \delta \vec {B}_{\varpi \vec {K}} = \int {\mathrm{d}}t\,{\mathrm{d}}\vec {x}\, {\mathrm{e}}^{i\varpi t-i\vec {K} \cdot \vec {x}}\delta \vec {B}(t,\vec {x}) \end{aligned}$$
This yields the replacements for the frequency \(\omega ^{\prime }=\varpi\) and for the wavenumber \(\vec {K}=(\omega ^{\prime }/w_0^{\prime })\vec {\kappa }_0-\vec {k}\) which shows that the measured frequency \(\varpi \rightarrow \vec {k} \cdot (\vec {V}_0+C\delta \vec {V}^{\mathrm{real}}_{w_0\vec {\kappa }_0})\) becomes linearly transformed into the wavenumber space by the sum of the flow and dominant turbulent speeds. This is, apart from the slight modification, the wanted result. The measured wavenumbers, on the other hand, depend on the wavenumber of the turbulence in a more complicated way. They also include the frequency of the dominant eddy.
The general form of the magnetic fluctuations is obtained when using the exponential identity and separating into real and imaginary parts. This is done in the "Appendix". It retains the \(w-\vec {\kappa }\) integration of the turbulent velocities and the full trigonometric functions, of which the former expression retains only the first expansion terms. For all practical purposes such a general expression is intractable because the spectrum of velocity fluctuations is poorly known. Thus one stays with the above shortened expression and its frequency and wavenumber spectrum. Measurements usually refer to power spectral densities instead of Fourier spectra; these are obtained from the above expression in the usual way.
Magnetic power spectrum in the simplest case
Turbulence theory deals with power spectra. It is not interested in the fluctuations themselves. We therefore proceed to a formulation of the Taylor-transformed magnetic power spectrum. It is not too difficult to obtain an integral equation for the power spectrum of the magnetic fluctuations in the observer's spacecraft frame when just restricting to the above simplest case which includes the velocity effect of turbulent eddies. For this the magnetic field fluctuations in the spacecraft frame must be averaged over time and space. Squaring Eq. (20) and taking its Fourier amplitude we obtain the turbulent magnetic energy spectrum
$$\begin{aligned} \left\langle \left| \delta \vec {B}\right| ^2\right\rangle _{\omega \vec {k}}=\, & {} \frac{(TV)^{-1}}{4(2\pi )^6} \int \frac{\delta \vec {B}_{\omega _1\vec {k}_1} \delta \vec {B}_{\omega _2\vec {k}_2}\, {\mathrm{d}}\omega _1{\mathrm{d}}\omega _2 {\mathrm{d}}\vec {k}_1 {\mathrm{d}}\vec {k}_2}{\sqrt{Cw^{\prime }_0 \delta \vec {V}^{\mathrm{real}}_{w_0\kappa _0} \cdot (\vec {k}_1+\vec {k}_2)} \prod _i[ k_{1i}+k_{2i}-k_i-(\omega ^{\prime }_1+\omega ^{\prime }_2)\kappa _{0i}/w^{\prime }_0]}\nonumber \\&\times \exp \left\{ -\frac{(\omega ^{\prime }_1+\omega ^{\prime }_2-\omega )^2 + \sum _i\kappa _{0i}^{-2} [(k_{1i}+k_{2i}-k_i) - (\omega ^{\prime }_1+\omega ^{\prime }_2)\kappa _{0i}/w^{\prime }_0]^2}{4[Cw^{\prime }_0\delta \vec {V}^{\mathrm{real}}_{w_0\kappa _0} \cdot (\vec {k}_1+\vec {k}_2)]} \right\} \end{aligned}$$
where \(\omega ^{\prime }_{1,2}=\omega _{1,2} + \sum _ik_{1,2i}V^{\prime }_{0i}\), with \(\vec {V}^{\prime }_0=\vec {V}_0+C\delta \vec {V}^{{\mathrm{real}}}_{w_0\vec {\kappa }_0}\). Here T and V are the respective time interval and volume over which the time and space averages are taken. These are determined by the experimental conditions and instrumental resolutions.
The triple product in the denominator implies the presence of three singularities in the integral which can be exploited for its simplification. The assumption of absence of singularities in the spectra themselves implies that they are entire functions. For this to hold the turbulence is free of any eigenmodes. Otherwise one deals with intermittency and needs to include their residua. This must only be done if they have been identified in the observations. Unfortunately, their non-identification does not imply their absence as they may be hidden in the overall spectrum of their frequency range but are not resolved.
The singularities are in the three components of wavenumbers or frequencies. To see this we rewrite the general term in the product as
$$\begin{aligned} k_{1i}+k_{2i}-k_i-(\omega ^{\prime }_1+\omega ^{\prime }_2)\kappa _{0i}/w^{\prime }_0 = k^{\prime }_{1i}+k^{\prime }_{2i}-k_i-(\omega _{1}+\omega _{2})\kappa _{0i}/w^{\prime }_0 \end{aligned}$$
with the definition \(\vec {k}^{\prime }_{1,2}\equiv \left( {\mathbf {I}}-\vec {\kappa }_{0}\vec {V}^{\prime }_{0}/w^{\prime }_0\right) \cdot \vec {k}_{1,2}\). This yields the wavenumber volume element
$$\begin{aligned} {\mathrm{d}}\vec {k}^{\prime }_1{\mathrm{d}}\vec {k}^{\prime }_2 = \left( {\mathbf {I}}-\frac{\vec {\kappa }_0\vec {V}^{\prime }_0}{w^{\prime }_0} \right) ^2:{\mathrm{d}}\vec {k}_1{\mathrm{d}}\vec {k}_2 \end{aligned}$$
which must be inverted in order to be able to replace the unprimed volume element. It produces the factor \(\left( {\mathbf {I}}-\vec {\kappa }_0\vec {V}^{\prime }_0/w^{\prime }_0\right) ^{-2}\) under the integral sign, which is the scalar square product of the inverse tensor. Integrating with respect to \(\vec {k}^{\prime }_2\) over the upper complex plane and accounting only for the resonant part, which assumes that the magnetic fluctuation spectra \(\delta \vec {B}_{\omega _2\vec {k}_2}\) have no singularities neither in this plane nor on the real axis, a rather strong restriction by itself (in principle neglecting resonant wave-wave interactions), produces a factor \((i\pi )\) in front of the integral while replacing the resonant product in the denominator by
$$\begin{aligned} \delta \left( k^{\prime }_{2i}+k^{\prime }_{1i}-k_i-\Omega _{1i}-\Omega _{2i}\right) \end{aligned}$$
in the numerator, while leaving in the denominator a product of the two remaining components \(j\ne i\). It is this factor which in the integration with respect to the ith component of the \(\vec {k}^{\prime }_2\) introduces a mixing of indices when eliminating this component of the second wavenumber. In this way the integral becomes the sum of three integrals.
Let us define \(A^{-1}=1-\vec {\kappa }_0 \cdot \vec {V}^{\prime }_0/w^{\prime }_0\). Then, we have at resonance
$$\begin{aligned}&k_{2i} = -k_{1i}+ A[k_i+\kappa _{0i}(\omega _1+\omega _2)/w^{\prime }_0] \nonumber \\&\omega ^{\prime }_1+\omega ^{\prime }_2 = (\omega _1+\omega _2)(1+A\vec {V}^{\prime }_0 \cdot \vec {\kappa }_0/w^{\prime }_0) +A\vec {V}^{\prime }_{0} \cdot \vec {k} \nonumber \\&\delta \vec {V}^{\mathrm{real}}_{w_0\kappa _0} \cdot (\vec {k}_1+\vec {k}_2) = A\delta \vec {V}^{\mathrm{real}}_{w_0\kappa }\cdot [\vec {k} +\vec {\kappa }_{0}(\omega _1+\omega _2)/w^{\prime }_0] \end{aligned}$$
These expressions have to be used in Eq. (22). They enter the denominator under the root and the fraction in the gaussian exponential. Unfortunately their appearance in these places in mixed form containing both frequencies \(\omega _{1,2}\) inhibits any further analytical treatment even in the resonant simplified case. Moreover, the indices of the magnetic fluctuation spectra become affected, changing to
$$\begin{aligned} \delta \vec {B}_{\omega _2\vec {k}_2} \rightarrow \delta \vec {B}_{\omega _2, [\vec {k}-\vec {k}^{\prime }_1+(\omega _1+\omega _2)\vec {\kappa }_0/w^{\prime }_0]} \end{aligned}$$
Similar difficulties arise if the resonances are attributed to one of the frequencies \(\omega _{1,2}\). In this case, defining frequencies \(\Omega _i=w^{\prime }_0k_i/\kappa _{0i}, \vec {k}=\Omega \vec {\kappa }_0/w^{\prime }_0\), the power spectrum of the magnetic energy density is subject to the equation
$$\begin{aligned} \left\langle \left| \delta \vec {B}\right| ^2\right\rangle _{\omega \vec {k}}=\, & {} \frac{i}{(4\pi )^4TV} \sum _i\int \frac{\delta \vec {B}_{\omega _1\vec {k}_1} \delta \vec {B}_{(\omega ^{\prime }_1+\Omega _i-\Omega _{1i}-\Omega _{2i})\vec {k}_2} \,{\mathrm{d}}\omega _1 {\mathrm{d}}\vec {k}_1{\mathrm{d}}\vec {k}_2}{\sqrt{Cw^{\prime }_0\delta \vec {V}^{\mathrm{real}}_{w_0\kappa _0} \cdot (\vec {k}_1+\vec {k}_2)} \prod _{j\ne i}[k_{1j}+k_{2j}-k_j-\omega ^{\prime }_1\kappa _{0j}/w^{\prime }_0]} \nonumber \\&\times \exp \left\{ -\frac{(\omega -\Omega _i+\Omega _{1i}+\Omega _{2i})^2 + \sum _{j\ne i}\kappa _{0j}^{-2}[k_{1j}+k_{2j} - k_j - \omega ^{\prime }_1\kappa _{0j}/w^{\prime }_0]^2}{4[Cw^{\prime }_0\delta \vec {V}^{\mathrm{real}}_{w_0\kappa _0} \cdot (\vec {k}_1+\vec {k}_2)]}\right\} \end{aligned}$$
which is the sum of three integrals. Here the volume element becomes
$$\begin{aligned} {\mathrm{d}}\vec {k}_1{\mathrm{d}}\vec {k}_2 = (\kappa _{01}\kappa _{02}\kappa _{03})^2\, {\mathrm{d}}\Omega _{11}{\mathrm{d}}\Omega _{12}{\mathrm{d}}\Omega _{13}\, {\mathrm{d}}\Omega _{21}{\mathrm{d}}\Omega _{22}{\mathrm{d}}\Omega _{23}/w_0^{\prime \,6} \end{aligned}$$
In addition \(\omega ^{\prime }_1=\omega _1+\vec {k}_1 \cdot \vec {V}^{\prime }_0\) has to be replaced, and the wavenumbers must be expressed through the \(\Omega\)s.
This sum of integral equations looks simpler but is at best amenable to an iterative solution, because the complicated index prevents one from combining the fluctuations in the integrand into a power spectrum. This is a simple consequence of the Wiener–Khinchin theorem, which has been applied here several times. The discouraging observation is that, though it seems that the \(\omega _1\) integration could be separated, in all cases, even in this simplest one, one cannot simply refer to the power spectral energy density of the observed magnetic fluctuations in order to reconstruct the wavenumber spectrum. One needs to solve a set of complicated integral equations which contains the folding product of the magnetic fluctuations in frequency and wavenumber space. Even separating the \(\omega _1\) integration requires a shift in one of the fluctuation spectra, imposed by the second index, in order to perform the folding. The presence of a turbulent fluctuation spectrum in velocity thus implies correlations in the magnetic fluctuations.
Except for an iteration procedure using measured magnetic fluctuation spectra, the only way, even in this most simplistic case under the most simplifying assumptions, is to assume a reasonable model for the velocity fluctuations. Imposing such models for the magnetic fluctuation spectrum, one expects that the integral can be iteratively solved and the approximate magnetic power spectral density in wavenumber space can be constructed from the measurement of the frequency spectrum.
The above expressions show not only that different spectral domains in frequency space are correlated, but also that the spectrum becomes folded with the weight of a fairly complicated Gaussian distribution in frequency and wavenumbers, which appears as the kernel of the above integral equations (22). Considering the resonant denominator simplifies them slightly but does not release one from the necessity of solving them.
The general case of turbulence in the velocity and its stepwise reduction to the Taylor hypothesis is treated in the "Appendix". It shows that the Taylor hypothesis concerning the transformation of the magnetic turbulence spectrum is nothing else but the equivalent to the complete neglect of everything in mechanical turbulence except the pure streaming velocity. Whether this in MHD turbulence can be justified, is questionable, because the magnetic and mechanical fluctuations are intimately related and should not be considered separately. Below we return to this point.
There is, however, a further simplification for the particular case of purely Alfvénic turbulence. In this case the linear magnetic fluctuations \(\delta \vec {B}\propto \delta \vec {V}\) and the magnetic spectra in the integrand can be expressed through the corresponding spectral fluctuations of the velocity. This results in an expression of the magnetic power spectral density solely through the spectral densities of the velocity fluctuation. This substantially simplifies the transformation. Unfortunately it does not resolve the correlation problem which remains as that of the velocity fluctuations. Nevertheless, the problem reduces to the knowledge of the latter and to an appropriate solution of the singular correlation integral. Ultimately this can numerically be obtained.
Mapping problem
Taylor's hypothesis breaks down when fluctuations are no longer negligible in the flow velocity. One of the possible ways to overcome the effect of the flow velocity fluctuation is to map the time series data onto the spatial domain in the streamwise direction (along the flow) by correcting for the instantaneous or individual realizations of the flow velocity fluctuation. Here we sketch a more appropriate mapping method (than the use of Taylor's hypothesis) along with a data analysis for the Helios-1 plasma and magnetic field measurements. Figure 1 displays the magnetic field magnitude, the flow velocity magnitude (for protons), and the number density (for protons) from Helios-1 spacecraft from March 3, 1975, 1200 UT to March 4, 1975, 1200 UT in the time series style. The Helios-1 spacecraft is located at a distance of about 0.4 AU (Astronomical Unit) from the Sun.
Time series plots of the magnetic field magnitude, flow velocity magnitude (for protons), and proton number density from the Helios-1 spacecraft measurement in the inner heliosphere (at a radial distance of about 0.4 AU from the Sun) on March 3–4, 1975
The instantaneous, fluctuation-mapping of the spacecraft data from the time domain onto the streamwise spatial domain is obtained by the following relation:
$$\begin{aligned} R^{\mathrm {(map)}}(t) = \int _{t_0}^t V(t^\prime ) \; {\mathrm{d}}t^\prime . \end{aligned}$$
Here, for simplicity, we consider the radial direction from the Sun and use the radial flow component \(V_r\) for the mapping. The five-point Newton–Cotes algorithm is implemented in the numerical integration in Eq. (31). Positive values in the mapped distance R are associated with the streamwise (or anti-sunwards) direction (in the observer's frame), and negative values are associated with the sunwards direction. For comparison, the conventional radial mapping under Taylor's hypothesis reads
$$\begin{aligned} R^{\mathrm {(TH)}}(t) = V_0 (t - t_0) . \end{aligned}$$
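A minimal Python sketch of the two mappings is given below; it uses SciPy's cumulative trapezoid rule as a stand-in for the five-point Newton–Cotes integration named above, and the synthetic time series (cadence, mean speed, fluctuation amplitude) is an assumption for illustration only, not the Helios-1 data.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def map_fluctuation(t, vr):
    """R_map(t): integral of the full (fluctuating) radial velocity over time."""
    return cumulative_trapezoid(vr, t, initial=0.0)

def map_taylor(t, vr):
    """R_TH(t) = V0 (t - t0), using only the mean flow speed."""
    return np.mean(vr) * (t - t[0])

# Synthetic 24-h interval, 40.5 s cadence, ~612 km/s mean radial flow with an
# assumed ~30 km/s fluctuation; all numbers are illustrative.
t = np.arange(0.0, 24 * 3600.0, 40.5)
vr = 612.0 + 30.0 * np.sin(2 * np.pi * t / 7200.0) + 10.0 * np.random.randn(t.size)

displacement = map_fluctuation(t, vr) - map_taylor(t, vr)
print("max displacement between the two mappings: %.0f km" % np.max(np.abs(displacement)))
```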
Magnetic field data mapped in the two ways (fluctuation-corrected way and Taylor's hypothesis) are displayed in Fig. 2 as a function of the radial distance from the spacecraft position towards the Sun as \(B(R^{\mathrm {(map)}})\) in black and \(B(R^{\mathrm {(TH)}})\) in grey. The mean flow speed is about 612 km/s in the radial direction (away from the Sun). The fluctuation-mapped data (in black) apparently have a waveform very close to that of the Taylor-mapped data (in grey), but the two mapped data sets differ in the spatial positions. For example, the peak at \(R=-3.7\times 10^{7}\) km in the fluctuation-mapped data is displaced to \(R=-3.8\times 10^{7}\) km in the Taylor-mapped data, or the waveform is displaced in the opposite sense, such as the field decrease around \(R=-4.8 \times 10^{7}\) km. The fluctuation-based mapping in the turbulence observation from the time domain onto the spatial domain may thus be regarded as a shuffling of the data without changing the statistics or the probability distribution function of the fluctuations.
Magnetic field data (only the magnitude is plotted) mapped onto the streamwise spatial coordinates using the total flow velocity in black (including the mean constant flow velocity field \(V_0\) and the fluctuating field \(\delta V\)) and using the mean constant flow velocity in grey (Taylor's hypothesis). Only the radial direction away from the Sun is considered here
Energy spectrum for the magnetic field is compared between the fluctuation-mapped data and the Taylor-mapped data (Fig. 3). The fluctuation-mapped data are irregularly displaced in the spatial domain and are re-sampled into a regular sampling data set by interpolation. The spectrum is evaluated by the Welch-FFT (fast Fourier transformation) algorithm with a window size of 512 data points, a sliding of 128 data points, and 12 degrees of freedom (which is the number of sub-intervals for the statistical averaging). Figure 3 displays the total fluctuation energy (in the spectral domain) over the three components of the magnetic field (which is the trace of the spectral density matrix) as a function of the streamwise wavenumbers for the fluctuation-mapped data (in black) and the Taylor-mapped data (in grey). The fluctuation energy has nearly the same spectral shape in the lower wavenumber range up to about \(5 \times 10^{-5}\,{\hbox {rad}}\,{\mathrm{km}}^{-1}\). The spectrum becomes gradually and increasingly steeper in the fluctuation-mapped data at about \(10^{-4}\,{\hbox {rad}}\,{\mathrm{km}}^{-1}\), while the spectrum exhibits a break at about \(10^{-4}\,{\hbox {rad}}\,{\mathrm{km}}^{-1}\) and becomes suddenly steeper in the Taylor-mapped data. Therefore, the use of Taylor's hypothesis may introduce a spectral deformation when the fluctuation in the flow velocity is not negligible.
Energy spectrum (trace of the spectral density matrix) of the mapped magnetic field data (onto the spatial coordinates) in the streamwise wavenumber domain. The spectrum using the total flow velocity is represented in black and that using the mean constant flow velocity is in grey
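The spectral estimate described above can be sketched as follows in Python: resample the irregularly spaced, fluctuation-mapped field onto a regular streamwise grid and apply Welch's method with a 512-point window and 128-point sliding. The function name, the grid size, and the synthetic input are assumptions for illustration; the actual Helios-1 arrays are placeholders here.

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import welch

def streamwise_spectrum(R_map, B, n_grid=4096, nperseg=512, step=128):
    """Welch spectrum of B versus streamwise wavenumber after regular resampling."""
    R_reg = np.linspace(R_map.min(), R_map.max(), n_grid)   # regular grid (km)
    B_reg = interp1d(R_map, B)(R_reg)                       # linear resampling
    dR = R_reg[1] - R_reg[0]
    # welch returns cyclic wavenumber (1/km); convert to angular (rad/km)
    f, P = welch(B_reg, fs=1.0 / dR, nperseg=nperseg,
                 noverlap=nperseg - step, detrend="linear")
    return 2 * np.pi * f, P / (2 * np.pi)

# Placeholder call on synthetic data (assumed, for illustration only):
R = np.sort(np.random.uniform(0.0, 4.0e7, 2000))
B = np.random.randn(R.size)
k, P = streamwise_spectrum(R, B)
```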
Fluid-like or MHD-like breakdown of Taylor's hypothesis
Time dependence of the magnetic field fluctuation in the observer's frame consists of three distinct factors: the MHD-intrinsic fluctuation (e.g. Alfvén wave) with the frequency \(\omega = k_\Vert V_A\), the advection by the mean flow velocity or the Doppler shift \(k V_0\), and the random sweeping by the fluctuating flow velocity \(k \delta V\), as follows.
$$\begin{aligned} \delta B(t) \propto \exp \left[ -i \omega t - i k V_0 t - i k \delta V t \right] . \end{aligned}$$
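A synthetic single-mode check in Python illustrates the consequence of the last two terms: an unaccounted velocity offset (standing in for \(\delta V\)) together with a small intrinsic frequency biases the wavenumber inferred by a naive Taylor inversion with \(V_0\) alone. All numbers are assumptions for illustration.

```python
import numpy as np

# Single-mode sketch (assumed numbers): the observed peak sits at
# omega_intrinsic + k*(V0 + dV), so inverting with V0 alone misplaces k.
dt, N = 1.0, 8192
t = np.arange(N) * dt
k, V0, dV = 1.0e-3, 600.0, 30.0        # rad/km and km/s, all assumed
omega_intrinsic = 0.01                 # rad/s, small "Alfvenic" frequency (assumed)

dB = np.exp(-1j * (omega_intrinsic + k * (V0 + dV)) * t)
omega = 2 * np.pi * np.fft.fftfreq(N, dt)
peak = abs(omega[np.argmax(np.abs(np.fft.fft(dB)) ** 2)])

k_inferred = peak / V0                 # naive Taylor inversion with V0 only
print(f"true k = {k:.2e} rad/km, Taylor-inferred k = {k_inferred:.2e} rad/km")
# The mismatch (about 7% here) is (omega_intrinsic + k*dV) / (k*V0).
```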
Taylor's hypothesis breaks down either under the finite intrinsic frequencies (of the Alfvén waves) as \(k_\Vert V_A\) or under the finite fluctuation amplitude in the flow velocity as \(k \delta V\). We associate the fluctuating flow velocity with the perpendicular wavenumbers (representing eddies around the mean magnetic field) and can derive an estimate of the sense of the breakdown of Taylor's hypothesis (fluid-like or MHD-like) by taking a ratio of the two frequency quantities,
$$\begin{aligned} r & = {} \frac{k_\perp \delta V}{k_\Vert V_A} \end{aligned}$$
$$\begin{aligned} & = {} \frac{\delta V}{V_A \tan \theta } , \end{aligned}$$
where the angle \(\theta\) is defined as \(\tan \theta = \frac{k_\Vert }{k_\perp }\) and is approximated to the angle between the mean flow and the mean magnetic field in the application to the observation, \(\theta \simeq \theta _{V_0 B}\). Thus, the solar wind observation in a more radial mean magnetic field from the Sun (a small value of \(\tan \theta _{V_0 B}\)) is influenced by the fluid-like breakdown of Taylor's hypothesis (inaccurate mapping onto the spatial coordinates), and that in a more perpendicular mean magnetic field to the direction from the Sun (a large value of \(\tan \theta _{V_0 B}\)) is influenced by the MHD-like breakdown of Taylor's hypothesis (intrinsic Alfvén waves and counter-propagating Alfvén waves).
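A small helper evaluating this indicator may look as follows; the numbers in the example calls are assumed values, chosen only to contrast a near-radial with a near-perpendicular mean field.

```python
import numpy as np

def breakdown_ratio(dV, V_A, theta_VB_deg):
    """r = dV / (V_A * tan(theta)): r >> 1 suggests fluid-like (random sweeping)
    breakdown, r << 1 suggests MHD-like (intrinsic Alfven wave) breakdown."""
    return dV / (V_A * np.tan(np.radians(theta_VB_deg)))

print(breakdown_ratio(dV=30.0, V_A=50.0, theta_VB_deg=10.0))  # near-radial field, large r
print(breakdown_ratio(dV=30.0, V_A=50.0, theta_VB_deg=80.0))  # near-perpendicular, small r
```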
So far (including "Appendix") we did not refer to any model of the turbulence. The entire approach given was just from the point of view of the observer who measures fluctuations and does not primarily ask for a model. It illuminates the purely Galilean effect of the transport of turbulence across the observers frame and the prospects of accounting for the presence of turbulence in the reduction of the magnetic power spectrum. Given any turbulence, no matter how it is generated and evolves, the spectrum of magnetic fluctuations swept across the observer has been considered as to its modification by the turbulent fluctuations in the velocity field. From that point of view our endeavour (as completely given in "Appendix") is rather general. There have been attempts in the literature to approach the Taylor problem from the side of turbulence theory. In these cases, theory provides a theoretical spectrum of turbulence which then is set to flow. As far as such theoretical spectra of the turbulent flow are concerned, they enter into our turbulent velocity terms \(\delta \vec {V}_{w\vec {\kappa }}\) or their squared time averages \(\langle |\delta \vec {V}|^2\rangle\).
Among the turbulence models, the most prominent are Kolmogorov's (Kolmogorov 1941a, b, 1962) and, from the point of view of turbulence in a magnetised medium like MHD, Iroshnikov and Kraichnan's (Iroshnikov 1964; Kraichnan 1965, 1967) models both discussed widely in the literature (cf., e.g. Biskamp 2003).
Focus has also been given to the energy spectrum of the velocity fluctuations (Fung et al. 1992; Kaneda 1993) which in Eq. (14) enters through the cumulant expansion term in the exponent of the magnetic fluctuations before the magnetic energy spectrum is calculated. The assumption there is that the original mechanical turbulent energy spectrum to start with
$$\begin{aligned} {\mathcal {E}}(\kappa ) = \int {\mathcal {E}}(\kappa , w_\kappa ) {\mathrm{d}}w_\kappa \end{aligned}$$
is either Kolmogorov \({\mathcal {E}}_K\propto \epsilon ^\frac{2}{3} \kappa ^{-\frac{5}{3}}\) or Iroshnikov–Kraichnan \({\mathcal {E}}_{\mathrm{IK}}\propto (\epsilon V_A)^\frac{1}{2} \kappa ^{-\frac{3}{2}}\), with \(V_A\) Alfvén speed.
It becomes deformed by advection (by the large-scale energy-carrying eddies). Such theoretical and numerical attempts restrict to the inertial range of the velocity turbulence. If the energy in the velocity fluctuations at frequency \(w_\kappa\) has Gaussian spread in wavenumber, the advected Kolmogorov energy spectrum of the small-scale velocity fluctuations at mean large eddy velocity \(U_0\) assumes the form (Fung et al. 1992)
$$\begin{aligned} {\mathcal {E}}^K_{\delta \vec {V}}(\kappa ,w_\kappa ) \propto \frac{{\mathcal {E}}_K}{2\kappa U_0} \sum _{\pm }\exp \left( -\frac{1}{2} \frac{w^2_{\kappa \pm }}{\kappa ^2 U_0^2}\right) , \quad w_{\kappa \pm } = w_\kappa \pm \lambda \epsilon ^\frac{1}{3}\kappa ^\frac{2}{3} \end{aligned}$$
Here \(\lambda \sim O(1)\) is some constant. The advected Kolmogorov velocity spectrum of the mechanical turbulence thus should steepen for the assumed high large-scale speeds \(U_0\)
$$\begin{aligned} {\mathcal {E}}^K_{\delta \vec {V}}(\kappa ,w_\kappa ) \propto \kappa ^{-\frac{8}{3}} \end{aligned}$$
At large \(\kappa\) the exponential factor in this range tends to unity. At small \(\kappa\) it suppresses the spectrum exponentially. Advection thus reduces the effect of small-scale eddy turbulence on the large eddies. It is the short scales in the inertial range which cause the main deformation of the advected spectrum. Taylor's assumption then implies that the frequency of the mechanical turbulence simply becomes \(w_\pm \sim \kappa U_0\), i.e. determined by the speed of the largest eddies. The \(\kappa\)-dependent second term in \(w_\pm\) is neglected, and the exponential reduces to a number. At high Reynolds numbers and large \(U_0\) the internal turbulent dispersion plays no role in mechanical Kolmogorov turbulence. Even if it exists, it is not taken into account anywhere. The advected Kolmogorov velocity spectrum can be integrated (Tennekes 1975) with respect to wavenumber \(\kappa\) to become
$$\begin{aligned} {\mathcal {E}}_w^K \sim (\epsilon U_0)^\frac{2}{3}w^{-\frac{5}{3}} \end{aligned}$$
which, as expected, is a simple mapping of the velocity spectrum into frequency space or vice versa from frequency into wavenumber space. This applies to the original frame of turbulence. It does not yet apply to the Taylor–Galilei transformation from the stationary turbulence frame via the large-scale streaming into the observer's frame. The functional dependence of the inertial range at large advection speeds \(U_0\) has been reproduced by numerical simulations (Fung et al. 1992; Kaneda 1993). In fact, in order to be somewhat more precise, it should be noted that the complete reduced frequency spectrum obtained (Fung et al. 1992; Kaneda 1993) consists of two terms
$$\begin{aligned} {\mathcal {E}}_w = a\,w^{-2}+b\,w^{-\frac{5}{3}}, \quad a,b\in \{C\} \end{aligned}$$
which generalises the earlier theory (Tennekes 1975) to include a large range of Reynolds numbers in the velocity turbulence. Here a, b are \(\epsilon\)-dependent constants (c-numbers). It has, however, been shown (Fung et al. 1992; Kaneda 1993) that the first of these terms is always small compared with the second as long as one stays in the inertial range.
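As a purely illustrative numerical check (our own sketch, not part of the original analysis; all constants are arbitrary), one may integrate the advected spectrum quoted above over wavenumber and confirm that at large advection speed \(U_0\) the reduced frequency spectrum indeed approaches the \(w^{-\frac{5}{3}}\) Tennekes scaling:

```python
import numpy as np

# Arbitrary illustrative constants (assumptions): transfer rate, O(1) factor, sweeping speed
eps, lam, U0 = 1.0, 1.0, 50.0

kappa = np.logspace(-2, 3, 4000)              # wavenumber grid spanning the inertial range
E_K = eps**(2.0 / 3.0) * kappa**(-5.0 / 3.0)  # Kolmogorov wavenumber spectrum

def E_kw(w):
    """Advected spectrum E(kappa, w) with Gaussian frequency spread,
    following the Fung et al. (1992) form quoted in the text."""
    spread = np.zeros_like(kappa)
    for sign in (+1.0, -1.0):
        w_pm = w + sign * lam * eps**(1.0 / 3.0) * kappa**(2.0 / 3.0)
        spread += np.exp(-0.5 * w_pm**2 / (kappa * U0)**2)
    return E_K / (2.0 * kappa * U0) * spread

def integrate(f, x):
    """Simple trapezoidal rule."""
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))

w_grid = np.logspace(0.5, 2.5, 30)
E_w = np.array([integrate(E_kw(w), kappa) for w in w_grid])

slope = np.polyfit(np.log(w_grid), np.log(E_w), 1)[0]
print(f"fitted frequency-spectrum index: {slope:.2f} (Tennekes prediction: -5/3)")
```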
Using these spectral expressions in Eq. (14) and performing the w-integration yields a delta function, and the average over the velocity turbulence reduces to an additive term \(\propto Rk^2w^{-\frac{5}{3}}t^2\) in the exponential, with some factor R which depends on \(U_0\) and \(\epsilon\). (If it depends on time, this dependence is given by \(R(U_0t=x-x_0)\) as a mere translation.) The resulting term in the argument of the exponential, however, retains the irreducible second-order time dependence as long as the velocity spectrum is not completely neglected. Reference to the Kolmogorov spectrum and its mapping thus does not eliminate its effect on the spectrum of magnetic turbulence.
Whether or not neglect of the internal frequencies and their dispersion, the turbulent dispersion relation, is justified remains an unresolved problem. The large-speed simulations seem to justify it at least for the limited inertial range, however at the expense of rather short inertial ranges, of less than an order of magnitude in frequency, obtained in the simulations in the high Reynolds number limit. This suggests that in the inertial range possibly no perceptible dispersion can be detected experimentally and that it therefore plays no role in the transformation of the velocity spectrum. The small-scale velocity eddies seem "frozen" (Fung et al. 1992; Kaneda 1993) in the Kolmogorov transport of energy down the inertial range. This does not resolve the complications in determining the magnetic fluctuation spectrum, however. It just tells us that the inertial-range velocity spectrum can be transformed, but that in the magnetic fluctuations it must be retained at least in the form of the cumulant expansion term.
We can, however, understand Eq. (39) as an inertial range turbulent dispersion relation. The internal turbulent dispersion in the inertial range is given by
$$\begin{aligned} w_\kappa = \pm \lambda \epsilon ^\frac{1}{3}\kappa ^\frac{2}{3} \end{aligned}$$
which in fact follows directly from inspection of the Kolmogorov spectrum in one dimension as the inverse timescale in the inertial range. The same argument applied to an Iroshnikov–Kraichnan spectrum \({\mathcal {E}}_{\mathrm{IK}}\) with \(\delta z_\pm \sim (\epsilon V_A\ell )^\frac{1}{4}\) leads to an isotropic turbulent dispersion relation that increases weakly with wavenumber,
$$\begin{aligned} w_\kappa \sim (\epsilon V_A)^\frac{1}{4} \kappa ^\frac{3}{4} \end{aligned}$$
in the inertial range interval \(\kappa _{\mathrm{in}}\ll \kappa \ll \kappa _d\) between the eddy wavenumbers of energy injection \(\kappa _{\mathrm{in}}\) and energy dissipation \(\kappa _d\). (Note that the latter is not necessarily Kolmogorov's genuine dissipation scale; it might simply mark the transition from the magnetised electron scale to the domain of scales where the electrons become nonmagnetic and processes take over in which the magnetic field and its fluctuations no longer participate and electrostatic processes start dominating.) Both these dispersion relations, which follow from straightforward dimensional analysis, are only weakly nonlinear. They show that the turbulent frequencies increase with decreasing spatial scale of the velocity eddies, a behaviour reminiscent of sound waves. Smaller eddies oscillate at larger frequency, an effect of their decreased inertia. These expressions do not account for any anisotropy or higher dimensionality in the velocity turbulence, however, which is justified at large eddy scales in the MHD range but becomes questionable in the Hall and electron-MHD range, where the ions demagnetise and thus become approximately independent of the magnetic field. Their dependence on the magnetic field is only via their charge-neutralising and electric-current coupling to the magnetised electrons. Instead of Alfvén waves, the relevant waves are kinetic Alfvén waves (Baumjohann and Treumann 2012) with dispersion
$$\begin{aligned} w^2(\kappa _\Vert ,\kappa _\perp ) = \kappa _\Vert ^2 V_A^2 \left( 1 + {\frac{1}{2}} \beta \kappa _\perp ^2 \rho _i^2 \right) \left( 1+\kappa _\perp ^2\lambda _{\mathrm{e}}^2 \right) ^{-1} \end{aligned}$$
with \(\rho _i\) the ion gyroradius, \(\lambda _{e,i}=c/\omega _{e,i}\) electron and ion inertial lengths, and \(\omega _{e,i}\) electron and ion plasma frequency, respectively. \(\lambda _i>\kappa _\perp ^{-1}>\lambda _e\) holds in this range. Any turbulence in the wavenumber range \(\kappa >\rho _i^{-1}\) becomes anisotropic. Its spectrum then splits into parallel and perpendicular components (Goldreich and Sridhar 1995)
$$\begin{aligned} {\mathcal {E}}_{\kappa _\Vert } \propto \epsilon ^\frac{3}{2} \left( V_A \kappa _\Vert \right) ^{-\frac{5}{2}}, \quad {\mathcal {E}}_{\kappa _\perp } \propto \epsilon ^\frac{2}{3} \kappa _\perp ^{-\frac{5}{3}} \end{aligned}$$
which in the perpendicular direction is Kolmogorov, while in the parallel direction it is steeper. This is because the two scales are different (Biskamp 2003): \(\kappa _\perp /\kappa _\Vert \sim \left( \sqrt{\beta /2}\rho _i\kappa _\perp \right) ^\frac{1}{3}\) or \(\kappa _\perp /\kappa _\Vert \sim \left( \lambda _i\kappa _\perp \right) ^\frac{1}{3}\), which basically is the restriction on \(\kappa _\perp\). The deformation of the spectrum then depends on the direction of the main flow, parallel or perpendicular to the mean magnetic field. For parallel flow, only the parallel spectrum of eddies becomes deformed, while for perpendicular convection of the turbulence it is the perpendicular spectrum which deforms. The former is non-magnetised, while the latter is subject to the Kolmogorov kind of deformation referred to above in Eq. (36). We then have \(\delta z_\perp \sim \ell _\perp ^\frac{1}{3}/V_A\) and \(\delta z_\Vert \sim \ell _\perp /\ell _\Vert\), which gives the two dispersion relations
$$\begin{aligned} w_{\kappa _\perp }\sim & {} \frac{\epsilon ^\frac{1}{3}}{V_A}\kappa _\perp ^\frac{2}{3}, \quad (\kappa _\Vert ~{\mathrm {fixed}}) \nonumber \\ w_{\kappa _\Vert }\sim & {} \left( \frac{\epsilon }{V_A^3}\right) ^\frac{1}{2}\kappa _\Vert ^\frac{1}{2}, \quad (\kappa _\perp ~{\mathrm {fixed}}) \end{aligned}$$
where the parallel relation is just a consequence of the perpendicular dispersion relation. Application of the Taylor hypothesis to the mechanical turbulence power spectra is justified in these cases, as the deformation can for large \(U_0\) be mapped to frequency space. This holds for the Kolmogorov and Iroshnikov–Kraichnan spectra as well as for their anisotropic extension into the range of Hall scales, respectively ion inertial scales. It implies neglect of the internal frequency dependence of the power spectral densities. However, this is of little help in the transformation of the magnetic spectra, as these still depend on the presence of the velocity fluctuations, respectively the mechanical turbulence. Reference to those turbulent power spectral densities does not eliminate this dependence.
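To get a feeling for where the condition \(w \ll \vec{\kappa }\cdot \vec{V}_0\) underlying Taylor's hypothesis holds for such kinetic Alfvén fluctuations, the dispersion relation quoted above can be evaluated numerically. The following sketch uses rough, assumed 1 AU solar-wind numbers (Alfvén speed, ion gyroradius, electron inertial length, anisotropy); they are ours, not values from the paper:

```python
import numpy as np

# Rough 1 AU parameters (assumptions, order of magnitude only)
V_A, V_0 = 50e3, 400e3          # Alfven and bulk speeds [m/s]
beta = 1.0                      # plasma beta
rho_i, lam_e = 8e4, 2e3         # ion gyroradius, electron inertial length [m]

k_perp = np.logspace(-6, -3, 200)   # perpendicular wavenumber [rad/m]
k_par = 0.1 * k_perp                # assumed anisotropy, k_par << k_perp

# Kinetic Alfven wave frequency from the dispersion relation quoted in the text
omega = k_par * V_A * np.sqrt((1.0 + 0.5 * beta * (k_perp * rho_i)**2)
                              / (1.0 + (k_perp * lam_e)**2))

ratio = omega / (k_perp * V_0)      # Taylor's hypothesis requires ratio << 1

for kr in (1e-6, 1e-5, 1e-4, 1e-3):
    i = np.argmin(np.abs(k_perp - kr))
    print(f"k_perp*rho_i = {k_perp[i] * rho_i:8.2f}   omega/(k_perp V_0) = {ratio[i]:.3f}")
```

The ratio grows towards kinetic scales, illustrating the progressive loss of validity discussed here.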
The above brief investigation shows a number of interesting points which are usually neglected by experimentalists and theorists alike, or considered so simple as not to be worth any discussion. Thus straight recalculations of power spectral densities recorded at spacecraft from frequency into wavenumber space, adopting Taylor's hypothesis, are common wisdom.
The above analysis demonstrates that such a simple replacement is possible in the Fourier spectra of the magnetic fluctuations, though only under the rather severe restrictions listed.
We do not want to be as harsh as Saint-Jacques and Baldwin (2000) about applying the Taylor hypothesis to observations in order to obtain an impression of some part of the wavenumber spectrum of the turbulence where it can be applied, in particular when no direct measurements of the velocity spectrum or its wavenumber distribution are available, which is the more frequently realised case. Monitoring magnetic fluctuations is easiest and has the advantage of obeying relativistic invariance. However, when the turbulent velocity fluctuations have to be taken into account because they reach comparably large amplitudes and thus cannot be ignored, as should be the case in slow solar wind flows, for instance, the problem of the required transformation into the spacecraft frame becomes more complicated.
The above considerations (including those in "Appendix") demonstrate quite clearly that restriction to observation of the mechanical turbulence, viz. the velocity (and also the density) fluctuations and their power spectral densities, permits application of the Taylor hypothesis. This permission is granted for high Reynolds numbers and fast flows and applies to the inertial range. We have used this in discussing the Kolmogorov spectrum for homogeneous and isotropic turbulence of the flow. We have not yet considered the effects of anisotropies in the flow, which in MHD must naturally occur because of the differences between the flow parallel and perpendicular to the magnetic field, i.e. the free flow along a mean external field and its convection perpendicular to this mean field. Even in this case one must distinguish between strong and weak field conditions, as the relation between flow and field differs in those cases. Aside from these cases, in homogeneous and isotropic turbulent flow the inertial-range turbulence suggests that the large energy-carrying eddies freeze the small-scale eddies, and there is a range in which the spectrum can indeed be simply mapped from frequency space into wavenumber space and vice versa. Thus, here the Taylor hypothesis applies under conditions of large Reynolds numbers and some gap between large and small eddy turbulence.
However, going from this step up the ladder to the magnetic power spectral densities is a rather more difficult endeavour. The full turbulent velocity spectrum appears in the argument of the exponential of the magnetic fluctuations. The transformation of the turbulent flow into the spacecraft frame changes the frequency mapping into the wavenumber spectrum. There is no linear relation between the two, as this is distorted even in the simplest cases. Only under severe restrictions (as identified in "Appendix") can this straight mapping from mechanical into magnetic turbulence be done successfully. In the general case, even estimates of the different ranges of plasma parameters, like the wavenumbers corresponding to gyroscales and inertial scales, must be treated with caution. The Taylor hypothesis applied to magnetic turbulence should be restricted to fast flows only, substantially faster than the fastest expected rotational velocities of the turbulent eddies in the mechanical flow. Such conditions are given in high-speed solar wind or stellar outflows. Except for the wavenumber- and frequency-dependent "damping effect" of the spectrum found here, this has, in principle, all been well known already.
In MHD, the magnetic and mechanical turbulence are intimately connected, which suggests use of the well-known Alfvénic Elsasser variables \(z_\pm\) (Elsasser 1950). At high Reynolds numbers and weak kinetic-magnetic correlations, this leads to Iroshnikov–Kraichnan spectra (cf., e.g. Biskamp 2003, for a review) for the subset of a non-streaming broad spectrum of turbulent eddies to which we referred above. At the shorter, non-Alfvénic scales below the MHD scales, in the Hall- or electron-MHD range of non-magnetised ion and magnetised electron fluids, reference to kinetic Alfvén rather than Alfvén waves is more appropriate. They naturally introduce an anisotropy through the appearance of the inertial length as a natural scale in this range and through internal transport mainly along the mean magnetic field.
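For reference, the (standard, textbook) Elsasser formulation referred to here reads

$$\begin{aligned} \vec {z}^{\,\pm } = \delta \vec {V} \pm \frac{\delta \vec {B}}{\sqrt{4\pi \rho }}, \qquad \partial _t \vec {z}^{\,\pm } \mp \left( \vec {V}_A \cdot \nabla \right) \vec {z}^{\,\pm } + \left( \vec {z}^{\,\mp } \cdot \nabla \right) \vec {z}^{\,\pm } = -\nabla P + {\mathrm {dissipative\ terms}}, \end{aligned}$$

with \(P\) the total (thermal plus magnetic) pressure per unit mass; the nonlinear term couples only counter-propagating fluctuations, which is what weakens the cascade in the Iroshnikov–Kraichnan picture.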
If one is not in the MHD regime and the phase speed of the ambient fluctuations exceeds the Alfvén velocity (e.g. whistler turbulence), the Taylor hypothesis of course needs to be considered carefully. Whistler turbulence, for example, requires the presence of free energy that is not inherent to turbulence (in fact fluid or MHD turbulence does not contain any such sources), because whistlers do not grow by themselves; whether ion or electron whistlers does not matter here. (There is a thermal fluctuation level present because they are eigenmodes of the plasma, but this level is far below any turbulence.) They need an excess of energy perpendicular to the magnetic field, which in homogeneous turbulence is not given a priori. Or they need beams of some kind, which are not a matter of homogeneous stationary turbulence as they are injected from outside. The only possibility for them to reach amplitudes large enough to play any role in turbulence is that one is dealing with inhomogeneous turbulence, which locally, over a short radial distance, is approximately homogeneous but on the large radial scale (in the solar wind, for instance, in a Parker model field) is inhomogeneous. Then inhomogeneity and projection of the flow onto bent magnetic fields might support growth of whistlers until they reach large amplitudes, evolve into weakly dissipative solitons, i.e. weak shocks which reflect particles, and so on. This, however, is no longer fluid or MHD turbulence, and it would be erroneous to treat it as if it were. Clearly the solar wind is expanding; hence such effects might occur. But then all attempts at description which ignore this are simply incorrect. We are not dealing here with either of them, just with homogeneous non-expanding turbulence as the simplest case.
One can also formulate this problem by noting that, in a plasma consisting of electrons and ions, the two regimes of MHD and the inertial regime, where the inertial lengths enter, cannot be treated the same way because, from the point of view of the electrodynamic response, the ions and electrons separate here. However, the mechanical velocity (in a weak magnetic field with \(\beta \gtrsim 1\), like the solar wind) is independent of this, and since it is basic to turbulence, Taylor's hypothesis can be applied to it alone. The only assumption is that the velocity fluctuations are small with respect to the mean flow (such that their self-interaction can be ignored), which is mostly satisfied (except when the solar wind disappears).
Another important consideration is whether or not one can even, in principle, deduce the wavenumber from the time series. For example, again from the Helios data, when the spectral index is \(f^{-1}\) (in the frequency domain), one has to check whether the implied wavelength is less than the distance of the spacecraft from the Sun. If it is not, Taylor's hypothesis would not help. So it is interesting to address the question, "can we demonstrate instances where their theoretical analysis makes a quantitative difference?" Technically speaking, the task of determining the wavenumber from the time series can be achieved by computing the phase speed (in the observer's frame) from the ratio of the electric field to the magnetic field and using the relation \(v_{\mathrm {ph}} = \omega /k\) along the stream. This method is, however, applicable only when the electromagnetic component of the electric field is used and when the electric field is not superposed or mixed over multiple waves. Generally, in the inertial range of solar wind turbulence, the fluctuation phase speeds are low and the Taylor hypothesis works well. It does not work where its assumptions are violated. The example of the foreshock and magnetosheath is intriguing in that regard, but one has to be very careful in the foreshock, where the entire concept of plasma moments is questionable due to the non-Maxwellian nature of the solar wind distribution functions. The magnetosheath might be a fruitful area to consider.
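A minimal sketch of the wavenumber estimate just mentioned, assuming synchronised electric- and magnetic-field time series and a single dominant electromagnetic fluctuation per frequency bin (all names and numbers below are placeholders of ours, not mission data):

```python
import numpy as np

def wavenumber_from_EB(dE, dB, dt):
    """Estimate k(f) from the spectral ratio of electric to magnetic fluctuations:
    v_ph = |dE_f|/|dB_f| in the observer frame, then k = 2*pi*f/v_ph.
    Assumes SI units and one dominant electromagnetic mode per bin."""
    freqs = np.fft.rfftfreq(len(dB), dt)
    E_f, B_f = np.fft.rfft(dE), np.fft.rfft(dB)
    v_ph = np.abs(E_f) / np.maximum(np.abs(B_f), 1e-300)
    k = 2.0 * np.pi * freqs / np.maximum(v_ph, 1e-300)
    return freqs[1:], k[1:], v_ph[1:]          # drop the DC bin

# Synthetic test: one wave with v_ph = 1000 km/s, sampled at 1 s cadence
dt, f0, vph0 = 1.0, 0.05, 1.0e6
t = np.arange(4096) * dt
dB = 1e-9 * np.sin(2.0 * np.pi * f0 * t)       # magnetic fluctuation [T]
dE = vph0 * dB                                 # |E| = v_ph |B| for a plane EM fluctuation [V/m]

f, k, vph = wavenumber_from_EB(dE, dB, dt)
i = np.argmin(np.abs(f - f0))
print(f"recovered v_ph at {f[i]:.3f} Hz: {vph[i]:.2e} m/s (input {vph0:.1e} m/s)")
```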
We are not dealing with power spectra either, but primarily with fluctuations, from which power spectra can secondarily be calculated. Since it is clear from our analysis that velocity fluctuation spectra can be Taylor–Galilei transformed into frequency space (under the weak assumption of turbulent fluctuation amplitudes small compared with the solar wind flow speed, not with the Alfvén velocity), it is also clear (because the transformation holds and is just subject to the observational, experimental, instrumental and methodological uncertainties and errors) that from the Galilei transformation one directly obtains the wavenumber spectrum of the turbulent velocity fluctuations. If the measurements are precise enough, one even obtains the phase of the fluctuations, which is generally lost when the power spectral density is calculated afterwards. Maintaining it would provide a measure of the correlation length, an interesting problem in itself. A practical piece of advice from our approach is to use the velocity turbulence, not the magnetic field, to find out what the original wavenumber spectrum was. As noted above, the velocity turbulence can be Taylor–Galilei transformed into the spacecraft frequency frame, while the magnetic field cannot, except in the extreme case of complete separation from the mechanical turbulence—a case in which the magnetic field would never become turbulent.
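The practical recipe just given, Taylor–Galilei mapping of a measured velocity-fluctuation spectrum from spacecraft frequency into wavenumber, amounts to the following sketch (a hypothetical illustration; the synthetic spectrum and the assumed bulk speed are ours, not Helios values):

```python
import numpy as np

def taylor_map_to_wavenumber(freqs, P_f, V0):
    """Map a one-sided frequency spectrum P(f) of velocity fluctuations into a
    wavenumber spectrum using k = 2*pi*f/V0; power conservation P(k) dk = P(f) df
    gives P(k) = P(f) * V0 / (2*pi)."""
    k = 2.0 * np.pi * freqs / V0
    return k, P_f * V0 / (2.0 * np.pi)

# Synthetic stand-in for a measured inertial-range spectrum
f = np.logspace(-4, -1, 300)            # spacecraft-frame frequency [Hz]
P_f = 1e3 * f**(-5.0 / 3.0)             # arbitrary amplitude [(km/s)^2 / Hz]
V0 = 700.0                              # assumed fast-wind bulk speed [km/s]

k, P_k = taylor_map_to_wavenumber(f, P_f, V0)
slope = np.polyfit(np.log(k), np.log(P_k), 1)[0]
print(f"wavenumber-spectrum index: {slope:.2f} (shape preserved by the mapping)")
```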
In summary, we conclude that the Taylor hypothesis can be safely applied, under rather weak assumptions, to the turbulent power spectrum of the velocity, both for advection by large eddies and in moderately to fast streaming plasmas, with the restriction that one must take into account the relative directions of the mean stream velocity and the wavenumbers of the turbulent velocities (eddies). In these cases the internal dispersion relation of the turbulence can be ignored provided the advection or streaming speeds are large enough. The wavenumber spectrum is then conserved in either its Kolmogorov or Iroshnikov–Kraichnan shape and Taylor–Galilei transforms with the same shape into the spacecraft frequency frame.
The same conclusion does, however, not hold rigorously for the magnetic power spectral density. We have demonstrated this in the main text and in full rigour in "Appendix". Even though the amplitude of the magnetic field itself is not affected by the transformation, the full spectrum of the turbulent velocity fluctuations appears in the exponential of the Fourier transform of the magnetic field, in principle inhibiting interpretation via the Taylor hypothesis. Application of Taylor's hypothesis to the magnetic power spectra then requires a complete neglect of any relation between the turbulent magnetic and velocity fields. This is equivalent to the assumption that the magnetic field is turbulent by itself, which is unphysical. Nevertheless, assuming that this holds approximately, it implies that the Taylor transformation can approximately be applied if and only if the energy contained in the velocity power spectrum is completely negligible compared with the streaming energy density \(\frac{1}{2}m_iN_0V_0^2\), i.e. the observations take place in a high-speed flow. Only in this case can one infer the wavenumber spectrum of the turbulence from the frequency spectrum of the magnetic power spectral density measured in the spacecraft frame.
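As a rough worked estimate with assumed, typical numbers (ours, not taken from the paper): for a fast stream with \(V_0\simeq 700\) km/s and turbulent amplitudes \(\langle |\delta V|^2\rangle ^{1/2}\simeq 30\) km/s,

$$\begin{aligned} \frac{\frac{1}{2}m_iN_0\langle |\delta V|^2\rangle }{\frac{1}{2}m_iN_0V_0^2} = \frac{\langle |\delta V|^2\rangle }{V_0^2} \simeq \left( \frac{30}{700}\right) ^2 \approx 2\times 10^{-3}, \end{aligned}$$

so the neglect is comfortable there, whereas in slow wind with \(V_0\simeq 350\) km/s and relatively larger fluctuation levels the ratio approaches the per cent range and the approximation becomes correspondingly poorer.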
The notion of a turbulent dispersion relation seems to be alien to turbulence theory which just considers a wavenumber spectrum which is understood as the integral \(\int {\mathrm{d}}\omega (\dots )\) with respect to frequency of the power spectral density of the turbulent fluctuations. In \(\omega ,\vec {k}\) space the power spectral density occupies a volume which, resolved for \(\omega (\vec {k})\) gives a complicated multiply connected relation between frequency and wavenumber, the turbulent dispersion relation. This is not a solution of a linear wave eigenmode equation whose solutions are ordinary waves.
Doubts in the general validity of Taylor's hypothesis and its unreflected application have been expressed not only for MHD (e.g. by Goldstein et al. 1986; Huang and Sahraoui 2015; Klein et al. 2014, 2015; Lugones et al. 2016; L'vov et al. 1999; Matthaeus et al. 2010; Narita 2017, 2018; Perri et al. 2017; Treumann and Baumjohann 2017; Wilczek and Narita 2012, 2014) but also in other fields (Belmonte et al. 2000; Bourouaine and Perez 2018; Burghelea et al. 2005; Cheng et al. 2017; Creutin et al. 2015; Davoust and Jacquin 2011; Dennis and Nickels 2008; Geng et al. 2015; He et al. 2010; Goto and Vassilicos 2016; Higgins et al. 2012; Kumar and Verma 2018; Macmahan et al. 2012; Podesta 2017; Saint-Jacques and Baldwin 2000; Shet et al. 2017; Squire et al. 2017; Tsinober et al. 2001; Yang and Howland 2018) dealing with turbulence, among them meteorology, hydrology, channel flows, river research and others.
See also the thermodynamic arguments in Treumann and Baumjohann (2017) referring to observed spectra (Šafránková et al. 2016).
Their "virtual" character differs from virtual modes in quantum theory where they exist during time uncertainty \(\Delta t<\hbar /\Delta \epsilon _{\vec {p}}\), with \(\epsilon _{\vec {p}}=\hbar \omega\), and therefore can never be observed. The turbulent fluctuations of frequency \(\omega\) and wavenumber \(\vec {k}\) are "virtual" in the sense that they are no solutions of an eigenmode equation but just cover a set of wavenumbers and frequencies giving rise to the smooth turbulent wavenumber and frequency spectrum.
One should stress that this representation is completely general. It would be wrong to understand the Fourier components as eigenfunctions. The transformation just maps the turbulent fluctuations from real space into frequency and wavenumber (momentum) space, \(\{t,\vec {x}\}\rightarrow \{\omega ,\vec {k}\}\). There may be a relation between \(\omega\) and \(\vec {k}\) through a turbulent "dispersion relation", but, to repeat, this is by no means a solution of a system of eigenmode equations for the turbulent fluctuations. The system of equations of turbulence is highly nonlinear: it consists of the Fourier-transformed untruncated and unexpanded dynamical equations of all particle dynamics, flows, and fields.
We assume that the total streaming speed, though possibly (but not necessarily) large, is nonrelativistically small, \(|\vec {V}|\ll c\). For instance, in the solar wind, where Taylor's hypothesis is continuously applied, bulk streaming velocities are roughly 200 km/s \(\lesssim V_0 \lesssim 2000\) km/s, occasionally exhibiting very rare peaks which may reach speeds as high as \(V_0 \sim 4000\) km/s. Hence the ratio V/c is at most in the few per cent range, allowing us to circumvent the complications that reference to relativity would introduce here. Note, however, that in applications to fast expanding stellar winds, as in cataclysmic variables, Wolf–Rayet stars, or even supernova remnants, one would have to refer to relativity.
Inclusion of linear wave damping has been considered by Narita and Vörös (2017).
For simplicity we choose a fixed wavenumber but could as well stay with only a fixed frequency \(w_0\) and retain the full \(\kappa\) spectrum as only the frequency is subject to the Taylor–Galilei transformation.
WB contributed to the discussion of the manuscript. YN contributed spacecraft data analysis, theory, and manuscript writing. RT presented the original idea, theory/calculation and wrote the manuscript. All authors read and approved the final manuscript.
This work was part of a Visiting Scientist Programme at the International Space Science Institute Bern. We acknowledge the reserved interest of the ISSI directorate as well as the generous hospitality of the ISSI staff.
Helios-1 40-second plasma and magnetic field data are publicly available at the NASA CDAWeb server (https://cdaweb.sci.gsfc.nasa.gov/index.html).
Part of this work (YN) is financially supported by Austrian Space Applications Programme (ASAP) at Austrian Research Promotion Agency, FFG ASAP12 SOPHIE, under contract 853994 and Austrian Science.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
We performed some of the integrations in Eq. (21) under the assumption that only one single turbulent wave mode \((w_0,\vec {\kappa }_0)\) dominates the turbulent velocity spectrum \(\delta \vec {V}(t,\vec {x})\).
Here, we will be more general because any mode of frequency \(w_0\) might be highly degenerate in the sense that it consists of a broad spectrum of modes with the same frequency but completely different wavenumbers \(\vec {\kappa }\), as is the case, for instance, in turbulent nonlinear sideband generation. Hence, one must allow for an undefined broad spectrum in wavenumbers \(\vec {\kappa }\) even when taking only one frequency \(w=w_0\). This implies that the general case concerning \(\vec {\kappa }\) cannot be avoided. Then we have
$$\begin{aligned} \delta \vec {V}(t,\vec {x})=\frac{1}{16\pi ^4} \int {\mathrm{d}}w\,{\mathrm{d}}\vec {\kappa }\, \delta \vec {V}_{w\vec {\kappa }} \left\{ \cos \left[ (w + \vec {\kappa } \cdot \vec {V}_0) t + \vec {\kappa } \cdot \vec {x}\right] - i\sin \left[ (w + \vec {\kappa } \cdot \vec {V}_0) t - \vec {\kappa } \cdot \vec {x}\right] \right\} \end{aligned}$$
This results in a complicated time and space dependence of the magnetic fluctuations
$$\begin{aligned} \delta \vec {B}(t,\vec {x}) = \frac{1}{16\pi ^4}\int {\mathrm{d}}\omega \,{\mathrm{d}}\vec {k}\, \delta \vec {B}_{\omega \vec {k}} {\mathrm{e}}^{-i\left[ \omega (\vec {k}) + \vec {k} \cdot \left( \vec {V}_0 + \delta \vec {V}(t,\vec {x})\right) \right] t + i\vec {k} \cdot \vec {x}} \end{aligned}$$
One is interested in the spectrum of the fluctuations in the spacecraft frame, not the fluctuations themselves. The Fourier spectrum of velocity fluctuations in the approximation that just the main streaming velocity appears in its Fourier components is of course trivially obtained as the Fourier inversion of the velocity fluctuation spectrum (though, one could also retain the fluctuations themselves in the exponent and successively expand the integral under the assumption that the expansion converges). For the magnetic fluctuations, however, we retain the velocity fluctuations in the exponential and integrate with respect to time in Eq. (21) leaving the spatial integration for later.
To make this explicit, we define the integral operator
$$\begin{aligned} {\mathcal {O}}_{\vec {k}}\equiv \frac{\vec {k}}{16\pi ^4} \cdot \int {\mathrm{d}}w\,{\mathrm{d}}\vec {\kappa }\, \delta \vec {V}_{w\kappa } \end{aligned}$$
which when operating on \(\exp (i\vec {\kappa } \cdot \vec {x})\) depends on space \(\vec {x}\) but in addition acts to the right on all functions containing w and \(\kappa\). Then, we have for the magnetic fluctuations
$$\begin{aligned} \delta \vec {B}_{\varpi \vec {K}}& = {} \frac{1}{16\pi ^4}\int {\mathrm{d}}\omega \,{\mathrm{d}}\vec {k}\, \delta \vec {B}_{\omega \vec {k}} \int {\mathrm{d}}t\,{\mathrm{d}}\vec {x}\, {\mathrm{e}}^{-i[(\omega +\vec {k} \cdot \vec {V}_0-\varpi )t + (\vec {K} - \vec {k}) \cdot \vec {x}]} \nonumber \\&\quad \times {\mathrm{e}}^{-it{\mathcal {O}}_{\vec {k}} \{\cos [(w+\vec {\kappa } \cdot \vec {V}_0)t + \vec {\kappa } \cdot \vec {x}] - i\sin [(w+\vec {\kappa } \cdot \vec {V}_0)t - \vec {\kappa } \cdot \vec {x}]\}} \end{aligned}$$
The time and space integrations have become separated. They can in principle be performed independently. It is, however, more convenient to define \(\alpha \equiv \omega -\varpi +\vec {k} \cdot \vec {V}_0\) and \(\beta \equiv w+\vec {\kappa } \cdot \vec {V}_0\). The integration is a complicated procedure. The presence of the trigonometric functions in the exponent indicates a strong nonlinear coupling. The sine term is real and will to lowest order be proportional to \(t^2\); it hence introduces some kind of dissipation. However, for all real \(\alpha , \beta\) the arguments of the trigonometric functions vary between 0 and \(\pi\), and the dissipation may be small even though it will be finite.
The first-order relevant contribution corresponding to the Taylor hypothesis is obtained when introducing the new variable \(\tau =\beta t\), expanding the trigonometric functions and retaining only the lowest order in \(\tau\). The implication is that \(\delta (\beta )=\delta (w+\vec {\kappa } \cdot \vec {V}_0)\) is applied, and therefore
$$\begin{aligned} w = -\vec {\kappa } \cdot \vec {V}_0 \end{aligned}$$
is to be used in the integral for \({\mathcal {O}}\). This then yields for the time integral
$$\begin{aligned} \frac{1}{\beta } \int {\mathrm{d}}\tau \, {\mathrm{e}}^{-i\tau (\alpha +{\mathcal {O}})/\beta } = 2\pi \delta (\alpha +{\mathcal {O}}) \end{aligned}$$
The w-operator function is thus lost, and \({\mathcal {O}}\) becomes the transform of \(\delta \vec {V}_{w\vec {\kappa }}\):
$$\begin{aligned} {\mathcal {O}}_{\vec {k}}(\vec {x}) = \frac{1}{8\pi ^3}\int {\mathrm{d}}\vec {\kappa }\, \vec {k} \cdot \delta \vec {V}_{-\vec {\kappa } \cdot \vec {V}_0, \vec {\kappa }}\,{\mathrm{e}}^{i\vec {\kappa } \cdot \vec {x}} \equiv \vec {k} \cdot \delta \vec {{\mathcal {V}}}_{\mathrm{turb}} (\vec {V}_0,\vec {x}) \end{aligned}$$
Though by now this is just a function, it enters at a very complicated place. The \(\delta\)-function requires that the frequency in \(\delta \vec {B}_{\omega \vec {k}}\) is to be replaced by
$$\begin{aligned} \omega = \varpi -\vec {k} \cdot [\vec {V}_0 - \delta \vec {{\mathcal {V}}}_{\mathrm{turb}}(\vec {V}_0,\vec {x})] \end{aligned}$$
before the integration with respect to \(\vec {x}\) is performed. Here \(\delta \vec {{\mathcal {V}}}_{\mathrm{turb}}\) is the real-space velocity fluctuation. This is the frequency \(\varpi\) in the observer's frame shifted by both the streaming and some contribution of the turbulence, which, however, is still to be integrated over space before yielding the magnetic fluctuation amplitude. It shows that the Taylor hypothesis has a much more complicated consequence than the simple shift in frequency implies.
Actually, this last integration cannot easily be performed even in this simplest case, where we took only the lowest-order approximation in the time integral, because, plugging in the expression for \(\omega\), the Fourier transform of the magnetic fluctuation amplitude obeys the following implicit representation:
$$\begin{aligned} \delta \vec {B}_{\varpi \vec {K}} = \frac{1}{8\pi ^3} \int {\mathrm{d}}\vec {k} \int {\mathrm{d}}\vec {x}\, \delta \vec {B}_{\varpi -\vec {k} \cdot [\vec {V}_0-\delta \vec {{\mathcal {V}}}_{\mathrm{turb}}(\vec {V}_0,\vec {x})], \vec {k}}\,{\mathrm{e}}^{-i(\vec {k} - \vec {K}) \cdot \vec {x}} \end{aligned}$$
Here, the spatial dependence is in the index of the magnetic fluctuation amplitude, which cannot be resolved further except by iteration. Thus, even the crudest approximation which takes into account the contribution of the velocity fluctuations in the Taylor transformation leads to a rather complicated dependence of the frequency on the spectrum of fluctuations in the mechanical turbulence. Resolution of the equation for the Fourier amplitude of the magnetic fluctuations becomes a formidable task and can be done only by completely neglecting the effect of the mechanical turbulence. This is what the Taylor hypothesis in fact imposes, and it is justified only when the flow speed by far exceeds the fastest speeds in the fluctuations. The condition under which this is satisfied is high flow speeds \(V_0\gg V_A\), far above the Alfvén speed \(V_A\), which usually also requires that the ambient magnetic field is weak.
In order to arrive at a Taylor-like expression, one must forcibly neglect the spatial dependence in the index. Then the \(\vec {x}\) integration reduces trivially to a delta function \(2\pi \delta (\vec {k}-\vec {K})\). This permits performing the integral over \(\vec {k}\) to obtain
$$\begin{aligned} \delta \vec {B}_{\varpi \vec {K}} = \frac{1}{4\pi ^2} \delta \vec {B}_{\varpi -\vec {K} \cdot [\vec {V}_0-\delta \vec {{\mathcal {V}}}_{\mathrm{turb}}(\vec {V}_0)], \vec {K}} \end{aligned}$$
which is a somewhat more elaborate version of Taylor's hypothesis. Neglecting now the fluctuation in the index, which for \(|V_0|\gg \delta {\mathcal {V}}\) is not unjustified, the Fourier amplitude of the fluctuation is mapped to its stream-transformed version. This expression can then serve in the calculation of the power spectral density of the turbulent magnetic field.
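As a schematic numerical illustration of this last expression (a hypothetical model of ours; the amplitude law and all numbers are assumptions, not the paper's data), one can compare the plain Taylor shift \(\omega =\varpi -\vec{K}\cdot \vec{V}_0\) with the more elaborate shift retaining a residual turbulent velocity, to gauge the error made by dropping \(\delta \vec{{\mathcal {V}}}_{\mathrm{turb}}\):

```python
import numpy as np

def dB_model(omega):
    """Hypothetical magnetic fluctuation amplitude versus intrinsic frequency
    (square root of an assumed -5/3 power spectrum; arbitrary units)."""
    return np.abs(omega) ** (-5.0 / 6.0)

V0, dV_turb = 700.0, 30.0                  # assumed bulk and residual turbulent speeds [km/s]
K = np.logspace(-6, -3, 5)                 # stream-aligned wavenumbers [rad/km]
varpi = 1.0e-2                             # observer-frame frequency [rad/s]

omega_plain = varpi - K * V0               # plain Taylor shift
omega_elab = varpi - K * (V0 - dV_turb)    # shift retaining delta-V_turb

rel_err = np.abs(dB_model(omega_elab) - dB_model(omega_plain)) / dB_model(omega_plain)
for Ki, e in zip(K, rel_err):
    print(f"K = {Ki:9.2e} rad/km   relative amplitude error: {e:.1%}")
```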
International Space Science Institute, Hallerstraße 6, 3012 Berne, Switzerland
Geophysics Department, Ludwig-Maximilians-Universität, Theresienstraße 41, 80333 Munich, Germany
Space Research Institute, Austrian Academy of Sciences, Schmiedlstraße 6, 8042 Graz, Austria
Alexandrova O, Saur J, Lacombe C, Mangeney A, Mitchell J, Schwartz SJ, Robert P (2009) Universality of solar wind turbulent spectrum from MHD to electron scales. Phys Rev Lett 103:165003. https://doi.org/10.1103/PhysRevLett.103.165003
Baumjohann W, Treumann RA (2012) Basic space plasma physics, London 1996, revised and enlarged edition. Imperial College Press, London. https://doi.org/10.1142/P850
Belmonte A, Martin B, Goldburg WI (2000) Experimental study of Taylor's hypothesis in a turbulent soap film. Phys Fluids 12:835–845. https://doi.org/10.1063/1.870339
Biskamp D (2003) Magnetohydrodynamic turbulence. Cambridge University Press, Cambridge, p 310
Bourouaine S, Perez JC (2018) On the limitations of Taylor's hypothesis in Parker Solar Probe's measurements near the Alfvén critical point. Astrophys J Lett 858:L20. https://doi.org/10.3847/2041-8213/aabccf
Burghelea T, Segre E, Steinberg V (2005) Validity of the Taylor hypothesis in a random spatially smooth flow. Phys Fluids 17:103101. https://doi.org/10.1063/1.2077367
Celnikier LM, Harvey CC, Jegou J, Moricet P, Kemp M (1983) A determination of the electron density fluctuation spectrum in the solar wind, using the ISEE propagation experiment. Astron Astrophys 126:293–298
Chen CHK, Bale SD, Salem C, Mozer FS (2011) Frame dependence of the electric field spectrum of solar wind turbulence. Astrophys J Lett 737:L41. https://doi.org/10.1088/2041-8205/737/2/L41
Chen CHK, Salem CS, Bonnell JW, Mozer FS, Bale SD (2012) Density fluctuation spectrum of solar wind turbulence between ion and electron scales. Phys Rev Lett 109:035001. https://doi.org/10.1103/PhysRevLett.109.035001
Cheng Y, Sayde C, Li Q, Gasara J, Selker J, Tanner E, Gentine P (2017) Failure of Taylor's hypothesis in the atmospheric surface layer and its correction for eddy-covariance measurements. Geophys Res Lett 44:4287–4295. https://doi.org/10.1002/2017GL073499
Creutin JD, Leblois E, Lepioufle JM (2015) Unfreezing Taylor's hypothesis for precipitation. J Hydrometeorol 16:2443–2462. https://doi.org/10.1175/JHM-D-14-0120.1
Davoust S, Jacquin L (2011) Taylor's hypothesis convection velocities from mass conservation equation. Phys Fluids 23:051701. https://doi.org/10.1063/1.3584004
Dennis DJC, Nickels TB (2008) On the limitations of Taylor's hypothesis in constructing long structures in a turbulent boundary layer. J Fluid Mech 614:197–206. https://doi.org/10.1017/S0022112008003352
Elsasser WM (1950) The hydromagnetic equations. Phys Rev 79:183. https://doi.org/10.1103/PhysRev.79.183
Fox RF (1976) Critique of the generalized cumulant expansion method. J Math Phys 17:1148–1153. https://doi.org/10.1063/1.523041
Fung JCH, Hunt JCR, Malik NA, Perkins RJ (1992) Kinematic simulation of homogeneous turbulence by unsteady random Fourier modes. J Fluid Mech 236:281–318. https://doi.org/10.1017/S0022112092001423
Geng C, He G, Wang Y, Xu C, Lozano-Durán A, Wallace JM (2015) Taylor's hypothesis in turbulent channel flow considered using a transport equation analysis. Phys Fluids 27:025111. https://doi.org/10.1063/1.4908070
Goldreich P, Sridhar S (1995) Toward a theory of interstellar turbulence. 2: Strong Alfvénic turbulence. Astrophys J 438:763–775. https://doi.org/10.1086/175121
Goldstein ML, Roberts DA, Matthaeus WH (1986) Systematic errors in determining the propagation direction of interplanetary Alfvénic fluctuations. J Geophys Res 91:13357–13365. https://doi.org/10.1029/JA091iA12p13357
Goldstein ML, Roberts DA, Matthaeus WH (1995) Magnetohydrodynamic turbulence in the solar wind. Ann Rev Astron Astrophys 33:283–326. https://doi.org/10.1146/annurev.aa.33.090195.001435
Goto S, Vassilicos JC (2016) Local equilibrium hypothesis and Taylor's dissipation law. Fluid Dyn Res 48:021402. https://doi.org/10.1088/0169-5983/48/2/021402
He X, He G, Tong P (2010) Small-scale turbulent fluctuations beyond Taylor's frozen-flow hypothesis. Phys Rev E 81:065303(R). https://doi.org/10.1103/PhysRevE.81.065303
Higgins CW, Froidevaux M, Simeonov V, Vercauteren N, Barry C, Parlange MB (2012) The effect of scale on the applicability of Taylor's frozen turbulence hypothesis in the atmospheric boundary layer. Bound Layer Meteorol 143:379–391. https://doi.org/10.1007/s10546-012-9701-1
Huang S, Sahraoui F (2015) Violation of the Taylor hypothesis at electron scales in the solar wind and its effect on the energy spectra measured onboard spacecraft. EGU General Assembly 2015, Vienna, Austria, ID 7814
Iroshnikov PS (1964) Turbulence of a conducting fluid in a strong magnetic field. Sov Astron 7:566–571
Kaneda Y (1993) Lagrangian and Eulerian time correlations in turbulence. Phys Fluids A 5:2835–2845. https://doi.org/10.1063/1.858747
Klein KG, Howes GG, TenBarge JM (2014) The violation of the Taylor hypothesis in measurements of solar wind turbulence. Astrophys J Lett 790:L20. https://doi.org/10.1088/2041-8205/790/2/L20
Klein KG, Perez JC, Verscharen D, Mallet A, Chandran BDG (2015) A modified version of Taylor's hypothesis for Solar Probe Plus observations. Astrophys J Lett 801:L18. https://doi.org/10.1088/2041-8205/801/1/L18
Kolmogorov A (1941a) The local structure of turbulence in incompressible viscous fluid for very large Reynolds' number. Dokl Akad Nauk SSSR 30:301–305
Kolmogorov AN (1941b) Dissipation of energy in locally isotropic turbulence. Dokl Akad Nauk SSSR 32:16
Kolmogorov AN (1962) A refinement of previous hypotheses concerning the local structure of turbulence in a viscous incompressible fluid at high Reynolds number. J Fluid Mech 13:82–85. https://doi.org/10.1017/S0022112062000518
Kraichnan RH (1965) Inertial-range spectrum of hydromagnetic turbulence. Phys Fluids 8:1385–1387. https://doi.org/10.1063/1.1761412
Kraichnan RH (1967) Inertial ranges in two-dimensional turbulence. Phys Fluids 10:1417–1423. https://doi.org/10.1063/1.1762301
Kubo R (1962) Generalized cumulant expansion method. J Phys Soc Jpn 17:1100–1120. https://doi.org/10.1143/JPSJ.17.1100
Kumar A, Verma MK (2018) Applicability of Taylor's hypothesis in thermally driven turbulence. R Soc Open Sci 5:172152. https://doi.org/10.1098/rsos.172152
Lugones R, Dmitruk P, Mininni PD, Wan M, Matthaeus WH (2016) On the spatio-temporal behavior of magnetohydrodynamic turbulence in a magnetized plasma. Phys Plasmas 23:112304. https://doi.org/10.1063/1.4968236
L'vov VS, Pomyalov A, Procaccia I (1999) Temporal surrogates of spatial turbulent statistics: the Taylor hypothesis revisited. Phys Rev E 60:4175–4184. https://doi.org/10.1103/PhysRevE.60.4175
Macmahan J, Feniers A, Ashley W, Thornton E (2012) Frequency–wavenumber velocity spectra, Taylor's hypothesis, and length scales in a natural gravel bed river. Water Resour Res 48:W09548. https://doi.org/10.1029/2011WR011709
Matthaeus WH, Dasso S, Weygand JM, Kivelson MG, Osman KT (2010) Eulerian decorrelation of fluctuations in the interplanetary magnetic field. Astrophys J Lett 721:L10–L13. https://doi.org/10.1088/2041-8205/721/1/L10
Narita Y (2017) Error estimate of Taylor's frozen-in hypothesis in the spectral domain. Ann Geophys 35:325–331. https://doi.org/10.5194/angeo-35-325-2017
Narita Y (2018) Space–time structure and wavevector anisotropy in space plasma turbulence. Liv Rev Sol Phys 15:2. https://doi.org/10.1007/s41116-017-0010-0
Narita Y, Vörös Z (2017) Lifetime estimates for plasma turbulence. Nonlinear Process Geophys 24:673–679. https://doi.org/10.5194/npg-24-673-2017
Narita Y, Glassmeier KH, Treumann RA (2006) Wave-number spectra and intermittency in the terrestrial foreshock region. Phys Rev Lett 97:191101. https://doi.org/10.1103/PhysRevLett.97.191101
Nariyuki Y, Hada T (2006) Remarks on nonlinear relation among phases and frequencies in modulational instabilities of parallel propagating Alfvén waves. Nonlinear Process Geophys 13:425–441. https://doi.org/10.5194/npg-13-425-2006
Perri S, Servidio S, Vaivads A, Valentini F (2017) Numerical study on the validity of the Taylor hypothesis in space plasmas. Astrophys J Suppl 231:4. https://doi.org/10.3847/1538-4365/aa755a
Podesta JJ (2009) Dependence of solar-wind power spectra on the direction of the local mean magnetic field. Astrophys J 698:986–999. https://doi.org/10.1088/0004-637X/698/2/986
Podesta JJ (2010) Solar wind turbulence: advances in observation and theory. In: Proceedings of the International Astronomical Union 6(S274), pp 295–301. https://doi.org/10.1017/S1743921311007162
Podesta JJ (2017) How to define the mean square amplitude of solar wind fluctuations with respect to the local mean magnetic field. J Geophys Res 122:11835–11844. https://doi.org/10.1002/2017JA023864
Podesta JJ, Roberts DA, Goldstein ML (2006) Power spectrum of small-scale turbulent velocity fluctuations in the solar wind. J Geophys Res 111:A10109. https://doi.org/10.1029/2006JA011834
Podesta JJ, Roberts DA, Goldstein ML (2007) Spectral exponents of kinetic and magnetic energy spectra in solar wind turbulence. Astrophys J 664:543–548. https://doi.org/10.1086/519211
Roberts OW, Li X, Jeska L (2014) A statistical study of the solar wind turbulence at ion kinetic scales using the k-filtering technique and Cluster data. Astrophys J 802:2. https://doi.org/10.1088/0004-637X/802/1/2
Šafránková J, Němeček Z, Přech L, Zastenker GN (2013) Ion kinetic scale in the solar wind observed. Phys Rev Lett 110:025004. https://doi.org/10.1103/PhysRevLett.110.025004
Šafránková J, Němeček Z, Němec F, Přech L, Chen CHK, Zastenker GN (2016) Power spectral density of fluctuations of bulk and thermal speeds in the solar wind. Astrophys J 825:121. https://doi.org/10.3847/0004-637X/825/2/121
Sahraoui F, Goldstein ML, Robert P, Khotyaintsev YV (2009) Evidence of a cascade and dissipation of solar-wind turbulence at the electron gyroscale. Phys Rev Lett 102:231102. https://doi.org/10.1103/PhysRevLett.102.231102
Sahraoui F, Belmont G, Goldstein ML (2012) New insight into short-wavelength solar wind fluctuations from Vlasov theory. Astrophys J 748:100. https://doi.org/10.1088/0004-637X/748/2/100
Sahraoui F, Huang SY, Belmont G, Goldstein ML, Retinò A, Robert P, De Patoul J (2013) Scaling of the electron dissipation range of solar wind turbulence. Astrophys J 777:15. https://doi.org/10.1088/0004-637X/777/1/15
Saint-Jacques D, Baldwin JE (2000) Taylor's hypothesis: good for nuts. Proc SPIE 4006:951–962. https://doi.org/10.1117/12.390175
Schwartz SJ, Horbury T, Owen C, Baumjohann W, Nakamura R, Canu P, Roux A, Sahraoui F, Louarn P, Sauvaud JA, Pinçon JL, Vaivads A, Marcucci MF, Anastasiadis A, Fujimoto M, Escoubet P, Taylor M, Eckersley S, Allouis E, Perkinson MC, on behalf of the Cross-Scale team (2009) Cross-Scale: multi-scale coupling in space plasmas. Exp Astron 23:1001–1015. https://doi.org/10.1007/s10686-008-9085-x
Shet CS, Cholemari MR, Veeravalli SV (2017) Eulerian spatial and temporal autocorrelations: assessment of Taylor's hypothesis and a model. J Turb 18:1105–1119. https://doi.org/10.1080/14685248.2017.1357823
Squire DT, Hutchins N, Morrill-Winter C, Schultz MP, Klewicki JC, Marusic I (2017) Applicability of Taylor's hypothesis in rough- and smooth-wall boundary layers. J Fluid Mech 812:398–417. https://doi.org/10.1017/jfm.2016.832
Taylor GI (1938) The spectrum of turbulence. Proc R Soc Lond Ser A 164:476–490. https://doi.org/10.1098/rspa.1938.0032
Tennekes H (1975) Eulerian and Lagrangian time microscales in isotropic turbulence. J Fluid Mech 67:561–567. https://doi.org/10.1017/S0022112075000468
Treumann RA, Baumjohann W (2017) The usefulness of Poynting's theorem in magnetic turbulence. Ann Geophys 35:1353–1360. https://doi.org/10.5194/angeo-35-1353-2017. arXiv:1709.04741 [physics.space-ph]
Tsinober A, Vedula P, Yeung PK (2001) Random Taylor hypothesis and the behavior of local and convective accelerations in isotropic turbulence. Phys Fluids 13:1974–1984. https://doi.org/10.1063/1.1375143
Tu CY, Marsch E (1995) MHD structures, waves and turbulence in the solar wind: observations and theories. Space Sci Rev 73:1–210. https://doi.org/10.1007/BF00748891
Wilczek M, Narita Y (2012) Wave-number–frequency spectrum for turbulence from a random sweeping hypothesis with mean flow. Phys Rev E 86:066308. https://doi.org/10.1103/PhysRevE.86.066308
Wilczek M, Narita Y (2014) A note on Taylor's hypothesis under large-scale flow variation. Nonlinear Process Geophys 21:645–649. https://doi.org/10.5194/npg-21-645-2014
Yakhot V, Orszag SA, She Z-S (1989) Space–time correlations in turbulence: kinematical versus dynamical effects. Phys Fluids A 1:184–186. https://doi.org/10.1063/1.857486
Yang XIA, Howland MF (2018) Implication of Taylor's hypothesis on measuring flow modulation. J Fluid Mech 836:222–237. https://doi.org/10.1017/jfm.2017.803
Yoon PH (2007) Kinetic theory of hydromagnetic turbulence. I. Formal results for parallel propagation. Phys Plasmas 14:102302. https://doi.org/10.1063/1.2780139
Zhou Y, Matthaeus WH, Dmitruk P (2004) Magnetohydrodynamic turbulence and time scales in astrophysical and space plasmas. Rev Mod Phys 76:1015–1035. https://doi.org/10.1103/RevModPhys.76.1015
Results for 'Trinh Nguyễn'
The ISR Center publishes an article celebrating the 130th anniversary of President Hồ Chí Minh's birth. Hồ Mạnh Toàn - 2020 - ISR Phenikaa 2020 (5):1-3.
The new article, published on 19 May 2020 with doctoral candidate Nguyễn Minh Hoàng, a researcher at the ISR Center, as corresponding author, presents a Bayesian statistical approach to the study of social science data. This is a result of the research direction of the SDAG group, stated clearly as early as 18 May 2019.
Echo Chambers and Epistemic Bubbles. C. Thi Nguyen - 2020 - Episteme 17 (2):141-161.
Recent conversation has blurred two very different social epistemic phenomena: echo chambers and epistemic bubbles. Members of epistemic bubbles merely lack exposure to relevant information and arguments. Members of echo chambers, on the other hand, have been brought to systematically distrust all outside sources. In epistemic bubbles, other voices are not heard; in echo chambers, other voices are actively undermined. It is crucial to keep these phenomena distinct. First, echo chambers can explain the post-truth phenomena in a way that epistemic bubbles cannot. Second, each type of structure requires a distinct intervention. Mere exposure to evidence can shatter an epistemic bubble, but may actually reinforce an echo chamber. Finally, echo chambers are much harder to escape. Once in their grip, an agent may act with epistemic virtue, but social context will pervert those actions. Escape from an echo chamber may require a radical rebooting of one's belief system.
The Question of Quality. Phuong-Thao T. Trinh, Thu-Hien T. Le, Thu-Trang Vuong & Phuong-Hanh Hoang - 2019 - In Quan-Hoang Vuong & Trung Tran (eds.), The Vietnamese Social Sciences at a Fork in the Road. Warsaw, Poland: De Gruyter. pp. 121-142.
Phuong-Thao T. Trinh, Thu-Hien T. Le, Thu-Trang Vuong, Phuong-Hanh Hoang (2019). Chapter 6. The question of quality. In Quan-Hoang Vuong, Trung Tran (Eds.), The Vietnamese Social Sciences at a Fork in the Road (pp. 121–142). Warsaw, Poland: De Gruyter. DOI:10.2478/9783110686081-011. Online ISBN: 9783110686081. © 2019 Sciendo / De Gruyter.
Cognitive Islands and Runaway Echo Chambers: Problems for Epistemic Dependence on Experts. C. Thi Nguyen - 2020 - Synthese 197 (7):2803-2821.
I propose to study one problem for epistemic dependence on experts: how to locate experts on what I will call cognitive islands. Cognitive islands are those domains for knowledge in which expertise is required to evaluate other experts. They exist under two conditions: first, that there is no test for expertise available to the inexpert; and second, that the domain is not linked to another domain with such a test. Cognitive islands are the places where we have the fewest resources for evaluating experts, which makes our expert dependences particularly risky. Some have argued that cognitive islands lead to the complete unusability of expert testimony: that anybody who needs expert advice on a cognitive island will be entirely unable to find it. I argue against this radical form of pessimism, but propose a more moderate alternative. I demonstrate that we have some resources for finding experts on cognitive islands, but that cognitive islands leave us vulnerable to an epistemic trap which I will call runaway echo chambers. In a runaway echo chamber, our inexpertise may lead us to pick out bad experts, which will simply reinforce our mistaken beliefs and sensibilities.
Moral Outrage Porn. C. Thi Nguyen & Bekka Williams - 2020 - Journal of Ethics and Social Philosophy 18 (2).
We offer an account of the generic use of the term "porn", as seen in recent usages such as "food porn" and "real estate porn". We offer a definition adapted from earlier accounts of sexual pornography. On our account, a representation is used as generic porn when it is engaged with primarily for the sake of a gratifying reaction, freed from the usual costs and consequences of engaging with the represented content. We demonstrate the usefulness of the concept of generic porn by using it to isolate a new type of such porn: moral outrage porn. Moral outrage porn is representations of moral outrage, engaged with primarily for the sake of the resulting gratification, freed from the usual costs and consequences of engaging with morally outrageous content. Moral outrage porn is dangerous because it encourages the instrumentalization of one's empirical and moral beliefs, manipulating their content for the sake of gratification. Finally, we suggest that when using porn is wrong, it is often wrong because it instrumentalizes what ought not to be instrumentalized.
Autonomy and Aesthetic Engagement. C. Thi Nguyen - 2019 - Mind 129 (516):1127-1156.
There seems to be a deep tension between two aspects of aesthetic appreciation. On the one hand, we care about getting things right. On the other hand, we demand autonomy. We want appreciators to arrive at their aesthetic judgments through their own cognitive efforts, rather than deferring to experts. These two demands seem to be in tension; after all, if we want to get the right judgments, we should defer to the judgments of experts. The best explanation, I suggest, is that aesthetic appreciation is something like a game. When we play a game, we try to win. But often, winning isn't the point; playing is. Aesthetic appreciation involves the same flipped motivational structure: we aim at the goal of correctness, but having correct judgments isn't the point. The point is the engaged process of interpreting, investigating, and exploring the aesthetic object. Deferring to aesthetic testimony, then, makes the same mistake as looking up the answer to a puzzle, rather than solving it for oneself. The shortcut defeats the whole point. This suggests a new account of aesthetic value: the engagement account. The primary value of the activity of aesthetic appreciation lies in the process of trying to generate correct judgments, and not in having correct judgments.
Games and the Art of Agency. C. Thi Nguyen - 2019 - Philosophical Review 128 (4):423-462.
Games may seem like a waste of time, where we struggle under artificial rules for arbitrary goals. The author suggests that the rules and goals of games are not arbitrary at all. They are a way of specifying particular modes of agency. This is what makes games a distinctive art form. Game designers designate goals and abilities for the player; they shape the agential skeleton which the player will inhabit during the game. Game designers work in the medium of agency. Game-playing, then, illuminates a distinctive human capacity. We can take on ends temporarily for the sake of the experience of pursuing them. Game play shows that our agency is significantly more modular and more fluid than we might have thought. It also demonstrates our capacity to take on an inverted motivational structure. Sometimes we can take on an end for the sake of the activity of pursuing that end.
Games: Agency as Art. C. Thi Nguyen - 2020 - New York: Oxford University Press.
Games occupy a unique and valuable place in our lives. Game designers do not simply create worlds; they design temporary selves. Game designers set what our motivations are in the game and what our abilities will be. Thus: games are the art form of agency. By working in the artistic medium of agency, games can offer a distinctive aesthetic value. They support aesthetic experiences of deciding and doing. And the fact that we play games shows something remarkable about us. Our agency is more fluid than we might have thought. In playing a game, we take on temporary ends; we submerge ourselves temporarily in an alternate agency. Games turn out to be a vessel for communicating different modes of agency, for writing them down and storing them. Games create an archive of agencies. And playing games is how we familiarize ourselves with different modes of agency, which helps us develop our capacity to fluidly change our own style of agency.
Trust as an Unquestioning Attitude. C. Thi Nguyen - forthcoming - Oxford Studies in Epistemology.
Most theories of trust presume that trust is a conscious attitude that can be directed only at other agents. I sketch a different form of trust: the unquestioning attitude. What it is to trust, in this sense, is not simply to rely on something, but to rely on it unquestioningly. It is to rely on a resource while suspending deliberation over its reliability. To trust, then, is to set up open pipelines between yourself and parts of the external world — to permit external resources to have a similar relationship to one as one's internal cognitive faculties. This creates efficiency, but at the price of exquisite vulnerability. We must trust in this way because we are cognitively limited beings in a cognitively overwhelming world. Crucially, we can hold the unquestioning attitude towards objects. When I trust my climbing rope, I climb while putting questions of its reliability out of mind. Many people now trust, in this sense, complex technologies such as search algorithms and online calendars. But, one might worry, how could one ever hold such a normatively loaded attitude as trust towards mere objects? How could it ever make sense to feel betrayed by an object? Such betrayal is grounded, not in considerations of inter-agential cooperation, but in considerations of functional integration. Trust is our engine for expanding and outsourcing our agency — for binding external processes into our practical selves. Thus, we can be betrayed by our smartphones in the same way that we can be betrayed by our memory. When we trust, we try to make something a part of our agency, and we are betrayed when our part lets us down. This suggests a new form of gullibility: agential gullibility, which occurs when agents too hastily and carelessly integrate external resources into their own agency.
Unable to Do the Impossible. Anthony Nguyen - 2020 - Mind 129 (514):585-602.
Jack Spencer has recently argued for the striking thesis that, possibly, an agent is able to do the impossible—that is, perform an action that is metaphysically impossible for that person to perform. Spencer bases his argument on (Simple G), a case in which it is impossible for an agent G to perform some action but, according to Spencer, G is still intuitively able to perform that action. I reply that we would have to give up at least four action-theoretical principles if we accept that G is able to do the impossible. We may be best off retaining the principles and thus rejecting Spencer's intuition that G is able to do the impossible. I then consider an argument for the claim that G is able to do the impossible that goes through the Snapshot Principle. I, however, deny that any true variant of the Snapshot Principle shows that G is able to do the impossible. Moreover, the counterexample to the Snapshot Principle that I develop also suggests that G is unable to do the impossible in (Simple G). The most natural explanation for why an agent is unable to perform some action in this counterexample extends to (Simple G). Next, I develop three error theories for why we might initially share Spencer's intuition that G is able to do the impossible in (Simple G). Finally, I consider a couple other "G-cases" of Spencer's and find them all wanting. Perhaps we are unable to do the impossible.
Competition as Cooperation.C. Thi Nguyen - 2017 - Journal of the Philosophy of Sport 44 (1):123-137.details
Games have a complex, and seemingly paradoxical structure: they are both competitive and cooperative, and the competitive element is required for the cooperative element to work out. They are mechanisms for transforming competition into cooperation. Several contemporary philosophers of sport have located the primary mechanism of conversion in the mental attitudes of the players. I argue that these views cannot capture the phenomenological complexity of game-play, nor the difficulty and moral complexity of achieving cooperation through game-play. In this paper, I (...) present a different account of the relationship between competition and cooperation. My view is a distributed view of the conversion: success depends on a large number of features. First, the players must achieve the right motivational state: playing for the sake of the struggle, rather than to win. Second, successful transformation depends on a large number of extra-mental features, including good game design, and social and institutional features. (shrink)
Expertise and the Fragmentation of Intellectual Autonomy.C. Thi Nguyen - 2018 - Philosophical Inquiries 6 (2):107-124.details
In The Great Endarkenment, Elijah Millgram argues that the hyper-specialization of expert domains has led to an intellectual crisis. Each field of human knowledge has its own specialized jargon, knowledge, and form of reasoning, and each is mutually incomprehensible to the next. Furthermore, says Millgram, modern scientific practical arguments are draped across many fields. Thus, there is no person in a position to assess the success of such a practical argument for themselves. This arrangement virtually guarantees that mistakes will accrue (...) whenever we engage in cross-field practical reasoning. Furthermore, Millgram argues, hyper-specialization makes intellectual autonomy extremely difficult. Our only hope is to provide better translations between the fields, in order to achieve intellectual transparency. I argue against Millgram's pessimistic conclusion about intellectual autonomy, and against his suggested solution of translation. Instead, I take his analysis to reveal that there are actually several very distinct forms intellectual autonomy that are significantly in tension. One familiar kind is direct autonomy, where we seek to understand arguments and reasons for ourselves. Another kind is delegational autonomy, where we seek to find others to invest with our intellectual trust when we cannot understand. A third is management autonomy, where we seek to encapsulate fields, in order to manage their overall structure and connectivity. Intellectual transparency will help us achieve direct autonomy, but many intellectual circumstances require that we exercise delegational and management autonomy. However, these latter forms of autonomy require us to give up on transparency. (shrink)
Philosophy of Games.C. Thi Nguyen - 2017 - Philosophy Compass 12 (8):e12426.details
What is a game? What are we doing when we play a game? What is the value of playing games? Several different philosophical subdisciplines have attempted to answer these questions using very distinctive frameworks. Some have approached games as something like a text, deploying theoretical frameworks from the study of narrative, fiction, and rhetoric to interrogate games for their representational content. Others have approached games as artworks and asked questions about the authorship of games, about the ontology of the work (...) and its performance. Yet others, from the philosophy of sport, have focused on normative issues of fairness, rule application, and competition. The primary purpose of this article is to provide an overview of several different philosophical approaches to games and, hopefully, demonstrate the relevance and value of the different approaches to each other. Early academic attempts to cope with games tried to treat games as a subtype of narrative and to interpret games exactly as one might interpret a static, linear narrative. A faction of game studies, self-described as "ludologists," argued that games were a substantially novel form and could not be treated with traditional tools for narrative analysis. In traditional narrative, an audience is told and interprets the story, where in a game, the player enacts and creates the story. Since that early debate, theorists have attempted to offer more nuanced accounts of how games might achieve similar ends to more traditional texts. For example, games might be seen as a novel type of fiction, which uses interactive techniques to achieve immersion in a fictional world. Alternately, games might be seen as a new way to represent causal systems, and so a new way to criticize social and political entities. Work from contemporary analytic philosophy of art has, on the other hand, asked questions whether games could be artworks and, if so, what kind. Much of this debate has concerned the precise nature of the artwork, and the relationship between the artist and the audience. Some have claimed that the audience is a cocreator of the artwork, and so games are a uniquely unfinished and cooperative art form. Others have claimed that, instead, the audience does not help create the artwork; rather, interacting with the artwork is how an audience member appreciates the artist's finished production. Other streams of work have focused less on the game as a text or work, and more on game play as a kind of activity. One common view is that game play occurs in a "magic circle." Inside the magic circle, players take on new roles, follow different rules, and actions have different meanings. Actions inside the magic circle do not have their usual consequences for the rest of life. Enemies of the magic circle view have claimed that the view ignores the deep integration of game life from ordinary life and point to gambling, gold farming, and the status effects of sports. Philosophers of sport, on the other hand, have approached games with an entirely different framework. This has lead into investigations about the normative nature of games—what guides the applications of rules and how those rules might be applied, interpreted, or even changed. Furthermore, they have investigated games as social practices and as forms of life. (shrink)
Cultural Appropriation and the Intimacy of Groups.C. Thi Nguyen & Matthew Strohl - 2019 - Philosophical Studies 176 (4):981-1002.details
What could ground normative restrictions concerning cultural appropriation which are not grounded by independent considerations such as property rights or harm? We propose that such restrictions can be grounded by considerations of intimacy. Consider the familiar phenomenon of interpersonal intimacy. Certain aspects of personal life and interpersonal relationships are afforded various protections in virtue of being intimate. We argue that an analogous phenomenon exists at the level of large groups. In many cases, members of a group engage in shared practices (...) that contribute to a sense of common identity, such as wearing certain hair or clothing styles or performing a certain style of music. Participation in such practices can generate relations of group intimacy, which can ground certain prerogatives in much the same way that interpersonal intimacy can. One such prerogative is making what we call an appropriation claim. An appropriation claim is a request from a group member that non-members refrain from appropriating a given element of the group's culture. Ignoring appropriation claims can constitute a breach of intimacy. But, we argue, just as for the prerogatives of interpersonal intimacy, in many cases there is no prior fact of the matter about whether the appropriation of a given cultural practice constitutes a breach of intimacy. It depends on what the group decides together. (shrink)
The Radical Account of Bare Plural Generics.Anthony Nguyen - 2020 - Philosophical Studies 177 (5):1303-1331.details
Bare plural generic sentences pervade ordinary talk. And yet it is extremely controversial what semantics to assign to such sentences. In this paper, I achieve two tasks. First, I develop a novel classification of the various standard uses to which bare plurals may be put. This "variety data" is important—it gives rise to much of the difficulty in systematically theorizing about bare plurals. Second, I develop a novel account of bare plurals, the radical account. On this account, all bare plurals (...) fail to express propositions. The content of a bare plural has to be pragmatically "completed" by a speaker in order for her to make an assertion. At least the content of a quantifier expression has to be supplied. But sometimes, the content of a sentential operator or modal verb is also supplied. The radical account straightforwardly explains the variety data: Speakers' communicative intentions vary wildly across different contexts. (shrink)
The Uses of Aesthetic Testimony.C. Thi Nguyen - 2017 - British Journal of Aesthetics 57 (1):19-36.details
The current debate over aesthetic testimony typically focuses on cases of doxastic repetition — where an agent, on receiving aesthetic testimony that p, acquires the belief that p without qualification. I suggest that we broaden the set of cases under consideration. I consider a number of cases of action from testimony, including reconsidering a disliked album based on testimony, and choosing an artistic educational institution from testimony. But this cannot simply be explained by supposing that testimony is usable for (...) action, but unusable for doxastic repetition. I consider a new asymmetry in the usability of aesthetic testimony. Consider the following cases: we seem unwilling to accept somebody hanging a painting in their bedroom based merely on testimony, but entirely willing to accept hanging a painting in a museum based merely on testimony. The switch in intuitive acceptability seems to track, in some complicated way, the line between public life and private life. These new cases weigh against a number of standing theories of aesthetic testimony. I suggest that we look further afield, and that something like a sensibility theory, in the style of John McDowell and David Wiggins, will prove to be the best fit for our intuitions for the usability of aesthetic testimony. I propose the following explanation for the new asymmetry: we are willing to accept testimony about whether a work merits being found beautiful; but we are unwilling to accept testimony about whether something actually is beautiful. (shrink)
Autonomy, Understanding, and Moral Disagreement.C. Thi Nguyen - 2010 - Philosophical Topics 38 (2):111-129.details
Should the existence of moral disagreement reduce one's confidence in one's moral judgments? Many have claimed that it should not. They claim that we should be morally self-sufficient: that one's moral judgment and moral confidence ought to be determined entirely by one's own reasoning. Others' moral beliefs ought not impact one's own in any way. I claim that moral self-sufficiency is wrong. Moral self-sufficiency ignores the degree to which moral judgment is a fallible cognitive process like all the rest. In this (...) paper, I take up two possible routes to moral self-sufficiency. First, I consider Robert Paul Wolff's argument that an autonomous being is required to act from his own reasoning. Does Wolff's argument yield moral self-sufficiency? Wolff's argument does forbid unthinking obedience. But it does not forbid guidance: the use of moral testimony to glean evidence about nonmoral states of affairs. An agent can use the existence of agreement or disagreement as evidence concerning the reliability of their own cognitive abilities, which is entirely nonmoral information. Corroboration and discorroboration yield nonmoral evidence, and no reasonable theory of autonomy can forbid the use of nonmoral evidence. In fact, by using others to check on their own cognitive functionality, an agent is reasoning better and is thereby more autonomous. Second, I consider Philip Nickel's requirement that moral judgment proceed from personal understanding. I argue that the requirement of understanding does forbid unthinking obedience, but not discorroboration. When an agent reasons morally, and then reduces confidence in their judgments through discorroboration, they are in full contact with the moral reasons, and with the epistemic reasons. Discorroboration yields more understanding, not less. (shrink)
The Right Way to Play a Game.C. Thi Nguyen - 2019 - Game Studies 19 (1).details
Is there a right or wrong way to play a game? Many think not. Some have argued that, when we insist that players obey the rules of a game, we give too much weight to the author's intent. Others have argued that such obedience to the rules violates the true purpose of games, which is fostering free and creative play. Both of these responses, I argue, misunderstand the nature of games and their rules. The rules do not tell us how (...) to interpret a game; they merely tell us what the game is. And the point of the rules is not always to foster free and creative play. The point can be, instead, to communicate a sculpted form of activity. And in games, as with any form of communication, we need some shared norms to ground communicative stability. Games have what has been called a "prescriptive ontology." A game is something more than simply a piece of material. It is some material as approached in a certain specified way. These prescriptions help to fix a common object of attention. Games share this prescriptive ontology with more traditional kinds of works. Novels are more than just a set of words on a page; they are those words read in a certain order. Games are more than just some software or cardboard bits; they are those bits interacted with according to certain rules. Part of a game's essential nature is the prescriptions for how we are to play it. What's more, if we investigate the prescriptive ontology of games, we will uncover at least three distinct prescriptive categories of games. Party games prescribe that we encounter the game once; heavy strategy games prescribe we encounter the game many times; and community evolution games prescribe that we encounter the game while embedded in an ongoing community of play. (shrink)
The Aesthetics of Rock Climbing.C. Thi Nguyen - 2017 - The Philosophers' Magazine 78:37-43.details
An Ethics of Uncertainty.C. Thi Nguyen - 2011 - Dissertation, UCLAdetails
Moral reasoning is as fallible as reasoning in any other cognitive domain, but we often behave as if it were not. I argue for a form of epistemically-based moral humility, in which we downgrade our moral beliefs in the face of moral disagreement. My argument combines work in metaethics and moral intuitionism with recent developments in epistemology. I argue against any demands for deep self-sufficiency in moral reasoning. Instead, I argue that we need to take into account significant socially sourced (...) information, especially as a check for failures on our own moral intuitions and reasoning. -/- First, I argue for an epistemically plausible version of moral intuitionism, based on recent work in epistemic entitlement and epistemic warrant. Second, I argue that getting clear on the epistemic basis shows the defeasibility of moral judgment. Third, I argue the existence of moral disagreement is a reason to reduce our certainty in moral judgment. Fourth, I argue that this effect is not a violation of norms of autonomy for moral judgment. (shrink)
Some Stochastic Processes with Jumps (MỘT SỐ QUÁ TRÌNH NGẪU NHIÊN CÓ BƯỚC NHẢY).Hoàng Thị Phương Thảo - 2015 - Dissertation, Vietnam National University, Hanoi.details
Some Stochastic Processes with Jumps (MỘT SỐ QUÁ TRÌNH NGẪU NHIÊN CÓ BƯỚC NHẢY) -/- Hoàng Thị Phương Thảo -/- Doctoral Dissertation -/- VNU University of Science, Vietnam National University, Hanoi -/- Hanoi - 2015.
Three Impressive Research Works by Vietnamese Scholars in 2017 (3 công trình nghiên cứu ấn tượng của học giả Việt năm 2017).Lệ Thu - 2018 - Dân Trí Online 2018 (2).details
Dân Trí (17/02/2018) — With passion and a serious commitment to scientific research, Vietnamese scholars were the authors or lead co-authors of impressive studies published in the world's most prestigious journals over the past year.
The Debates and the Long-Awaited Reform.Trung Tran, Phuong-Thao T. Trinh, Thu-Trang Vuong & Hiep-Hung Pham - 2019 - In Quan-Hoang Vuong (ed.), The Vietnamese Social Sciences at a Fork in the Road. Warsaw, Poland: De Gruyter. pp. 17-32.details
Trung Tran, Phuong-Thao T. Trinh, Thu-Trang Vuong, Hiep-Hung Pham (2019). Chapter 1. The debates and the long-awaited reform. In Quan-Hoang Vuong, Trung Tran (Eds.), The Vietnamese Social Sciences at a Fork in the Road (pp. 17–32). Warsaw, Poland: De Gruyter / Sciendo. DOI:10.2478/9783110686081-006 -/- Online ISBN: 9783110686081 © 2019 De Gruyter / Sciendo.
"Cultural Additivity" and How the Values and Norms of Confucianism, Buddhism, and Taoism Co-Exist, Interact, and Influence Vietnamese Society: A Bayesian Analysis of Long-Standing Folktales, Using R and Stan.Quan-Hoang Vuong, Manh-Tung Ho, Viet-Phuong La, Dam Van Nhue, Bui Quang Khiem, Nghiem Phu Kien Cuong, Thu-Trang Vuong, Manh-Toan Ho, Hong Kong T. Nguyen, Viet-Ha T. Nguyen, Hiep-Hung Pham & Nancy K. Napier - manuscriptdetails
Every year, the Vietnamese people reportedly burned about 50,000 tons of joss papers, which took the form of not only bank notes, but iPhones, cars, clothes, even housekeepers, in hope of pleasing the dead. The practice was mistakenly attributed to traditional Buddhist teachings but originated in fact from China, which most Vietnamese were not aware of. In other aspects of life, there were many similar examples of Vietnamese so ready and comfortable with adding new norms, values, and beliefs, even contradictory (...) ones, to their culture. This phenomenon, dubbed "cultural additivity", prompted us to study the co-existence, interaction, and influences among core values and norms of the Three Teachings –Confucianism, Buddhism, and Taoism–as shown through Vietnamese folktales. By applying Bayesian logistic regression, we evaluated the possibility of whether the key message of a story was dominated by a religion (dependent variables), as affected by the appearance of values and anti-values pertaining to the Three Teachings in the story (independent variables). Our main findings included the existence of the cultural additivity of Confucian and Taoist values. More specifically, empirical results showed that the interaction or addition of the values of Taoism and Confucianism in folktales together helped predict whether the key message of a story was about Confucianism, β{VT ⋅ VC} = 0.86. Meanwhile, there was no such statistical tendency for Buddhism. The results lead to a number of important implications. First, this showed the dominance of Confucianism because the fact that Confucian and Taoist values appeared together in a story led to the story's key message dominated by Confucianism. Thus, it presented the evidence of Confucian dominance and against liberal interpretations of the concept of the Common Roots of Three Religions ("tam giáo đồng nguyên") as religious unification or unicity. Second, the concept of "cultural additivity" could help explain many interesting socio-cultural phenomena, namely the absence of religious intolerance and extremism in the Vietnamese society, outrageous cases of sophistry in education, the low productivity in creative endeavors like science and technology, the misleading branding strategy in business. We are aware that our results are only preliminary and more studies, both theoretical and empirical, must be carried out to give a full account of the explanatory reach of "cultural additivity". (shrink)
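To make the modeling idea above concrete, here is a minimal, hypothetical sketch of a logistic regression with an interaction term between two binary predictors (Taoist values VT and Confucian values VC present in a story) and a binary outcome (whether the story's key message is Confucian). It uses scikit-learn on synthetic data; the variable names, coefficients, and data are invented for illustration, and this is not the paper's Bayesian estimation procedure or its folktale dataset.

# Hypothetical sketch: logistic regression with a VT*VC interaction term on synthetic data.
# This is NOT the paper's dataset or its Bayesian model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
vt = rng.integers(0, 2, n)            # Taoist values present in the story?
vc = rng.integers(0, 2, n)            # Confucian values present in the story?
interaction = vt * vc                 # the VT.VC interaction term
# Synthetic outcome: stories showing both value sets lean toward a Confucian key message.
logit = -1.0 + 0.3 * vc + 0.1 * vt + 0.9 * interaction
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # 1 = key message dominated by Confucianism

X = np.column_stack([vt, vc, interaction])
model = LogisticRegression().fit(X, y)
print("estimated coefficients (VT, VC, VT*VC):", model.coef_[0])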
Post-Mortem Reproduction From a Vietnamese Perspective—an Analysis and Commentary.Hai Thanh Doan, Diep Thi Phuong Doan & Nguyen Kim The Duong - 2020 - Asian Bioethics Review 12 (3):257–288.details
Post-mortem reproduction is a complex and contested matter attracting attention from a diverse group of scholars and resulting in various responses from a range of countries. Vietnam has been reluctant to deal directly with this matter and has, accordingly, permitted post-mortem reproduction implicitly. First, by analysing Vietnam's post-mortem reproduction cases, this paper reflects on the manner in which Vietnamese authorities have handled each case in the context of the contemporary legal framework, and it reveals the moral questions arising therefrom. The (...) article then offers an account of Vietnamese social norms as an explanation for the tendency to conduct post-mortem reproduction. In arguing that a deeper and more thorough examination of the moral and ethical reasoning is required, the paper advocates in favour of supportive post-mortem reproduction regulation. In doing so, the paper seeks to reconcile the Vietnamese legal framework and post-mortem reproduction experiences of other countries. The article concludes that Vietnam and countries sharing the similar cultural traits should permit post-mortem reproduction explicitly. This would require full engagement with the ethical and legal issues arising, and careful promulgation of regulations and guidelines based on comparative experiences of a range of countries in handling this matter. (shrink)
GNOSEOLOGY: In Relation to Truth, Knowledge and Metaphysics.Knut Vuong Nguyen - manuscriptdetails
A short introduction on the problem of knowledge, and the problems treated by modern philosophy, in relation to truth and metaphysics.
Policy Response, Social Media and Science Journalism for the Sustainability of the Public Health System Amid the COVID-19 Outbreak: The Vietnam Lessons.La Viet Phuong, Pham Thanh Hang, Manh-Toan Ho, Nguyen Minh Hoang, Nguyen Phuc Khanh Linh, Vuong Thu Trang, Nguyen To Hong Kong, Tran Trung, Khuc Van Quy, Ho Manh Tung & Quan-Hoang Vuong - 2020 - Sustainability 12:2931.details
Vietnam, with a geographical proximity and a high volume of trade with China, was the first country to record an outbreak of the new Coronavirus disease (COVID-19), caused by the Severe Acute Respiratory Syndrome Coronavirus 2 or SARS-CoV-2. While the country was expected to have a high risk of transmission, as of April 4, 2020—in comparison to attempts to contain the disease around the world—responses from Vietnam are being seen as prompt and effective in protecting the interests of its citizens, (...) with 239 confirmed cases and no fatalities. This study analyzes the situation in terms of Vietnam's policy response, social media and science journalism. A self-made web crawl engine was used to scan and collect official media news related to COVID-19 between the beginning of January and April 4, yielding a comprehensive dataset of 14,952 news items. The findings shed light on how Vietnam—despite being under-resourced—has demonstrated political readiness to combat the emerging pandemic since the earliest days. Timely communication on any developments of the outbreak from the government and the media, combined with up-to-date research on the new virus by the Vietnamese science community, have altogether provided reliable sources of information. By emphasizing the need for immediate and genuine cooperation between government, civil society and private individuals, the case study offers valuable lessons for other nations concerning not only the concurrent fight against the COVID-19 pandemic but also the overall responses to a public health crisis. (shrink)
Cultural Evolution in Vietnam's Early 20th Century: A Bayesian Networks Analysis of Franco-Chinese House Designs.Quan-Hoang Vuong, Quang-Khiem Bui, Viet-Phuong La, Thu-Trang Vuong, Manh-Toan Ho, Hong-Kong T. Nguyen, Hong-Ngoc Nguyen, Kien-Cuong P. Nghiem & Manh-Tung Ho - manuscriptdetails
The study of cultural evolution has taken on an increasingly interdisciplinary and diverse approach in explicating phenomena of cultural transmission and adoptions. Inspired by this computational movement, this study uses Bayesian networks analysis, combining both the frequentist and the Hamiltonian Markov chain Monte Carlo (MCMC) approach, to investigate the highly representative elements in the cultural evolution of a Vietnamese city's architecture in the early 20th century. With a focus on the façade design of 68 old houses in Hanoi's Old Quarter (...) (based on 78 data lines extracted from 248 photos), the study argues that it is plausible to look at the aesthetics, architecture, and designs of the house façade to find traces of cultural evolution in Vietnam, which went through more than six decades of French colonization and centuries of sociocultural influence from China. The in-depth technical analysis, though refuting the presumed model on the probabilistic dependency among the variables, yields several results, the most notable of which is the strong influence of Buddhism over the decorations of the house façade. Particularly, in the top 5 networks with the best Bayesian Information Criterion (BIC) scores and p<0.05, the variable for decorations (DC) always has a direct probabilistic dependency on the variable B for Buddhism. The paper then checks the robustness of these models using Hamiltonian MCMC method and find the posterior distributions of the models' coefficients all satisfy the technical requirement. Finally, this study suggests integrating Bayesian statistics in the social sciences in general and for the study of cultural evolution and architectural transformation in particular. (shrink)
On How Religions Could Accidentally Incite Lies and Violence: Folktales as a Cultural Transmitter.Quan-Hoang Vuong, Manh-Tung Ho, Hong-Kong T. Nguyen, Thu-Trang Vuong, Trung Tran, Khanh-Linh Hoang, Thi-Hanh Vu, Phuong-Hanh Hoang, Minh-Hoang Nguyen, Manh-Toan Ho & Viet-Phuong La - 2020 - Palgrave Communications 6 (1):82.details
Folklore has a critical role as a cultural transmitter, all the while being a socially accepted medium for the expressions of culturally contradicting wishes and conducts. In this study of Vietnamese folktales, through the use of Bayesian multilevel modeling and the Markov chain Monte Carlo technique, we offer empirical evidence for how the interplay between religious teachings (Confucianism, Buddhism, and Taoism) and deviant behaviors (lying and violence) could affect a folktale's outcome. The findings indicate that characters who lie and/or commit (...) violent acts tend to have bad endings, as intuition would dictate, but when they are associated with any of the above Three Teachings, the final endings may vary. Positive outcomes are seen in cases where characters associated with Confucianism lie and characters associated with Buddhism act violently. The results supplement the worldwide literature on discrepancies between folklore and real-life conduct, as well as on the contradictory human behaviors vis-à-vis religious teachings. Overall, the study highlights the complexity of human decision-making, especially beyond the folklore realm. (shrink)
Central Limit Theorem for Functional of Jump Markov Processes.Nguyen Van Huu, Quan-Hoang Vuong & Minh-Ngoc Tran - 2005 - Vietnam Journal of Mathematics 33 (4):443-461.details
Some conditions are given to ensure that for a jump homogeneous Markov process $\{X(t),t\ge 0\}$ the law of the integral functional of the process $T^{-1/2} \int^T_0\varphi(X(t))dt$ converges to the normal law $N(0,\sigma^2)$ as $T\to \infty$, where $\varphi$ is a mapping from the state space $E$ into $\mathbb{R}$.
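As a rough numerical illustration of this statement (added here, not part of the paper), the sketch below simulates a two-state jump Markov process and evaluates the normalized functional $T^{-1/2}\int^T_0\varphi(X(t))dt$ over many independent paths, with $\varphi$ centered so that its mean under the stationary distribution is zero; for large $T$ the resulting sample of values should look approximately normal. All rates and the choice of $\varphi$ are arbitrary.

# Rough simulation sketch (not from the paper): a two-state jump Markov process.
import numpy as np

rng = np.random.default_rng(1)
q01, q10 = 1.0, 2.0                        # jump rates 0 -> 1 and 1 -> 0 (arbitrary)
pi = np.array([q10, q01]) / (q01 + q10)    # stationary distribution
phi = np.array([1.0, -1.0])                # phi(0), phi(1) (arbitrary values)
phi = phi - pi @ phi                       # center phi so its stationary mean is zero

def functional(T):
    """One sample of T**(-1/2) * integral_0^T phi(X(t)) dt for a simulated path."""
    t, state, acc = 0.0, 0, 0.0
    while t < T:
        rate = q01 if state == 0 else q10
        hold = rng.exponential(1.0 / rate)
        acc += phi[state] * min(hold, T - t)   # clip the last holding interval at T
        t += hold
        state = 1 - state
    return acc / np.sqrt(T)

samples = np.array([functional(100.0) for _ in range(1000)])
print("sample mean (should be near 0):", samples.mean())
print("sample standard deviation (estimate of sigma):", samples.std())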
Improving Bayesian Statistics Understanding in the Age of Big Data with the Bayesvl R Package.Quan-Hoang Vuong, Viet-Phuong La, Minh-Hoang Nguyen, Manh-Toan Ho, Manh-Tung Ho & Peter Mantello - 2020 - Software Impacts 4 (1):100016.details
The exponential growth of social data both in volume and complexity has increasingly exposed many of the shortcomings of the conventional frequentist approach to statistics. The scientific community has called for careful usage of the approach and its inference. Meanwhile, the alternative method, Bayesian statistics, still faces considerable barriers toward a more widespread application. The bayesvl R package is an open program, designed for implementing Bayesian modeling and analysis using the Stan language's no-U-turn (NUTS) sampler. The package combines the ability (...) to construct Bayesian network models using directed acyclic graphs (DAGs), the Markov chain Monte Carlo (MCMC) simulation technique, and the graphic capability of the ggplot2 package. As a result, it can improve the user experience and intuitive understanding when constructing and analyzing Bayesian network models. A case example is offered to illustrate the usefulness of the package for Big Data analytics and cognitive computing. (shrink)
Project Syndicate Features ISR's COVID-19 Research (Project Syndicate điểm nghiên cứu COVID-19 của ISR).Nguyen Phuc Khanh Linh - unknown.details
The world-renowned economics and policy outlet Project Syndicate has published an op-ed by author Hong-Kong Nguyen [1] highlighting the COVID-19 study newly published in Sustainability (doi:10.3390/su12072931 [2]) by the ISR Center, Phenikaa University, which used data from the A.I. for Social Data Lab (AISDL).
An Open Database of Productivity in Vietnam's Social Sciences and Humanities for Public Use.Quan-Hoang Vuong, Viet-Phuong La, Thu-Trang Vuong, Manh-Toan Ho, Hong K. T. Nguyen, Viet-Ha T. Nguyen, Hiep-Hung Pham & Manh-Tung Ho - 2018 - Scientific Data (Nature) 5 (180188):1-15.details
This study presents a description of an open database on scientific output of Vietnamese researchers in social sciences and humanities, one that corrects for the shortcomings in current research publication databases such as data duplication, slow update, and a substantial cost of doing science. Here, using scientists' self-reports, open online sources and cross-checking with Scopus database, we introduce a manual system and its semi-automated version of the database on the profiles of 657 Vietnamese researchers in social sciences and humanities who (...) have published in Scopus-indexed journals from 2008 to 2018. The final system also records 973 foreign co-authors, 1,289 papers, and 789 affiliations. The data collection method, highly applicable for other sources, could be replicated in other developing countries while its content be used in cross-section, multivariate, and network data analyses. The open database is expected to help Vietnam revamp its research capacity and meet the public demand for greater transparency in science management. (shrink)
A Time Travel Amidst a Recession.Huyen Nguyen - unknowndetails
Since I was born in 1998, three major international economic crises have hit: the Asian financial crisis in 1997-1999, the Dotcom bubble in the early 2000s, and the 2007-2008 financial crisis. And now, the world is undergoing another economic downturn due to the spread of the COVID-19 pandemic.
Can Hume Deny Reid's Dilemma?Anthony Nguyen - forthcoming - Hume Studies.details
Reid's dilemma concludes that, whether the idea associated with a denied proposition is lively or faint, Hume is committed to saying that it is either believed or merely conceived. In neither case would there be denial. If so, then Hume cannot give an adequate account of denial. I consider and reject Powell's suggestion that Hume could have advanced a "Content Contrary" account of denial that avoids Reid's dilemma. However, not only would a Humean Content Contrary account be viciously circular, textual (...) evidence suggests that Hume did not hold such an account. I then argue that Govier's distinction between force and vivacity cannot help Hume. Not only did Hume fail to recognize this distinction, we can advance a variant of Reid's dilemma even if we distinguish force from vivacity. (shrink)
How Twitter Gamifies Communication.C. Thi Nguyen - forthcoming - In Applied Epistemology. Oxford University Press.details
Twitter makes conversation into something like a game. It scores our communication, giving us vivid and quantified feedback, via Likes, Retweets, and Follower counts. But this gamification doesn't just increase our motivation to communicate; it changes the very nature of the activity. Games are more satisfying than ordinary life precisely because game-goals are simpler, cleaner, and easier to apply. Twitter is thrilling precisely because its goals have been artificially clarified and narrowed. When we buy into Twitter's gamification, then our values (...) shift from the complex and pluralistic values of communication, to the narrower quest for popularity and virality. Twitter's gamification bears some resemblance with the phenomena of echo chambers and moral outrage porn. In all these phenomena, we are instrumentalizing our ends for hedonistic reasons. We have shifted our aims in an activity, not because the new aims are more valuable, but in exchange for extra pleasure. (shrink)
Review of Jennifer Lena's "Entitled: Discriminating Tastes and the Expansion of the Arts". [REVIEW]C. Thi Nguyen - 2020 - Journal of Aesthetics and Art Criticism 78 (2):257-261.details
The "Same Bed, Different Dreams" of Vietnam and China: How (Mis)Trust Could Make or Break It.Hong-Kong T. Nguyen, Quan-Hoang Vuong, Manh-Tung Ho & Thu- Trang Vuong - manuscriptdetails
The relationship between Vietnam and China could be captured in the Chinese expression of "同床异梦", which means lying on the same bed but having different dreams. The two countries share certain cultural and political similarities but also diverge vastly in their national interests. This paper adds to the extant literature on this topic by analyzing the element of trust/mistrust in their interactions in trade-investment, tourism, and defense-security. The analysis shows how the relationship is increasingly interdependent but is equally fragile due (...) to the lack of trust on both sides. The mistrust or even distrust of Chinese subjects run deep within the Vietnamese mindset, from the skepticism of Chinese investment, Chinese tourists, discrimination against ethnic Chinese, to the caution against Chinese aggression in the South China Sea. The paper forecasts that, despite the deep-seated differences and occasional mistrust, going forward, neither side would risk damaging the status quo even when tensions peak. (shrink)
STOCK MARKET AND ECONOMIC GROWTH IN VIETNAM.Nguyen Thuy Hoan - 2019 - Dissertation, University of Central Lancashiredetails
For many years, the relationship between the financial system and economic growth has attracted the attention of scholars intending to uncover the direction of the relationship. The stock market is a part of the financial system and plays an essential role in channelling equity funds into the economy and creating liquidity for the equity instruments. A substantial body of empirical studies postulates that the stock market can boost the economic growth of an economy. However, other studies assert that, at best, the stock (...) market is an unimportant economic driver. (shrink)
Russian Scientific Journals Retract More Than 800 Publications (Các tạp chí KH Nga rút bỏ hơn 800 công bố).Nguyên Huyên - 2020 - SSHPA 2020 (1):1-2.details
Russian journals have just retracted more than 800 scientific papers. This is the initial result of a large-scale investigation conducted by the Russian Academy of Sciences (RAS), following numerous allegations of scientific misconduct in Russia.
A Functional Naturalism.Anthony Nguyen - forthcoming - Synthese.details
I provide two arguments against value-free naturalism. Both are based on considerations concerning biological teleology. Value-free naturalism is the thesis that both (1) everything is, at least in principle, under the purview of the sciences and (2) all scientific facts are purely non-evaluative. First, I advance a counterexample to any analysis on which natural selection is necessary to biological teleology. This should concern the value-free naturalist, since most value-free analyses of biological teleology appeal to natural selection. My counterexample is unique (...) in that it is likely to actually occur. It concerns the creation of synthetic life. Recent developments in synthetic biology suggest scientists will eventually be able to develop synthetic life. Such life, however, would not have any of its traits naturally selected for. Second, I develop a simple argument that biological teleology is a scientific but value-laden notion. Consequently, value-free naturalism is false. I end with some concluding remarks on the implications for naturalism, the thesis that (1). Naturalism may be salvaged only if we reject (2). (2) is a dogma that unnecessarily constrains our conception of the sciences. Only a naturalism that recognizes value-laden notions as scientifically respectable can be true. Such a naturalism is a functional naturalism. (shrink)
Beauty Culture in Post-Reform Vietnam: Glocalization or Homogenization?Hong-Kong Nguyen - 2020 - SocArXiv Papers.details
This essay re-examines the global beauty culture and ideals as established by the West and continually re-imagined worldwide through three primary lenses of race, gender, and political economy. Based on this understanding, it then delves into how the beauty culture in Vietnam has been shaped and transformed since the country conducted economic reforms in 1986 and has become more integrated into the global economy today.
Death: The Loss of Life-Constitutive Integration.Doyen Nguyen - 2019 - Diametros 60:72-78.details
This discussion note aims to address the two points which Lizza raises regarding my critique of his paper "Defining Death: Beyond Biology," namely that I mistakenly attribute a Lockean view to his 'higher brain death' position and that, with respect to the 'brain death' controversy, both the notions of the organism as a whole and somatic integration are unclear and vague. First, it is known from the writings of constitutionalist scholars that the constitution view of human persons, a theory which (...) Lizza also holds, has its roots in John Locke's thought. Second, contrary to Lizza's claims, the notions of the organism as a whole and somatic integration are both more than adequately described in the biomedical and biophilosophical literature. (shrink)
Monuments as Commitments: How Art Speaks to Groups and How Groups Think in Art.C. Thi Nguyen - 2019 - Pacific Philosophical Quarterly 100 (4):971-994.details
Art can be addressed, not just to individuals, but to groups. Art can even be part of how groups think to themselves – how they keep a grip on their values over time. I focus on monuments as a case study. Monuments, I claim, can function as a commitment to a group value, for the sake of long-term action guidance. Art can function here where charters and mission statements cannot, precisely because of art's powers to capture subtlety and emotion. In (...) particular, art can serve as the vessel for group emotions, by making emotional content sufficiently public so as to be the object of a group commitment. Art enables groups to guide themselves with values too subtle to be codified. (shrink)
MARKET RESEARCH: SWECO FINLAND's POTENTIAL ENTRY IN VIETNAM IN INFRASTRUCTURE CONSULTING BUSINESS.Duc Thanh Nguyen - 2019 - Dissertation, Satakunta University of Applied Sciencesdetails
The research resulted in a comprehensive overview of the Vietnamese public infrastructure market. Primary data confirmed most of the secondary data collected while adding more supporting details, with no perceivable contradiction among data sources. Macro-economically, Vietnam emerged as a high-potential market due to the rapid economic growth, the massive infrastructure demand driven by urbanization and industrialization, and the Government's recent efforts. However, there were market uncertainties that required careful consideration, with regulatory inefficiency and corruption being the most prominent ones. Culturally, (...) Vietnam and Finland exhibited distinct differences. The thesis concluded that Vietnam was a market of high potentiality and medium risk. The author's recommendations for market entry included a careful approach to mitigate risks concerning regulations, and the addition of a translator when negotiating to reduce the risk of cultural misunderstanding. (shrink)
Postpatriarchy.Dzung Kieu Nguyen - 2013 - Journal of Research in Gender Studies 3 (2):27-47.details
This article points out: "The combination of men and women in families is irrational." Men and women are two different "species." They only require sexual activities from each other, which are considered the less time-consuming activities during their lives. Sex must be treated as an enemy of marriage, due to its inferior and treacherous nature, and should not be included in marriage. Men and women should not live together in a family, since this institution must be understood as a permanent (...) place for all family members and is expected to have a solid structure. The traditional family model is the result of men"s enslavement of women and the exaggeration of the role of sex. This model creates an overwhelming advantage for men in selecting partners, proposing marriage, and other family activities. This article indicates: (i) The prominent family models existing between the group-marriage period and now are sex-based family models. (ii) Technical and social conditions nowadays require a new and sustainable base for a family. The selected targets in this study are the consanguineous and sworn relationships among same-sex people in case they choose to be heterosexual, (and in turn, among opposite-sex persons when they engage in homosexuality). For example, a family can consist of two blood brothers (or sworn brothers or cousins) with their children, in case they are heterosexual. This family model is named the non-sex based family (NSBF) model, since the sexual needs will be met outside the family. The article also outlines a post-patriarchal society with the presence of NSBFs, and argues that the new model should be seen as an essential development trend of society. (shrink)
Precis of Games: Agency as Art.C. Thi Nguyen - manuscriptdetails
Games are a distinctive form of art — and very different from many traditional arts. Games work in the medium of agency. Game designers don't just tell stories or create environments. They tell us what our abilities will be in the game. They set our motivations, by setting the scoring system and specifying the win-conditions. Game designers sculpt temporary agencies for us to occupy. And when we play games, we adopt these designed agencies, submerging ourselves in them, and taking on (...) their specified ends for a while. Games constitute a library of agencies — and by exploring them, we can learn new ways to inhabit our own agency. When we play games, we engage in a special form of agential fluidity. We can absorb ourselves temporarily in alternate, constructed agencies. Games make use of that capacity to record different practical mindsets. Games turn out to be our technology for communicating forms of agency. (shrink)
Socialism and Entrepreneurship.Lanh Thi Nguyen - 2020 - Dissertation, Trier Universitydetails
This thesis finds that individuals from North Vietnam have lower entrepreneurship intentions, are less likely to enroll in entrepreneurship education programs, and display lower likelihood to take over an existing business, compared to those from the South of Vietnam. These findings indicate the enduring influence of historical and institutional arrangements on entrepreneurship outcomes. The long-lasting effect of formerly socialist institutions on entrepreneurship is apparently deeper than previously discovered in the prominent case of East-West Germany and East-West Europe as well.
The Arts of Action.C. Thi Nguyen - 2020 - Philosophers' Imprint 20 (14):1-27.details
The theory and culture of the arts has largely focused on the arts of objects, and neglected the arts of action – the "process arts". In the process arts, artists create artifacts to engender activity in their audience, for the sake of the audience's aesthetic appreciation of their own activity. This includes appreciating their own deliberations, choices, reactions, and movements. The process arts include games, urban planning, improvised social dance, cooking, and social food rituals. In the traditional object arts, the (...) central aesthetic properties occur in the artistic artifact itself. It is the painting that is beautiful; the novel that is dramatic. In the process arts, the aesthetic properties occur in the activity of the appreciator. It is the game player's own decisions that are elegant, the rock climber's own movement that is graceful, and the tango dancers' rapport that is beautiful. The artifact's role is to call forth and shape that activity, guiding it along aesthetic lines. I offer a theory of the process arts. Crucially, we must distinguish between the designed artifact and the prescribed focus of aesthetic appreciation. In the object arts, these are one and the same. The designed artifact is the painting, which is also the prescribed focus of appreciation. In the process arts, they are different. The designed artifact is the game, but the appreciator is prescribed to appreciate their own activity in playing the game. Next, I address the complex question of who the artist really is in a piece of process art — the designer or the active appreciator? Finally, I diagnose the lowly status of the process arts. (shrink)
Trust and Sincerity in Art.C. Thi Nguyen - forthcoming - Ergo: An Open Access Journal of Philosophy.details
Our life with art is suffused with trust. We don't just trust one another's aesthetic testimony; we trust one another's aesthetic actions. Audiences trust artists to have made it worth their while; artists trust audiences to put in the effort. Without trust, audiences would have little reason to put in the effort to understand difficult and unfamiliar art. I offer a theory of aesthetic trust, which highlights the importance of trust in aesthetic sincerity. We trust in another's aesthetic sincerity when (...) we rely on them to fulfill their commitments to act for aesthetic reasons — rather than for, say, financial, social, or political reasons. We feel most thoroughly betrayed by an artist, not when they make bad art, but when they sell out. This teaches us something about the nature of trust in general. According to many standard theories, trust involves thinking the trusted to be cooperative or good-natured. But trust in aesthetic sincerity is different. We trust artists to be true to their own aesthetic sensibility, which might involve selfishly ignoring their audience's needs. Why do we care so much about an artist's sincerity, rather than merely trusting them to make good art? We emphasize sincerity when we wish to encourage originality, rather than to demand success along predictable lines. And we ask for sincerity when our goal is to discover a shared sensibility. In moral life, we often try to force convergence through coordinated effort. But in aesthetic life, we often hope for the lovely discovery that our sensibilities were similar all along. And for that we need to ask for sincerity, rather than overt coordination. (shrink)
|
CommonCrawl
|
arXiv:2010.11658 (quant-ph)
[Submitted on 22 Oct 2020 (v1), last revised 9 Jul 2021 (this version, v4)]
Title:On the Compressed-Oracle Technique, and Post-Quantum Security of Proofs of Sequential Work
Authors:Kai-Min Chung, Serge Fehr, Yu-Hsuan Huang, Tai-Ning Liao
Abstract: We revisit the so-called compressed oracle technique, introduced by Zhandry for analyzing quantum algorithms in the quantum random oracle model (QROM). To start off with, we offer a concise exposition of the technique, which easily extends to the parallel-query QROM, where in each query-round the considered algorithm may make several queries to the QROM in parallel. This variant of the QROM allows for a more fine-grained query-complexity analysis.
Our main technical contribution is a framework that simplifies the use of (the parallel-query generalization of) the compressed oracle technique for proving query complexity results. With our framework in place, whenever applicable, it is possible to prove quantum query complexity lower bounds by means of purely classical reasoning. More than that, for typical examples the crucial classical observations that give rise to the classical bounds are sufficient to conclude the corresponding quantum bounds.
We demonstrate this on a few examples, recovering known results (like the optimality of parallel Grover), but also obtaining new results (like the optimality of parallel BHT collision search). Our main target is the hardness of finding a $q$-chain with fewer than $q$ parallel queries, i.e., a sequence $x_0, x_1,\ldots, x_q$ with $x_i = H(x_{i-1})$ for all $1 \leq i \leq q$.
The above problem of finding a hash chain is of fundamental importance in the context of proofs of sequential work. Indeed, as a concrete cryptographic application of our techniques, we prove that the "Simple Proofs of Sequential Work" proposed by Cohen and Pietrzak remains secure against quantum attacks. Such an analysis is not simply a matter of plugging in our new bound; the entire protocol needs to be analyzed in the light of a quantum attack. Thanks to our framework, this can now be done with purely classical reasoning.
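For readers unfamiliar with the object in question, the following sketch (an illustration added here, not the paper's construction) computes a $q$-chain $x_0, x_1,\ldots, x_q$ with $x_i = H(x_{i-1})$, using SHA-256 as a stand-in for the random oracle $H$. Computed honestly, the chain takes $q$ sequential hash evaluations, which is the intuition behind its role in proofs of sequential work.

# Illustrative sketch: a q-chain x_0, x_1, ..., x_q with x_i = H(x_{i-1}).
# SHA-256 stands in for the random oracle H; this is not the paper's protocol.
import hashlib

def hash_chain(x0, q):
    chain = [x0]
    for _ in range(q):
        chain.append(hashlib.sha256(chain[-1]).digest())   # one sequential query per step
    return chain

chain = hash_chain(b"seed", 5)
for i, x in enumerate(chain):
    print(i, x.hex()[:16])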
Subjects: Quantum Physics (quant-ph); Computational Complexity (cs.CC); Cryptography and Security (cs.CR)
Cite as: arXiv:2010.11658 [quant-ph]
(or arXiv:2010.11658v4 [quant-ph] for this version)
From: Yu-Hsuan Huang
[v1] Thu, 22 Oct 2020 12:44:08 UTC (193 KB)
[v2] Mon, 11 Jan 2021 12:00:18 UTC (196 KB)
[v3] Mon, 15 Mar 2021 17:34:03 UTC (205 KB)
[v4] Fri, 9 Jul 2021 09:06:07 UTC (205 KB)
|
CommonCrawl
|
6. Geometry
6.01 Area of triangles
6.02 Area of special quadrilaterals
6.03 Area of composite shapes
6.04 Nets of solids
Investigation: Surface area using nets
Investigation: Develop the formula for volume of a rectangular prism
6.05 Volume of rectangular prisms
The volume of a three dimensional shape is the amount of space that is contained within that shape.
A quantity of volume is represented in terms of the volume of a unit cube, which is a cube with side length $1$ unit. By definition, a single unit cube has a volume of $1$ cubic unit, written as $1$ unit³.
1 unit cube has a volume of 1 unit³.
The image below shows a rectangular prism with length $5$ units, width $3$ units, and height $2$ units. Notice that the length of each edge corresponds to the number of unit cubes that could be lined up side by side along that edge.
How many unit cubes fit within this shape?
We can find the number of unit cubes that could fit inside the rectangular prism by taking the product of the three side lengths. This gives $5\times3\times2=30$, so there are $30$ unit cubes in the prism, which means it has a volume of $30$ unit³.
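As a quick check of the counting argument above, the short sketch below (added for illustration, not part of the original lesson) counts the unit cubes one by one with nested loops and compares the count with the product of the three side lengths.

# Count the unit cubes in a 5 x 3 x 2 rectangular prism, then compare with the product.
length, width, height = 5, 3, 2

count = 0
for _ in range(length):
    for _ in range(width):
        for _ in range(height):
            count += 1               # one unit cube at each position

print(count)                          # 30
print(length * width * height)        # 30, the same value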
Use the sliders to change the length, width, and height of the rectangular prism. Consider the questions below.
Why do you think all of the unit cubes in the base are shown?
If we count the number of unit cubes in the base, how can we use the height to get the total volume (number of unit cubes)?
What product could we use the find the volume?
Volume of a rectangular prism
In the same way that the area of a two dimensional shape is related to the product of two perpendicular lengths, the volume of a three dimensional shape is related to the product of three mutually perpendicular lengths (each of the three lengths is perpendicular to the other two).
The volume of a rectangular prism is given by
$\text{Volume} = \text{length} \times \text{width} \times \text{height}$, or
$V = l \times w \times h$
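The formula can also be written as a small reusable function; the sketch below is an illustration added here, not part of the original lesson.

def volume_rectangular_prism(length, width, height):
    """Volume of a rectangular prism: V = l * w * h."""
    return length * width * height

print(volume_rectangular_prism(5, 3, 2))   # 30 cubic units, matching the counted cubes above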
A cube can be thought of as a special type of rectangular prism, one that has all sides equal in length. The formula for the volume of a cube is similar to the formula for the area of a square.
Volume of a cube
The volume of a cube is given by
$\text{Volume }=\text{side }\times\text{side }\times\text{side }$, or
$V=s\times s\times s=s^3$
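As a quick check of the two formulas above (a small Python sketch of our own, not part of the lesson), the same calculations can be written as:

def volume_rectangular_prism(length, width, height):
    # V = l x w x h
    return length * width * height

def volume_cube(side):
    # V = s^3
    return side ** 3

print(volume_rectangular_prism(5, 3, 2))  # 30, matching the unit-cube count above
print(volume_cube(4))                     # 64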
Find the volume of the following rectangular prism.
Think: The side lengths have units of cm, so the volume will be in cm³.
Do: The base of the prism has a width of $2$ cm and a length of $7$ cm, and the height of the prism is $9$ cm. We will use these sides in the formula for the volume of a rectangular prism.
$\text{Volume }$ $=$ $\text{length }\times\text{width }\times\text{height }$ (Formula for the volume of a rectangular prism)
$=$ $7\times2\times9$ (Substitute the values for the length, width, and height)
$=$ $126$ (Perform the multiplication to find the volume)
So this rectangular prism has a volume of $126$ cm³.
The local swimming pool is $25$ m long. It has eight lanes, each $2$ m wide, and its depth is $1.5$ m. What is the volume of water in the pool?
Think: The water in the pool is in the shape of a rectangular prism, so to find its volume we need to find the side lengths of this prism. The length and depth of the pool are two side lengths we can use. The final side length is found by multiplying the number of lanes by the width of each lane.
Do: First we calculate the width of the pool using the width of each swim lane: $8\times2$ m $=16$ m. Next we use the formula for the volume of a rectangular prism.
$=$ $25\times16\times1.5$ (Substitute the values for the length, width, and height)
So the water in the pool has a volume of $600$ m³.
Reflect: Even though the volume formula uses the terms "length", "width", and "height", when referring to everyday objects it may be more appropriate or more common to use alternative words like "width", "depth", or "thickness". In this example, we could just as well have used the formula $\text{Volume }=\text{length }\times\text{width }\times\text{depth }$.
We use special units to describe volume, based on the notion of cubic units described above. Because the units for length include millimeters, centimeters, meters, and kilometers, we end up with the following units for volume.
Units of Volume
cubic millimeters = mm³
(picture a cube with side lengths of $1$ mm each - that's pretty small!)
cubic centimeters = cm³
(picture a cube with side lengths of $1$ cm each - about the size of a die)
cubic meters = m³
(picture a cube with side lengths of $1$ m each - what could be this big?)
Before we start a question, it is important to check that all of the sides are in the same unit. If they aren't, then we should convert them to the same unit.
Find the volume of the rectangular prism shown.
Find the volume of the cube shown.
A box is $1$ m long, $20$ cm high and $30$ cm wide. What is the volume of the box in cubic centimeters?
6.G.2
Find the volume of a right rectangular prism with fractional edge lengths by packing it with unit cubes of the appropriate unit fraction edge lengths, and show that the volume is the same as would be found by multiplying the edge lengths of the prism. Apply the formulas V = lwh and V = Bh to find volumes of right rectangular prisms with fractional edge lengths in the context of solving real-world and mathematical problems.
Numerical solution for generalized nonlinear fractional integro-differential equations with linear functional arguments using Chebyshev series
Khalid K. Ali1,
Mohamed A. Abd El Salam1,
Emad M. H. Mohamed1,
Bessem Samet2,
Sunil Kumar ORCID: orcid.org/0000-0001-5420-81473 &
M. S. Osman4
Advances in Difference Equations volume 2020, Article number: 494 (2020) Cite this article
In the present work, a numerical technique for solving a general form of nonlinear fractional order integro-differential equations (GNFIDEs) with linear functional arguments using Chebyshev series is presented. The recommended equation with its linear functional argument produces a general form of delay, proportional delay, and advanced non-linear arbitrary order Fredholm–Volterra integro-differential equations. The spectral collocation method is extended to study this problem as a matrix discretization scheme, where the fractional derivatives are characterized in the Caputo sense. The collocation method transforms the given equation and conditions to an algebraic nonlinear system of equations with unknown Chebyshev coefficients. Additionally, we present a general form of the operational matrix for derivatives. The introduced operational matrix of derivatives includes arbitrary order derivatives and the operational matrix of the ordinary derivative as a special case. To the best of the authors' knowledge, there is no other work discussing this point. Numerical test examples are given, and the achieved results show that the recommended method is very effective and convenient.
Nonlinear differential equations (DEs) and integro-differential equations (IDEs) are of great importance in the modeling of many phenomena in physics and engineering [1–17]. Fractional differential equations involving the Caputo and other fractional derivatives, which are a generalization of classical differential equations, have attracted widespread attention [18–25]. In the last decade or so, several studies have been carried out to develop numerical schemes to deal with fractional integro-differential equations (FIDEs) of both linear and nonlinear type. Successive approximation methods such as Adomian decomposition [26], He's variational iteration technique [8], HPM [5], He's HPM [27], modified HPM [28], the finite difference method [29], a modified reproducing kernel discretization method [30], and the differential transformation method [31] were used to deal with FIDEs. Spectral methods with different bases were also applied to FIDEs, for example Chebyshev and Taylor collocation, Haar wavelet, Tau, and Walsh series schemes [32–39]. The collocation method is one of the powerful spectral methods widely used for solving fractional differential and integro-differential equations [40–44]. Further, the numerical solution of delay and advanced DEs of arbitrary order has been reported by many researchers [45–58]. Differential equations with advanced arguments have received fewer contributions in the mathematical literature than delay differential equations, which developed greatly in the last decade [59, 60]. A monotone iterative technique was introduced with the Riemann–Liouville fractional derivative to deal with FIDEs with advanced arguments [61], while a collocation method with Bessel polynomials treated linear Fredholm integro-differential-difference equations [62]. In our previous work, the Tau method with Chebyshev polynomials was employed to deal with linear fractional differential equations with linear functional arguments [63]; subsequently, the Chebyshev collocation method was extended to fractional differential equations with delay [64]. All reported works considered equations with functional arguments either with integer order derivatives or, in the linear case, with fractional derivatives.
In this work, we introduce a general form of nonlinear fractional integro-differential equations (GNFIDEs) with linear functional arguments, which is a more general form of nonlinear fractional pantograph and Fredholm–Volterra integro-differential equations with linear functional arguments [65–69]. The spectral collocation method is used with Chebyshev polynomials of the first kind as a matrix discretization method to treat the proposed equations. An operational matrix for derivatives is presented. The introduced operational matrix of derivatives includes fractional order derivatives and the operational matrix of ordinary derivative as a special case. No other studies have discussed this point.
The proposed GNFIDEs with linear functional arguments are presented as follows:
$$\begin{aligned} &\sum_{k=0}^{n_{1}}\sum _{i=0}^{n_{2}}Q_{k,i}(x)y^{k}(x) y^{(\nu _{i})}(p_{i}x+\xi _{i})+\sum _{h=1}^{n_{3}}\sum_{j=0}^{n_{4}}P_{h,j}(x)y^{(h)}(x) y^{(\alpha _{j})}(q_{j}x+\zeta _{j}) \\ &\quad=f(x)+ \int _{a}^{b}\sum_{d=0}^{n_{5}}K_{d}(x,t) y^{(\upsilon _{d})}(t) \,dt+ \int _{a}^{\phi (x)}\sum_{c=0}^{n_{6}}V_{c}(x,t) y^{(\beta _{c})}(t) \,dt, \end{aligned}$$
where \(x\in [a,b]\), \(Q_{k,i}(x), P_{h,j}(x)\), \(f(x)\), \(V_{c}(x,t), K_{d}(x,t)\) are well-defined functions, and \(a,b,p_{i},\xi _{i},q_{j}, \zeta _{j}\in \Re \) where \(p_{i},q_{j}\neq 0\), \(\nu _{i} \geq 0\), \(\alpha _{j} \geq 0\), \(\upsilon _{d}\geq 0\), \(\beta _{c}\geq 0\) and \(i - 1 < \nu _{i} \leq i \), \(j - 1 < \alpha _{j} \leq j \), \(d - 1 < \upsilon _{d}\leq d\), \(c - 1 < \beta _{c}\leq c\), \(n_{i} \in \mathbb{N}\), under the conditions
$$ y^{(i)}(\eta _{i})=\mu _{i}, \quad i=0,1,2, \ldots,m-1, $$
where \(\eta _{i}\in [a,b]\), and m is the highest integer derivative order or, when the highest order is fractional, the smallest integer greater than it. The general form (1) contains at least three different arguments, so the following corollary defines the interval to which the independent variable x belongs. Chebyshev polynomials of the first kind are used in this work to approximate the solution of the suggested equation (1). The Chebyshev polynomials are defined on \([-1, 1]\).
Corollary 1.1
The independent variable x of (1) belongs to \([a,b]\), which is the intersection of the intervals of the different arguments and \([-1,1]\), i.e. \(x\in [a,b]=[\frac{-1+\xi _{i}}{p_{i}},\frac{1+\xi _{i}}{p_{i}}] \cap [\frac{-1+\zeta _{j}}{q_{j}},\frac{1+\zeta _{j}}{q_{j}}] \cap [-1,1]\).
General notations
In this section, some definitions and properties for the fractional derivative and Chebyshev polynomials are listed [63, 64, 70, 71].
The Caputo fractional derivative
The Caputo fractional derivative operator \(D^{\gamma }_{t}\) of order γ is characterized in the following form:
$$ D^{\gamma }_{t}\varPsi (x) = \frac{1}{\varGamma (n-\gamma )} \int _{0}^{x} \frac{\varPsi ^{(n)}(t)}{(x-t)^{\gamma -n+1}}\,dt,\quad \gamma > 0, $$
where \(x > 0\), \(n-1 < \gamma \leq n, n \in \mathbb{N}_{0}\), and \(\mathbb{N}_{0} = \mathbb{N}-\{0\}\).
\(D^{\gamma }_{t} \sum_{i=0}^{m}\lambda _{i} \varPsi _{i}(x)= \sum_{i=0}^{m} \lambda _{i} D^{\gamma }_{t}\varPsi _{i}(x)\), where \(\lambda _{i}\) and γ are constants.
The Caputo fractional differentiation of a constant is zero.
\(D^{\gamma }_{t} x^{k}= \bigl\{ \scriptsize{ \begin{array}{l@{\quad}l} 0 & \text{for } k\in \mathbb{N}_{0}\text{ and } k<\lceil \gamma \rceil, \\ \frac{\varGamma (k+1) x^{k-\gamma }}{\varGamma (k+1-\gamma )} & \text{for } k\in \mathbb{N}_{0} \text{ and } k\geq \lceil \gamma \rceil, \end{array}}\)
where \(\lceil \gamma \rceil \) denotes the smallest integer greater than or equal to γ.
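The power rule above is easy to evaluate directly. The following small Python sketch (the helper name is ours) returns the Caputo derivative of x^k at a point, using the gamma function from the standard library and returning zero when k < ⌈γ⌉:

import math

def caputo_monomial(k: int, gamma_: float, x: float) -> float:
    """Caputo derivative D^gamma of x**k evaluated at x, for integer k >= 0."""
    if k < math.ceil(gamma_):
        return 0.0
    return math.gamma(k + 1) / math.gamma(k + 1 - gamma_) * x ** (k - gamma_)

# D^{0.5} x^2 at x = 1 equals Gamma(3)/Gamma(2.5) ~ 1.5045;
# with gamma_ = 2 the same call returns 2, the ordinary second derivative of x^2.
print(caputo_monomial(2, 0.5, 1.0), caputo_monomial(2, 2.0, 1.0))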
Chebyshev polynomials
The Chebyshev polynomials \(T_{n}(x)\) of the first kind are orthogonal polynomials in x of degree n, defined on \([-1, 1]\) such that
$$ T_{n}(x)=\cos n\theta , $$
where \(x=\cos \theta \) and \(\theta \in [0, \pi ]\). The polynomials \(T_{n}(x)\) are generated by using the following recurrence relations:
$$ T_{n+1}(x)=2xT_{n}(x)-T_{n-1}(x), $$
with initials
$$ T_{0}(x)=1, \qquad T_{1}(x)=x,\quad n=1,2,\ldots . $$
The Chebyshev polynomials \(T_{n}(x)\) are explicitly expressed in terms of \(x^{n}\) in the following form:
$$ T_{n} (x)=\sum_{k=0}^{[n/2]}w_{k}^{(n)} x^{n-2k} , $$
$$ w_{k}^{(n)} =(-1)^{k} 2^{n-2k-1} \frac{n}{n-k} \begin{pmatrix} {n-k} \\ k \end{pmatrix},\quad 2k\le n. $$
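A small NumPy sketch (function name ours) that generates T_n(x) from the recurrence above and checks it against the trigonometric definition T_n(x) = cos(nθ), x = cos θ:

import numpy as np

def chebyshev_T(n: int, x: np.ndarray) -> np.ndarray:
    """Evaluate T_n(x) via T_{n+1} = 2 x T_n - T_{n-1}, with T_0 = 1 and T_1 = x."""
    if n == 0:
        return np.ones_like(x)
    t_prev, t_curr = np.ones_like(x), x.copy()
    for _ in range(n - 1):
        t_prev, t_curr = t_curr, 2 * x * t_curr - t_prev
    return t_curr

x = np.linspace(-1.0, 1.0, 9)
for n in range(6):
    assert np.allclose(chebyshev_T(n, x), np.cos(n * np.arccos(x)))
print("recurrence agrees with cos(n*arccos(x)) for n = 0, ..., 5")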
Procedure solution using the collocation method
The solution \(y(x)\) of (1) may be expanded by Chebyshev polynomial series of the first kind as follows [64]:
$$ y(x)=\sum_{n=0}^{\infty }c_{n}T_{n}(x). $$
By truncating series (5) to \(N<\infty \), the approximate solution is expressed in the following form:
$$\begin{aligned} y(x)&\cong \sum_{n=0}^{N}c_{n}T_{n}(x) \\ &=T(x)C, \end{aligned}$$
where \(T(x) \) and C are matrices given by
$$ T(x)= \begin{bmatrix} {T_{0} (x)} & {T_{1} (x)} & {\cdots } & {T_{N} (x)} \end{bmatrix},\qquad C=\biggl[\frac{1}{2}c_{0},c_{1},c_{2}, \ldots,c_{N}\biggr]^{T}. $$
Now, using (4), relation (6) may be written in the following form:
$$ y(x)=X(x) W^{T} C, $$
where W is a square lower triangle matrix with size \((N+1)\times (N+1) \) given by
$$ W_{ij}=\textstyle\begin{cases} 1 & \text{if } i=j=0, \\ (-1)^{k}\, 2^{i-2k-1}\, \frac{i}{i-k}\binom{i-k}{k} & \text{if } i+j \text{ even and } j\leq i, \\ 0 & \text{if } j>i \text{ or } i+j \text{ odd,} \end{cases} $$
$$ k=\textstyle\begin{cases} \frac{i}{2},\ldots,1,0 & \text{for even $i$,} \\ \frac{i-1}{2},\ldots,1,0 & \text{for odd $i$,} \end{cases}\displaystyle \quad i,j=0, 1, 2, \ldots, N. $$
$$ W= \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ -1 & 0 & 2 & 0 & 0 \\ 0 & -3 & 0 & 4 & 0 \\ 1 & 0 & -8 & 0 & 8 \end{pmatrix}_{N=4},\qquad W= \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ -1 & 0 & 2 & 0 & 0 & 0 \\ 0 & -3 & 0 & 4 & 0 & 0 \\ 1 & 0 & -8 & 0 & 8 & 0 \\ 0 & 5 & 0 & -20 & 0 & 16 \end{pmatrix}_{N=5} . $$
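To make the structure of W concrete, the following NumPy sketch (our own helper) fills W from the coefficient rule stated above and reproduces the N = 4 matrix printed here:

import numpy as np
from math import comb

def chebyshev_W(N: int) -> np.ndarray:
    """W such that T_i(x) = sum_j W[i, j] * x**j, following the w_k^{(n)} coefficients."""
    W = np.zeros((N + 1, N + 1))
    W[0, 0] = 1.0
    for i in range(1, N + 1):
        for k in range(i // 2 + 1):
            W[i, i - 2 * k] = (-1) ** k * 2 ** (i - 2 * k - 1) * i / (i - k) * comb(i - k, k)
    return W

print(chebyshev_W(4))   # matches the N = 4 matrix displayed above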
Then, by substituting from (6) in (1), we get
$$\begin{aligned} &\sum_{k=0}^{n_{1}}\sum _{i=0}^{n_{2}}Q_{k,r}(x){\bigl(T(x) C \bigr)}^{k} D^{\nu _{i}}T(p_{i}x+\xi _{i}) C \\ &\quad{}+\sum_{h=1}^{n_{3}}\sum _{j=0}^{n_{4}}P_{h,j}(x) \bigl(T^{(h)}(x) C\bigr) D^{ \alpha {j}}T(q_{j}x+\zeta _{j}) C \\ &\quad{}- \int _{a}^{b}\sum_{d=0}^{n_{5}}K_{d}(x,t) T^{(\upsilon _{d})}(t) C\,dt- \int _{a}^{\phi (x)}\sum_{c=0}^{n_{6}}V_{c}(x,t) T^{(\beta _{c})}(t) C \,dt=f(x). \end{aligned}$$
We can write (9) as follows:
$$\begin{aligned} & \Biggl[ \sum_{k=0}^{n_{1}}\sum _{i=0}^{n_{2}}Q_{k,r}(x){\bigl(T(x) C \bigr)}^{k} D^{\nu _{i}}T(p_{i}x+\xi _{i}) \\ &\quad{}+ \sum_{h=1}^{n_{3}}\sum _{j=0}^{n_{4}}P_{h,j}(x) \bigl(T^{(h)}(x) C\bigr) D^{ \alpha _{j}}T(q_{j}x+\zeta _{j}) \\ &\quad{}- \int _{a}^{b}\sum_{d=0}^{n_{5}}K_{d}(x,t) D^{\upsilon _{d}} T(t) \,dt- \int _{a}^{\phi (x)}\sum_{c=0}^{n_{6}}V_{c}(x,t) D^{\beta _{c}}T(t) \,dt \Biggr]C=f(x). \end{aligned}$$
The collocation points are defined in the following form:
$$ x_{l}=lh+a, $$
$$ h=\frac{b-a}{N},\quad l=0,1,2,\ldots,N. $$
By substituting the collocation points (11) in (10), we get
$$\begin{aligned} & \Biggl[ \sum_{k=0}^{n_{1}}\sum _{i=0}^{n_{2}}Q_{k,i}(x_{l}){ \bigl(T(x_{l}) C\bigr)}^{k} D^{\nu _{i}}T(p_{i}x_{l}+ \xi _{i}) \\ &\quad{}+ \sum_{h=1}^{n_{3}}\sum _{j=0}^{n_{4}}P_{h,j}(x_{l}) \bigl(T^{(h)}(x_{l}) C\bigr) D^{\alpha _{j}}T(q_{j}x_{l}+ \zeta _{j}) \\ &\quad{}- \int _{a}^{b}\sum_{d=0}^{n_{5}}K_{d}(x_{l},t) D^{\upsilon _{d}} T(t) \,dt- \int _{a}^{\phi (x_{l})}\sum_{c=0}^{n_{6}}V_{c}(x_{l},t) D^{ \beta _{c}}T(t) \,dt \Biggr]C=f(x_{l}). \end{aligned}$$
In the following theorem we introduce a general form of the operational matrix for the row vector \(T(x)\) in representation (7); it covers fractional order derivatives and yields the ordinary operational matrix as a special case when \(\alpha _{i}\rightarrow \lceil \alpha _{i}\rceil \).
Theorem 1
Assume that the Chebyshev row vector \(T(x)\) is represented as in (7); then the fractional order derivative of this vector, \(D^{\alpha _{i}}T(x)\), is
$$ D^{\alpha _{i}}T(x)=X_{\alpha _{i}}(x)B_{\alpha _{i}}W^{T}, $$
$$ X_{\alpha _{i}}(x)=\bigl[x^{{-\alpha _{i}+i}}\ x^{{1-\alpha _{i}+i}}\ x^{{2- \alpha _{i}+i}}\ \cdots\ x^{{N-1-\alpha _{i}+i}}\bigr],\quad i-1 < \alpha _{i} \leqslant i, $$
where \(B_{\alpha _{i}}\) is an \((N+1)\times (N+1)\) square upper diagonal matrix, whose elements \(b_{r,s}\) can be written as follows:
$$ \textstyle\begin{cases} b_{r,r+i}=\frac{\varGamma (r+i+1)}{\varGamma (r+i+1-\alpha _{i})} &r,s=0, 1, 2, \ldots, N, \\ 0& \textit{otherwise,} \end{cases} $$
where \(i-1 < \alpha _{i}\leqslant i, N\geqslant \lceil \alpha _{i} \rceil \).
$$\begin{aligned} D^{\alpha _{i}}T(x)&= D^{\alpha _{i}}\bigl[1\ x \ x^{2}\ \cdots\ x^{N}\bigr] W^{T} \\ &=X_{\alpha _{i}} B_{\alpha _{i}} W^{T}, \end{aligned}$$
if \(0 < \alpha _{1}\leqslant 1\), using Caputo's fractional properties, we get
$$\begin{aligned} & X_{\alpha _{1}}=\bigl[x^{1-{\alpha _{1}}}\ x^{2-{\alpha _{1}}}\ x^{3-{ \alpha _{1}}}\ \cdots\ x^{N+1-{\alpha _{1}}}\bigr], \end{aligned}$$
$$\begin{aligned} & B_{\alpha _{1}}= \begin{pmatrix} 0 &\frac{2}{\varGamma (2-\alpha _{1})} &0 \cdots &0 \\ 0 &0 &\frac{\varGamma (3)}{\varGamma (3-\alpha _{1})} \cdots &0 \\ \vdots &\vdots &\vdots &\vdots \\ 0 & 0 & 0 \cdots &\frac{\varGamma (N)}{\varGamma (N-\alpha _{1})} \\ 0 & 0 & 0 \cdots &0 \end{pmatrix}. \end{aligned}$$
As \(\alpha _{1} \longrightarrow 1\), the system reduces to the ordinary case \((B_{\alpha _{1}}\longrightarrow B)\) (see [64]).
Similarly, if \(1 < \alpha _{2}\leqslant 2\), then
$$\begin{aligned} &B_{\alpha _{2}}= \begin{pmatrix} 0 &0 &\frac{3}{\varGamma (3-\alpha _{2})} \cdots &0&0 \\ 0 &0 &0& \frac{\varGamma (4)}{\varGamma (4-\alpha _{2})}\cdots &0 \\ \vdots &\vdots &\vdots &\vdots &\vdots \\ 0 & 0 & 0 \cdots &0&\frac{\varGamma (N)}{\varGamma (N-\alpha _{2})} \\ 0 & 0 & 0 \cdots &0&0 \\ 0 & 0 & 0 \cdots &0&0 \end{pmatrix}. \end{aligned}$$
As \(\alpha _{2}\longrightarrow 2\), the system reduces to the ordinary case \((B_{\alpha _{2}}\longrightarrow B^{2})\) (see [64]).
In the same way, if we take \(2 < \alpha _{3}\leqslant 3\), then
$$\begin{aligned} & B_{\alpha _{3}}= \begin{pmatrix} 0 &0 &0&\frac{4}{\varGamma (4-\alpha _{3})} \cdots &0&0 \\ 0 &0 &0& 0&\frac{\varGamma (5)}{\varGamma (5-\alpha _{3})}&0 \\ \vdots &\vdots &\vdots &\vdots &\vdots &\vdots \\ 0 & 0 & 0 \cdots &0&0&\frac{\varGamma (N)}{\varGamma (N-\alpha _{3})} \\ 0 & 0 & 0 \cdots &0&0&0 \\ 0 & 0 & 0 \cdots &0&0&0 \\ 0 & 0 & 0 \cdots &0&0&0 \end{pmatrix}. \end{aligned}$$
By induction, for \(i-1 < \alpha _{i}\leqslant i\), \(X_{\alpha _{i}}\) takes the general form given in (14) and \(B_{\alpha _{i}}\) the form given in (15); thus the proposed operational matrix represents a kind of unification of the ordinary and fractional cases. □
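A NumPy sketch of this operational matrix (the function name is ours): the only nonzero entries sit on the i-th superdiagonal, with i = ⌈α⌉, and the entries follow the Caputo power rule, so the matrix collapses to the ordinary differentiation case as α tends to the integer i. The α = 1.5 case reproduces the matrix B_{ν₂} that appears in the first numerical example below.

import numpy as np
from math import gamma, ceil

def operational_matrix(alpha: float, N: int) -> np.ndarray:
    """B_alpha of size (N+1) x (N+1): b[r, r+i] = Gamma(r+i+1)/Gamma(r+i+1-alpha), i = ceil(alpha)."""
    i = max(ceil(alpha), 1)
    B = np.zeros((N + 1, N + 1))
    for r in range(N + 1 - i):
        B[r, r + i] = gamma(r + i + 1) / gamma(r + i + 1 - alpha)
    return B

print(np.round(operational_matrix(1.5, 4), 5))   # nonzeros 2.25676, 4.51352, 7.22163
print(operational_matrix(2.0, 4))                # ordinary second-derivative matrix B^2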
Now, we give the matrix representation for all terms in (12) as representation (13).
∗ The first term in (12) can be written as follows:
$$\begin{aligned} &\sum_{k=0}^{n_{1}} \sum_{i=0}^{n_{2}}Q_{k,i}(x_{l}){ \bigl(T(x_{l}) C\bigr)}^{k} D^{\nu _{i}}T(p_{i}x_{l}+ \xi _{i}) \\ &\quad=\sum_{k=0}^{n_{1}}\sum _{i=0}^{n_{2}}Q_{k,i}(x_{l}) { \bigl(\bar{X} \bar{W}^{T} \bar{C}\bigr)}^{k} X_{\nu _{i}}B_{\nu _{i}}H_{p_{i}}E_{ \xi _{i}} W^{T} C, \end{aligned}$$
$$\begin{aligned} &\bar{X}= \begin{pmatrix} X(x_{0})&0&0\cdots &0 \\ 0&X(x_{1})&0\cdots &0 \\ 0 & 0 &X(x_{2})\cdots &0 \\ \vdots &\vdots &\vdots &\vdots \\ 0 &0&0\cdots &X(x_{N}) \end{pmatrix} ,\\ & \bar{W}^{T}= \begin{pmatrix} {W}^{T}&0&0\cdots &0 \\ 0&{W}^{T}&0\cdots &0 \\ 0 & 0 &{W}^{T}\cdots &0 \\ \vdots &\vdots &\vdots &\vdots \\ 0 &0&0\cdots &{W}^{T} \end{pmatrix}, \qquad \bar{C}= \begin{pmatrix} C&0&0\cdots &0 \\ 0&C&0\cdots &0 \\ 0 & 0 &C\cdots &0 \\ \vdots &\vdots &\vdots &\vdots \\ 0 &0&0\cdots &C \end{pmatrix}. \end{aligned}$$
In addition, \(H_{p_{i}}\) is a square diagonal matrix of the coefficients for the linear argument, and the elements of \(H_{p_{i}}\) can be written as follows:
$$ h_{rs}=\textstyle\begin{cases} 0 & \text{if $r\neq s$;} \\ p_{i}^{r} & \text{if $r=s$.} \end{cases} $$
Moreover, \(E_{\xi _{i}}\) is a square upper triangle matrix for the shift of the linear argument, and the form of \(E_{\xi _{i}}\) is
$$ E_{\xi _{i}}= \begin{pmatrix} \binom{0}{0}\xi _{i}^{0} & \binom{1}{0}\xi _{i}^{1} & \binom{2}{0}\xi _{i}^{2} & \cdots & \binom{N}{0}\xi _{i}^{N} \\ 0 & \binom{1}{1}\xi _{i}^{0} & \binom{2}{1}\xi _{i}^{1} & \cdots & \binom{N}{1}\xi _{i}^{N-1} \\ 0 & 0 & \binom{2}{2}\xi _{i}^{0} & \cdots & \binom{N}{2}\xi _{i}^{N-2} \\ \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & 0 & \cdots & \binom{N}{N}\xi _{i}^{N-N} \end{pmatrix}. $$
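These two matrices are easy to build directly: H_p is diagonal with entries p^r, and E_ξ is upper triangular with entries \binom{s}{r}\xi^{s-r}, so that X(px+ξ) = X(x) H_p E_ξ. A small NumPy sanity-check sketch (helper names ours):

import numpy as np
from math import comb

def H_matrix(p: float, N: int) -> np.ndarray:
    """Diagonal matrix of the argument scaling: h[r, r] = p**r."""
    return np.diag([float(p) ** r for r in range(N + 1)])

def E_matrix(xi: float, N: int) -> np.ndarray:
    """Upper-triangular shift matrix: e[r, s] = C(s, r) * xi**(s - r) for s >= r."""
    E = np.zeros((N + 1, N + 1))
    for r in range(N + 1):
        for s in range(r, N + 1):
            E[r, s] = comb(s, r) * xi ** (s - r)
    return E

# Check that X(p*x + xi) = X(x) H_p E_xi for the monomial row vector X(x) = [1, x, ..., x^N].
N, p, xi, x = 4, 2.0, 1.0, 0.3
X = np.array([x ** j for j in range(N + 1)])
shifted = np.array([(p * x + xi) ** j for j in range(N + 1)])
print(np.allclose(shifted, X @ H_matrix(p, N) @ E_matrix(xi, N)))   # True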
∗ The second term in (12) can be written as follows:
$$\begin{aligned} &\sum_{h=1}^{n_{3}} \sum_{j=0}^{n_{4}}P_{h,j}(x_{l}) \bigl(T^{(h)}(x_{l}) C\bigr) D^{\alpha _{j}}T(q_{j}x_{l}+ \zeta _{j}) \\ &\quad=\sum_{h=1}^{n_{3}}\sum _{j=0}^{n_{4}}P_{h,j}(x_{l}) \bigl( \bar{X} \bar{B}_{h} \bar{W}^{T}\bar{C}\bigr) X_{\alpha _{j}} B_{\alpha _{j}} H_{Pi}E_{\zeta j}{W}^{T}C, \end{aligned}$$
$$ \bar{B}_{h}= \begin{pmatrix} 0 & B_{h} &0 \cdots &0 \\ 0 &0 & B_{h} \cdots &0 \\ \vdots &\vdots &\vdots &\vdots \\ 0 & 0 & 0 \cdots &B_{h} \\ 0 & 0 & 0 \cdots &0 \end{pmatrix}, $$
and \(B_{h}\) is the same as \(B_{\alpha _{i}} \) when \(h=\lceil \alpha _{i}\rceil \).
The matrix representation for the variable coefficients takes the form
$$ Q_{i,j}= \begin{pmatrix} Q_{i,j}(x_{0})&0&0& \ldots &0 \\ 0&Q_{i,j}(x_{1})&0& \ldots &0 \\ \vdots &\vdots &\vdots &\vdots &\vdots \\ 0&0&0& \ldots & Q_{i,j}(x_{N}) \end{pmatrix}. $$
∗ Matrix representation for integral terms: Now, we try to find the matrix form corresponding to the integral term. Assume that \(K_{d} (x,t)\) can be expanded to univariate Chebyshev series with respect to t as follows:
$$ K_{d} (x,t)\cong \sum_{r=0}^{N}u_{d,r}(x)T_{r}(t). $$
Then the matrix representation of the kernel function \(K_{d} (x,t) \) is given by
$$ K_{d} (x,t)\cong U_{d}(x)T^{T}(t), $$
$$ U_{d}(x)=\bigl[u_{d,0}(x)\ u_{d,1}(x)\ \cdots\ u_{d,N}(x)\bigr]. $$
Substituting relations (13) and (27) in the present integral part, we obtain
$$\begin{aligned} & \int _{a}^{b}K_{d}(x,t) y^{(\upsilon _{d})}(t) \,dt \\ &\quad = \int _{a}^{b}U_{d}(x)T^{T}(t) T^{(\upsilon _{d})}(t) C \,dt \\ &\quad= \int _{a}^{b}U_{d}(x)W X^{T}(t) X_{\upsilon _{d}}(t) B_{ \upsilon _{d}} W^{T} C \,dt \\ &\quad= \int _{a}^{b}U_{d}(x) W \bigl[t^{0}\ t^{1}\ \cdots\ t^{N} \bigr]^{T} \bigl[t^{{0-\upsilon _{d}+d}}\ t^{{1-\upsilon _{d}+d}}\ t^{{2- \upsilon _{d}+d}}\ \cdots\ t^{{N-1-\upsilon _{d}+d}}\bigr] B_{\upsilon _{d}} W^{T} C \,dt \\ &\quad=U_{d}(x) W \biggl( \int _{a}^{b} t^{p} t^{q-\upsilon _{d}+d} \,dt \biggr) B_{\upsilon _{d}} W^{T} C \\ &\quad=U_{d}(x) W \biggl( \int _{a}^{b} t^{p+q-\upsilon _{d}+d} \,dt \biggr) B_{\upsilon _{d}} W^{T} C \\ &\quad=U_{d}(x) W Z_{\upsilon _{d}} B_{\upsilon _{d}} W^{T} C,\quad p,q=0,1,\ldots,N, \end{aligned}$$
$$ Z_{d}= \int _{a}^{b} t^{p+q-\upsilon _{d}+d} \,dt,\quad p,q=0,1, \ldots,N, $$
$$ Z_{d}=[z_{pq}]= \frac{b^{p+q-\upsilon _{d}+d+1}-a^{p+q-\upsilon _{d}+d+1}}{p+q-\upsilon _{d}+d+1},\quad p,q=0,1,\ldots,N. $$
So, the present integral term can be written as:
$$\begin{aligned} \int _{a}^{b}\sum_{d=0}^{n_{5}}K_{d}(x_{l},t) y^{( \upsilon _{d})}(t) \,dt&=\sum_{d=0}^{n_{5}} U_{d}(x_{l}) W Z_{d} B_{ \upsilon _{d}} W^{T} C \\ &=\sum_{d=0}^{n_{5}} U_{d} W Z_{d} B_{\upsilon _{d}} W^{T} C, \end{aligned}$$
$$ U_{d}= \begin{pmatrix} U_{d}(x_{0}) \\ U_{d}(x_{1}) \\ \vdots \\ U_{d}(x_{N}) \end{pmatrix}. $$
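A direct transcription of the entries z_{pq} of Z_d defined just above (NumPy; the function name and the reading d = ⌈υ_d⌉ are our own):

import numpy as np
from math import ceil

def Z_matrix(a: float, b: float, upsilon: float, N: int) -> np.ndarray:
    """z[p, q] = (b**e - a**e) / e with e = p + q - upsilon + d + 1 and d = ceil(upsilon)."""
    d = ceil(upsilon)
    Z = np.zeros((N + 1, N + 1))
    for p in range(N + 1):
        for q in range(N + 1):
            e = p + q - upsilon + d + 1
            Z[p, q] = (b ** e - a ** e) / e
    return Z

print(np.round(Z_matrix(0.0, 1.0, 1.5, 4), 5))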
∗ Matrix representation for the integral term with variable upper limit: in the same way, \(V_{c} (x,t)\) can be expanded as in (26):
$$ V_{c} (x,t)\cong \sum_{r=0}^{N}g_{c,r}(x)T_{r}(t). $$
Then the matrix representation of the kernel function \(V_{c} (x,t) \) is given by
$$ V_{c}(x,t)\cong G_{c}(x)T^{T}(t), $$
$$ G_{c}(x)=\bigl[g_{c,0}(x)\ g_{c,1}(x)\ \cdots\ g_{c,N}(x)\bigr]. $$
$$\begin{aligned} & \int _{a}^{\phi (x)}V_{c}(x,t) y^{(\beta _{c})}(t) \,dt \\ &\quad = \int _{a}^{\phi (x)}G_{c}(x)T^{T}(t) D^{\beta _{c}}T(t) C \,dt \\ &\quad= \int _{a}^{\phi (x)}G_{c}(x)W X^{T}(t) X_{\beta _{c}}(t) B_{ \beta _{c}} W^{T} C \,dt \\ &\quad= \int _{a}^{\phi (x)}G_{c}(x) W \bigl[t^{0}\ t^{1}\ \cdots\ t^{N} \bigr]^{T} \bigl[t^{{0-\beta _{c}+c}}\ t^{{1-\beta _{c}+c}}\ t^{{2- \beta _{c}+c}}\ \cdots\ t^{{N-1-\beta _{c}+c}}\bigr] B_{\beta _{c}} W^{T} C \,dt \\ &\quad=G_{c}(x) W \biggl( \int _{a}^{\phi (x)} t^{p} t^{q-\beta _{c}+c} \,dt \biggr) B_{\beta _{c}} W^{T} C \\ &\quad=G_{c}(x) W \biggl( \int _{a}^{\phi (x)} t^{p+q-\beta _{c}+c} \,dt \biggr) B_{\beta _{c}} W^{T} C \\ &\quad=G_{c}(x) W Z_{\beta _{c}}(x) B_{\beta _{c}} W^{T} C,\quad p,q=0,1,\ldots,N, \end{aligned}$$
$$ Z_{\beta _{c}}(x)= \int _{a}^{\phi (x)} t^{p+q-\beta _{c}+c} \,dt,\quad p,q=0,1, \ldots,N, $$
$$ Z_{\beta _{c}}(x)=\bigl[z_{pq}(x)\bigr]= \frac{\phi (x)^{(p+q-\beta _{c}+c+1)}-a^{p+q-\beta _{c}+c+1}}{p+q-\beta _{c}+c+1},\quad p,q=0,1,\ldots,N. $$
So, the present integral term can be written as follows:
$$\begin{aligned} \int _{a}^{\phi (x)}\sum_{c=0}^{n_{6}}V_{c}(x_{l},t) y^{( \beta _{c})}(t) \,dt&=\sum_{c=0}^{n_{6}} G_{c}(x_{l}) W Z_{{\beta _{c}}}(x_{l}) B_{\beta _{c}} W^{T} C \\ & =\sum_{c=0}^{n_{6}} \bar{G_{c}} \bar{W} \bar{Z_{\beta _{c}}} \bar{B_{\beta _{c}}} \bar{W^{T}} \bar{C}, \end{aligned}$$
$$\begin{aligned} &\bar{G_{c}}= \begin{pmatrix} G_{c}(x_{0})&0&0\cdots &0 \\ 0&G_{c}(x_{1})&0\cdots &0 \\ 0 & 0 &G_{c}(x_{2})\cdots &0 \\ \vdots &\vdots &\vdots &\vdots \\ 0 &0&0\cdots &G_{c}(x_{N}) \end{pmatrix} ,\\ &\bar{Z_{\beta _{c}}}= \begin{pmatrix} {Z_{c}}(x_{0})&0&0\cdots &0 \\ 0&{Z_{c}}(x_{1})&0\cdots &0 \\ 0 & 0 &{Z_{c}}(x_{2})\cdots &0 \\ \vdots &\vdots &\vdots &\vdots \\ 0 &0&0\cdots &{Z_{c}}(x_{N}) \end{pmatrix}. \end{aligned}$$
Now, by substituting equations (24), (25), and (29) into (12), we have the fundamental matrix equation
$$\begin{aligned} & \Biggl[ \sum_{k=0}^{n_{1}} \sum_{i=0}^{n_{2}}Q_{k,i}(x) \bigl( \bar{X} \bar{W}^{T}\bar{C}\bigr)^{k}X_{\nu i}B_{\nu i}H_{Pi} D_{\xi _{i}}{W}^{T}C \\ &\quad{}+\sum_{h=1}^{n_{3}}\sum _{j=0}^{n_{4}}P_{h,j}(x) \bigl(\bar{X} \bar{B}_{h} \bar{W}^{T}\bar{C}\bigr) X_{\alpha j}B_{\alpha _{j}}H_{qj}E_{ \zeta j}{W}^{T}C \\ &\quad{}-\sum_{d=0}^{n_{5}} U_{d} W Z_{d} B_{\upsilon _{d}} W^{T} C-\sum _{c=0}^{n_{6}} \bar{G_{c}} \bar{W} \bar{Z_{c}} \bar{B_{\beta _{c}}} \bar{W^{T}} \bar{C} \Biggr]=F. \end{aligned}$$
We can write (34) in the form
$$ OC=F \quad\text{or}\quad [O;F], $$
$$\begin{aligned} \begin{aligned} O= {}&\sum_{k=0}^{n_{1}} \sum_{i=0}^{n_{2}}Q_{k,i}(x) \bigl( \bar{X} \bar{W}^{T}\bar{C}\bigr)^{k}X_{\nu i}B_{\nu i}H_{Pi} E_{\xi _{i}}{W}^{T} \\ &{}+\sum_{h=1}^{n_{3}}\sum _{j=0}^{n_{4}}P_{h,j}(x) \bigl(\bar{X} \bar{B}_{h} \bar{W}^{T}\bar{C}\bigr) X_{\alpha j}B_{\alpha _{j}}H_{qj}E_{ \zeta j}{W}^{T} \\ &{} -\sum_{d=0}^{n_{5}} U_{d} W Z_{d} B_{\upsilon _{d}} W^{T} C-\sum _{c=0}^{n_{6}} \bar{G_{c}} \bar{W} \bar{Z_{c}} \bar{B_{\beta _{c}}} \bar{W^{T}} \bar{C} , \\ F={}& \begin{pmatrix} f(x_{1}) \\ f(x_{2}) \\ \vdots \\ f(x_{N}) \end{pmatrix}. \end{aligned} \end{aligned}$$
Suppose \(k\geqslant 2\); then, for the terms free of derivatives in (1), using (6) we obtain
$$ y^{k}(x)=y^{k-1}(x) y (x) = \bigl(X(x) W^{T} C\bigr)^{k-1} X(x)W^{T} C. $$
We can achieve the matrix form of (37) by using the collocation points as follows:
$$ y^{k}(x)=\bigl(\bar{X} \bar{W}^{T} \bar{C} \bigr)^{k-1} X W^{T} C. $$
∗ We can obtain the matrix form for conditions (2) by using (6) in the form
$$ X(\eta _{i})B_{i}W^{T}C=\mu _{i}, \quad i=0,1,2,\ldots,m-1, $$
$$ M_{i}C=[\mu _{i}], $$
$$ M_{i}=X(\eta _{i})B_{i}W^{T}. $$
Consequently, replacing m rows of the augmented matrix \([O;F]\) by rows of the matrix \([M_{i};\mu _{i}]\), we have \([\bar{O};\bar{F}]\) or
$$ \bar{O}C=\bar{F}. $$
System (34), together with conditions, gives \((N+1)\) nonlinear algebraic equations which can be solved for the unknown \(c_{n}\), \(n = 0, 1, 2, \ldots,N\). Consequently, \(y(x)\) given as equation (6) can be calculated.
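Once the collocation rows and the condition rows have been assembled into the augmented system Ō C = F̄ described above, the last step is a standard nonlinear root-finding problem. A minimal sketch of that step (SciPy; build_residual is a placeholder for the problem-specific assembly, not a routine from the paper):

import numpy as np
from scipy.optimize import fsolve

def solve_collocation_system(build_residual, N, C0=None):
    """Solve the (N+1) nonlinear algebraic equations for the Chebyshev coefficients.

    build_residual(C) must return the residual vector of the augmented system,
    i.e. the collocation equations with m rows replaced by the condition rows."""
    if C0 is None:
        C0 = np.zeros(N + 1)                    # simple initial guess
    C, info, ier, msg = fsolve(build_residual, C0, full_output=True)
    if ier != 1:
        raise RuntimeError("collocation system did not converge: " + msg)
    return C

The approximate solution y_N(x) = Σ c_n T_n(x) can then be evaluated, for instance with numpy.polynomial.chebyshev.chebval.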
Numerical examples
In this section, several numerical examples are given to illustrate the accuracy and the effectiveness of the method.
Error estimation
If the exact solution of the proposed problem is known, then the absolute error can be estimated from the following:
$$ e_{N}(x)= \bigl\vert y_{\mathrm{exact}}(x)- y_{\mathrm{approximate}}(x) \bigr\vert , $$
where \(y_{\mathrm{exact}}(x)\) is the exact solution and \(y_{\mathrm{approximate}}(x)\) is the achieved solution at some N. The \(L_{2}\) error norm can also be calculated as follows:
$$ l_{2} =\sqrt{h\sum _{i=0}^{N} \bigl( \bigl\vert y_{\mathrm{exact}}(x_{i}) -y_{\mathrm{approximate}}(x_{i}) \bigr\vert \bigr)^{2} }, $$
where h is the step size along the given interval. We can easily check the accuracy of the suggested method by the residual error. When the solution \(y_{\mathrm{Approximate}}(x)\) and its derivatives are substituted in (1), the resulting equation must be satisfied approximately, that is, for \(x\in [a,b]\), \(l=0,1,2,\ldots \)
$$\begin{aligned} e_{N}= {}&\Biggl\vert \sum _{k=0}^{n_{1}}\sum_{I=0}^{n_{2}}q_{k,I}(x_{l})y^{k}(x_{l}) y^{(\nu _{I})}(p_{I}x_{l}+\xi _{I})+\sum _{h=1}^{n_{3}}\sum_{j=0}^{n_{4}}p_{h,j}(x_{l})y^{(h)}(x_{l}) y^{(\alpha _{j})}(q_{j}x_{l}+\zeta _{j}) \\ & {} -F(x_{l})- \int _{a}^{b}\sum_{d=0}^{n_{5}}k_{d}(x_{l},t) y^{( \upsilon _{d})}(t) \,dt- \int _{a}^{\phi (x_{l})}\sum_{C=0}^{n_{6}}v_{C}(x_{l},t) y^{(\beta _{C})}(t) \,dt \Biggr\vert , \end{aligned}$$
where \(E_{N}\leq 10^{-\pounds}\) (£ a positive integer) and \(y(x)\) is taken to be \(y_{\mathrm{approximate}}(x)\).
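The error measures above are straightforward to evaluate on the collocation grid; a small NumPy sketch (names ours) for the absolute error and the discrete L2 norm:

import numpy as np

def error_report(y_exact, y_approx, a, b, N):
    """Absolute errors on the equally spaced grid and the discrete L2 error norm."""
    h = (b - a) / N
    x = a + h * np.arange(N + 1)
    abs_err = np.abs(y_exact(x) - y_approx(x))
    l2_err = np.sqrt(h * np.sum(abs_err ** 2))
    return abs_err, l2_err

# Trivial illustration with the exact solution of the first example below (x**2 + x):
abs_err, l2_err = error_report(lambda x: x ** 2 + x, lambda x: x ** 2 + x, 0.0, 1.0, 4)
print(abs_err.max(), l2_err)   # both zero, since the two functions coincide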
Consider the following NFIDE with linear functional argument:
$$\begin{aligned} &y^{2}(x)D^{\nu _{2}}y(x)+y^{4}(x)y'(2x+1)+y^{4}(x)+y'(x)D^{ \alpha _{3}}y(x) \\ &\quad=f(x)+ \int ^{x}_{0}(3t-2x)y^{(1.5)}(t) \,dt+ \int ^{1}_{0}t e^{x}y^{(1.8)}(t) \,dt,\quad x\in [0,1]. \end{aligned}$$
The ICs are \(y(1)=2\), \(y^{\prime }(1)=2\), and \(y^{\prime \prime }(1)=2\) and the exact solution is \(y(x)=x^{2}+x\) at \(\nu _{2}=1.5\), \(\alpha _{3}=2.5\), \(\upsilon _{2}=1.5\), \(\beta _{2}=1.8 \), where \(f(x)=- 0.990113 e^{x} + 0.300901 x^{2.5} + 2.25676 x^{0.5} (x + x^{2})^{2} + (x + x^{2})^{4} + (x + x^{2})^{4} (2 + 4 (1 + 2 x))\). We apply the suggested method with \(N = 4\), and by the fundamental matrix equation of the problem defined by (34), we have
$$\begin{aligned} & \bigl[Q_{2,2} \bigl(\bar{X} \bar{W}^{T}\bar{C}\bigr)^{2} X_{ \nu _{2}}B_{\nu _{2}} W^{T}C+Q_{2,0} \bigl(\bar{X} \bar{W}^{T}\bar{C} \bigr)^{4} X B_{1} H_{2} E_{1} (W)^{T} C \\ &\quad{} +Q _{3,0}\bigl(\bar{X} \bar{W}^{T} \bar{C}\bigr)^{3} X {W}^{T}C+P_{1,3} \bar{X} \bar{B}_{1} \bar{W}^{T}\bar{C} X_{\alpha _{3}}B_{\alpha _{3}} (W)^{T}C \\ &\quad{} -\bar{G_{2}} \bar{W}\bar{Z_{\beta _{2}}} \bar{B_{\beta _{2}}} \bar{W^{T}} \bar{C}-U_{2} W Z_{\upsilon _{2}} B_{\upsilon _{2}} W^{T} C \bigr]=F, \end{aligned}$$
$$\begin{aligned} &X= \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 1 & \frac{1}{4} & \frac{1}{16} & \frac{1}{64} & \frac{1}{256} \\ 1 & \frac{1}{2} & \frac{1}{4} & \frac{1}{8} & \frac{1}{16} \\ 1 & \frac{3}{4} & \frac{9}{16} & \frac{27}{64} & \frac{81}{256} \\ 1 & 1 & 1 & 1 & 1 \end{pmatrix},\qquad B_{1}= \begin{pmatrix} 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 2 & 0 & 0 \\ 0 & 0 & 0 & 3 & 0 \\ 0 & 0 & 0 & 0 & 4 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix},\\ & E_{1}= \begin{pmatrix} 1 & 1 & 1 & 1 & 1 \\ 0 & 1 & 2 & 3 & 4 \\ 0 & 0 & 1 & 3 & 6 \\ 0 & 0 & 0 & 1 & 4 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix}, \\ &F= \begin{pmatrix} -2.70811 \\ -1.75983 \\ 3.17448 \\ 41.4935 \\ 249.328 \end{pmatrix},\qquad X_{\alpha _{3}}= \begin{pmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 0.125 & 0.03125 & 0.0078125 & 0.00195313 \\ 0 & 0.353553 & 0.176777 & 0.0883883 & 0.0441942 \\ 0 & 0.649519 & 0.487139 & 0.365354 & 0.274016 \\ 0 & 1 & 1 & 1 & 1 \end{pmatrix}, \\ &H_{2}= \begin{pmatrix} 2 & 0 & 0 & 0 &0 \\ 0 & 4 & 0 & 0 &0 \\ 0 & 0 & 8 & 0&0 \\ 0 & 0 & 0 & 16&0 \\ 0 & 0 & 0 & 0&32 \end{pmatrix},\\ & B_{\alpha _{3}}= \begin{pmatrix} 0 & 0 & 0 & 6.77028 & 0 \\ 0 & 0 & 0 & 0 & 18.0541 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}, \\ &W= \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ -1 & 0 & 2 & 0 & 0 \\ 0 & -3 & 0 & 4 & 0 \\ 1 & 0 & -8 & 0 & 8 \end{pmatrix} ,\\ & B_{\nu _{2}}=B_{\upsilon _{2}}= \begin{pmatrix} 0 & 0 & 2.25676 & 0 & 0 \\ 0 & 0 & 0 & 4.51352 & 0 \\ 0 & 0 & 0 & 0 & 7.22163 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}, \\ &X_{\nu _{2}}= \begin{pmatrix} 0 & 0 & 0 & 0 & 0 \\ 0.5 & 0.125 & 0.03125 & 0.0078125 & 0.00195313 \\ 0.707107 & 0.353553 & 0.176777 & 0.0883883 & 0.0441942 \\ 0.866025 & 0.649519 & 0.487139 & 0.365354 & 0.274016 \\ 1 & 1 & 1 & 1 & 1 \end{pmatrix}, \\ &Z_{\upsilon _{2}}= \begin{pmatrix} 0 & 0 & 0.666667 & 0.4 & 0.285714 \\ 0 & 0 & 0.4 & 0.285714 & 0.222222 \\ 0 & 0 & 0.285714 & 0.222222 & 0.181818 \\ 0 & 0 & 0.222222 & 0.181818 & 0.153846 \\ 0 & 0 & 0.181818 & 0.153846 & 0.133333 \end{pmatrix},\qquad G_{2}= \begin{pmatrix} 0 & 3 & 0 & 0 & 0 \\ -\frac{1}{2} & 3 & 0 & 0 & 0 \\ -1 & 3 & 0 & 0 & 0 \\ -\frac{3}{2} & 3 & 0 & 0 & 0 \\ -2 & 3 & 0 & 0 & 0 \end{pmatrix}, \\ &B_{\beta _{2}}= \begin{pmatrix} 0 & 0 & 2.17825 & 0 & 0 \\ 0 & 0 & 0 & 5.44562 & 0 \\ 0 & 0 & 0 & 0 & 9.90113 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}, \\ & \bar{X}=\left (\textstyle\begin{array}{@{}c@{\ \ }c@{\ \ }c@{\ \ }c@{\ \ }c@{\ \ }c@{\ \ }c@{\ \ }c@{\ \ }c@{\ \ }c@{\ \ }c@{\ \ }c@{\ \ }c@{\ \ }c@{\ \ }c@{\ \ }c@{\ \ }c@{\ \ }c@{\ \ }c@{\ \ }c@{\ \ }c@{\ \ }c@{\ \ }c@{\ \ }c@{\ \ }c@{}} 1 & 0 & -1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & \frac{1}{4} & -\frac{7}{8} & -\frac{11}{16} & \frac{17}{32} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & \frac{1}{2} & - \frac{1}{2} & -1 & -\frac{1}{2} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & \frac{3}{4} & \frac{1}{8} & -\frac{9}{16} & -\frac{31}{32} & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 \end{array}\displaystyle \right ). \end{aligned}$$
Equation (45) and conditions present a nonlinear system of \((N + 1)\) algebraic equations in the coefficients \(c_{i}\). The solution of this system at \(N=4\) gives the Chebyshev coefficients as follows:
$$ c_{0}=\frac{1}{2} ,\qquad c_{1}=1 , \qquad c_{2}= \frac{1}{2},\qquad c_{3}= 0,\qquad c_{4}=0 . $$
Therefore, the approximate solution of this example using (6) is given by
$$ y_{4}(x) =\frac{1}{2}T_{0}(x)+T_{1}(x)+ \frac{1}{2}T_{2}(x)=x^{2}+x, $$
which is the exact solution of problem (44).
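As an independent check of this expansion (our own verification using NumPy's Chebyshev utilities), the recovered coefficients do reproduce x² + x:

import numpy as np
from numpy.polynomial import chebyshev as cheb

coeffs = [0.5, 1.0, 0.5, 0.0, 0.0]          # c_0, ..., c_4 found above
x = np.linspace(0.0, 1.0, 11)
print(np.allclose(cheb.chebval(x, coeffs), x ** 2 + x))   # True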
Consider the following nonlinear fractional integro-differential equation:
$$\begin{aligned} &y^{\prime \prime }(x) D^{\alpha _{2}}y(x)+y^{3}(x) D^{\nu _{2}}y(x-1)+y^{\prime }(x) \\ &\quad=f(x)+ \int _{-1}^{0}t e^{x} y'(t) \,dt+ \int _{-1}^{x}(3t+2x) y(t) \,dt,\quad x\in [-1,0]. \end{aligned}$$
The ICs are \(y(0)=1\), \(y^{\prime }(0)=1\), and the exact solution is \(y(x)=x^{3}+1\) at \(\alpha _{2}=1.8\), \(\nu _{2}=2\), where \(f(x)=\frac{9}{10} + \frac{3 e^{x}}{4}- \frac{3 x}{4} - \frac{x^{2}}{2}+ 32.6737 x^{2.2} -\frac{11 x^{5}}{10} + 6 (-1 + x) (1 + x^{3})^{3}\).
The matrix representation of equation (47) is
$$\begin{aligned} & \bigl[P_{2,2} \bar{X} \bar{B_{2}} \bar{W}^{T}\bar{C} X_{ \alpha _{2}}B_{\alpha _{2}} W^{T}C+Q_{3,2} \bigl(\bar{X} \bar{W}^{T} \bar{C} \bigr)^{3} X_{\nu _{2}} B_{\nu _{2}}H_{1} E_{-1} W^{T} C \\ &\quad{} +Q _{0,1} X B_{1} {W}^{T}C-U_{1} W Z_{\upsilon _{1}} B_{\upsilon _{1}} W^{T}C-\bar{G_{0}} \bar{W}\bar{Z_{\beta _{0}}} \bar{B_{\beta _{0}}} \bar{W^{T}} \bar{C} \bigr]=F. \end{aligned}$$
Equation (48) and conditions present a nonlinear system of \((N + 1)\) algebraic equations in the coefficients \(c_{i}\), the solution of this system at \(N=4\) gives the Chebyshev coefficients
$$ c_{0}=1 ,\qquad c_{1}=\frac{3}{4} ,\qquad c_{2}=0 ,\qquad c_{3}= \frac{1}{4} . $$
Thus, the solution of this problem becomes
$$ y_{4}(x) =T_{0}(x)+\frac{3}{4}T_{1}(x)+ \frac{1}{4}T_{3}(x)=x^{3}+1, $$
Consider the following nonlinear fractional integro-differential equation with advanced argument:
$$\begin{aligned} &y^{3}(x) D^{\alpha _{2}}y(x)+y^{2}(x) y^{\prime \prime }(x+1)+x y^{\prime }(x+1) \\ &\quad=f(x)+ \int _{0}^{1}(5t-4x) y(t) \,dt+ \int _{0}^{1}\bigl(3t+2e^{x}\bigr) y^{(0.7)}(t) \,dt+ \int _{0}^{3x+1}(3t+2x) y^{(0.5)}(t) \,dt, \\ &\qquad x \in [0,1]. \end{aligned}$$
The subjected conditions are \(y(1)=3\), \(y^{\prime }(1)=3\), and the exact solution is \(y(x)=x^{2}+x+1\) at \(\alpha _{2}=1.6\), where \(f(x)=-8.42841 - 3.20484 e^{x} + (22 x)/3 - 1.35406 (1 + 3 x)^{2.5} - 1.28958 (1 + 3 x)^{3.5} + 2 (1 + x + x^{2})^{2} + 2.25412 x^{0.4} (1 + x + x^{2})^{3} + x (1 + 2 (1 + x)) - x ( 1.50451 (1 + 3 x)^{1.5} + 1.2036 (1+ 3 x)^{2.5})\). The fundamental matrix equation of the problem becomes as follows:
$$\begin{aligned} & \bigl[Q_{3,2} \bigl(\bar{X} \bar{W}^{T}\bar{C}\bigr)^{3} X_{ \alpha _{2}}B_{\alpha _{2}} W^{T}C+Q_{2,2} \bigl(\bar{X} \bar{W}^{T} \bar{C} \bigr)^{2} X_{\nu _{2}} B_{2}H_{1} E_{1} W^{T}C \\ &\quad{} +Q _{0,1} X_{1} B_{1} H_{1} E_{1} {W}^{T}C- U_{0} W Z_{\upsilon _{0}} W^{T}C \\ & \quad{}- U_{1} W Z_{\upsilon _{1}} B_{\upsilon _{1}} W^{T}C- \bar{G_{1}} \bar{W}\bar{Z_{\beta _{1}}} \bar{B_{\beta _{1}}} \bar{W^{T}} \bar{C} \bigr]=F. \end{aligned}$$
Equation (50) and conditions present a nonlinear system of \((N + 1)\) algebraic equations in the coefficients \(c_{i}\). The solution of this system at \(N=4\) gives the Chebyshev coefficients in the following form:
$$ c_{0}=\frac{3}{2} ,\qquad c_{1}=1 ,\qquad c_{2}= \frac{1}{2} , \qquad c_{3}=0, \qquad c_{4}=0 . $$
Thus, the solution of the proposed problem becomes
$$ y_{4}(x) =\frac{3}{2}T_{0}(x)+T_{1}(x)+ \frac{1}{2}T_{2}(x)=x^{2}+x+1, $$
Consider the following linear fractional integro-differential equation with argument [65]:
$$ x^{2} D^{\nu _{2}}y(x)+x y^{\prime }(x)+y(x-1)+y(x)=f(x)+ \int _{0}^{1}\biggl(\frac{12x^{2}}{7}-2\biggr) y(t) \,dt ,\quad x\in [0,1]. $$
The ICs are \(y(0)=4\), \(y^{\prime }(0)=-4\), and the exact solution is \(y(x)=x^{2}-4x+4\) at \(\nu _{2}=2\), where \(f(x)=\frac{53}{3} - 14 x + 2 x^{2}\). We apply the suggested method with \(N = 4\), then the fundamental matrix equation of the problem becomes as follows:
$$\begin{aligned} & \bigl[Q_{0,2} X_{\nu _{2}} B_{\nu _{2}} W^{T}C+Q _{0,1} X_{1} B_{1} {W}^{T}C \\ &\quad{} +Q_{0,0} X_{\nu _{0}} B_{\nu _{0}}H_{1} E_{-1} W^{T}C - U_{0} W Z_{\upsilon _{0}} B_{\upsilon _{0}} W^{T}C \bigr]=F. \end{aligned}$$
Equation (53) and conditions present a linear system of \((N + 1)\) algebraic equations in the coefficients \(c_{i}\). The solution of this system at \(N=4\) gives the Chebyshev coefficients as follows:
$$ c_{0}=\frac{9}{2} ,\qquad c_{1}=-4 ,\qquad c_{2}= \frac{1}{2}, \qquad c_{3}=1.73868\times 10^{-16},\qquad c_{4}=-6.61509\times 10^{-18} . $$
$$ y_{4}(x) =\frac{9}{2}T_{0}(x)-4T_{1}(x)+ \frac{1}{2}T_{2}(x)+1.73868 \times 10^{-16}T_{3}(x)-6.61509 \times 10^{-18}T_{4}(x). $$
In Table 1 the comparison of the absolute errors for the present scheme at \(N=4\), where \(\nu _{2}=2\), and the method of reference [65] at \(N=8, 10\) is presented. Also, Table 2 shows the numerical values of the approximate solution for various N, together with reference [65] and the exact solution. The residual errors \(E_{8}\) and \(E_{10}\) according to (43) are given in Tables 3 and 4 for various values of \(\nu _{2}\). Figure 1 provides the comparison of \(y(x)\) for \(N = 4\) with various values of \(\nu _{2}\), where \(\nu _{2}= 2\), 1.8, 1.7, and 1.6. The same comparison is made for \(N=10\) in Fig. 2, and the comparison of the error function for the present method at \(N=4\) and [65] at \(N = 8\) and 10 is given in Fig. 3 for Example 4.
Comparison of \(y(x)\) for \(N = 4\) with \(\nu _{2}= 2\), 1.8, 1.7, and 1.6 for Example 4
Comparison of \(y(x)\) for \(N = 10\) with \(\nu _{2}= 2\), 1.9, 1.8, and 1.7 for Example 4
Comparison of error function for the present method at \(N=4\) and [65] at \(N = 8\) and 10 for Example 4
\(y(x)\) for different N and \(\nu _{4}\) values for Example 5
Table 1 Comparison of the absolute errors for Example 4 for different N values at \(\nu _{2}=2\)
Table 2 Numerical solution of Example 4 for different N values
Table 3 Residual error \(E_{8}\) at \(\nu _{2}=1.9, 1.8, 1.7\) for Example 4
Table 4 Residual error \(E_{10}\) at \(\nu _{2}=1.9, 1.8, 1.7\) for Example 4
Let us assume the fractional integro-differential equation [68, 69]
$$ D^{\nu _{4}}y(x)-y (x)=x \bigl(1+e^{x}\bigr)+3e^{x}- \int _{0}^{x}y(t) \,dt ,\quad x\in [0,1]. $$
The subjected conditions are \(y(1)=1+e\), \(y^{\prime }(1)=2e\), \(y^{\prime \prime }(1)=3e\), \(y^{\prime \prime \prime }(1)=4e\), and the exact solution of this FIDE is \(y(x)=1+xe^{x}\) when \(\nu _{4}=4\). To solve this problem, we apply the present scheme for various values of N.
In Table 5, we present the numerical results \(y(x_{l})\), for \(N = 9\), of our proposed scheme together with the numerical results \(y(x_{l})\), for \(N = 10\), of the Legendre collocation method (LCM) [69] and of [68]. It is observed that the proposed scheme reaches the same results as [69] with a lower degree of approximation. Moreover, the proposed scheme gives superior results compared with the ADM [66], as shown in [69]. In addition, the numerical results of our presented method, LCM, and the generalized differential transform method (GDTM) [67] for \(N = 10\) and \(\nu _{4} = 3.75\) are given in Table 6. As shown in Table 4 of [69], the ADM gives very weak approximations compared with GDTM and LCM; therefore, we do not consider ADM in Table 6. From this table, we find that our achieved results are the same as those of LCM, while the GDTM results deviate from both our proposed scheme and LCM. This evidence confirms the capability of our scheme. To show the reliability of the proposed scheme, we depict the numerical solution \(y(x)\) for various values of \(\nu _{4}\), namely 3.50, 3.75, and 4. Also, Fig. 5 compares the error function for the present method at \(N=8\), \(N = 9\), and 12 with \(\nu _{4}=4\), and the comparison of the absolute errors for different values of N at \(\nu _{4}=4\) is given in Table 7. The residual error \(E_{10}\) is given in Table 8 for \(\nu _{4}=3.75\) and 3.5.
Comparison of error function for the present method at \(N=8\), \(N = 9\) and 12 for Example 5, where \(\nu _{4}=4\)
Table 5 Numerical results of Example 5 for different N values at \(\nu _{4}=4\)
Table 6 Numerical results of Example 5 for \(\nu _{4} = 3.75\)
Table 8 Residual error \(E_{10}\) at \(\nu _{4}=3.75, 3.5\) for Example 5
Finally, since problem (56) is defined on \([0,1]\), the proposed method was also applied with the Chebyshev nodes (zeros of the Chebyshev polynomials) as collocation points. Table 9 compares the absolute errors for different values of N at \(\nu _{4}=4\) using the Chebyshev node collocation points \(\frac{1}{2} ( 1+\cos \frac{i \pi }{N} ), i=0,1,\ldots,N \). Also, the comparison of the \(L_{2}\) error norm according to (42) using both the equally spaced points (11) and the Chebyshev node collocation points is given in Table 10. Comparing Table 7 with Table 9, together with the \(L_{2}\) results in Table 10, one finds that the Chebyshev nodes lie in \([-1,1]\) and are the natural choice of collocation points when the problem is also defined on that interval, in which case they give better results than any other form of collocation points; modifying the nodes to fit the interval of the problem does not give results as good as those expected from the original zeros of the Chebyshev polynomials.
Table 9 Comparison of the absolute errors for Example 5 for different N values at \(\nu _{4}=4\) using Chebyshev nodes collocation points
Table 10 Comparison of the \(L_{2}\) error norm for Example 5 using both equally spaced and Chebyshev nodes collocation points
A numerical study for a generalized form of nonlinear arbitrary order integro-differential equations (GNFIDEs) with linear functional arguments is introduced using Chebyshev series. The suggested equation with its linear functional argument represents a general form of delay, proportional delay, and advanced nonlinear fractional order Fredholm–Volterra integro-differential equations. Additionally, we have presented a general form of the operational matrix of derivatives. The fractional and ordinary order derivatives have been obtained and presented in one general operational matrix; therefore, the proposed operational matrix represents a kind of unification of the ordinary and fractional cases. To the best of the authors' knowledge, there is no other work discussing this point. We have presented many numerical examples that illustrate the accuracy of the presented study for the proposed equation and show that the proposed scheme is very efficient and reliable.
Yang, X.J., Gao, F., Ju, Y.: General Fractional Derivatives with Applications in Viscoelasticity. Academic Press, San Diego (2020)
Subashini, R., Ravichandran, C., Jothimani, K., Baskonus, H.M.: Existence results of Hilfer integro-differential equations with fractional order. Discrete Contin. Dyn. Syst., Ser. S 13(3), 911–923 (2020)
Valliammal, N., Ravichandran, C., Hammouch, Z., Baskonus, H.M.: A new investigation on fractional-ordered neutral differential systems with state-dependent delay. Int. J. Nonlinear Sci. Numer. Simul. 20(7–8), 803–809 (2019)
Jerri, A.: Introduction to Integral Equations with Applications, 2nd edn. Wiley, New York (1999)
Osman, M.S.: New analytical study of water waves described by coupled fractional variant Boussinesq equation in fluid dynamics. Pramana 93(2), 26 (2019)
Al-Ghafri, K.S., Rezazadeh, H.: Solitons and other solutions of \((3+ 1)\)-dimensional space-time fractional modified KdV-Zakharov–Kuznetsov equation. Appl. Math. Nonlinear Sci. 4(2), 289–304 (2019)
Jothimani, K., Kaliraj, K., Hammouch, Z., Ravichandran, C.: New results on controllability in the framework of fractional integrodifferential equations with nondense domain. Eur. Phys. J. Plus 134(9), 441 (2019)
Dehghan, M., Shakeri, F.: Solution of parabolic integro-differential equations arising in heat conduction in materials with memory via He's variational iteration technique. Int. J. Numer. Methods Biomed. Eng. 26(6), 705–715 (2010)
Ilhan, E., Kiymaz, I.O.: A generalization of truncated M-fractional derivative and applications to fractional differential equations. Appl. Math. Nonlinear Sci. 5(1), 171–188 (2020)
Gao, W., Baskonus, H.M., Shi, L.: New investigation of bats-hosts-reservoir-people coronavirus model and application to 2019-nCoV system. Adv. Differ. Equ. 2020(1), 1 (2020)
Yang, X.J., Abdel-Aty, M., Cattani, C.: A new general fractional-order derivative with Rabotnov fractional-exponential kernel applied to model the anomalous heat transfer. Therm. Sci. 23(3 Part A), 1677–1681 (2019)
Xiao-Jun, X.J., Srivastava, H.M., Machado, J.T.: A new fractional derivative without singular kernel. Therm. Sci. 20(2), 753–756 (2016)
Yang, A.M., Han, Y., Li, J., Liu, W.X.: On steady heat flow problem involving Yang–Srivastava–Machado fractional derivative without singular kernel. Therm. Sci. 20(suppl. 3), 717–721 (2016)
Yang, X.J., Feng, Y.Y., Cattani, C., Inc, M.: Fundamental solutions of anomalous diffusion equations with the decay exponential kernel. Math. Methods Appl. Sci. 42(11), 4054–4060 (2019)
Kumar, S., Ghosh, S., Samet, B., Goufo, E.F.: An analysis for heat equations arises in diffusion process using new Yang–Abdel–Aty–Cattani fractional operator. Math. Methods Appl. Sci. 43(9), 6062–6080 (2020)
Kumar, S., Kumar, R., Agarwal, R.P., Samet, B.: A study of fractional Lotka–Volterra population model using Haar wavelet and Adams–Bashforth–Moulton methods. Math. Methods Appl. Sci. 43(8), 5564–5578 (2020)
Kumar, S., Ahmadian, A., Kumar, R., Kumar, D., Singh, J., Baleanu, D., Salimi, M.: An efficient numerical method for fractional SIR epidemic model of infectious disease by using Bernstein wavelets. Mathematics 8(4), 558 (2020)
Podlubny, I.: Fractional Differential Equations: An Introduction to Fractional Derivatives, Fractional Differential Equations, to Methods of Their Solution and Some of Their Applications. Technical University of Kosice, Slovak Republic (1998)
Kilbas, A.A.A., Srivastava, H.M., Trujillo, J.J.: Theory and Applications of Fractional Differential Equations, vol. 204. Elsevier, Amsterdam (2006)
Gao, W., Veeresha, P., Prakasha, D.G., Baskonus, H.M.: Novel dynamic structures of 2019-nCoV with nonlocal operator via powerful computational technique. Biology 9(5), 107 (2020)
Atangana, A.: Fractional discretization: the African's tortoise walk. Chaos Solitons Fractals 130, 109399 (2020)
Yang, X.J.: Advanced Local Fractional Calculus & Its Applications. World Science Publisher, New York (2012)
Yang, X.J., Baleanu, D., Srivastava, H.M.: Local Fractional Integral Transforms and Their Applications. Elsevier, Amsterdam (2015)
Atangana, A.: Blind in a commutative world: simple illustrations with functions and chaotic attractors. Chaos Solitons Fractals 114, 347–363 (2018)
Singh, J., Kumar, D., Hammouch, Z., Atangana, A.: A fractional epidemiological model for computer viruses pertaining to a new fractional derivative. Appl. Math. Comput. 316, 504–515 (2018)
Dehghan, M., Shakourifar, M., Hamidi, A.: The solution of linear and nonlinear systems of Volterra functional equations using Adomian–Pade technique. Chaos Solitons Fractals 39(5), 2509–2521 (2009)
Yusufoğlu, E.: An efficient algorithm for solving integro-differential equations system. Appl. Math. Comput. 192(1), 51–55 (2007)
Javidi, M.: Modified homotopy perturbation method for solving system of linear Fredholm integral equations. Math. Comput. Model. 50(1–2), 159–165 (2009)
Ali, K.K., Cattani, C., Gómez-Aguilarc, J.F., Baleanu, D., Osman, M.S.: Analytical and numerical study of the DNA dynamics arising in oscillator-chain of Peyrard–Bishop model. Chaos Solitons Fractals 139, 110089 (2020)
Arqub, O.A., Osman, M.S., Abdel-Aty, A.H., Mohamed, A.B.A., Momani, S.: A numerical algorithm for the solutions of ABC singular Lane–Emden type models arising in astrophysics using reproducing kernel discretization method. Mathematics 8(6), 923 (2020)
Arikoglu, A., Ozkol, I.: Solutions of integral and integro-differential equation systems by using differential transform method. Comput. Math. Appl. 56(9), 2411–2417 (2008)
Saeedi, H., Moghadam, M.M., Mollahasani, N., Chuev, G.N.: A CAS wavelet method for solving nonlinear Fredholm integro-differential equations of fractional order. Commun. Nonlinear Sci. Numer. Simul. 16(3), 1154–1163 (2011)
Zhu, L., Fan, Q.: Solving fractional nonlinear Fredholm integro-differential equations by the second kind Chebyshev wavelet. Commun. Nonlinear Sci. Numer. Simul. 17(6), 2333–2341 (2012)
Eslahchi, M.R., Dehghan, M., Parvizi, M.: Application of the collocation method for solving nonlinear fractional integro-differential equations. J. Comput. Appl. Math. 257, 105–128 (2014)
Lakestani, M., Saray, B.N., Dehghan, M.: Numerical solution for the weakly singular Fredholm integro-differential equations using Legendre multiwavelets. J. Comput. Appl. Math. 235(11), 3291–3303 (2011)
Lakestani, M., Jokar, M., Dehghan, M.: Numerical solution of nth-order integro-differential equations using trigonometric wavelets. Math. Methods Appl. Sci. 34(11), 1317–1329 (2011)
Fakhar-Izadi, F., Dehghan, M.: The spectral methods for parabolic Volterra integro-differential equations. J. Comput. Appl. Math. 235(14), 4032–4046 (2011)
Sezer, M., Akyüz-Daşcıoglu, A.: A Taylor method for numerical solution of generalized pantograph equations with linear functional argument. J. Comput. Appl. Math. 200(1), 217–225 (2007)
Maleknejad, K., Mirzaee, F.: Numerical solution of integro-differential equations by using rationalized Haar functions method. Kybernetes 35(10), 1735–1744 (2006)
Yang, Y., Chen, Y., Huang, Y.: Spectral-collocation method for fractional Fredholm integro-differential equations. J. Korean Math. Soc. 51(1), 203–224 (2014)
Azin, H., Mohammadi, F., Machado, J.T.: A piecewise spectral-collocation method for solving fractional Riccati differential equation in large domains. Comput. Appl. Math. 38(3), 96 (2019)
Patrício, M.S., Ramos, H., Patrício, M.: Solving initial and boundary value problems of fractional ordinary differential equations by using collocation and fractional powers. J. Comput. Appl. Math. 354, 348–359 (2019)
Hou, J., Yang, C., Lv, X.: Jacobi collocation methods for solving the fractional Bagley–Torvik equation. Int. J. Appl. Math. 50(1), 114–120 (2020)
Ramadan, M.A., Dumitru, B., Highly, N.M.A.: Accurate numerical technique for population models via rational Chebyshev collocation method. Mathematics 7(10), 913 (2019)
Yang, X.J., Gao, F., Ju, Y., Zhou, H.W.: Fundamental solutions of the general fractional-order diffusion equations. Math. Methods Appl. Sci. 41(18), 9312–9320 (2018)
Yang, X.J., Tenreiro Machado, J.A.: A new fractal nonlinear Burgers' equation arising in the acoustic signals propagation. Math. Methods Appl. Sci. 42(18), 7539–7544 (2019)
Cao, Y., Ma, W.G., Ma, L.C.: Local fractional functional method for solving diffusion equations on Cantor sets. Abstr. Appl. Anal. 2014, Article ID 803693 (2014)
Oğuz, C., Sezer, M.: Chelyshkov collocation method for a class of mixed functional integro-differential equations. Appl. Math. Comput. 259, 943–954 (2015)
Saadatmandi, A., Dehghan, M.: Numerical solution of the higher-order linear Fredholm integro-differential-difference equation with variable coefficients. Comput. Math. Appl. 59(8), 2996–3004 (2010)
Kürkçü, Ö., Aslan, E., Sezer, M.: A numerical approach with error estimation to solve general integro-differential-difference equations using Dickson polynomials. Appl. Math. Comput. 276, 324–339 (2016)
Gülsu, M., Öztürk, Y., Sezer, M.: A new collocation method for solution of mixed linear integro-differential-difference equations. Appl. Math. Comput. 216(7), 2183–2198 (2010)
Yüzbaşı, Ş.: Laguerre approach for solving pantograph-type Volterra integro-differential equations. Appl. Math. Comput. 232, 1183–1199 (2014)
Osman, M.S., Rezazadeh, H., Eslami, M., Neirameh, A., Mirzazadeh, M.: Analytical study of solitons to Benjamin–Bona–Mahony–Peregrine equation with power law nonlinearity by using three methods. UPB Sci. Bull., Ser. A, Appl. Math. Phys. 80(4), 267–278 (2018)
Osman, M.S., Rezazadeh, H., Eslami, M.: Traveling wave solutions for \((3+ 1)\) dimensional conformable fractional Zakharov–Kuznetsov equation with power law nonlinearity. Nonlinear Eng. 8(1), 559–567 (2019)
Yang, X.J.: New non-conventional methods for quantitative concepts of anomalous rheology. Therm. Sci. 23(6B), 4117–4127 (2019)
Yang, X.J.: New general calculi with respect to another functions applied to describe the Newton-like dashpot models in anomalous viscoelasticity. Therm. Sci. 23(6B), 3751–3757 (2019)
Kumar, S., Kumar, R., Cattani, C., Samet, B.: Chaotic behaviour of fractional predator-prey dynamical system. Chaos Solitons Fractals 135, 109811 (2020)
Odibat, Z., Kumar, S.: A robust computational algorithm of homotopy asymptotic method for solving systems of fractional differential equations. J. Comput. Nonlinear Dyn. 14(8), 081004 (2019)
Iakovleva, V., Vanegas, C.J.: On the solution of differential equations with delayed and advanced arguments. Electron. J. Differ. Equ. 13, 57 (2005)
Rus, I.A., Dârzu-Ilea, V.A.: First order functional-differential equations with both advanced and retarded arguments. Fixed Point Theory 5(1), 103–115 (2004)
Liu, Z., Sun, J., Szántó, I.: Monotone iterative technique for Riemann–Liouville fractional integro-differential equations with advanced arguments. Results Math. 63(3–4), 1277–1287 (2013)
Şahin, N., Yüzbaşı, Ş., Sezer, M.: A Bessel polynomial approach for solving general linear Fredholm integro-differential-difference equations. Int. J. Comput. Math. 88(14), 3093–3111 (2011)
Raslan, K.R., Ali, K.K., Abd El Salam, M.A., Mohamed, E.M.H.: Spectral Tau method for solving general fractional order differential equations with linear functional argument. J. Egypt. Math. Soc. 27(1), 33 (2019)
Ali, K.K., Abd El Salam, M.A., Mohamed, E.M.H.: Chebyshev operational matrix for solving fractional order delay-differential equations using spectral collocation method. Arab J. Basic Appl. Sci. 26(1), 342–352 (2019)
Gürbüz, B., Sezer, M., Güler, C.: Laguerre collocation method for solving Fredholm integro-differential equations with functional arguments. J. Appl. Math. 2014, Article ID 682398 (2014)
El-Wakil, S.A., Elhanbaly, A., Abdou, M.A.: Adomian decomposition method for solving fractional nonlinear differential equations. Appl. Math. Comput. 182(1), 313–324 (2006)
Erturk, V.S., Momani, S., Odibat, Z.: Application of generalized differential transform method to multi-order fractional differential equations. Commun. Nonlinear Sci. Numer. Simul. 13(8), 1642–1654 (2008)
Tohidi, E., Ezadkhah, M.M., Shateyi, S.: Numerical solution of nonlinear fractional Volterra integro-differential equations via Bernoulli polynomials. In: Abstract and Applied Analysis 2014 (2014)
Saadatmandi, A., Dehghan, M.: A Legendre collocation method for fractional integro-differential equations. J. Vib. Control 17(13), 2050–2058 (2011)
Yang, X.J.: Local Fractional Functional Analysis & Its Applications. Asian Academic Publisher, Hong Kong (2011)
Yang, X.J.: General Fractional Derivatives: Theory, Methods and Applications. Taylor & Francis Group, New York (2019)
The fifth author, Dr Sunil Kumar, would like to acknowledge the financial support received from the Science and Engineering Research Board (SERB), DST Government of India (file no. EEQ/2017/000385). Prof. B. Samet is supported by Researchers Supporting Project, number (RSP-2020/4), King Saud University, Riyadh, Saudi Arabia.
Department of Mathematics, Faculty of Science, Al Azhar University, Cairo, Egypt
Khalid K. Ali, Mohamed A. Abd El Salam & Emad M. H. Mohamed
Department of Mathematics, College of Science, King Saud University, P.O. Box 2455, Riyadh, 11451, Saudi Arabia
Bessem Samet
Department of Mathematics, National Institute of Technology, Jamshedpur, 831014, Jharkhand, India
Department of Mathematics, Faculty of Science, Cairo University, Giza, Egypt
M. S. Osman
Khalid K. Ali
Mohamed A. Abd El Salam
Emad M. H. Mohamed
All authors carried out the proofs and conceived of the study. All authors read and approved the final manuscript.
Correspondence to Sunil Kumar or M. S. Osman.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Ali, K.K., Abd El Salam, M.A., Mohamed, E.M.H. et al. Numerical solution for generalized nonlinear fractional integro-differential equations with linear functional arguments using Chebyshev series. Adv Differ Equ 2020, 494 (2020). https://doi.org/10.1186/s13662-020-02951-z
Accepted: 06 September 2020
Chebyshev collocation method
Nonlinear fractional integro-differential equations
Functional argument
Caputo fractional derivatives
Topics in Special Functions and q-Special Functions: Theory, Methods, and Applications
Diff of /trunk/doc/user/notation.tex
revision 2923 by jfenwick, Thu Feb 4 04:05:36 2010 UTC
revision 3295 by jfenwick, Fri Oct 22 01:56:02 2010 UTC
# Line 19: Compact notation is used in equations su…
20    There are two rules which make up the convention:
22  - firstly, the rank of the tensor is represented by an index. For example, $a$ is a scalar; $b\hackscore{i}$ represents a vector; and $c\hackscore{ij}$ represents a matrix.
    + firstly, the rank of the tensor is represented by an index. For example, $a$ is a scalar; $b_{i}$ represents a vector; and $c_{ij}$ represents a matrix.
24    Secondly, if an expression contains repeated subscripted variables, they are assumed to be summed over all possible values, from $0$ to $n$. For example, for the following expression:
28    \begin{equation}
29  - y = a\hackscore{0}b\hackscore{0} + a\hackscore{1}b\hackscore{1} + \ldots + a\hackscore{n}b\hackscore{n}
    + y = a_{0}b_{0} + a_{1}b_{1} + \ldots + a_{n}b_{n}
30    \label{NOTATION1}
31    \end{equation}
33    can be represented as:
36  - y = \sum\hackscore{i=0}^n a\hackscore{i}b\hackscore{i}
    + y = \sum_{i=0}^n a_{i}b_{i}
40    then in Einstein notion:
43  - y = a\hackscore{i}b\hackscore{i}
    + y = a_{i}b_{i}
47    Another example:
50  - \nabla p = \frac{\partial p}{\partial x\hackscore{0}}\textbf{i} + \frac{\partial p}{\partial x\hackscore{1}}\textbf{j} + \frac{\partial p}{\partial x\hackscore{2}}\textbf{k}
    + \nabla p = \frac{\partial p}{\partial x_{0}}\textbf{i} + \frac{\partial p}{\partial x_{1}}\textbf{j} + \frac{\partial p}{\partial x_{2}}\textbf{k}
54    can be expressed in Einstein notation as:
57  - \nabla p = p,\hackscore{i}
    + \nabla p = p,_{i}
# Line 63: where the comma ',' indicates the partia…
63    For a tensor:
66  - \sigma \hackscore{ij}=
    + \sigma _{ij}=
67    \left[ \begin{array}{ccc}
68  - \sigma\hackscore{00} & \sigma\hackscore{01} & \sigma\hackscore{02} \\
    + \sigma_{00} & \sigma_{01} & \sigma_{02} \\
71    \end{array} \right]
76  - The $\delta\hackscore{ij}$ is the Kronecker $\delta$-symbol, which is a matrix with ones for its diagonal entries ($i = j$) and zeros for the remaining entries ($i \neq j$).
    + The $\delta_{ij}$ is the Kronecker $\delta$-symbol, which is a matrix with ones for its diagonal entries ($i = j$) and zeros for the remaining entries ($i \neq j$).
79  - \delta \hackscore{ij} =
    + \delta _{ij} =
80    \left \{ \begin{array}{cc}
81    1, & \mbox{if $i = j$} \\
82    0, & \mbox{if $i \neq j$} \\
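As an aside on the convention the diff above documents: the summation rule is exactly what NumPy's einsum implements, so a reader can experiment with it directly. The following minimal sketch is an editorial illustration (it assumes NumPy and is not part of the documentation file being diffed):

    import numpy as np

    # y = a_i b_i : the repeated index i is summed over (Einstein convention).
    a = np.array([1.0, 2.0, 3.0])
    b = np.array([4.0, 5.0, 6.0])
    y_einstein = np.einsum("i,i->", a, b)             # contracted form
    y_explicit = sum(a[i] * b[i] for i in range(3))   # the written-out sum
    assert np.isclose(y_einstein, y_explicit)

    # c_ij b_j : a matrix acting on a vector, summing the repeated index j.
    c = np.arange(9.0).reshape(3, 3)
    assert np.allclose(np.einsum("ij,j->i", c, b), c @ b)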
AMS Invited Address
The Navier-Stokes, Euler and Related Equations
Wednesday, January 10, 2018, 10:05 a.m.- 10:55 a.m. Ballroom 6AB, Upper Level, San Diego Convention Center
Edriss S. Titi, Texas A&M University and The Weizmann Institute of Science
In this talk I will present the most recent advances concerning the questions of global regularity of solutions to the three-dimensional Navier-Stokes and Euler equations of incompressible fluids. Furthermore, I will also present recent global regularity (and finite time blow-up) results concerning certain three-dimensional geophysical flows, including the three-dimensional viscous (non-viscous) "primitive equations" of oceanic and atmospheric dynamics.
AMS-MAA Invited Address
Topological Modeling of Complex Data
Wednesday, January 10, 2018, 11:10 a.m.- 12:00 p.m. Ballroom 6AB, Upper Level, San Diego Convention Center
Gunnar Carlsson, Stanford University
One of the fundamental problems faced by science and industry is that of making sense of large and complex data sets. To approach this problem, we need new organizing principles and modeling methodologies. One such approach is through topology, the mathematical study of shape. The shape of the data, suitably defined, is an important component of exploratory data analysis. In this talk, we will discuss the topological approach, with numerous examples, and consider some questions about how it will develop as mathematics.
AMS Colloquium Lectures
LECTURE I
Alternate Minimization and Scaling Algorithms: Theory, Applications and Connections Across Mathematics and Computer Science
Wednesday, January 10, 2018, 1:00 p.m.- 1:50 p.m. Ballroom 6AB, Upper Level, San Diego Convention Center
Avi Wigderson, Institute for Advanced Study
This 3-lecture series will revolve around a common heuristic for general optimization problems called alternate minimization, and natural scaling algorithms which capture many of them. Recent attempts to formally analyze their performance in natural settings have uncovered a surprisingly rich web of connections between diverse areas of mathematics and computer science, all of which contribute and benefit from this interaction.
In this first lecture I will give the general set-up, and examples of problems for which these algorithms are relevant. I will then survey some of the different areas they touch, and how. In mathematics, these include non-commutative algebra, invariant theory, quantum information theory and analysis. In computer science they include optimization, algebraic complexity and pseudorandomness.
In the next two lectures I will survey aspects of two central problems to both math and CS, Proving algebraic identities and Proving analytic inequalities, influenced by the study above.
All three lectures are designed to be independent of each other. They require no special background knowledge. Lecture notes can be found at
http://www.math.ias.edu/~avi/PUBLICATIONS/CCC-17-tutorial-lecture-notes.pdf.
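For readers who want a concrete toy instance of the alternate minimization and scaling theme before opening the lecture notes, here is a minimal sketch of classical Sinkhorn matrix scaling, the simplest member of this family (an editorial illustration assuming NumPy; the function and its parameters are my own, not taken from the lectures):

    import numpy as np

    def sinkhorn(A, iters=1000, tol=1e-10):
        """Scale a positive matrix A so that diag(r) @ A @ diag(c) is doubly stochastic.

        Each half-step exactly normalizes the row (resp. column) sums,
        i.e. it is the exact minimizer over one block of variables.
        """
        r = np.ones(A.shape[0])
        c = np.ones(A.shape[1])
        for _ in range(iters):
            r = 1.0 / (A @ c)        # fix all row sums to 1
            c = 1.0 / (A.T @ r)      # fix all column sums to 1
            S = A * np.outer(r, c)
            if (np.abs(S.sum(axis=1) - 1).max() < tol
                    and np.abs(S.sum(axis=0) - 1).max() < tol):
                break
        return S

    S = sinkhorn(np.random.rand(4, 4) + 0.1)
    print(S.sum(axis=0), S.sum(axis=1))   # both vectors are close to all ones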
LECTURE II
Proving Algebraic Identities
Thursday, January 11, 2018, 1:00 p.m.- 1:50 p.m. Ballroom 6AB, Upper Level, San Diego Convention Center
In numerous mathematical settings, an object typically has several representations. The word (or isomorphism) problem asks: when are two given representations equivalent? Such problems have driven much structural and algorithmic research across mathematics.
We focus on the algebraic setting: our objects are polynomials and rational functions in many variables, represented by arithmetic formulae. Here the word problem is proving algebraic identities. I will describe the history, motivation and the status of this problem in two settings: when the variables commute, and when they do not.
For commuting variables, a probabilistic polynomial time algorithm was known, and a major open problem is to find a deterministic counterpart. To explain this we'll visit the VP versus VNP problem, permanents vs. determinants and more.
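As a concrete illustration of the probabilistic algorithm mentioned for commuting variables, here is a minimal sketch of randomized identity testing in the Schwartz-Zippel spirit (an editorial addition, not material from the lecture; the prime and the number of trials are arbitrary choices): two formulas are compared by evaluating them at random points over a large prime field.

    import random

    P = 2_147_483_647   # a large prime; we work in the field Z/P

    def probably_equal(f, g, num_vars, trials=20):
        """Randomized test of whether two polynomial formulas agree identically.

        f and g are callables taking a list of field elements. Distinct
        polynomials of low degree rarely agree at a uniformly random point,
        so agreement on all trials means equality with high probability.
        """
        for _ in range(trials):
            x = [random.randrange(P) for _ in range(num_vars)]
            if f(x) % P != g(x) % P:
                return False        # a witness: the formulas differ
        return True

    f = lambda v: (v[0] + v[1]) ** 2
    g = lambda v: v[0] ** 2 + 2 * v[0] * v[1] + v[1] ** 2
    print(probably_equal(f, g, 2))                       # True
    print(probably_equal(f, lambda v: f(v) + 1, 2))      # False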
For non-commuting variables, I will describe a recent deterministic polynomial time algorithm based on the ideas of the first lecture, appealing to the theory of free skew fields and to degree bounds on invariant rings of linear group actions.
Finally, we'll see how the two settings are related!
This talk is self-contained, and requires no special background. The new material covered is taken mostly from https://arxiv.org/abs/1511.03730.
LECTURE III
Proving Analytic Inequalities
Friday, January 12, 2018, 1:00 p.m.- 1:50 p.m. Ballroom 6AB, Upper Level, San Diego Convention Center
The celebrated Brascamp-Lieb (BL) inequalities, and their reverse form of Barthe, is a powerful framework which unifies and generalizes many important inequalities in analysis, convex geometry and information theory.
I will exemplify BL inequalities, building to the general set-up. I will describe the structural theory that characterizes existence and optimality of these inequalities in terms of their description (called BL-data). But can one efficiently compute existence and optimality from given BL-data?
I will describe a recent polynomial time algorithm for these problems, based on a natural alternate minimization approach and operator scaling analysis discussed in the first lecture. It also supplies alternative proofs to some of the structural results.
This algorithm may be viewed (via the structural theory) in two ways that make it potentially exciting for new applications in optimization. First, it efficiently solves a large natural class of non-convex programs. Second, it efficiently solves a large natural class of linear programs with exponentially many inequalities.
This lecture is self-contained, independent of the previous two. No special background is assumed. Most of this presentation is based on the paper https://arxiv.org/abs/1607.06711.
MAA Invited Address
Quintessential Quandle Queries
Alissa Crans, Loyola Marymount University
Motivated by questions arising in starkly different contexts, quandles have been discovered and rediscovered over the past century. The axioms defining a quandle, an analogue of a group, simultaneously encode the three Reidemeister moves from knot theory and capture the essential properties of conjugation in a group. Thus, on the one hand, quandles are a fruitful source of applications to knots and knotted surfaces; in particular, they provide a complete invariant of knots. On the other, they inspire independent interest as algebraic structures; for instance, the set of homomorphisms from one quandle to another admits a natural quandle structure in a large class of cases. We will illustrate the history of this theory through numerous examples and survey recent developments.
Groups, Graphs, Algorithms: The Graph Isomorphism Problem
László Babai, University of Chicago
Deciding whether or not two given finite graphs are isomorphic has for decades been known as one of a small number of natural computational problems with unsettled complexity status within the P/NP theory.
Building on a framework introduced in a seminal 1980 paper by Eugene M. Luks, recent algorithmic progress on this problem has involved an interplay between finite permutation groups, graphs and more generally, relational structures with low arity, and algorithmic techniques such as the ``Divide and Conquer'' principle. The talk will attempt to illustrate some of the components of this work.
AMS Josiah Willard Gibbs Lecture
Privacy in the Land of Plenty
Cynthia Dwork, Harvard University
Privacy-preserving data analysis has a large literature spanning several academic disciplines over more than half a century. Many early attempts have proved problematic in vivo or in vitro. "Differential privacy," a notion tailored to situations in which data are plentiful, has provided a theoretically sound and powerful framework, given rise to an explosion of research, and has begun to see deployment on a global scale. We will review the definition of differential privacy, illustrate with some examples, and describe surprising applications to statistical validity under adaptive analysis and fairness in machine learning algorithms, settings in which privacy is not itself a concern.
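For readers who have not seen the definition in action, the following minimal sketch shows the most basic differentially private primitive, the Laplace mechanism for a counting query (an editorial illustration assuming NumPy; epsilon and the data are made up): a count changes by at most 1 when one record is added or removed, so Laplace noise of scale 1/epsilon gives an epsilon-differentially private answer.

    import numpy as np

    def private_count(records, predicate, epsilon):
        """Release a count with epsilon-differential privacy (Laplace mechanism).

        A counting query has sensitivity 1: adding or removing one record
        changes the true count by at most 1, so noise of scale 1/epsilon suffices.
        """
        true_count = sum(1 for r in records if predicate(r))
        return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

    ages = [23, 35, 41, 29, 62, 57, 33]
    print(private_count(ages, lambda a: a >= 40, epsilon=0.5))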
Information, Computation, Optimization: Connecting the Dots in the Traveling Salesman Problem
Thursday, January 11, 2018, 9:00 a.m.- 9:50 a.m. Ballroom 6AB, Upper Level, San Diego Convention Center
William Cook, University of Waterloo
Few math models scream impossible as loudly as the traveling salesman problem. Given $n$ cities, the TSP asks for the shortest route to take you to all of them. Easy to state, but if ${\cal P} \neq {\cal NP}$ then no solution method can have good asymptotic performance as $n$ goes off to infinity. The popular interpretation is that we simply cannot solve realistic examples. But this skips over nearly 70 years of intense mathematical study. Indeed, in 1949 Julia Robinson described the TSP challenge in practical terms: "Since there are only a finite number of paths to consider, the problem consists in finding a method for picking out the optimal path when $n$ is moderately large, say $n = 50$." She went on to propose a linear programming attack that was adopted by her RAND colleagues Dantzig, Fulkerson, and Johnson several years later.
Following in the footsteps of these giants, we use linear programming to show that a certain tour of 49,603 historic sites in the US is shortest possible, measuring distance with point-to-point walking routes obtained from Google Maps. We highlight aspects of the modern study of polyhedral combinatorics and discrete optimization that make the computation feasible. This is joint work with Daniel Espinoza, Marcos Goycoolea, and Keld Helsgaun.
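To make Robinson's point about the finite-but-hopeless search space concrete, here is the naive enumeration that the linear programming approach is designed to avoid (an editorial sketch with made-up coordinates; the computations described in the talk use LP relaxations and cutting planes, not enumeration):

    from itertools import permutations
    import math

    cities = [(0, 0), (3, 0), (3, 4), (0, 4), (1, 2)]   # made-up coordinates

    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def brute_force_tsp(pts):
        """Shortest closed tour found by checking all (n-1)! orderings.

        Fine for tiny n, hopeless for n in the thousands, which is the point.
        """
        n = len(pts)
        best_len, best_tour = float("inf"), None
        for perm in permutations(range(1, n)):           # fix city 0 as the start
            tour = (0,) + perm
            length = sum(dist(pts[tour[i]], pts[tour[(i + 1) % n]]) for i in range(n))
            if length < best_len:
                best_len, best_tour = length, tour
        return best_len, best_tour

    print(brute_force_tsp(cities))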
AWM-AMS Noether Lecture
Nonsmooth Boundary Value Problems
Thursday, January 11, 2018, 10:05 a.m.- 10:55 a.m. Ballroom 6AB, Upper Level, San Diego Convention Center
Jill Pipher, Brown University
The regularity properties of solutions to linear partial differential equations in domains depend on the structure of the equation, the degree of smoothness of the coefficients of the equation, and of the boundary of the domain. Quantifying this dependence is a classical problem, and modern techniques can answer some of these questions with remarkable precision. For both physical and theoretical reasons, it is important to consider partial differential equations with non-smooth coefficients. We'll discuss how some classical tools in harmonic and complex analysis have played a central role in answering questions in this subject at the interface of harmonic analysis and PDE.
MAA Project NExT Lecture on Teaching and Learning
Changing Mathematical Relationships and Mindsets: How All Students Can Succeed in Mathematics Learning
Thursday, January 11, 2018, 11:00 a.m.- 11:50 a.m. Ballroom 6C, Upper Level, San Diego Convention Center
Jo Boaler, Stanford University
This talk and discussion will consider how important new brain science can change students' ideas and approaches to mathematics, change students' mathematics pathways dramatically, and promote equity in mathematics classrooms. We will hear about research in neuroscience and education, watch classroom videos and consider mathematics transformations for school and college students.
SIAM Invited Address
Tensor Decomposition: A Mathematical Tool for Data Analysis
Thursday, January 11, 2018, 11:10 a.m. - 12:00 p.m. Ballroom 6AB, Upper Level, San Diego Convention Center
Tamara G. Kolda, Sandia National Laboratories
Tensors are multiway arrays, and tensor decompositions are powerful tools for data analysis. In this talk, we demonstrate the wide-ranging utility of the canonical polyadic (CP) tensor decomposition with examples in neuroscience and chemical detection. The CP model is extremely useful for interpretation, as we show with an example in neuroscience. However, it can be difficult to fit to real data for a variety of reasons. We present a novel randomized method for fitting the CP decomposition to dense data that is more scalable and robust than the standard techniques. We further consider the modeling assumptions for fitting tensor decompositions to data and explain alternative strategies for different statistical scenarios, resulting in a generalized CP tensor decomposition.
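As a small, concrete illustration of what fitting a CP model means, here is a rank-1 alternating least squares sketch (an editorial addition assuming NumPy; the talk's randomized and generalized methods go well beyond this toy case): each step contracts the data tensor against the two factors being held fixed, which is the exact least-squares update for the third.

    import numpy as np

    def rank1_cp_als(X, iters=50):
        """Fit a rank-1 CP model X ~ outer(a, b, c) by alternating least squares.

        Each update contracts X against the two fixed factors, which is the
        exact least-squares solution for the remaining factor (up to scaling).
        """
        I, J, K = X.shape
        b = np.random.rand(J)
        c = np.random.rand(K)
        for _ in range(iters):
            a = np.einsum("ijk,j,k->i", X, b, c) / ((b @ b) * (c @ c))
            b = np.einsum("ijk,i,k->j", X, a, c) / ((a @ a) * (c @ c))
            c = np.einsum("ijk,i,j->k", X, a, b) / ((a @ a) * (b @ b))
        return a, b, c

    # Noisy rank-1 test tensor.
    a0, b0, c0 = np.random.rand(4), np.random.rand(5), np.random.rand(6)
    X = np.einsum("i,j,k->ijk", a0, b0, c0) + 0.01 * np.random.rand(4, 5, 6)
    a, b, c = rank1_cp_als(X)
    approx = np.einsum("i,j,k->ijk", a, b, c)
    print(np.linalg.norm(X - approx) / np.linalg.norm(X))   # small relative error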
Algebraic Structures on Polytopes
Thursday, January 11, 2018, 2:15 p.m. - 3:05 p.m. Ballroom 6AB, Upper Level, San Diego Convention Center
Federico Ardila, San Francisco State University
Generalized permutahedra are a beautiful family of polytopes with a rich combinatorial structure and strong connections to optimization. We study their algebraic structure: we prove they are the universal family of polyhedra with a certain "Hopf monoid" structure. This construction provides a unifying framework to organize and study many combinatorial families: 1. It uniformly answers open questions and recovers known results about graphs, posets, matroids, hypergraphs, and simplicial complexes. 2. It reveals that three combinatorial reciprocity theorems of Stanley and Billera--Jia--Reiner on graphs, posets, and matroids are really the same theorem. 3. It shows that permutahedra and associahedra "know" how to compute the multiplicative and compositional inverses of power series. The talk will be accessible to undergraduates and will not assume previous knowledge of these topics.
Searching for Hyperbolicity
Ruth Charney, Brandeis University
While groups are defined as algebraic objects, they can also be viewed as symmetries of geometric objects. This viewpoint gives rise to powerful tools for studying infinite groups. The work of Max Dehn in the early 20th century on groups acting on the hyperbolic plane was an early indication of this phenomenon. In the 1980's, Dehn's ideas were vastly generalized by Mikhail Gromov to a large class of groups, now known as hyperbolic groups. In recent years there has been an effort to push these ideas even further. If a group fails to be hyperbolic, might it still display some hyperbolic behavior? Might some of the techniques used in hyperbolic geometry still apply? The talk will begin with an introduction to some basic ideas in geometric group theory and Gromov's notion of hyperbolicity, and conclude with a discussion of recent work on finding and encoding hyperbolic behavior in more general groups.
Friday, January 12, 2018, 9:00 a.m.- 9:50 a.m. Ballroom 6AB, Upper Level, San Diego Convention Center
Tadashi Tokieda, Stanford University
Would you like to come see some toys?
'Toy' here has a special sense: an object from daily life which can be found or made in minutes, yet which, if played with imaginatively, reveals behaviors that intrigue scientists for weeks. We will explore table-top demos of several such toys, and extract a mathematical story. Some of the toys will be classical but revisited, others will be original, and all will be surprising to mathematicians/physicists and amusing to everyone else.
Emergent Phenomena in Random Structures and Algorithms
Friday, January 12, 2018, 10:05 a.m.- 10:55 a.m. Ballroom 6AB, Upper Level, San Diego Convention Center
Dana Randall, Georgia Institute of Technology
Markov chain Monte Carlo methods have become ubiquitous across science and engineering to model dynamics and explore large combinatorial sets. Over the last 20 years there have been tremendous advances in the design and analysis of efficient sampling algorithms for this purpose. One of the striking discoveries has been the realization that many natural Markov chains undergo phase transitions whereby they abruptly change from being efficient to inefficient as some parameter of the system is modified, also revealing interesting properties of the underlying stationary distributions.
We will explore valuable insights that phase transitions provide in three settings. First, they allow us to understand the limitations of certain classes of sampling algorithms, potentially leading to faster alternative approaches. Second, they reveal statistical properties of stationary distributions, giving insight into various interacting models, such as colloids, segregation models and interacting particle systems. Third, they predict emergent phenomena that can be harnessed for the design of distributed algorithms for certain asynchronous models of programmable active matter. We will see how these three research threads are closely interrelated and inform one another.
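To fix ideas about the kind of chain under discussion, here is a minimal sketch of Glauber dynamics for the hard-core (independent set) model on a grid (an editorial illustration; the parameters are arbitrary and the code is not from the talk). The fugacity lambda is the knob whose variation produces, on suitable graphs, the phase transition alluded to above.

    import random

    def glauber_hard_core(n=20, lam=1.0, steps=200_000, seed=0):
        """Glauber dynamics for independent sets on an n x n grid graph.

        At each step pick a vertex uniformly; with probability lam/(1+lam)
        try to occupy it (allowed only if no neighbour is occupied),
        otherwise vacate it. Stationary distribution: pi(I) proportional to lam^|I|.
        """
        rng = random.Random(seed)
        occupied = [[False] * n for _ in range(n)]
        for _ in range(steps):
            i, j = rng.randrange(n), rng.randrange(n)
            if rng.random() < lam / (1.0 + lam):
                nbrs_free = all(
                    not occupied[x][y]
                    for x, y in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                    if 0 <= x < n and 0 <= y < n
                )
                if nbrs_free:
                    occupied[i][j] = True
            else:
                occupied[i][j] = False
        return sum(sum(row) for row in occupied)

    print(glauber_hard_core(lam=0.5), glauber_hard_core(lam=2.0))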
Wow, so many minimal Surfaces!
Friday, January 12, 2018, 11:10 a.m.- 12:00 p.m. Ballroom 6AB, Upper Level, San Diego Convention Center
André Neves, University of Chicago
Minimal surfaces are ubiquitous in geometry but they are quite hard to find. For instance, Yau conjectured in 1982 that any 3-manifold admits infinitely many closed minimal surfaces, but the best known result is the existence of at least two.
In a different direction, Gromov conjectured a Weyl Law for the volume spectrum that was proven last year by Liokumovich, Marques, and myself.
I will cover a bit of the history of the problem and then talk about recent joint work with Irie and Marques: we combined Gromov's Weyl Law with the min-max theory Marques and I have been developing over the last years to prove that, for generic metrics, not only are there infinitely many minimal hypersurfaces, but they are also dense.
MAA Lecture for Students
HOW MANY DEGREES ARE IN A MARTIAN CIRCLE? And Other Human - and Nonhuman - Questions One Should Ask About Everyday Mathematics
James Tanton, MAA Mathematician at Large
Who chose the number 360 for the count of degrees in a circle? Why that number? And why do mathematicians not like that number for mathematics? Why is the preferred direction of motion in mathematics counterclockwise when the rest of world naturally chooses clockwise? Why are fingers and single digit numbers both called digits? Why do we humans like the numbers 10, 12, 20, and 60 particularly so? Why are logarithms so confusing? Why is base e the "natural" logarithm to use? What happened to the vinculum? (Bring back the vinculum, I say!) Why did human circle-ometry become trigonometry? Let's spend a session together exploring tidbits from the human - and nonhuman - development of mathematics.
Current Events Bulletin Session
Friday, January 12, 2018, 1:00 p.m.- 5:00 p.m. Room 6E, Upper Level, San Diego Convention Center
Materials from Mathematics
Friday, January 12, 2018, 1:00 p.m,. Room 6E, Upper Level, San Diego Convention Center
Richard D. James, University of Minnesota
I survey some examples of materials whose recent discovery was based in an essential way on mathematical ideas. The main idea concerns "compatibility" – the fitting together of the phases of a material. Some of the emerging materials have the ability to change heat directly into electricity, without the need of a separate electrical generator.
How Complicated are Polynomials in Many Variables?
Craig L. Huneke, University of Virginia
The title question refers to systems of polynomial equations in many variables over a field. It can be made precise in many ways, for example, through the complexity of detecting whether a given polynomial can be expressed as a linear combination (with polynomial coefficients) of other polynomials.
Another sense in which it can be made precise is through comparisons of numerical data about the ideal generated by the polynomial equations, which generalize the numbers of generators and relations. This additional numerical data was originally introduced in the 1890's by David Hilbert to "count" the number of polynomial invariants of the action of a group (this was the work that "killed" invariant theory for a brief time!). In the last two years, three long-standing problems about these numerical invariants have been solved.
This talk will introduce the main players in this story: Hilbert functions, free resolutions, projective dimension, Betti numbers, and regularity. The first part of the talk will be historical and introductory, and the second half will focus on the solution by Ananyan and Hochster of Stillman's conjecture.
From Newton to Navier-Stokes, or How to Connect Fluid Mechanics Equations from Microscopic to Macroscopic Scales
Isabelle Gallagher, Université Paris Diderot
The question of deriving Fluid Mechanics equations from deterministic systems of interacting particles obeying Newton's laws, in the limit when the number of particles goes to infinity, is a longstanding open problem suggested by Hilbert in his 6th problem. One step in the program consists in deriving Fluid Mechanics Equations from the Boltzmann equation on the one hand, and the Boltzmann equation from particle systems on the other.
In this talk we shall show how to answer Hilbert's question at a formal level, and why it is very difficult, and actually an open problem to this day, to make the argument rigorous. We shall also discuss a few successful attempts in this program, in particular the works of Golse and Saint Raymond which provide a rigorous derivation of the incompressible Navier-Stokes equations from the Boltzmann equation.
LECTURE IV
The Cap Set Conjecture, The Polynomial Method, and Applications (after Croot-Lev-Pach, Ellenberg-Gijswijt, and Others)
Joshua A. Grochow, University of Colorado, Boulder
The card game Set asks players to find lines in a subset - drawn from a deck of cards - of the four-dimensional vector space over the integers mod 3. In n-dimensional generalized Set, we get the seemingly-innocuous cap set question: how large can a subset of $(\mathbb{Z}_3)^n$ be and still contain no lines? This question stood open for 30 years; in this talk, we'll see the beautiful, elementary, and wonderfully short proof of the Cap Set Conjecture (that the largest subset is exponentially smaller than the whole space), due to Ellenberg and Gijswijt, following on Croot-Lev-Pach. We will also see some of the many applications of the result and its proof, not only in combinatorics, but also in commutative algebra, the geometry of tensors, and computational complexity. Little background will be assumed beyond linear algebra over the integers modulo a prime.
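To make the object concrete, the following editorial sketch checks the line condition in $(\mathbb{Z}_3)^n$ and greedily builds a cap; the greedy set is only a lower bound on the maximum cap size and has nothing to do with the upper-bound proof discussed in the talk.

    from itertools import product

    def is_line(a, b, c):
        """In (Z_3)^n, three distinct points are collinear iff a + b + c = 0 mod 3."""
        return all((x + y + z) % 3 == 0 for x, y, z in zip(a, b, c))

    def greedy_cap(n):
        """Greedily build a line-free subset (a cap) of (Z_3)^n.

        The result is just a lower bound; the Croot-Lev-Pach / Ellenberg-Gijswijt
        theorem is an upper bound of the form c^n with c < 3.
        """
        cap = []
        for p in product(range(3), repeat=n):
            if all(not is_line(p, a, b) for i, a in enumerate(cap) for b in cap[:i]):
                cap.append(p)
        return cap

    for n in range(1, 5):
        print(n, len(greedy_cap(n)))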
AMS Retiring Presidential Address
The Concept of Holonomy---Its History and Recent Developments
Saturday, January 13, 2018, 9:00 a.m.- 9:50 a.m. Ballroom 6AB, Upper Level, San Diego Convention Center
Robert L. Bryant, Duke University
Inspired by the study of `rolling constraints' in mechanics, the concept of holonomy was introduced into geometry to describe parallel translation in curved media. In the 1920s, it was applied by É. Cartan and his students to problems such as the classification of real forms of Lie groups. Riemannian manifolds with reduced holonomy made their first appearance in mainstream geometry as Kähler geometry, and, in the 1950s, this motivated M. Berger to classify the possible Riemannian holonomy groups, providing a fruitful taxonomy of geometries. S.-T. Yau's solution of the Calabi Conjecture fits naturally into this framework and stimulated interest in the other special 'holonomies' on Berger's list. Beginning in the 1980s the final two exceptional cases were shown to exist and to play an essential role in high-energy theoretical physics analogous to the role that Calabi-Yau spaces play in string theory and mirror symmetry. In recent years, many new results have extended our knowledge of these exceptional spaces and their remarkable properties, though much remains mysterious.
In this talk, I will describe this history, develop the basic concepts, and explain some of the recent advances and some of the challenging open problems in the study of holonomy.
Transforming Learning: Building Confidence and Community to Engage Students with Rigor
Saturday, January 13, 2018, 10:05 a.m.- 10:55 a.m. Ballroom 6AB, Upper Level, San Diego Convention Center
Maria Klawe, Harvey Mudd College
As the first woman and the first mathematician to become president of Harvey Mudd College, I have been delighted to see our departments transform the teaching of rigorous mathematical content in ways that attract and retain female students in mathematics, computer science, engineering and physics. This talk describes the curricular and classroom transformations that have taken place over the last decade and the significant increases in diversity that have occurred as a result. Just in the last three years we have seen graduating classes in computer science, engineering and physics that were more than 50% female. I hope that attendees will leave energized and inspired to experiment in their own departments.
MAA-AMS-SIAM Gerald and Judith Porter Public Lecture
Political Geometry: Voting Districts, "Compactness," and Ideas About Fairness
Saturday, January 13, 2018, 3:00 p.m. - 3:50 p.m. Ballroom 6AB, Upper Level, San Diego Convention Center
Moon Duchin, Tufts University
The U.S. Constitution calls for a census every ten years, followed by freshly drawn congressional districts to evenly divide up the population of each state. How the lines are drawn has a profound impact on how the elections turn out, especially with increasingly fine-grained voter data available. We call a district gerrymandered if the lines are drawn to rig an outcome, whether to dilute the voting power of minorities, to overrepresent one political party, to create safe seats for incumbents, or anything else. Bizarrely-shaped districts are widely recognized as a red flag for gerrymandering, so a traditional districting principle is that the shapes should be "compact"—since that typically is left undefined, it's hard to enforce or to study. I will discuss "compactness" from the point of view of metric geometry, and I'll overview opportunities for mathematical interventions and constraints in the highly contested process of electoral redistricting. To do this requires a rich mix of law, civil rights, geometry, political science, and supercomputing.
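One standard way to make "compactness" quantitative, included here as an editorial illustration of the metric-geometry viewpoint (the talk surveys such scores and their limitations), is the Polsby-Popper score $4\pi A/P^2$, which equals 1 for a disc and is small for contorted shapes:

    import math

    def polsby_popper(vertices):
        """Polsby-Popper compactness of a simple polygon given as (x, y) vertices.

        Score = 4*pi*A / P^2: equals 1 for a circle, approaches 0 for
        long, thin, or highly contorted shapes.
        """
        n = len(vertices)
        area = 0.0
        perimeter = 0.0
        for i in range(n):
            x1, y1 = vertices[i]
            x2, y2 = vertices[(i + 1) % n]
            area += x1 * y2 - x2 * y1            # shoelace formula
            perimeter += math.hypot(x2 - x1, y2 - y1)
        area = abs(area) / 2.0
        return 4.0 * math.pi * area / perimeter ** 2

    square = [(0, 0), (1, 0), (1, 1), (0, 1)]
    sliver = [(0, 0), (10, 0), (10, 0.1), (0, 0.1)]
    print(polsby_popper(square), polsby_popper(sliver))   # ~0.785 vs ~0.031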
Gauge symmetry is not a symmetry?
I have read in one of Seiberg's articles something to the effect that gauge symmetry is not a symmetry but a redundancy in our description, introduced by adding fake degrees of freedom to facilitate calculations.
Regarding this I have a few questions:
Why is it called a symmetry if it is not a symmetry? What about Noether's theorem in this case? And what about the gauge groups, U(1) etc.?
Does that mean, in principle, that one can gauge any theory (just by introducing the proper fake degrees of freedom)?
Are there analogs or other examples to this idea, of introducing fake degrees of freedom to facilitate the calculations or to build interactions, in classical physics? Is it like introducing the fictitious force if one insists on using Newton's 2nd law in a noninertial frame of reference?
quantum-field-theory particle-physics gauge-theory research-level topological-order
Xiao-Gang Wen
Revo
As was mentioned, I just recommend paying more attention to the phrase "This implies for example the conservation of the electric charge irrespective of the equation of motion." in David Bar Moshe's answer. – Misha Aug 24 '11 at 6:30
This is a great question, but the answers are misleading. There is always a global part to the gauge symmetry which is a real symmetry. The Noether theorem gives you a current which is conserved due to the equations of motion, and there are conserved quantities associated to boundary transformations. – Ron Maimon Jun 23 '12 at 11:18
While gauge symmetry is, of course, classical and seems to have no quantum content, gauge symmetry breaking is purely quantum. This "correction" (or breaking) is a profound quantum phenomenon. – user15692 Nov 5 '12 at 6:14
@RonMaimon - Global symmetries are emphatically not part of the gauge symmetries. The set of gauge symmetries that form redundancies (and I think what people really mean by gauge symmetry) are those that act trivially at infinity (in a suitable sense), i.e. generated infinitesimally by functions $\alpha(x) \to 0$ as $x \to \infty$. Global symmetries on the other hand correspond to $\alpha(x) = $ constant which do not satisfy the above property. Thus, global symmetries are not part of what one truly calls "gauge symmetry". – Prahar Jun 13 '16 at 15:30
@Prahar I have read this statement several times now, but wasn't really able to understand it. Do you know any good reason (or some good reference that explains) why only gauge symmetries that act trivially at infinity are true redundancies that need to be modded out? – JakobH May 13 '17 at 9:12
Because the term "gauge symmetry" pre-dates QFT. It was coined by Weyl, in an attempt to extend general relativity. In setting up GR, one could start with the idea that one cannot compare tangent vectors at different spacetime points without specifying a parallel transport/connection; Weyl tried to extend this to include size, thus the name "gauge". In modern parlance, he created a classical field theory of a $\mathbb{R}$-gauge theory. Because $\mathbb{R}$ is locally the same as $U(1)$ this gave the correct classical equations of motion for electrodynamics (i.e. Maxwell's equations). As we will go into below, at the classical level, there is no difference between gauge symmetry and "real" symmetries.
Yes. In fact, a frequently used trick is to introduce such a symmetry to deal with constraints. Especially in subjects like condensed matter theory, where nothing is so special as to be believed to be fundamental, one often introduces more degrees of freedom and then "glue" them together with gauge fields. In particular, in the strong-coupling/Hubbard model theory of high-$T_c$ superconductors, one way to deal with the constraint that there be no more than one electron per site (no matter the spin) is to introduce spinons (fermions) and holons (bosons) and a non-Abelian gauge field, such that really the low energy dynamics is confined --- thus reproducing the physical electron; but one can then go and look for deconfined phases and ask whether those are helpful. This is a whole other review paper in and of itself. (Google terms: "patrick lee gauge theory high tc".)
You need to distinguish between forces and fields/degrees of freedom. Forces are, at best, an illusion anyway. Degrees of freedom really matter however. In quantum mechanics, one can be very precise about the difference. Two states $\left|a\right\rangle$ and $\left|b\right\rangle$ are "symmetric" if there is a unitary operator $U$ s.t. $$U\left|a\right\rangle = \left|b\right\rangle$$ and $$\left\langle a|A|a\right\rangle =\left\langle b|A|b\right\rangle $$ where $A$ is any physical observable. "Gauge" symmetries are those where we decide to label the same state $\left|\psi\right\rangle$ as both $a$ and $b$. In classical mechanics, both are represented the same way as symmetries (discrete or otherwise) of a symplectic manifold. Thus in classical mechanics these are not separate, because both real and gauge symmetries lead to the same equations of motion; put another way, in a path-integral formalism you only notice the difference with "large" transformations, and locally the action is the same. A good example of this is the Gibbs paradox of working out the entropy of mixing identical particles -- one has to introduce by hand a factor of $N!$ to avoid overcounting --- this is because at the quantum level, swapping two particles is a gauge symmetry. This symmetry makes no difference to the local structure (in differential geometry speak) so one cannot observe it classically.
A general thing -- when people say "gauge theory" they often mean a much more restricted version of what this whole discussion has been about. For the most part, they mean a theory where the configuration variable includes a connection on some manifold. These are a vastly restricted version, but covers the kind that people tend to work with, and that's where terms like "local symmetry" tend to come from. Speaking as a condensed matter physicist, I tend to think of those as theories of closed loops (because the holonomy around a loop is "gauge invariant") or if fermions are involved, open loops. Various phases are then condensations of these loops, etc. (For references, look at "string-net condensation" on Google.)
Finally, the discussion would be amiss without some words about "breaking" gauge symmetry. As with real symmetry breaking, this is a polite but useful fiction, and really refers to the fact that the ground state is not the naive vacuum. The key is the commuting of limits --- if one (correctly) takes the large system limit last (both IR and UV) then no breaking of any symmetry can occur. However, it is useful to put in by hand the fact that ground states related by a real symmetry are separated into different superselection sectors, and so to work with a reduced Hilbert space of only one of them; for gauge symmetries one can again do the same, (carefully) commuting superselection with gauge fixing.
genneth
When I try to browse your personal blog, I get an "Unknown control sequence '\Gam'" – Larry Harson Aug 23 '11 at 14:55
I didn't ask why it is called gauge symmetry. I was asking how, if gauge symmetry is not a symmetry, the gauge groups are not symmetry groups either! That is what I do not understand – Revo Aug 25 '11 at 7:59
@Revo: in classical field theory, they are symmetries. David Bar Moshe below explains how Noether's theorem works in this case. This is not the case in a quantum theory. People kept the terminology even though now we understand better how things work. – genneth Aug 25 '11 at 8:18
The (big) difference between a gauge theory and a theory with only rigid symmetry is precisely expressed by Noether's first and second theorems:
In the case of a rigid symmetry, the currents corresponding to the group generators are conserved only as a consequence of the equations of motion; one says that they are conserved "on-shell". In the case of a continuous gauge symmetry, the conservation laws become valid "off-shell", that is, independently of the equations of motion. This implies for example the conservation of the electric charge irrespective of the equation of motion.
Now, the conservation law equations can be used in principle to reduce the number of fields.
The procedure is as follows:
Work on the subspace of the field configurations satisfying the conservation laws. However, there will still be residual gauge symmetries on this subspace. In order to get rid of those:
Select a gauge fixing condition for each conservation law.
This will reduce the "number of field components" by two for every gauge symmetry. The implementation of this procedure however is very difficult, because it actually requires to solve the conservation laws, and moreover, the reduced space of field configurations is very complicated. This is the reason why this procedure is rarely implemented and other techniques like BRST are used.
Vladimir Kalitvianski
David Bar Moshe
Can you give a reference for such a calculation whereby a physically conserved quantity is derived from local gauge symmetries? I would think that is impossible since after all gauges can be fixed and there would be no remnant symmetry but nothing physical would have changed either! I would have thought that all conservation laws need the variation of the action (w.r.t the deformation parameters) to be evaluated on the solutions and hence conservation is always on-shell. That is my understanding of what happens even for non-Abelian gauge field theory. – user6818 Oct 31 '11 at 18:28
@Anirbit, Sorry for the late response. The following reference discusses Noether's second theorem: nd.edu/~kbrading/Research/WhichSymmetryStudiesJuly01.pdf Let's consider for definiteness a gauged Klein-Gordon field theory. The equation of motion of the gauge field is $\partial_{\nu}F_{\mu \nu} = J_{\mu}$, where $J_{\mu}$ is the Klein-Gordon field current: $i(\bar{\phi}\partial_{\mu}\phi - \phi\partial_{\mu}\bar{\phi})$. – David Bar Moshe Nov 14 '11 at 14:01
Cont. Thus this current is conserved when the gauge field satisfies its equation of motion; the matter field need not satisfy its equation of motion for the conservation. Thus, one may say that the current conservation requires only the gauge fields to be on-shell. But this is not the whole story; the time component of the gauge field equations of motion is the Bianchi identity (or the Gauss law). – David Bar Moshe Nov 14 '11 at 14:01
Cont. The Lagrangian doesn't contain a time derivative for the time component of the gauge field. This component appears as a Lagrange multiplier times the Gauss law, thus its equation of motion is not dynamical; it just describes a constraint surface in the phase space expressing the redundancy of the field components. Thus the conservation of the time component of the Klein-Gordon current, i.e., the charge (after integration over the 3-volume), is not dependent on any equation of motion of the "true" degrees of freedom. – David Bar Moshe Nov 14 '11 at 14:02
Dear @DavidBarMoshe: Minor thing. It seems to me that the Klein-Gordon field current should depend on the gauge potential, cf. this Phys.SE answer. – Qmechanic♦ Jan 21 '13 at 15:22
1) Why is it called a symmetry if it is not a symmetry? what about Noether theorem in this case? and the gauge groups U(1)...etc?
Gauge symmetry is a local symmetry in CLASSICAL field theory. This may be why people call gauge symmetry a local symmetry. But we know that our world is quantum. In quantum systems, gauge symmetry is not a symmetry, in the sense that the gauge transformation does not change any quantum state and is a do-nothing transformation. Noether's theorem is a notion of classical theory. Quantum gauge theory (when described by the physical Hilbert space and Hamiltonian) has no Noether's theorem.
Since the gauge symmetry is not a symmetry, the gauge group does not mean too much, in the sense that two different gauge groups can sometimes describe the same physical theory. For example, the $Z_2$ gauge theory is equivalent to the following $U(1)\times U(1)$ Chern-Simons gauge theory:
$$\frac{K_{IJ}}{4\pi}a_{I,\mu} \partial_\nu a_{J,\lambda} \epsilon^{\mu\nu\lambda}$$ with $$K= \left(\begin{array}{cc} 0 & 2\\ 2 & 0 \end{array}\right)$$ in (2+1)D.
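A quick numerical sanity check of this equivalence, added here as an editorial illustration (it relies on the standard fact that an abelian Chern-Simons theory with matrix $K$ has $|\det K|$ ground states on a torus, which is not spelled out in the answer itself):

    import numpy as np

    # K-matrix of the U(1) x U(1) Chern-Simons theory quoted above.
    K = np.array([[0, 2],
                  [2, 0]])

    # Torus ground-state degeneracy of an abelian Chern-Simons theory is |det K|;
    # here it is 4, the same as the Z_2 gauge theory (toric code) on a torus.
    print(abs(round(np.linalg.det(K))))   # -> 4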
Since the gauge transformation is a do-nothing transformation and the gauge group is unphysical, it is better to describe gauge theory without using a gauge group and the related gauge transformation. This has been achieved by string-net theory. Although string-net theory was developed to describe topological order, it can also be viewed as a description of gauge theory without using a gauge group.
The study of topological order (or long-range entanglements) shows that if a bosonic model has a long-range entangled ground state, then the low energy effective theory must be some kind of gauge theory. So the low energy effective gauge theory is actually a reflection of the long-range entanglements in the ground state.
So in condensed matter physics, gauge theory is not related to geometry or curvature. The gauge theory is directly related to and is a consequence of the long-range entanglements in the ground state. So maybe the gauge theory in our vacuum is also a direct reflection of the long-range entanglements in the vacuum.
2) Does that mean, in principle, that one can gauge any theory (just by introducing the proper fake degrees of freedom)?
Yes, one can rewrite any theory as a gauge theory of any gauge group. However, such a gauge theory is usually in the confined phase and the effective theory at low energy is not a gauge theory.
Also see a related discussion: Understanding Elitzur's theorem from Polyakov's simple argument?
Xiao-Gang Wen
I have several stupid questions about Xiao-Gang Wen's answer: 1) "Noether theorem is a notion of classical theory." If Noether's theorem is classical, what about the charge? In quantum theory, the Noether charge is still conserved, such as electric charge, isn't it? 2) "in the sense that the gauge transformation does not change any quantum state" If the quantum state is just changed by a phase factor, does it mean the state changes nothing? In quantum mechanics, different gauge potentials A_\mu will have physical effects such as the A-B effect. Is there any relation between gauge transformation and the A-B effect? – thone May 30 '12 at 9:58
1) Electric charge is conserved because of a true global symmetry --- it is not gauge. – genneth May 30 '12 at 10:36
2) It is not true that different gauged $A_\mu$ will have different effects. The basic effect is the fact that different paths enclose different amounts of $B$, which is entirely gauge independent. – genneth May 30 '12 at 10:38
@Jook: There are three kinds of gauge theories: (1) Classical gauge theory, where both gauge field and charged matter are treated classically. (2) Fake quantum gauge theory, where the gauge field is treated classically and the charged matter is treated quantum mechanically. (3) Real quantum gauge theory, where both gauge field and charged matter are treated quantum mechanically. Most papers and books deal with the fake quantum gauge theory, and so does your question/answer it seems. My answer deals with the real quantum gauge theory, which is very different. – Xiao-Gang Wen May 30 '12 at 11:51
@Xiao-GangWen: Why do you think that a gauge symmetry (that goes to the identity on the boundary) is a true symmetry in classical physics? In my opinion, in neither case is it a true symmetry, but only a redundancy in the description. Thank you in advance. – Diego Mazón Jul 26 '12 at 23:43
When talking about symmetry, one should always indicate: symmetry of what?
If I measure the length of a stick in inches and then in centimeters, i.e. in different gauges, then I get two different answers, although the stick is the same in both cases. Similarly, when I measure the phase of a sine wave with two clocks that have different phases, then I get two different phases, and phase shifts form the group U(1). In the first example the stick is invariant under the change of gauge from centimeters to inches, but this has nothing to do with any physical symmetry of the stick.
Noether's theorem has to do with symmetries of the Lagrangian. E.g. if the Lagrangian has spherical symmetry, then total angular momentum is conserved. The Noether theorem obviously also applies to quantum systems. A change of gauge is not a physical transformation, that is all.
In quantum field theory one starts with a simple Lagrangian (e.g. Dirac Lagrangian), and then changes it so that it becomes invariant under local gauge changes, i.e. one then changes the derivative in the Dirac equation into a D which has a "gauge field" in it: to make this sound cryptic, one then says that "local gauge invariance has generated a gauge field", although this is not true. Imposing local gauge invariance simply puts a constraint on what sort of Lagrangians can be written. It is similar to demanding that a function F(z) be analytic in the complex plane; this also has serious consequences.
Martin
Gauge symmetry imposes local conservation laws, which are called Ward Identities in QED and Slavnov-Taylor identities for non-Abelian gauge theories. Those identities relate amplitudes or limit them.
An example of those constraints imposed by gauge symmetry is the transversality of the vacuum polarization. To be more precise, gauge symmetry does not allow a mass term for the photon in the Lagrangian. Yet, such a term could develop through quantum fluctuations. This does not happen because of the Ward identity, which imposes transversality of the photon vacuum polarization. Another example is the relation between the fermion propagator and the basic vertex in QED. It guarantees the absence of longitudinal photons.
The idea is thus that gauge symmetry does impose a sort of Noether theorem, but in a much more refined way. It shows up at the level of quantum corrections and limits them. These relations are, furthermore, local. They become a sort of local version of Noether's theorem.
José Ignacio Latorre
October 2014, 19(8): 2593-2601. doi: 10.3934/dcdsb.2014.19.2593
Periodic solutions to differential equations with a generalized p-Laplacian
Adam Lipowski 1, Bogdan Przeradzki 2, and Katarzyna Szymańska-Dębowska 2
Centre of Mathematics and Physics, Technical University of Łódź, 90-924 Łódź, ul. Wólczańska 215, Poland
Institute of Mathematics, Technical University of Łódź, 90-924 Łódź, ul. Wólczańska 215, Poland
Received October 2013 Revised February 2014 Published August 2014
The existence of a periodic solution to nonlinear ODEs with $\varphi$-Laplacian is proved under conditions on functions given in the equation (not on the unknown solutions). The results are applied to a relativistic pendulum equation in a general form.
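For readers who want to see the equation rather than only the existence statement, here is a minimal numerical sketch of the forced relativistic pendulum that the abstract refers to (an editorial illustration assuming NumPy/SciPy; the coefficient and forcing are made up, and the paper itself proves existence rather than computing solutions). Writing $w=\varphi(u')=u'/\sqrt{1-u'^2}$ turns $(\varphi(u'))'+a\sin u=e(t)$ into a first-order system with $u'=w/\sqrt{1+w^2}$.

    import numpy as np
    from scipy.integrate import solve_ivp

    a = 1.0                                     # illustrative pendulum coefficient
    e = lambda t: 0.3 * np.cos(2 * np.pi * t)   # illustrative 1-periodic forcing

    def rhs(t, y):
        u, w = y                                # w = u' / sqrt(1 - u'^2)
        du = w / np.sqrt(1.0 + w * w)           # inverse of the relativistic phi
        dw = e(t) - a * np.sin(u)
        return [du, dw]

    sol = solve_ivp(rhs, (0.0, 20.0), [0.1, 0.0], rtol=1e-8, atol=1e-10)
    print(sol.y[0, -1], sol.y[1, -1])           # state after 20 forcing periods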
Keywords: coincidence degree, p-Laplacian, relativistic pendulum, continuation.
Mathematics Subject Classification: 34C25, 47H1.
Citation: Adam Lipowski, Bogdan Przeradzki, Katarzyna Szymańska-Dębowska. Periodic solutions to differential equations with a generalized p-Laplacian. Discrete & Continuous Dynamical Systems - B, 2014, 19 (8) : 2593-2601. doi: 10.3934/dcdsb.2014.19.2593
|
CommonCrawl
|
How do I construct the $SU(2)$ representation of the Lorentz Group using $SU(2)\times SU(2)\sim SO(3,1)$ ?
This question is based on problem II.3.1 in Anthony Zee's book Quantum Field Theory in a Nutshell
Show, by explicit calculation, that $(1/2,1/2)$ is the Lorentz Vector.
I see that the generators of SU(2) are the Pauli matrices and that the generators of SO(3,1) can be written as matrices composed of two Pauli matrices along the diagonal. Is it always the case that the direct product of two groups is formed from the generators like this?
I ask this because I'm trying to write a Lorentz boost as two simultaneous quaternion rotations [unit quaternion rotations are isomorphic to SU(2)] and transform between the two methods. Is this possible?
In other words, How do I construct the SU(2) representation of the Lorentz Group using the fact that $SU(2)\times SU(2) \sim SO(3,1)$?
Here is some background information:
Zee has shown that the algebra of the Lorentz group is formed from two separate $SU(2)$ algebras [$SO(3,1)$ is isomorphic to $SU(2)\times SU(2)$] because the Lorentz algebra satisfies:
$$\begin{align}[J_{+i},J_{+j}] &= ie_{ijk}J_{k+} & [J_{-i},J_{-j}] &= ie_{ijk} J_{k-} & [J_{+i},J_{-j}] &= 0\end{align}$$
The representations of $SU(2)$ are labeled by $j=0,\frac{1}{2},1,\ldots$ so the $SU(2)\times SU(2)$ rep is labelled by $(j_+,j_-)$, with $(1/2,1/2)$ being the Lorentz 4-vector: each representation contains $(2j+1)$ elements, so $(1/2,1/2)$ contains $2\times 2=4$ elements.
quantum-field-theory homework-and-exercises group-theory group-representations
MadScientist
$\begingroup$ It's really the basic problem one has to solve himself in order to understand how spinors work. Check a 3D subset of this problem at motls.blogspot.com/2012/04/why-are-there-spinors.html if you really need help. Just one correction: the complexifications of the $SU(2)\times SU(2)$ and $SO(3,1)$ algebras are the same. However, when the coefficients are real, they're different. $SU(2)\times SU(2)$ is $SO(4)$ while $SO(3,1)$, its pseudoorthogonal version, is the same Lie algebra as $SL(2,C)$. $\endgroup$
– Luboš Motl
$\begingroup$ Crossposted to math.stackexchange.com/q/146319/11127 $\endgroup$
– Qmechanic ♦
Here is a mathematical derivation. We use the sign convention $(+,-,-,-)$ for the Minkowski metric $\eta_{\mu\nu}$.
I) First recall the fact that
$SL(2,\mathbb{C})$ is (the double cover of) the restricted Lorentz group $SO^+(1,3;\mathbb{R})$.
This follows partly because:
There is a bijective isometry from the Minkowski space $(\mathbb{R}^{1,3},||\cdot||^2)$ to the space of $2\times2 $ Hermitian matrices $(u(2),\det(\cdot))$, $$\mathbb{R}^{1,3} ~\cong ~ u(2) ~:=~\{\sigma\in {\rm Mat}_{2\times 2}(\mathbb{C}) \mid \sigma^{\dagger}=\sigma \} ~=~ {\rm span}_{\mathbb{R}} \{\sigma_{\mu} \mid \mu=0,1,2,3\}, $$ $$\mathbb{R}^{1,3}~\ni~\tilde{x}~=~(x^0,x^1,x^2,x^3) \quad\mapsto \quad\sigma~=~x^{\mu}\sigma_{\mu}~\in~ u(2), $$ $$ ||\tilde{x}||^2 ~=~x^{\mu} \eta_{\mu\nu}x^{\nu} ~=~\det(\sigma), \qquad \sigma_{0}~:=~{\bf 1}_{2 \times 2}.\tag{1}$$
There is a group action $\rho: SL(2,\mathbb{C})\times u(2) \to u(2)$ given by $$g\quad \mapsto\quad\rho(g)\sigma~:= ~g\sigma g^{\dagger}, \qquad g\in SL(2,\mathbb{C}),\qquad\sigma\in u(2), \tag{2}$$ which is length preserving, i.e. $g$ is a pseudo-orthogonal (or Lorentz) transformation. In other words, there is a Lie group homomorphism
$$\rho: SL(2,\mathbb{C}) \quad\to\quad O(u(2),\mathbb{R})~\cong~ O(1,3;\mathbb{R}) .\tag{3}$$
Since $\rho$ is a continuous map and $SL(2,\mathbb{C})$ is a connected set, the image $\rho(SL(2,\mathbb{C}))$ must again be a connected set. In fact, one may show that there is a surjective Lie group homomorphism$^1$
$$\rho: SL(2,\mathbb{C}) \quad\to\quad SO^+(u(2),\mathbb{R})~\cong~ SO^+(1,3;\mathbb{R}) , $$ $$\rho(\pm {\bf 1}_{2 \times 2})~=~{\bf 1}_{u(2)}.\tag{4}$$
The Lie group $SL(2,\mathbb{C})=\pm e^{sl(2,\mathbb{C})}$ has Lie algebra $$ sl(2,\mathbb{C}) ~=~ \{\tau\in{\rm Mat}_{2\times 2}(\mathbb{C}) \mid {\rm tr}(\tau)~=~0 \} ~=~{\rm span}_{\mathbb{C}} \{\sigma_{i} \mid i=1,2,3\}.\tag{5}$$
The Lie group homomorphism $\rho: SL(2,\mathbb{C}) \to O(u(2),\mathbb{R})$ induces a Lie algebra homomorphism $$\rho: sl(2,\mathbb{C})\to o(u(2),\mathbb{R})\tag{6}$$ given by $$ \rho(\tau)\sigma ~=~ \tau \sigma +\sigma \tau^{\dagger}, \qquad \tau\in sl(2,\mathbb{C}),\qquad\sigma\in u(2), $$ $$ \rho(\tau) ~=~ L_{\tau} +R_{\tau^{\dagger}},\tag{7}$$ where we have defined left and right multiplication of $2\times 2$ matrices $$L_{\sigma}(\tau)~:=~\sigma \tau~=:~ R_{\tau}(\sigma), \qquad \sigma,\tau ~\in~ {\rm Mat}_{2\times 2}(\mathbb{C}).\tag{8}$$
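As a quick numerical sanity check of the action (2) (a sketch added here, not part of the original answer), one can draw a random $g\in SL(2,\mathbb{C})$ and verify that $\sigma\mapsto g\sigma g^{\dagger}$ preserves $\det\sigma=||\tilde{x}||^2$, i.e. that the induced map on the coefficients $x^{\mu}$ is a Lorentz transformation:

```python
import numpy as np

# Pauli basis sigma_0..sigma_3 of u(2), the Hermitian 2x2 matrices
sigma = [np.eye(2, dtype=complex),
         np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

rng = np.random.default_rng(0)

# a random element of SL(2,C): any invertible complex matrix rescaled to det = 1
g = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
g = g / np.sqrt(np.linalg.det(g))

x = rng.normal(size=4)                                # real 4-vector x^mu
s = sum(x[mu] * sigma[mu] for mu in range(4))         # sigma = x^mu sigma_mu
s2 = g @ s @ g.conj().T                               # rho(g) sigma = g sigma g^dagger

minkowski = lambda y: y[0]**2 - y[1]**2 - y[2]**2 - y[3]**2
print(np.isclose(np.linalg.det(s), minkowski(x)))        # det(sigma) = ||x||^2
print(np.isclose(np.linalg.det(s2), np.linalg.det(s)))   # the norm is preserved

# recover the transformed coefficients x'^mu = (1/2) tr(sigma_mu . rho(g) sigma);
# they come out real (s2 is again Hermitian) with the same Minkowski norm
xp = np.array([0.5 * np.trace(sigma[mu] @ s2) for mu in range(4)])
print(np.allclose(xp.imag, 0), np.isclose(minkowski(xp.real), minkowski(x)))
```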
II) Note that the Lorentz Lie algebra $so(1,3;\mathbb{R}) \cong sl(2,\mathbb{C})$ does not$^2$ contain two perpendicular copies of, say, the real Lie algebra $su(2)$ or $sl(2,\mathbb{R})$. For comparison and completeness, let us mention that for other signatures in $4$ dimensions, one has
$$SO(4;\mathbb{R})~\cong~[SU(2)\times SU(2)]/\mathbb{Z}_2, \qquad\text{(compact form)}\tag{9}$$
$$SO^+(2,2;\mathbb{R})~\cong~[SL(2,\mathbb{R})\times SL(2,\mathbb{R})]/\mathbb{Z}_2.\qquad\text{(split form)}\tag{10}$$
The compact form (9) has a nice proof using quaternions
$$(\mathbb{R}^4,||\cdot||^2) ~\cong~ (\mathbb{H},|\cdot|^2)\quad\text{and}\quad SU(2)~\cong~ U(1,\mathbb{H}),\tag{11}$$
see also this Math.SE post and this Phys.SE post. The split form (10) uses a bijective isometry
$$(\mathbb{R}^{2,2},||\cdot||^2) ~\cong~({\rm Mat}_{2\times 2}(\mathbb{R}),\det(\cdot)).\tag{12}$$
To decompose Minkowski space into left- and right-handed Weyl spinor representations, one must go to the complexification, i.e. one must use the fact that
$SL(2,\mathbb{C})\times SL(2,\mathbb{C})$ is (the double cover of) the complexified proper Lorentz group $SO(1,3;\mathbb{C})$.
Note that Refs. 1-2 do not discuss complexification$^2$. One can more or less repeat the construction from section I with the real numbers $\mathbb{R}$ replaced by complex numbers $\mathbb{C}$, however with some important caveats.
There is a bijective isometry from the complexified Minkowski space $(\mathbb{C}^{1,3},||\cdot||^2)$ to the space of $2\times2 $ matrices $({\rm Mat}_{2\times 2}(\mathbb{C}),\det(\cdot))$, $$\mathbb{C}^{1,3} ~\cong ~ {\rm Mat}_{2\times 2}(\mathbb{C}) ~=~ {\rm span}_{\mathbb{C}} \{\sigma_{\mu} \mid \mu=0,1,2,3\}, $$ $$ M(1,3;\mathbb{C})~\ni~\tilde{x}~=~(x^0,x^1,x^2,x^3) \quad\mapsto \quad\sigma~=~x^{\mu}\sigma_{\mu}~\in~ {\rm Mat}_{2\times 2}(\mathbb{C}) , $$ $$ ||\tilde{x}||^2 ~=~x^{\mu} \eta_{\mu\nu}x^{\nu} ~=~\det(\sigma).\tag{13}$$ Note that forms are taken to be bilinear rather than sesquilinear.
There is a surjective Lie group homomorphism$^3$
$$\rho: SL(2,\mathbb{C}) \times SL(2,\mathbb{C}) \quad\to\quad SO({\rm Mat}_{2\times 2}(\mathbb{C}),\mathbb{C})~\cong~ SO(1,3;\mathbb{C})\tag{14}$$ given by $$(g_L, g_R)\quad \mapsto\quad\rho(g_L, g_R)\sigma~:= ~g_L\sigma g^{\dagger}_R, $$ $$ g_L, g_R\in SL(2,\mathbb{C}),\qquad\sigma~\in~ {\rm Mat}_{2\times 2}(\mathbb{C}).\tag{15} $$
The Lie group $SL(2,\mathbb{C})\times SL(2,\mathbb{C})$ has Lie algebra $sl(2,\mathbb{C})\oplus sl(2,\mathbb{C})$.
The Lie group homomorphism
$$\rho: SL(2,\mathbb{C})\times SL(2,\mathbb{C}) \quad\to\quad SO({\rm Mat}_{2\times 2}(\mathbb{C}),\mathbb{C})\tag{16}$$ induces a Lie algebra homomorphism $$\rho: sl(2,\mathbb{C})\oplus sl(2,\mathbb{C})\quad\to\quad so({\rm Mat}_{2\times 2}(\mathbb{C}),\mathbb{C})\tag{17}$$ given by $$ \rho(\tau_L\oplus\tau_R)\sigma ~=~ \tau_L \sigma +\sigma \tau^{\dagger}_R, \qquad \tau_L,\tau_R\in sl(2,\mathbb{C}),\qquad \sigma\in {\rm Mat}_{2\times 2}(\mathbb{C}), $$ $$ \rho(\tau_L\oplus\tau_R) ~=~ L_{\tau_L} +R_{\tau^{\dagger}_R}.\tag{18}$$
The left action (acting from left on a two-dimensional complex column vector) yields by definition the (left-handed Weyl) spinor representation $(\frac{1}{2},0)$, while the right action (acting from right on a two-dimensional complex row vector) yields by definition the right-handed Weyl/complex conjugate spinor representation $(0,\frac{1}{2})$. The above shows that
The complexified Minkowski space $\mathbb{C}^{1,3}$ is a $(\frac{1}{2},\frac{1}{2})$ representation of the Lie group $SL(2,\mathbb{C}) \times SL(2,\mathbb{C})$, whose action respects the Minkowski metric.
Anthony Zee, Quantum Field Theory in a Nutshell, 1st edition, 2003.
Anthony Zee, Quantum Field Theory in a Nutshell, 2nd edition, 2010.
$^1$ It is easy to check that it is not possible to describe discrete Lorentz transformations, such as, e.g. parity $P$, time-reversal $T$, or $PT$ with a group element $g\in GL(2,\mathbb{C})$ and formula (2).
$^2$ For a laugh, check out the (in several ways) wrong second sentence on p.113 in Ref. 1: "The mathematically sophisticated say that the algebra $SO(3,1)$ is isomorphic to $SU(2)\otimes SU(2)$." The corrected statement would e.g. be "The mathematically sophisticated say that the group $SO(3,1;\mathbb{C})$ is locally isomorphic to $SL(2,\mathbb{C})\times SL(2,\mathbb{C})$." Nevertheless, let me rush to add that Zee's book is overall a very nice book. In Ref. 2, the above sentence is removed, and a subsection called "More on $SO(4)$, $SO(3,1)$, and $SO(2,2)$" is added on page 531-532.
$^3$ It is not possible to mimic an improper Lorentz transformation $\Lambda\in O(1,3;\mathbb{C})$ [i.e. with negative determinant $\det (\Lambda)=-1$] with the help of two matrices $g_L, g_R\in GL(2,\mathbb{C})$ in formula (15); such as, e.g., the spatial parity transformation $$P:~~(x^0,x^1,x^2,x^3) ~\mapsto~ (x^0,-x^1,-x^2,-x^3).\tag{19}$$ Similarly, the Weyl spinor representations are representations of (the double cover of) $SO(1,3;\mathbb{C})$ but not of (the double cover of) $O(1,3;\mathbb{C})$. E.g. the spatial parity transformation (19) intertwines the left-handed and right-handed Weyl spinor representations.
Qmechanic ♦
$\begingroup$ Note for later: If $\tilde{\rho}:G\to O(u(2),\mathbb{R})$ is an extension $\tilde{\rho}(g)\sigma:= g\sigma g^{\dagger}$ with $SL(2,\mathbb{C})\subseteq G \subseteq GL(2,\mathbb{C})$, there does not exist a $g\in G$ such that it reproduces the discrete Lorentz transformations, such as, e.g. $\tilde{\rho}(g)=PT:\sigma\mapsto -\sigma$, $\tilde{\rho}(g)=T$, or $\tilde{\rho}(g)=P$. $\endgroup$
$\begingroup$ Note for later: If $g=\begin{pmatrix} a & b \\ c &d\end{pmatrix}$ and $\sigma=\begin{pmatrix} z^+ & z^{\ast} \\ z & z^- \end{pmatrix}$ with $z^{\pm}=x^0\pm x^3$ and $z=x^1+ix^2$, then $g\sigma g^{\dagger} = \begin{pmatrix}az^+a^{\ast} + bza^{\ast} + az^{\ast}b^{\ast} + bz^-b^{\ast} & az^+c^{\ast} + bzc^{\ast} + az^{\ast}d^{\ast} + bz^-d^{\ast} \\ cz^+a^{\ast} + dza^{\ast} + cz^{\ast}b^{\ast} + dz^-b^{\ast} & cz^+c^{\ast} + dzc^{\ast} + cz^{\ast}d^{\ast} + dz^-d^{\ast} \end{pmatrix}$; $\quad T: z^{\pm}\mapsto -z^{\mp}$; $\endgroup$
$\begingroup$ A representation of a Lie group (algebra) is also a representation of any subgroup (subalgebra), respectively. $\endgroup$
$\begingroup$ I was pondering a question related to Issam's farther above: Namely, why are we considering the complexification of $so(1, 3)$ in the first place (in QFT, that is)? Just because any complex representation will also give us a real representation and considering complex instead of real representations makes things easier (as the representation theory of complex semisimple Lie algebras is well understood)? $\endgroup$
– balu
$\begingroup$ The short and pragmatic answer is that the complexified Lorentz group works, is useful, often gets the job done, and is tied to the analytic properties of QFT. When it doesn't work, we have to roll up our sleeves! $\endgroup$
For the problem at hand formulated in a precise manner, "Show that the $\left(\frac{1}{2},\frac{1}{2}\right)$ representation of the $\mbox{SL}(2,\mathbb{C})$ group is* the Lorentz 4-vector", the solution - which is not so apparent from Qmechanic's otherwise good post - should be exhibited by direct / brute-force computation. This is relatively easy, and I quote from my diploma/Bachelor's degree graduation paper (written in my native Romanian)
PART 1:
Let $\psi_{\alpha}$ be the components of a Weyl spinor wrt the canonical basis in a 2-dimensional vector space in which the fundamental $\left(\frac{1}{2},0\right)$ representation of $\mbox{SL}(2,\mathbb{C})$ "lives". Idem for $\bar{\chi}_{\dot{\alpha}}$ and the contragradient representation of the same group, $\left(0,\frac{1}{2}\right)$. Then, as an application of the Clebsch-Gordan theorem for $\mbox{SL}(2,\mathbb{C})$:
$\begin{equation} \psi _{\alpha }\otimes \overline{\chi }_{\stackrel{\bullet }{\alpha }}\equiv \psi _{\alpha }\overline{\chi }_{\stackrel{\bullet }{\alpha }}=\left[ \frac{1}{2}\psi ^{\beta }\left( \sigma ^{\mu }\right) _{\beta \stackrel{\bullet }{\beta }}\overline{\chi }^{\stackrel{\bullet }{\beta }}\right] \left( \sigma _{\mu }\right) _{\alpha \stackrel{\bullet }{\alpha }}\equiv V^{\mu}\left( \sigma _{\mu }\right) _{\alpha \stackrel{\bullet }{\alpha }}\text{.} \end{equation}$
$\left[ \frac{1}{2}\psi ^{\beta }\left( \sigma _{\mu }\right) _{\beta \stackrel{\bullet }{\beta }}\overline{\chi }^{\stackrel{\bullet }{\beta }}\right] \left( \sigma ^{\mu }\right) _{\alpha \stackrel{\bullet }{\alpha }}=\frac{1}{2}\left( \varepsilon ^{\beta \gamma }\psi _{\gamma }\right) \left( \sigma ^{\mu }\right) _{\beta \stackrel{\bullet }{\beta }}\left( \varepsilon ^{\stackrel{\bullet }{\beta }\stackrel{\bullet }{\gamma }}\overline{\chi }_{\stackrel{\bullet }{\gamma }}\right) \left( \sigma _{\mu }\right) _{\alpha \stackrel{\bullet }{\alpha }} \\ =-\frac{1}{2}\psi _{\gamma }\varepsilon ^{\beta \gamma }\varepsilon ^{\stackrel{\bullet }{\gamma }\stackrel{\bullet }{\beta }}\left( \sigma ^{\mu }\right) _{\beta \stackrel{\bullet }{\beta }}\overline{\chi }_{\stackrel{\bullet }{\gamma }}\left( \sigma _{\mu }\right) _{\alpha \stackrel{\bullet }{\alpha }} \\ =\frac{1}{2}\psi _{\gamma }\left[ \varepsilon ^{\gamma \beta }\varepsilon ^{\stackrel{\bullet }{\gamma }\stackrel{\bullet }{\beta }}\left( \sigma ^{\mu }\right) _{\beta \stackrel{\bullet }{\beta }}\right] \overline{\chi }_{\stackrel{\bullet }{\gamma }}\left( \sigma _{\mu }\right) _{\alpha \stackrel{\bullet }{\alpha }} \\ =\frac{1}{2}\psi _{\gamma }\overline{\chi }_{\stackrel{\bullet }{\gamma }}\left( \overline{\sigma }^{\mu }\right) ^{\stackrel{\bullet }{\gamma }\gamma }\left( \sigma _{\mu }\right) _{\alpha \stackrel{\bullet }{\alpha }} \\ =\psi _{\gamma }\overline{\chi }_{\stackrel{\bullet }{\gamma }}\delta _{\stackrel{\bullet }{\alpha }}^{\stackrel{\bullet }{\gamma }}\delta _{\alpha }^{\gamma }=\psi _{\alpha }\overline{\chi }_{\stackrel{\bullet }{\alpha }} $
This proof shows that the Pauli matrices can be seen as Clebsch-Gordan coefficients.
$V^{\mu}\left(\psi,\chi\right)$ defined above is a Lorentz 4-vector (i.e. they are components of a Lorentz 4-vector seen as a generic member of a vector space carrying the fundamental representation of the restricted Lorentz group $\mathfrak{Lor}(1,3)$).
$V'^{\mu}\equiv \left( \phi ^{\prime }\right) ^{\alpha }\left( \sigma ^{\mu }\right) _{\alpha \stackrel{\bullet }{\beta }}\left( \overline{\chi }^{\prime }\right) ^{\stackrel{\bullet }{\beta }}=-\left( \overline{\chi }^{\prime }\right) _{\stackrel{\bullet }{\alpha }}\left( \overline{\sigma }^{\mu }\right) ^{\stackrel{\bullet }{\alpha }\beta }\left( \phi ^{\prime }\right) _{\beta }=-\left( M^{*}\right) _{\stackrel{\bullet }{\alpha }}{}^{\stackrel{\bullet }{\beta }}\overline{\chi }_{\stackrel{\bullet }{\beta }}\left( \overline{\sigma }^{\mu }\right) ^{\stackrel{\bullet }{\alpha }\beta }M_{\beta }{}^{\gamma }\phi _{\gamma } \\ =-\overline{\chi }_{\stackrel{\bullet }{\beta }}\left( M^{\dagger }\right) ^{\stackrel{\bullet }{\beta }}{}_{\stackrel{\bullet }{\alpha }}\left( \overline{\sigma }^{\mu }\right) ^{\stackrel{\bullet }{\alpha }\beta }M_{\beta }{}^{\gamma }\phi _{\gamma } \\ =-\overline{\chi }_{\stackrel{\bullet }{\beta }}\delta _{\stackrel{\bullet }{\gamma }}^{\stackrel{\bullet }{\beta }}\left( M^{\dagger }\right) ^{\stackrel{\bullet }{\gamma }}{}_{\stackrel{\bullet }{\alpha }}\left( \overline{\sigma }^{\mu }\right) ^{\stackrel{\bullet }{\alpha }\beta }M_{\beta }{}^{\gamma }\delta _{\gamma }^{\zeta }\phi _{\zeta } \\ =-\frac{1}{2}\overline{\chi }_{\stackrel{\bullet }{\beta }}\left( \overline{\sigma }^{\nu }\right) ^{\stackrel{\bullet }{\beta }\zeta }\left( \sigma _{\nu }\right) _{\gamma \stackrel{\bullet }{\gamma }}\left( M^{\dagger }\right) ^{\stackrel{\bullet }{\gamma }}{}_{\stackrel{\bullet }{\alpha }}\left( \overline{\sigma }^{\mu }\right) ^{\stackrel{\bullet }{\alpha }\beta }M_{\beta }{}^{\gamma }\phi _{\zeta } \\ =-\frac{1}{2}\left[ \left( M^{\dagger }\right) ^{\stackrel{\bullet }{\gamma }}{}_{\stackrel{\bullet }{\alpha }}\left( \overline{\sigma }^{\mu }\right) ^{\stackrel{\bullet }{\alpha }\beta }M_{\beta }{}^{\gamma }\left( \sigma _{\nu }\right) _{\gamma \stackrel{\bullet }{\gamma }}\right] \left[ \overline{\chi }_{\stackrel{\bullet }{\beta }}\left( \overline{\sigma }^{\nu }\right) ^{\stackrel{\bullet }{\beta }\zeta }\phi _{\zeta }\right] \\ =-\frac{1}{2}Tr\left( M^{\dagger }\overline{\sigma }^{\mu }M\sigma _{\nu }\right) \left( \overline{\chi }\overline{\sigma }^{\nu }\phi \right) \\ =-\Lambda ^{\mu }{}_{\nu }\left( M\right) \left( \overline{\chi }\overline{\sigma }^{\nu }\phi \right) \\ =\Lambda ^{\mu }{}_{\nu }\left( M\right) \left( \phi \sigma ^{\nu }\overline{\chi }\right) \equiv \Lambda ^{\mu }{}_{\nu }\left( M\right) V^{\nu} $
*"is" = in the sense of group representation theory; it means that the carrier vector spaces of the two representations are isomorphic, which is the content of the lemma. Note to the reader: the proof of the theorem uses the fact that these "classical" spinors have Grassmann parity 1. This explains the appearance and disappearance of the "-" sign.
DanielC
Why is $\mathfrak{so}(3,1)_{\mathbb{C}}^\uparrow \cong \mathfrak{su}_\mathbb{C}(2) \oplus \mathfrak{su}_\mathbb{C}(2)$
Show the Lie algebra is the same for $SU(2) \times SU(2)$ and Lorentz group
Why study $SL(2,\mathbb{C})$ representations when the representations of $SU(2)\times SU(2)$ exhausts all irreps of $SO(3,1)$?
Is the Lie algebra of $SL(2,C)$ isomorphic to the Lie algebra of Lorentz group?
Why is the $S_{z} =0$ state forbidden for photons?
Why is there this relationship between quaternions and Pauli matrices?
Motivating Complexification of Lie Algebras?
What's the relationship between $SL(2,\mathbb{C})$, $SU(2)\times SU(2)$ and $SO(1,3)$?
$(\frac{1}{2},\frac{1}{2})$ representation of $SU(2)\otimes SU(2)$
Vector spaces for the irreducible representations of the Lorentz Group
Can the finite dimensional irreducible $(j_+,j_-)$ representations of the Lorentz group $SO(3,1)$ be unitary?
Proof that $(1/2,1/2)$ Lorentz group representation is a 4-vector
Relation between the Dirac Algebra and the Lorentz group
General Irreducible Representation of Lorentz Group
Are spinors representations of the Lorentz group or its associated algebra?
Journal of Software Engineering Research and Development
Similarity testing for role-based access control systems
Carlos Diego N. Damasceno (ORCID: orcid.org/0000-0001-8492-7484), Paulo C. Masiero & Adenilso Simao
Journal of Software Engineering Research and Development volume 6, Article number: 1 (2018)
Access control systems demand rigorous verification and validation approaches; otherwise, they can end up with security breaches. Finite state machine (FSM) based testing has been successfully applied to RBAC systems and yields effective, but very expensive, test suites. To deal with the cost of these test suites, test prioritization techniques can be applied to improve fault detection along test execution. Recent studies have shown that similarity functions can be very efficient at prioritizing test cases. This technique is named similarity testing and rests on the hypothesis that resembling test cases tend to have similar fault detection capabilities. Thus, there is little gain from executing similar test cases, and the fault detection ratio can be improved if test diversity increases.
In this paper, we propose a similarity testing approach for RBAC systems named RBAC similarity and compare it to simple dissimilarity and random prioritization. RBAC similarity combines the dissimilarity degree of pairs of test cases with their relevance to the RBAC policy under test to maximize test diversity and the coverage of its constraints.
Five RBAC policies and fifteen test suites were prioritized using each of the three test prioritization techniques and compared using the Average Percentage Faults Detected metric.
Our results showed that the combination of the dissimilarity degree to the relevance of a test case to RBAC policies in the RBAC similarity can be more effective than random prioritization and simple dissimilarity, by itself, in most of the cases.
The RBAC similarity criterion is suitable as a test prioritization criterion for test suites generated from finite state machine models specifying RBAC systems.
Access control is one of the major pillars of software security. It is responsible for ensuring that only intended users can access data and that only the permissions required to accomplish a task are granted (Ferraiolo et al. 2007). In this context, the Role-Based Access Control (RBAC) model has been established as one of the most significant access control paradigms. In RBAC, users receive privileges through role assignments and activate them during sessions (ANSI 2004). Despite its simplicity, mistakes can occur during development and lead to faults, or even security breaches. Therefore, software verification and validation become necessary.
Finite State Machine (FSM) has been widely used for model-based testing (MBT) of reactive systems (Broy et al. 2005). Previous investigations using random FSMs have shown that recent test generation methods (e.g., SPY (Simão et al. 2009)), compared to traditional methods (e.g., W (Chow 1978) and HSI (Petrenko and Bochmann 1995)), tend to rely on fewer and longer test cases, reducing the overall test cost without impacting test effectiveness (Endo and Simao 2013). In the RBAC domain, although very effective and less costly, recent test generation methods still tend to output large amounts of test cases (Damasceno et al. 2016). Thus, there is a need for additional steps during software testing, such as test prioritization (Mouelhi et al. 2015).
Test case prioritization aims at finding an ideal ordering of test cases so that maximum benefits can be obtained, even if test execution is prematurely halted at some arbitrary point (Yoo and Harman 2012). A test prioritization criterion that has recently shown very promising results is similarity testing (Cartaxo et al. 2011; Bertolino et al. 2015). In similarity testing, we assume that resembling test cases tend to cover identical parts of an SUT, have equivalent fault detection capabilities, and that no additional gain can be expected from executing both. This concept has been investigated in the MBT (Cartaxo et al. 2011), access control testing (Bertolino et al. 2015) and software product line (SPL) testing (Henard et al. 2014) domains, but it has never been applied to RBAC. Moreover, since the fault detection effectiveness of test criteria is strongly related to their ability to represent faults of specific domains (Felderer et al. 2015), similarity testing is not necessarily effective on the RBAC domain.
In this paper, we investigate similarity testing for RBAC systems. A similarity testing criterion named RBAC similarity is introduced and compared to random prioritization and simple dissimilarity criteria using the Average Percentage Faults Detected (APFD) metric, five RBAC policies, and three FSM-based testing methods. Our results show that RBAC similarity makes test prioritization more suitable to the specificities of the RBAC model and achieves higher APFD values compared to simple dissimilarity and random prioritization in most of the cases.
This paper is organized as follows: Section 2 shows the theoretical background related to our investigation. Sections 2.1 to 2.3 give a brief introduction to FSM-Based Testing. The RBAC model and an FSM-based testing approach for RBAC systems are introduced in Sections 2.4 and 2.5. The test case prioritization problem and similarity testing are discussed in Section 2.6. Section 3 details our proposed similarity testing criteria named RBAC similarity. Section 4 depicts the experiment we performed to compare RBAC similarity to simple dissimilarity and random prioritization techniques. The results obtained from our experiments are analyzed and discussed. The threats to validity and final remarks are presented in Sections 6 and 7, respectively.
This section introduces the background behind our similarity testing approach for RBAC systems. First, we present the concept of FSM-based testing and three test generation methods (i.e., W, HSI, and SPY) which were considered in this study. Second, the RBAC model and an FSM-based testing approach for RBAC systems are described. At last, the test case prioritization problem and the specificities of the similarity testing are detailed.
Finite state machine based testing
A Finite State Machine (FSM) is a hypothetical machine composed of states and transitions (Gill 1962). Formally, an FSM can be defined as a tuple M=(S,s0,I,O,D,δ,λ) where S is a finite set of states, s0∈S is the initial state, I is the set of input symbols, O is the set of output symbols, D⊆S×I is the specification domain, δ:D→S is the transition function, and λ:D→O is the output function. An FSM always has a single current (origin) state s i ∈S which changes to destination (tail) state s j ∈S by applying an input x∈I where s j =δ(s i ,x), and returns an output y=λ(s i ,x). An input x is defined for s if in state s there is a transition consuming input x (i.e. (s,x)∈D). Such transition is said defined. An FSM is complete if all inputs are defined for all states, otherwise it is partial. Figure 1 depicts an example of a complete FSM with three states {q0,q1,q2}.
Example of complete FSM
A sequence α=x1x2...x n ∈I is defined for state s∈S, if there are states s1,s2,...,sn+1 such that s=s1 and δ(s i ,x i )=si+1, for all 1≤i≤n. The concatenation of two sequences α and ω is denoted as αω. A sequence α is a prefix of a sequence β, denoted by α≤β, if β=αω, for some given input sequence ω. An empty sequence is denoted by ε and a sequence α is a proper prefix of β, denoted by α<β, if β=αω for a given ω≠ε. The set of prefix sequences of a set T is defined as pref(T)={α | ∃β∈T and α<β}, if T=pref(T), T is prefix-closed.
The transition and output functions can be lifted to input sequences as usual; for the empty sequence ε, we have that δ(s,ε)=s and λ(s,ε)=ε. For a sequence αx defined for state s, we have that δ(s,αx)=δ(δ(s,α),x) and λ(s,αx)=λ(s,α)λ(δ(s,α),x). A sequence α=x1x2...x n ∈I is a transfer sequence from s to sn+1 if δ(s,α)=sn+1, thus sn+1 is reachable from s. If every state of an FSM is reachable from s0 then it is initially connected and if every state is reachable from all states, it is strongly connected.
The symbol Ω(s) denotes all input sequences defined for a state s and Ω M abbreviates Ω(s0), which refers to all defined input sequences for an FSM M. A separating sequence for two states s i and s j is a sequence γ such that γ∈Ω(s i )∩Ω(s j ) and λ(s i ,γ)≠λ(s j ,γ). In addition, if γ is able to distinguish every pair of states of an FSM, it is a distinguishing sequence. Considering the FSM presented in Fig. 1, the sequence a is a separating sequence for states q0 and q1 since λ(q0,a)=0 and λ(q1,a)=1.
Two FSMs M S =(S,s0,I,O,D,δ,λ) and M I =(S′,s0′,I,O′,D′,δ′,λ′) are equivalent if their initial states are. Two states s i ,s j are equivalent if ∀ α∈Ω(s i )∩Ω(s j ), λ(s i ,α)=λ′(s j ,α). An FSM M may have a reset operation, denoted by r, which takes the machine to s0 regardless of the current state. An input sequence α∈Ω M starting with a reset symbol r is a test case of M. A test suite T consists of a finite set of test cases of M, such that there are no α,β∈T where α<β. Prefixes α<β are excluded from the test suite since the execution of β implies the execution of α. The length of a test case α is represented by |α| and describes the cost of executing α plus the reset operation. The number of test cases of a test suite T also corresponds to the number of resets of T, which is denoted by |T|.
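To make the notation concrete, the following sketch encodes a complete FSM as two Python dictionaries and lifts δ and λ to input sequences. The transition table is hypothetical: the text only fixes λ(q0,a)=0 and λ(q1,a)=1 for the separating-sequence example, so the remaining entries are illustrative rather than the actual machine of Fig. 1.

```python
# A complete FSM over states {q0, q1, q2} and inputs {a, b}. The concrete
# table is illustrative: the text only fixes lambda(q0,a)=0 and lambda(q1,a)=1.
delta = {('q0', 'a'): 'q1', ('q0', 'b'): 'q2',
         ('q1', 'a'): 'q2', ('q1', 'b'): 'q0',
         ('q2', 'a'): 'q0', ('q2', 'b'): 'q1'}
lamb = {('q0', 'a'): 0, ('q0', 'b'): 1,
        ('q1', 'a'): 1, ('q1', 'b'): 0,
        ('q2', 'a'): 0, ('q2', 'b'): 1}
s0 = 'q0'

def run(delta, lamb, state, seq):
    """Lift delta/lambda to an input sequence; return (end state, output word)."""
    out = []
    for x in seq:
        out.append(lamb[(state, x)])
        state = delta[(state, x)]
    return state, out

# a test case is an input sequence executed after a reset to s0
print(run(delta, lamb, s0, 'aaa'))         # ('q0', [0, 1, 0]) for this table

# 'a' is a separating sequence for q0 and q1: their outputs differ
assert run(delta, lamb, 'q0', 'a')[1] != run(delta, lamb, 'q1', 'a')[1]
```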
Mutation analysis in FSM-based testing
In FSM-based testing, given a specification M, the symbol I(M) denotes the set of all deterministic FSMs, variants of M, with the same inputs as M for which all sequences in Ω M are defined. The set I(M) is called the fault domain for M; these variants of M are named mutants and can be obtained either manually or by automatically performing simple syntactic changes using mutation operators (Andrews et al. 2006). Given m≥1, I m (M) denotes all FSMs of I(M) with at most m states. Given a specification M with n states, a test suite T⊆Ω M is m-complete if for each N∈I m distinguishable from M, there is a test case t∈T that distinguishes M from N. The following mutation operators are often used in FSM-based testing (Chow 1978): change initial state (CIS), which changes the s0 of an FSM to s k , such that s0≠s k ; change output (CO), which modifies the output of a transition (s,x), using a different function Λ(s,x) instead of λ(s,x); change tail state (CTS), which modifies the destination state of a transition (s,x), using a different function Δ(s,x) instead of δ(s,x); and add extra state (AES), which inserts a new state such that mutant N is equivalent to M. Figure 2 shows examples of mutants of the FSM shown in Fig. 1 using CIS, CO, CTS, and AES operators. Changes are marked with an asterisk (*).
Examples of FSM Mutants. a FSM mutant - CIS. b FSM mutant - CO. c FSM mutant - CTS. d FSM mutant - AES
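Continuing the illustrative encoding above (and reusing its run, delta, and lamb), the CO and CTS operators amount to rewriting a single entry of the output or transition table; the helper names below are ours, not from the literature.

```python
import copy

def change_output(lamb, state, x):
    """CO operator: flip the (binary) output of transition (state, x)."""
    m = copy.deepcopy(lamb)
    m[(state, x)] = 1 - m[(state, x)]
    return m

def change_tail_state(delta, state, x, new_tail):
    """CTS operator: redirect transition (state, x) to a different tail state."""
    m = copy.deepcopy(delta)
    assert new_tail != m[(state, x)]
    m[(state, x)] = new_tail
    return m

# a mutant is killed by any test case whose output word differs from the spec's
lamb_mut = change_output(lamb, 'q0', 'a')
spec_out = run(delta, lamb, s0, 'a')[1]
mut_out = run(delta, lamb_mut, s0, 'a')[1]
print('mutant killed by test "a":', spec_out != mut_out)   # True
```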
If the output of a mutant differs from that of the original FSM for some test case, the mutant is distinguished (or killed) and the seeded fault denoted by the mutant is detected. Moreover, some mutants can be syntactically different but functionally equivalent to the original model. These are called equivalent mutants. The process of analyzing whether test cases trigger failures and kill mutants is called mutation analysis and is often used in software testing research (Jia and Harman 2011; Fabbri et al. 1994).
The main outcome of the mutation analysis is the mutation score, which indicates the effectiveness of a test suite. Given a test suite T, the mutation score (or effectiveness) can be calculated using the equation \(T_{\text {eff}}=\tfrac {\#km}{(\#tm-\#em)}\). The #km parameter represents the number of killed mutants; the #tm defines the total number of generated mutants; and #em denotes the number of mutants equivalent to the original SUT. Thus, the mutation score consists of the ratio of the number of detected faults over the total number of non-equivalent mutants. An m-complete test suite has full fault coverage for a given domain I m (M) and can detect all faults in any FSM implementation with at most m states. Thus, it scores 1.0, by definition.
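The mutation score itself is a direct ratio; a minimal sketch (parameter names are ours):

```python
def mutation_score(killed, total, equivalent):
    """T_eff = #km / (#tm - #em): killed mutants over non-equivalent mutants."""
    return killed / (total - equivalent)

# e.g. 45 mutants killed out of 50 generated, 2 of which are equivalent
print(mutation_score(killed=45, total=50, equivalent=2))   # 0.9375
```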
FSM-based testing methods
FSM-based testing relies on FSM models to derive test cases and evaluate if the behavior of an SUT conforms to its specification (Utting et al. 2012). To check this behavioral conformance, two basic sets of sequences are often used: the state cover (Q) and transition cover (P) sets (Broy et al. 2005).
A set of input sequences Q is a state cover set of M if for each state s i ∈S there exists an α∈Q such that δ(s0,α)=s i and ε∈Q to reach the initial state. A set of input sequences P is named a transition cover set of M if for each transition (s,x)∈D there are sequences α,αx∈P, such that δ(s0,α)=s, and ε∈P to reach the initial state. The transition cover set of an FSM is obtained by generating the testing tree of this FSM (Broy et al. 2005). The state and transition cover sets of the FSM depicted in Fig. 1 are respectively Q={ε,a,b} and P={ε,a,aa,ba,b,ab,bb}. After obtaining state and transition coverage, FSM-based testing methods require some pre-defined sets to identify the reached parts of an FSM. These are the characterization set and separating families.
A characterization set (W set) contains at least one input sequence which distinguishes each pair of states of an FSM. Formally, it means that for all pairs of states s i ,s j ∈S,i≠j, ∃α∈W such that λ(s i ,α)≠λ(s j ,α).
A separating family, or harmonized state identifiers, is a set of sequences H i for each state s i ∈S that satisfies the condition ∀s i ,s j ∈S,s i ≠s j ∃β∈H i ,γ∈H j that has a common prefix α such that α∈Ω(s i )∩Ω(s j ) and λ(s i ,α)≠λ(s j ,α). In the worst case, the separating family is the W set itself.
The characterization set of the FSM model shown in Fig. 1 is W={a,b}, and the separating family of states q0,q1,q2 are respectively H0={a,b}, H1={a}, and H2={b}. These sets are building blocks for most traditional and recent testing methods, such as W (Chow 1978; Vasilevskii 1973), HSI (Petrenko and Bochmann 1995), and SPY (Simão et al. 2009).
W method
The W method is the most classic FSM-based test generation algorithm (Chow 1978; Vasilevskii 1973). It uses the P set, to traverse all transitions, concatenated to the W set, for state identification. Moreover, it can also detect an estimated number of extra states using a traversal set \(\bigcup ^{m-n}_{i=0}(I^{i})\), such that (m−n) is the number of extra states and Ii contains all sequences of length i combining symbols of I. Thus, by concatenating P, the traversal set, and W, the W method can detect (m−n) extra states (e.g., AES mutants). Assuming the FSM in Fig. 1, no extra states (m=n) or proper prefixes, W method can generate T W ={aaa,aab,aba,abb,baa,bab,bba,bbb}, and |T W |=8.
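For the FSM of Fig. 1 with m=n, the W-method test suite reduces to the transition cover P concatenated with W, with proper prefixes removed. The short sketch below reproduces T W from the P and W sets given above (ε is written as the empty string):

```python
P = ['', 'a', 'aa', 'ba', 'b', 'ab', 'bb']   # transition cover (epsilon as '')
W = ['a', 'b']                               # characterization set

candidates = {p + w for p in P for w in W}   # P . W, i.e. no extra states (m = n)
# drop proper prefixes: executing a longer test already executes its prefixes
T_W = sorted(t for t in candidates
             if not any(t != u and u.startswith(t) for u in candidates))
print(T_W)   # ['aaa', 'aab', 'aba', 'abb', 'baa', 'bab', 'bba', 'bbb']
```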
HSI method
The Harmonized State Identifiers (HSI) method (Petrenko and Bochmann 1995) uses state identifiers H i to distinguish each state s i ∈S of an FSM model. The HSI test suite is obtained by concatenating the transition cover set P with H i , such that δ(s0,α)=s i , s i ∈S and α∈P. The HSI method can be applied to complete and partial FSMs. Assuming the FSM in Fig. 1, no extra states or proper prefixes, HSI method can generate T H S I ={aaa,aba,abb,baa,bba,bbb}, and |T H S I |=6, which is 75% the size of T W .
SPY method
The SPY method (Simão et al. 2009) is a recent test generation method able to generate m-complete test suites on-the-fly. First, the state cover set Q is concatenated to the state identifiers H i . Afterwards, differently from traditional methods, such as W and HSI, the traversal set is distributed over the set containing Q concatenated with H i based on sufficient conditions (Simão et al. 2009). Thus, by avoiding testing tree branching, test suite length and the number of resets can be reduced.
Experimental studies have indicated that SPY can generate test suites on average 40% shorter than traditional methods (Simão et al. 2009). Moreover, it can achieve higher fault detection effectiveness even if the number of extra states is underestimated (Endo and Simao 2013). Assuming the FSM in Fig. 1, no extra states or proper prefixes, SPY method can generate T S P Y ={aaaba,abbb,baa,bba}, and |T S P Y |=4, which is 50% the size of T W .
Access Control (AC) is one of the most important security mechanisms (Jang-Jaccard and Nepal 2014). Essentially, it ensures that only allowed users have access to protected system resources based on a set of rules, named security policies, that specify authorizations and access restrictions (Samarati and de Vimercati 2001). In this context, the Role-Based Access Control (RBAC) model has been established as one of the most significant access control paradigms (Ferraiolo et al. 2007). It uses the concept of grouping privileges to reduce the complexity of security management tasks (Samarati and de Vimercati 2001).
In RBAC, roles describe organizational figures (e.g., functions or jobs) which own a set of responsibilities (e.g., permissions). Roles can be assigned or revocated to users via role assignments and performed under sessions through role activations. Role hierarchies can be specified as inheritance relationships between senior and junior roles (e.g., sales director inherits permissions from sales manager). Thus, the mapping between security policies and the organizational structure can be more natural. These elements compose the ANSI RBAC model (ANSI 2004) which can also be extended to groups of administrative roles and permissions (Ben Fadhel et al. 2015). In Fig. 3, the ANSI RBAC and, within dashed lines, the Administrative RBAC models are depicted.
ANSI RBAC and administrative RBAC
Masood et al. (2009) define an RBAC policy as a 16-tuple P=(U,R,Pr,UR,PR,≤ A ,≤ I ,I,S u ,D u ,S r ,D r ,SSoD,DSoD,S s ,D s ), where:
U and R are the finite sets of users and roles;
Pr is the finite set of permissions;
UR⊆U×R is the set of user-role assignments;
PR⊆Pr×R is the set of permission-role assignments;
≤ A ⊆R×R and ≤ I ⊆R×R are the role activation and inheritance hierarchies relationships;
I={AS,DS,AC,DC,AP,DP} is the finite set of types of RBAC requests, which respectively stand for user-role assignments (AS), deassignments (DS), activations (AC) and deactivations (DC), and permission-role assignments (AP) and deassignments (DP);
\(S_{u},D_{u}: U \rightarrow \mathbb {Z}^{+}\) are static and dynamic cardinality constraints on users;
\(S_{r},D_{r}: R \rightarrow \mathbb {Z}^{+}\) are static and dynamic cardinality constraints on roles;
SSoD,DSoD⊆2R are the Static and Dynamic Separation of Duty (SoD) sets, respectively;
\(S_{s}: SSoD \rightarrow \mathbb {Z}^{+}\) specifies the cardinality of SSoD sets;
\(D_{s}: DSoD \rightarrow \mathbb {Z}^{+}\) specifies the cardinality of DSoD sets.
Role inheritance hierarchy is a role-to-role relationship (e.g., r j ≤ I r s ) that enables users assigned to a senior role (r s ) to have access to all permissions of junior roles (r j ). Role activation is a variant of role hierarchy (e.g., r j ≤ A r s ) which enables users assigned to a senior role (r s ) to activate junior roles (r j ) without being directly assigned to that junior role (Masood et al. 2009). Cardinality constraints specify a bound on the cardinality of user-role assignment and role activation relationships (Ben Fadhel et al. 2015). Static cardinality constraints (S u and S r ) bound user-role assignments and dynamic cardinality constraints (D u and D r ) limit user-role activations (i.e., role activations), and they can be specified from a user (S u and D u , respectively) and a role (S r and D r , respectively) perspective. Separation of Duty (SoD) constraints define static and dynamic (SSoD and DSoD, respectively) mutual exclusion relationships among roles, based on a positive integer n≥2, to avoid the simultaneous assignment or activation of conflicting roles (ANSI 2004) (e.g., given SSoD={staff, accountant, director} and n=2, S s (SSoD)=2 defines that no user can be assigned to more than two roles of the SSoD set). Listing 1 shows an example of an RBAC policy with two users (line 1), one role (line 2), and two permissions (line 3).
User u1 is assigned to role r1 (line 4) that is assigned to the permissions pr1 and pr2 (line 5). Both users can be assigned and activate at most one role (line 6-7). Role r1 can be assigned to at most two users (line 8); however, it can be activated by one user per time (line 9).
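As a concrete illustration, the policy of Listing 1 can be written down as a plain data structure; the field names below are ours and mirror only the elements of the 16-tuple that appear in the listing:

```python
policy = {
    'users':       ['u1', 'u2'],
    'roles':       ['r1'],
    'permissions': ['pr1', 'pr2'],
    'UR': [('u1', 'r1')],                    # initial user-role assignment
    'PR': [('pr1', 'r1'), ('pr2', 'r1')],    # permission-role assignments
    'Su': {'u1': 1, 'u2': 1},   # each user may be assigned at most one role
    'Du': {'u1': 1, 'u2': 1},   # each user may activate at most one role
    'Sr': {'r1': 2},            # r1 may be assigned to at most two users
    'Dr': {'r1': 1},            # r1 may be active for at most one user at a time
    'SSoD': [], 'DSoD': [],     # no separation-of-duty sets in this example
}
```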
FSM-based testing of RBAC systems
Masood et al. (2009) propose an approach based on FSMs to specify and test the behavior of RBAC systems. Given an RBAC policy P, an FSM(P) consists of a complete FSM modeling all access control decisions that an RBAC mechanism must enforce. Formally, an FSM(P) is a tuple FSM(P)=(S P ,s0,I P ,O,D,δ P ,λ P ) where
S P is the set of states that P reach given its mutable elements;
s0∈S is the initial state where P currently stands given UR and PR;
I P is the input domain, where I P ={(rq,up,r)} for all rq∈I, up∈{U∪Pr} and r∈R;
O is the output domain formed by granted and denied;
D=S P ×I P is the specification domain;
δ P :D→S P is the state transition function; and
λ P :D→O is the output function.
Each state s∈S P is labeled using a sequence of pairs of bits containing one pair for each combination of user-role and permission-role. A user-role pair can be assigned (10), activated (11) or not assigned (00); and a permission-role pair can be assigned (10) or not assigned (00). The maximum number of states of FSM(P) is bounded by 3^{|U|×|R|}, and the number of reachable states depends on the constraints of P. The set of input symbols I P contains all combinations of users, roles, permissions and types of RBAC requests which can be applied to P. Formally, it means that I P ={(rq,up,r)} ∀ rq∈I, up∈{U∪Pr} and r∈R.
Transitions of FSM(P) denote access control decisions: given the specification domain, which is complete (Masood et al. 2009) and composed of pairs of an origin state (s i ∈S P ) and an input symbol (rq,up,r)∈I P , together with the constraints of P, each transition determines a destination state (s j ∈S P ) and an output symbol (granted or denied). Given the constraints of P, an origin state s i and an input symbol (rq,up,r), the destination state s j =δ P (s i ,(rq,up,r)) is obtained by flipping the bits of the s i label related to the user (or permission) up and role r, if the constraints of P allow such a request. This procedure defines how the state transition function δ P operates.
Regarding the output function λ P , a denied symbol is returned to inputs (requests) which do not change the state of P, such as user-role assignments already performed or requests denied due to some cardinality constraint. Thus, denied is only returned on self-loops. Transitions with different origin and destination states always return granted. The generation of an FSM(P) can be iteratively performed by evaluating all defined inputs of state s0 given the constraints of P (ΩFSM(P)).
Figure 4 shows the FSM(P) of the RBAC policy presented in Listing 1. Self-loop transitions, corresponding to requests returning denied, and transitions related to permissions are not shown to keep the figure uncluttered. The initial state 1000 depicts line 4 of Listing 1, where u1 is assigned to r1. From state 1000 all defined inputs are applied once to reach states 1100, 1010 and 0000, where respectively user u1 activates r1, u1 and u2 are assigned to r1, and none is assigned to r1. This procedure is iteratively repeated over all reached states until no new state is obtained. In the end, the resulting FSM(P) has a total of eight states, rather than the maximum of 9=3^{|U|×|R|} states, because Dr(r1)=1 makes state 1111 unreachable.
Example of FSM(P) specifying an RBAC policy
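The iterative generation of FSM(P) can be sketched as a reachability search that starts from s0 and applies every defined input to each newly discovered state. The sketch below reuses the policy dictionary from the previous listing, ignores permission-role requests and role hierarchies (as Fig. 4 does), and checks only the cardinality constraints, so it is an approximation of the full construction rather than the algorithm of Masood et al.:

```python
from itertools import product

def apply(policy, state, req):
    """Return the successor state for one request, or None if it is denied.
    state maps (user, role) -> '00' (unassigned), '10' (assigned), '11' (active)."""
    kind, u, r = req
    cur = state[(u, r)]
    nxt = dict(state)
    if kind == 'AS' and cur == '00':
        if sum(v != '00' for (uu, _), v in state.items() if uu == u) < policy['Su'][u] \
           and sum(v != '00' for (_, rr), v in state.items() if rr == r) < policy['Sr'][r]:
            nxt[(u, r)] = '10'; return nxt
    elif kind == 'DS' and cur != '00':
        nxt[(u, r)] = '00'; return nxt
    elif kind == 'AC' and cur == '10':
        if sum(v == '11' for (uu, _), v in state.items() if uu == u) < policy['Du'][u] \
           and sum(v == '11' for (_, rr), v in state.items() if rr == r) < policy['Dr'][r]:
            nxt[(u, r)] = '11'; return nxt
    elif kind == 'DC' and cur == '11':
        nxt[(u, r)] = '10'; return nxt
    return None                                    # denied: a self-loop in FSM(P)

def reachable_states(policy):
    pairs = list(product(policy['users'], policy['roles']))
    s0 = {p: '10' if p in policy['UR'] else '00' for p in pairs}
    inputs = [(k, u, r) for k in ('AS', 'DS', 'AC', 'DC') for u, r in pairs]
    seen, frontier = {tuple(sorted(s0.items()))}, [s0]
    while frontier:
        s = frontier.pop()
        for req in inputs:
            t = apply(policy, s, req)
            if t and tuple(sorted(t.items())) not in seen:
                seen.add(tuple(sorted(t.items()))); frontier.append(t)
    return seen

print(len(reachable_states(policy)))   # 8 for the policy of Listing 1
```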
Test generation from FSM(P)
Given an RBAC system implementing a policy P, FSM-based testing can verify if the behavior of such system conforms to P using its respective FSM(P) and some test generation method, such as W or transition cover (Masood et al. 2009).
Let \(\mathcal {R}\) denote the set of all RBAC policies. Given a policy \(P \in \mathcal {R}\), the set \(\mathcal {R}\) can be partitioned into two subsets of policies: Equivalent (conforming) to P (\(\mathcal {R}^{P}_{conf}\)); and Faulty policies (\(\mathcal {R}^{P}_{fault}\)). Since \(\mathcal {R}\) is infinitely large, Masood et al. (2009) proposed a mutation analysis technique to measure the effectiveness of a test suite as its ability to detect if an RBAC system behaves as some faulty policy \(P' \in \mathcal {R}^{P}_{fault}\).
The RBAC mutation analysis restricts \( \mathcal {R}^{P}_{fault}\) to be finite by only considering policies mutants P′=(U,R,Pr,UR′,PR′,≤A′,≤I′,I,Su′,Du′,Sr′,Dr′,SSoD′,DSoD′,Ss′,Ds′) generated by making simple changes to policy P=(U,R,Pr,UR,PR,≤ A ,≤ I ,I,S u ,D u ,S r ,D r ,SSoD,DSoD,S s ,D s ). Note that all mutants share the same set of users (U), roles (R), permissions (Pr) and inputs (I) of the original policy P. The set \( \mathcal {R}^{P}_{fault}\) of faulty policies is generated by making changes using two kinds of operators: mutation operators and element modification operators.
The mutation operators generate RBAC mutants by adding, modifying and removing elements from UR, PR, ≤ A , ≤ I , SSoD, and DSoD sets (e.g. add role to SSoD set). The element modification operators mutate policies by incrementing or decrementing the cardinality constraints S u ,D u ,S r ,D r ,S s , and D s . Each of these RBAC faults has corresponding faults on the FSM domain (Chow 1978), and FSM-based testing methods are also able to detect them (Masood et al. 2009). Figure 5 illustrates a part of one testing tree generated from four test cases and the FSM(P) in Fig. 4.
Testing Tree of an FSM(P)
By executing this test suite, an RBAC mutant generated from the policy shown in Listing 1 by applying the element modification operator to increment Dr(r1)=1 to Dr(r1)=2 can be detected. The FSM of this variant has state 1111 as reachable and, since test case t3 covers the transition 1110−AC(u2,r1)→1110, it can detect this fault.
Test case prioritization
Although very effective, FSM-based testing of RBAC systems tends to generate a large number of test cases regardless the methods used (Damasceno et al. 2016). Thus, development processes of RBAC systems with time and resources constraints may demand improvements on test execution. To cope with this issue, different techniques have been proposed to improve cost-effectiveness of test suites, such as Test Suite Minimization, also called test suite reduction, where redundant test cases are permanently removed; and Test Case Selection, which selects test cases based on changed parts of a System Under Test (SUT) (Yoo and Harman 2012). These techniques reduce time effort, but they may not work effectively, since they may also omit important test cases able to detect certain faults (Ouriques 2015).
Test Case Prioritization improves test execution without filtering out any test case. It aims at identifying an efficient test execution ordering so that maximum benefits can be obtained, even if test execution is prematurely halted at some arbitrary point (Ouriques 2015). To this end, it uses a function f which quantitatively describes the quality of an ordering according to some test criterion (e.g., test effectiveness, code coverage). To illustrate test prioritization, consider a hypothetical SUT with 10 faults and five test cases A,B,C,D,E, as shown in Table 1.
Table 1 Example of test cases with fault-detection capability, taken from Elbaum et al. (2000)
In this example, all faults can be detected by running test cases C and E, since they respectively have 70% and 30% of fault-detection effectiveness. Test case A, on the other hand, can detect only 20% of the faults so it can negatively affect fault detection along test execution if placed at the beginning of a test suite. Thus, it is possible to speed up fault detection during test cases execution by placing C and E at the beginning of the test suite.
After test prioritization, the quality of a ordering can be measured using the Average Percentage Faults Detected (APFD) metric. The APFD is a metric commonly used in test prioritization research (Elbaum et al. 2002), and it is defined as follows:
$$ \text{APFD} = \frac{\sum_{i=1}^{n-1} F_{i}}{n \times l} +\frac{1}{2n} $$
In Eq. 1, the parameter n describes the total number of test cases, l defines the number of faults under consideration, and F i specifies the cumulative number of faults detected by the first i test cases in the ordering. The APFD value depicts the detection of faults (i.e., test effectiveness) along test execution given the test case ordering. This value ranges from 0 to 1 and the greater the APFD is, the better the test case ordering. Table 2 shows the APFD for three prioritized test suites, T1, T2 and T3, obtained from the test cases in Table 1. In this example, the APFD indicates that T3 performs better than T2 and T1.
Table 2 APFD value for the test cases example
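Eq. 1 translates directly into code once each prefix of the ordering is mapped to the set of faults it detects; the sketch below uses made-up fault sets for B and D (Table 1 is not reproduced here), keeping C at 7 faults, E at 3 and A at 2, consistent with the description above:

```python
def apfd(ordering, n_faults):
    """Eq. 1: ordering is a list of fault sets, one per test case, in run order."""
    n = len(ordering)
    detected, cumulative = set(), []
    for faults_of_test in ordering:        # F_i = faults found by the first i tests
        detected |= faults_of_test
        cumulative.append(len(detected))
    return sum(cumulative[:-1]) / (n * n_faults) + 1 / (2 * n)

# C detects 7 of the 10 faults, E the remaining 3, A detects 2;
# the fault sets of B and D are made up, since Table 1 is not reproduced here
faults = {'A': {1, 2}, 'B': {3}, 'C': {1, 2, 3, 4, 5, 6, 7},
          'D': {5}, 'E': {8, 9, 10}}
order = [faults[t] for t in ('C', 'E', 'A', 'B', 'D')]
print(round(apfd(order, n_faults=10), 2))   # 0.84: most faults are found early
```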
Similarity testing
Similarity testing is a promising test case prioritization approach that uses similarity functions to calculate the degree of similarity between pairs of tests and define the test ordering (Cartaxo et al. 2011; Bertolino et al. 2015; Coutinho et al. 2014). It is an all-to-all comparison problem (Zhang et al. 2017) and, as most test prioritization algorithms (Elbaum et al. 2002), it has complexity O(n^2). It assumes that resembling test cases are redundant in the sense that they cover the same features of an SUT and tend to have equivalent fault detection capabilities (Bertolino et al. 2015).
To run similarity testing, a similarity matrix describing the resemblance between all pairs of test cases of a test suite T must be calculated with a similarity function d x . The similarity matrix SM of a test suite T with n test cases is a matrix where each element SM ij =d x (t i ,t j ) describes the similarity degree between two test cases t i and t j , such that 1≤i<j≤n. In Eq. 2 an illustrative example of similarity matrix is presented.
$$ \begin{aligned} &\qquad\qquad\ \ \ t_{1} \quad \ \ t_{2} \qquad\quad\ \ \ \cdots \quad\ \ t_{n-1} \qquad\qquad\ \ t_{n}\\ SM &= \begin{array}{c} t_{1}\\ t_{2}\\ \vdots\\ t_{n-1}\\ t_{n} \end{array} \left[ \begin{array}{lllll} 0 & \quad d_{x}(t_1,t_2) & \quad\cdots & \quad d_{x}(t_1,t_{n-1}) & \quad d_{x}(t_1,t_{n}) \\ 0 & \quad 0 & \quad & \quad & \quad d_{x}(t_2,t_{n}) \\ \vdots & \quad \vdots & \quad \ddots & \quad \vdots & \quad \vdots \\ & & \quad & \quad 0 & \quad d_{x}(t_{n-1},t_{n}) \\ & \quad 0 & \quad \cdots & \quad 0 & \quad 0 \end{array} \right] \end{aligned} $$
After calculating the similarity matrix, test ordering is defined based on similarity degrees (Cartaxo et al. 2011; Bertolino et al. 2015; Henard et al. 2014; Coutinho et al. 2014). According to Elbaum et al. (2002), the ordering process can use total or additional information. Test prioritization based on total information uses only pairwise similarity for ordering test cases, whereas additional information includes the similarity of previously executed test cases to improve ordering (i.e., the most distinct test case compared to all previous).
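The two steps can be sketched as follows (function names are ours): building the upper-triangular matrix of Eq. 2 for a given pairwise function d x , and a greedy ordering in the spirit of the "additional" strategy, which repeatedly schedules the test case most dissimilar from everything already scheduled:

```python
def similarity_matrix(tests, d):
    """SM[i][j] = d(t_i, t_j) for i < j (upper triangle, as in Eq. 2)."""
    n = len(tests)
    return [[d(tests[i], tests[j]) if i < j else 0.0 for j in range(n)]
            for i in range(n)]

def prioritize(tests, dissim):
    """Greedy ordering using 'additional' information: always schedule next the
    test case that is most dissimilar from everything already scheduled."""
    remaining = list(tests)
    ordered = [remaining.pop(0)]             # arbitrary starting point
    while remaining:
        best = max(remaining, key=lambda t: min(dissim(t, s) for s in ordered))
        remaining.remove(best)
        ordered.append(best)
    return ordered

# toy usage: dissimilarity = fraction of positions at which two strings differ
d = lambda a, b: sum(x != y for x, y in zip(a, b)) / max(len(a), len(b))
print(similarity_matrix(['aaa', 'aab'], d))          # [[0.0, 0.333...], [0.0, 0.0]]
print(prioritize(['aaa', 'aab', 'bbb', 'abb'], d))   # ['aaa', 'bbb', 'aab', 'abb']
```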
Cartaxo et al. (2011) showed that similarity testing can be more effective than random prioritization when applied to test sequences automatically generated from Labelled Transition Systems (LTS) (Cartaxo et al. 2011). In their study, the similarity degree (d s d ) between two test cases was calculated as the number of identical transitions (nit) divided by the average test case length. The average length was used to avoid small (large) similarity degrees due to similar short (long) test sequences. An extensive investigation on similarity testing for LTS is found in (Coutinho et al. 2014).
Bertolino et al. (2015) also investigated the application of similarity testing on XACML systems. XACML is an XML-based declarative notation for specifying access control policies and evaluating access requests (OASIS 2013). Essentially, they proposed a test prioritization approach named XACML similarity (d x s ) which considers three values for test prioritization: (i) a simple similarity (d s s ), which describes how much resembling are two test cases (t i ,t j ) based on their lexical distance; (ii) an applicability degree (AppValue), which points the percentage of parts of an XACML policy affected by a test case; and (iii) a priority value (PriorityValue) which gives weight to pairs of test cases based on their applicability degree. Although investigations have shown that simple similarity d s s is comparable to random prioritization, XACML similarity enabled significant improvements compared to simple similarity and random prioritization.
It should be noticed that the XACML standard can be used to specify and implement RBAC policies (OASIS 2014). However, its current version (OASIS 2014) does not support the specification of SSoD and DSoD constraints. Moreover, since the effectiveness of test criteria is strongly related to its ability to represent specific domain faults (Felderer et al. 2015), there is no guarantee that similarity testing can be as effective on RBAC as they were on XACML and LTS.
Similarity testing for RBAC systems
In this section, we introduce our similarity testing approach specific to RBAC systems, named RBAC similarity. The RBAC similarity consists of a similarity testing approach based on Cartaxo et al. (2011) and Bertolino et al. (2015) approaches and suitable for FSM-based testing of RBAC systems. A prioritization algorithm used to perform ordering test cases based on similarity criteria is also discussed.
RBAC similarity
In XACML similarity, applicability is the relation between an access request and an XACML policy which quantitatively describes the impact of this request (i.e., test case) to the rules of the policy (Bertolino et al. 2015). In our work we extend the concept of XACML applicability to the RBAC domain and propose the RBAC similarity, a similarity testing approach specific to RBAC systems.
Essentially, the RBAC similarity (d r s ) takes an RBAC policy P and a test suite T generated from an FSM(P) and evaluates the degree of resemblance between all pairs of test cases t i ,t j ∈T. To this end, it uses a dissimilarity function and the applicability of each pair of test cases to the policy P under test. Given this information, a test case prioritization algorithm orders the tests from the most distinct and relevant ones to the least diverse and suitable ones. To support similarity testing for RBAC, we propose the concept of RBAC applicability, which quantitatively describes the relevance of a test case to one RBAC policy. The dissimilarity function and the RBAC applicability are detailed in the following sections.
Simple dissimilarity:
The simple dissimilarity between test cases is measured based on the number of distinct transitions (ndt). Given two test cases t i and t j , the degree of simple dissimilarity (d s d ) is calculated as presented in Eq. 3.
$$ d_{sd}(t_{i},t_{j})=\frac{ndt(t_{i},t_{j})}{avg(length(t_{i})+length(t_{j}))} $$
The number of distinct transitions (ndt) between two test cases (t i ,t j ) is counted and then divided by the average length of the test cases t i and t j . Transitions are considered distinct when there is a mismatch between their origin states, input or output symbols, or destination (tail) states. The average test cases length is used to avoid small (or large) similarity degrees due to similar short (or long) test case lengths. Listing 2 shows an example of four test cases and their respective transitions and states covered given the FSM(P) previously shown in Fig. 4. The number of distinct transitions, the average length and the simple dissimilarity d s d for each pair of test cases are shown in Table 3.
Table 3 Simple dissimilarity of each pair of test cases
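To make the computation concrete, the following Python sketch implements the simple dissimilarity of Eq. 3. The representation of a test case as a list of (origin state, input, output, destination state) tuples and the reading of ndt as the number of transitions occurring in only one of the two test cases are our assumptions for illustration; they are not taken from the RBAC-BT implementation.

```python
from typing import List, Tuple

# Assumed representation: a transition is an (origin, input, output, destination) tuple.
Transition = Tuple[str, str, str, str]

def simple_dissimilarity(t_i: List[Transition], t_j: List[Transition]) -> float:
    """Eq. 3: number of distinct transitions divided by the average test case length."""
    # Transitions are distinct when they appear in only one of the two test cases,
    # i.e., there is a mismatch in origin state, input, output, or destination state.
    ndt = len(set(t_i).symmetric_difference(set(t_j)))
    avg_length = (len(t_i) + len(t_j)) / 2.0
    return ndt / avg_length if avg_length > 0 else 0.0
```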
RBAC applicability:
The idea of RBAC applicability is to quantitatively describe the relevance of a test case to the RBAC policy under test. An RBAC constraint is applicable to a test case if there is a match between the users, roles, or permissions of any input of this test case and the attributes of the constraint. For example, if an RBAC policy contains a static cardinality constraint S_u(u1)=1, this constraint must regulate (i.e., apply some regulation to) all test cases with user u1 in their test inputs (e.g., AS(u1,r2)). This makes it possible to measure how much a test case t may impact a given policy P, without considering dynamic (behavioral) aspects of the RBAC model (e.g., FSM(P) states and transitions). Thus, it describes the structural, or static, coverage of a test case t over one policy P.
However, since an RBAC system is essentially a reactive system, a behavioral view of a test case is also necessary. To satisfy this requirement, we also propose the concept of behavioral, or dynamic, coverage. An RBAC constraint of a policy P reacts to a test case when this constraint is applicable to any input symbol and it influences (enforces) the access control decision. As an example, the test case t3, shown in Fig. 5, depicts a scenario of an RBAC policy containing a dynamic cardinality constraint D_r(r1)=1 and two users u1 and u2 attempting to activate r1. This constraint is applicable (and reacts) to the last input, which requests the second activation of role r1, and enforces a denied response. This information is associated with many transitions of the FSM(P) and used as a requirements-based coverage criterion (Utting et al. 2012). Thus, by quantifying the number of RBAC constraints reacting to the inputs of a test case, the dynamic coverage of a policy P can be measured and used to support test prioritization.
Based on the concepts of static and dynamic coverage, we proposed the RBAC Applicability Degree (AD), which is an array of four values defined as shown in Eq. 4.
$$ AD_{P(t)}=\left[ pad_{P(t)} \quad asad_{P(t)}\quad acad_{P(t)} \quad prad_{P(t)}\right] $$
The RBAC Applicability Degree (AD) of a test case t to a given policy P consists of four values:
Policy Applicability Degree (padP(t)), which shows the ratio of test inputs applicable to any RBAC constraint over the test case length;
Assignment Applicability Degree (asadP(t)), which shows the number of RBAC constraints related to assignment faults reacting to t;
Activation Applicability Degree (acadP(t)), which shows the number of RBAC constraints related to activation faults reacting to t; and
Permission Applicability Degree (pradP(t)), which shows the number of RBAC constraints related to permission faults reacting to t.
The pad_P(t) measures how applicable one test case t is to a given policy, based on all RBAC constraints applicable to t. The asad_P(t) gives quantitative information about how many RBAC constraints related to assignment faults (i.e., UR, S_u, S_r, SSoD, and S_s) react to t. The acad_P(t) gives quantitative information about how many RBAC constraints related to activation faults (i.e., ≤_A, D_u, D_r, DSoD, and D_s) react to t. Finally, the prad_P(t) gives quantitative information about how many RBAC constraints related to permission faults (i.e., PR, ≤_I) react to t.
Based on the values of AD, the aggregate RBAC applicability RA_P(t) is calculated. The RA_P(t) value is a single quantitative attribute which summarizes the relevance of a single test case t to one policy P by summing the four applicability degrees.
$$ RA_{P(t)}= pad_{P(t)} + asad_{P(t)} + acad_{P(t)} + prad_{P(t)} $$
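A minimal sketch of the static and dynamic coverage computation is shown below. The `Constraint` class, its `applies_to`/`reacts_to` predicates, and the string-based test inputs are hypothetical simplifications introduced only to illustrate Eq. 4 and the aggregation into RA_P(t); the actual constraint matching is performed by the authors' tooling.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Constraint:
    """Hypothetical view of an RBAC constraint, for illustration only."""
    category: str                      # 'assignment', 'activation', or 'permission'
    applies_to: Callable[[str], bool]  # static match between a test input and the constraint
    reacts_to: Callable[[str], bool]   # applicable to the input AND enforces the decision

def applicability_degree(test_inputs: List[str],
                         constraints: List[Constraint]) -> Tuple[float, int, int, int]:
    """Eq. 4: AD = [pad, asad, acad, prad] of a test case for one policy."""
    applicable = sum(1 for step in test_inputs
                     if any(c.applies_to(step) for c in constraints))
    pad = applicable / len(test_inputs) if test_inputs else 0.0

    def reacting(category: str) -> int:
        # Number of constraints of the given category that react to any test input.
        return sum(1 for c in constraints if c.category == category
                   and any(c.reacts_to(step) for step in test_inputs))

    return pad, reacting('assignment'), reacting('activation'), reacting('permission')

def rbac_applicability(test_inputs: List[str], constraints: List[Constraint]) -> float:
    """RA_P(t): sum of the four applicability degrees."""
    return sum(applicability_degree(test_inputs, constraints))
```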
However, since test similarity is calculated for pairs of test cases, we also defined the RBAC Applicability Value (AppValue), which sums the applicability degrees of the two test cases (Eq. 6).
$$ AppValue(P,t_{i},t_{j}) = RA_{P(t_{i})} + RA_{P(t_{j})} $$
A priority value (PriorityValue) is calculated to weight the pairwise relevance of two test cases. The PriorityValue is a constant α, β, γ, or δ chosen based on the pad_P(t_i) and pad_P(t_j) values. The constants α, β, γ, and δ are defined by the user such that α>β>γ>δ. The value α is assigned to pairs of test cases in which all test inputs are applicable, and δ is assigned when none of the test inputs are applicable to the constraints of the RBAC policy P. The values 3, 2, 1, and 0 are suggested by Bertolino et al. (2015). Equation 7 shows the formula which derives the PriorityValue.
$$ PriorityValue(P,t_{i},t_{j}) = \begin{cases} \alpha & \text{if } pad_{P(t_{i})} = pad_{P(t_{j})} = 1 \\ \beta & \text{if exactly one of } pad_{P(t_{i})}, pad_{P(t_{j})} \text{ equals } 1 \\ \gamma & \text{if } 0 < pad_{P(t_{i})}, pad_{P(t_{j})} < 1 \\ \delta & \text{otherwise} \end{cases} $$
The RBAC similarity (d_rs) of a pair of test cases is the sum of the d_sd, AppValue, and PriorityValue values when d_sd(t_i, t_j) ≠ 0, as shown in Eq. 8. RBAC similarity was designed based on the approach of Bertolino et al. (2015) for similarity testing of XACML policies.
$$ d_{rs}(P,t_{i},t_{j}) = \begin{cases} 0 & \text{if } d_{sd}(t_{i},t_{j})=0 \\ d_{sd}(t_{i},t_{j}) + AppValue(P,t_{i},t_{j}) + PriorityValue(P,t_{i},t_{j}) & \text{otherwise} \end{cases} $$
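Building on the two sketches above, Eqs. 6–8 can be combined as follows. The default constants 3, 2, 1, and 0 follow the suggestion of Bertolino et al. (2015); the separate `inputs`/`transitions` views of a test case are, again, an assumed representation rather than the authors' data structures.

```python
def app_value(constraints, inputs_i, inputs_j) -> float:
    """Eq. 6: sum of the RBAC applicabilities of the two test cases."""
    return (rbac_applicability(inputs_i, constraints)
            + rbac_applicability(inputs_j, constraints))

def priority_value(constraints, inputs_i, inputs_j,
                   alpha=3, beta=2, gamma=1, delta=0) -> float:
    """Eq. 7: weight of a pair of test cases based on their pad values."""
    pad_i = applicability_degree(inputs_i, constraints)[0]
    pad_j = applicability_degree(inputs_j, constraints)[0]
    if pad_i == 1 and pad_j == 1:
        return alpha
    if (pad_i == 1) != (pad_j == 1):   # exactly one test case is fully applicable
        return beta
    if 0 < pad_i < 1 and 0 < pad_j < 1:
        return gamma
    return delta

def rbac_similarity(constraints, inputs_i, inputs_j, transitions_i, transitions_j) -> float:
    """Eq. 8: d_rs = d_sd + AppValue + PriorityValue, or 0 when d_sd = 0."""
    d_sd = simple_dissimilarity(transitions_i, transitions_j)
    if d_sd == 0:
        return 0.0
    return (d_sd + app_value(constraints, inputs_i, inputs_j)
            + priority_value(constraints, inputs_i, inputs_j))
```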
As an example, the applicability degrees of each test case presented in Listing 2, given the RBAC policy in Listing 1, are presented in Table 4.
Table 4 RBAC applicability degree of each test case
As shown in Table 4, all test inputs of t3 are applicable to at least one RBAC constraint, and test case t3 has the greatest RBAC applicability degree. Test case t2 has the second greatest value, followed by t1 and t0 with the same applicability degree. Afterwards, the simple dissimilarity, RBAC applicability value, and priority value are calculated for all pairs of test cases. All these values are combined into the RBAC similarity (d_rs), which is calculated for each pair of test cases, as presented in Table 5.
Table 5 RBAC similarity of each pair of test cases
Test prioritization algorithm
Given the similarity of all pairs of test cases, a test prioritization algorithm has to be used for scheduling test case execution. The pseudocode of the test prioritization algorithm used in this study is presented in Algorithm 1. Essentially, the algorithm iterates over a similarity matrix, calculated using a similarity function d_x, from the most distinct pairs of test cases of a test suite S to the least dissimilar ones. For each pair, the longer test case is included in the list of prioritized test cases; if it has already been included, the shorter one is included instead. This process is repeated until all test cases of S are included in L, which stands for the prioritized test suite.
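The greedy scheduling step can be sketched in Python as follows. This is not a literal transcription of Algorithm 1: the upper-triangular `similarity` matrix, the use of len() as the test case length, and the tie handling are our simplifying assumptions.

```python
def prioritize(test_suite, similarity):
    """Order test cases from the most dissimilar pairs to the least dissimilar ones.

    `test_suite` is a list of test cases (anything whose length can be taken with len())
    and `similarity[i][j]` holds d_x(t_i, t_j) for i < j.
    """
    n = len(test_suite)
    pairs = sorted(((i, j) for i in range(n) for j in range(i + 1, n)),
                   key=lambda p: similarity[p[0]][p[1]], reverse=True)
    prioritized = []
    for i, j in pairs:
        longer, shorter = sorted((i, j), key=lambda k: len(test_suite[k]), reverse=True)
        # Include the longer test case of the pair; if it is already scheduled,
        # include the shorter one instead.
        if longer not in prioritized:
            prioritized.append(longer)
        elif shorter not in prioritized:
            prioritized.append(shorter)
        if len(prioritized) == n:
            break
    return [test_suite[k] for k in prioritized]
```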
Using the RBAC similarity and the test suite shown in Listing 2, the similarity matrix shown in Eq. 9 is obtained.
$$ \begin{aligned} &\quad \quad \quad \ \ t_{0} \quad t_{1} \quad t_{2} \quad t_{3}\\ SM&= \begin{array}{c} t_{0}\\ t_{1}\\ t_{2}\\ t_{3} \end{array} \left[ \begin{array}{cccc} 0 & 4.55 & 5.55 & 7.77 \\ 0 & 0 & 5.55 & 7.77 \\ 0 & 0 & 0 & 8.77 \\ 0 & 0 & 0 & 0 \end{array} \right] \end{aligned} $$
Using Algorithm 1, the most dissimilar pair of test cases (t2,t3) is selected first and the longest test case, t3, is added to L. Afterwards, test case t0 is included, being the longest test case from the next most dissimilar pair (t0,t3) that has not yet been scheduled. The next pair considered is (t1,t3), and t1 is the next to be included. The prioritization ends with test case t2, from pair (t0,t2), scheduled at the end of the test execution. Listing 3 shows the resulting test suite L, prioritized according to RBAC similarity.
Experimental evaluation
According to Damasceno et al. (2016), large test suites tend to be generated for RBAC systems regardless of the FSM-based testing method used. The larger the number of states and transitions of FSM(P), the larger the generated test suites become in terms of the number of resets, total test suite length, and average test case length. Thus, additional steps become necessary to make software testing more cost-effective.
We proposed RBAC similarity to fill this research gap and designed an experiment to evaluate the cumulative effectiveness and the APFD of RBAC similarity and to compare it with simple dissimilarity and random prioritization, using test suites generated from FSM-based testing methods on RBAC systems. A schematic overview of this experiment is presented in Fig. 6.
Comparison of test prioritization techniques - schematic overview
Fifteen test suites were taken from a previous study (Damasceno et al. 2016) where test characteristics (i.e., number of resets, test suite length, and avg. test case length) and effectiveness were analyzed based on the FSM(P) characteristics (i.e., numbers of states, and transitions). These test suites were generated from five RBAC policies specified as FSM(P) models using the RBAC-BT software (Damasceno et al. 2016) and implementations of the W (Chow 1978), HSI (Petrenko and Bochmann 1995), and SPY (Simão et al. 2009) methods. Table 6 shows a summary of the five RBAC policies and the total number of RBAC mutants.
Table 6 RBAC policies characteristics
RBAC-BT (Footnote 1) is an FSM-based testing tool designed by Damasceno et al. (2016) to support FSM-based testing of RBAC systems and the automatic generation of FSM(P) models and RBAC mutants. RBAC-BT was extended to support test prioritization using RBAC similarity and simple dissimilarity. Due to the high number of pairwise comparisons required to perform test prioritization, a time limit of 24 hours was defined for each test prioritization procedure. Procedures exceeding this limit were canceled, and random subsets of the complete test suites, named subtest suites, were taken for prioritization.
In preliminary experiments, the prioritization of the test suites of policies P03, P04, and P05 took more than 24 hours.
Thus, subtest suites of the aforementioned policies containing 2528 test cases were randomly generated 30 times. The number 2528 was taken from the largest complete test suite whose prioritization stayed below the 24-hour threshold, namely the W test suite of policy P02. Table 7 shows the characteristics of the FSM(P) models and their respective complete test suites.
Table 7 FSM(P) and test characteristics
The six complete test suites were prioritized using each test prioritization criterion, and the cumulative effectiveness of these test suites was measured in twenty-one parts. Afterwards, the cumulative effectiveness was used to calculate the APFD of each scenario. The APFD value was calculated using Eq. 1, with F_i as the number of faults detected by one test fragment i and l as the number of RBAC mutants. Random prioritization was performed 10 times on the 30 random subtest suites of P03, P04, and P05.
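Eq. 1 is defined earlier in the paper; for reference, a sketch of the classic per-test-case APFD of Elbaum et al. (2000) is given below. The boolean `fault_matrix` representation and the convention of assigning undetected mutants the worst possible rank are our assumptions and may differ from the fragment-based computation used in this experiment.

```python
def apfd(order, fault_matrix):
    """Classic APFD: 1 - (TF_1 + ... + TF_m) / (n * m) + 1 / (2 * n).

    `order` is the prioritized list of test case indices and `fault_matrix[t][f]`
    is True when test case t kills mutant f.
    """
    n = len(order)
    m = len(fault_matrix[order[0]])
    first_detection = []
    for f in range(m):
        # Rank (1-based) of the first test case in the prioritized order that kills
        # mutant f; undetected mutants get the worst rank n + 1 (an assumed convention).
        rank = next((k + 1 for k, t in enumerate(order) if fault_matrix[t][f]), n + 1)
        first_detection.append(rank)
    return 1.0 - sum(first_detection) / (n * m) + 1.0 / (2 * n)
```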
Using the R statistical package, we calculated the mean APFD with a confidence interval (CI) of 95% for all test scenarios and performed the nonparametric Wilcoxon matched-pairs signed ranks test to verify whether RBAC similarity reached APFDs different from simple dissimilarity and random prioritization, with a confidence interval of 95%. As the alternative hypothesis, we considered that RBAC similarity performed better (i.e., greater mean cumulative effectiveness) than the other criteria.
To complement the hypothesis tests, we analyzed the effect size by computing unstandardized measures (i.e., median and mean differences) and standardized measures (i.e., Cohen's d, Hedges' g (Kampenes et al. 2007), and Vargha-Delaney's Â12 (Arcuri and Briand 2011)) using R and the effsize package (Torchiano 2017).
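The statistical analysis was carried out in R; an equivalent one-sided comparison could be sketched in Python with SciPy as below. The function name and the paired-sample layout are our assumptions, not the authors' scripts.

```python
from scipy import stats

def compare_to_baseline(rbac_values, baseline_values, alpha=0.05):
    """One-sided Wilcoxon matched-pairs signed ranks test.

    Both arguments are paired sequences of cumulative effectiveness (or APFD) values
    measured on the same scenarios; the alternative hypothesis is that RBAC similarity
    yields greater values than the baseline criterion.
    """
    statistic, p_value = stats.wilcoxon(rbac_values, baseline_values, alternative='greater')
    return statistic, p_value, p_value < alpha
```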
Analysis of the complete test suites
In this section, we discuss the results of the experiments comparing RBAC similarity, simple dissimilarity, and random prioritization based on the complete test suites. The mean cumulative effectiveness values for P01 and P02 are shown in Tables 8 and 9 and in Figs. 7 and 8, respectively, with error bars calculated with a confidence interval of 95%. At the end of this section, we also show the mean APFD and the results of the Wilcoxon matched-pairs signed ranks test.
Cumulative effectiveness for P01 with error bars (CI=95%). a P01 + W. b P01 + HSI. c P01 + SPY
Table 8 Cumulative effectiveness of the P01 complete test suites
In most cases, there was no statistically significant difference between the prioritization algorithms in the P01 and P02 scenarios. The P01 + HSI scenario was the only exception, in which RBAC similarity reached an APFD higher than simple dissimilarity and random prioritization. In the five remaining scenarios, RBAC similarity performed without significant difference compared to at least one of the other methods. The mean APFD values for each scenario are shown in Table 10, with their respective 95% confidence intervals subscripted.
Table 10 Mean APFD of the complete test suites with confidence interval of 95%
Table 11 shows the results of the Wilcoxon matched-pairs signed ranks test, using a confidence interval of 95%, for the mean cumulative effectiveness. In this case, we compared RBAC similarity to simple and random prioritization, and random prioritization to simple dissimilarity. Significant results are highlighted in bold.
Table 11 Wilcoxon matched-pairs signed ranks test (CI=95%) for P01 and P02
Table 11 corroborates the findings of Fig. 7 and Table 10: RBAC similarity had a statistically significant difference compared to the other criteria in the P01 + HSI scenario, and random prioritization reached significantly different APFDs compared to simple dissimilarity in all scenarios.
Analysis of the subtest suites
Since test prioritization for P03, P04, and P05 was too expensive, we considered 30 random subtest suites with 2528 test cases each. Random prioritization was run 10 times for each of the 30 subtest suites.
The mean cumulative effectiveness of P03, P04, and P05 are respectively presented in Tables 12, 13, and 14. Figs. 9, 10, and 11 show the mean cumulative effectiveness with error bars calculated using a confidence interval of 95%.
Table 12 Cumulative effectiveness of the P03 subtest suites
In the P03 test scenarios, the first 5 to 10% of the W, HSI, and SPY subtest suites (i.e., a subset of 125 to 250 test cases) was sufficient to reach the maximum effectiveness. All test prioritization approaches presented similar results, and no statistically significant difference was found between RBAC similarity and the other approaches. In scenarios like this, test minimization techniques may be more cost-effective than test prioritization, given its O(n²) complexity.
In the P04 scenario, the benefits of RBAC similarity started to become more visible and statistically significant, as shown in Fig. 10 and Table 13. There was one exception in which no significant difference was obtained. In the P04 + W scenario, the W method generated an extremely large test suite and, to enable test prioritization, we selected random subtest suites containing 2528 test cases. This random selection may have reduced test diversity. In the other scenarios, P04 + HSI and P04 + SPY, we found that the cumulative effectiveness of RBAC similarity had a statistically significant difference compared to the other methods.
The mean cumulative effectiveness values for the P05 test scenarios are presented in Fig. 11 and Table 14. In the P05 scenario, RBAC similarity, simple dissimilarity, and random prioritization clearly had statistically different cumulative effectiveness. Prioritized with RBAC similarity, 65% of the W and HSI subtest suites and 80% of the SPY subtest suites were sufficient to reach the highest effectiveness. RBAC similarity presented a significantly greater cumulative effectiveness compared to random prioritization and simple dissimilarity.
For P03, P04, and P05, we also calculated the mean APFD based on the cumulative effectiveness of all runs of the 30 random subtest suites. The mean APFD of each test scenario with a confidence interval of 95% is shown in Table 15. The highest APFD values are highlighted in bold.
Table 15 Mean APFD of the subtest suites with confidence interval of 95%
In the P03 scenario, the fault distribution along the FSM(P03) may have benefited fault detection, and all methods performed similarly. In the P04 scenario, there was only one case (i.e., P04 + W) in which RBAC similarity did not work well and no statistically significant difference was found. Simple dissimilarity, in turn, did not reach an APFD higher than random prioritization. Finally, in all P05 scenarios, we found statistically significant differences between RBAC similarity, simple dissimilarity, and random prioritization. Table 16 shows the results of the Wilcoxon matched-pairs signed ranks test in the test scenarios of policies P03, P04, and P05. Significant results are highlighted in bold.
Table 16 Wilcoxon matched-pairs signed ranks test (CI=95%) for P03,P04 and P05
The analysis of the mean APFD and the confidence intervals of the subtest suites indicated that RBAC similarity performed better than simple dissimilarity and random prioritization in some scenarios. In addition to assessing whether an algorithm performs statistically better than another, it is crucial to measure the magnitude of such an improvement. To analyze this aspect, effect size measures are required (Kampenes et al. 2007; Arcuri and Briand 2011; Wohlin et al. 2012).
Effect size to subtest suites
Effect size measures quantify the difference (i.e., the magnitude of the improvement) between two groups (Wohlin et al. 2012). Kampenes et al. (2007) found that only 29% of software engineering experiments report some effect size measure. Thus, to improve our analysis, we also evaluated the effect that each test prioritization method had on the APFD compared with the other methods.
There are two main classes of effect size measures: (i) unstandardized measures, which depend on the unit of measurement; and (ii) standardized measures, which are independent of the measurement units of the evaluation criteria. For each pair of different prioritization methods, we computed five measures: two unstandardized, (i) mean and (ii) median differences; and three standardized, (iii) Cohen's d (Cohen 1977), (iv) Hedges' g (Hedges 1981), and (v) Vargha-Delaney's Â12 (Vargha and Delaney 2000).
Mean and median differences, Cohen's d, and Hedges' g are presented because they are frequently reported metrics in the software engineering literature (Kampenes et al. 2007). Cohen's d and Hedges' g are computed based on the mean difference and an estimate of the population standard deviation σ_pop, and interpreted using standard conventions (Cohen 1992).
Vargha-Delaney's (VD) Â12 is an effect size measure based on stochastic superiority that denotes the probability that one method outperforms another (Vargha and Delaney 2000). If both methods are equivalent, then Â12 = 0.5. An effect size Â12 > 0.5 means that the treatment method has a higher probability of achieving a better performance than the control method, and vice versa. Vargha-Delaney's Â12 is recommended by Arcuri and Briand (2011) as a simple and intuitive measure of effect size for assessing randomized algorithms in software engineering research. Table 17 shows the pairwise comparison of the three test prioritization methods. The metrics presented can also be used in future research (e.g., meta-analysis (Kampenes et al. 2007)).
Table 17 Pairwise comparison among the methods with respect to the APFD
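As an illustration of the stochastic-superiority measure described above, the Â12 statistic can be computed with a few lines of Python; this sketch follows the usual definition rather than the internals of the effsize package.

```python
def vargha_delaney_a12(treatment, control):
    """Â12: probability that a value drawn from `treatment` exceeds one from `control`.

    0.5 means the two methods are stochastically equivalent; values above 0.5
    favour the treatment method.
    """
    wins = sum(1.0 for x in treatment for y in control if x > y)
    ties = sum(0.5 for x in treatment for y in control if x == y)
    return (wins + ties) / (len(treatment) * len(control))
```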
We did not compute the effect size for P01 and P02 due to the deterministic nature of the RBAC and simple prioritizations and the consequent σ_pop = 0. The analysis of effect size corroborated the mean APFDs and the Wilcoxon matched-pairs signed ranks tests: RBAC similarity had good results in P04+HSI, P04+SPY, and all P05 scenarios.
We found differences of medium magnitude between RBAC similarity and the simple and random prioritizations in P04+HSI, and of large magnitude in P04+SPY and all P05 scenarios. There was only one case (i.e., P03+HSI) in which RBAC prioritization did not outperform the other methods. In the other scenarios, we found negligible to medium differences between the techniques. Thus, the following order was observed, from the method with the lowest to the highest APFDs: Simple ≺ Random ≺ RBAC.
Recently, Cartaxo et al. (2011) and Bertolino et al. (2015) showed that similarity functions can be helpful when it is necessary to prioritize exhaustive test suites automatically generated for LTS models and XACML policies, respectively. In our previous study (Damasceno et al. 2016), we found that, no matter which FSM-based testing method is applied to RBAC systems, larger test suites tend to be generated as the number of users and roles increases. Thus, domain-specific test criteria are required to optimize FSM-based testing for RBAC systems. To this end, there are three main approaches: (i) test minimization, (ii) test selection, and (iii) test prioritization.
Unlike (i) test minimization and (ii) test selection, which may compromise fault detection capability, (iii) test prioritization aims at finding an order of execution for an entire test suite (i.e., without filtering out any test case) based on some test criterion (Yoo and Harman 2012). In this paper, we investigated test prioritization for RBAC systems and proposed RBAC similarity.
RBAC similarity compared to the other criteria
Our results showed that RBAC similarity performed better than simple dissimilarity and random prioritization in some of the scenarios, especially those with large FSM(P) models. For policies P01 and P02, we did not find statistically significant differences between the test prioritization criteria in most of the scenarios. The only exception was P01 + HSI, where a statistically significant difference between RBAC similarity and the other criteria was found. The HSI method reduces test dimensions by using harmonized state identifiers instead of the characterization set (Petrenko and Bochmann 1995). In this scenario, the characteristics of the HSI method may have affected test diversity and, as a result, benefited RBAC similarity.
Due to the large number of test cases generated from policies P03, P04, and P05, prioritizing the complete test suites became infeasible. To overcome this issue, we opted to apply test prioritization on random subtest suites.
For policy P03, all test prioritization approaches raised the cumulative effectiveness to its maximum value within the first 5 to 10% of the suite, and we did not find statistically significant differences between them. Thus, the fault distribution along the FSM(P03) model benefited fault detection and test prioritization. In scenarios like this, test minimization may be more suitable than test prioritization, which has an O(n²) cost. However, as we highlighted earlier, there is a risk of reducing the capability of the test suites to detect faults outside the RBAC fault domain.
The benefits of RBAC similarity became more evident in the P04 and P05 scenarios, which have the largest FSM(P) models. For policy P04, we found a statistically significant difference in favor of RBAC similarity for the subtest suites generated from HSI and SPY. The only exception was the P04 + W scenario, where the random selection of subtest suites may have compromised test diversity.
In the P05 scenario, RBAC similarity outperformed both other test prioritization criteria with statistically significant differences. The analysis of the mean APFD values and effect sizes corroborates the mean cumulative effectiveness depicted in Figs. 9 to 11.
Random prioritization vs. simple dissimilarity
Our results showed a statistically significant difference between random prioritization and simple dissimilarity. In ten out of 15 scenarios, random prioritization presented APFDs significantly different from, and higher than, simple dissimilarity. RBAC faults can be exhibited across many different transitions of FSM(P) (Masood et al. 2010); thus, test diversity alone may not imply a higher APFD.
Practical feasibility
We found that RBAC similarity may not be feasible for large complete test suites, as seen in scenarios P03, P04, and P05. The O(n²) complexity is an inherent characteristic of most test prioritization approaches (Elbaum et al. 2002), and especially of similarity testing, which is also an all-to-all comparison problem (Zhang et al. 2017). However, RBAC similarity can still be improved through (i) test minimization and/or (ii) parallel programming.
The RBAC applicability can be used in test minimization as a requirements coverage criterion to find the test cases relevant to the constraints (i.e., requirements) of RBAC policies. Afterwards, RBAC similarity can be applied as we proposed. Thus, a significant test cost reduction can be achieved, but at the risk of reducing the fault-detection capability (Yoo and Harman 2012).
Recent studies have proposed parallel algorithms to efficiently calculate similarity matrices on heterogeneous hardware (Rawald et al. 2015) and for ontology mapping (Gîză-Belciug and Pentiuc 2015). However, to the best of our knowledge, these algorithms have never been investigated for similarity testing. Using RBAC similarity as a test minimization criterion and parallel algorithms to calculate similarity matrices for test prioritization could speed up similarity testing, but this is out of the scope of this study and is left as future work.
Threats to validity
Conclusion Validity: Threats to conclusion validity relate to the ability to draw correct conclusions about the relation between the treatment and the outcomes. To mitigate them, we used the Wilcoxon matched-pairs signed ranks test to verify whether RBAC similarity reached APFDs different from simple dissimilarity and random prioritization, with a confidence interval of 95%. We also computed the mean APFD with a confidence interval of 95% and five effect size measures to quantify the difference between the methods. The statistical analyses were performed using the R statistical package and the effsize package (Torchiano 2017). The R scripts and the input and output statistical data are included in the RBAC-BT repository.
Internal Validity: Threats to internal validity are related to influences that can affect the independent variables with respect to causality. They threaten conclusions about a possible causal relationship between treatment and outcome. To mitigate this threat, random tasks (i.e., subtest suite generation and random prioritization) were performed repeatedly to avoid results obtained by chance. Most of the artifacts used in this work were reused from the lab package of our previous study (Damasceno et al. 2016).
Construct Validity: Construct validity concerns generalizing the outcomes to the concept or theory behind the experiment. We used first-order mutants from the RBAC fault domain (Masood et al. 2009) to simulate simple faults and evaluate the effectiveness of each prioritization criterion. Mutation analysis is a common assessment approach in software testing investigations (Jia and Harman 2011). Other RBAC fault models could have been used in this experiment, such as malicious faults (Masood et al. 2009) and probabilistic models of fault coverage (Masood et al. 2010). These fault models could be used to analyze RBAC similarity testing from the perspective of faults of a different nature, but they were left for future work. Moreover, despite the relatively low number of faults, the RBAC fault model is still representative of the functional faults of RBAC systems (Masood et al. 2009).
External Validity: External validity concerns the generalization of the outcomes to other scenarios. To mitigate this threat, we included test suites generated from three different test generation methods and RBAC policies with different characteristics.
Essentially, the RBAC model reduces the complexity of security management routines by grouping privileges into roles, which can be assigned to users and activated in sessions. Access control testing is an important activity during the development of RBAC systems, since implementation mistakes may lead to security breaches. In this context, previous studies have shown that FSM-based testing can be effective at detecting RBAC faults, but very expensive. Thus, additional steps become necessary to make RBAC testing more feasible and less costly.
Test case prioritization is one solution to this problem: it aims at finding an ordering for test case execution that maximizes some test criterion. Similarity testing is a variant of test case prioritization which has been investigated in the XACML and LTS domains, where it enabled finding better orders for test case execution. In this paper, we introduced a test prioritization technique named RBAC similarity, which uses the dissimilarity between pairs of test cases and their pairwise applicability to the RBAC policy under test (i.e., the relevance of these test cases to the RBAC constraints) as test prioritization criteria.
Our RBAC similarity approach was experimentally evaluated against simple dissimilarity and random prioritization as baselines. The results showed that RBAC similarity improved the mean cumulative effectiveness and the APFD and enabled the test suites to reach their maximum effectiveness at a faster rate, with significant differences in most of the cases. In some scenarios, prioritizing HSI and SPY test suites with RBAC similarity resulted in better APFD values than applying the technique to W test suites. The characteristics of the test cases generated from HSI and SPY favoured the similarity testing algorithms, while the random selection applied to the complete test suites generated from W negatively impacted test prioritization using similarity functions. Moreover, random prioritization also outperformed simple dissimilarity in most of the cases. We analyzed our data using the Wilcoxon matched-pairs signed ranks test, error bars with CI=95%, and five effect size metrics (i.e., mean and median differences, Cohen's d, Hedges' g, and Vargha-Delaney's Â12), and found statistically significant differences in some scenarios.
All test artifacts (i.e., the RBAC-BT tool, test suites, test results, RBAC policies, and statistical data) are available online (Footnote 2) and can be used to replicate, verify, and validate this experiment. As future work, we want to investigate alternative algorithms for ordering test cases, such as algorithms using total information for test prioritization, and other fault models, such as simulated malicious faults and probabilistic fault models. We also intend to investigate the usage of RBAC similarity as a requirements coverage criterion for test minimization and as a fitness function in search-based software testing (McMinn 2004).
https://github.com/damascenodiego/rbac-bt/
https://github.com/damascenodiego/rbac-bt
Andrews, JH, Briand LC, Labiche Y, Namin AS (2006) Using mutation analysis for assessing and comparing testing coverage criteria. IEEE Trans Softw Eng. 32(8):608–624. https://doi.org/10.1109/TSE.2006.83.
ANSI (2004) Role based access control. Technical report, American National Standards Institute, Inc.ANSI/INCITS 359-2004.
Arcuri, A, Briand L (2011) A practical guide for using statistical tests to assess randomized algorithms in software engineering In: Proceedings of the 33rd International Conference on Software Engineering. ICSE '11, 1–10.. ACM, New York, NY, USA. https://doi.org/10.1145/1985793.1985795. http://doi.acm.org/10.1145/1985793.1985795.
Ben Fadhel, A, Bianculli D, Briand L (2015) A comprehensive modeling framework for role-based access control policies. J Syst Softw. 107(C):110–126. https://doi.org/10.1016/j.jss.2015.05.015.
Bertolino, A, Daoudagh S, Kateb DE, Henard C, Traon YL, Lonetti F, Marchetti E, Mouelhi T, Papadakis M (2015) Similarity testing for access control. Inf Softw Technol. 58:355–372. https://doi.org/10.1016/j.infsof.2014.07.003.
Broy, M, Jonsson B, Katoen JP, Leucker M, Pretschner A (2005) Model-Based Testing of Reactive Systems: Advanced Lectures (Lecture Notes in Computer Science). Springer, Secaucus, NJ, USA.
Cartaxo, EG, Machado PDL, Neto FGO (2011) On the use of a similarity function for test case selection in the context of model-based testing. Softw Test Verif Reliab. 21(2):75–100. https://doi.org/10.1002/stvr.413.
Chow, TS (1978) Testing software design modeled by finite-state machines. IEEE Trans Softw Eng. 4(3):178–187. https://doi.org/10.1109/TSE.1978.231496.
Cohen, J (1977) Statistical Power Analysis for the Behavioral Sciences. Revised edn.. Academic Press, New York. https://doi.org/10.1016/B978-0-12-179060-8.50001-3. https://www.sciencedirect.com/science/article/pii/B9780121790608500013.
Cohen, J (1992) A power primer. Psychol Bull. 112(1):155–159. https://doi.org/10.1037/0033-2909.112.1.155.
Coutinho, AEVB, Cartaxo EG, Machado PDdL (2014) Analysis of distance functions for similarity-based test suite reduction in the context of model-based testing. Softw Qual J.1–39. https://doi.org/10.1007/s11219-014-9265-z.
Damasceno, CDN, Masiero PC, Simao A (2016) Evaluating test characteristics and effectiveness of fsm-based testing methods on rbac systems In: Proceedings of the 30th Brazilian Symposium on Software Engineering. SBES '16, 83–92.. ACM, New York, NY, USA. https://doi.org/10.1145/2973839.2973849. http://doi.acm.org/10.1145/2973839.2973849.
Elbaum, S, Malishevsky AG, Rothermel G (2000) Prioritizing test cases for regression testing. SIGSOFT Softw Eng Notes. 25(5):102–112. https://doi.org/10.1145/347636.348910.
Elbaum, S, Malishevsky AG, Rothermel G (2002) Test case prioritization: A family of empirical studies. IEEE Trans Softw Eng. 28(2):159–182. https://doi.org/10.1109/32.988497.
Endo, AT, Simao A (2013) Evaluating test suite characteristics, cost, and effectiveness of fsm-based testing methods. Inf Softw Technol. 55(6):1045–1062. https://doi.org/10.1016/j.infsof.2013.01.001.
Fabbri, SCPF, Delamaro ME, Maldonado JC, Masiero PC (1994) Mutation analysis testing for finite state machines In: Software Reliability Engineering, 1994. Proceedings., 5th International Symposium On, 220–229. https://doi.org/10.1109/ISSRE.1994.341378.
Felderer, M, Zech P, Breu R, Büchler M, Pretschner A (2015) Model-based security testing: a taxonomy and systematic classification. Softw Test Verif Reliab. https://doi.org/10.1002/stvr.1580.
Ferraiolo, DF, Kuhn RD, Chandramouli R (2007) Role-Based Access Control. 2nd edn. Artech House, Inc., Norwood, MA, USA.
Gill, A (1962) Introduction to the Theory of Finite State Machines. McGraw-Hill, New York.
Gîză-Belciug, F, Pentiuc SG (2015) Parallelization of similarity matrix calculus in ontology mapping systems In: 2015 14th RoEduNet International Conference - Networking in Education and Research (RoEduNet NER), 50–55. https://doi.org/10.1109/RoEduNet.2015.7311827.
Hedges, LV (1981) Distribution theory for glass's estimator of effect size and related estimators. J Educ Stat. 6(2):107–128. https://doi.org/10.3102/10769986006002107. https://doi.org/10.3102/10769986006002107.
Henard, C, Papadakis M, Perrouin G, Klein J, Heymans P, Traon YL (2014) Bypassing the combinatorial explosion: Using similarity to generate and prioritize t-wise test configurations for software product lines. IEEE Trans Softw Eng. 40(7):650–670. https://doi.org/10.1109/TSE.2014.2327020. arXiv:1211.5451v1.
Jang-Jaccard, J, Nepal S (2014) A survey of emerging threats in cybersecurity. J Comput Syst Sci 80(5):973–993. https://doi.org/10.1016/j.jcss.2014.02.005. Special Issue on Dependable and Secure Computing.
Jia, Y, Harman M (2011) An analysis and survey of the development of mutation testing. Softw Eng IEEE Trans. 37(5):649–678. https://doi.org/10.1109/TSE.2010.62.
Kampenes, VB, Dyb T, Hannay JE, Sjberg DIK (2007) A systematic review of effect size in software engineering experiments. Inf Softw Technol. 49(11):1073–1086. https://doi.org/10.1016/j.infsof.2007.02.015.
Masood, A, Bhatti R, Ghafoor A, Mathur AP (2009) Scalable and effective test generation for role-based access control systems. IEEE Trans Softw Eng. 35(5):654–668. https://doi.org/10.1109/TSE.2009.35.
Masood, A, Ghafoor A, Mathur AP (2010) Fault coverage of constrained random test selection for access control: A formal analysis. J Syst Softw. 83(12):2607–2617. TAIC PART 2009 - Testing: Academic & Industrial Conference - Practice And Research Techniques.
McMinn, P (2004) Search-based software test data generation: A survey: Research articles. Softw Test Verif Reliab. 14(2):105–156. https://doi.org/10.1002/stvr.v14:2.
Mouelhi, T, Kateb DE, Traon YL (2015) Chapter five - inroads in testing access control, Advances in Computers, vol. 99. Elsevier. https://doi.org/10.1016/bs.adcom.2015.04.003. http://www.sciencedirect.com/science/article/pii/S0065245815000327.
OASIS (2013) eXtensible Access Control Markup Language (XACML) Version 3.0. Technical report, Organization for the Advancement of Structured Information Standards (OASIS). http://docs.oasis-open.org/xacml/3.0/xacml-3.0-core-spec-os-en.pdf.
OASIS (2014) XACML v3.0 Core and Hierarchical Role Based Access Control (RBAC) Profile Version 1.0. http://docs.oasis-open.org/xacml/3.0/rbac/v1.0/cs02/xacml-3.0-rbac-v1.0-cs02.pdf.
Ouriques, JaFS (2015) Strategies for prioritizing test cases generated through model-based testing approaches In: Proceedings of the 37th International Conference on Software Engineering - Volume 2. ICSE '15, 879–882.. IEEE Press, Piscataway, NJ, USA. http://dl.acm.org/citation.cfm?id=2819009.2819204.
Petrenko, A, Bochmann GV (1995) Selecting test sequences for partially-specified nondeterministic finite state machines. In: Luo G (ed)7th IFIP WG 6.1 International Workshop on Protocol Test Systems. IWPTS '94, 95–110.. Chapman and Hall, Ltd., London, UK. http://dl.acm.org/citation.cfm?id=236187.233118.
Rawald, T, Sips M, Marwan N, Leser U (2015) Massively parallel analysis of similarity matrices on heterogeneous hardware In: Proceedings of the Workshops of the EDBT/ICDT 2015 Joint Conference (EDBT/ICDT), Brussels, Belgium, March 27th, 2015, 56–62.. CEUR-WS, Brussels.
Samarati, P, de Vimercati SC (2001) Access Control: Policies, Models, and Mechanisms(Focardi R, Gorrieri R, eds.). Springer, Berlin, Heidelberg. http://dx.doi.org/10.1007/3-540-45608-2_3.
Simão, A, Petrenko A, Yevtushenko N (2009) Generating Reduced Tests for FSMs with Extra States. In: Núñez M, Baker P, Merayo MG (eds), 129–145.. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-05031-2_9. http://dx.doi.org/10.1007/978-3-642-05031-2_9.
Torchiano, M (2017) Effsize: Efficient Effect Size Computation (v. 0.7.1). CRAN package repository. https://cran.r-project.org/web/packages/effsize/effsize.pdf. CRAN package repository. [Online; accessed 20-November-2017].
Utting, M, Pretschner A, Legeard B (2012) A taxonomy of model-based testing approaches. Softw Test Verif Reliab. 22(5):297–312. https://doi.org/10.1002/stvr.456.
Vargha, A, Delaney HD (2000) A critique and improvement of the cl common language effect size statistics of mcgraw and wong. J Educ Behav Stat. 25(2):101–132. https://doi.org/10.3102/10769986025002101. https://doi.org/10.3102/10769986025002101.
Vasilevskii, MP (1973) Failure diagnosis of automata. Cybernetics 9(4):653–665. https://doi.org/10.1007/BF01068590.
Wohlin, C, Runeson P, Höst M, Ohlsson MC, Regnell B, Wesslén A (2012) Measurement. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-29044-2_3.
Yoo, S, Harman M (2012) Regression testing minimization, selection and prioritization: A survey. Softw Test Verif Reliab. 22(2):67–120. https://doi.org/10.1002/stvr.430.
Zhang, YF, Tian YC, Kelly W, Fidge C (2017) Scalable and efficient data distribution for distributed computing of all-to-all comparison problems. Futur Gener Comput Syst. 67:152–162.
We acknowledge the help from all the LabES's members (Software Engineering Laboratory) at the University of Sao Paulo (USP) for their valuable comments. We also thank the reviewers for all valuable comments and suggestions to this study.
Carlos Diego Nascimento Damasceno's research project was supported by the National Council for Scientific and Technological Development (CNPq), process number 132249/2014-6.
Institute of Mathematics and Computer Science, University of Sao Paulo (ICMC-USP), Trabalhador Sao-carlense Avenue, 400, Sao Carlos-SP, 13566-590, Brazil
Carlos Diego N. Damasceno, Paulo C. Masiero & Adenilso Simao
Software Engineering Laboratory – LabES, Trabalhador Sao-carlense Avenue, 400, Room 6-208, Sao Carlos-SP, 13566-590, Brazil
Carlos Diego N. Damasceno
Paulo C. Masiero
Adenilso Simao
CDND designed and conducted the experiment, adapted the RBAC-BT tool and analyzed the results. PCM and AS supported the validation of the experiment protocol and analysis of results. All authors read and approved the final manuscript.
Correspondence to Carlos Diego N. Damasceno.
The authors declare that they have no competing interests.
N. Damasceno, C.D., Masiero, P.C. & Simao, A. Similarity testing for role-based access control systems. J Softw Eng Res Dev 6, 1 (2018). https://doi.org/10.1186/s40411-017-0045-x
Keywords: Finite state machines; Test prioritization
Exposure to urban PM1 in rats: development of bronchial inflammation and airway hyperresponsiveness
Ágnes Filep1,2,
Gergely H. Fodor3,
Fruzsina Kun-Szabó4,
László Tiszlavicz5,
Zsolt Rázga5,
Gábor Bozsó6,
Zoltán Bozóki1,2,
Gábor Szabó1,2 &
Ferenc Peták3
Respiratory Research volume 17, Article number: 26 (2016) Cite this article
Several epidemiological and laboratory studies have provided evidence that atmospheric particulate matter (PM) increases the risk of respiratory morbidity. It is well known that the smallest fraction of PM (PM1 - particulate matter with a diameter below 1 μm) penetrates the deepest into the airways. The ratio of the different size fractions in PM is highly variable, but in industrial areas the PM1 fraction can be significant. Despite these facts, the health effects of PM1 have been poorly investigated, and air quality standards are based on PM10 and PM2.5 (PM with diameters below 10 μm and 2.5 μm, respectively) concentrations. Therefore, this study aimed at determining whether exposure to ambient PM1 at a level near the alert threshold for PM10 has respiratory consequences in rats.
Rats were either exposed for 6 weeks to 100 μg/m3 (alert threshold level for PM10 in Hungary) urban submicron aerosol, or were kept in room air. End-expiratory lung volume, airway resistance (Raw) and respiratory tissue mechanics were measured. Respiratory mechanics were measured under baseline conditions and following intravenous methacholine challenges to characterize the development of airway hyperresponsiveness (AH). Bronchoalveolar lavage fluid (BALF) was analyzed and lung histology was performed.
No significant differences were detected in lung volume and mechanical parameters at baseline. However, the exposed rats exhibited significantly greater MCh-induced responses in Raw, demonstrating the development of AH. The associated bronchial inflammation was evidenced by the accumulation of inflammatory cells in the BALF and by lung histology.
Our findings suggest that exposure to concentrated ambient PM1 (mass concentration at the threshold level for PM10) leads to the development of mild respiratory symptoms in healthy adult rats, which may suggest a need for the reconsideration of threshold limits for airborne PM1.
Epidemiologic studies have observed associations between short-term increases in ambient particulate matter (PM) concentrations and increases in respiratory morbidity [1]. Atmospheric aerosol is a complex mixture of gases and solid and liquid particles. The diameter of these particles (Dp) varies over five orders of magnitude (1 nm – 100 μm). It has been well established that particle size strongly determines how deep particles can penetrate into the lung compartments. Particles with diameters between 2.5 and 10 μm (usually defined as PM2.5 and PM10) deposit mainly in the upper airways and can be cleared by the mucociliary system. PM2.5 deposits in the tracheobronchial region, whereas PM1 (particles with diameters of less than 1 μm) can reach the lung periphery, i.e. the alveolar region [2]. Although PM10 dominates the urban PM mass, in industrial areas the PM1/PM10 mass ratio can exceed 0.5 [3]. Expressed in particle number (i.e. nPM1/nPM10), such a high mass ratio corresponds to at least 3 orders of magnitude. Several studies have demonstrated that low emission zones (LEZ) have a far greater positive effect on public health than one would expect from PM10 data [4]. The reason for this benefit is that LEZ are effective in decreasing the number of small particles, but not the mass concentration of any size fraction of PM. Because of this evidence, in the last decade scientific interest has shifted from PM10 and PM2.5 to PM1 [5], even though air quality standards related to PM1 are still nonexistent.
The chemical composition of PM particles and their adverse health effects vary greatly according to their emission sources. The pulmonary effects of specific, potentially harmful constituents of PM, such as iron [6], elemental carbon [7] or combustion-derived nanoparticles [8] have been investigated. However, it is questionable whether these findings can be generalized to humans exposed to PM because of the complexity of real atmospheric aerosol [9]. The few earlier studies assessing the respiratory consequences of complex atmospheric aerosols in animal models were limited to exposures to particle concentrations at least five times higher than the alert level [10–13]. Thus, the development of adverse pulmonary symptoms including bronchial inflammation and airway hyperresponsiveness could be anticipated [14]. Consequently, it is not known whether complex urban aerosols in the PM1 fraction with concentrations around the current threshold level for PM10 cause pulmonary symptoms in healthy adult individuals. Therefore, the present study aims to establish whether the prolonged (6 weeks) inhalation of urban PM1 in concentrations at the current alert PM10-related threshold level has pulmonary effects on healthy adult rats.
Ethical approval for this study (no. I-74-50/2012) was provided by the Experimental Ethics Committee of the University of Szeged, Szeged, Hungary (Chairperson Prof. Gy. Szabó) on 7 December 2012, and by the local office of the Hungarian Animal Health and Welfare Directorate (no. XIV/152/2013, Chairperson Cs. Farle) on 9 January 2013. The work was carried out in accordance with EU Directive 2010/63/EU relating to animal experiments.
Exposure to PM1
Atmospheric aerosol samples were collected continuously over a period of 5 years in the Combined Cycle Power Plant of Debrecen, the second largest city in Hungary. The filtration system of the power plant operates 5,000 h per year and extracts approximately 580,000 m3 of air per hour. Particle removal is achieved in three steps. A total of 180 coarse filters are responsible for the removal of particles above 63 μm, and the same number of glass fiber filters for the removal of particles between 63 and 1 μm. The remaining small particles are removed from the air of the turbine areas by washing with water. The filtration properties of the filters depend strongly on the actual filter loading. In the initial phase, particles are caught between the fibers, but as the filter becomes more loaded, particles deposit on top of the filter. As the particles occlude the routes within the filter, the deposition efficiency minimum shifts towards smaller particle sizes. A complete characterization of the sample can be found in a previous study, which demonstrated that 88 % of the particles collected from the coarse filters were below 63 μm [14]. In the present study we used particles collected from the glass fiber filters. To be able to collect submicron particles without any extraction, we aspirated the particles from the surface of the glass fiber filters with a special vacuum device. Additional size selection was done during the resuspension process.
The main air pollution sources at the sampling point were associated with the busy roads nearby, a residential area and the central railway station. Because of the long sampling period, the dust composition can be interpreted as typical urban PM in any Central European city [15]. In order to achieve a more physiological deposition, we opted for aerosolized particle exposure rather than intratracheal instillation. The aim of our study was to approach the ambient exposure as much as possible; hence we selected whole body exposure. The PM1 test atmosphere was created inside an exposure chamber. The total volume of the exposure chamber was 60 l, and the animal load (i.e. the total body volume of the animals) at the end of the experiment was 3.8 %. That ratio meets Silver's recommendation [16] to minimize effects on exposure concentration related to animal surface area. Re-suspension of the dust was achieved by using a PALAS RGB1000 disperser (Fig. 1) with a Type C dispersion cover (7 mm diameter powder reservoir), which uses a rotating brush to channel the particles into the dispersion airflow. The characteristics of the aerosol inside the chamber were evaluated at multiple points of the chamber before the study and continuously monitored during the 6-week-long exposure procedure. The mass concentration of the generated aerosol (ρ) was measured with a tapered element oscillating microbalance (TEOM) instrument (Series 1400a, Rupprecht and Patashnick Co. Inc., Albany, NY, USA) and the particle number size distribution (dN/dlogDp) with an optical particle counter (OPC, Model 1.109, Grimm Aerosol Technik, Ainring, Germany). Black carbon content was also continuously measured by a photoacoustic spectroscopy (PAS) based instrument (courtesy of Hilase Ltd.). The characteristics of the achieved atmosphere were controlled by the settings of the disperser (feed rate of the transportation piston holding the particle sample, speed of the rotating brush and flowrate of dispersion air) and by using an in-house developed PM1 impactor in front of the exposure chamber. The cut-off diameter of the impactor at the applied flowrate was previously modeled and bench tested. In our case the flowrate of dispersion air was set to 8.3 l/min, which satisfied both the needs of the animals and the connected instruments (the sample flows of the TEOM, OPC and PAS were 3, 1.5 and 1.5 l/min, respectively). The feed rate was 20 mm/h and the speed of the brush rotation was 600 min−1. The applied flowrate ensured more than eight total volume changes per hour. The applied maximum animal load (3.8 %) and eight volume changes per hour led to less than 3 ppm ammonia concentration in the chamber at the end of an exposure period (6 h) according to Dorato and Wolf [17]. Atmospheric pressure in the chamber was maintained through an open line (via a disposable particle filter, in order to avoid contamination of the air in the room). Relative humidity in the chamber was controlled by using zeolite.
Experimental setup for PM1 exposure. PAS: photoacoustic spectrometer, OPC: optical particle counter, TEOM: tapered element oscillation microbalance
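As a quick consistency check of the air-exchange rate stated above (our arithmetic, using only the figures reported in this section):

$$ \frac{8.3\ \mathrm{l/min} \times 60\ \mathrm{min/h}}{60\ \mathrm{l}} \approx 8.3\ \text{volume changes per hour,} $$

which is consistent with the reported "more than eight total volume changes per hour".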
Two groups of male Wistar rats were studied (weight range 350–455 g, 380 ± 36 g in the exposed group and 405 ± 27 g in the control group). The animals were maintained at a 12 h day/night cycle. The animals in the exposed group were exposed to PM1 in the exposure chamber for 6 h a day (09:00–17:00), 5 days a week, for 6 weeks (n = 6). The animals in the control group were kept in another chamber with identical dimensions. They underwent the same procedure except that they were allowed to breathe particle-free room air (n = 6). The rats in both groups had access to food and water ad libitum throughout the entire exposure period. Both groups were examined following this 6-week-long exposure.
PM1 mass concentration
To verify the stability of the target 100 μg/m3 mass concentration of PM1 inside the chamber, a TEOM was used. This instrument allows the quasi-continuous monitoring of the mass of PM accumulating on a filter mounted on an oscillating microbalance inside the measurement apparatus [18]. Changes in the frequency of oscillation, which reflect the mass of material accumulating on the filter, are detected in quasi-realtime and are converted by a microprocessor into an equivalent PM mass concentration every few seconds with a 10 min running average. The TEOM air stream was heated to 40 °C to prevent the condensation of water vapor on the collected samples and to keep the non-water semi-volatile mass loss at minimum [19].
PM1 particle number size distribution
The OPC, used for the real-time characterization of the particle number size distribution [20], detects the light scattered by individual particles passing through a laser beam. This device uses a 683 nm laser diode to illuminate the beam containing the particles, and a wide-angle collector optic is used to detect the subsequent light pulses with a photodiode. Knowing the geometry and flow parameters, the optical diameter, the size distribution and the total concentration of the particles can be calculated from the intensity of the scattered light.
Chemical composition of PM1
The elemental composition of the sample was measured with a RIGAKU Supermini WD-XRF instrument (Pd X-ray source, 50 kV excitation voltage, 40 mA anode current), based on the emission of characteristic "secondary" (or fluorescent) X-rays (XRF) from a material that has been excited by bombardment with high-energy X-rays or gamma rays. Even though XRF is one of the most reliable methods for measuring elemental composition, quantification of the carbon content is not possible. Therefore, we measured the total carbon (TC) and the black carbon (BC) content of the aerosol separately.
The total carbon (TC) content of the sample was measured with the catalytic oxidation method (Elementar Analysensysteme GmbH), which achieves total combustion of samples by heating them to 1200 °C in an oxygen-rich environment inside the TC combustion tubes filled with a platinum catalyst. The carbon dioxide generated by oxidation was detected using a nondispersive infrared sensor (NDIR).
The black carbon (BC) content of PM1 was measured real-time with a photoacoustic spectroscopy (PAS) based instrument (courtesy of Hilase Ltd.) using a 680 nm laser diode. This method is based on the formation of sound waves following light absorption in a material sample [21]. PAS is the only method that is able to detect the optical absorption of particles in their natural airborne state.
Lung volume measurements
End-expiratory lung volume (EELV) was measured in both groups by using a body plethysmograph as detailed earlier [22], following tracheostomy but preceding vessel preparations. Briefly, the trachea was occluded at end-expiration until 3 or 4 spontaneous inspiratory efforts had been generated by the animal in the closed box. The changes in tracheal pressure and plethysmograph box pressure during these maneuvers were recorded, and Boyle's law was applied to calculate EELV from the relationship between the tracheal pressure and the box pressure after correction for the box impedance [23]. To minimize the biasing effects of the different breathing frequencies during the inspiratory efforts, the box pressure data were corrected for the thermal characteristics of the plethysmograph.
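A highly simplified sketch of the Boyle's-law step is given below. It assumes that the box pressure signal has already been converted into a thoracic gas volume change (after calibration, box-impedance and thermal corrections) and that the tracheal pressure swing approximates the alveolar pressure change during occlusion; the default barometric and water vapour pressures are nominal values, not measurements from this study.

```python
def eelv_boyle(delta_volume_ml, delta_tracheal_pressure_cmh2o,
               barometric_pressure_cmh2o=1033.0, water_vapour_pressure_cmh2o=64.0):
    """End-expiratory lung volume from an occluded inspiratory effort (Boyle's law).

    delta_volume_ml: thoracic gas volume change derived from the corrected box pressure.
    delta_tracheal_pressure_cmh2o: simultaneous tracheal pressure swing, used as a
        surrogate of the alveolar pressure change.
    The defaults (~760 mmHg barometric pressure and ~47 mmHg water vapour pressure
    at 37 °C, both expressed in cmH2O) are assumed nominal values.
    """
    dry_pressure = barometric_pressure_cmh2o - water_vapour_pressure_cmh2o
    return delta_volume_ml * dry_pressure / abs(delta_tracheal_pressure_cmh2o)
```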
Measurement of airway and respiratory tissue mechanics
The input impedance of the respiratory system (Zrs) was measured by applying the forced oscillation technique in short (6 s) end-expiratory pauses interposed in the mechanical ventilation, as detailed previously [24]. Briefly, the ventilation was stopped at end-expiration and the tracheal cannula was connected to a loudspeaker-in-box system instead of the ventilator circuit, delivering a computer-generated small-amplitude (<1 cmH2O) pseudorandom signal (23 non-integer multiples between 0.5 and 20.75 Hz) through a 100 cm long, 2 mm internal diameter polyethylene tube into the tracheal cannula. Lateral pressures were measured by using two identical pressure transducers (model 33NA002D, ICSensors, Milpitas, CA, USA) at the loudspeaker end (P1) and at the tracheal end (P2) of the wave-tube. The signals P1 and P2 were low-pass filtered (5th order Butterworth, 25 Hz corner frequency), and sampled with the analogue-digital board of a microcomputer at a rate of 256 Hz. Fast Fourier transformation with 4 s time windows and 95 % overlapping was used to assess the pressure transfer functions (P1/P2) from the 6 s recordings collected during apnoea. Zrs was calculated as the load impedance of the wave-tube using Eq. 1 [25]:
$$ Z_{rs}=\frac{Z_0\,\sinh\left(\gamma L\right)}{\dfrac{P_1}{P_2}-\cosh\left(\gamma L\right)} $$
where Z0 is the characteristic impedance of the wave-tube and γ is the complex propagation wave number. These parameters were determined from the geometrical data and the material constants of the wave-tube and the air.
The input impedances of the tracheal cannula and the connections were also measured, and subtracted from each Zrs spectrum.
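A minimal sketch of how Eq. 1 and the cannula correction could be applied to the measured transfer functions; the array names are placeholders, and the frequency-dependent inputs Z0 and gamma are assumed to be available from the tube geometry and material constants:

import numpy as np

def wave_tube_load_impedance(P1_over_P2, Z0, gamma, L=1.0):
    """Eq. 1: impedance of the load terminating the wave-tube.

    P1_over_P2 : complex pressure transfer function at each oscillation frequency
    Z0, gamma  : characteristic impedance and complex propagation wave number
    L          : wave-tube length in metres (100 cm in this setup)
    """
    return Z0 * np.sinh(gamma * L) / (P1_over_P2 - np.cosh(gamma * L))

def corrected_zrs(zrs_measured, z_cannula):
    """Subtract the separately measured impedance of the cannula and connections."""
    return zrs_measured - z_cannula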
A model described by Eq. 2, containing a frequency-independent airway resistance (Raw) and inertance (Iaw) together with the tissue damping (G) and elastance (H) of a constant-phase tissue compartment [26], was fitted to the Zrs spectra by minimizing the weighted difference between the measured and modelled impedance data.
$$ Z_{rs}=R_{aw}+j\,\omega\, I_{aw}+\frac{G-j\,H}{\omega^{\alpha}} $$
where α is equal to (2/π)atan(H/G), ω is the angular frequency and j is the imaginary unit.
The tissue parameters G and H are attributed to the damping (resistive) and elastic properties of the respiratory system. Raw and Iaw represent primarily the resistance and inertance of the airways, since the contribution of the chest wall to these parameters in rats is minor [27].
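A possible implementation of the model fit is sketched below; the magnitude-based weighting and the starting values are assumptions, since the exact fitting criterion is not specified here:

import numpy as np
from scipy.optimize import least_squares

def constant_phase_model(params, omega):
    """Eq. 2: Zrs = Raw + j*omega*Iaw + (G - j*H) / omega**alpha."""
    Raw, Iaw, G, H = params
    alpha = (2.0 / np.pi) * np.arctan2(H, G)
    return Raw + 1j * omega * Iaw + (G - 1j * H) / omega**alpha

def fit_constant_phase(zrs_measured, freq_hz, start=(0.05, 0.001, 0.5, 2.0)):
    """Fit Raw, Iaw, G and H by minimising weighted real/imaginary residuals."""
    omega = 2.0 * np.pi * np.asarray(freq_hz)
    weights = 1.0 / np.abs(zrs_measured)

    def residuals(params):
        diff = (constant_phase_model(params, omega) - zrs_measured) * weights
        return np.concatenate([diff.real, diff.imag])

    return least_squares(residuals, x0=start).x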
Animal preparations
Anesthesia was induced with an intraperitoneal injection of sodium pentobarbital (45 mg/kg) in adult male Wistar rats (393.3 g; range 340–450 g). After subcutaneous administration of a local anesthetic (lidocaine, 2–4 mg/kg) to ensure adequate analgesia around the surgical wound, a polyethylene cannula (16 gauge, B. Braun Melsungen AG, Melsungen, Germany) was introduced through a tracheostomy. The rats were then placed supine on a heating pad with the tracheal tube connected to a small-animal ventilator (Model 683, Harvard Apparatus, South Natick, MA, USA) for mechanical ventilation with room air (70 breaths/min, tidal volume 7 ml/kg). A femoral vein was then cannulated (Abocath 22 G) for drug delivery, and anesthesia was maintained through this iv line by regular injections of sodium pentobarbital (12 mg/kg every 30 min). A femoral artery was also catheterized (Abocath 22 G) and attached to a pressure transducer (Model TSD104A, Biopac, Santa Barbara, CA, USA) for continuous monitoring of mean arterial pressure. Arterial blood pressure, ECG and heart rate were monitored continuously with a data acquisition system (Biopac, Santa Barbara, CA, USA). Body temperature was kept at 37 ± 0.5 °C with the heating pad. Muscle relaxation was achieved by repeated iv administration of pipecuronium (0.1 mg/kg every 30 min; Arduan, Richter-Gedeon, Budapest, Hungary).
Experimental protocol
Both groups underwent the same experimental procedure. Following tracheostomy, the animals were placed in the plethysmograph box and 3 to 4 EELV recordings were performed as detailed above. Mechanical ventilation was then maintained during the surgical preparations. After the animals had reached a steady-state condition, the volume history was standardized with a lung hyperinflation performed by occluding the expiratory port of the ventilator. Baseline (BL) respiratory mechanical properties were determined by collecting 3 to 4 reproducible Zrs data sets. To assess the development of airway hyperresponsiveness subsequent to the exposures, continuous iv infusions of methacholine (MCh) were administered at increasing doses (4, 8 and 16 μg/kg/min). A set of 3 to 4 Zrs recordings was collected 5 min after the onset of the infusion at each dose. Following the last dose, the MCh infusion was stopped and, after a 30 min recovery period, another set of Zrs data was collected as before. At the end of the protocol, bronchoalveolar lavage was performed on the left lung, as detailed below. The right lung was fixed and excised for histological analyses.
Bronchoalveolar lavage
To assess pulmonary inflammatory cell counts, bronchoalveolar lavage of the left lung was performed. Following euthanasia of the animals with an overdose of sodium pentobarbital, a mid-line thoracotomy was performed and the right bronchus was localized and clamped. Then 4 ml of pre-warmed (37 °C) normal saline was injected into the tracheal tube, the animal was re-connected to the ventilator for 1 min, and the bronchoalveolar lavage fluid (BALF) was suctioned. After suctioning, the clamp on the right bronchus was released. The samples were centrifuged onto a slide with a cytocentrifuge and, following overnight drying, were stained with haematoxylin-eosin and counted manually under a light microscope in 20 randomly selected, non-overlapping fields of view. The average number of each cell type and the average total cell count were calculated.
Lung histopathological examinations
The right lungs, which had not been lavaged previously, were used for these analyses. The lungs were filled with 4 % buffered formalin by applying a hydrostatic pressure of 20 cmH2O. The lungs and heart were then removed en bloc and placed into 4 % buffered formalin until processing.
After complete fixation, transhilar horizontal sections (perpendicular to the longitudinal axis of the lung from the hilum) were embedded in paraffin. Two 5 μm sections were prepared from each lung specimen and stained with haematoxylin-eosin.
For transmission electron microscopy, the formalin-fixed, paraffin-embedded specimens were re-embedded in plastic (Embed812, EMS, USA), and 70 nm thick sections were cut and placed on oval slot copper grids. They were analyzed with a transmission electron microscope (Philips CM10, 100 kV).
Scatter in the parameters is expressed as SE values. The Kolmogorov-Smirnov test was used to test the data for normality. Two-way repeated-measures analysis of variance (ANOVA) with the factors assessment time and group allocation was used to assess the effects of fine particles on the respiratory mechanical parameters. The Holm-Sidak multiple-comparison procedure was applied to compare the different experimental conditions (repeated measures) or groups (independent groups). Differences in EELV, baseline mechanical parameters and BALF cell counts were assessed with Student's t-test. Statistical tests were carried out with the SigmaPlot software package (version 12.5, Systat Software, Inc., CA, USA) at a significance level of p < 0.05.
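The simpler parts of this workflow can be sketched as below; the mixed-design repeated-measures ANOVA itself is not reproduced, and the input arrays are placeholders:

import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

def check_normality(values):
    """Kolmogorov-Smirnov test against a normal distribution with the sample mean and SD."""
    return stats.kstest(values, 'norm', args=(np.mean(values), np.std(values, ddof=1)))

def compare_groups(control_values, exposed_values):
    """Two-sample Student's t-test, as used for EELV, baseline mechanics and BALF counts."""
    return stats.ttest_ind(control_values, exposed_values)

def holm_sidak(p_values, alpha=0.05):
    """Holm-Sidak correction for a family of pairwise comparisons."""
    reject, p_adjusted, _, _ = multipletests(p_values, alpha=alpha, method='holm-sidak')
    return reject, p_adjusted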
The average PM1 concentration during the exposure periods was 101.7 ± 29.4 μg/m3. The particle number size distribution in the exposure chamber was unimodal. The geometric mean diameter, calculated from a Gaussian fit, was 391.2 ± 21.3 nm (Fig. 2), whereas the geometric mean diameter based on the particle mass size distribution (assuming a constant density) was 2859.8 ± 139.7 nm. Since only 4.87 % of the particles (by number concentration) had diameters larger than 1 μm, the mass size distribution can clearly be misleading when small particles dominate.
Fig. 2 Average particle number size distribution in the exposure chamber. The top panel shows measured values in the whole range; the bottom panel is a zoomed view of the gray area in the top panel. Symbols are average values with SD, and the continuous line represents the calculated Gaussian fit
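A minimal sketch of the Gaussian fit used to obtain the geometric mean diameter from the measured number size distribution; whether the fit is performed on a linear or logarithmic diameter axis is an assumption here:

import numpy as np
from scipy.optimize import curve_fit

def gaussian(d, amplitude, mean, sigma):
    return amplitude * np.exp(-0.5 * ((d - mean) / sigma) ** 2)

def fit_size_distribution(diameters_nm, number_concentration):
    """Fit a Gaussian to the size distribution and return its mean and width."""
    p0 = (number_concentration.max(), diameters_nm[np.argmax(number_concentration)], 100.0)
    (amplitude, mean, sigma), _ = curve_fit(gaussian, diameters_nm, number_concentration, p0=p0)
    return mean, sigma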
The analysis of the chemical composition of the PM1 samples revealed the predominance of carbon (TC = 33.4 % containing BC = 6.38 %). Among the remaining chemical elements, silica was present in the greatest quantity (Si = 17.6 %), followed by iron (Fe = 11.4 %), calcium (Ca = 8.46 %) and aluminum (Al = 5.12 %). Lesser, but still noticeable amounts of sulfur (S = 2.32 %) and chlorine (Cl = 1.9 %) were found. Other metals were present in the samples in trace amounts (Ti = 0.67 %, Cu = 0.14 %, Zn = 0.29 %, Pb = 0.07 %).
There was no detectable difference between the two groups in terms of body weight (p = 0.235). The baseline values of EELV and respiratory mechanical parameters are displayed in Table 1. No statistically significant difference was detected between the control and exposed groups in any of these parameters.
Table 1 Baseline values of end-expiratory lung volume (EELV) and respiratory mechanical parameters (airway resistance, Raw; tissue damping, G and tissue elastance, H)
Figure 3 depicts the effects of MCh provocation on the respiratory mechanical parameters. All parameters exhibited elevations relative to the baseline in a dose-dependent manner. However, the animals in the exposed group exhibited significantly greater responses to 8 μg/kg/min MCh in H (p = 0.011), and to 16 μg/kg/min MCh in Raw (p = 0.005) and H (p = 0.006). MCh-induced changes in G did not differ between the groups throughout the study. All parameters returned to their baseline values after the 30 min recovery period (BL2).
Fig. 3 Changes of the respiratory mechanical parameters following methacholine challenge. Raw: airway resistance, G: tissue damping, H: tissue elastance, BL: baseline, M4-8-16: methacholine doses of 4-8-16 μg/kg/min. *: p < 0.05 vs. control group
For the parameter Raw, a provocative dose (PD50-Raw), defined as the dose of MCh associated with a 50 % increase in Raw, was calculated via linear interpolation. PD50-Raw was significantly lower in the exposed group (4.299 ± 0.509 μg/kg/min vs. 5.88 ± 0.513 μg/kg/min).
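The linear interpolation behind PD50-Raw can be sketched as follows (array names are placeholders):

import numpy as np

def pd50_raw(doses, raw_values, raw_baseline):
    """Dose associated with a 50 % increase in Raw, by linear interpolation.

    doses      : MCh infusion rates (e.g. 4, 8 and 16 ug/kg/min)
    raw_values : Raw measured at each dose
    """
    percent_increase = 100.0 * (np.asarray(raw_values) - raw_baseline) / raw_baseline
    # np.interp requires increasing x values; the percent increase rises with dose here
    return np.interp(50.0, percent_increase, doses)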
Total and differential cell counts assessed from BALF are shown in Fig. 4. Samples from the exposed group exhibited elevated total cell counts and elevated numbers of macrophages, lymphocytes and basophils (p < 0.05 for all) compared with those from the control group. Phagocytized dust particles were observed in 64.9 ± 2 % of the macrophages in the exposed group. Eosinophil and neutrophil numbers showed no statistically significant differences between the groups.
Fig. 4 Average number of basophils, neutrophils, eosinophils, lymphocytes, macrophages and total cell counts per field of view in bronchoalveolar lavage fluid samples. *: p < 0.05 vs. control group
In the light microscopy samples obtained from the animals in the exposed group, free dust particles were observed on the bronchial epithelium (Fig. 5a), and phagocytized dust particles were embedded in the alveolar septa (Fig. 5b). Electron microscopy also revealed the appearance of dust particles in the alveolar macrophages in the animals in the exposed group (Fig. 5c). All these findings were absent in the lungs obtained from the rats in the control group (Fig. 5d).
Fig. 5 Light (a, b, d) and electron microscopic (c) images of the lungs. a: Section of a bronchus in a representative animal in the exposed group. Arrow indicates an aggregate of free dust particles inside the bronchial lumen. b: Section of the alveolar space in a representative animal in the exposed group. Arrows indicate macrophages with phagocytosed dust particles. c: Transmission electron microscopic section of a representative animal in the exposed group. Arrows indicate embedded dust particles. d: Alveolar section of a representative animal in the control group
This study demonstrated that a 6-week exposure to PM1 from a Central European city at a near-threshold level causes mild airway hyperresponsiveness in healthy adult rats. The exposure did not manifest in any adverse changes in the baseline values of the parameters reflecting static lung volume or airway and respiratory tissue mechanics. However, the presence of mild airway hyperresponsiveness following urban PM1 inhalation suggests the development of airway susceptibility. To our knowledge, this is the first study to address the pulmonary effects of continuous inhalation of PM1 at a near-threshold level (defined relative to PM10).
Physical properties and chemical composition of the inhaled PM1
Since the rats were exposed to PM1 under laboratory conditions, careful characterization of the generated aerosol was essential. The mass concentration and particle number size distribution were stable and fulfilled the requirements of the planned protocol (Dp < 1 μm, ρ ≈ 100 μg/m3) during the exposure periods. As Salma et al. demonstrated by model calculations, the particle diameter applied in this study (391 ± 21 nm) belongs to the most inhalable fraction of the whole size range of atmospheric aerosol [28].
Since the particles were re-suspended in particle-free ambient air, the gaseous composition of the inhaled air was identical for the exposed and the control animals. Based on the chemical composition of the generated aerosol, the main emission sources at the sampling point were identified. The ratio of BC (an indicator of traffic) to the PM1 mass concentration in this study (BC = 6.38 %) agrees with the findings of other measurements in pedestrian zones of European city centers [29]. Kertész et al. used absolute principal component analysis for source apportionment at the same sampling point as used in this study [30]. According to their results and the elements identified in this study, four sources dominate in the city center of Debrecen: soil (Al, Si, Ca, Fe, Ti), traffic (Cu, Zn, Pb), combustion of oil and coal (S), and a mixed source of power generation and chemical industry (Cl). Since all of the identified emission sources are typical of European cities, the findings of this study can be generalized.
Effects of PM1 on basal respiratory function
To characterize the functional changes in the respiratory system, static lung volume measurements were performed together with an assessment of airway and respiratory tissue mechanics by using the forced oscillation technique. This well-validated technique provides information about the flow resistance of the bronchi (Raw) with a detailed description of the respiratory tissue viscoelasticity (G and H). Parameter G reflects the dissipative (damping or resistive) properties of the respiratory tissues, while H is related to the respiratory tissue stiffness (elastance). The baseline values of the EELV [22] and the respiratory mechanical parameters [31, 32] exhibit excellent agreement with those reported previously in rats by using similar experimental methodologies.
Following a 6-week exposure to PM1 at a near-threshold level, no difference was found in the baseline properties of the respiratory system (EELV and mechanical parameters) despite the histological evidence of particles deposited in the acinar and alveolar epithelium. This finding is in concordance with previous results reporting a less than 1 % change in the resistive parameter peak expiratory flow in healthy humans following exposure to diesel exhaust [33], and the lack of change in the forced expiratory lung volumes following exposure to traffic related ambient particles in non-asthmatic subjects [34]. As a mild inflammation of the airways is not associated with a major deterioration of baseline lung function [35], the lack of significant changes in static lung volume, as well as airway and respiratory tissue mechanical parameters is consistent with earlier results, despite the presence of a mild inflammation.
Airway inflammation and responsiveness following PM1 inhalation
In the exposed rats, we observed significantly greater increases in Raw and significantly lower PD50-Raw values in response to a nonspecific cholinergic constrictor stimulus, demonstrating the development of airway hyperresponsiveness. The constriction of the central conducting airways (Raw) appears to be unlimited and highly dose-dependent, whereas the lung peripheral response (H) to a cholinergic challenge is restricted. The most plausible explanation for the latter phenomenon may be the lower density of cholinergic receptors in the lung periphery [36], resulting in their potential saturation by the agonist. The functional abnormality associated with airway hyperresponsiveness was consistent with the development of mild airway inflammation, which was evidenced by the accumulation of macrophages, lymphocytes and basophils in the BALF. We found no evidence for a statistically significant change in the neutrophil count in the exposed rats, whereas exposure to similar nanoparticles led to elevations in neutrophils in earlier studies [37, 38]. This discrepancy can be explained by the larger particle size in our study (391 nm) compared with the ultrafine particles applied previously (25 nm) [38]. Furthermore, those studies investigated the acute phase, in which innate immunity dominates the inflammatory response and elevates the neutrophil count. When the exposure is chronic, however, adaptive immunity overtakes innate immunity, leaving neutrophil counts near baseline while lymphocyte counts are elevated.
Histological analyses also confirmed the presence of particles deposited on the bronchial epithelium and of phagocytized particles in the alveolar space. Since inflammatory mediators released by these cells have been shown to contribute greatly to the development of airway hyperresponsiveness [39], this mechanism provides a plausible explanation for our functional findings. However, the possible involvement of other pathologic processes, such as elevated levels of reactive oxygen species (ROS) and/or oxidative stress, can also be anticipated [6, 40, 41].
Owing to its technical simplicity, the vast majority of previous studies applied intratracheal instillation of fine and ultrafine particles, despite its unphysiological deposition pattern [42]. The few previous studies assessing the respiratory consequences of aerosolized ambient particles demonstrated the appearance of bronchial inflammation [43] and the associated airway hyperresponsiveness [44, 45], similar to our findings. However, these former investigations applied either substantially higher concentrations (3 mg/m3) [44], allergen sensitization [43], or short-term (20 min for 7 days) exposure of neonatal subjects [45]. Our findings add the important information that mild airway symptoms may develop at near-threshold concentrations even in a young, healthy adult lung.
Methodological aspects
It must be kept in mind that young, healthy adult rats were involved in these investigations. Previous studies report an increased effect of PM in subjects with pre-existing respiratory disorders, such as humans with asthma [34, 46, 47], mice with allergen sensitization [7] or viral infections [37], and in neonatal [45, 48] and aged [38] rat populations.
An important methodological feature of this study is the use of the low-frequency forced oscillation technique to characterize the airway and respiratory tissue mechanics, because it provides the most specific information about the mechanical properties of the different lung compartments. This feature is favorable over methodologies that were applied previously following ambient aerosol exposures, and that supplied either global lung functional indices, such as spirometry [33, 34] or total lung resistance [43, 44], or only qualitative information about the change in the ventilation pattern [14, 35, 37, 43]. However, it is noteworthy that model parameters derived from Zrs data include noticeable components from the chest wall [24, 49]. This suggests that following aerosol exposures the presumably constant chest wall parameters somewhat diminish the real pulmonary changes, particularly in G and H, where the influence of the chest wall is substantial.
We examined the effects of a 6-week inhalation of PM1 from urban aerosol samples on the pulmonary system by performing basal lung function measurements, assessing changes in lung responsiveness, and carrying out histopathological analyses. The chemical composition of the generated aerosol was typical of Central European cities and contained no highly toxic compounds such as heavy metals. The mass concentration was stable during the 6-week exposure and never exceeded the current PM10-related alert threshold level by more than 10 %. Following the exposure, airway hyperresponsiveness and mild airway inflammation were detected in healthy adult rats; these findings were confirmed by forced oscillatory measurements, BALF cell counts and histopathological examinations. Former studies of larger particle sizes (PM2.5 or PM10) revealed similar respiratory consequences only at mass concentrations at least five times higher. These results suggest that particle size is a major determinant of the respiratory response, and that more effective prevention could be achieved by taking particle size into consideration when defining air quality standards.
Abbreviations
BALF: bronchoalveolar lavage fluid
BC: black carbon
Dp: diameter of particles
EELV: end-expiratory lung volume
G: tissue damping
H: tissue elastance
Iaw: airway inertance
iv: intravenous
LEZ: low emission zone
MCh: methacholine
NDIR: nondispersive infrared sensor
OPC: optical particle counter
P1: pressure at the loudspeaker end of the wave-tube
P2: pressure at the tracheal end of the wave-tube
PAS: photoacoustic spectroscopy
Raw: airway resistance
TC: total carbon
TEOM: tapered element oscillating microbalance
WD-XRF: wavelength dispersive X-ray fluorescence spectrometry
Z0: characteristic impedance of the wave-tube
Zrs: input impedance of the respiratory system
α: exponent of the constant-phase model
γ: complex propagation wave number
ρ: mass concentration
ω: angular frequency
Pope III CA. Epidemiology of fine particulate air pollution and human health: biologic mechanisms and who's at risk? Environ Health Perspect. 2000;108 Suppl 4:713–23.
U.S. EPA. Air Quality Criteria for Particulate Matter (Final Report, April 1996). U.S. Environmental Protection Agency, Washington, D.C., EPA 600/P-95/001. http://cfpub.epa.gov/ncea/risk/recordisplay.cfm?deid=2832.
Farina F, Sancini G, Longhin E, Mantecca P, Camatini M, Palestini P. Milan PM1 induces adverse effects on mice lungs and cardiovascular system. Biomed Res Int. 2013;2013:583513.
Cyrys J, Peters A, Soentgen J, Wichmann HE. Low emission zones reduce PM10 mass concentrations and diesel soot in German cities. J Air Waste Manag Assoc. 2014;64:481–7.
Amato F, Moreno T, Pandolfi M, Querol X, Alastuey A, Delgado A, et al. Concentrations, sources and geochemistry of airborne particulate matter at a major European airport. J Environ Monit. 2010;12:854–62.
Zhou YM, Zhong CY, Kennedy IM, Pinkerton KE. Pulmonary responses of acute exposure to ultrafine iron particles in healthy adult rats. Environ Toxicol. 2003;18:227–35.
Alessandrini F, Schulz H, Takenaka S, Lentner B, Karg E, Behrendt H, et al. Effects of ultrafine carbon particle inhalation on allergic inflammation of the lung. J Allergy Clin Immunol. 2006;117:824–30.
Donaldson K, Tran L, Jimenez LA, Duffin R, Newby DE, Mills N, et al. Combustion-derived nanoparticles: a review of their toxicology following inhalation exposure. Part Fibre Toxicol. 2005;2:10.
Godleski JJ, Rohr AC, Coull BA, Kang CM, Diaz EA, Koutrakis P. Toxicological evaluation of realistic emission source aerosols (TERESA): summary and conclusions. Inhal Toxicol. 2011;23 Suppl 2:95–103.
Batalha JR, Saldiva PH, Clarke RW, Coull BA, Stearns RC, Lawrence J, et al. Concentrated ambient air particles induce vasoconstriction of small pulmonary arteries in rats. Environ Health Perspect. 2002;110:1191–7.
Gurgueira SA, Lawrence J, Coull B, Murthy GG, Gonzalez-Flecha B. Rapid increases in the steady-state concentration of reactive oxygen species in the lungs and heart after particulate air pollution inhalation. Environ Health Perspect. 2002;110:749–55.
Harkema JR, Wagner JG, Kaminski NE, Morishita M, Keeler GJ, McDonald JD, Barrett EG. Effects of concentrated ambient particles and diesel engine exhaust on allergic airway disease in Brown Norway rats. Res Rep Health Eff Inst. 2009;145:5-55. http://www.ncbi.nlm.nih.gov/pubmed/20198910?dopt=Abstract.
Ito T, Suzuki T, Tamura K, Nezu T, Honda K, Kobayashi T. Examination of mRNA expression in rat hearts and lungs for analysis of effects of exposure to concentrated ambient particles on cardiovascular function. Toxicology. 2008;243:271–83.
de Brito JM, Macchione M, Yoshizaki K, Toledo-Arruda AC, Saraiva-Romanholo BM, Andrade Mde F, et al. Acute cardiopulmonary effects induced by the inhalation of concentrated ambient particles during seasonal variation in the city of Sao Paulo. J Appl Physiol. 2014;117:492–9.
Baranyai E, Tóth I, Nagy D, Posta J. The chemical and morphological analysis of Urban dust. Studia Universitatis Vasile Goldis Arad, Seria Stiintele Vietii. 2011;21:71–5.
Silver SD. Constant flow gassing chambers; principles influencing design and operation. J Lab Clin Med. 1946;31:1153–61.
Dorato MA, Wolff RK. Inhalation exposure technology, dosimetry, and regulatory issues. Toxicol Pathol. 1991;19:373–83.
Kingham S, Durand M, Aberkane T, Harrison J, Wilson JG, Epton M. Winter comparison of TEOM, MiniVol and DustTrak PM10 monitors in a woodsmoke environment. Atmos Environ. 2006;40:338–47.
Aberkane T, Harvey M, Webb M. Annual ambient air quality monitoring report 2003. U04/58. Christchurch, New Zealand: Environment Canterbury; 2004. http://ecan.govt.nz/publications/Reports/AnnualAirQualityrpt05.pdf
Heim M, Mullins BJ, Umhauer H, Kasper G. Performance evaluation of three optical particle counters with an efficient "multimodal" calibration method. J Aerosol Sci. 2008;39:1019–31.
Ajtai T, Filep A, Utry N, Schnaiter M, Linke C, Bozoki Z, et al. Inter-comparison of optical absorption coefficients of atmospheric aerosols determined by a multi-wavelength photoacoustic spectrometer and an Aethalometer under sub-urban wintry conditions. J Aerosol Sci. 2011;42:859–66.
Habre W, Janosi TZ, Fontao F, Meyers C, Albu G, Pache JC, et al. Mechanisms for lung function impairment and airway hyperresponsiveness following chronic hypoxia in rats. Am J Physiol Lung Cell Mol Physiol. 2010;298:L607–14.
Janosi TZ, Adamicza A, Zosky GR, Asztalos T, Sly PD, Hantos Z. Plethysmographic estimation of thoracic gas volume in apneic mice. J Appl Physiol. 2006;101:454–9.
Petak F, Hantos Z, Adamicza A, Asztalos T, Sly PD. Methacholine-induced bronchoconstriction in rats: effects of intravenous vs. aerosol delivery. J Appl Physiol. 1997;82:1479–87.
Franken H, Clement J, Cauberghs M, Van de Woestijne KP. Oscillating flow of a viscous compressible fluid through a rigid tube: a theoretical model. IEEE Trans Biomed Eng. 1981;28:416–20.
Hantos Z, Daroczy B, Suki B, Nagy S, Fredberg JJ. Input impedance and peripheral inhomogeneity of dog lungs. J Appl Physiol. 1992;72:168–78.
Petak F, Hall GL, Sly PD. Repeated measurements of airway and parenchymal mechanics in rats by using low-frequency oscillations. J Appl Physiol. 1998;84:1680–6.
Salma I, Füri P, Németh Z, Balásházy I, Hofmann W, Farkas Á. Lung burden and deposition distribution of inhaled atmospheric urban ultrafine particles as the first step in their health risk assessment. Atmos Environ. 2015;104:39–49.
Invernizzi G, Ruprecht A, Mazza R, De Marco C, Močnik G, Sioutas C, et al. Measurement of black carbon concentration as an indicator of air quality benefits of traffic restriction policies within the ecopass zone in Milan, Italy. Atmos Environ. 2011;45:3522–7.
Kertész Z, Dobos E, Fenyős B, Kéki R, Borbély-Kiss I. Time and size resolved elemental component study of urban aerosol in Debrecen, Hungary. X-Ray Spectrom. 2008;37:107–10.
Czovek D, Novak Z, Somlai C, Asztalos T, Tiszlavicz L, Bozoki Z, et al. Respiratory consequences of red sludge dust inhalation in rats. Toxicol Lett. 2012;209:113–20.
Fodor GH, Babik B, Czovek D, Doras C, Balogh AL, Bayat S, et al. Fluid replacement and respiratory function: comparison of whole blood with colloid and crystalloid: A randomised animal study. Eur J Anaesthesiol. 2016;33:34–41.
Xu Y, Barregard L, Nielsen J, Gudmundsson A, Wierzbicka A, Axmon A, et al. Effects of diesel exposure on lung function and inflammation biomarkers from airway and peripheral blood of healthy volunteers in a chamber study. Part Fibre Toxicol. 2013;10:60.
Sarnat JA, Golan R, Greenwald R, Raysoni AU, Kewada P, Winquist A, et al. Exposure to traffic pollution, acute inflammation and autonomic response in a panel of car commuters. Environ Res. 2014;133:66–76.
Dong CC, Yin XJ, Ma JY, Millecchia L, Wu ZX, Barger MW, et al. Effect of diesel exhaust particles on allergic reactions and airway responsiveness in ovalbumin-sensitized brown Norway rats. Toxicol Sci. 2005;88:202–12.
Mak JC, Barnes PJ. Autoradiographic visualization of muscarinic receptor subtypes in human and guinea pig lung. Am Rev Respir Dis. 1990;141:1559–68.
Lambert AL, Mangum JB, DeLorme MP, Everitt JI. Ultrafine carbon black particles enhance respiratory syncytial virus-induced airway reactivity, pulmonary inflammation, and chemokine expression. Toxicol Sci. 2003;72:339–46.
Elder AC, Gelein R, Finkelstein JN, Cox C, Oberdorster G. Pulmonary inflammatory response to inhaled ultrafine particles is modified by age, ozone exposure, and bacterial toxin. Inhal Toxicol. 2000;12 Suppl 4:227–46.
Laskin DL, Morio L, Hooper K, Li TH, Buckley B, Turpin B. Peroxides and macrophages in the toxicity of fine particulate matter in rats. Res Rep Health Eff Inst. 2003;(117):1-51; discussion 53-63. http://www.ncbi.nlm.nih.gov/pubmed/?term=Laskin+DL%2C+Morio+L%2C+Hooper+K%2C+Li+TH%2C+Buckley+B%2C+Turpin+B%3A+Peroxides+and+macrophages+in+the+toxicity+of+fine+particulate+matter+in+rats.+Res+Rep+Health.
Carosino CM, Bein KJ, Plummer LE, Castaneda AR, Zhao Y, Wexler AS, et al. Allergic airway inflammation is differentially exacerbated by daytime and nighttime ultrafine and submicron fine ambient particles: heme oxygenase-1 as an indicator of PM-mediated allergic inflammation. J Toxicol Environ Health A. 2015;78:254–66.
Lu S, Zhang W, Zhang R, Liu P, Wang Q, Shang Y, et al. Comparison of cellular toxicity caused by ambient ultrafine particles and engineered metal oxide nanoparticles. Part Fibre Toxicol. 2015;12:5.
Osier M, Oberdorster G. Intratracheal inhalation vs intratracheal instillation: differences in particle effects. Fundam Appl Toxicol. 1997;40:220–7.
Alessandrini F, Beck-Speier I, Krappmann D, Weichenmeier I, Takenaka S, Karg E, et al. Role of oxidative stress in ultrafine particle-induced exacerbation of allergic lung inflammation. Am J Respir Crit Care Med. 2009;179:984–91.
Miyabara Y, Ichinose T, Takano H, Lim HB, Sagai M. Effects of diesel exhaust on allergic airway inflammation in mice. J Allergy Clin Immunol. 1998;102:805–12.
Balakrishna S, Saravia J, Thevenot P, Ahlert T, Lominiki S, Dellinger B, et al. Environmentally persistent free radicals induce airway hyperresponsiveness in neonatal rat lungs. Part Fibre Toxicol. 2011;8:11.
Evans KA, Halterman JS, Hopke PK, Fagnano M, Rich DQ. Increased ultrafine particles and carbon monoxide concentrations are associated with asthma exacerbation among urban children. Environ Res. 2014;129:11–9.
Schaumann F, Fromke C, Dijkstra D, Alessandrini F, Windt H, Karg E, et al. Effects of ultrafine particles on the allergic inflammation in the lung of asthmatics: results of a double-blinded randomized cross-over clinical pilot study. Part Fibre Toxicol. 2014;11:39.
Chan JK, Fanucchi MV, Anderson DS, Abid AD, Wallis CD, Dickinson DA, et al. Susceptibility to inhaled flame-generated ultrafine soot in neonatal and adult rat lungs. Toxicol Sci. 2011;124:472–86.
Barnas GM, Stamenovic D, Lutchen KR. Lung and chest wall impedances in the dog in normal range of breathing: effects of pulmonary edema. J Appl Physiol. 1992;73:1040–6.
The authors thank Orsolya Ivánkovitsné Kiss for her excellent technical assistance.
The help of József Tolnai is greatly appreciated in the analysis of the plethysmographic measurements.
The authors are grateful to Prof. József Posta for providing the aerosol samples.
This research was supported by the European Union and the State of Hungary, co-financed by the European Social Fund in the framework of TÁMOP 4.2.6-14/1, TÁMOP 4.2.6-15/1-2015-0002, TÁMOP-4.2.2.D-15/1/KONV-2015-0024 and TÁMOP 4.2.4. A/2-11-1-2012-0001 "National Excellence Program". The project was supported by the NTP-EFÖ-P-15 project by the Human Capacities Grant Management Office and the Hungarian Ministry of Human Capacities.
The authors confirm that there are no known conflicts of interest associated with this publication and there has been no significant financial support for this work that could have influenced its outcome.
Department: MTA-SZTE Research Group on Photoacoustic Spectroscopy, H-6720, Szeged, Dóm tér 9, Hungary
Ágnes Filep, Zoltán Bozóki & Gábor Szabó
Department of Optics and Quantum Electronics, University of Szeged, H-6720, Szeged, Dóm tér 9, Hungary
Department of Medical Physics and Informatics, University of Szeged, H-6720, Szeged, Korányi fasor 9, Hungary
Gergely H. Fodor & Ferenc Peták
Institute for Environmental Sciences, University of Szeged, H-6720, Szeged, Dóm tér 9, Hungary
Fruzsina Kun-Szabó
Department of Pathology, University of Szeged, H-6720, Szeged, Állomás u. 2, Hungary
László Tiszlavicz & Zsolt Rázga
Department of Mineralogy, Geochemistry and Petrology, University of Szeged, H-6722, Szeged, Egyetem u. 2, Hungary
Gábor Bozsó
Ágnes Filep
Gergely H. Fodor
László Tiszlavicz
Zsolt Rázga
Zoltán Bozóki
Gábor Szabó
Ferenc Peták
Corresponding authors
Correspondence to Ágnes Filep or Gergely H. Fodor.
AF initiated and designed the studies, conducted the animal exposures, collected and processed the particle size distribution data, performed the data analysis and drafted the manuscript. GHF initiated and designed the studies, performed the respiratory measurements and BALF analysis, and drafted the manuscript. FKSZ assisted with data interpretation. FP assisted with manuscript drafting and edited the final manuscript. LT helped with the histopathological analysis, ZSR performed the TEM imaging, GB characterized the particles, and ZB and GSZ edited the final manuscript. All authors have read and approved the final manuscript.
Filep, Á., Fodor, G.H., Kun-Szabó, F. et al. Exposure to urban PM1 in rats: development of bronchial inflammation and airway hyperresponsiveness. Respir Res 17, 26 (2016). https://doi.org/10.1186/s12931-016-0332-9
Accepted: 09 February 2016
Airway hyperresponsiveness
Bronchial inflammation
Ambient aerosol
Why are there not yet any instruments dedicated to registering time dilation caused by passing gravitational waves?
Wouldn't it be interesting to augment LIGO/Virgo's capture of spatial distortion with a simultaneous capture of time dilation (both caused by the same passing gravitational wave)?
gravitational-waves time-dilation ligo
For uninformed folks like me could you add a link explaining "time dilation caused by passing gravitational wave(s)" showing that it has been predicted? Thanks!
– uhoh
A much better, and less self-assured question is, "Why isn't it necessary to adjust atomic clocks for time dilation caused by passing gravitational waves?"
– RonJohn
General relativity predicts that there are only two possible polarizations of gravitational waves, the so-called "tensor" polarizations $+$ and $\times$. It turns out you can show that the tensor polarizations actually don't lead to time dilation, making any attempted measurement of it pointless. The short answer, then, is that we don't expect to see any time dilation at all!
Now, you could argue that such an experiment would still be useful insofar as it could be used to search for alternative polarizations (the "scalar" and "vector" polarizations), which would indicate that a different theory of gravity is warranted. On the other hand, this would arguably be redundant, because there are other methods with which we can probe alternative polarizations in interferometric data, either by looking at individual sources or at the hypothesized stochastic gravitational wave background (at the frequencies LIGO is sensitive to).
An individual transient signal would need five$^{\dagger}$ appropriately aligned detectors to fully characterize contributions of alternative polarizations, but the LIGO-Virgo collaboration was able to search for evidence of scalar and vector polarizations in the signal from GW170814 (more here) and at least found that purely tensor polarizations were strongly favored over purely scalar or purely vector polarizations. KAGRA has begun observations, and LIGO-India should be completed by the middle of the decade, which will help break some of the degeneracies at work.
A search of the stochastic background wouldn't require so many detectors because the signal is not coming from any one place in the sky, so it provides another strategy with which to probe alternative polarizations. The O1 observing run turned up no evidence of backgrounds with scalar or vector polarizations; that said, there was also no evidence of any background at all, tensor polarizations included. It's also possible that pulsar timing arrays may be able to shed light on the issue if a stochastic background is detected and there is substantial evidence for tensor polarizations but not alternative polarizations (Cornish et al. 2017), making some of this moot.
$^{\dagger}$A single interferometer's response to a gravitational wave is a sum of terms corresponding to individual polarizations. In more general theories of gravity, there are up to two tensor modes, two vector modes, and two scalar modes, but the class of interferometers LIGO and Virgo belong to can only measure a particular linear combination of the scalar modes, so we deal with five degrees of freedom. Therefore, five detectors are needed to determine how each mode (or combination of modes) contributes to the signal (Chatziioannou et al. 2021).
Daddy Kropotkin
HDE 226868 ♦
Loeb and Maoz (2015) explicitly say there are time-delay components to GWs (and give an approximate amplitude for the time-time component of the metric in the case of binary SMBHs -- although without a derivation) and propose a scheme for measuring this using atomic clocks in separate orbits in space. Are they just confused?
– Peter Erwin
The two tensor polarization states do not cause time dilation when derived in a flat background spacetime, but what about if a curved background is used instead, i.e., such as Kerr, which is more relevant for LIGO applications?
– Daddy Kropotkin
@PeterErwin I don't think I'm the person to adequately answer that. I've only ever seen the Newtonian gauge in the context of first-order scalar perturbations (and I've only ever seen GWs associated with first-order tensor perturbations), and since I can't track down the source for whatever derivation they're using for their first equation, I can't speculate on what their reasoning is. I also don't know whether it makes sense to associate SMBH binaries with scalar perturbations of any order. So I'm at a loss there.
– HDE 226868 ♦
@HDE226868 I think quasinormal modes from tensor perturbations are used to study the ringdown after a compact binary merger, and are also used in extreme mass-ratio inspirals. In those cases, the background spacetime is curved, not Minkowski, and hence my question above. Also, I think they derived their Eq.(1) just by taking the leading term of the post-Newtonian approximation, which depends on the chirp mass to the 5/3
@DaddyKropotkin surely what matters is where the GWs are measured, not what produced them. The GWs detected on Earth are essentially plane waves on a flat background. Sure, close to the generating source there will be longitudinal modes and non-zero $h_{00}$ terms.
– ProfRob
The answer by @HDE 226868 addresses the current attempts by LIGO/Virgo and PTAs to detect alternative gravitational wave (GW) polarization states, which have not been detected. That answer cites this SE question, which shows that gravitational waves interpreted as tensor perturbations of flat (Minkowski) spacetime have only two non-trivial polarization states; these have no time-time components and thus do not cause time dilation. However, this does not mean that gravitational radiation cannot in general cause gravitational time dilation, since the components of the strain tensor $h_{\mu\nu}$ are not gauge-invariant quantities, so I think it might not be sufficient to just point at them and claim that there is no time dilation.
In the (mathematically rigorous) paper by Koop and Finn (2014), they characterize the GW amplitude using the Riemann curvature tensor to "provide a new, first-principles derivation of the response of modern, light-time gravitational wave detectors in terms of their interaction with spacetime curvature... Finally, the curvature-based response formula leads to a simpler calculation of light-time detector response than the corresponding calculations carried out using the metric perturbation approach." See their Eq. (3.16) for that formula.
Hence, they proved using pure differential geometry that gravitational waves can cause time dilation in a light-time detector, which provides fundamental justification for the ideas used in the paper by Loeb and Maoz (2014) about atomic clocks and gravitational waves.
The Loeb and Maoz (2014) paper outlines a proposed framework to detect the gravitational time dilation due to a gravitational wave that passes through a network of atomic clocks orbiting in space. They use the post-Newtonian approximation, specifically the leading-order mass quadrupole approximation, as seen in their Eq. (1), where the strain depends on the 5/3 power of the chirp mass, e.g. see Eq. (3.9) of Cutler and Flanagan (1994). They cite a seminal paper by A. Sesana (2013), whose Eq. (11) is equivalent to Eq. (1) of Loeb and Maoz, and Sesana even derives it for us :). In footnote 1 of Loeb and Maoz (2014), they state:
"In this paper, we adopt for pedagogical reasons a Newtonian gauge which is commonly used to describe the time-dilation ef- fect due to stationary gravity, as measured in the Pound-Rebka experiment 7. In this gauge, an oscillating perturbation in the time-time component of the metric, $h_{00}$, would trigger periodic variation in the Pound-Rebka time dilation and a mismatch be- tween the ticking rate of clocks separated apart."
Therefore, I think that Loeb and Maoz (2014) are just assuming that their Eq. (1) approximates the time-time component of the strain tensor, as a means of having a crude approximation to work with for the sake of outlining the idea of the paper, by identifying $f$ as the redshifted frequency, not the intrinsic gravitational wave frequency.
Why are there not yet any atomic clock instruments dedicated to registering time dilation caused by passing gravitational waves?
Mostly because the sensitivity of atomic clock instruments has only recently reached the precision required to make such gravitational time dilation measurements, and also because detecting gravitational waves is itself a rather recent accomplishment. As stated in the introduction of Loeb and Maoz (2014), the precision of optical lattice atomic clocks has reached $\sim 10^{-18}$, which is precisely the numerical prefactor at the front of their Eq. (1).
Yes indeed it would! But I think this would require using more sophisticated treatments of the background spacetime, which is dominated by the gravity of the solar system for LIGO/Virgo, rather than treating it as flat. Also, as @HDE 226868 points out, doing this with serious precision requires several ground-based interferometers, which will likely be reality in the future!
EDIT: This was my first answer which is not very relevant for the OP. Although pulsar timing arrays (PTAs) do not measure gravitational time dilation proper, as pointed out by HDE 226868, I'll keep it here for sake of clarity for my own progression in thinking about these questions.
The binary pulsar discovered by Hulse and Taylor in 1974 was the first binary pulsar to be found, and subsequent timing of its orbital decay provided the first observational evidence for the existence of gravitational waves. Direct detection of gravitational waves, however, did not occur until 2015, by LIGO and Virgo, via compact binary coalescences.
Anyway, a PTA is a network of known pulsars whose pulse times of arrival are correlated in a characteristic way by a passing gravitational wave. Intuitively, such a gravitational wave needs to have a very long wavelength, so a natural candidate has been the stochastic background of gravitational waves. The various correlations that exist in the networks are handled in a myriad of ways.
The NANOGrav consortium has been taking data for over a decade, and recently published this paper announcing their progress. They are on the precipice of making a detection of the stochastic background, but there are some correlations that are still being worked out.
There are other PTAs being designed/constructed so the future looks bright for this field!
Daddy Kropotkin
It's a bit misleading to say that PTAs look for time dilation; it's just appropriately correlated changes in the time of arrival (which aren't caused by time dilation), not intrinsic time dilation in the usual sense.
Ah I see this now. My bad. I wrote this answer at night half asleep - I've edited it to be more clear.
In Cartesian coordinates, the flat spacetime interval can be written in terms of invariant proper time $\tau$ as $$c^2 d\tau^2 = c^2dt^2 - dx^2 - dy^2 - dz^2\ ,$$ where $t$ is some universal time coordinate and the usual notation convention that $dt^2 = (dt)^2$ is used.
For all stationary observers, in the frame of reference for which $x, y, z$ are defined, then $dx= dy=dz=0$ and hence $d\tau = dt$ for all clocks that are stationary in that frame and the ratio of proper times is unity. This means the clock carried by the observer, which measures $\tau$, also measures $t$ and there is no time dilation between different stationary observers. Things change of course when observers start moving - that is Special Relativity.
The relevance of this, is that a gravitational wave (GW) applies a small perturbation to the metric, so the spacetime interval for a passing GW travelling along the $z$-axis is: $$c^2d\tau^2 = c^2dt^2 - (1+a_+\sin \omega t)dx^2 -2a_\times\sin(\omega t +\phi) dxdy - (1 - a_+\sin\omega t)dy^2 - dz^2\ , $$ where $\omega$ is the GW frequency, $a_+$ and $a_\times$ are the amplitudes of the tiny GW perturbations, one for each of the possible "plus" and "cross" polarisations, and $\phi$ is an arbitrary phase difference between those polarisations.
If $dx=dy=dz=0$, then you can see that it is still the case that $d\tau = dt$ and there is no time dilation between clocks at different locations.
This all assumes you are far from the source of gravitational waves, so that the waves can be considered transverse.
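As a minimal symbolic check (a sketch only, using SymPy), setting $dx=dy=dz=0$ in the perturbed interval above gives $d\tau = dt$ identically, whatever the wave amplitudes:

import sympy as sp

t, omega, a_plus, a_cross, phi, c, dt = sp.symbols('t omega a_plus a_cross phi c dt', positive=True)

# A clock at rest in these coordinates has dx = dy = dz = 0
dx = dy = dz = 0

dtau_squared = (c**2 * dt**2
                - (1 + a_plus * sp.sin(omega * t)) * dx**2
                - 2 * a_cross * sp.sin(omega * t + phi) * dx * dy
                - (1 - a_plus * sp.sin(omega * t)) * dy**2
                - dz**2) / c**2

print(sp.simplify(dtau_squared - dt**2))  # prints 0, i.e. dtau = dt for stationary clocks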
ProfRob
Repeating my comment to HDE 226868's answer: Loeb and Maoz explicitly say there are time-delay components to GWs (and give an approximate amplitude for the time-time component of the metric in the case of binary SMBHs -- although without a derivation -- and propose a scheme for measuring this using atomic clocks in separate orbits in space. Are they just wrong?
– Peter Erwin
@PeterErwin that 2015 preprint hasn't been accepted to a journal. So yes, maybe they are wrong. They have a non-zero $h_{00}$ metric component by adopting the Newtonian gauge. It may come down to what you mean by "time dilation". I am not qualified to say that L+M have got it wrong. There are Doppler shifts and other effects associated with passing GWs that are predicted by the metric above that might be confused/conflated with "time dilation".
The first two paragraphs seem completely irrelevant. They describe a flat spacetime, which isn't what we have when there is a gravitational wave. Other issues: you don't give any justification for taking $dx=dy=dz=0$; you don't give any justification for interpreting $dt/d\tau$ as a measure of time dilation, which is problematic since the coordinate $t$ doesn't automatically have any physical interpretation.
@BenCrowell flat spacetime is introduced because the gravitational waves we observe on Earth are a tiny perturbation of that. Time dilation is commonly defined in terms of the ratio of $dt$ to $d\tau$ for clocks that are located at different, but stationary spatial coordinates.
@ProfRob: flat spacetime is introduced because the gravitational waves we observe on Earth are a tiny perturbation of that. You still haven't made any logical connection with the rest of your argument. Time dilation is commonly defined in terms of the ratio of dt to dτ for clocks that are located at different, but stationary spatial coordinates. No, this is wrong. One of the hardest things for beginners to get used to when they learn general relativity is that coordinates such as t don't have any built-in special significance
Infinite Sequences and Series
Sections: 11.1 Sequences; 11.2 Series; 11.3 The Integral Test and Estimates of Sums; 11.4 The Comparison Tests; 11.5 Alternating Series; 11.6 Absolute Convergence and the Ratio and Root Tests; 11.7 Strategy for Testing Series; 11.8 Power Series; 11.9 Representations of Functions as Power Series; 11.10 Taylor and Maclaurin Series; 11.11 Applications of Taylor Polynomials
(a) What is a sequence?
(b) What does it mean to say that $ \lim_{n \to \infty} a_n = 8? $
(c) What does it mean to say that $ \lim_{n \to \infty} a_n = \infty? $
(a) What is a convergent sequence? Give two examples.
(b) What is a divergent sequence? Give two examples.
List the first five terms of the sequence.
$ a_n = \frac {2^n}{2n + 1} $
List the first five terms of the sequence.
$ a_n = \frac {n^2 - 1}{n^2 + 1} $
$ a_n = \frac {(-1)^{n-1}}{5^n} $
$ a_n = \cos \frac{n \pi}{2} $
$ a_n = \frac{1}{(n + 1)!} $
$ a_n = \frac {(-1)^nn}{n! + 1} $
$ a_1 = 1, a_{n+1} = 5a_n - 3 $
$ a_1 = 6, a_{n+1} = \frac {a_n}{n} $
$ a_1 = 2, a_{n+1} = \frac {a_n}{1 + a_n} $
$ a_1 = 2, a_2 = 1, a_{n+1} = a_n - a_{n-1} $
Find a formula for the general term $ a_n $ of the sequence, assuming that the pattern of the first few terms continues.
$ \left\{ \frac{1}{2}, \frac{1}{4}, \frac{1}{6}, \frac{1}{8}, \frac{1}{10}, \ldots \right\} $
$ \left\{ 4, -1, \frac{1}{4}, -\frac{1}{16}, \frac{1}{64}, \ldots \right\} $
$ \left\{ -3, 2, -\frac{4}{3}, \frac{8}{9}, -\frac{16}{27}, \ldots \right\} $
$ \left\{ 5, 8, 11, 14, 17, \ldots \right\} $
$ \left\{ \frac{1}{2}, -\frac{4}{3}, \frac{9}{4}, -\frac{16}{5}, \frac{25}{6}, \ldots \right\} $
$ \left\{ 1, 0, -1, 0, 1, 0, -1, 0, \ldots \right\} $
Calculate, to four decimal places, the first ten terms of the sequence and use them to plot the graph of the sequence by hand. Does the sequence appear to have a limit? If so, calculate it. If not, explain why.
$ a_n = \frac {3n}{1 + 6n} $
$ a_n = 2 + \frac {(-1)^n}{n} $
$ a_n = 1 + (- \frac {1}{2})^n $
$ a_n = 1 + \frac{10^n}{9^n} $
Determine whether the sequence converges or diverges. If it converges, find the limit.
$ a_n = \frac {3 + 5n^2}{n + n^2} $
$ a_n = \frac {3 + 5n^2}{1 + n} $
$ a_n = \frac {n^4}{n^3 - 2n} $
$ a_n = 2 + (0.86)^n $
$ a_n = 3^n 7^{-n} $
$ a_n = \frac {3 \sqrt {n}}{\sqrt {n} + 2} $
$ a_n = e^{-1/ \sqrt n} $
$ a_n = \sqrt { \frac {1 + 4n^2}{1 + n^2}} $
$ a_n = \frac {4^n}{1 + 9^n} $
$ a_n = \cos \left( \frac {n \pi}{n + 1} \right) $
$ a_n = \frac {n^2}{\sqrt {n^3 + 4n}} $
$ a_n = e^{2n/(n + 2)} $
$ a_n = \frac {(-1)^n}{2 \sqrt n} $
$ a_n = \frac {(-1)^{n + 1}n}{n + \sqrt n} $
$ \left \{ \frac {(2n - 1)!}{(2n + 1)!}\right \}$
$ \left \{ \frac {\ln n}{\ln 2n} \right \} $
$ \{ \sin n \} $
$ a_n = \frac {\tan^{-1}n}{n} $
$ \{ n^2e^{-n}\} $
$ a_n = \ln (n + 1) - \ln n $
$ a_n = \frac { \cos^2 n}{2^n} $
$ a_n = \sqrt [n]{2^{1 + 3n}} $
$ a_n = n \sin (1/n) $
$ a_n = 2^{-n} \cos n \pi $
$ a_n = \left( 1+ \frac {2}{n} \right)^n $
$ a_n = \sqrt[n]{n} $
$ a_n = \ln(2n^2 + 1) - \ln(n^2 + 1) $
$ a_n = \frac { (\ln n)^2}{n} $
$ a_n = \arctan (\ln n) $
$ a_n = n - \sqrt {n + 1} \sqrt {n + 3} $
$ \left \{ 0, 1, 0, 0, 1, 0, 0, 0, 1, . . . \right \} $
$ \left \{ \frac {1}{1}, \frac {1}{3}, \frac {1}{2}, \frac {1}{4}, \frac {1}{3}, \frac {1}{5}, \frac {1}{4}, \frac {1}{6}, . . . \right \} $
$ a_n = \frac {n!}{2^n} $
$ a_n = \frac {(-3)^n}{n!} $
Use a graph of the sequence to decide whether the sequence is convergent or divergent. If the sequence is convergent, guess the value of the limit from the graph and then prove your guess. (See the margin note on page 699 for advice on graphing sequences.)
$ a_n = (-1)^n \frac {n}{n + 1} $
$ a_n = \frac { \sin n}{n} $
$ a_n = \arctan \left( \frac {n^2}{n^2 + 4} \right) $
$ a_n = \sqrt[n]{3^n + 5^n} $
$ a_n = \frac {n^2 \cos n}{1 + n^2} $
$ a_n = \frac { 1 \cdot 3 \cdot 5 \cdot \cdot \cdot \cdot \cdot (2n - 1)}{n!} $
$ a_n = \frac {1 \cdot 3 \cdot 5 \cdot \cdot \cdot \cdot \cdot (2n - 1)}{(2n)^n} $
(a) Determine whether the sequence defined as follows is convergent or divergent:
$ a_1 = 1 $ $ a_{n + 1} = 4 - a_n $ for $ n \ge 1 $
(b) What happens if the first term is $ a_1 = 2 $ ?
If $ \$ $1000 is invested at $ 6 \% $ interest, compounded annually, then after $ n $ years the investment is worth $ a_n = 1000(1.06)^n $ dollars.
(a) Find the first five terms of the sequence $ \{ a_n\}. $
(b) Is the sequence convergent or divergent? Explain.
If you deposit $ \$ $100 at the end of every month into an account that pays $ 3 \% $ interest per year compounded monthly, the amount of interest accumulated after $ n $ months is given by the sequence
$ I_n = 100 \left( \frac {1.0025^n - 1}{0.0025} - n\right) $
(a) Find the first six terms of the sequence.
(b) How much interest will you have earned after two years?
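A short computational sketch of this sequence (the helper name is arbitrary):

def interest_sequence(n_months, monthly_rate=0.0025, deposit=100.0):
    """I_n = 100 * ((1.0025**n - 1) / 0.0025 - n), the accumulated interest after n months."""
    return [deposit * ((1 + monthly_rate) ** n - 1) / monthly_rate - deposit * n
            for n in range(1, n_months + 1)]

first_six = interest_sequence(6)              # part (a)
after_two_years = interest_sequence(24)[-1]   # part (b): two years = 24 months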
A fish farmer has 5000 catfish in his pond. The number of catfish increases by $ 8 \% $ per month and the farmer harvests 300 catfish per month.
(a) Show that the catfish population $ P_n $ after $ n $ months is given recursively by
$ P_n = 1.08 P_{n-1} - 300$
$P_0 = 5000$
(b) How many catfish are in the pond after six months?
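The recursion can be iterated directly; a minimal sketch:

def catfish_population(months, p0=5000.0, growth=1.08, harvest=300.0):
    """Iterate P_n = 1.08 * P_{n-1} - 300 starting from P_0 = 5000."""
    populations = [p0]
    for _ in range(months):
        populations.append(growth * populations[-1] - harvest)
    return populations

after_six_months = catfish_population(6)[-1]  # part (b)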
Find the first 40 terms of the sequence defined by
$ a_{n + 1} =\left\{
\begin{array}{ll}
\frac{1}{2} a_n & \text{if } a_n \text{ is an even number} \\
3a_n + 1 & \text{if } a_n \text{ is an odd number } \end{array} \right. $
and $ a_1 = 11. $ Do the same if $ a_1 = 25. $ Make a conjecture about this type of sequence.
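A short sketch for generating the terms of this piecewise-defined sequence (the function name is arbitrary):

def first_terms(a1, n_terms=40):
    """Apply a_{n+1} = a_n / 2 if a_n is even, else 3 * a_n + 1."""
    terms = [a1]
    while len(terms) < n_terms:
        a = terms[-1]
        terms.append(a // 2 if a % 2 == 0 else 3 * a + 1)
    return terms

seq_from_11 = first_terms(11)
seq_from_25 = first_terms(25)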
For what values of $ r $ is the sequence $ \left\{ nr^n \right\} $ convergent?
(a) If $ \left \{ a_n \right\} $ is convergent, show that
$ \displaystyle\lim_{n\to\infty} a_{n+1} = \displaystyle\lim_{n\to\infty} a_n $
(b) A sequence $ \left\{ a_n \right\} $ is defined by $ a_1 = 1 $ and $ a_{n + 1} = 1/(1 + a_n) $ for $ n \ge 1. $ Assuming that $ \left\{ a_n \right\} $ is convergent, find its limit.
Suppose you know that $ \left\{ a_n \right\} $ is a decreasing sequence and all its terms lie between the numbers 5 and 8. Explain why the sequence has a limit. What can you say about the value of the limit?
Determine whether the sequence is increasing, decreasing, or not monotonic. Is the sequence bounded?
$ a_n = \cos n $
$ a_n = \frac{1}{2n + 3} $
$ a_n = \frac{1 - n}{2 +n} $
$ a_n = n(-1)^n $
$ a_n = 2 + \frac{(-1)^n}{n} $
$ a_n = 3 - 2ne^{-n} $
$ a_n = n^3 - 3n + 3 $
Find the limit of the sequence
$ \left\{ \sqrt 2, \sqrt{2\sqrt2}, \sqrt{2\sqrt{2\sqrt2}}, \cdot \cdot \cdot \right\} $
A sequence $ \left\{ a_n \right\} $ is given by $ a_1 = \sqrt 2, a_{n + 1} = \sqrt {2 + a_n}. $
(a) By induction or otherwise, show that $ \left\{ a_n \right\} $ is increasing and bounded above by 3. Apply the Monotonic Sequence Theorem to show that $ \lim_{n\to\infty} a_n $ exists.
(b) Find $ \lim_{n\to\infty} a_n. $
Show that the sequence defined by
$ a_1 = 1 $
$ a_{n + 1} = 3 - \frac{1}{a_n} $
is increasing and $ a_n < 3 $ for all $ n. $ Deduce that $ \{ a_n \} $ is convergent and find its limit.
Show that the sequence defined by $ a_1 = 2 $ and
$ a_{n + 1} = \frac {1}{3 - a_n} $
satisfies $ 0 < a_n \le 2 $ and is decreasing. Deduce that the sequence is convergent and find its limit.
(a) Fibonacci posed the following problem: Suppose that rabbits live forever and that every month each pair produces a new pair which becomes productive at age 2 months. If we start with one newborn pair, how many pairs of rabbits will we have in the $ n $th month? Show that the answer is $ f_n $, where $ \{ f_n \} $ is the Fibonacci sequence defined in Example 3(c).
(b) Let $ a_n = f_{n + 1} / f_n $ and show that $ a_{n - 1} = 1 + 1/a_{n - 2}. $ Assuming that $ \{ a_n \} $ is convergent, find its limit.
(a) Let $ a_1 = a, a_2 = f(a), a_3 = f(a_2) = f( f(a)), \ldots, a_{n + 1} = f(a_n), $ where $ f $ is a continuous function. If $ \lim_{n \to\infty} a_n = L, $ show that $ f(L) = L. $
(b) Illustrate part (a) by taking $ f(x) = \cos x, a = 1, $ and estimating the value of $ L $ to five decimal places.
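A hedged numerical sketch (ours) for part (b): iterate $ f(x) = \cos x $ starting from $ a = 1 $ until successive terms agree closely, giving an estimate of the fixed point $ L $ with $ \cos L = L $.

```python
import math

a, previous = 1.0, None
while previous is None or abs(a - previous) > 1e-7:
    previous, a = a, math.cos(a)

print(round(a, 5))  # estimate of L to five decimal places
```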
(a) Use a graph to guess the value of the limit
$ \displaystyle \lim_{n \to \infty} \frac {n^5}{n!} $
(b) Use a graph of the sequence in part (a) to find the smallest values of $ N $ that correspond to $ \varepsilon = 0.1 $ and $ \varepsilon = 0.001 $ in Definition 2.
Use Definition 2 directly to prove that $ \lim_{n \to \infty} r^n = 0 $ when $ | r | < 1. $
Prove Theorem 6.
[Hint: Use either Definition 2 or the Squeeze Theorem. ]
Prove that if $ \lim_{n \to \infty} a_n = 0 $ and $ \left \{ b_n \right \} $ is bounded, then $ \lim_{n \to\infty} (a_n b_n) = 0. $
Let $ a_n = \left ( 1 + \frac {1}{n} \right)^n. $
(a) Show that if $ 0 \le a < b, $ then
$ \frac {b^{n + 1} - a^{n + 1}}{b -a } < (n + 1)b^n $
(b) Deduce that $ b^n[(n + 1)a - nb] < a^{n + 1}. $
(c) Use $ a = 1 + 1/(n + 1) $ and $ b = 1 + 1/n $ in part (b) to show that $ \left \{ a_n \right \} $ is increasing.
(d) Use $ a = 1 $ and $ b = 1 + 1/(2n) $ in part (b) to show that $ a_{2n} < 4. $
(e) Use parts (c) and (d) to show that $ a_n < 4 $ for all $ n. $
(f) Use Theorem 12 to show that $ \lim_{n \to\infty} (1 + 1/n)^n $ exists.
(The limit is $ e. $ See Equation 3.6.6.)
Let $ a $ and $ b $ be positive numbers with $ a > b. $ Let $ a_1 $ be their arithmetic mean and $ b_1 $ their geometric mean:
$ a_1 = \frac {a + b}{2} $
$ b_1 = \sqrt {ab} $
Repeat this process so that, in general,
$ a_{n + 1} = \frac {a_n + b_n}{2} $
$ b_{n + 1} = \sqrt {a_n b_n} $
(a) Use mathematical induction to show that
$ a_n > a_{n + 1} > b_{n + 1} > b_n $
(b) Deduce that both $ \{ a_n \} $ and $ \{ b_n \} $ are convergent.
(c) Show that $ \lim_{n \to\infty} a_n = \lim_{n \to \infty} b_n $. Gauss called the common value of these limits the arithmetic-geometric mean of the numbers $ a $ and $ b. $
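As an illustration (ours, with an arbitrary example pair), the iteration converges very quickly and is easy to implement:

```python
def arithmetic_geometric_mean(a, b, tol=1e-12):
    """Iterate a_(n+1) = (a_n + b_n) / 2 and b_(n+1) = sqrt(a_n * b_n) until they agree."""
    while abs(a - b) > tol:
        a, b = (a + b) / 2.0, (a * b) ** 0.5
    return a

print(arithmetic_geometric_mean(4.0, 1.0))  # arithmetic-geometric mean of 4 and 1
```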
(a) Show that if $ \lim_{n \to \infty} a_{2n} = L $ and $ \lim_{n \to\infty} a_{2n + 1} = L, $ then $ \{ a_n \} $ is convergent and $ \lim_{n \to \infty} a_n = L $.
(b) If $ a_1 = 1 $ and
$ a_{n + 1} = 1 + \frac {1}{1 + a_n} $
find the first eight terms of the sequence $ \{ a_n \} $. Then use part (a) to show that $ \lim_{n \to \infty} a_n = \sqrt{ 2 } $. This gives the continued fraction expansion
$ \sqrt{ 2 } = 1 + \frac {1}{ 2 + \frac {1}{2 + \cdots}} $
The size of an undisturbed fish population has been modeled by the formula
$ p_{n + 1} = \frac {bp_n}{a + p_n} $
where $ p_n $ is the fish population after $ n $ years and $ a $ and $ b $ are positive constants that depend on the species and its environment. Suppose that the population in year 0 is $ p_0 > 0. $
(a) Show that if $ \{ p_n \} $ is convergent, then the only possible values for its limit are 0 and $ b - a $.
(b) Show that $ p_{n + 1} < (b/a)p_n $.
(c) Use part (b) to show that if $ a > b, $ then $ \lim_{n \to \infty} p_n = 0 $; in other words, the population dies out.
(d) Now assume that $ a < b $. Show that if $ p_0 < b - a $, then $ \{ p_n \} $ is increasing and $ 0 < p_n < b - a $. Show also that if $ p_0 > b - a $, then $ \{ p_n \} $ is decreasing and $ p_n > b - a $. Deduce that if $ a < b $, then $ \lim_{n \to \infty} p_n = b - a $.
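A short numerical sketch (ours, with made-up values of $ a $, $ b $, and $ p_0 $) illustrates the behaviour described in part (d):

```python
def fish_population(years, p0, a, b):
    """Iterate p_(n+1) = b * p_n / (a + p_n)."""
    p, history = p0, [p0]
    for _ in range(years):
        p = b * p / (a + p)
        history.append(p)
    return history

# Example with a < b and p_0 < b - a: the population increases toward b - a = 60.
print([round(p, 1) for p in fish_population(30, p0=10.0, a=40.0, b=100.0)])
```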
Bayesian selection for coarse-grained models of liquid water
Julija Zavadlav, Georgios Arampatzis (ORCID: orcid.org/0000-0003-0674-6260) & Petros Koumoutsakos (ORCID: orcid.org/0000-0001-8337-2122)
Scientific Reports volume 9, Article number: 99 (2019)
The need for accurate and computationally efficient representations of water in atomistic simulations that can span biologically relevant timescales has motivated coarse-grained (CG) modeling. Despite numerous advances, CG water models rely mostly on a-priori specified assumptions. How these assumptions affect the model accuracy, efficiency, and in particular transferability has not been systematically investigated. Here we propose a data-driven comparison and selection of CG water models through a hierarchical Bayesian framework. We examine CG water models that differ in their level of coarse-graining, structure, and number of interaction sites. We find that the importance of electrostatic interactions for the physical system under consideration is a dominant criterion for model selection. Multi-site models are favored, unless the role of water in electrostatic screening is not relevant, in which case the single-site model is preferred for its computational savings. The charge distribution is found to play an important role in the accuracy of the multi-site models, while the flexibility of the bonds/angles only slightly improves the models. Furthermore, we find significant variations in the computational cost of these models. We present a data-informed rationale for the selection of CG water models and provide guidance for future water model designs.
Water, an essential constituent of life1, remains an elusive target for modeling and simulation. Effective coarse-grained (CG) models of liquid water must balance computational savings, by handling fewer degrees of freedom, while at the same time capturing its essential physical properties2,3,4,5,6. CG water models have enabled simulations reaching micrometer and microsecond scales that are relevant for processes in biophysical systems and beyond the reach of conventional atomistic molecular dynamics (MD) simulations. CG modeling entails recasting the complex and detailed atomistic model into a simpler yet accurate representation. A CG model is able to reproduce key quantities of interest (QoI) when it captures the effects of the eliminated degrees of freedom (DOFs)7,8,9. The CG process requires: (i) the identification of the system's optimal resolution. Commonly, groups of atoms are described with pseudo-atoms/interaction sites and a "mapping" function is used to determine the relation between these sites and the atomistic coordinates. For a given system, various coarse-graining levels can be employed. For example, existing CG lipid membrane models range from representations with a single anisotropic site10, to three sites per lipid, thus differentiating between the head and the tail11,12, to grouping three or four heavy atoms into beads, thus capturing varying degrees of chemical detail13,14,15,16; (ii) the specification of the associated Hamiltonian. Here, DOFs can be reduced by simplifying the form of the Hamiltonian or by neglecting specific terms in it. For instance, one can neglect the bond/angle vibrations and resort to rigid models17.
In effective CG models, the removed DOFs are insignificant for the QoI. However, to what degree a specific DOF is negligible for a given observable is hardly ever known beforehand. Thus, the majority of CG models are designed based on intuition or extrapolations from existing models. Additionally, the number of removed DOFs must be large so that the diminished accuracy compared to the atomistic (AT) models is justified by the substantial computational gains. It is usually assumed that increasing the level of coarse-graining will decrease the model's accuracy. However, and this is perhaps a key issue, the relation between the number of DOFs employed in a model and its accuracy may not be a monotonic function7,18,19. Thus, one can end up, without realizing it, in a worst-case scenario where the constructed model is redundant, i.e., better accuracy can be achieved with fewer DOFs (a computationally less demanding model). For example, the two-site and four-site models of the n-hexane molecule perform reasonably well, while a very similar three-site model fails19. A number of works have addressed the systematic selection of CG models in bio-molecular systems20,21,22,23,24,25.
The challenge of striking the optimal balance between accuracy and computational cost is crucial for CG models of water. At the same time, obtaining water-water interactions consumes the majority of the computational effort. Thus, many CG models of water were developed. These models differ in the coarse-graining resolution level, i.e., the mapping, which ranges from 1 to 11 water molecules per CG bead2,26. Models also differ in the employed Hamiltonian. For CG models where one bead represents one water molecule (1-to-1 mapping), the Hamiltonian is either derived from the atomistic simulations9,27 or parametrized based on analytic potentials ranging from a simple Lennard-Jones (LJ) to potentials incorporating tetrahedral ordering, dipole moment, and orientation-dependent hydrogen bonding interactions28,29,30,31. On a higher coarse-graining level, it was soon realized that chargeless models, such as the standard MARTINI model32,33, introduce unphysical features when applied to interfaces, such as an interface between water and a lipid membrane34,35,36. Thus, new CG models were developed which treat the electrostatics explicitly. In the PCGS model (3-to-1)37, the CG beads carry induced dipoles, in the polarizable MARTINI model (4-to-3)34 the electrostatic is modeled analog to the Drude oscillator, in the BMW model (4-to-3)35 the CG representation resembles a rigid water molecule with a fixed dipole and quadrupole moment, while the GROMOS CG model (5-to-2)38 introduces explicit charges with a fluctuating dipole. Note that in these models the extra interaction sites have no relation to the physical system making the intuitive construction of the model even more difficult.
Thus far, studies reporting the effects of the choices made in the coarse-graining level and model structure are relatively few. For water, the mapping was investigated by Hadley et al.39, where the investigated CG models were single-site models and the Hamiltonian was parameterized to reproduce the structural properties of water. The mapping 4-to-1 was found to give the optimal balance between efficiency and accuracy. However, by comparing the properties of the available water models it is hard to extract any physics as the models were developed to reproduce different properties. Furthermore, one should avoid artificially constructed scoring functions that could be biased but rather perform model selection based on rigorous mathematical foundation. In this respect, the Bayesian statistical framework can serve as a powerful tool which has become a popular technique to refine, guide and critically assess the MD models40,41,42,43,44,45.
In this work, we employ the Bayesian statistical framework to critically assess many CG water models (see Fig. 1). We investigate the biologically relevant CG resolution levels, i.e., mappings, where the number of grouped water molecules ranges from 1 to 6. At each resolution, multiple model structures are examined ranging from 1-site to 3-site models where we additionally investigate the rigid and flexible versions of the 2 and 3-site models for mapping M = 4. Our main objective is to determine the model evidence for all models and thus elucidate the impact of the mapping on the model's performance and the relevant DOFs in CG modeling of water. Furthermore, we evaluate the speed-up for each developed model which allows us to assess efficiency-accuracy trade-off. Lastly, we investigate the transferability of the water models to different thermodynamics states, i.e., to different temperatures. To this end, we employ the hierarchical Bayesian framework46 that can accurately quantify the uncertainty in the parameter space for multiple QoI, i.e., different properties or the same property at different conditions.
Schematic representation of the investigated models with rigid geometry. We consider several levels of coarse-graining and model structures. The number of grouped water molecules shown ranges from 3 (1 for the 1S model) to 6. The 4 model structures, i.e., the 1S, 2S, 3S, and 3S* are explained in the text. The spheres are color-coded according to the model's evidence rank (pink, green, yellow, red, dark red denote high to low model evidences, respectively).
We investigate a set of CG water models (partially shown in Fig. 1). For all models, we employ the interactions that are implemented in the standard MD packages. In the 1S model, a water cluster is modeled with a single chargeless spherical particle employing the LJ potential ULJ (rij) = 4ε[(σ/rij)12 − (σ/rij)6] between particles i and j. The model parameters are \({\phi }_{1S}=(\sigma ,\epsilon )\). The 2S model is a two-site model, where the sites are oppositely charged (±q) and constrained to a distance r0. The negatively charged (blue in Fig. 2) site interacts additionally with the LJ potential. The model parameters are \({\phi }_{2S}=(\sigma ,\epsilon ,q,{r}_{0})\). In order to satisfy the net neutrality of the water cluster, the three-site model can be constructed in two ways, which we denote as 3S and 3S* models. The 3S model resembles a big water molecule where all three particles are charged. The central (blue) site has a charge of −q, and the other two sites have a charge of +q/2. In the 3S* model, the central site is chargeless and the other two carry a ±q charge. Both three-site models have the parameters \({\phi }_{3S\mathrm{,3}S\ast }=(\sigma ,\epsilon ,q,{r}_{0},{\vartheta }_{0})\). For all rigid model structures, we consider four levels of resolution with the number of grouped water molecules equal to 3, 4, 5, and 6. For the 1S model, we additionally consider the M = 1 mapping, while for the models with partial charges we investigate also M = 12. The level of resolution fixes the total mass of the CG representation. The mass ratio between the interaction sites in the two and three-sited models is fixed to 2 with the central particle carrying the larger mass. The electrostatic is in all cases modeled with the Coulomb's interaction Ue (rij) = qiqj/(4πεε0rij), where we set the global dielectric screening to ε = 2.5. For M = 4, we consider also the flexible analogs of the models with charges. In the 2SF model, the two sites are interacting with a harmonic potential Ub (rij) = kb (rij − r0)2 with force constant kb. Therefore, the model parameters are \({\phi }_{2SF}=(\sigma ,\epsilon ,q,{r}_{0},{k}_{b})\). For the flexible three-site models 3SF and 3SF*, the angle is unconstrained and modeled with the harmonic angle potential Ua (ϑij) = ka (ϑij − ϑ0)2 thus adding the force constant ka parameter to the parameter set, i.e., the model parameters are \({\phi }_{3SF\mathrm{,3}SF\ast }=(\sigma ,\epsilon ,q,{r}_{0},{\vartheta }_{0},{k}_{a})\).
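For illustration only (our sketch, not the authors' code), the interaction terms listed above translate directly into simple functions; the parameter names follow the text, and the SI vacuum permittivity in the Coulomb term is our assumption since units are not specified here.

```python
import math

def lennard_jones(r, sigma, epsilon):
    """U_LJ(r) = 4 * eps * [(sigma / r)**12 - (sigma / r)**6]."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

def coulomb(r, qi, qj, eps_r=2.5, eps0=8.8541878128e-12):
    """U_e(r) = qi * qj / (4 * pi * eps_r * eps0 * r), with global screening eps_r = 2.5."""
    return qi * qj / (4.0 * math.pi * eps_r * eps0 * r)

def harmonic_bond(r, r0, kb):
    """U_b(r) = kb * (r - r0)**2, used in the flexible 2SF model."""
    return kb * (r - r0) ** 2

def harmonic_angle(theta, theta0, ka):
    """U_a(theta) = ka * (theta - theta0)**2, used in the flexible 3-site models."""
    return ka * (theta - theta0) ** 2
```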
Model structures: 1S, 2S, 3S, and 3S* from left to right. The black interaction sites interact via the LJ potential, whereas the yellow and blue sites interact only with the electrostatic interaction. In the 2S, 3S models, the black site carries a negative charge and the yellow site carries a positive charge. In the 3S* model, the black interaction site is charge neutral while the yellow and blue sites carry an opposite charge. The models in the bottom line are the flexible versions (denoted with "F") of the top rigid models. The rigidity or flexibility of the bonds/angles is depicted with the straight and zigzag lines, respectively.
We remark that the data used as target QoI is part of the modeling choice. In this work, we use experimental data of density, dielectric constant, surface tension, isothermal compressibility, and shear viscosity, i.e., mostly thermodynamic properties. These are deemed of key importance for biophysical systems. The data used and the properties of the reference coarse-grained water models are reported in Table 1. The structural properties, e.g. radial distribution function or the dynamical properties, e.g. diffusion constant were not considered in this work because these properties cannot be measured experimentally for M > 1.
Table 1 The first three columns show the experimental data56,57 at different temperatures for density ρ, dielectric constant ε, surface tension γ, isothermal compressibility κ, and shear viscosity η used as QoI.
Bayesian Framework
We consider a computational model $\mathscr{C}$ that depends on a set of parameters $\phi_c \in \mathbb{R}^{N_\phi}$ and a set of input variables or conditions $\boldsymbol{x} \in \mathbb{R}^{N_x}$. In the context of the current work, the computational model is the molecular dynamics solver, the model parameters correspond to the parameters of the potential, and the input variables to the temperature of the system. Moreover, we consider an observable function $F(\boldsymbol{x}; \phi_c) \in \mathbb{R}^{N}$ that represents the output of the computational model. Here, the observable function is an equilibrium property of the system, e.g., the density. We are interested in inferring the parameters $\phi_c$ based on a set of experimental data $\boldsymbol{d} = \{d_i \mid i = 1, \ldots, N\}$ that correspond to the fixed input parameters of the model $\boldsymbol{x}$.
In the frequentist statistics framework, the parameters of the model are obtained by optimizing a distance of the model from the data, usually the likelihood function. In the Bayesian framework, the parameters follow a conditional distribution which is given by Bayes' theorem,
$$p(\phi \mid \boldsymbol{d}, \mathcal{M}) = \frac{p(\boldsymbol{d} \mid \phi, \mathcal{M})\, p(\phi \mid \mathcal{M})}{p(\boldsymbol{d} \mid \mathcal{M})},$$
where \(p({\boldsymbol{d}}|\phi , {\mathcal M} )\) is the likelihood function, \(p(\phi | {\mathcal M} )\) is the prior probability distribution and \(p({\boldsymbol{d}}| {\mathcal M} )\) is the model evidence. Here, \(\phi \) is the vector containing the computational model parameters \({\phi }_{c}\) and any other parameters needed for the definition of the likelihood or the prior density. \( {\mathcal M} \) stands for the model under consideration and contains all the information that describes the computational and the statistical model.
The likelihood function is a measure of how likely it is that the data $\boldsymbol{d}$ are produced by the computational model $\mathscr{C}$. Here, we make the assumption that the datum $d_i$ is a sample from the generative model
$$y_i = F_i(\boldsymbol{x}; \phi_c) + \sigma_n d_i \varepsilon, \qquad \varepsilon \sim \mathcal{N}(0, 1).$$
Namely, the $y_i$ are independent random variables, normally distributed with mean equal to the observable of the model and standard deviation proportional to the data. We choose this error model because the set of experimental data $\boldsymbol{d}$ contains elements of different orders of magnitude, e.g., density is of order 1 and surface tension of order 100. With this model, the error allowed by the statistical model becomes proportional to the value of the data we want to fit47. The likelihood of the data $p(\boldsymbol{d} \mid \phi)$ has the form,
$$p(\boldsymbol{d} \mid \phi) = \mathcal{N}(\boldsymbol{d} \mid F(\boldsymbol{x}, \phi_c), \Sigma), \qquad \Sigma = \sigma_n^2\, \mathrm{diag}(\boldsymbol{d}^2),$$
where $\phi = (\phi_c^{\mathrm{T}}, \sigma_n)^{\mathrm{T}}$ is the parameter vector that contains the model and the error parameters.
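A minimal sketch (ours, not the authors' implementation) of this proportional-error Gaussian log-likelihood; `model_output` is a placeholder for the vector $F(\boldsymbol{x}, \phi_c)$ produced by the MD model.

```python
import numpy as np

def log_likelihood(data, model_output, sigma_n):
    """Gaussian log-likelihood with standard deviation sigma_n * |d_i| for each datum."""
    data = np.asarray(data, dtype=float)
    model_output = np.asarray(model_output, dtype=float)
    std = sigma_n * np.abs(data)
    residual = (data - model_output) / std
    return -0.5 * np.sum(residual ** 2 + np.log(2.0 * np.pi * std ** 2))
```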
The denominator of Eq. (1) is defined as the integral of the numerator and is called the model evidence. This quantity can be used for model selection48, as discussed in the next section. Finally, the prior probability encodes all the available information on the parameters prior to observing any data. If no prior information is known for the parameters, a non-informative distribution can be used, e.g., a uniform distribution. In this work we use uniform priors; see SI for detailed information.
Assuming we have \({N}_{ {\mathcal M} }\) models \({ {\mathcal M} }_{i},\,\,i=1,\ldots ,{N}_{ {\mathcal M} }\) that describe different computational and statistical models, we wish to choose the model that best fits the data. In Bayesian statistics, this is translated into choosing the model with the highest posterior probability,
$$p(\mathcal{M}_i \mid \boldsymbol{d}) = \frac{p(\boldsymbol{d} \mid \mathcal{M}_i)\, p(\mathcal{M}_i)}{p(\boldsymbol{d})},$$
where \(p({ {\mathcal M} }_{i})\) encodes any prior preference to the model \({ {\mathcal M} }_{i}\). Assuming all models have equal prior probabilities, the posterior probability of the model depends only on the likelihood of the data. Taking the logarithm of the likelihood and using Eq. (1) we can write
$$\begin{aligned}\ln p(\boldsymbol{d} \mid \mathcal{M}_i) &= \int \ln p(\boldsymbol{d} \mid \mathcal{M}_i)\, p(\phi \mid \boldsymbol{d}, \mathcal{M}_i)\, \mathrm{d}\phi \\ &= \int \ln \frac{p(\boldsymbol{d} \mid \phi, \mathcal{M}_i)\, p(\phi \mid \mathcal{M}_i)}{p(\phi \mid \boldsymbol{d}, \mathcal{M}_i)}\, p(\phi \mid \boldsymbol{d}, \mathcal{M}_i)\, \mathrm{d}\phi \\ &= \mathbb{E}[\ln p(\boldsymbol{d} \mid \phi, \mathcal{M}_i)] - \mathbb{E}\left[\ln \frac{p(\phi \mid \boldsymbol{d}, \mathcal{M}_i)}{p(\phi \mid \mathcal{M}_i)}\right],\end{aligned}$$
where the expectation is taken with respect to the posterior probability $p(\phi \mid \boldsymbol{d}, \mathcal{M}_i)$. The first term is the expected fit of the data under the posterior probability of the parameters and is a measure of how well the model fits the data. The second term is the Kullback-Leibler (KL) divergence, or relative entropy, of the posterior from the prior distribution and is a measure of the information gain from the data $\boldsymbol{d}$ under the model $\mathcal{M}_i$. The KL divergence can be seen as a measure of the distance between two probability distributions.
If one considered only the first term of Eq. (5) for model selection, then the model that fits the data best would be selected. However, such an approach is prone to overfitting, i.e., choosing a too complex model, which reduces the predictive capabilities of the model. The second term serves as a penalization term. Models with posterior distributions that differ a lot from the prior, i.e., models that extract a lot of information from the data, are penalized more. Thus, model evidence can be seen as an implementation of Ockham's razor, which states that simple models (in terms of the number of parameters) that reasonably fit the data should be preferred over more complex models that provide only slight improvements to the fit. For a detailed discussion on model selection and estimators of the model evidence, we refer to refs49,50.
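To make the evidence concrete, here is a small self-contained sketch (ours, with toy numbers) that computes the evidence of a one-parameter model by direct quadrature; the actual study relies on sampling-based estimators instead.

```python
import numpy as np

def log_evidence_1d(data, model, phi_grid, log_prior, sigma_n=0.05):
    """log p(d) = log of the integral of p(d | phi) p(phi) dphi over a uniform 1-D grid."""
    data = np.asarray(data, dtype=float)
    std = sigma_n * np.abs(data)
    log_terms = np.empty(len(phi_grid))
    for k, phi in enumerate(phi_grid):
        pred = np.asarray(model(phi), dtype=float)
        loglik = -0.5 * np.sum(((data - pred) / std) ** 2 + np.log(2.0 * np.pi * std ** 2))
        log_terms[k] = loglik + log_prior(phi)
    dphi = phi_grid[1] - phi_grid[0]
    m = log_terms.max()
    return m + np.log(np.sum(np.exp(log_terms - m)) * dphi)

# Toy example: two data points generated near phi = 2 for the model F(phi) = (phi, phi**2).
grid = np.linspace(0.0, 4.0, 401)
uniform_log_prior = lambda phi: -np.log(4.0)  # uniform prior on [0, 4]
print(log_evidence_1d([2.1, 3.9], lambda phi: [phi, phi ** 2], grid, uniform_log_prior))
```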
Hierarchical Bayesian Framework
We consider data structured as: \(\overrightarrow{{\boldsymbol{d}}}=\{{{\boldsymbol{d}}}_{1},\ldots ,{{\boldsymbol{d}}}_{{N}_{d}}\}\) where \({{\boldsymbol{d}}}_{i}\in {{\mathbb{R}}}^{{N}_{i}}\) corresponds to the conditions xi. For example, xi may correspond to different thermodynamic conditions under which the experimental data di are produced.
The classical Bayesian method for inferring the parameters of the computational model is to group all the data and estimate the probability \(p(\phi |\overrightarrow{{\boldsymbol{d}}})\) (see Fig. 3 left). However, this approach may not be suitable when the uncertainty on \(\phi \) is large due to the fact that different parameters may be suitable for different data sets. On the opposite side, individual parameters \({\phi }_{i}\) can be inferred using only the data set di (see Fig. 3 middle). This approach preserves the individual information but any information that may be contained in other data sets is lost.
Grouped data (left), non-hierarchical (middle) and hierarchical (right) parameter representation. In the hierarchical graph, each data set di is represented with different parameters \({\phi }_{i}\) and the parameters are connected through a hyper-parameters ψ.
Finally, a balance between retaining individual information and sharing information between different data sets can be achieved with the hierarchical Bayesian framework. In this approach, the independent models corresponding to different conditions are connected using a hyper-parameter vector ψ (see Fig. 3 right). The benefit of this approach is twofold: (i) better informed individual probabilities $p(\phi_i \mid \overrightarrow{\boldsymbol{d}})$; and (ii) a data-informed prior $p(\psi \mid \boldsymbol{d})$ is available in case new parameters $\phi^{new}$ that correspond to unobserved data need to be inferred. A detailed description of the sampling algorithm of this approach is given in the Supporting Information (SI).
Impact of mapping
First, we examine the impact of the level of resolution on the model accuracy using density, dielectric constant, surface tension, isothermal compressibility, and shear viscosity experimental data (see SI). In Fig. 4 the model accuracy, as measured by the model evidence, is shown as a function of the mapping M, which denotes the number of water molecules represented by a given CG model. It is usually assumed that a model's performance decreases with decreasing resolution. Indeed, for the 1S model, we observe precisely this trend. For the charged models, the evidence is still overall monotonically decreasing with M; however, compared to the 1S model, the dependence of the evidence on M is much less drastic. To investigate this dependency further, we perform the UQ inference also for the charged models at M = 12. The observed evidences are comparable to the evidence of the 1S model at M = 4. Thus, with the models that incorporate partial charges, one can resort to models with higher mappings. According to the UQ, the best model for M = 1, 3 is the 1S model, whereas for M > 3 the charged models are superior. However, one should keep in mind that the chargeless and charged models are not directly comparable, as the chargeless models cannot provide the same amount of information, e.g., the dielectric constant is not defined. Comparing the evidences of the 2S, 3S, and 3S* models, we see that the three models rank very closely, with the 3S* model being somewhat better than the other two.
Model evidences for the explored rigid models of liquid water: 1S (+), 2S (×), 3S (⊡), and 3S* (■). For each model, we consider different mappings M ranging from 1 to 12, where, for example, M = 4 means that a CG entity represents 4 water molecules.
We emphasize that the model evidence encompasses much more than a mere evaluation of the model's properties at the best parameters. Nonetheless, it is insightful to examine the target QoI and their dependency on the mapping. Figure 5 shows the density ρ, dielectric constant ε, surface tension γ, isothermal compressibility κ, and shear viscosity η for rigid models and mappings 1 to 6. The target QoI are obtained using the maximum a posteriori (MAP) parameters and evaluated as a mean of 5 independent simulation runs with different initial conditions. Note that ε is not defined for the 1S model and it is excluded from the target QoI in the second UQ inference of the charged models. We observe that the ρ and ε are within 10% of the experimental data for all mappings. On the contrary, the γ, κ, and η depend very strongly on the mapping. The general trend is similar for all models, i.e., as we increase the mapping the γ is decreasing, κ is increasing, and η is decreasing. This observation agrees with the general picture of coarse-graining. The more we increase the level of coarse-graining the softer are the interactions between the CG beads which correlates with increased κ and decreased γ and η. We observe that for some models there are no parameters σ and ε of the LJ potential that would fit well a certain target QoI (within the liquid state), in particular, the γ and η. A possible solution would be to replace the LJ non-bonded interaction with another interaction, e.g, the Born-Mayer-Huggins interaction that is used in the BMW model35. The 1S model with M = 4 can be directly compared with the MARTINI model as the models are equal but were developed with different target QoI. With our model, we observe very similar properties as reported for the MARTINI model. Additionally, the inferred parameters with the MAP estimates are also very close to those of the MARTINI (see SI).
Target QoI: density ρ, dielectric constant ε, surface tension γ, isothermal compressibility κ, and shear viscosity η for the rigid water models 1S (green), 2S (red), 3S (blue), and 3S* (dashed blue line) at different mappings M. The error bars denote the standard deviation of 5 independent simulation runs with different initial conditions. The properties of the models are compared to the reported properties of the existing water models MARTINI [32] (■), GROMOS [38] (●), BMW35 (▲), and polarizable MARTINI34 (▼). The experimental data is shown with the horizontal dashed lines.
Rigid vs. flexible models
For the mapping M = 4, we examine also the three flexible models 2SF, 3SF, and 3S*F. The resulting evidences for these models are listed in Table 2 along with the model evidences for the rigid counterparts. The physical motivation behind the flexible models is that they encompass the fluctuations in the dipole moment of the water cluster. In the two-site model, we incorporate them via bond vibrations, whereas in the three-site models we do so with the angle fluctuations. Thus, the flexible models have 1 extra DOF compared to the rigid counterparts. However, as can be seen in Table 2, for the three-site models the flexible versions perform worse than the rigid ones. For the two-site model, the flexible model is only slightly better than the rigid model. Nonetheless, as flexible models usually demand smaller integration timesteps and consequently have a higher computational cost, the improvement in the model's performance should be more substantial to justify the extra computational resources.
Table 2 Model evidences for models with mapping M = 4.
Accuracy vs. efficiency
We examine the accuracy vs. efficiency trade-off in Fig. 6, where we plot the evidence as a function of the speedup compared to the all-atom simulation. As a test simulation, we choose the NVE ensemble simulation at ambient conditions, a cubic domain with an edge of 5 nm, and a simulation length of 10 ns. We also employ the maximal integration timestep still permitted by the model (see SI). The runtime varies extensively among the considered models. The computational cost depends on two factors: (i) the number of particles, which in turn depends on the employed mapping and the number of interaction sites of the model; (ii) the integration timestep, which increases with increased coarse-graining since the interactions soften. For a given mapping, we observe the smallest computational cost for the 1S model, followed by the 2S and 3S* models, while the 3S model has the highest computational cost. The difference between the 3S and 3S* models is due to the smaller timesteps required by the 3S model.
Model evidences with respect to the speedup of the examined water models marked with the name and the mapping. The speedup factor is the ratio between the atomistic (TIP4P model) and CG model runtime of 10 ns, 125 nm3 NVT simulation at ambient conditions and maximal integration time step (for CG models see SI, for TIP4P 2 fs). The boxed section in plot (a) is enlarged in plot (b). Expected utility is shown in (c) for rigid models: 1S (+), 2S (×), 3S (⊡), and 3S* (■) as a function of mapping.
The trade-off between the accuracy and efficiency can be formally addressed as a decision problem, where the expected utility51 \({\mathscr{U}}({ {\mathcal M} }_{i};{\boldsymbol{d}})\) of an individual model \({ {\mathcal M} }_{i}\) given data d is given by:
$$\mathscr{U}(\mathcal{M}_i; \boldsymbol{d}) = p(\mathcal{M}_i \mid \boldsymbol{d})\, u(\mathcal{M}_i).$$
We define the utility function $u(\mathcal{M}_i)$ as the decimal logarithm of the computational speedup over the atomistic model. As shown in Fig. 6c, the model with the maximal expected utility is the 1S model with M = 1. Among the models incorporating partial charges, the 3S model is the least favorable, while the 2S and 3S* models are comparable in terms of their expected utility. In turn, the appropriate choices for the 2S and 3S* models are mappings M = 5 and 3, respectively.
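A minimal sketch (ours) of this decision rule; the log-evidence and speedup numbers below are placeholders, not values from the paper.

```python
import numpy as np

def expected_utility(log_evidences, speedups):
    """U(M_i; d) = p(M_i | d) * log10(speedup_i), assuming equal model priors."""
    log_ev = np.asarray(log_evidences, dtype=float)
    posteriors = np.exp(log_ev - log_ev.max())
    posteriors /= posteriors.sum()
    return posteriors * np.log10(np.asarray(speedups, dtype=float))

# Placeholder numbers for three hypothetical models.
print(expected_utility(log_evidences=[-10.0, -11.5, -12.0], speedups=[50.0, 120.0, 200.0]))
```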
Transferability to non-ambient TD conditions
One of the challenges of coarse-graining is the transferability of CG models. Typically, CG models are more sensitive to variations in the thermodynamic conditions than the atomistic models. Furthermore, the more we increase the level of coarse-graining, the more restricted the model is to the thermodynamics state at which it is parametrized. One way of making the model more robust to transferability is to parametrize it for different conditions. Within the Bayesian formalism, the hierarchical UQ allows us to merge multiple QoI. We test the transferability of three models 2SF, 3S, and 3S* for mapping M = 4. In Fig. 7, we plot the model evidences for the hierarchical UQ, where the temperatures T = 283, 298, 323 K are merged and the evidences for the classical UQ at each temperature. We observe that the 3S* model is the most transferable, having the highest hierarchical evidence. For the three-site models, we also observe that it is easier for the CG model to fit higher temperatures.
Logarithm of the model evidences at different temperatures T for models 2SF (×), 3S (⊡), and 3S* (■). The inset shows the model evidences of the hierarchical UQ approach, where all three temperatures are considered concurrently.
Summary and Discussion
We propose a data driven, Bayesian framework for the selection of CG water models. We re-examine the CG modeling approach where the mapping and model structure are based on rather ad-hoc assumptions and the system Hamiltonian is derived either by fitting its parameters to relevant experimental data or by deriving the effective interactions from the more detailed, e.g. atomistic simulations. Such a-priori assumptions predefine the accuracy of the model no matter what approach one employs to obtain the Hamiltonian. In this work, we propose a methodology that broadens the investigated space of all possible CG models of liquid water. The Bayesian framework is not constrained to a specific model design but considers many different mappings and model structures. Our model search space encompasses the 1, 2, and 3-site models with either rigid or flexible geometry. We find that for the 1S model one should consider mappings M < 3, while for the multiple-site models higher M are more appropriate due to the higher computational cost compared to the single sited models. When choosing between single and multiple-site models one should mainly consider whether the local electrostatics screening is essential for the problem at hand. We observed no significant improvement of models when going from rigid to flexible models, thus implying that one should use rigid geometries for efficiency reasons. The distribution of charge in the three-site models, however, plays an important role as the 3S* model outperforms the 3S model and is additionally much cheaper computationally due to the higher maximal integration time step. Additionally, the 3S* is also the best model in regard to the transferability to non-ambient temperatures.
The methodology presented in this work can be extended to investigate the CG model design of other important chemical and biological systems such as bio-molecules. We emphasize that the data used for the calibration of the models is considered an inherent aspect of the modeling process in a Bayesian framework. The adoption of the Bayesian framework in studies of CG models could quantify the appropriateness of the model designs employed in established CG force fields52,53 according to target QoIs. The computational limitations associated with Bayesian inference are today largely overcome thanks to the availability of massively parallel computer architectures, while a wealth of data is produced by advanced experimental procedures and detailed simulations. This combination enables this 400-year-old method54,55 to become a potent alternative for challenging modeling and simulation problems of our times.
Alberts, B. et al. Essential Cell Biology (Garland New York, 1997).
Noid, W. G. Perspective: Coarse-grained models for biomolecular systems. J. Chem. Phys. 139, 090901 (2013).
Shearer, J. & Khalid, S. Communication between the leaflets of asymmetric membranes revealed from coarse-grain molecular dynamics simulations. Sci. Rep. 8, 1805 (2018).
Buslaev, P. & Gushchin, I. Effects of coarse graining and saturation of hydrocarbon chains on structure and dynamics of simulated lipid molecules. Sci. Rep. 7, 11476 (2017).
Bell, D. R., Cheng, S. Y., Salazar, H. & Ren, P. Capturing rna folding free energy with coarse-grained molecular dynamics simulations. Sci. Rep. 7, 45812 (2017).
Fajardo, O. Y., Bresme, F., Kornyshev, A. A. & Urbakh, M. Electrotunable friction with ionic liquid lubricants: How important is the molecular structure of the ions? J. Phys. Chem. Lett. 6, 3998–4004 (2015).
Riniker, S., Allison, J. R. & van Gunsteren, W. F. On developing coarse-grained models for biomolecular simulation: a review. Phys. Chem. Chem. Phys. 14, 12423–12430 (2012).
Foley, T., Shell, M. S. & Noid, W. G. The impact of resolution upon entropy and information in coarse-grained models. J. Chem. Phys. 143, 243104 (2015).
Wang, H., Junghans, C. & Kremer, K. Comparative atomistic and coarse-grained study of water: What do we lose by coarse-graining? Eur. Phys. J. E 28, 221–229 (2009).
Drouffe, J. M., Maggs, A. C. & Leibler, S. Computer simulations of self-assembled membranes. Sci. 254, 1353–1356 (1991).
Cooke, I. R. & Deserno, M. Solvent-free model for self-assembling fluid bilayer membranes: Stabilization of the fluid phase based on broad attractive tail potentials. J. Chem. Phys. 123, 224710 (2005).
Shillcock, J. C. & Lipowsky, R. Tension-induced fusion of bilayer membranes and vesicles. Nat. Mater. 4, 225–228 (2005).
Shelley, J. C., Shelley, M. Y., Reeder, R., Bandyopadhyay, S. & Klein, M. L. A coarse grained model for phospholipid simulations. J Phys Chem B 105, 4464–4470 (2001).
Marrink, S. J., de Vries, A. H. & Mark, A. E. Coarse grained model for semiquantitative lipid simulations. J. Phys. Chem. B 108, 750–760 (2004).
Li, X., Gao, L. & Fang, W. Dissipative particle dynamics simulations for phospholipid membranes based on a four-to-one coarse-grained mapping scheme. PLoS ONE 11, e0154568 (2016).
Orsi, M. & Essex, J. W. The elba force field for coarse-grain modeling of lipid membranes. PLOS Comput. Biol. 6, e28637 (2011).
Español, P., de la Torre, J. A., Ferrario, M. & Ciccotti, G. Coarse-graining stiff bonds. Computational Statistics and Data Analysis 200, 107–129 (2011).
Mullinax, J. W. & Noid, W. G. Extended ensemble approach for deriving transferable coarse-grained potentials. J. Chem. Phys. 131, 104110 (2009).
Das, A., Lu, L., Andersen, H. C. & Voth, G. A. The multiscale coarse-graining method. x. improved algorithms for constructing coarse-grained potentials for molecular systems. J. Chem. Phys. 136, 194115 (2012).
Sinitskiy, A. V., Saunders, M. G. & Voth, G. A. Optimal number of coarse-grained sites in different components of large biomolecular complexes. J. Phys. Chem. B 116, 8363–8374 (2012).
Arkhipov, A., Yin, Y. & Schulten, K. Four-scale description of membrane sculpting by bar domains. Biophys. J. 95, 2806–2821 (2008).
Rudzinski, J. F. & Noid, W. G. Investigation of coarse-grained mappings via an iterative generalized yvon-born-green method. J. Phys. Chem. B 118, 8295–8312 (2014).
Zhang, Z. et al. A systematic methodology for defining coarse-grained sites in large biomolecules. Biophys. J. 95, 5073–5083 (2008).
Reith, D., Pütz, M. & Müller-Plathe, F. Deriving effective mesoscale potentials from atomistic simulations. J. Comput. Chem. 24, 1624–1636 (2003).
Liu, P., Shi, Q., Daumé, H. & Voth, G. A. A bayesian statistics approach to multiscale coarse graining. J. Chem. Phys. 129, 214114 (2008).
Hadley, K. R. & McCabe, C. Coarse-grained molecular models of water: A review. Mol. Sim. 38, 671–681 (2012).
Chaimovich, A. & Shell, M. S. Anomalous waterlike behavior in spherically-symmetric water models optimized with the relative entropy. Phys. Chem. Chem. Phys. 28, 1901–1915 (2009).
Izvekov, S. & Voth, G. A. Multiscale coarse graining of liquid-state systems. J. Chem. Phys. 123, 134105 (2005).
Molinero, V. & Moore, E. B. Water modeled as an intermediate element between carbon and silicon. J. Phys. Chem. B 113, 4008–4016 (2009).
Jagla, E. A. Core-softened potentials and the anomalous properties of water. J. Chem. Phys. 111, 8980–8986 (1999).
Hynninen, T. et al. A molecular dynamics implementation of the 3d mercedes-benz water model. Comp. Phys. Comm. 183, 363–369 (2012).
Marrink, S. J. & Tieleman, D. P. Perspective on the martini model. Chem. Soc. Rev. 42, 6801–6822 (2013).
Zavadlav, J., Melo, M. N., Marrink, S. J. & Praprotnik, M. Adaptive resolution simulation of an atomistic protein in martini water. J. Chem. Phys. 140, 054114 (2014).
Yesylevskyy, S. O., Schäfer, L. V., Sengupta, D. & Marrink, S. J. Polarizable water model for the coarse-grained martini force field. PLoS Comput. Biol. 6, e1000810 (2010).
Wu, Z., Cui, Q. & Yethiraj, A. A new coarse-grained model for water: The importance of electrostatic interactions. J. Phys. Chem. B 114, 10524–10529 (2010).
Zavadlav, J., Melo, M. N., Marrink, S. J. & Praprotnik, M. Adaptive resolution simulation of polarizable supramolecular coarse-grained water models. J. Chem. Phys. 142, 244118 (2015).
Ha-Duong, T., Basdevant, N. & Borgis, D. A polarizable coarse-grained water model for coarse-grained proteins simulations. Chem. Phys. Lett. 469, 79–82 (2009).
Riniker, S. & van Gunsteren, W. F. A simple, efficient polarizable coarse-grained water model for molecular dynamics simulations. J. Chem. Phys. 134, 084110 (2011).
Hadley, K. R. & McCabe, C. On the investigation of the coarse-grained models for water: Balancing computational efficienncy and the retention of structural properties. J. Phys. Chem. 114, 4590–4599 (2010).
Angelikopoulos, P., Papadimiriou, C. & Koumoutsakos, P. Bayesian uncertainty quantification and propagation in molecular dynamics simulations: A high performance computing framework. J. Chem. Phys. 137, 144103 (2012).
Angelikopoulos, P., Papadimiriou, C. M. E. & Koumoutsakos, P. Data driven, predictive molecular dynamics for nanoscale flow simulations under uncertainty. J. Phys. Chem. B 117, 14808–14816 (2013).
Kulakova, L. et al. Data driven inference for the repulsive exponent of the lennard-jones potential in molecular dynamics simulations. Sci. Rep. 7, 16576 (2017).
Jacobson, L. C., Kirby, R. M. & Molinero, V. How short is too short for the interactions of a water potential? exploring the parameter space of a coarse-grained water model using uncertainty quantification. J Phys Chem B 118, 8190–8202 (2014).
Rizzi, F., Jones, R. E., Debusschere, B. J. & Knio, O. M. Uncertainty quantification in md simulations of concentration driven ionic flow through a silica nanopore. sensitivity to physical parameters of the pore. J. Chem. Phys. 138, 194104 (2013).
Farrell, K., Tinsley Oden, J. & Faghihi, D. A bayesian framework for adaptive selection, calibration, and validation of coarse-grained models of atomistic systems. J. Comp. Phys. 189-208, 214114 (2015).
Wu, S., Angelikopoulos, P., Papadimiriou, C., Moser, R. & Koumoutsakos, P. A hierarchical bayesian framework for force field selection in molecular dynamics simulations. Phil. Trans. R. Soc. A 374, 20150032 (2015).
Cheung, S. H., Oliver, T. A., Prudencio, E. E., Prudhomme, S. & Moser, R. D. Bayesian uncertainty analysis with applications to turbulence modeling. Reliab. Eng. & Syst. Saf. 96, 1137–1149, https://doi.org/10.1016/j.ress.2010.09.013. Quantification of Margins and Uncertainties (2011).
Beck, J. & Yuen, K. Model selection using response measurements: Bayesian probabilistic approach. J. Eng. Mech. 130, 192–203 (2004).
Knuth, K., Habeck, M., Malakar, N., Mubeen, A. & Placek, B. Bayesian evidence and model selection. Digit. Signal Process. 47, 50–67 (2015).
Beck, J. L. Bayesian system identification based on probability logic. Struct. Control. Heal. Monit. 17, 825–847 (2010).
Giovanni Parmigiani, L. Y. T. I. Decision Theory: Principles and Approaches (John Wiley & Sons, Ltd, 2010).
Voth, G. A. (ed.) Coarse-Graining of Condensed Phase and Biomolecular Systems (CRC Press, 2009).
Papoian, G. A. (ed.) Coarse-Grained Modeling of Biomolecules (CRC Press, 2017).
Stigler, S. M. The History of Statistics The Measurement of Uncertainty before 1900 (Harvard University Press, 1990).
Jaynes, E. T. Probability Theory: The Logic of Science (Cambridge University Press, 2003).
Lide, D. R. CRC Handbook of Chemistry and Physics (CRC Press LLC, 2004).
Kell, G. S. Precise representation of volume properties of water at one atmosphere. J. Chem. Eng. Data 12, 66–69 (1967).
Braun, D., Boresch, S. & Steinhauser, O. Transport and dielectric properties of water and the influence of coarse-graining: Comparing bmw, spc/e, and tip3p models. J. Chem. Phys. 140, 064107 (2014).
J. Z. acknowledges financial support as an ETH Zürich Fellow. G.A. and P.K. acknowledge support by the European Research Council Advanced Investigator Award 341117. J.Z. and G.A. acknowledge the help of Lina Kulakova (ETHZ) in using the software Π4U. The authors thank Matej Praprotnik for useful discussions and critical reading of the manuscript. Finally, the authors acknowledge the computational time at Swiss National Supercomputing Center (CSCS) under the project s659.
Computational Science and Engineering Laboratory, ETH Zurich, Clausiusstrasse 33, Zurich, CH-8092, Switzerland
Julija Zavadlav, Georgios Arampatzis & Petros Koumoutsakos
Collegium Helveticum, University Zurich and ETH Zurich, Zurich, 8092, Switzerland
Georgios Arampatzis & Petros Koumoutsakos
Julija Zavadlav
Georgios Arampatzis
Petros Koumoutsakos
P.K. and J.Z. designed the research, J.Z. and G.A. conducted the simulations and analysed the results. All authors wrote the manuscript.
Correspondence to Petros Koumoutsakos.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supporting Information for Bayesian selection for coarse-grained models of liquid water
Zavadlav, J., Arampatzis, G. & Koumoutsakos, P. Bayesian selection for coarse-grained models of liquid water. Sci Rep 9, 99 (2019). https://doi.org/10.1038/s41598-018-37471-0
Conductor Material Required in Three-Phase Overhead AC Transmission System
Three-Phase Transmission System
A three-phase transmission system is the one in which three line conductors are used to transmit the AC electric power from generating station to the substations. The three-phase system is universally adopted for transmission of electric power.
Depending upon the number of conductors used, the three-phase AC transmission system is classified into two types viz. −
Three-Phase Three-Wire System
Three-Phase Four-Wire System
Conductor Material Required in 3-Phase 3-Wire AC System
Consider a three-phase three-wire AC system as shown in Figure-1; it has three line conductors, and the neutral point is earthed. The three-phase three-wire system may be star connected (as shown in Figure-1) or delta connected.
Maximum voltage per phase, $ V_{ph} = V_m $
RMS value of voltage per phase $ = \frac{V_m}{\sqrt{2}} $
Power transmitted per phase $ = \frac{P}{3} $
Therefore, the load current per phase is given by,
$$ I_1 = \frac{P/3}{\left( \frac{V_m}{\sqrt{2}} \cos\phi \right)} = \frac{\sqrt{2}\, P}{3 V_m \cos\phi} $$
If $ a_1 $ is the area of cross-section of each conductor, then the resistance per conductor is given by,
$$ R_1 = \frac{\rho\, l}{a_1} $$
Hence, the total power losses in the transmission line are
$$ W = 3 I_1^2 R_1 = 3 \times \left( \frac{\sqrt{2}\, P}{3 V_m \cos\phi} \right)^2 \times \left( \frac{\rho\, l}{a_1} \right) = \frac{2 P^2 \rho\, l}{3 V_m^2 \cos^2\phi\, a_1} $$
$$ \therefore \text{Area of cross-section, } a_1 = \frac{2 P^2 \rho\, l}{3 W V_m^2 \cos^2\phi} $$
Therefore, the volume (say K) of conductor material required in the three-phase three-wire overhead AC transmission system is given by,
$$ K = 3 \times a_1 \times l = 3 \times \frac{2 P^2 \rho\, l}{3 W V_m^2 \cos^2\phi} \times l $$
$$ \therefore K = \frac{2 P^2 \rho\, l^2}{W V_m^2 \cos^2\phi} \quad \cdots (1) $$
Conductor Material Required in 3-Phase 4-Wire AC System
The three-phase four-wire AC transmission system is shown in Figure-2. In this system, the neutral wire is taken from the neutral point, and the cross-sectional area of the neutral wire is generally one-half that of the line conductors.
If the load connected to the 3-phase 4-wire system is balanced, then current through the neutral wire is zero.
Also, assume that the load is balanced with power factor $ \cos\phi $. Then,
$$ \text{Per-phase load current, } I_2 = \frac{P/3}{\left( \frac{V_m}{\sqrt{2}} \cos\phi \right)} = \frac{\sqrt{2}\, P}{3 V_m \cos\phi} $$
If $ a_2 $ is the area of cross-section of each line conductor and $ a_n = a_2/2 $ is the cross-sectional area of the neutral wire, then
$$ \text{Resistance of each line conductor, } R_2 = \frac{\rho\, l}{a_2} $$
$$ \text{Resistance of neutral wire, } R_n = \frac{\rho\, l}{a_n} = \frac{2 \rho\, l}{a_2} $$
When a balanced three-phase load is connected to the system, the neutral current is zero and hence there is no power loss in the neutral wire.
$$ \therefore \text{Line losses, } W = 3 I_2^2 R_2 = 3 \times \left( \frac{\sqrt{2}\, P}{3 V_m \cos\phi} \right)^2 \times \left( \frac{\rho\, l}{a_2} \right) $$
$$ \Rightarrow W = \frac{2 P^2 \rho\, l}{3 V_m^2 \cos^2\phi\, a_2} $$
Now, the volume (say K1) of conductor material required in the 3-phase 4-wire overhead AC transmission system is given by,
$$ K_1 = 3 a_2 l + a_n l = 3.5\, a_2 l $$
$$ \Rightarrow K_1 = 3.5 \times \left( \frac{2 P^2 \rho\, l}{3 W V_m^2 \cos^2\phi} \right) \times l $$
$$ \therefore K_1 = \frac{7 P^2 \rho\, l^2}{3 W V_m^2 \cos^2\phi} \quad \cdots (2) $$
Now, comparing equations (1) & (2), we have,
$$ \frac{K_1}{K} = \frac{\left( \frac{7 P^2 \rho\, l^2}{3 W V_m^2 \cos^2\phi} \right)}{\left( \frac{2 P^2 \rho\, l^2}{W V_m^2 \cos^2\phi} \right)} = \frac{7}{6} $$
$$ \therefore K_1 = \frac{7}{6} \times K \quad \cdots (3) $$
Hence, from eq. (3), it is clear that the volume of conductor material required in 3-phase 4-wire system is (7/6)th times of that required in 3-phase 3-wire overhead AC transmission system.
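As a quick numerical check (our sketch, with arbitrary illustrative values for the symbols), the two volume formulas can be evaluated and compared directly; the ratio is independent of the chosen values.

```python
import math

def volume_3phase_3wire(P, W, Vm, pf, rho, l):
    """K = 2 * P**2 * rho * l**2 / (W * Vm**2 * cos(phi)**2), equation (1)."""
    return 2 * P**2 * rho * l**2 / (W * Vm**2 * pf**2)

def volume_3phase_4wire(P, W, Vm, pf, rho, l):
    """K1 = 7 * P**2 * rho * l**2 / (3 * W * Vm**2 * cos(phi)**2), equation (2)."""
    return 7 * P**2 * rho * l**2 / (3 * W * Vm**2 * pf**2)

# Arbitrary illustrative values.
args = dict(P=1e6, W=3e4, Vm=11e3 * math.sqrt(2), pf=0.8, rho=1.72e-8, l=10e3)
print(volume_3phase_4wire(**args) / volume_3phase_3wire(**args))  # 7/6 ≈ 1.1667
```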
Manish Kumar Saini
February 2021, 26(2): 1171-1195. doi: 10.3934/dcdsb.2020158
Enhanced Backscattering of a partially coherent field from an anisotropic random lossy medium
Josselin Garnier 1 and Knut Sølna 2
CMAP, CNRS, Ecole polytechnique, Institut Polytechnique de Paris, 91128 Palaiseau Cedex, France
Department of Mathematics, University of California, Irvine CA 92697, USA
* Corresponding author: Knut Sølna
Received September 2019 Revised February 2020 Published May 2020
Fund Project: This research is supported by AFOSR grant FA9550-18-1-0217, NSF grant 1616954
The weak localization or enhanced backscattering phenomenon has received a lot of attention in the literature. The enhanced backscattering cone refers to the situation that the wave backscattered by a random medium exhibits an enhanced intensity in a narrow cone around the incoming wave direction. This phenomenon can be analyzed by a formal path integral approach. Here a mathematical derivation of this result is given based on a system of equations that describes the second-order moments of the reflected wave. This system derives from a multiscale stochastic analysis of the wave field in the situation with high-frequency waves and propagation through a lossy medium with fine scale random microstructure. The theory identifies a duality relation between the spreading of the wave and the enhanced backscattering cone. It shows how the cone, its regularity and width relate to the statistical structure of the random medium. We discuss how this information in particular can be used to estimate the internal structure of the random medium based on observations of the reflected wave.
Keywords: waves, random media, enhanced backscattering, asymptotic analysis, imaging, tissue, paraxial equation.
Mathematics Subject Classification: Primary: 35R60, 76B15; Secondary: 35Q99, 60F05.
Citation: Josselin Garnier, Knut Sølna. Enhanced Backscattering of a partially coherent field from an anisotropic random lossy medium. Discrete & Continuous Dynamical Systems - B, 2021, 26 (2) : 1171-1195. doi: 10.3934/dcdsb.2020158
Figure 1. Physical interpretation of the scattering of a plane wave by a random medium. The output wave in direction $ A $ is the superposition of many different scattering paths. One of these paths is plotted as well as the reversed path. The phase difference between the two outgoing waves is $ k e = k d \sin A $
Figure 2. The backscattering enhancement cone in Eq. (81) (normalized by $ \pi^2 P_{\rm tot} $). Here we use the Matérn covariance function (55). In the left plot $ p = 0.6 $, while in the right plot $ p = 0.9 $, so that the medium fluctuations are smoother in the right plot. In the plots, the narrowest cones with the largest peak values correspond to the largest $ \beta $ values
Table 1. Notations used in the paper
$ c_o $ background speed of propagation of the medium
$ \sigma_o $ background attenuation of the medium
$ \ell_z $ longitudinal correlation radius of the random medium
$ \ell_x $ transverse correlation radius of the random medium
$ \sigma $ standard deviation of the random medium
$ \omega $ (angular) frequency of the source
$ r_0 $ radius of the source
$ \rho_0 $ correlation radius of the source
$ {{{\boldsymbol k}}}_0 $ transverse wavevector of the source
$ \lambda_o = \frac{2\pi c_o}{\omega} $ wavelength
$ L_{\rm att} = \frac{c_o}{2\sigma_o} $ attenuation length
$ \zeta_L= \frac{L}{L_{\rm att}} $ relative propagation distance
$ K_z= \frac{2\omega \ell_z}{c_o} $ relative wavenumber
$ \alpha= \frac{c_o^2}{2\sigma_o \omega \ell_x^2} $ strength of diffraction
$ \beta= \frac{ \omega^2 \sigma^2 \ell_z}{8 c_o \sigma_o} $ strength of forward scattering
$ \overline{D}_o $ cross spectral density central value (see Eq. (26))
$ P_{\rm tot} $ mean reflected power (see Eq. (31))
Do you want to try nootropics but feel confused by the plethora of information available online? If so, you may be even more confused about which nootropic supplement to buy to suit your specific needs. Here is a list of the top 10 nootropics, or the 10 best brain supplements available on the market, and their corresponding uses:
Factor analysis. The strategy: read in the data, drop unnecessary data, impute missing variables (the data is too heterogeneous, and collected starting at varying intervals, to be clean), estimate how many factors would fit best, factor-analyze, pick the factors that best match my idea of what "productive" means, extract per-day estimates, and finally regress LLLT usage on the selected factors to look for increases.
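A rough sketch of that pipeline in Python follows; the file name, column names, and the choice of three factors are invented for illustration, and scikit-learn/statsmodels merely stand in for whatever tooling was actually used.

```python
# Hypothetical sketch of the factor-analysis-then-regression pipeline described above.
import pandas as pd
import statsmodels.api as sm
from sklearn.decomposition import FactorAnalysis

df = pd.read_csv("daily_metrics.csv", parse_dates=["date"])   # assumed data file
df = df.drop(columns=["notes"], errors="ignore")              # drop unnecessary data
predictors = df.drop(columns=["date", "lllt"])
predictors = predictors.fillna(predictors.mean())             # crude imputation of missing values

# Extract a few latent "productivity" factors; the number of factors is a guess,
# and in practice one would compare the fit for several choices.
fa = FactorAnalysis(n_components=3, random_state=0)
scores = fa.fit_transform(predictors)                         # per-day factor scores

# Regress the factor judged closest to "productivity" on LLLT usage.
X = sm.add_constant(df["lllt"].astype(float))
print(sm.OLS(scores[:, 0], X).fit().summary())
```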
In 3, you're considering adding a new supplement, not stopping a supplement you already use. The "I don't try Adderall" case has value $0; the "Adderall fails" case is worth -$40 (assuming you only bought 10 pills, and this number should be increased by your analysis time and a weighted cost for potential permanent side effects); and the "Adderall succeeds" case is worth $X - 40 - 4099, where $X is the discounted lifetime value of the increased productivity due to Adderall, minus any discounted long-term side-effect costs. If you estimate Adderall will work with p = 0.5, then you should try out Adderall if you estimate that $0.5 \times (X - 4179) > 0$, i.e., $X > 4179$. (Adderall working or not isn't binary, so you might be more comfortable breaking down the various "how effective is Adderall" cases when eliciting X, by coming up with different levels it could work at and their values, and then using a weighted sum to get X. This can also give you a better target for your experiment: it needs to show a benefit of at least Y from Adderall for it to be worth the cost, and I've designed it so it has a reasonable chance of showing that.)
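A minimal numeric check of this expected-value argument, using only the figures above (the function and the candidate values of X are illustrative, not part of the original analysis):

```python
# Expected value of trying Adderall, using the cases described in the text:
# "don't try" = $0, "fails" = -$40, "succeeds" = $X - 40 - 4099.
def expected_value_of_trying(X, p_works=0.5, cost_fail=-40.0, success_costs=40.0 + 4099.0):
    return p_works * (X - success_costs) + (1 - p_works) * cost_fail

# The expected value crosses zero at roughly X = $4,179, matching the text.
for X in (4000, 4179, 4500):
    print(X, round(expected_value_of_trying(X), 1))
```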
Herbal supplements have been used for centuries to treat a wide range of medical conditions. Studies have shown that certain herbs may improve memory and cognition, and they can be used to help fight the effects of dementia and Alzheimer's disease. These herbs are considered safe when taken in normal doses, but care should be taken as they may interfere with other medications.
70 pairs is 140 blocks; we can drop to 36 pairs or 72 blocks if we accept a power of 0.5 (a 50% chance of reaching significance). (Or we could economize by hoping that the effect size is not 3.5 but maybe twice the pessimistic guess; a d=0.5 at 50% power requires only 12 pairs of 24 blocks.) 70 pairs of blocks of 2 weeks, with 2 pills a day, requires $(70 \times 2) \times (2 \times 7) \times 2 = 3920$ pills. I don't even have that many empty pills! I have <500; 500 would supply 250 days, which would yield 18 2-week blocks, which could give 9 pairs. 9 pairs would give me a power of:
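The power figure the sentence breaks off at is not reproduced above; a stand-in calculation, assuming a two-sided paired t-test at α = 0.05 and the d = 0.5 effect size mentioned earlier in the paragraph:

```python
# Power of a paired t-test with 9 pairs under an assumed effect size of d = 0.5
# (the original calculation is not shown in the text, so this is only a sketch).
from statsmodels.stats.power import TTestPower

power = TTestPower().power(effect_size=0.5, nobs=9, alpha=0.05, alternative="two-sided")
print(round(power, 2))
```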
Smart drugs, formally known as nootropics, are medications, supplements, and other substances that improve some aspect of mental function. In the broadest sense, smart drugs can include common stimulants such as caffeine, herbal supplements like ginseng, and prescription medications for conditions such as ADHD, Alzheimer's disease, and narcolepsy. These substances can enhance concentration, memory, and learning.
"Cavin, you are phemomenal! An incredulous journey of a near death accident scripted by an incredible man who chose to share his knowledge of healing his own broken brain. I requested our public library purchase your book because everyone, those with and without brain injuries, should have access to YOUR brain and this book. Thank you for your legacy to mankind!"
Smart pills have huge potential and several important applications, particularly in diagnosis. Smart pills are growing as a highly effective method of endoscopy, particularly for gastrointestinal diseases. Urbanization and rapid lifestyle changes, leaning toward unhealthy diets and poor eating habits, have led to a distinct increase in lifestyle disorders such as gastroesophageal reflux disease (GERD), obesity, and gastric ulcers.
12:18 PM. (There are/were just 2 Adderall left now.) I manage to spend almost the entire afternoon single-mindedly concentrating on transcribing two parts of a 1996 Toshio Okada interview (it was very long, and the formatting more challenging than expected), which is strong evidence for Adderall, although I did feel fairly hungry while doing it. I don't go to bed until midnight and & sleep very poorly - despite taking triple my usual melatonin! Inasmuch as I'm already fairly sure that Adderall damages my sleep, this makes me even more confident (>80%). When I grumpily crawl out of bed and check: it's Adderall. (One Adderall left.)
It's not clear that there is much of an effect at all. This makes it hard to design a self-experiment - how big an effect on, say, dual n-back should I be expecting? Do I need an arduous long trial or an easy short one? This would principally determine the value of information too; chocolate seems like a net benefit even if it does not affect the mind, but it's also fairly costly, especially if one likes (as I do) dark chocolate. Given the mixed research, I don't think cocoa powder is worth investigating further as a nootropic.
The amphetamine mix branded Adderall is terribly expensive to obtain even compared to modafinil, due to its tight regulation (a lower schedule than modafinil), popularity in college as a study drug, and reported moves by its manufacturer to exploit its privileged position as a licensed amphetamine maker to extract more consumer surplus. I paid roughly $4 a pill but could have paid up to $10. Good stimulant hygiene involves recovery periods to avoid one's body adapting to eliminate the stimulating effects, so even if Adderall was the answer to all my woes, I would not be using it more than 2 or 3 times a week. Assuming 50 uses a year (for specific projects, let's say, and not ordinary aimless usage), that's a cool $200 a year. My general belief was that Adderall would be too much of a stimulant for me, as I am amphetamine-naive and Adderall has a bad reputation for letting one waste time on unimportant things. We could say my prediction was 50% that Adderall would be useful and worth investigating further. The experiment was pretty simple: blind randomized pills, 10 placebo & 10 active. I took notes on how productive I was and the next day guessed whether it was placebo or Adderall before breaking the seal and finding out. I didn't do any formal statistics for it, much less a power calculation, so let's try to be conservative by penalizing the information quality heavily and assume it had 25%. So $\frac{200 - 0}{\ln 1.05} \times 0.50 \times 0.25 \approx 512$! The experiment probably used up no more than an hour or two total.
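For reference, the value-of-information arithmetic in the last sentence works out as follows, using no numbers beyond those already given:

```python
import math

# $200/year benefit, 5% discount rate, 50% prior that Adderall helps,
# information quality penalized to 25% (all figures from the text above).
voi = (200 - 0) / math.log(1.05) * 0.50 * 0.25
print(round(voi))   # ~512
```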
Schroeder, Mann-Koepke, Gualtieri, Eckerman, and Breese (1987) assessed the performance of subjects on placebo and MPH in a game that allowed subjects to switch between two different sectors seeking targets to shoot. They did not observe an effect of the drug on overall level of performance, but they did find fewer switches between sectors among subjects who took MPH, and perhaps because of this, these subjects did not develop a preference for the more fruitful sector.
Smart drug, also called nootropic or cognitive enhancer, any of a group of pharmaceutical agents used to improve the intellectual capacity of persons suffering from neurological diseases and psychological disorders. The use of such drugs by healthy individuals in order to improve concentration, to study longer, and to better manage stress is a subject of controversy.
"One of my favorites is 1, 3, 7-trimethylxanthine," says Dr. Mark Moyad, director of preventive and alternative medicine at the University of Michigan. He says this chemical boosts many aspects of cognition by improving alertness. It's also associated with some memory benefits. "Of course," Moyad says, "1, 3, 7-trimethylxanthine goes by another name—caffeine."
Finally, it's not clear that caffeine results in performance gains after long-term use; homeostasis/tolerance is a concern for all stimulants, but especially for caffeine. It is plausible that all caffeine consumption does for the long-term chronic user is restore performance to baseline. (Imagine someone waking up and drinking coffee, and their performance improves - well, so would the performance of a non-addict who is also slowly waking up!) See for example, James & Rogers 2005, Sigmon et al 2009, and Rogers et al 2010. A cross-section of thousands of participants in the Cambridge brain-training study found caffeine intake showed negligible effect sizes for mean and component scores (participants were not told to use caffeine, but the training was recreational & difficult, so one expects some difference).
If you want to focus on boosting your brain power, Lebowitz says you should primarily focus on improving your cardiovascular health, which is "the key to good thinking." For example, high blood pressure and cholesterol, which raise the risk of heart disease, can cause arteries to harden, which can decrease blood flow to the brain. The brain relies on blood to function normally.
Table 4 lists the results of 27 tasks from 23 articles on the effects of d-AMP or MPH on working memory. The oldest and most commonly used type of working memory task in this literature is the Sternberg short-term memory scanning paradigm (Sternberg, 1966), in which subjects hold a set of items (typically letters or numbers) in working memory and are then presented with probe items, to which they must respond "yes" (in the set) or "no" (not in the set). The size of the set, and hence the working memory demand, is sometimes varied, and the set itself may be varied from trial to trial to maximize working memory demands or may remain fixed over a block of trials. Taken together, the studies that have used a version of this task to test the effects of MPH and d-AMP on working memory have found mixed and somewhat ambiguous results. No pattern is apparent concerning the specific version of the task or the specific drug. Four studies found no effect (Callaway, 1983; Kennedy, Odenheimer, Baltzley, Dunlap, & Wood, 1990; Mintzer & Griffiths, 2007; Tipper et al., 2005), three found faster responses with the drugs (Fitzpatrick, Klorman, Brumaghim, & Keefover, 1988; Ward et al., 1997; D. E. Wilson et al., 1971), and one found higher accuracy in some testing sessions at some dosages, but no main effect of drug (Makris et al., 2007). The meaningfulness of the increased speed of responding is uncertain, given that it could reflect speeding of general response processes rather than working memory–related processes. Aspects of the results of two studies suggest that the effects are likely due to processes other than working memory: D. E. Wilson et al. (1971) reported comparable speeding in a simple task without working memory demands, and Tipper et al. (2005) reported comparable speeding across set sizes.
Segmental analysis of the key components of the global smart pills market has been performed based on application, target area, disease indication, end-user, and region. Applications of smart pills are found in capsule endoscopy, drug delivery, patient monitoring, and others. Sub-division of the capsule endoscopy segment includes small bowel capsule endoscopy, controllable capsule endoscopy, colon capsule endoscopy, and others. Meanwhile, the patient monitoring segment is further divided into capsule pH monitoring and others.
One curious thing that leaps out looking at the graphs is that the estimated underlying standard deviations differ: the nicotine days have a strikingly large standard deviation, indicating greater variability in scores - both higher and lower, since the means weren't very different. The difference in standard deviations is just 6.6% below 0, so the difference almost reaches our usual frequentist levels of confidence too, which we can verify by testing:
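The test itself is not shown above (the original analysis appears to be Bayesian); a minimal frequentist stand-in for checking whether the nicotine-day scores really are more variable would be Levene's test, assuming the daily scores live in a CSV with hypothetical 'score' and 'nicotine' columns:

```python
# Hypothetical stand-in for the variance comparison described in the text.
import pandas as pd
from scipy import stats

df = pd.read_csv("nicotine_scores.csv")          # assumed file and column names
on = df.loc[df["nicotine"] == 1, "score"]        # scores on nicotine days
off = df.loc[df["nicotine"] == 0, "score"]       # scores on placebo days
print(stats.levene(on, off))                     # tests equality of variances between groups
```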
Due to the synthetic nature of racetams, you won't find them in many of the best smart pills on the market. The intentional exclusion is not because racetams are ineffective. Instead, the vast majority of users trust natural smart drugs more. The idea of using a synthetic substance to alter your brain's operating system is a big turn off for most people. With synthetic nootropics, you're a test subject until more definitive studies arise.
A LessWronger found that it worked well for him as far as motivation and getting things done went, as did another LessWronger who sells it online (terming it a reasonable productivity enhancer) as did one of his customers, a pickup artist oddly enough. The former was curious whether it would work for me too and sent me Speciosa Pro's Starter Pack: Test Drive (a sampler of 14 packets of powder and a cute little wooden spoon). In SE Asia, kratom's apparently chewed, but the powders are brewed as a tea.
l-theanine (Examine.com) is occasionally mentioned on Reddit or Imminst or LessWrong32 but is rarely a top-level post or article; this is probably because theanine was discovered a very long time ago (>61 years ago), and it's a pretty straightforward substance. It's a weak relaxant/anxiolytic (Google Scholar) which is possibly responsible for a few of the health benefits of tea, and which works synergistically with caffeine (and is probably why caffeine delivered through coffee feels different from the same amount consumed in tea - in one study, separate caffeine and theanine were a mixed bag, but the combination beat placebo on all measurements). The half-life in humans seems to be pretty short, with van der Pijl 2010 putting it ~60 minutes. This suggests to me that regular tea consumption over a day is best, or at least that one should lower caffeine use - combining caffeine and theanine into a single-dose pill has the problem of caffeine's half-life being much longer so the caffeine will be acting after the theanine has been largely eliminated. The problem with getting it via tea is that teas can vary widely in their theanine levels and the variations don't seem to be consistent either, nor is it clear how to estimate them. (If you take a large dose in theanine like 400mg in water, you can taste the sweetness, but it's subtle enough I doubt anyone can actually distinguish the theanine levels of tea; incidentally, r-theanine - the useless racemic other version - anecdotally tastes weaker and less sweet than l-theanine.)
In our list of synthetic smart drugs, Noopept may be the genius pill to rule them all. Up to 1000 times stronger than Piracetam, Noopept may not be suitable for everyone. This nootropic substance requires much smaller doses for enhanced cognitive function. There are plenty of synthetic alternatives to Adderall and prescription ADHD medications. Noopept may be worth a look if you want something powerful over the counter.
It is often associated with Ritalin and Adderall because they are all CNS stimulants and are prescribed for the treatment of similar brain-related conditions. In the past, ADHD patients reported prolonged attention while studying upon Dexedrine consumption, which is why this smart pill is further studied for its concentration and motivation-boosting properties.
…Four subjects correctly stated when they received nicotine, five subjects were unsure, and the remaining two stated incorrectly which treatment they received on each occasion of testing. These numbers are sufficiently close to chance expectation that even the four subjects whose statements corresponded to the treatments received may have been guessing.
The demands of university studies, career, and family responsibilities leaves people feeling stretched to the limit. Extreme stress actually interferes with optimal memory, focus, and performance. The discovery of nootropics and vitamins that make you smarter has provided a solution to help college students perform better in their classes and professionals become more productive and efficient at work.
The smart pill that FDA approved is called Abilify MyCite. This tiny pill has a drug and an ingestible sensor. The sensor gets activated when it comes into contact with stomach fluid to detect when the pill has been taken. The data is then transmitted to a wearable patch that eventually conveys the information to a paired smartphone app. Doctors and caregivers, with the patient's consent, can then access the data via a web portal.
The evidence? A 2012 study in Greece found it can boost cognitive function in adults with mild cognitive impairment (MCI), a type of disorder marked by forgetfulness and problems with language, judgement, or planning that are more severe than average "senior moments," but are not serious enough to be diagnosed as dementia. In some people, MCI will progress into dementia.
As with any thesis, there are exceptions to this general practice. For example, theanine for dogs, sold under the brand Anxitane, is sold at almost a dollar a pill, and apparently a month's supply costs $50+ vs $13 for human-branded theanine; on the other hand, this thesis predicts downgrading if the market priced pet versions higher than human versions, and that Reddit poster appears to be doing just that with her dog.
In fact, some of these so-called "smart drugs" are already remarkably popular. One recent survey involving tens of thousands of people found that 30% of Americans who responded had taken them in the last year. It seems as though we may soon all be partaking – and it's easy to get carried away with the consequences. Will this new batch of intellectual giants lead to dazzling, space-age inventions? Or perhaps an explosion in economic growth? Might the working week become shorter, as people become more efficient?
the larger size of the community enables economies of scale and increases the peak sophistication possible. In a small nootropics community, there is likely to be no one knowledgeable about statistics/experimentation/biochemistry/neuroscience/whatever-you-need-for-a-particular-discussion, and the available funds increase: consider /r/Nootropics's testing program, which is doable only because it's a large lucrative community to sell to so the sellers are willing to donate funds for independent lab tests/Certificates of Analysis (COAs) to be done. If there were 1000 readers rather than 23,295, how could this ever happen short of one of those 1000 readers being very altruistic?
They can cause severe side effects, and their long-term effects aren't well-researched. They're also illegal to sell, so they must be made outside of the UK and imported. That means their manufacture isn't regulated, and they could contain anything. And, as 'smart drugs' in 2018 are still illegal, you might run into legal issues from possessing some 'smart drugs' without a prescription.
One idea I've been musing about is the connections between IQ, Conscientiousness, and testosterone. IQ and Conscientiousness do not correlate to a remarkable degree - even though one would expect IQ to at least somewhat enable a long-term perspective, self-discipline, metacognition, etc! There are indications in studies of gifted youth that they have lower testosterone levels. The studies I've read on testosterone indicate no improvements to raw ability. So, could there be a self-sabotaging aspect to human intelligence whereby greater intelligence depends on lack of testosterone, but this same lack also holds back Conscientiousness (despite one's expectation that intelligence would produce greater self-discipline and planning), undermining the utility of greater intelligence? Could cases of high IQ types who suddenly stop slacking and accomplish great things sometimes be due to changes in testosterone? Studies on the correlations between IQ, testosterone, Conscientiousness, and various measures of accomplishment are confusing and don't always support this theory, but it's an idea to keep in mind.
There are certain risks associated with smart pills that might restrain their use. A smart pill usually leaves the body within two weeks. Sometimes, the pill might get lodged in the digestive tract rather than exiting the body via normal bowel movements. The risk might be higher in people with a tumor, Crohn's disease, or previous surgery in that area that has led to narrowing of the digestive tract. A CT scan is usually performed in people at high risk to assess the narrowing of the tract. However, the pill might still become lodged even if the CT scan results are negative, which might lead to bowel obstruction; it can then be removed either by surgery or by traditional endoscopy. Smart pills might lead to skin irritation, which results in mild redness and needs to be treated topically. They may also lead to capsule aspiration, in which the capsule goes down the wrong pipe and enters the airway instead of the esophagus. This might result in choking and death if immediate bronchoscopic extraction is not performed. Patients with comorbidities related to brain injury or chronic obstructive pulmonary disease may be at a higher risk. These health risks associated with the use of smart pills are hindering the smart pill technology market, as are other factors such as the increasing cost of technological advancement and ethical constraints.
Clearly, the hype surrounding drugs like modafinil and methylphenidate is unfounded. These drugs are beneficial in treating cognitive dysfunction in patients with Alzheimer's, ADHD or schizophrenia, but it's unlikely that today's enhancers offer significant cognitive benefits to healthy users. In fact, taking a smart pill is probably no more effective than exercising or getting a good night's sleep.
When comparing supplements, consider products with a score above 90% to get the greatest benefit from smart pills to improve memory. Additionally, we consider the reviews that users send to us when scoring supplements, so you can determine how well products work for others and use this information to make an informed decision. Every month, our editor puts her name on that month's best smart pill, in terms of results and value offered to users.
Never heard of OptiMind before? This supplement promotes itself as an all-natural nootropic supplement that increases focus, improves memory, and enhances overall mental drive. The product first captured our attention when we noticed that their supplement blend contains a few of the same ingredients currently present in our editor's #1 choice. So, of course, we grew curious to see whether their formula was as (un)successful as their initial branding techniques. Keep reading to find out what we discovered… Learn More...
"I am nearly four years out from my traumatic brain injury and I have been through 100's of hours of rehabilitation therapy. I have been surprised by how little attention is given to adequate nutrition for recovering from TBI. I'm always looking for further opportunities to recover and so this book fell into the right hands. Cavin outlines the science and reasoning behind the diet he suggests, but the real power in this book comes when he writes, "WE." WE can give our brains proper nutrition. Now I'm excited to drink smoothies and eat breakfasts that look like dinners! I will recommend this book to my friends.
Nootropics are a broad classification of cognition-enhancing compounds that produce minimal side effects and are suitable for long-term use. These compounds include those occurring in nature or already produced by the human body (such as neurotransmitters), and their synthetic analogs. We already regularly consume some of these chemicals: B vitamins, caffeine, and L-theanine, in our daily diets.
The methodology would be essentially the same as the vitamin D in the morning experiment: put a multiple of 7 placebos in one container, the same number of actives in another identical container, hide & randomly pick one of them, use container for 7 days then the other for 7 days, look inside them for the label to determine which period was active and which was placebo, refill them, and start again.
Rabiner et al. (2009) | survey year: 2007 | one public and one private university, undergraduates (N = 3,390) | prevalence: 8.9% (while in college), 5.4% (past 6 months) | most common reasons endorsed: to concentrate better while studying, to be able to study longer, to feel less restless while studying | sources: 48% from a friend with a prescription; 19% purchased it from a friend with a prescription; 6% purchased it from a friend without a prescription
Enhanced learning was also observed in two studies that involved multiple repeated encoding opportunities. Camp-Bruno and Herting (1994) found MPH enhanced summed recall in the Buschke Selective Reminding Test (Buschke, 1973; Buschke & Fuld, 1974) when 1-hr and 2-hr delays were combined, although individually only the 2-hr delay approached significance. Likewise, de Wit, Enggasser, and Richards (2002) found no effect of d-AMP on the Hopkins Verbal Learning Test (Brandt, 1991) after a 25-min delay. Willett (1962) tested rote learning of nonsense syllables with repeated presentations, and his results indicate that d-AMP decreased the number of trials needed to reach criterion.
I noticed what may have been an effect on my dual n-back scores; the difference is not large (▃▆▃▃▂▂▂▂▄▅▂▄▂▃▅▃▄ vs ▃▄▂▂▃▅▂▂▄▁▄▃▅▂▃▂▄▂▁▇▃▂▂▄▄▃▃▂▃▂▂▂▃▄▄▃▆▄▄▂▃▄▃▁▂▂▂▃▂▄▂▁▁▂▄▁▃▂▄) and appears mostly in the averages - Toomim's quick two-sample t-test gave p=0.23, although another analysis gives p=0.138112. One issue with this before-after quasi-experiment is that one would expect my scores to slowly rise over time, and hence a fish-oil "after" period would yield a score increase - the 3.2 point difference could be attributable to that, the placebo effect, or random variation, etc. But an accidentally noticed effect (d=0.28) is a promising start. An experiment may be worth doing given that fish oil does cost a fair bit each year: randomized blocks permitting a fish-oil-then-placebo comparison would take care of the first issue, and then blinding (olive oil capsules versus fish oil capsules?) would take care of the placebo worry.
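The two quick checks mentioned (the two-sample t-test and the d = 0.28 figure) would look roughly like this, assuming the before and after scores were saved to plain-text files (the file names are hypothetical):

```python
# Sketch of the before/after comparison; the score files are placeholders.
import numpy as np
from scipy import stats

before = np.loadtxt("dnb_before_fish_oil.txt")
after = np.loadtxt("dnb_after_fish_oil.txt")

t, p = stats.ttest_ind(after, before, equal_var=False)    # Welch two-sample t-test
pooled_sd = np.sqrt((before.var(ddof=1) + after.var(ddof=1)) / 2)
cohens_d = (after.mean() - before.mean()) / pooled_sd
print(p, cohens_d)
```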
Hall, Irwin, Bowman, Frankenberger, & Jewett (2005) | large public university, undergraduates (N = 379) | prevalence: 13.7% (lifetime) | 27%: use during finals week; 12%: use when partying; 15.4%: use before tests; 14%: believe stimulants have a positive effect on academic achievement in the long run | sources: M = 2.06 (SD = 1.19) purchased stimulants from other students; M = 2.81 (SD = 1.40) have been given stimulants by other students
Piracetam is well studied and is credited by its users with boosting their memory, sharpening their focus, heightening their immune system, even bettering their personalities. But it's only one of many formulations in the racetam drug family. Newer ones include aniracetam, phenylpiracetam and oxiracetam. All are available online, where their efficacy and safety are debated and reviewed on message boards and in podcasts.
If you have spent any time shopping for memory enhancer pills, you have noticed dozens of products on the market. Each product is advertised to improve memory, concentration, and focus. However, choosing the first product promising results may not produce the desired improvements. Taking the time to research your options and compare products will improve your chances of finding a supplement that works.
But while some studies have found short-term benefits, Doraiswamy says there is no evidence that what are commonly known as smart drugs — of any type — improve thinking or productivity over the long run. "There's a sizable demand, but the hype around efficacy far exceeds available evidence," notes Doraiswamy, adding that, for healthy young people such as Silicon Valley go-getters, "it's a zero-sum game. That's because when you up one circuit in the brain, you're probably impairing another system."
My worry about the MP variable is that, plausible or not, it does seem relatively weak against manipulation; other variables I could look at, like arbtt window-tracking of how I spend my computer time, # or size of edits to my files, or spaced repetition performance, would be harder to manipulate. If it's all due to MP, then if I remove the MP and LLLT variables, and summarize all the other variables with factor analysis into 2 or 3 variables, then I should see no increases in them when I put LLLT back in and look for a correlation between the factors & LLLT with a multivariate regression.
Scientists found that the drug can disrupt the way memories are stored. This ability could be invaluable in treating trauma victims to prevent associated stress disorders. The research has also triggered suggestions that licensing these memory-blocking drugs may lead to healthy people using them to erase memories of awkward conversations, embarrassing blunders and any feelings for that devious ex-girlfriend.
These are the most popular nootropics available at the moment. Most of them are the tried-and-tested and the benefits you derive from them are notable (e.g. Guarana). Others are still being researched and there haven't been many human studies on these components (e.g. Piracetam). As always, it's about what works for you and everyone has a unique way of responding to different nootropics.
"A system that will monitor their behavior and send signals out of their body and notify their doctor? You would think that, whether in psychiatry or general medicine, drugs for almost any other condition would be a better place to start than a drug for schizophrenia," says Paul Appelbaum, director of Columbia University's psychiatry department in an interview with the New York Times.
Your mileage will vary. There are so many parameters and interactions in the brain that any of them could be the bottleneck or responsible pathway, and one could fall prey to the common U-shaped dose-response curve (eg. Yerkes-Dodson law; see also Chemistry of the adaptive mind & de Jongh et al 2007) which may imply that the smartest are those who benefit least23 but ultimately they all cash out in a very few subjective assessments like energetic or motivated, with even apparently precise descriptions like working memory or verbal fluency not telling you much about what the nootropic actually did. It's tempting to list the nootropics that worked for you and tell everyone to go use them, but that is merely generalizing from one example (and the more nootropics - or meditation styles, or self-help books, or getting things done systems - you try, the stronger the temptation is to evangelize). The best you can do is read all the testimonials and studies and use that to prioritize your list of nootropics to try. You don't know in advance which ones will pay off and which will be wasted. You can't know in advance. And wasted some must be; to coin a Umeshism: if all your experiments work, you're just fooling yourself. (And the corollary - if someone else's experiments always work, they're not telling you everything.)
As with other nootropics, the way it works is still partially a mystery, but most research points to it acting as a weak dopamine reuptake inhibitor. Put simply, it increases your dopamine levels the same way cocaine does, but in a much less extreme fashion. The enhanced reward system it creates in the brain, however, makes it what Patel considers to be the most potent cognitive enhancer available; and he notes that some people go from sloth to superman within an hour or two of taking it.
Flow diagram of epidemiology literature search completed July 1, 2010. Search terms were nonmedical use, nonmedical use, misuse, or illicit use, and prescription stimulants, dextroamphetamine, methylphenidate, Ritalin, or Adderall. Stages of subsequent review used the information contained in the titles, abstracts, and articles to determine whether articles reported studies of the extent of nonmedical prescription stimulant use by students and related questions addressed in the present article including students' motives and frequency of use.
Either prescription or illegal, daily use of testosterone would not be cheap. On the other hand, if I am one of the people for whom testosterone works very well, it would be even more valuable than modafinil, in which case it is well worth even arduous experimenting. Since I am on the fence on whether it would help, this suggests the value of information is high.
Known widely as 'Brahmi,' the Bacopa Monnieri or Water Hyssop, is a small herb native to India that finds mention in various Ayurvedic texts for being the best natural cognitive enhancer. It has been used traditionally for memory enhancement, asthma, epilepsy and improving mood and attention of people over 65. It is known to be one of the best brain supplement in the world.
Adderall is a mix of 4 amphetamine salts (FDA adverse events), and not much better than the others (but perhaps less addictive); as such, like caffeine or methamphetamine, it is not strictly a nootropic but a cognitive enhancer and can be tricky to use right (for how one should use stimulants, see How To Take Ritalin Correctly). I ordered 10x10mg Adderall IR off Silk Road (Wikipedia). On the 4th day after confirmation from seller, the package arrived. It was a harmless looking little padded mailer. Adderall as promised: 10 blue pills with markings, in a double ziplock baggy (reasonable, it's not cocaine or anything). They matched pretty much exactly the descriptions of the generic I had found online. (Surprisingly, apparently both the brand name and the generic are manufactured by the same pharmacorp.)
Didn't seem very important to me. Trump's ability to discern importance in military projects, sure, why not. Shanahan may be the first honest cabinet head; it could happen. With the record this administration has I'd need some long odds to bet that way. Does anyone doubt he got the loyalty spiel and then the wink and nod that anything he could get away with was fine. monies
So what's the catch? Well, it's potentially addictive for one. Anything that messes with your dopamine levels can be. And Patel says there are few long-term studies on it yet, so we don't know how it will affect your brain chemistry down the road, or after prolonged, regular use. Also, you can't get it very easily, or legally for that matter, if you live in the U.S. It's classified as a schedule IV controlled substance. That's where Adrafinil comes in.
The placebos can be the usual pills filled with olive oil. The Nature's Answer fish oil is lemon-flavored; it may be worth mixing in some lemon juice. In Kiecolt-Glaser et al 2011, anxiety was measured via the Beck Anxiety scale; the placebo mean was 1.2 on a standard deviation of 0.075, and the experimental mean was 0.93 on a standard deviation of 0.076. (These are all log-transformed covariates or something; I don't know what that means, but if I naively plug those numbers into Cohen's d, I get a very large effect: $\frac{1.2 - 0.93}{0.076} = 3.55$.)
Please note: Smart Pills, Smart Drugs or Brain Food Supplements are also known as: Brain Smart Vitamins, Brain Tablets, Brain Vitamins, Brain Booster Supplements, Brain Enhancing Supplements, Cognitive Enhancers, Focus Enhancers, Concentration Supplements, Mental Focus Supplements, Mind Supplements, Neuro Enhancers, Neuro Focusers, Vitamins for Brain Function,Vitamins for Brain Health, Smart Brain Supplements, Nootropics, or "Natural Nootropics"
Nootropics are a great way to boost your productivity. Nootropics have been around for more than 40 years and today they are entering the mainstream. If you want to become the best you, nootropics are a way to level up your life. Nootropics are always personal and what works for others might not work for you. But no matter the individual outcomes, nootropics are here to make an impact!
Starting from the studies in my meta-analysis, we can try to estimate an upper bound on how big any effect would be, if it actually existed. One of the most promising null results, Southon et al 1994, turns out to be not very informative: if we punch in the number of kids, we find that they needed a large effect size (d=0.81) before they could see anything:
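The calculation the sentence breaks off at is not reproduced; a sketch of how one would redo it, with the per-group sample size left as a placeholder (the actual number of children in Southon et al 1994 is not quoted here) and the conventional 80% power and α = 0.05 assumed:

```python
# Solve for the minimum detectable effect size given a study's group size.
from statsmodels.stats.power import TTestIndPower

n_per_group = 30   # placeholder only; not the real figure from the study
d = TTestIndPower().solve_power(effect_size=None, nobs1=n_per_group,
                                alpha=0.05, power=0.8, ratio=1.0)
print(round(d, 2))
```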
Participants (n=205) [young adults aged 18-30 years] were recruited between July 2010 and January 2011, and were randomized to receive either a daily 150 µg (0.15mg) iodine supplement or daily placebo supplement for 32 weeks…After adjusting for baseline cognitive test score, examiner, age, sex, income, and ethnicity, iodine supplementation did not significantly predict 32 week cognitive test scores for Block Design (p=0.385), Digit Span Backward (p=0.474), Matrix Reasoning (p=0.885), Symbol Search (p=0.844), Visual Puzzles (p=0.675), Coding (p=0.858), and Letter-Number Sequencing (p=0.408).
Turning to analyses related specifically to the drugs that are the subject of this article, reanalysis of the 2002 NSDUH data by Kroutil and colleagues (2006) found past-year nonmedical use of stimulants other than methamphetamine by 2% of individuals between the ages of 18 and 25 and by 0.3% of individuals 26 years of age and older. For ADHD medications in particular, these rates were 1.3% and 0.1%, respectively. Finally, Novak, Kroutil, Williams, and Van Brunt (2007) surveyed a sample of over four thousand individuals from the Harris Poll Online Panel and found that 4.3% of those surveyed between the ages of 18 and 25 had used prescription stimulants nonmedically in the past year, compared with only 1.3% between the ages of 26 and 49.
Many of the positive effects of cognitive enhancers have been seen in experiments using rats. For example, scientists can train rats on a specific test, such as maze running, and then see if the "smart drug" can improve the rats' performance. It is difficult to see how many of these data can be applied to human learning and memory. For example, what if the "smart drug" made the rat hungry? Wouldn't a hungry rat run faster in the maze to receive a food reward than a non-hungry rat? Maybe the rat did not get any "smarter" and did not have any improved memory. Perhaps the rat ran faster simply because it was hungrier. Therefore, it was the rat's motivation to run the maze, not its increased cognitive ability that affected the performance. Thus, it is important to be very careful when interpreting changes observed in these types of animal learning and memory experiments.
I ultimately mixed it in with the 3kg of piracetam and included it in that batch of pills. I mixed it very thoroughly, one ingredient at a time, so I'm not very worried about hot spots. But if you are, one clever way to get accurate caffeine measurements is to measure out a large quantity & dissolve it since it's easier to measure water than powder, and dissolving guarantees even distribution. This can be important because caffeine is, like nicotine, an alkaloid poison which - the dose makes the poison - can kill in high doses, and concentrated powder makes it easy to take too much, as one inept Englishman discovered the hard way. (This dissolving trick is applicable to anything else that dissolves nicely.)
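A worked example of the dissolving trick; the amounts are purely illustrative and not taken from the text:

```python
# Dissolve a weighed amount of caffeine in a measured volume of water,
# then dose by volume instead of trying to weigh tiny amounts of powder.
caffeine_mg = 2000            # weighed once on a scale (illustrative)
water_ml = 500                # measured volume of water (illustrative)
dose_mg = 50                  # desired dose per serving (illustrative)

concentration = caffeine_mg / water_ml   # mg of caffeine per ml of solution
print(dose_mg / concentration)           # ml of solution that delivers one 50 mg dose (12.5)
```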
I can test fish oil for mood, since the other claimed benefits like anti-schizophrenia are too hard to test. The medical student trial (Kiecolt-Glaser et al 2011) did not see changes until visit 3, after 3 weeks of supplementation. (Visit 1, 3 weeks, visit 2, supplementation started for 3 weeks, visit 3, supplementation continued 3 weeks, visit 4 etc.) There were no tests in between the test starting week 1 and starting week 3, so I can't pin it down any further. This suggests randomizing in 2 or 3 week blocks. (For an explanation of blocking, see the footnote in the Zeo page.)
An additional complexity, related to individual differences, concerns dosage. This factor, which varies across studies and may be fixed or determined by participant body weight within a study, undoubtedly influences the cognitive effects of stimulant drugs. Furthermore, single-unit recordings with animals and, more recently, imaging of humans indicate that the effects of stimulant dose are nonmonotonic; increases enhance prefrontal function only up to a point, with further increases impairing function (e.g., Arnsten, 1998; Mattay et al., 2003; Robbins & Arnsten, 2009). Yet additional complexity comes from the fact that the optimal dosage depends on the same kinds of individual characteristics just discussed and on the task (Mattay et al., 2003).
Past noon, I began to feel better, but since I would be driving to errands around 4 PM, I decided to not risk it and take an hour-long nap, which went well, as did the driving. The evening was normal enough that I forgot I had stayed up the previous night, and indeed, I didn't much feel like going to bed until past midnight. I then slept well, the Zeo giving me a 108 ZQ (not an all-time record, but still unusual).
Statements made about, and products sold through, this web site have not been evaluated by the Food and Drug Administration. They are not intended to diagnose, treat, cure, or prevent any diseases. Consult a qualified health care practitioner before taking any substance for medicinal purposes. California Proposition 65 WARNING: Some products in this store contain progesterone, a chemical known to the State of California to cause cancer. Consult with your physician before using this product.
Theanine can also be combined with caffeine, as the two work in synergy to increase memory, reaction time, and mental endurance. The best part about theanine is that it is one of the safest nootropics and is readily available in the form of capsules. A natural option would be to use an excellent green tea brand made from tea grown in the shade, because theanine is then abundantly present in it.
Began double-blind trial. Today I took one pill blindly at 1:53 PM. at the end of the day when I have written down my impressions and guess whether it was one of the Adderall pills, then I can look in the baggy and count and see whether it was. there are many other procedures one can take to blind oneself (have an accomplice mix up a sequence of pills and record what the sequence was; don't count & see but blindly take a photograph of the pill each day, etc.) Around 3, I begin to wonder whether it was Adderall because I am arguing more than usual on IRC and my heart rate seems a bit high just sitting down. 6 PM: I've started to think it was a placebo. My heart rate is back to normal, I am having difficulty concentrating on long text, and my appetite has shown up for dinner (although I didn't have lunch, I don't think I had lunch yesterday and yesterday the hunger didn't show up until past 7). Productivity wise, it has been a normal day. All in all, I'm not too sure, but I think I'd guess it was Adderall with 40% confidence (another way of saying placebo with 60% confidence). When I go to examine the baggie at 8:20 PM, I find out… it was an Adderall pill after all. Oh dear. One little strike against Adderall that I guessed wrong. It may be that the problem is that I am intrinsically a little worse today (normal variation? come down from Adderall?).
Smart drugs offer significant memory enhancing benefits. Clinical studies of the best memory pills have shown gains to focus and memory. Individuals seek the best quality supplements to perform better for higher grades in college courses or become more efficient, productive, and focused at work for career advancement. It is important to choose a high quality supplement to get the results you want.
I have personally found that, with respect to the nootropic effects of all the racetams, whilst I have experienced improvements in concentration and working capacity / productivity, I have never experienced a noticeable ongoing improvement in memory. Coluracetam is the only racetam I have taken wherein I noticed an improvement in memory, with regard to both short-term and medium-term memory. To put matters into perspective, the memory improvement has been mild, yet still significant; whereas I have experienced no such improvement at all with the other racetams.
Phenserine, as well as the drugs Aricept and Exelon, which are already on the market, work by increasing the level of acetylcholine, a neurotransmitter that is deficient in people with the disease. A neurotransmitter is a chemical that allows communication between nerve cells in the brain. In people with Alzheimer's disease, many brain cells have died, so the hope is to get the most out of those that remain by flooding the brain with acetylcholine.
Next, if these theorized safe and effective pills don't just get you through a test or the day's daily brain task but also make you smarter, whatever smarter means, then what? Where's the boundary between genius and madness? If Einstein had taken such drugs, would he have created a better theory of gravity? Or would he have become delusional, chasing quantum ghosts with no practical application, or worse yet, string theory. (Please use "string theory" in your subject line for easy sorting of hate mail.)
After trying out 2 6lb packs between 12 September & 25 November 2012, and 20 March & 20 August 2013, I have given up on flaxseed meal. They did not seem to go bad in the refrigerator or freezer, and tasted OK, but I had difficulty working them into my usual recipes: it doesn't combine well with hot or cold oatmeal, and when I tried using flaxseed meal in soups I learned flaxseed is a thickener which can give soup the consistency of snot. It's easier to use fish oil on a daily basis.
Feeling behind, I resolved to take some armodafinil the next morning, which I did - but in my hurry I failed to recall that 200mg armodafinil was probably too much to take during the day, with its long half life. As a result, I felt irritated and not that great during the day (possibly aggravated by some caffeine - I wish some studies would be done on the possible interaction of modafinil and caffeine so I knew if I was imagining it or not). Certainly not what I had been hoping for. I went to bed after midnight (half an hour later than usual), and suffered severe insomnia. The time wasn't entirely wasted as I wrote a short story and figured out how to make nicotine gum placebos during the hours in the dark, but I could have done without the experience. All metrics omitted because it was a day usage.
I can only talk from experience here, but I can remember being a teenager and just being a straight-up dick to any recruiters that came to my school. And I came from a military family. I'd ask douche-bag questions, I'd crack jokes like so... don't ask, don't tell only applies to everyone BUT the Navy, right? I never once considered enlisting because some 18 or 19 year old dickhead on hometown recruiting was hanging out in the cafeteria or hallways of my high school. Weirdly enough, however, what kinda put me over the line and made me enlist was the location of the recruiters' office. In the city I was living in at the time, the Armed Forces Recruitment Center was next door to an all-ages punk venue that I went to nearly every weekend. I spent many Saturday nights standing in a parking lot after a show, all bruised and bloody from a pit, smoking a joint, and staring at the windows of the closed recruiters' office. Propaganda posters of guys in full-battle-rattle obscured by a freshly scrawled Anarchy symbol or a collage of band stickers over the glass. I think trying to recruit kids from school has a child-molester vibe to it. At least it did for me. But the recruiters definitely being right next to a bunch of drunk and high punks, that somehow made it seem more like a truly bad-ass option. Like, sure, I'll totally join. After all, these guys don't run from the horde of skins and pins that descend every weekend like everyone else, they must be bad-ass.
Accordingly, we searched the literature for studies in which MPH or d-AMP was administered orally to nonelderly adults in a placebo-controlled design. Some of the studies compared the effects of multiple drugs, in which case we report only the results of stimulant–placebo comparisons; some of the studies compared the effects of stimulants on a patient group and on normal control subjects, in which case we report only the results for control subjects. The studies varied in many other ways, including the types of tasks used, the specific drug used, the way in which dosage was determined (fixed dose or weight-dependent dose), sample size, and subject characteristics (e.g., age, college sample or not, gender). Our approach to the classic splitting versus lumping dilemma has been to take a moderate lumping approach. We group studies according to the general type of cognitive process studied and, within that grouping, the type of task. The drug and dose are reported, as well as sample characteristics, but in the absence of pronounced effects of these factors, we do not attempt to make generalizations about them.
Omega-3 fatty acids: DHA and EPA – two Cochrane Collaboration reviews on the use of supplemental omega-3 fatty acids for ADHD and learning disorders conclude that there is limited evidence of treatment benefits for either disorder.[42][43] Two other systematic reviews noted no cognition-enhancing effects in the general population or middle-aged and older adults.[44][45]
|
CommonCrawl
|
The European Physical Journal C
December 2015, 75:600
Transverse-target-spin asymmetry in exclusive \(\omega \)-meson electroproduction
The HERMES Collaboration
A. Airapetian
N. Akopov
Z. Akopov
E. C. Aschenauer
W. Augustyniak
A. Avetissian
S. Belostotski
H. P. Blok
A. Borissov
V. Bryzgalov
G. P. Capitani
G. Ciullo
M. Contalbrigo
P. F. Dalpiaz
W. Deconinck
R. De Leo
E. De Sanctis
M. Diefenthaler
P. Di Nezza
M. Düren
G. Elbakian
F. Ellinghaus
L. Felawka
S. Frullani
D. Gabbert
G. Gapienko
V. Gapienko
V. Gharibyan
F. Giordano
S. Gliske
D. Hasch
M. Hoek
Y. Holler
A. Ivanilov
H. E. Jackson
S. Joosten
R. Kaiser
G. Karyan
T. Keri
E. Kinney
A. Kisselev
V. Korotkov
V. Kozlov
V. G. Krivokhijine
L. Lagamba
L. Lapikás
I. Lehmann
P. Lenisa
W. Lorenzon
B.-Q. Ma
S. I. Manaenkov
Y. Mao
B. Marianski
H. Marukyan
Y. Miyachi
A. Movsisyan
V. Muccifora
Y. Naryshkin
A. Nass
M. Negodaev
W.-D. Nowak
L. L. Pappalardo
R. Perez-Benito
A. Petrosyan
P. E. Reimer
A. R. Reolon
C. Riedl
K. Rith
G. Rosner
A. Rostomyan
J. Rubin
D. Ryckbosch
Y. Salomatin
G. Schnell
B. Seitz
T.-A. Shibata
M. Statera
E. Steffens
J. J. M. Steijger
F. Stinzing
S. Taroian
A. Terkulov
R. Truty
A. Trzcinski
M. Tytgat
Y. Van Haarlem
C. Van Hulse
V. Vikhrov
I. Vilardi
C. Vogel
S. Wang
S. Yaschenko
S. Yen
B. Zihlmann
P. Zupranski
Regular Article - Experimental Physics
First Online: 17 December 2015
Hard exclusive electroproduction of \(\omega \) mesons is studied with the HERMES spectrometer at the DESY laboratory by scattering 27.6 GeV positron and electron beams off a transversely polarized hydrogen target. The amplitudes of five azimuthal modulations of the single-spin asymmetry of the cross section with respect to the transverse proton polarization are measured. They are determined in the entire kinematic region as well as for two bins in photon virtuality and momentum transfer to the nucleon. Also, a separation of asymmetry amplitudes into longitudinal and transverse components is done. These results are compared to a phenomenological model that includes the pion pole contribution. Within this model, the data favor a positive \(\pi \omega \) transition form factor.
Keywords: Transition form factor · Deeply virtual Compton scattering · Azimuthal modulation · Exclusive production · Asymmetry amplitude
F. Stinzing: Deceased.
In the framework of quantum chromodynamics (QCD), hard exclusive meson leptoproduction on a longitudinally or transversely polarized proton target provides important information about the spin structure of the nucleon. The process amplitude is a convolution of the lepton–quark hard-scattering subprocess amplitude with soft hadronic matrix elements describing the structure of the nucleon and that of the meson. Here, factorization is proven rigorously only if the lepton–quark interaction is mediated by a longitudinally polarized virtual photon [1, 2]. The soft hadronic matrix elements describing the nucleon contain generalized parton distributions (GPDs) to parametrize its partonic structure. Hard exclusive production of vector mesons is described by GPDs \(H^f\) and \(E^f\), where f denotes a quark flavor or a gluon. These "unpolarized", i.e., parton–helicity–nonflip distributions describe the photon–parton interaction with conservation and flip of nucleon helicity, respectively. Both are of special interest, as they are related to the total angular momentum of partons, \(J^f\) [3]. The GPDs \(H^f\) are well constrained by existing experimental data. The GPDs \(E^{u}\) and \(E^{d}\) for up and down quarks, respectively, are partially constrained by nucleon form-factor data [4], while experimental information on sea-quark GPD \(E^{\text {sea}}\) and gluon GPD \(E^{g}\) is scarce. For a recent review on the status of GPD determinations, see Ref. [5]. In contrast to leptoproduction of vector mesons with an unpolarized target, which is mainly sensitive to GPDs \(H^f\), vector-meson leptoproduction off a transversely polarized nucleon is sensitive to the interference between two amplitudes containing \(H^f\) and \(E^f\), respectively, and thus opens access to \(E^f\).
For a transversely polarized virtual photon mediating the lepton–quark interaction, there exists no rigorous proof of collinear factorization. In the QCD-inspired phenomenological "GK" model [6, 7, 8] however, factorization is also assumed for the transverse amplitudes. In this so-called modified perturbative approach [9], infrared singularities occurring in these amplitudes are regularized by quark transverse momenta in the subprocess, while the partons are still emitted and reabsorbed collinearly by the nucleon. By using the quark transverse momenta in the subprocess, the transverse size of the meson is effectively taken into account. Using this approach, the GK model describes cross sections, spin density matrix elements (SDMEs), and spin asymmetries in exclusive vector-meson production for values of Bjorken-x below 0.2 [6, 7, 8]. The GPDs parametrized in the GK model were used in calculations of deeply virtual Compton scattering (DVCS) amplitudes, which led to good agreement with most DVCS measurements over a wide kinematic range [10]. In the most recent version of the model, the \(\gamma ^* \pi \omega \) vertex function in the one-pion-exchange contribution is identified with the \(\pi \omega \) transition form factor [11]. Its magnitude is determined in a model-dependent way, while its unknown sign may be determined from comparisons with experimental data on spin asymmetries in hard exclusive leptoproduction.
Measurements of hard-exclusive production of various types of mesons are complementary to DVCS, as they allow access to various flavor combinations of GPDs. Previous HERMES publications on measurements of azimuthal transverse-target-spin asymmetries include results on exclusive production of \(\rho ^0\) [12] and \(\pi ^+\) mesons [13] as well as on DVCS [14].
In the present paper, the azimuthal modulations of the transverse-target-spin asymmetry in the cross section of exclusive electroproduction of \(\omega \) mesons are studied. The available data allow for an estimation of the kinematic dependence of the measured asymmetry amplitudes on photon virtuality and four-momentum transfer to the nucleon. The measured asymmetry amplitudes are compared to the most recent calculations of the GK model using either possible sign of the \(\pi \omega \) transition form factor.
2 Data collection and process identification
The data were accumulated with the HERMES forward spectrometer [15] during the running period 2002–2005. The 27.6 GeV positron (electron) beam was scattered off a transversely polarized hydrogen target, with the average magnitude \(P_{T}\) of the proton-polarization component \(\mathbf {P_T}\) perpendicular to the beam direction being equal to 0.72. The lepton beam was longitudinally polarized, and in the analysis the data set is beam-helicity balanced. The \(\omega \) meson is produced in the reaction
$$\begin{aligned} e + p \rightarrow e + p + \omega , \end{aligned}$$
with a branching ratio \(Br = 89.1~ \%\) for the \(\omega \) decay
$$\begin{aligned} \omega \rightarrow \pi ^+ + \pi ^- + \pi ^0,\quad ~\pi ^0 \rightarrow 2\gamma . \end{aligned}$$
Two-photon invariant-mass distribution after application of all criteria to select exclusively produced \(\omega \) mesons. The Breit–Wigner fit to the mass distribution is shown as a continuous line and the vertical dashed line indicates the PDG value of the \(\pi ^{0}\) mass [17]
The same requirements to select exclusively produced \(\omega \) mesons as in Ref. [16] are applied. The candidate events for exclusive \(\omega \)-meson production are required to have exactly three charged tracks, i.e., the scattered lepton and two oppositely charged pions, and at least two clusters in the calorimeter not associated with a charged track. The \(\pi ^0\) meson is reconstructed from two photon clusters with an invariant mass \(M(\gamma \gamma )\) in the interval 0.11 \(\hbox {GeV}< M(\gamma \gamma ) <0.16 \) GeV. Its distribution is shown in Fig. 1, where the fit with a Breit–Wigner function yields \(136.1\pm 0.8\) MeV (\(19\pm 2\) MeV) for the mass (width). The charged hadrons and leptons are identified through the combined responses of four particle-identification detectors [15]. The three-pion invariant mass is calculated as \( M(\pi ^{+}\pi ^{-}\pi ^{0}) = \) \(\sqrt{(p_{\pi ^+} + p_{\pi ^-} + p_{\pi ^0})^2}\), where \(p_{\pi }\) are the four-momenta of the charged and neutral pions. Events containing \(\omega \) mesons are selected through the requirement 0.71 \(\hbox {GeV}< M(\pi ^{+}\pi ^{-}\pi ^{0}) < 0.87\) GeV.
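For illustration, a minimal Python sketch of how the three-pion invariant mass can be computed from reconstructed four-momenta in the convention used above; the example four-vectors are invented placeholders, not HERMES data.

```python
import numpy as np

def invariant_mass(*four_momenta):
    """Invariant mass of a particle system, with each p = (E, px, py, pz) in GeV."""
    total = np.sum(four_momenta, axis=0)
    e, px, py, pz = total
    return np.sqrt(max(e**2 - px**2 - py**2 - pz**2, 0.0))

# Hypothetical pion four-momenta (GeV); values are illustrative only.
p_pip = np.array([1.20, 0.30, 0.10, 1.15])
p_pim = np.array([0.90, -0.20, 0.05, 0.86])
p_pi0 = np.array([0.70, 0.05, -0.10, 0.67])

print(f"M(pi+ pi- pi0) = {invariant_mass(p_pip, p_pim, p_pi0):.3f} GeV")
```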
Further event-selection requirements are the following:
1.0 \(\hbox {GeV}^2< Q^2 < 10.0~\hbox {GeV}^2\), where \(Q^2\) represents the negative square of the virtual-photon four-momentum. The lower value is applied in order to facilitate the application of perturbative QCD, while the upper value delimits the measured phase space;
\(-t^{\prime } < 0.2~\hbox {GeV}^2\) in order to improve exclusivity, where \(t'=t - t_{min}\), t is the squared four-momentum transfer to the nucleon and \(-t_{min}\) represents the smallest kinematically allowed value of \(-t\) at fixed virtual-photon energy and \(Q^{2}\);
\(W> 3\) GeV in order to be outside of the resonance region and \(W < 6.3\) GeV in order to clearly delimit the kinematic phase space, where W is the invariant mass of the photon-nucleon system;
the scattered-lepton energy lies above 3.5 GeV in order to avoid a bias originating from the trigger.
Missing-energy distribution for exclusive \(\omega \)-meson production. The unshaded histogram shows experimental data, while the shaded area shows the distribution obtained from a PYTHIA simulation of the SIDIS background. The vertical dashed line denotes the upper limit of the exclusive region
In order to isolate exclusive production, the energy not accounted for by the leptons and the three pions must be zero within the experimental resolution. We require the missing energy to be in the interval \(-1.0~\hbox {GeV}< \Delta E < 0.8~\hbox {GeV}\), which is referred to as "exclusive region" in the following. Here, the missing energy is calculated as \( \Delta E = \frac{M^{2}_{X} -M^{2}_{p}}{2 M_{p}}\), with \(M_{p}\) being the proton mass and \(M^{2}_{X}=(p+q-p_{\pi ^+}-p_{\pi ^-}-p_{\pi ^0})^{2}\) the missing-mass squared, where p and q are the four-momenta of target nucleon and virtual photon, respectively. The distribution of the missing energy \(\Delta E\) is shown in Fig. 2. It exhibits a clearly visible exclusive peak centered about \(\Delta E= 0\) . The shaded area represents semi-inclusive deep-inelastic scattering (SIDIS) background events obtained from a PYTHIA [18] Monte-Carlo simulation that is normalized to the data in the region 2 \(\hbox {GeV}< \Delta E< 20\) GeV. The simulation is used to determine the fraction of background under the exclusive peak. This fraction is calculated as the ratio of the number of background events to the total number of events and amounts to about \(21~\%\).
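A short sketch of the missing-energy calculation described above, using the same four-vector convention; the proton mass is the PDG value, while the target, virtual-photon, and pion four-momenta below are placeholders rather than measured events.

```python
import numpy as np

M_P = 0.938272  # proton mass in GeV

def missing_energy(p_target, q_photon, *detected):
    """Delta E = (M_X^2 - M_p^2) / (2 M_p), with M_X^2 = (p + q - sum(detected))^2."""
    residual = p_target + q_photon - np.sum(detected, axis=0)
    e, px, py, pz = residual
    m_x_sq = e**2 - px**2 - py**2 - pz**2   # may be slightly negative due to resolution
    return (m_x_sq - M_P**2) / (2.0 * M_P)

# Placeholder four-momenta (GeV): target proton at rest, an illustrative virtual
# photon, and three detected pions.
p_target = np.array([M_P, 0.0, 0.0, 0.0])
q_photon = np.array([2.85, 0.15, 0.05, 2.70])
pions = [np.array([1.20, 0.30, 0.10, 1.15]),
         np.array([0.90, -0.20, 0.05, 0.86]),
         np.array([0.70, 0.05, -0.10, 0.67])]

print(f"Delta E = {missing_energy(p_target, q_photon, *pions):.2f} GeV")
```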
After application of all these constraints, the sample contains 279 exclusively produced \(\omega \) mesons. This data sample is referred to in the following as data in the "entire kinematic region". The \(\pi ^{+}\pi ^{-}\pi ^{0}\) invariant-mass distribution for this data sample is shown in Fig. 3. A Breit–Wigner fit yields \(785\pm 2\) MeV (\(52\pm 5\) MeV) for the mass (width).
The \( \pi ^+ \pi ^- \pi ^0\) invariant-mass distribution after application of all criteria to select exclusively produced \(\omega \) mesons. The Breit–Wigner fit to the mass distribution is shown as a continuous line and the vertical dashed line indicates the PDG value of the \(\omega \) mass [17]
3 Extraction of the asymmetry amplitudes
The cross section for hard exclusive leptoproduction of a vector meson on a transversely polarized proton target, written in terms of polarized photo-absorption cross sections and interference terms, is given by Eq. (34) in Ref. [19]. In this equation, the transverse-target-spin asymmetry \(A_{UT}\) is decomposed into a Fourier series of terms involving \(\sin (m\phi \pm \phi _{S})\), with \(m=0,\ldots ,3\). The angles \(\phi \) and \(\phi _S\) are the azimuthal angles of the \(\omega \)-production plane and of the component \(\mathbf {S}_{\perp }\) of the transverse nucleon polarization vector that is orthogonal to the virtual-photon direction. They are measured around the virtual-photon direction and with respect to the lepton-scattering plane (see Fig. 4). These definitions are in accordance with the Trento Conventions [20]. For the HERMES kinematics and acceptance in exclusive \(\omega \) production, \(\sin \theta _{\gamma ^{*}}<0.1\) and \(\cos \theta _{\gamma ^{*}}>0.99\), which can be approximated by \(\sin \theta _{\gamma ^{*}} \approx 0\) and \(\cos \theta _{\gamma ^{*}} \approx 1\). Here, \(\theta _{\gamma ^{*}}\) is the angle between the lepton-beam and virtual-photon directions.
Lepton-scattering and \(\omega \)-production planes together with the azimuthal angles \(\phi \) and \(\phi _S\)
In this approximation, the angular-dependent part of Eq. (34) in Ref. [19] for an unpolarized beam reads:
$$\begin{aligned} \mathcal{W}(\phi ,\phi _{S})= & {} 1 +A^{\cos (\phi )}_{UU}\cos (\phi )+ A^{\cos (2\phi )}_{UU}\cos (2\phi )\nonumber \\&+S_{\perp } [A^{\sin (\phi +\phi _{S})}_{UT}\sin (\phi +\phi _{S}) \nonumber \\&+A^{\sin (\phi -\phi _{S})}_{UT}\sin (\phi -\phi _{S}) \nonumber \\&+A^{\sin (\phi _{S})}_{UT}\sin (\phi _{S}) \nonumber \\&+A^{\sin (2\phi -\phi _{S})}_{UT}\sin (2\phi -\phi _{S}) \nonumber \\&+A^{\sin (3\phi -\phi _{S})}_{UT}\sin (3\phi -\phi _{S})], \end{aligned}$$
where \(S_{\perp }=|\mathbf {S}_{\perp }|\). Here, \(A_{UU}\) and \(A_{UT}\) denote the amplitudes of the corresponding cosine and sine modulations as given in their superscripts. The first letter in the subscript denotes unpolarized beam and the second letter U (T) denotes unpolarized (transversely polarized) target. The above approximation in conjunction with the additional factor \(\epsilon /2\) \(\approx \)0.4, where \(\epsilon \) is the ratio of fluxes of longitudinal and transverse virtual photons, allows one to neglect the contribution of the \(\sin (2\phi +\phi _S)\) modulation, appearing in Eq. (34) of Ref. [19]. This approximation also makes the angular dependence of \(S_{\perp }\) disappear (see Eq. (8) of Ref. [19]), and \(S_{\perp }\simeq P_T=0.72\) is used in the following. Note that the modulation \(\sin (\phi -\phi _{S})\) is the only one that appears at leading twist.
For exclusive production of \(\omega \) mesons decaying into three pions, the angular distribution of the latter can be decomposed into parts corresponding to longitudinally (L) and transversely (T) polarized \(\omega \) mesons:
$$\begin{aligned} \mathcal{W}(\phi ,\phi _S,\theta )= & {} \frac{3}{2}\, r^{04}_{00} \, \cos ^2(\theta ) \, w_L(\phi ,\phi _S)\nonumber \\&+ \frac{3}{4}\, (1-r^{04}_{00}) \, \sin ^2(\theta ) \, w_T(\phi ,\phi _S). \end{aligned}$$
Here, \(\theta \) is the polar angle of the unit vector normal to the \(\omega \) decay plane in the \(\omega \)-meson rest frame, with the z-axis aligned opposite to the outgoing nucleon momentum [16]. The pre-factors \(r^{04}_{00}\) and (\(1 - r^{04}_{00}\)) represent the fractional contribution to the full cross section by longitudinally and transversely polarized \(\omega \) mesons, respectively [16]. The first (second) term on the right-hand side of Eq. (4) represents the angular distribution of the longitudinally (transversely) polarized \(\omega \) mesons, with
$$\begin{aligned} w_{L}(\phi ,\phi _S)&= 1 + A_{UU,L}(\phi ) + S_{\perp } A_{UT,L}(\phi ,\phi _S),\nonumber \\ w_{T}(\phi ,\phi _S)&= 1 + A_{UU,T}(\phi ) + S_{\perp } A_{UT,T}(\phi ,\phi _S). \end{aligned}$$
The functions \(A_{UU,K}(\phi )\) and \(A_{UT,K}(\phi ,\phi _S)\), with \(K=L\) and \(K=T\) denoting longitudinal-separated and transverse-separated contributions, respectively, are decomposed into a Fourier series in complete analogy to Eq. (3).
The function \(\mathcal{W}(\phi ,\phi _{S})\) is fitted to the experimental angular distribution using an unbinned maximum likelihood method. Here and in the following, the angle \(\theta \) has to be added to the argument list of the function \(\mathcal {W}\), when applicable. The function to be minimized is the negative of the logarithm of the likelihood function:
$$\begin{aligned} -\ln L(\mathcal{R})=-\sum _{i=1}^{N}\ln \frac{\mathcal{W}(\mathcal{R};\phi ^{i},\phi _{S}^{i})}{\widetilde{\mathcal N}(\mathcal{R})}. \end{aligned}$$
Here, \(\mathcal R\) denotes the set of 7 asymmetry amplitudes of the unseparated fit or 14 asymmetry amplitudes of the longitudinal-to-transverse separated fit and the sum runs over the N experimental-data events. The normalization factor
$$\begin{aligned} \widetilde{\mathcal N}(\mathcal{R})=\sum _{j=1}^{N_{MC}}\mathcal{W}(\mathcal{R};\phi ^{j},\phi _{S}^{j}) \end{aligned}$$
is determined using \(N_{MC}\) events from a PYTHIA Monte-Carlo simulation, which are generated according to an isotropic angular distribution and processed in the same way as experimental data. The number of Monte-Carlo events in the exclusive region amounts to about 40,000.
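To illustrate the structure of such a fit, the following Python sketch implements the unbinned likelihood with the Monte-Carlo normalization defined above, for the seven-amplitude unseparated case; it is a schematic re-implementation under simplified assumptions, not the analysis code used for the published results.

```python
import numpy as np
from scipy.optimize import minimize

S_PERP = 0.72  # average transverse target polarization quoted in the text

def w_model(params, phi, phi_s):
    """Angular weight W(phi, phi_S) for an unpolarized beam (unseparated case)."""
    a_cos1, a_cos2, a_pp, a_pm, a_s, a_2pm, a_3pm = params
    return (1.0
            + a_cos1 * np.cos(phi) + a_cos2 * np.cos(2 * phi)
            + S_PERP * (a_pp * np.sin(phi + phi_s)
                        + a_pm * np.sin(phi - phi_s)
                        + a_s * np.sin(phi_s)
                        + a_2pm * np.sin(2 * phi - phi_s)
                        + a_3pm * np.sin(3 * phi - phi_s)))

def neg_log_likelihood(params, data_phi, data_phis, mc_phi, mc_phis):
    """-ln L, with the acceptance normalization estimated from isotropic MC events."""
    norm = np.sum(w_model(params, mc_phi, mc_phis))
    w_data = w_model(params, data_phi, data_phis)
    if np.any(w_data <= 0.0):          # keep the event weights positive
        return np.inf
    return -np.sum(np.log(w_data / norm))

def fit_amplitudes(data_phi, data_phis, mc_phi, mc_phis):
    """Fit the 7 asymmetry amplitudes to arrays of (phi, phi_S) for data and MC."""
    start = np.zeros(7)
    result = minimize(neg_log_likelihood, start,
                      args=(data_phi, data_phis, mc_phi, mc_phis),
                      method="Nelder-Mead")
    return result.x
```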
Each asymmetry amplitude is corrected for the background asymmetry according to
$$\begin{aligned} A_{corr}=\frac{A_{meas}-f_{bg}A_{bg}}{1-f_{bg}}, \end{aligned}$$
where \(A_{corr}\) is the corrected asymmetry amplitude, \(A_{meas}\) is the measured asymmetry amplitude, \(f_{bg}\) is the fraction of the SIDIS background and \(A_{bg}\) is its asymmetry amplitude. While \(A_{meas}\) is evaluated in the exclusive region, \(A_{bg}\) is obtained by extracting the asymmetry from the experimental SIDIS background in the region 2 \(\hbox {GeV}< \Delta E < 20\) GeV.
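As a concrete illustration of the correction formula above, a small helper with made-up numbers (the background fraction of 0.21 mirrors the fraction quoted earlier, while the amplitude values are arbitrary):

```python
def correct_for_background(a_meas, a_bg, f_bg):
    """A_corr = (A_meas - f_bg * A_bg) / (1 - f_bg)."""
    return (a_meas - f_bg * a_bg) / (1.0 - f_bg)

# Illustrative numbers only: a measured amplitude of -0.05, a background
# amplitude of 0.02, and the ~21% SIDIS background fraction quoted in the text.
print(correct_for_background(a_meas=-0.05, a_bg=0.02, f_bg=0.21))
```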
The systematic uncertainty is obtained by adding in quadrature two components. The first one, \(\Delta A_{corr} = A_{corr}-A_{meas}\), is due to the correction by background amplitudes. In the most conservative approach adopted here, it is estimated as the difference between the asymmetry amplitudes \(A_{corr}\) and \(A_{meas}\). This approach also covers the small uncertainty on \(f_{bg}\). The second component accounts for effects from detector acceptance, efficiency, smearing, and misalignment. It is determined as described in Ref. [16]. An additional scale uncertainty arises because of the systematic uncertainty on the target polarization, which amounts to 8.2 %.
The amplitudes of the five sine and two cosine modulations as determined in the entire kinematic region. The first uncertainty is statistical, the second systematic. The results receive an additional 8.2 % scale uncertainty corresponding to the target polarization uncertainty
\(A^{\sin (\phi +\phi _{S})}_{UT}\)
\(-\)0.06 \(\pm \) 0.20 \(\pm \) 0.02
\(A^{\sin (\phi -\phi _{S})}_{UT}\)
\(A^{\sin (\phi _{S})}_{UT}\)
0.26 \(\pm \) 0.27 \(\pm \) 0.05
\(A^{\sin (2\phi -\phi _{S})}_{UT}\)
\(A^{\cos (\phi )}_{UU}\)
\(A^{\cos (2\phi )}_{UU}\)
The definition of intervals and the mean values of the kinematic variables
\(\langle Q^{2} \rangle \) [GeV\(^2\)]
\(\langle -t'\rangle \) [GeV\(^2\)]
\(\langle W \rangle \) [GeV]
\(\langle x_{B}\rangle \)
Entire kinematic bin
1.00 GeV\(^2<Q^{2}< 1.85\) GeV\(^2\)
1.85 GeV\(^2<Q^{2}< 10.00\) GeV\(^2\)
0.00 GeV\(^2<-t'< 0.07\) GeV\(^2\)
Results on the kinematic dependences of the five asymmetry amplitudes \(A_{UT}\) and two amplitudes \(A_{UU}\). The first two columns correspond to the \(-t'\) intervals \(0.00 - 0.07 - 0.20\) GeV\(^2\) and the last two columns to the \( Q^{2}\) intervals \(1.00 - 1.85 - 10.00\) GeV\(^2\). The first uncertainty is statistical, the second systematic. The results receive an additional 8.2 % scale uncertainty corresponding to the target polarization uncertainty
\(\langle -t'\rangle \) \(=\) 0.035 GeV\(^{2}\)
\(\langle Q^{2}\rangle \) \(=\) 1.39 GeV\(^{2}\)
The five amplitudes describing the strength of the sine modulations of the cross section for hard exclusive \(\omega \)-meson production. The full circles show the data in two bins of \(Q^2\) or \(-t'\). The open squares represent the results obtained for the entire kinematic region. The inner error bars represent the statistical uncertainties, while the outer ones indicate the statistical and systematic uncertainties added in quadrature. The results receive an additional 8.2 % scale uncertainty corresponding to the target polarization uncertainty. The solid (dash-dotted) lines show the calculation of the GK model [11, 21] for a positive (negative) \(\pi \omega \) transition form factor, and the dashed lines are the model results without the pion pole
The results for the five \(A_{UT}\) and two \(A_{UU}\) amplitudes, as determined in the entire kinematic region, are shown in Table 1. These results are presented in Table 3 in two intervals of \(Q^2\) and \(-t'\), with the definition of intervals together with the average values of the respective kinematic variables given in Table 2. The results for the five \(A_{UT}\) amplitudes are also shown in Fig. 5, in two rows of five panels each, where the upper and lower rows show the \(Q^2\) and \(-t'\) dependences, respectively. Each panel shows as two filled circles the results in two kinematic bins, and as one open square the result in the entire kinematic region. The results are compared to calculations of the GK model [11, 21], for both signs of the \(\pi \omega \) form factor. For completeness, also the model prediction without the pion–pole contribution is included.
The model predictions differ substantially upon sign change of the \(\pi \omega \) form factor for the two amplitudes \(A^{\sin (\phi -\phi _{S})}_{UT}\) and \(A^{\sin (\phi _{S})}_{UT}\), in particular when considering the \(-t'\) dependence. The data seem to favor a positive \(\pi \omega \) transition form factor.
Asymmetry amplitudes can be written in terms of SDMEs, as shown in the "Appendix". By using Eqs. (9) and (10) and the earlier HERMES results on \(\omega \) SDMEs [16],
$$\begin{aligned} A_{UU}^{\cos (\phi )}&=-0.13\pm 0.04\pm 0.08\\ A_{UU}^{\cos (2\phi )}&=-0.03\pm 0.04\pm 0.01 \end{aligned}$$
are obtained, which are consistent within uncertainties with the results shown in Table 1.
The cross section for exclusive production of transversely polarized \(\omega \) mesons dominates that for longitudinally polarized ones [16]. This is the reason why the 14-parameter fit used here leads to still acceptable uncertainties for the results in the entire kinematic region on the transverse-separated asymmetry amplitudes, while those for the longitudinal-separated ones are so large that any interpretation is precluded. Also, kinematic dependences can no longer be studied due to the large uncertainties. Therefore, for the transverse-separated asymmetry amplitudes only the results in the entire kinematic region are shown in Fig. 6 and Table 4 together with the corresponding predictions of the GK model [11, 21]. Here, the large uncertainties prevent any conclusion on the sign of the \(\pi \omega \) transition form factor.
As Fig. 5, but only for transversely polarized \(\omega \) mesons
Results on the five asymmetry amplitudes \(A_{UT}\) and two amplitudes \(A_{UU}\) in the entire kinematic region, but separated into longitudinal and transverse parts. The first column (\(K=L\)) gives the results for the longitudinal components, while the second column, (\(K=T\)), shows the results for the transverse components. The first uncertainty is statistical, the second systematic. The results receive an additional 8.2 % scale uncertainty corresponding to the target polarization uncertainty
Longitudinal (\(K=L\))
Transverse (\(K=T\))
\(A^{\sin (\phi +\phi _{S})}_{UT,K}\)
\(A^{\sin (\phi -\phi _{S})}_{UT,K}\)
\(A^{\sin (\phi _{S})}_{UT,K}\)
\(A^{\sin (2\phi -\phi _{S})}_{UT,K}\)
\(A^{\cos (\phi )}_{UU,K}\)
\(A^{\cos (2\phi )}_{UU,K}\)
In this paper, results are reported on exclusive \(\omega \) electroproduction off transversely polarized protons in the kinematic region 1 GeV\(^2 < Q^{2}< 10\) GeV\(^2\) and 0.0 GeV\(^2 < -t' < 0.2\) GeV\(^2\). The amplitudes of seven azimuthal modulations of the cross section for unpolarized beam are determined, i.e., of two cosine modulations for unpolarized target and five sine modulations for transversely polarized target. Results are presented for the entire kinematic region as well as alternatively in two bins of \(-t'\) or \(Q^{2}\). Additionally, a separation into asymmetry amplitudes for the production of longitudinally and transversely polarized \(\omega \) mesons is done. A comparison of the extracted asymmetry amplitudes to recent calculations of the phenomenological model of Goloskokov and Kroll favors a positive sign of the \(\pi \omega \) transition form factor.
We are grateful to Sergey Goloskokov and Peter Kroll for fruitful discussions on the comparison between our data and their model calculations. We gratefully acknowledge the DESY management for its support and the staff at DESY and the collaborating institutions for their significant effort. This work was supported by the Ministry of Education and Science of Armenia; the FWO-Flanders and IWT, Belgium; the Natural Sciences and Engineering Research Council of Canada; the National Natural Science Foundation of China; the Alexander von Humboldt Stiftung, the German Bundesministerium für Bildung und Forschung (BMBF), and the Deutsche Forschungsgemeinschaft (DFG); the Italian Istituto Nazionale di Fisica Nucleare (INFN); the MEXT, JSPS, and G-COE of Japan; the Dutch Foundation for Fundamenteel Onderzoek der Materie (FOM); the Russian Academy of Science and the Russian Federal Agency for Science and Innovations; the Basque Foundation for Science (IKERBASQUE) and the UPV/EHU under program UFI 11/55; the U.K. Engineering and Physical Sciences Research Council, the Science and Technology Facilities Council, and the Scottish Universities Physics Alliance; as well as the U.S. Department of Energy (DOE) and the National Science Foundation (NSF).
Appendix: Relations between azimuthal asymmetry amplitudes and spin-density matrix elements
The full information on vector-meson leptoproduction is contained in the differential cross section \(\frac{d^3 \sigma }{dQ^2dtdx}\) and the SDMEs in the Diehl representation [22]. Therefore, the azimuthal asymmetry amplitudes can be expressed in terms of the SDMEs. For scattering off an unpolarized target, the asymmetry amplitudes can be written in terms of the Diehl SDMEs \(u^{\mu _1\mu _2}_{\lambda _1 \lambda _2}\) or alternatively in terms of the Schilling–Wolf SDMEs \(r^{n}_{ij}\) [23] as
$$\begin{aligned} A_{UU}^{\cos {\phi }}&=-2\sqrt{\epsilon (1+\epsilon )}\,\mathrm {Re}[u_{0+}]\nonumber \\&=\sqrt{2\epsilon (1+\epsilon )}\,[2r^5_{11}+r^5_{00}], \end{aligned}$$
$$\begin{aligned} A_{UU}^{\cos {2\phi }}&=-\epsilon \,\mathrm {Re}[u_{-+}] \nonumber \\&=-\epsilon \, [2r^1_{11}+r^1_{00}]. \end{aligned}$$
Here, the abbreviated notation
$$\begin{aligned} u_{\lambda _1 \lambda _2}=u_{\lambda _1 \lambda _2}^{++}+u_{\lambda _1 \lambda _2}^{--}+ u_{\lambda _1 \lambda _2}^{00} \end{aligned}$$
is used, where \(\lambda _1\), \(\lambda _2\) denote the virtual-photon helicities and \(\mu _1\), \(\mu _2\) the vector-meson helicities. The symbol \(\pm \) describes the virtual-photon or vector-meson helicities \(\pm \)1, while the symbol 0 describes longitudinal polarization. Equations (9) and (10) show that the asymmetry amplitudes can be calculated from the Schilling–Wolf SDMEs obtained in Ref. [16].
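A short sketch of how Eqs. (9) and (10) can be evaluated numerically; the SDME values and the value of \(\epsilon \) below are placeholders chosen only to show the call pattern, not the measured SDMEs of Ref. [16].

```python
import numpy as np

def a_uu_cos_phi(eps, r5_11, r5_00):
    """Eq. (9): A_UU^{cos(phi)} = sqrt(2 eps (1 + eps)) * (2 r^5_11 + r^5_00)."""
    return np.sqrt(2.0 * eps * (1.0 + eps)) * (2.0 * r5_11 + r5_00)

def a_uu_cos_2phi(eps, r1_11, r1_00):
    """Eq. (10): A_UU^{cos(2phi)} = -eps * (2 r^1_11 + r^1_00)."""
    return -eps * (2.0 * r1_11 + r1_00)

# Placeholder inputs, for illustration of the conversion only.
eps = 0.8
print(a_uu_cos_phi(eps, r5_11=-0.03, r5_00=-0.02))
print(a_uu_cos_2phi(eps, r1_11=0.01, r1_00=0.02))
```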
For scattering off a transversely polarized target, the asymmetry amplitudes can be expressed in terms of the Diehl SDMEs \(n^{\mu _1\mu _2}_{\lambda _1 \lambda _2}\) and \(s^{\mu _1\mu _2}_{\lambda _1 \lambda _2}\) as
$$\begin{aligned}&A_{UT}^{\sin (\phi +\phi _S)} =(\epsilon /2)\,\mathrm {Im}[n_{-+}-s_{-+}],\end{aligned}$$
$$\begin{aligned}&A_{UT}^{\sin (\phi -\phi _S)} =\mathrm {Im}[n_{++}+\epsilon n_{00}],\end{aligned}$$
$$\begin{aligned}&A_{UT}^{\sin (\phi _S)} =\sqrt{\epsilon (1+\epsilon )}\,\mathrm {Im}[n_{0+}-s_{0+}],\end{aligned}$$
$$\begin{aligned}&A_{UT}^{\sin (2\phi -\phi _S)} =-\sqrt{\epsilon (1+\epsilon )}\,\mathrm {Im}[n_{0+}+s_{0+}],\end{aligned}$$
$$\begin{aligned}&A_{UT}^{\sin (3\phi -\phi _S)} =-(\epsilon /2)\,\mathrm {Im}[n_{-+}+s_{-+}]. \end{aligned}$$
The abbreviated notations
$$\begin{aligned} n_{\lambda _1 \lambda _2}&=n_{\lambda _1 \lambda _2}^{++}+n_{\lambda _1 \lambda _2}^{--}+ n_{\lambda _1 \lambda _2}^{00}, \end{aligned}$$
$$\begin{aligned} s_{\lambda _1 \lambda _2}&=s_{\lambda _1 \lambda _2}^{++}+s_{\lambda _1 \lambda _2}^{--}+ s_{\lambda _1 \lambda _2}^{00} \end{aligned}$$
are analogous to those in Eq. (11). In this case, Schilling–Wolf SDMEs \(r^{n}_{ij}\) [23] are not defined.
In order to get from Eqs. (9), (10) and (12)–(16) expressions for the asymmetry amplitudes for the production of longitudinally polarized vector mesons, the terms with \(\mu _1=\mu _2=0\) have to be retained in Eqs. (9)–(18), and the result has to be divided by the Schilling–Wolf SDME \(r^{04}_{00}\). For instance, \(A_{UT}^{\sin (2\phi -\phi _S)}\) becomes
$$\begin{aligned} A_{UT,L}^{\sin (2\phi -\phi _S)}= & {} -\frac{\sqrt{\epsilon (1+\epsilon )}}{r^{04}_{00}}\,\mathrm {Im}[n^{00}_{0+}+s^{00}_{0+}] \nonumber \\= & {} -\frac{\sqrt{\epsilon (1+\epsilon )}}{u^{00}_{++}+\epsilon u^{00}_{00}}\,\mathrm {Im}[n^{00}_{0+}+s^{00}_{0+}]. \end{aligned}$$
Correspondingly, for the production of transversely polarized vector mesons, the terms with \(\mu _1=\mu _2=\pm 1\) have to be retained in Eqs. (9)–(18), and the result has to be divided by \((1-r^{04}_{00})\). For instance, \(A_{UT}^{\sin (2\phi -\phi _S)}\) becomes
$$\begin{aligned}&A_{UT,T}^{\sin (2\phi -\phi _S)} \nonumber \\&\quad =-\frac{\sqrt{\epsilon (1+\epsilon )}}{1-r^{04}_{00}}\,\mathrm {Im}[n^{++}_{0+}+s^{++}_{0+}+n^{--}_{0+}+s^{--}_{0+}] \nonumber \\&\quad =-\frac{\sqrt{\epsilon (1+\epsilon )}}{1-u^{00}_{++}-\epsilon u^{00}_{00}}\,\mathrm {Im}[n^{++}_{0+}+s^{++}_{0+}+n^{--}_{0+}+s^{--}_{0+}]. \nonumber \\ \end{aligned}$$
References

J.C. Collins, L. Frankfurt, M.S. Strikman, Phys. Rev. D 56, 2982 (1997)
A.V. Radyushkin, Phys. Rev. D 56, 5524 (1997)
X. Ji, Phys. Rev. Lett. 78, 610 (1997)
M. Diehl, P. Kroll, Eur. Phys. J. C 73, 2397 (2013)
P. Kroll, EPJ Web Conf. 85, 01005 (2015)
S.V. Goloskokov, P. Kroll, Eur. Phys. J. C 42, 02298 (2005)
S.V. Goloskokov, P. Kroll, Eur. Phys. J. C 50, 829 (2007)
S.V. Goloskokov, P. Kroll, Eur. Phys. J. C 74, 2725 (2014)
J. Botts, G.F. Sterman, Nucl. Phys. B 325, 62 (1989)
P. Kroll, H. Moutarde, F. Sabatié, Eur. Phys. J. C 73, 2278 (2013)
S.V. Goloskokov, P. Kroll, Eur. Phys. J. A 50, 146 (2014)
A. Airapetian et al., HERMES Collaboration, Phys. Lett. B 679, 100 (2009)
A. Airapetian et al., HERMES Collaboration, JHEP 06, 66 (2008)
K. Ackerstaff et al., HERMES Collaboration, Nucl. Instrum. Methods A 417, 230 (1998)
A. Airapetian et al., HERMES Collaboration, Eur. Phys. J. C 74, 3110 (2014)
K.A. Olive et al., Particle Data Group, Chin. Phys. C 38, 090001 (2014)
T. Sjöstrand et al., Comput. Phys. Commun. 135, 238 (2001)
M. Diehl, S. Sapeta, Eur. Phys. J. C 41, 515 (2005)
A. Bacchetta, U. D'Alesio, M. Diehl, A.C. Miller, Phys. Rev. D 70, 117504 (2004)
S.V. Goloskokov, P. Kroll, Private communication
M. Diehl, JHEP 09, 064 (2007)
K. Schilling, G. Wolf, Nucl. Phys. B 61, 381 (1973)
Funded by SCOAP3
1. Physics Division, Argonne National Laboratory, Argonne, USA
2. Sezione di Bari, Istituto Nazionale di Fisica Nucleare, Bari, Italy
3. School of Physics, Peking University, Beijing, China
4. Department of Theoretical Physics, University of the Basque Country UPV/EHU, Bilbao, Spain
5. IKERBASQUE, Basque Foundation for Science, Bilbao, Spain
6. Nuclear Physics Laboratory, University of Colorado, Boulder, USA
7. DESY, Hamburg, Germany
8. DESY, Zeuthen, Germany
9. Joint Institute for Nuclear Research, Dubna, Russia
10. Physikalisches Institut, Universität Erlangen-Nürnberg, Erlangen, Germany
11. Sezione di Ferrara, Istituto Nazionale di Fisica Nucleare, Ferrara, Italy
12. Dipartimento di Fisica e Scienze della Terra, Università di Ferrara, Ferrara, Italy
13. Istituto Nazionale di Fisica Nucleare, Laboratori Nazionali di Frascati, Frascati, Italy
14. Department of Physics and Astronomy, Ghent University, Gent, Belgium
15. II. Physikalisches Institut, Justus-Liebig Universität Gießen, Gießen, Germany
16. SUPA, School of Physics and Astronomy, University of Glasgow, Glasgow, UK
17. Department of Physics, University of Illinois, Urbana, USA
18. Randall Laboratory of Physics, University of Michigan, Ann Arbor, USA
19. Lebedev Physical Institute, Moscow, Russia
20. National Institute for Subatomic Physics (Nikhef), Amsterdam, The Netherlands
21. B.P. Konstantinov Petersburg Nuclear Physics Institute, Leningrad Region, Russia
22. Institute for High Energy Physics, Moscow Region, Russia
23. Gruppo Collegato Sanità, Sezione di Roma, Istituto Nazionale di Fisica Nucleare, Rome, Italy
24. Istituto Superiore di Sanità, Rome, Italy
25. TRIUMF, Vancouver, BC, Canada
26. Department of Physics, Tokyo Institute of Technology, Tokyo, Japan
27. Department of Physics and Astronomy, VU University, Amsterdam, The Netherlands
28. National Centre for Nuclear Research, Warsaw, Poland
29. Yerevan Physics Institute, Yerevan, Armenia
Airapetian, A., Akopov, N., Akopov, Z. et al. Eur. Phys. J. C (2015) 75: 600. https://doi.org/10.1140/epjc/s10052-015-3825-7
Accepted 01 December 2015
First Online 17 December 2015
DOI https://doi.org/10.1140/epjc/s10052-015-3825-7
EPJC is an open-access journal funded by SCOAP3 and licensed under CC BY 4.0
|
CommonCrawl
|
Higher Tactile Temporal Resolution as a Basis of Hypersensitivity in Individuals with Autism Spectrum Disorder
Masakazu Ide ORCID: orcid.org/0000-0002-2704-98891,2,
Ayako Yaguchi1,2,3,
Misako Sano4,5,
Reiko Fukatsu4,6 &
Makoto Wada1
Journal of Autism and Developmental Disorders volume 49, pages 44–53 (2019)
Many individuals with autism spectrum disorder (ASD) have symptoms of sensory hypersensitivity. Several studies have shown high individual variation in the temporal processing of tactile stimuli. We hypothesized that these individual differences are linked to differences in hyper-reactivity among individuals with ASD. Participants performed two tasks involving vibrotactile stimuli: a temporal order judgement task and a detection task. We found that individuals with ASD with higher temporal resolution tended to have more severe hypersensitivity symptoms. In contrast, the tactile detection threshold and sensitivity were related to the severity of stereotyped behaviour and restricted interests, rather than to hypersensitivity. Our findings demonstrate that higher temporal resolution in processing sensory stimuli may contribute to sensory hypersensitivity in individuals with ASD.
Individuals with autism spectrum disorder (ASD) not only show deficits in social communication, but also show atypical sensory processing characterized by sensory hypersensitivity. This feature is also emphasized in the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) (APA 2013). Sensory dysfunction such as this can be assessed using the Sensory Profile, which is a self-report questionnaire that determines the individual's sensitivity; for example, if he/she shows strong emotional responses to sensory stimuli experienced as a part of daily life (Dunn 1999; Dunn and Westman 1997). Tomchek and Dunn (2007) reported that children with ASD are more sensitive to sensory information (tactile, 65.1%; taste and olfaction, 56.2%; visual and auditory, 50.9%) than typically developing (TD) peers (13, 9.7, and 7.9%, respectively) according to their scores on a short version of the Sensory Profile (i.e. Short Sensory Profile).
Sensory hypersensitivity is conventionally explained by the hypothesis that an abnormally high sensitivity for detecting sensory signals leads to atypical responsiveness. Blakemore et al. (2006) reported that individuals with Asperger's syndrome can detect small displacements in vibrotactile stimuli with a lower detection threshold than TD control individuals; this feature remained dominant even when the stimuli were delivered at a high frequency (200 Hz). Cascio et al. (2008) and Puts et al. (2014) demonstrated that lower detection thresholds were found even when the vibrotactile stimuli were presented at low frequencies (33 and 25 Hz). However, another study showed that the detection thresholds of children with ASD and TD children were not significantly different regardless of stimulus frequency (40 and 250 Hz) (Guclu et al. 2007). These contrasting results from previous studies may partly be due to the inherent variability in sensory processing among individuals with ASD (Simon and Wallace 2016). Thus, individual differences in sensory processing must be accounted for, and it should be considered how they relate to varied responsiveness to sensory stimuli in the daily lives of patients with ASD.
Another hypothesis that can explain hypersensitivity in patients with ASD involves aberrant temporal processing of sensory inputs. Individuals with ASD frequently complain about the flickering nature of fluorescent illumination, which is also thought to induce their repetitive behaviour (Colman et al. 1976). These findings might indicate that some patients with ASD have extremely high temporal resolution (exceeding the 60-cycle flicker) in processing sensory stimuli. A typical symptom of hypersensitivity, avoiding wearing clothes, may stem from aberrant temporal processing of texture (Green and Ben-Sasson 2010). Another study described the superior ability of individuals with ASD to temporally process visual stimuli (Falter et al. 2012). In that study, vertical bars were consecutively presented to the left and right of a fixation cross on a monitor with temporal lags ranging from 8.3 to 99.6 ms. The participants were instructed to determine whether the stimuli were presented simultaneously or not. The authors found that individuals with ASD judged the stimuli not to be simultaneously presented more frequently than controls did, indicating that they may have superior temporal resolution. On the other hand, individuals with ASD have also been shown to have lower temporal resolution while processing tactile stimulation (Tommerdahl et al. 2008; Wada et al. 2014). Tommerdahl et al. (2008) reported lower temporal resolution in the tactile temporal order judgment (TOJ) of the index and middle fingers of one hand in individuals with ASD, although the tactile TOJ across the two hands was not significantly different between the ASD and TD groups. Moreover, the temporal resolution within one hand became more precise than that of TD individuals when conditioning vibrotactile stimuli (frequency, 25 Hz) were presented on another skin site. In contrast, Wada et al. (2014) reported that while the temporal resolution of tactile TOJ for the two hands was slightly lower in children with ASD than in TD children, the difference did not reach statistical significance, given the large individual differences in sensory processing in the ASD group.
The contrasting results for both sensitivity and temporal resolution of sensory processing in individuals with ASD indicate a diversity in sensory processing in this population. The type of sensory processing underlying hyper-reactivity remains unclear and seems to be related to the temporal processing of stimuli from the environment. In this study, we elucidated the relationship between individual differences in temporal resolution of sensory processing and those in the severity of hypersensitivity. We focused on the tactile modality, given the variety of findings related to tactile temporal processing (Puts et al. 2014; Tommerdahl et al. 2008; Wada et al. 2014). We adopted the TOJ task with vibrotactile stimuli to measure the temporal resolution of stimulus processing and compare it between the ASD and TD groups.
Temporal Order Judgement Task
Twelve individuals with a clinical diagnosis of ASD were recruited from parent groups for children with developmental disorders and the Hospital of National Rehabilitation Center for Persons with Disabilities. An occupational therapist (M.S.) confirmed the diagnosis using the Autism Diagnostic Observation Schedule, Second Edition (ADOS-2) (Lord et al. 2012). Fourteen participants were recruited to the typically developing (TD) group. We asked the participants to complete the Japanese version of the Autism Spectrum Quotient (AQ) scale (Baron-Cohen et al. 2001; Wakabayashi et al. 2004). As one participant initially recruited into the TD group had very high AQ (37; cut-off, 33) and ADOS-2 reciprocal social interaction subscale (6; cut-off, 4) scores, this participant was regarded as meeting the criteria for Asperger's syndrome and was included in the ASD group (Wheelwright et al. 2010). The final number of participants in each group was therefore 13 (ASD group: 12 clinically diagnosed participants + 1 participant with high autistic traits; TD group: 14 initially recruited participants − 1 participant with high autistic traits). The participants' Intelligence Quotients (IQs) were assessed using the Wechsler Adult Intelligence Scale-Third Edition (WAIS-III). We used the Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV) to evaluate one 15-year-old male participant (verbal comprehension = 88, perceptual reasoning = 132, working memory = 120, processing speed = 127, full-scale IQ (FSIQ) = 120). All participants in both groups had an FSIQ above 80 (within 2 standard deviations [SDs] of the standardized average). There were no significant differences in the age, verbal IQ, or performance IQ of the two groups (verbal IQ: t (23) = − 1.89, p = 0.07, Cohen's d = 0.75; performance IQ: t (23) = − 1.09, p = 0.29, Cohen's d = 0.43), whereas the FSIQ of the two groups differed significantly (t (23) = 2.07, p = 0.02, Cohen's d = 0.95). The participant information is described in Table 1.
Table 1 Participant information
Detection Task
Eleven individuals with a clinical diagnosis of ASD were recruited for this experiment. The same TD individuals participated in the detection and TOJ experiments. The FSIQ (WAIS-III) of participants in both groups was above 80. There were no significant differences in age, verbal IQ, performance IQ, and FSIQ between the two groups (verbal IQ: t (21) = − 1.87, p = 0.08, Cohen's d = 0.78, performance IQ: t (21) = − 1.44, p = 0.17, Cohen's d = 0.6, FSIQ: t (21) = − 1.2, p = 0.24, Cohen's d = 0.51). All participants and their parents gave written informed consent after the study procedures had been fully explained.
We administered the two behavioural tasks on different days; the TOJ task was administered on day 1, and the detection task on day 2. The temporal resolution and detection threshold and sensitivity to vibrotactile stimuli were estimated from the responses in the TOJ and detection tasks, respectively. In addition, the degree of atypical sensory processing was assessed using a self-report questionnaire (Adolescent/Adult Sensory Profile, or AASP).
Solenoid skin contactors (FR-2007-2α, Uchida Denshi, Tokyo, Japan) were used to deliver vibrotactile stimulation (Fig. 1a). We used two frequencies of vibration (40 and 200 Hz) because Blakemore et al. (2006) reported that individuals with ASD show a lower detection threshold for 200-Hz vibrotactile stimuli. It is possible that response traits differ depending on the type of mechanoreceptor (the Meissner and Pacinian corpuscles respond preferentially to 40- and 200-Hz stimuli, respectively). The displacement (2 µm) and duration (50 ms) of the vibrations were measured by a laser displacement meter (LK-G15, KEYENCE, Osaka, Japan). White noise was presented through headphones (HD380PRO, SENNHEISER, Wedemark, Germany).
a Schematic representation of the TOJ task. Vibrotactile stimulation was delivered to both index fingers with a range of SOAs. The participants determined the order of the stimuli and responded by pressing a key with their middle fingers. b The temporal resolutions in the ASD and TD groups for the 40- and 200-Hz conditions. The error bars denote standard errors of the means
We successively delivered brief vibrotactile stimuli to the ventral surface of the participant's left and right index fingers placed 20 cm apart, with stimulus onset asynchronies (SOAs) ranging from − 240 to 240 ms (± 15, 30, 60, 120, 240 ms), each repeated 12 times. Positive values indicate that the vibrotactile stimulus was delivered on the right index finger. Thus, each block consisted of 120 trials in each of the 40- and 200-Hz conditions (240 trials in total). The inter-stimulus intervals were randomly selected to be between 1.5 and 2.5 s. The participants were asked to determine the side to which the second stimulus was presented and respond by pressing a key as soon as possible. When the reaction time was larger than 5000 ms or the participants responded before the second stimulus, the response was excluded from the data and an additional trial was inserted at the end of the condition.
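As an illustration of this trial structure, a minimal Python sketch that builds one such block (the shuffling scheme and the uniform inter-stimulus-interval draw are assumptions about implementation details not fully specified in the text):

```python
import random

SOAS_MS = [-240, -120, -60, -30, -15, 15, 30, 60, 120, 240]

def toj_block(repetitions=12, seed=None):
    """One block of TOJ trials: every SOA repeated `repetitions` times, shuffled.

    Positive SOAs follow the sign convention used in the text (right index finger).
    The inter-stimulus interval is drawn uniformly from 1.5-2.5 s for each trial.
    """
    rng = random.Random(seed)
    trials = [soa for soa in SOAS_MS for _ in range(repetitions)]
    rng.shuffle(trials)
    return [(soa, rng.uniform(1.5, 2.5)) for soa in trials]

block = toj_block(seed=1)
print(len(block))   # 120 trials, as in each frequency condition
print(block[:3])    # first few trials: (SOA in ms, inter-stimulus interval in s)
```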
The Piezo skin contactor (FPZT-2015-1, Uchida Denshi, Tokyo, Japan) was used to deliver vibrotactile stimulation of two frequencies (40 and 200 Hz) (Fig. 3a), with stimulus displacements of 0, 1, 3, 6, 9, 12, 15, 18, 21, 24, 27, or 30 µm and duration of 500 ms, as measured by a laser displacement meter. White noise was presented through headphones.
We delivered vibrotactile stimuli to the ventral surface of the participant's left index finger with the extent of displacement varying as described above. Each stimulus displacement condition was repeated 12 times. Thus, each block consisted of 144 trials in each of the 40- and 200-Hz conditions (288 trials in total). The participants were instructed to determine whether the stimulus was presented or not and respond by pressing a key as soon as possible after the presentation of a beep sound (pure tone, 500 Hz). The subsequent stimulus was not delivered until the subject pressed the key.
Subjective Ratings of Hypersensitivity
We used the AASP to evaluate the degree of responsiveness to stimuli of various modalities in daily life (Brown et al. 2001). This self-report questionnaire consists of four subscales: low registration, sensation seeking, sensory sensitivity, and sensation avoiding. The former two categories (i.e. low registration and sensation seeking) reflect "lower responsiveness" to sensory stimuli, while the latter two (i.e. sensory sensitivity and sensation avoiding) correspond to the opposite ("enhanced responsiveness"). There were no significant between-group differences in the subscales (low registration: t (24) = 1.55, p = 0.13, Cohen's d = 0.61; sensation seeking: t (24) = − 0.18, p = 0.86, Cohen's d = 0.07; sensory sensitivity: t (24) = − 0.43, p = 0.67, Cohen's d = 0.17; sensation avoiding: t (24) = 0.32, p = 0.75, Cohen's d = 0.13) or in the total sensory responsiveness (sensory sensitivity + sensation avoiding) (Cascio et al. 2008) (t (24) = − 0.07, p = 0.94, Cohen's d = 0.03). Thus, we focused on the relationship between individual performances in the behavioural tasks and the AASP scores.
We calculated the temporal resolution, detection threshold, and sensitivity by fitting the response data in each task to a Gaussian cumulative density function (Yamamoto and Kitazawa 2001).
In the TOJ task, the response data were sorted by the SOAs to calculate the order-judgment probability that the right index finger was stimulated later (or the left index finger was stimulated first). The judgment probabilities of the data in the TOJ task were fitted using the following function:
$$p(t)=(p_{\max } - p_{\min })\int_{-\infty }^{t} \frac{1}{\sqrt{2\pi }\,\sigma _{t}}\exp \left( \frac{-(\tau - dt)^{2}}{2\sigma _{t}^{2}} \right) d\tau + p_{\min }$$
where t, dt, σt, Pmax, and Pmin represent the SOAs, size of the horizontal transition, temporal resolution, and upper and lower asymptotes of the judgment probability, respectively. The σt corresponded to the stimulation interval that yielded 84% correct responses (relative to the asymptote). We used the MATLAB optimization toolbox (MathWorks, Natick, MA, USA) for fitting to minimize the Pearson's Chi square statistic, which reflects the discrepancy between the sampled order-judgment probability and the prediction using the four-parameter model. SPSS statistics 23 (IBM Corp., Armonk, NY, USA) was used to analyse the statistical significance of the data.
In the detection task, the data were sorted by stimulus displacements to calculate the stimulus detection probabilities. The probabilities in the detection task were fitted by the following function corresponding to the TOJ task:
$$p(t)=(p_{\max } - p_{\min })\int_{-\infty }^{t} \frac{1}{\sqrt{2\pi }\,\sigma _{d}}\exp \left( \frac{-(\tau - dd)^{2}}{2\sigma _{d}^{2}} \right) d\tau + p_{\min }$$
where t, dd, σd, Pmax, and Pmin represent the extent of displacement of vibration, size of the horizontal transition, sensitivity, and upper and lower asymptotes of the detection probability. The dd and σd values corresponded to the extent of stimulus displacement and the steepness of the function, respectively, that yielded 50% correct responses. Thus, in the detection task, we defined dd as the detection threshold and σd as the detection sensitivity.
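To make the fitting procedure concrete, here is a simplified Python re-implementation of the cumulative-Gaussian fit by Pearson chi-square minimization (the original analysis used the MATLAB optimization toolbox); the SOAs below follow the design described earlier, but the judgment probabilities are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def model_prob(x, d, sigma, p_min, p_max):
    """Cumulative-Gaussian psychometric function used for both tasks."""
    return p_min + (p_max - p_min) * norm.cdf(x, loc=d, scale=sigma)

def pearson_chi2(params, x, p_obs, n_trials):
    """Pearson chi-square between observed and predicted response probabilities."""
    d, sigma, p_min, p_max = params
    p = np.clip(model_prob(x, d, sigma, p_min, p_max), 1e-6, 1 - 1e-6)
    return np.sum(n_trials * (p_obs - p) ** 2 / (p * (1.0 - p)))

def fit_psychometric(x, p_obs, n_trials=12):
    """Fit (d, sigma, p_min, p_max); sigma corresponds to the temporal resolution
    (TOJ task) or detection sensitivity, d to the horizontal shift or threshold."""
    start = np.array([0.0, np.std(x) / 2.0, 0.05, 0.95])
    res = minimize(pearson_chi2, start, args=(x, p_obs, n_trials),
                   method="Nelder-Mead")
    return res.x

# Illustrative TOJ data: SOAs in ms and made-up "right later" judgment probabilities.
soas = np.array([-240, -120, -60, -30, -15, 15, 30, 60, 120, 240], dtype=float)
p_judged = np.array([0.05, 0.10, 0.20, 0.35, 0.45, 0.55, 0.70, 0.80, 0.90, 0.95])
print(fit_psychometric(soas, p_judged))
```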
Temporal Resolution of Processing Vibrotactile Stimuli
We examined whether the temporal resolution (σt) of processing vibrotactile stimuli differed between the ASD and TD groups (Fig. 1b). We found no significant difference between the groups (F (1, 24) = 0.32, p = 0.57, partial η2 = 0.013). A previous study has also demonstrated that the temporal resolution in individuals with ASD is comparable with that in TD individuals (Puts et al. 2014), in agreement with the current result. In addition, there was no significant main effect of frequency (F (1, 24) = 0.04, p = 0.85, partial η2 = 0.002) or group × frequency interaction (F (1, 24) = 0.03, p = 0.87, partial η2 = 0.001).
Next, we examined whether individual differences in temporal resolution were related to individual differences in sensory hypersensitivity. There was a significant correlation (Pearson's rank correlation coefficient) between the temporal resolution (40 and 200 Hz) and the subjective ratings (AASP) for the "enhanced responsiveness" and "total sensory responsiveness" subscales [sensory sensitivity—40 Hz: r = − 0.68, p = 0.01, power (1 − β) = 0.97; 200 Hz: r = − 0.63, p = 0.02, power (1 − β) = 0.95; sensory avoiding—40 Hz: r = − 0.81, p = 0.0006, power (1 − β) = 0.997; 200 Hz: r = − 0.75, p = 0.003, power (1 − β) = 0.99; total sensory responsiveness—40 Hz: r = − 0.72, p = 0.005, power (1 − β) = 0.98; 200 Hz: r = − 0.72, p = 0.005, power (1 − β) = 0.98] in the ASD group (Fig. 2). In contrast, there was no relationship between the temporal resolution and the subjective ratings for the "lower responsiveness" subscales (low registration—40 Hz: r = − 0.45, p = 0.12, power (1 − β) = 0.67; 200 Hz: r = − 0.43, p = 1.57, power (1 − β) = 0.73; sensation seeking—40 Hz: r = 0.24, p = 0.41, power (1 − β) = 0.43; 200 Hz: r = 0.17, p = 0.58, power (1 − β) = 0.089). We did not find any significant correlation between the temporal resolution and the subjective ratings for any of the AASP categories in the TD group (Supplementary Table 1). There was no correlation between the temporal resolution and the ADOS-2 total and subscale scores in the ASD group (Supplementary Table 2).
Fig. 2 Relationship between the temporal resolution of vibrotactile stimulus processing and the degree of responsiveness to various stimuli (AASP subscales) in the ASD group for the 40- and 200-Hz conditions. Solid lines indicate significant correlations
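For the correlation analyses above, each coefficient is accompanied by a post-hoc power estimate. The sketch below computes Pearson's r with SciPy and approximates the power of a two-sided test via the Fisher z-transformation; the data vectors are hypothetical and the power formula is a standard approximation, not necessarily the one used in the study.

```python
# Sketch: correlation between temporal resolution and an AASP subscale, plus an
# approximate post-hoc power via the Fisher z-transformation (hypothetical data).
import numpy as np
from scipy import stats

sigma_t = np.array([48, 55, 60, 42, 65, 58, 50, 70, 62, 46, 53, 57, 66])          # ms
sensitivity_score = np.array([52, 45, 40, 55, 35, 41, 50, 30, 38, 54, 47, 43, 33])

r, p = stats.pearsonr(sigma_t, sensitivity_score)

def correlation_power(r, n, alpha=0.05):
    """Approximate power of a two-sided test of H0: rho = 0 (Fisher z method)."""
    z_effect = np.arctanh(abs(r)) * np.sqrt(n - 3)
    z_crit = stats.norm.ppf(1 - alpha / 2)
    return stats.norm.cdf(z_effect - z_crit) + stats.norm.cdf(-z_effect - z_crit)

print(f"r = {r:.2f}, p = {p:.3f}, power (1 - beta) = {correlation_power(r, len(sigma_t)):.2f}")
```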
Detection Threshold and Sensitivity in Vibrotactile Stimulus Processing
We compared the detection thresholds (dd) for vibrotactile stimuli between the stimulus conditions (40 Hz and 200 Hz) and between the ASD and TD groups for each condition (Fig. 3b). We found a significant difference between the 40-Hz and 200-Hz conditions (F (1, 21) = 6.34, p = 0.02, partial η2 = 0.23), which may have been caused by higher sensitivity of Pacinian corpuscles than that of Meissner corpuscles (Bolanowski et al. 1988; Mountcastle et al. 1972; Talbot et al. 1968). There was no significant difference between the groups (F (1, 21) = 0.282, p = 0.6, partial η2 = 0.13) or a significant group × frequency interaction (F (1, 21) = 0.99, p = 0.33, partial η2 = 0.045). Moreover, we found no significant difference in the detection sensitivity between the groups (F (1, 21) = 0.62, p = 0.44, partial η2 = 0.015) or frequencies (F (1, 21) = 0.19, p = 0.67, partial η2 = 0.009) or a significant group × frequency interaction (F (1, 21) = 0.17, p = 0.69, partial η2 = 1.42).
Fig. 3 a Schematic representation of the detection task. Tactile stimulation was delivered to the left index finger with a range of stimulus displacements. The participants determined whether they felt the vibrotactile stimuli and responded by pressing a key after a beep sound was presented. b The detection threshold (left) and sensitivity (right) in ASD and TD groups for the 40- and 200-Hz conditions. The error bars denote standard errors of the means
In contrast to the results of the TOJ task, there were no significant correlations (Pearson's rank correlation coefficient) between the detection threshold/sensitivity (40 and 200 Hz) and the AASP subjective ratings in either the ASD or the TD group (Supplementary Tables 3 and 4), except for a weak but significant correlation with sensory avoiding in the ASD group (r = − 0.61, p = 0.04, power (1 − β) = 0.56).
Instead, we found that the detection threshold was correlated with the stereotyped behaviour and restricted interests subscale of the ADOS-2 (r = 0.66, p = 0.04, power (1 − β) = 0.66) and marginally correlated with the reciprocal social interaction subscale (r = 0.54, p = 0.08, power (1 − β) = 0.43) in the ASD group, only in the 200-Hz condition (Fig. 4a; Supplementary Tables 5 and 6). Furthermore, the detection sensitivity was positively correlated with the reciprocal social interaction (r = 0.64, p = 0.017, power (1 − β) = 0.71) and stereotyped behaviour and restricted interests (r = 0.82, p = 0.001, power (1 − β) = 0.94) subscales in the ASD group, again only in the 200-Hz condition (Fig. 4b). No significant correlations were observed in the 40-Hz condition.
Fig. 4 Relationship of the detection threshold (a) and detection sensitivity (b) with the severity of atypical behaviour as assessed by the Autism Diagnostic Observation Schedule, Second Edition (ADOS-2) in the ASD group for the 200-Hz condition. Solid lines indicate significant correlations and the dotted line indicates a marginally significant correlation
Previous studies have demonstrated large individual differences in sensory processing among individuals with ASD, particularly in detection sensitivity and in the temporal resolution of processing vibrotactile stimuli. As temporal processing is often considered to be linked to sensory hypersensitivity, we examined whether individual differences in temporal resolution are related to the degree of hyper-reactivity in individuals with ASD. Our results suggest that individuals in the ASD group who had higher temporal resolution of processing vibrotactile stimuli tended to be more affected by the various sensory stimuli experienced as a part of daily life. In contrast, the detection threshold and sensitivity were largely unrelated to this atypical responsiveness but were related to the severity of stereotyped behaviour and restricted interests and, partially, to reciprocal social interactions as assessed by the ADOS-2. These data indicate that the temporal processing of tactile stimuli may underlie sensory hyper-reactivity in individuals with ASD, whereas the detection threshold and sensitivity may underlie the severity of other aspects of ASD.
Our data provide the first evidence that the temporal processing of stimuli, rather than the detection threshold and sensitivity, is correlated with a self-assessed score of hypersensitivity. Notably, neither the temporal resolution nor the detection threshold/sensitivity differed significantly between the two groups; this pattern of behavioural performance points to a wide continuum in the degree of hyper-reactivity. Wada et al. (2014) reported lower temporal resolution in children with ASD; however, the mean age of their sample (11.8 years) was lower than that of ours (19.1 years). Developmental changes in sensory processing might therefore contribute to the differences between the participant samples. Since the temporal resolution of stimuli in neurotypical individuals reportedly increases from childhood to adolescence (Stevenson et al. 2017), it is possible that deviations in temporal resolution are present in individuals with ASD only until adolescence. Interestingly, we found relationships between temporal resolution and hypersensitivity for the 'enhanced responsiveness' and 'total sensory responsiveness' AASP sub-categories (but not the 'lower responsiveness' sub-category). These results suggest that the temporal processing of tactile stimuli is predominantly associated with hyper-reactivity to the types of sensory information covered by the AASP, such as visual, auditory, somatosensory, olfactory, and gustatory stimuli encountered in daily life.
Furthermore, we found that the detection threshold and sensitivity for 200-Hz vibrotactile stimuli were positively correlated with the reciprocal social interactions and stereotyped behaviour and restricted interests ADOS-2 sub-scores. Guclu et al. (2007) likewise indicated that elevated tactile sensitivity was related to socioemotional problems experienced in daily life. Moreover, the detection threshold was lower in individuals with Asperger's syndrome than in their TD peers, predominantly for high-frequency vibrotactile stimuli (Blakemore et al. 2006), although this result could not be replicated in another ASD sample (Guclu et al. 2007). Thus, while the temporal processing of vibrotactile stimuli is linked to the degree of hypersensitivity, the detection threshold/sensitivity for high-frequency vibrotactile stimuli might be linked to stereotyped behaviour and restricted interests and to other social impairments characteristic of ASD. However, as our sample size was small, further studies are required to examine the link between the severity of several aspects of ASD symptoms and detection performance for high-frequency vibrotactile stimuli.
With regard to the neural basis of atypical responses to sensory stimuli, mainly hyper-reactivity, idiosyncratic somatosensory evoked potentials for tactile stimuli have been reported (Miyazaki et al. 2007). Cascio et al. (2015) also reported that early (120–220 ms) and late (220–270 ms) brain waves elicited by air-puff stimulation might be related to the degrees of hyper-reactivity and hypo-reactivity, respectively. Hyper-reactivity would then be consistent with the somatosensory association cortical response, while hypo-reactivity would be consistent with later brain processes such as allocation of attention or ascribing emotional valence to stimuli. Simon et al. (2017) demonstrated that the degree of hypo-reactivity was associated with elevated levels of left alpha and theta power and increased alpha and theta connectivity in resting state electroencephalography in toddlers at high risk (HR) of being diagnosed with ASD in the future. They also found that hypo-reactivity was related to reduced signal complexity at occipital and temporal electrodes. These findings indicate that reduced sensory responsiveness in HR toddlers corresponds to broad changes in neural synchronization, both within and across cortical areas, and a resultant loss of complex neural interactions.
Several studies using mouse models of autism have reported that autistic mice frequently show an excitation-inhibition imbalance (i.e. E/I imbalance) in the central nervous system (Braat and Kooy 2015; Pizzarelli and Cherubini 2011; Rubenstein and Merzenich 2003). One of the major features of these model mice is a reduced concentration of gamma-aminobutyric acid (GABA) in the brain, involving deactivated GABA receptors and a consequent reduction in neurotransmitter release. The model mice showed defensive behaviour in response to air-puff stimulation of the whiskers more frequently than wild-type mice (He et al. 2017), in addition to more frequent pathognomonic behaviour while interacting with cage mates and deficits in social communication. Similarly, human post-mortem studies have shown reduced concentrations of GABA in the anterior cingulate cortex and fusiform gyrus in patients with ASD (Oblak et al. 2010). Recent magnetic resonance spectroscopy studies measuring GABA in vivo have revealed relationships between GABA concentrations in the human brain and behavioural performance. Robertson et al. (2016) demonstrated that the GABA concentration in the visual cortex was lower in individuals with ASD than in their TD peers; individuals with ASD also showed a decreased suppressive ratio of perceptual switching when different visual stimuli were concurrently presented to each eye (i.e. binocular rivalry). Puts et al. (2015) reported reduced GABA levels in individuals with Tourette syndrome as well, with lower GABA concentrations in the somatosensory cortex being associated with more severe tics. Another study (Terhune et al. 2014) showed that reduced GABA levels might lead to more precise estimates of the temporal duration of visual stimuli. It is possible that reduced GABA levels in the primary sensory cortices underlie perceptual states in various sensory modalities, weakening the inhibition of sensory inputs and involuntary movements and sometimes resulting in excessive sensory processing. We speculate that the enhanced temporal resolution in individuals with ASD might be caused by an E/I imbalance in their brains; future studies are needed to test this hypothesis.
Aberrant sensory processing in patients with ASD is regarded as a basis of their impairments in social cognition and adaptive behaviour (Ben-Sasson et al. 2009). Green et al. (2018) showed that task-irrelevant tactile stimulation interferes with comprehending the meaning of sarcasm, which is needed to interpret communicative intent in non-literal language. In their task, neural activity in the left auditory language areas (angular gyrus) and the occipital cortex was degraded by the distracting tactile stimuli, accompanied by strong activation in the somatosensory cortex. This degraded neural activity likely reflects a shift of attention away from the task and towards the sensory stimuli. In contrast, the degradation in neural responses disappeared when participants were required to attend to the facial expression and tone of voice of the speaker, while the medial prefrontal cortex (mPFC) was strongly activated; this strong mPFC activation was assumed to inhibit distracting sensory inputs so that communicative intent could be properly interpreted. These findings, together with our present results, indicate that hyper-reactivity may arise at several stages. At one stage, weak inhibition of sensory inputs and the resulting strong neural activation in the primary sensory cortex would be an important factor; higher temporal resolution of sensory stimuli may be related to this stage, because this feature of sensory processing would result in a large inflow of sensory information. Difficulty disengaging attention from distracting and/or unpleasant sensory stimuli may play an important role at another stage. Aversive touch has been found to facilitate neural activation in the posterior cingulate cortex and the insula in individuals with ASD (Cascio et al. 2012), and the amplitude of the insular activity was positively correlated with the severity of disabilities in social communication measured by the ADOS-2. Thus, we speculate that excessively strong neural responses to sensory inputs, in connection with weak inhibitory function, occur at the first stage, and that this over-responsivity then interferes with adaptive social communication and emotional processing.
Our study provides the first report indicating that the temporal processing of vibrotactile stimuli may underlie sensory hypersensitivity, while the detection threshold/sensitivity for high-frequency vibrotactile stimuli may be linked to the severity of some ASD symptoms. Enhanced sensory processing in patients with ASD may result in a large inflow of sensory signals from the surrounding environment, with several neural substrates contributing to the diversity of sensory processing in these patients. Thus, treatment plans must consider individual sensory sensitivity levels, which may consequently determine patients' adherence to treatment.
APA. (2013). Diagnostic and statistical manual of mental disorders (5th ed.). Arlington: American Psychiatric Association.
Baron-Cohen, S., Wheelwright, S., Skinner, R., Martin, J., & Clubley, E. (2001). The autism-spectrum quotient (AQ): Evidence from Asperger syndrome/high-functioning autism, males and females, scientists and mathematicians. Journal of Autism and Developmental Disorders, 31, 5–17.
Ben-Sasson, A., Hen, L., Fluss, R., Cermak, S. A., Engel-Yeger, B., & Gal, E. (2009). A meta-analysis of sensory modulation symptoms in individuals with autism spectrum disorders. Journal of Autism and Developmental Disorders, 39, 1–11. https://doi.org/10.1007/s10803-008-0593-3.
Blakemore, S. J., et al. (2006). Tactile sensitivity in Asperger syndrome. Brain and Cognition, 61, 5–13. https://doi.org/10.1016/j.bandc.2005.12.013.
Bolanowski, S. J. Jr., Gescheider, G. A., Verrillo, R. T., & Checkosky, C. M. (1988). Four channels mediate the mechanical aspects of touch. The Journal of the Acoustical Society of America, 84, 1680–1694.
Braat, S., & Kooy, R. F. (2015). The GABAA receptor as a therapeutic target for neurodevelopmental disorders. Neuron, 86, 1119–1130. https://doi.org/10.1016/j.neuron.2015.03.042.
Brown, C., Tollefson, N., Dunn, W., Cromwell, R., & Filion, D. (2001). The Adult Sensory Profile: Measuring patterns of sensory processing. The American Journal of Occupational Therapy: Official Publication of the American Occupational Therapy Association, 55, 75–82.
Cascio, C., et al. (2008). Tactile perception in adults with autism: A multidimensional psychophysical study. Journal of Autism and Developmental Disorders, 38, 127–137. https://doi.org/10.1007/s10803-007-0370-8.
Cascio, C. J., et al. (2012). Perceptual and neural response to affective tactile texture stimulation in adults with autism spectrum disorders. Autism Research: Official Journal of the International Society for Autism Research, 5, 231–244. https://doi.org/10.1002/aur.1224.
Cascio, C. J., Gu, C., Schauder, K. B., Key, A. P., & Yoder, P. (2015). Somatosensory event-related potentials and association with tactile behavioral responsiveness patterns in children with ASD. Brain Topography, 28, 895–903. https://doi.org/10.1007/s10548-015-0439-1.
Colman, R. S., Frankel, F., Ritvo, E., & Freeman, B. J. (1976). The effects of fluorescent and incandescent illumination upon repetitive behaviors in autistic children. Journal of Autism and Childhood Schizophrenia, 6, 157–162.
Dunn, W. (1999). Sensory profile user's manual. San Antonio: Harcourt Assessment.
Dunn, W., & Westman, K. (1997). The sensory profile: The performance of a national sample of children without disabilities. The American Journal of Occupational Therapy: Official Publication of the American Occupational Therapy Association, 51, 25–34.
Falter, C. M., Elliott, M. A., & Bailey, A. J. (2012). Enhanced visual temporal resolution in autism spectrum disorders. PLoS ONE, 7, e32774. https://doi.org/10.1371/journal.pone.0032774.
Green, S. A., & Ben-Sasson, A. (2010). Anxiety disorders and sensory over-responsivity in children with autism spectrum disorders: Is there a causal relationship? Journal of Autism and Developmental Disorders, 40, 1495–1504. https://doi.org/10.1007/s10803-010-1007-x.
Green, S. A., Hernandez, L. M., Bowman, H. C., Bookheimer, S. Y., & Dapretto, M. (2018). Sensory over-responsivity and social cognition in ASD: Effects of aversive sensory stimuli and attentional modulation on neural responses to social cues. Developmental Cognitive Neuroscience, 29, 127–139. https://doi.org/10.1016/j.dcn.2017.02.005.
Guclu, B., Tanidir, C., Mukaddes, N. M., & Unal, F. (2007). Tactile sensitivity of normal and autistic children. Somatosensory & Motor Research, 24, 21–33. https://doi.org/10.1080/08990220601179418.
He, C. X., Cantu, D. A., Mantri, S. S., Zeiger, W. A., Goel, A., & Portera-Cailliau, C. (2017). Tactile defensiveness and impaired adaptation of neuronal activity in the Fmr1 knock-out mouse model of Autism. The Journal of Neuroscience: The Official Journal of the Society for Neuroscience, 37, 6475–6487. https://doi.org/10.1523/JNEUROSCI.0651-17.2017.
Lord, C., Rutter, M., Dilavore, P. C., Risi, S., Gotham, K., & Bishop, S. L. (2012). The Autism Diagnostic Observation Schedule 2nd Edition (ADOS-2). Los Angeles: WPS Publishing.
Miyazaki, M., et al. (2007). Short-latency somatosensory evoked potentials in infantile autism: Evidence of hyperactivity in the right primary somatosensory area. Developmental Medicine and Child Neurology, 49, 13–17. https://doi.org/10.1111/j.1469-8749.2007.0059a.x.
Mountcastle, V. B., LaMotte, R. H., & Carli, G. (1972). Detection thresholds for stimuli in humans and monkeys: Comparison with threshold events in mechanoreceptive afferent nerve fibers innervating the monkey hand. Journal of Neurophysiology, 35, 122–136.
Oblak, A. L., Gibbs, T. T., & Blatt, G. J. (2010). Decreased GABA(B) receptors in the cingulate cortex and fusiform gyrus in autism. Journal of Neurochemistry, 114, 1414–1423. https://doi.org/10.1111/j.1471-4159.2010.06858.x.
Pizzarelli, R., & Cherubini, E. (2011). Alterations of GABAergic signaling in autism spectrum disorders. Neural Plasticity, 2011, 297153. https://doi.org/10.1155/2011/297153.
Puts, N. A., et al. (2015). Reduced GABAergic inhibition and abnormal sensory symptoms in children with Tourette syndrome. Journal of Neurophysiology, 114, 808–817. https://doi.org/10.1152/jn.00060.2015.
Puts, N. A., Wodka, E. L., Tommerdahl, M., Mostofsky, S. H., & Edden, R. A. (2014). Impaired tactile processing in children with autism spectrum disorder. Journal of Neurophysiology, 111, 1803–1811. https://doi.org/10.1152/jn.00890.2013.
Robertson, C. E., Ratai, E. M., & Kanwisher, N. (2016). Reduced GABAergic action in the autistic brain. Current Biology: CB, 26, 80–85. https://doi.org/10.1016/j.cub.2015.11.019.
Rubenstein, J. L., & Merzenich, M. M. (2003). Model of autism: Increased ratio of excitation/inhibition in key neural systems. Genes, Brain, and Behavior, 2, 255–267.
Simon, D. M., et al. (2017). Neural correlates of sensory hyporesponsiveness in toddlers at high risk for Autism Spectrum Disorder. Journal of Autism and Developmental Disorders. https://doi.org/10.1007/s10803-017-3191-4.
Simon, D. M., & Wallace, M. T. (2016). Dysfunction of sensory oscillations in Autism Spectrum Disorder. Neuroscience and Biobehavioral Reviews, 68, 848–861. https://doi.org/10.1016/j.neubiorev.2016.07.016.
Stevenson, R. A., Baum, S. H., Krueger, J., Newhouse, P. A., & Wallace, M. T. (2017). Links between temporal acuity and multisensory integration across life span. Journal of Experimental Psychology: Human Perception and Performance. https://doi.org/10.1037/xhp0000424.
Talbot, W. H., Darian-Smith, I., Kornhuber, H. H., & Mountcastle, V. B. (1968). The sense of flutter-vibration: Comparison of the human capacity with response patterns of mechanoreceptive afferents from the monkey hand. Journal of Neurophysiology, 31, 301–334.
Terhune, D. B., Russo, S., Near, J., Stagg, C. J., & Kadosh, R. C. (2014). GABA predicts time perception. The Journal of Neuroscience: The Official Journal of the Society for Neuroscience, 34, 4364–4370. https://doi.org/10.1523/JNEUROSCI.3972-13.2014.
Tomchek, S. D., & Dunn, W. (2007). Sensory processing in children with and without autism: A comparative study using the short sensory profile. The American Journal of Occupational Therapy: Official Publication of the American Occupational Therapy Association, 61, 190–200.
Tommerdahl, M., Tannan, V., Holden, J. K., & Baranek, G. T. (2008). Absence of stimulus-driven synchronization effects on sensory perception in autism: Evidence for local underconnectivity? Behavioral and Brain Functions: BBF, 4, 19. https://doi.org/10.1186/1744-9081-4-19.
Wada, M., Suzuki, M., Takaki, A., Miyao, M., Spence, C., & Kansaku, K. (2014). Spatio-temporal processing of tactile stimuli in autistic children. Scientific Reports, 4, 5985. https://doi.org/10.1038/srep05985.
Wakabayashi, A., Tojo, Y., Baron-Cohen, S., & Wheelwright, S. (2004). The Autism-Spectrum Quotient (AQ) Japanese version: Evidence from high-functioning clinical group and normal adults. Japanese Journal of Psychology, 75, 78–84.
Wheelwright, S., Auyeung, B., Allison, C., & Baron-Cohen, S. (2010). Defining the broader, medium and narrow autism phenotype among parents using the Autism Spectrum Quotient (AQ). Molecular Autism, 1, 10. https://doi.org/10.1186/2040-2392-1-10.
Yamamoto, S., & Kitazawa, S. (2001). Reversal of subjective temporal order due to arm crossing. Nature Neuroscience, 4, 759–765. https://doi.org/10.1038/89559.
We would like to thank T. Atsumi for his comments, and A. Tanaka and T. Nawa for technical help. We also thank Y. Gorie, K. Nishimaki, S. Kim, H. Agarie, and M. Suzuki for help with participant recruitment, and Dr. Y. Nakajima for their continuous encouragement. This study was supported by a Grant-in-Aid from Japan Society for the Promotion of Science (JP15K17333, JP17942790), Ministry of Education, Culture, Sports, Science and Technology-JAPAN (JP16H01520, JP17H05966).
This study was supported by a Grant-in-Aid from Japan Society for the Promotion of Science (JP15K17333, JP17942790), Ministry of Education, Culture, Sports, Science and Technology-JAPAN (JP16H01520, JP17H05966).
Developmental Disorders Section, Department of Rehabilitation for Brain Functions, Research Institute of National Rehabilitation Center for Persons with Disabilities, 4-1, Namiki, Tokorozawa-shi, Saitama, 359-8555, Japan
Masakazu Ide, Ayako Yaguchi & Makoto Wada
Japan Society for the Promotion of Science, Tokyo, Japan
Masakazu Ide & Ayako Yaguchi
Department of Contemporary Psychology, Rikkyo University, Saitama, Japan
Ayako Yaguchi
Information and Support Center for Persons with Developmental Disorders, National Rehabilitation Center for Persons with Disabilities, Saitama, Japan
Misako Sano & Reiko Fukatsu
National Rehabilitation Center for Children with Disabilities, Tokyo, Japan
Misako Sano
Department of Rehabilitation for Brain Functions, Research Institute of National Rehabilitation Center for Persons with Disabilities, Saitama, Japan
Reiko Fukatsu
Masakazu Ide
Makoto Wada
MI, AY, and MW designed the research. MI and AY conducted the experiments and analysed the data. MS and RF conducted the assessment of ASD. MI, AY, and MW wrote the manuscript. All authors gave final approval for publication.
Correspondence to Masakazu Ide.
The study was approved by the ethics committee of the National Rehabilitation Center for Persons with Disabilities.
All participants and their parents gave written informed consent after the study procedures had been fully explained.
Below is the link to the electronic supplementary material.
Supplementary material 1 (DOCX 60 KB)
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Ide, M., Yaguchi, A., Sano, M. et al. Higher Tactile Temporal Resolution as a Basis of Hypersensitivity in Individuals with Autism Spectrum Disorder. J Autism Dev Disord 49, 44–53 (2019). https://doi.org/10.1007/s10803-018-3677-8
Issue Date: 15 January 2019
Temporal order judgment
Detection threshold/sensitivity
E/I imbalance
New Photograph
Last Friday, we had a seminar at Berkeley — or rather, at Noah's house — featuring Mike Freedman and some quantity of beer. Mike spoke about some of the hurdles he had to overcome in writing his recent paper with Danny Calegari and Kevin Walker. One of the main results of this paper is that there is a "complexity function" c, which maps from the set of closed 3-manifolds to an ordered set, and that this function satisfies the "topological" Cauchy-Schwarz inequality.
Here, and are 3-manifolds with boundary . [EDIT: and equality is only achieved if ] This inequality looks like the sort of thing you might derive from topological field theory, using the fact that . Unfortunately, it's difficult to actually derive this sort of theorem from any well-understood TQFT, thanks to an old theorem of Vafa's, which states, roughly, that there are always two 3-manifolds related by a Dehn twist that a given rational TQFT can't distinguish. Mike speculated that non-rational TQFTs might be able to do the trick, but what he and his collaborators actually did was an end run around the TQFT problem. They simply proved that the function exists.
I tell you all this, not because I'm about to explain what c is, but to explain our new banner picture. We realized after the talk that there were a fair number of us Secret Blogging Seminarians in one place, and that we ought to take a photo.
March 25, 2008 A.J. Tolland
← SF&PA: An example
Tao blogs Perelman →
38 thoughts on "New Photograph"
What, no one's GIMPing in the zombie face?
I usually interpret theorems like Vafa's as a statement that the definition of TQFT could or should be refined, i.e., one may want the target category to have more structure than just Hilbert spaces. I don't actually have a candidate replacement in mind, although some of the structures in Beilinson's topological algebras paper look very appealing.
The function c reminds me of some of the pathological counterexamples I saw in a decision theory class I took as an undergraduate. Apparently, some economists really like nonseparable preference systems, but I was never convinced that they were useful for whatever it is that economists do.
Todd Trimble says:
Perhaps you could identify the people in the photograph? I'm going to guess Noah Snyder at the far left, then A.J. Tolland, and from there on I'm even less sure.
Actually, Noah is the one with the big bald spot in the middle with the fleece. Joel is the one in the long pink shirt, A. J. in the hat, David in the middle with the green shirt, and Emily Peters (a friend of the blog) on the far right.
So Noah keeps a whiteboard in his yard for just such occasions? You seminarians are certainly dedicated to your trade.
Having been to the talk, I have a question.
What is a rational TQFT? Does it mean that on 2-manifolds the TQFT takes values in Q-vector spaces?
I like the question next to Noah's head. Is there some interesting context, or possibly an answer?
Noah Snyder says:
The short answer was it depends. Sometimes 2-spheres are good and sometimes they're bad.
Joel,
By a rational TQFT I mean one constructed from a rational 2d CFT, which you should think of as more or less the same thing as a modular tensor category. I'm not sure that this is standard terminology: A "rational TQFT" is really just what mathematicians call a TQFT. The idea is that there should be some more general thing that deserves to be called a TQFT.
My recollection of what Mike said was that Vafa's result only held when your modular tensor category was semisimple.
Urs Schreiber says:
I'm not sure that this is standard terminology:
I know people address 3d TFTs coming from modular tensor categories by the names of their discoverers: Reshetikhin-Turaev. "Consider the Reshetikhin-Turaev 3d TQFT functor defined by the modular tensor category C…". I can't quite recall people saying "rational" for these, even though of course one important point is that these RT-3d TQFT functors allow you to describe rational 2d CFT — if we agree that modular tensor category means: finitely many isomorphism classes of simple objects (= "primary CFT fields").
I am also only going from my memory of the talk. The reason that rational TQFT's couldn't distinguish everything had something to do with roots of unity. (Specifically, with twisting kn times, where we were working with nth roots of unity.) That's compatible with Urs' statement above that rational TQFT's should have finitely many isomorphism classes of simple object. It seems very restrictive though — surely topologists work with quantum groups at general values of q all the time, yes?
rational TQFT's should have finitely many isomorphism classes of simple objects.
In general I wouldn't know what you might want to mean by an "object of a TQFT" (I could make up a sensible definition, but maybe it would not coincide with what you have in mind here).
The way the finiteness assumption enters most directly into the Reshetikhin-Turaev construction is as follows: the RT 3-functor is defined not just on 3d cobordisms, but on "extended 3d-cobordisms" which are equipped with extra data: each 2-d surface may have "marked points" which are colored by a simple object of the modular tensor category. Then inside the 3-d cobordisms there are ribbons running which connect these marked points (the Wilson lines of CS theory!).
The way the Reshetikhin-Turaev 3d TFT functor is defined goes like this: to each surface with marked points it assigns the vector space
of morphisms in the modular category from the tensor unit to the direct sum of the labelling simple objects.
Given a 3-d manifold, the functor is evaluated on it by first determining the surgery of the 3-sphere needed to obtain that manifold. Then there is a rule for how this surgery induces a certain knot to be drawn inside the 3-sphere. That knot is included with the Wilson line ribbons we had previously. Then there is a formula which tells you how to label all these ribbons by morphisms in the modular tensor category, sum up the result, and finally obtain a linear map
from input to output given by postcomposition with that sum of morphisms in C.
It's a rather unilluminating procedure, I must say. In any case, it involves summing over isomorphism classes of simple objects. If there are finitely many such, everything is well defined and one proves that the result really defines a cobordism rep. If not, nobody knows how to make sense of this prescription.
So that's, as far as I understand, where semisimplicity enters the RT 3d TFT. I don't know what it would mean to take an arbitrary TFT and ask if it is "rational" or "semisimple". But that's just me, probably somebody does have a sensible definition for that.
Sorry, I misspoke: Reshetikhin-Turaev don't have a 3-functor, but a 1-functor on 3-d cobordisms, of course.
(But there should be a 3-functor refining it…)
Noah,
My source (Bakalov & Kirillov) defines modular tensor categories as semisimple ribbon categories with finitely many simple objects satisfying blah blah. Perhaps this is abnormal. But I think the concepts here are clear enough; we seem to be confused only about terminology.
This point has always confused me, but I think Schechtman and Varchenko showed that the braiding in quantum groups "comes from" the KZ equation, which lives on a configuration space of points (or boundary circles) on a curve. Dehn twists are loops in M_g, and the braiding doesn't see that part of the fundamental group of the universal curve. You can get connections on the base by Beilinson-Bernstein descent, but I don't know what relation that has to your choice of q.
modular tensor categories as semisimple ribbon categories with finitely many simple objects
It's clear, but just for the record: it's finitely many isomorphism classes of simple objects.
Urs,
Right. Thanks!
Chris Schommer-Pries says:
Scott, David, Joel,
Part of the confusion comes from the fact that topologists study both knots and three manifolds. These are intimately related, but are not exactly the same thing.
If you are interested in invariants of knots and links (let's say framed links) then what you want is to look at ribbon categories. There is a general construction that lets you get link invariants out of such categories. Basically the usual "graphical calculus" will give you an invariant of a "colored" (framed) link via its projection to the plane. Here we color the link by the objects of our category. If our category had finitely many simple objects we could do some sort of summation over simples to get (framed) link invariants. For this, quantum groups at general q work, and this is one way you get things like the Jones polynomial.
But now let's suppose that you like 3-manifolds and not knots. What can you do? Well, if you are given a framed link, you can view it in the 3-sphere and do surgery to get a 3-manifold. Maybe our invariant of a link is an invariant of the 3-manifold? The problem is that different links can give the same 3-manifold. You'll need an invariant which is invariant under the Kirby calculus moves. So already you see that we need more than just a ribbon category. Let's go for the gold and ask that our invariant is actually part of a TQFT, can we do this?
The answer is "yes" and "no". If what you want is an honest invariant and an honest functor from the category of bordisms to vector spaces, then the answer is usually no. But if you allow for projective TQFTs (which I won't explain yet), then there is a general machine. The input is a special kind of ribbon category called a modular tensor category and the construction of the TQFT is similar to the link invariant construction above. It also generalizes to more exotic situations where you have 3-manifolds with colored (framed=ribbon) links inside them. This is what Urs was alluding to in 12.
So now, how do we get such MT categories? Well, as far as I understand it, quantum groups at general q don't work. However, at a root of unity you can do something. At a root of unity, the representation category is bad, but it has a nice quotient. This quotient is an MT category and so feeds into the general machine to give a TQFT.
Urs, Noah, A. J.,
There is also a notion of modular tensor category where you don't require things to be semi-simple. Is it possible to build TQFTs out of these? The answer is "kinda".
The construction I'm thinking of is described in the book "Non-semisimple Topological Quantum Field Theories for 3-Manifolds with Corners" by Kerler and Lyubashenko.
Part of the problem is that this is all tied up "projective" TQFTs and what sort of "projective anomalies" you allow. As I understand it, in the non-semisimple case you need really bad ones which can make your TQFT take unexpected zero values. For example, one of these TQFTs gives zero for the value of .
For those less savvy in 3D TQFTs the value of should be the trace of the identity map on the vector space for . In their example it vanishes because of this weird "half-projective" anomaly.
There is more to say, maybe I should write a post on 3D TQFTs…
Thanks. I wasn't aware of that.
Please do, I'd be intersted.
Bruce Bartlett says:
When thinking about 3d TQFT's whose associated braided tensor categories are not semisimple, the best example is Rozansky-Witten theory, as y'all probably know. References are "Rozansky-Witten Theory" (arXiv:math/0112209) by Justin Roberts as well as "On the Rozansky-Witten weight systems" (arXiv:math/0602653) by said Mr Roberts and my good supervisor Simon Willerton.
In Rozansky-Witten theory, you fix a holomorphic symplectic manifold X, and the category associated to the circle is the graded derived category D(X). The theorem is that this can be given a ribbon structure… but it's not semisimple. So it's an example of a "non-semisimple" modular category… even though that doesn't make sense according to the traditional definition, of course.
This stuff isn't completely understood… at least I certainly don't understand it! But it's clear that Rozansky-Witten forms a great example of a "non-semisimple 3d TQFT" and we need to make sure that any new definitions we make will fit this example.
I don't know much about the TQFT aspect, but nonsemisimple versions of modular tensor categories seem to arise from logarithmic conformal field theories. The term "logarithmic" here comes from the log q terms that may appear in the characters of nontrivial extensions of modules over, say, a vertex algebra describing the theory. Presumably, one could get such a vertex algebra from something like chiral differential operators on a holomorphic symplectic manifold X (see Bruce's comment), but I'm very fuzzy on this.
Normally, the characters are found by a weighted trace of an operator L_0 (more precisely, it is ) coming from the conformal structure, but nontrivial extensions allow L_0 to act with nontrivial Jordan blocks, and this is accounted for using log terms. A more geometric interpretation is that characters of modules (conjecturally) span the space of q-expansions of genus one conformal blocks (possibly with an eta-function multiplier). The conformal blocks form a bundle with (projective) connection on the space of genus one curves, and as the curves degenerate to a nodal cubic, the flat sections can be written as formal solutions to some differential equations.
There is a hypothesis of factorization that identifies some of these solutions as characters of modules, by specifying an isomorphism between the genus one conformal blocks near the boundary of the moduli space and genus zero blocks with dual module insertions at the preimage of the node under the normalization map. Much of this has been proved in special cases, but the general picture is still sketchy. I suspect nonsemisimplicity is related to nonvanishing higher chiral homology on a genus zero curve, but there doesn't seem to be any literature on that.
thanks a lot for the useful remark about Rozansky-Witten.
Concering that remark I made
I don't know what it would mean to take an arbitrary TFT and ask if it is "rational" or "semisimple".
at the end of 12 I should add that in saying so, I was thinking of the 1-functorial formulation where data is assigned only to top and top minus one dimension.
In your comment you essentially say that when we actually have an extended TQFT, which assigns data to manifolds of all dimensions, we can say it is semisimple if the categories it assigns to top-minus-two-dimensional manifolds are.
Okay, good. In that case let's go one step further:
we should admit then that if an abelian monoidal category is semisimple, it may be regarded as a 2-vector space of finite dimension. That's of course a major emphasis of your (Bruce's) work, I am just saying it for the record here.
So if we are talking about extended TQFT, we might actually want to say:
Definition An extended n-dimensional TQFT is finite dimensional if all the k-vector spaces assigned to (n-k)-dimensional manifolds are of finite dimension.
(i.e. the usual notion of finiteness in representation theory, keeping in mind that an extended TQFT is a representation of a cobordism n-category.)
And then we say: "suppose we have a finite dimensional TQFT" instead of: "suppose we have a semisimple TQFT".
Dear moderators,
I just sent another message which apparently did not pass the spam filter's scrutiny. I'd be grateful if you could resurrect it.
(I am wondering what about my commenting style it is that makes the robot suspect that all I really want is to sell [something I can't name lest this message won't go through either] to you all…)
David Ben-Zvi says:
I think the issue of semisimplicity is a reflection of trying to make TFTs in an abelian setting, rather than following the physicists' lead and making them in a derived setting. (as far as I know Chern-Simons is about the only example in physics of a TFT which is not formulated in a differential graded fashion). More precisely TFTs (such as Rozansky-Witten, Seiberg-Witten, "geometric Langlands", etc) arise typically from twisting supersymmetric quantum field theories, meaning we have an underlying graded (or at least Z/2 graded) vector space with a differential Q, which is part of our SUSY algebra that's survived the twisting. This means that in codim one you really have a dg vector space, in codim two a dg category, etc.
In an abelian setting you can find that the modules associated to codim one manifolds are forced to be surjective, and some kind of semisimplicity is forced in codimension two.
But if you look at TFTs in a dg or other suitably homotopical setting there are no such restrictions. Fully extended TFTs always satisfy some strong "finite dimensionality" (or more precisely, dualizability) hypotheses, but for example the derived category of coherent sheaves on any smooth projective variety satisfies a strong form of these and is far from semisimple.
as far as I know Chern-Simons is about the only example in physics of a TFT which is not formulated in a differential graded fashion
On the other hand, it is also not quite natural to restrict attention to TQFTs and to QFTs of low dimension, only because these are the more accessible examples.. Every QFT should come from a rep of cobordisms with suitable extra structure.
So in particular the 2d SCFTs which give rise, via twists, to 2d TFTs are themselves cobordisms reps (namely of 2-cobordisms with conformal structure — for the bosonic rational case this has been made fully precise, the susy and non-rational cases are bound to work in a similar fashion).
This gives another large class of examples of cobordism reps which take values in things that are not dg-enriched.
And in principle, even though nothing much seems to be known about it, one has Chern-Simons theories in higher dimensions, each of them coming from a transgressive cohomology class on the group in question. And each of these (as also suggested by physics, for that matter) should have a "holographically" related higher dimensional CFT "on its boundary". And, while precise details are essentially unknown, there does not seem to be a reason to expect that any of these higher CS/CFT pairs live in the dg-world a priori.
What I am lacking is a good general intuition for what the twisting of susy CFTs to topological theories means on general abstract grounds. It's usually introduced as a trick: "look, we can do the following and the result is interesting" without much or any motivation for what it means to "do the twist".
Well, you possibly have more insight here than I do. But I am just thinking that understanding what twisting really means might go a long way towards explaining how dg-enrichment fits into the grand picture. After all, dg-enrichment can be regarded as enrichment over infinity-vector spaces, so if we have a sequence
dg-category, dg-2-category, etc.
we really seem to be looking at a categorification in two different directions, as with bisimplicial sets.
That must mean something more profound than seems to be apparent. I don't know what. Maybe you do.
I don't think there's anything mysterious about twisting — it's basically a physicist's way of saying that we start with a naive theory (one "in coordinates"), and then we realize we can make the fields live in various natural bundles (ie ones associated to the symmetry group of the theory, typically some superPoincare or superconformal group) rather than being plain scalars or spinors etc. A mathematician might skip the 0-th step and just say consider a theory with the following fields living in various bundles.
Given such a ("twisted") supersymmetric theory, we can then study what are the conserved supercharges Q. The space of these Q's tells us all the ways in particular to make a topological field theory from our SUSY QFT, and the result comes in a dg flavor by construction.
I have learned only very recently (embarrassing!) to appreciate the incredible depth and importance of the study of the possible supersymmetric field theories, which the physicists have done so well. It really seems to cover so much of the math we're interested in, eg the study of SUSY gauge theories really seems to know ALL the structures we care about in representation theory in characteristic zero. So I personally don't see much motivation to work in greater generality than what the physics already gives us, rather than to try to understand the structures we're given. Witten has explained for example how all the subtlest aspects of geometric Langlands come from classifying supersymmetries and twistings in N=4 d=4 super-Yang-Mills, but that this is only a pale reflection of a superconformal 6d CFT, which itself is but a pale reflection of 10d strings or 11d M-theory. I think just understanding the implications of this for representation theory will take longer than I can dream.
Regarding dg structure: I don't know anything about nontopological QFT, but the point is that cobordism categories are topological, and differential graded structure is the simplest way we know to algebraize topological spaces (in characteristic zero). From this POV dg stuff is completely interchangeable with any other context that's rich enough to sense topology. To understand nontopological field theory one would have to have a good idea of how to measure spaces of cobordisms with complex structures or metrics or so on the way we know how to measure topology, which is certainly beyond what I can understand.
Urs, I agree with you about finite-dimensional abelian categories, etc., but we should also remember that in these 3d TQFTs like Rozansky-Witten theory, the category we want to associate to the circle is not an abelian category, it's the derived category. I don't really know what that really means, but it indicates we will have to rethink a bit the Baez-Dolan `Extended TQFT Hypothesis part II' which says that extended TQFTs are n-functors from nCob into nHilb. In the most naive reading of that hypothesis, it would mean that the category associated to the circle should be some kind of abelian category, which is not the case. Something somewhere needs to be jigged a bit.
Bruce – an appropriately homotopically "jigged version" of the extended TQFT hypothesis was presented by Jacob Lurie in the Morse lectures at the IAS (notes at http://www.math.utexas.edu/users/benzvi/GRASP/lectures/IASterm.html)
— Hopkins and Lurie have proved this version in dimensions 1 and 2 (where it generalizes a theorem of Costello by replacing the target of the functor 2Cob –> dg-categories by an arbitrary 2-category), and have tentatively announced (in the last lecture) they expect a proof in arbitrary dimension… exciting times!
Aaron Bergman says:
Just to prevent miscommunication, I think you mean (oo,2) category, right? Though I assume everyone will be dropping the infinity-commas in the nearish future :).
Thanks for putting up the notes. They are very helpful. Unfortunately, when I read them on my mac, the images are confined to the bottom left quarter of each page, so I have to squint or zoom in. Is this a known problem?
Jacob's rephrasing of the Baez-Dolan hypothesis is pretty neat, since the target is made essentially irrelevant. If framed nCob is freely generated by one object, the endomorphisms of that object should form an interesting n-category with respect to universal properties, since it acts on all TQFTs.
Aaron – yes, I mean (oo,2)-category (yes I agree before we know it we will be meaning (oo,-) in front of everything, and dropping it!)
Scott – I think it should work better with Firefox maybe? I use firefox on the PC and there's no such issue but other browsers do funny things. The problem is I do something incredibly stupid with the notes — I write them on the tablet in Windows Journal format (jnt), save it as tiff, and then convert it to pdf brutally, resulting in huge files..
Thanks for pointing me to these notes David, they are a treasure trove! There are a few points my philosophy differs slightly though. It seems to me that Jacob Lurie's programme is very much a generators/relations programme. In the first lecture, he paraphrases the Baez-Dolan cobordism hypothesis to say that
Cobordism hypothesis (paraphrased): Extended TQFTs are "easy to describe and build".
By the way, as far as I know Baez and Dolan didn't explicitly write down a "Cobordism hypothesis", although perhaps they should have. Instead they wrote down the "Extended TQFT hypothesis I" which is a bit confusing. If we rename that to the "Cobordism hypothesis", then it basically says:
Cobordism hypothesis (original, basically): The n-category nCob is the free stable weak n-category with duals on one object.
Personally, I would disagree a bit with Jacob Lurie's paraphrasing of it… I view the Cobordism hypothesis as a statement of the equivalence of topological concepts (manifolds, cobordisms, etc.) with algebraic ones (higher categories with duals). Each side can inform the other, but I don't think the Cobordism hypothesis makes it appreciably easier to explicitly understand nCob. In particular I don't think it makes it any easier to write down a TQFT… in dimension greater than 3 at least, because that's where the generators/relations game becomes too difficult.
Instead, I go along with the viewpoint which Dan Freed also stresses in the MSRI talk available on his webpage. Namely, that in the realm of TQFTs it is better to write down what he calls "a priori" TQFTs, instead of using a generators/relations argument. In other words, one should write down a geometric construction which a priori captures the gluing laws of the path integral for any manifold (and some other laws perhaps yet to be elucidated)… and then you will have an extended TQFT. At THAT point you check what it assigns to the circle (not the other way round).
Let me summarize my point. It is often said, for instance, that "2d TQFTs come from Frobenius algebras". Instead I think it is better to say "2d TQFTs give rise to Frobenius algebras".
Above, I meant "dimension greater than or equal to 3", sorry about that.
Here is a question I have arising from Jacob Lurie's notes, perhaps someone here can help me out. In the last lecture, near the end, he points out, roughly, that the target 2-category of a TQFT should have duals for objects, morphisms and 2-morphisms.
Now, he says that dgCat_k has duals for morphisms provided we restrict our dg-categories to those of the form QCoh(X) where X is smooth and proper. Now this is something I'd love to get to the bottom of, since I have come across it a lot. My understanding is that by "dual for morphism in a 2-category" in this TQFT setting, we mean a morphism which has a simultaneous left and right adjoint… because that is the case in 2Bord.
But that doesn't hold for these morphisms between dg-categories of the form QCoh(X), right?
My bible on this is the paper by Willerton and Caldararu on the Mukai pairing. There they employ the 2-category Var (which it seems to me Jacob Lurie is essentially restricting to), and show that the Serre kernel allows us to equip every morphism with a left and right adjoint (but these aren't the same). So Var is not a 2-category with duals… in the strong sense we apparently need.
I think Jacob is claiming this makes TFTs much easier to write down – if you are given an (infty,n) category with duals in an appropriately strong sense, then to give a TFT valued in it in any dimension is equivalent to just giving a single object, no other relations.
But maybe I misunderstood your objection – it is indeed hard to find categories with all these duals..
I also am not sure what you meant about Jacob restricting to the 2-category Var – by a theorem of Toen functors between QC(X) and QC(Y) are given by QC(X x Y) for X,Y arbitrary varieties, so there is no restriction.. for smooth projective varieties the same is true for coherent sheaves.
You raise a very interesting/troubling point about the fact that adjoints of functors between QC(X) and QC(Y) – the left and right don't agree, but they differ by an orientation (Serre duality). The last couple of pages in the notes was things he was saying informally at the blackboard after the talk and I probably misunderstood.. in the 2d field theory case then we want a Calabi-Yau variety, which guarantees left and right adjoints agree up to a shift (which you can incorporate into your structure using a 2-gerbe as he says, or as Costello says in a slightly different way).
Of course we don't actually need CY varieties, the 2-category of CY categories (ones that look like sheaves on CYs) will be an example of a 2-category with strong duals in the sense we need (up to twisting by this 2-gerbe). In general the left and right adjoints will differ by an orientation (Serre functor) for smooth proper varieties, and you'd have to twist this away in some way to get a TFT..
Ok, thanks David I see your point now. I indeed had some misconceptions… Lurie's philosophy is indeed not about generators and relations, as I was suggesting. I have learnt something here.
But I think there is a nugget of truth to what I was saying. It seems to me that the cobordism hypothesis only makes extended TQFTs easier to write down when the dimension is low enough (less than or equal to 2) so that we can explicitly understand the n-category in which the single object is living. However, for 3d TQFTs this means our object (the object assigned to a point) will be living in a 3-category. For example, in Rozansky-Witten theory, the gismo assigned to a point is something like "the 2-category of coherent stacks on X" (whatever that means!); this 2-category is itself an object in some "3-category of gismos"…. this is the 3-category with duals that we are talking about. My point is that its pretty darn tough to understand these things precisely… what on earth is a 3-category with duals, for instance? Remember, we need the fully weak versions of these things, because that's what arises in geometrical situations. Luckily that's where the $(\infty, n)$ homotopy methods can probably help out.
The reason all these complications about 3-categories and so on don't show up in Lurie's talk, is that he takes 3d TQFTs and "dimensionally reduces" them to 2d TQFTs by setting $Z' (M) = Z(M \times S^1)$. For instance, he simplifies Chern-Simons theory into a 2d TQFT which assigns to the circle the twisted equivariant K-theory of G (in the full 3d Chern-Simons theory this is what is assigned to the torus). That's fine, and its an important simplification, but if we are to generate any new quantum invariants of 3-dimensional manifolds, we really want 3d TQFTs, and then we have to deal with 3-categories and so on.
In other words, the Cobordism hypothesis only helps us to write down TQFTs if we truly have a good grip on the theory of higher categories with duals (here "higher" means $n \geq 3$). My understanding though is that we are far from that point; I had the impression that even Lurie's methods have yet to really "tame" these guys. Maybe I am a few years behind the times.
By the way, thanks for your comments on duals:
"the 2-category of CY categories (ones that look like sheaves on CYs) will be an example of a 2-category with strong duals in the sense we need (up to twisting by this 2-gerbe)."
That's great, that's what I was hoping, that there was some way to get this to work. Now I need to understand some deeper abstract formulation of this "twisting by the 2-gerbe" mechanism…
Was offline for a while, so much has happened here.
Bruce wrote:
Now I need to understand some deeper abstract formulation of this "twisting by the 2-gerbe" mechanism…
Is this just taken from David's comment or is there a document describing what exactly this is about?
It should be the return to Freed's "a priori TQFT" picture: Chern-Simons theory (if that's still what we are talking about) is given by a Chern-Simons 2-gerbe with connection on BG and the Chern-Simons TQFT is obtained by "quantizing" that.
If one first discusses this for a CS 2-gerbe of trivial class then passing to nontrivial classes afterwards looks like one is "twisting" the trivial case. (By the way, I'd second Jim Stasheff here: if we can, we should not say "twist". There are billions of things that deserve to be called twists. So none does. It is just not descriptive enough. In my humble experience, when we have a structure and find that we need to "twist" it, it is an indication that we are still missing a bigger picture.)
I think that the general picture of Freed's "a priori QFTs" (which we could just call "Sigma-models"!) involves starting with a differential cocycle on some target space and turning that into a differential cocycle on some abstract cobordism. The assignments to circles, tori, etc, are then obtained from taking the corresponding holonomies of that latter differential cocycle.
The qualitative and quantitative analysis of the coupled C, N, P and Si retention in complex of water reservoirs
Lilianna Bartoszek1 &
Piotr Koszelnik1
The Solina–Myczkowce complex of reservoirs (SMCR) accounts for about 15 % of the water storage in Poland. On the basis of historical data (2004–2006), mass balances of nitrogen, phosphorus, total organic carbon and dissolved silicon were calculated. Large, natural affluents were the main source of the biogenic compounds in the studied ecosystem, delivering 90 % of the TOC, 87 % of the TN and 81 % of the TP and DSi load. Moreover, the results show that the SMCR is an important sink for all the analysed biogenic elements. About 15–30 % of the external loads were retained in the reservoirs, mainly in the upper Solina reservoir. Owing to intensive primary production, mainly the inorganic forms of nitrogen and phosphorus were retained. Internal production of organic matter led to an amount of organic matter deposited in the sediments greater than anticipated on the basis of the mass balance calculations. A constant load of dissolved silicon, originating only from natural sources, did not offset the Si deficits present in the water body of the reservoirs, promoting disturbances in the N:C:P:Si ratios and creating growth conditions favouring other types of algae.
During the last decades, substantial growth of the anthropogenic sources of biogenic substances supplied to natural waters has been observed. However, that growth is not uniform across the individual elements. Anthropogenic sources of nitrogen and phosphorus compounds are connected with sewage and fertilizer losses, while human sources of silicon are minor. Moreover, increased N and P loads may stimulate primary and secondary production and thus the growth of total organic carbon (TOC) loads (Zeleňáková et al. 2012; Wiatkowski et al. 2015; Gajewska 2015). This effect distorts the natural C:N:P:Si ratio of the water, causing negative changes in the biology of the ecosystems and the deterioration of water quality. That is especially evident in the case of stagnant waters. The role of lakes and reservoirs as sinks of biogenic elements along the aquatic route from land to the oceans is evident (Hejzlar et al. 2009; Bouwman et al. 2013; Grizzetti et al. 2015; Wiatkowski et al. 2015). The mechanism of that retention is complex and connected with physical, hydrological and chemical processes, e.g. denitrification (for N), sedimentation, adsorption and also biological uptake. Qualitative and quantitative analysis of the coupled C, N, P and Si retention in reservoirs, and knowledge of these element cycles, is essential for our understanding of ecosystem biogeochemistry (Bouwman et al. 2013). Additionally, this information is important for interpreting the causes and intensity of greenhouse gas (e.g. N2O, CO2 and CH4) emissions from reservoirs (Gruca-Rokosz and Tomaszek 2015).
Retention of biogenic elements in water ecosystems is frequently described as a function of the morphometric and hydrologic parameters of reservoirs. The factor most frequently considered in model studies is the hydraulic retention time (HRT); the depth and area of the reservoir and the loads are also utilised (Behrend and Opitz 2000; Seitzinger et al. 2002; Tomaszek and Koszelnik 2003; Hejzlar et al. 2009). In general, retention increases with HRT, as a longer residence of water in slow-flow zones promotes the two basic mechanisms of retention, i.e. uptake by organisms and sedimentation. The behaviour of nitrogen is slightly different, as its retention may be high also in shallow and flowing basins owing to conditions promoting denitrification, which in the literature is referred to as a third nitrogen retention mechanism, retention being understood as the difference between the inflowing and outflowing load (Seitzinger et al. 2002; Tomaszek and Koszelnik 2003).
The purpose of the study is to interpret how a mountain complex of reservoirs can modify the natural biogeochemical fluxes of four major biogenic elements along a river waterway. The Solina–Myczkowce complex of mountain reservoirs (SMCR) is a perfect training ground for this interpretation. During the 2004–2006 period the reservoir chemistry, hydrology and catchment area management were studied intensively and independently by different institutions.
Study site
The SMCR is located in the upstream part of the River San (SE Poland) within the River Vistula system (Fig. 1). The upper, Solina reservoir is the biggest man-made lake in the Vistula basin and accounts for about 15 % of the total water storage capacity in Poland (volume: 502 mln m3, mean depth: 22 m, hydraulic retention time: 215 days, mean discharge: 35 m3 s−1). The lower Myczkowce Reservoir, as a compensatory water body (volume: 10 mln m3, mean depth: 5 m, hydraulic retention time: 6 days), is supplied by the hypolimnetic waters of the upper one (90 % of total supply), and by minor tributaries. The upper reservoir has three major inflows (accounting for 90 % of total water supply) and three minor ones. Reversible pumping takes place sporadically. The outflow from the complex involves bottom water from the lower reservoir flowing through a hydro-electric-power plant.
The bathymetric map of the Solina–Myczkowce complex of reservoirs. Locations of the sampling stations in stagnant water (M—Myczkowce, S—Solina) as well as of the studied affluents are shown
The greater part (c. 75 %) of the 1250 km2 catchment area is covered by forest, followed by meadows and pastures. Arable land accounts for only a small fraction of the area. The drainage basin has a low population density of about 6 inhabitants per km2, while about half of the households concerned are not connected to either sewerage or septic systems. A relatively steep (6 %) slope favours the leaching of soil and ground cover, especially during periods of intensive atmospheric precipitation and snow melting.
Sampling strategy and methods
In order to assess the mass balance of N, P, DSi and TOC, water was sampled from the mouth sections of the rivers and streams feeding the Solina reservoir, as well as from the outflow. Water samples obtained from four stations on the Solina reservoir and two on the Myczkowce reservoir were also analysed. Water was sampled 33 times, every 1–6 weeks; the sampling dates were adjusted on the basis of meteorological and hydrological factors. About 0.5 dm3 of glass-fiber-membrane-filtered water was subjected to spectrophotometric determination of the concentrations of nitrate-nitrogen (N-NO3 −, salicylate method, coefficient of variation for the procedure—CVP: ±1.5 %), nitrite-nitrogen (N-NO2 −, Griess reaction, CVP: ±1.7 %), ammonium-nitrogen (N-NH4 +, Berthelot's reaction, CVP: ±1.4 %), phosphate-phosphorus (P-PO4 3−, molybdate method, CVP: ±1.8 %), dissolved silicon (molybdate method, CVP: ±1.6 %) and chlorophyll a (only in stagnant water, extracted in methanol), using a PhotoLab S12 spectrophotometer (WTW GmbH). Moreover, Kjeldahl nitrogen (NKjeld, distillation), total organic carbon (TOC, Shimadzu TOC analyzer, CVP: ±0.6 %) as well as total phosphorus (TP, oxidation to phosphate) were analysed in non-filtered samples. Total nitrogen (TN) was calculated as the sum of nitrate- and nitrite-nitrogen and NKjeld. Balances of the respective elements, both for the entire complex and for the two reservoirs, were calculated as follows:
$$L_{R} + L_{DD} + L_{At} = L_{Out} + R(E)$$
where (mass time−1): L_R—load inflowing from the drainage basin in affluent waters; L_DD—load from the direct drainage basin; L_At—load supplied from the atmosphere along with precipitation; L_Out—load removed along with the run-off; R(E)—retention (elimination) of an element in the ecosystem.
Load inflowing from the drainage basin in affluent waters (L_R) constituted the sum of the loads inflowing with all the rivers and streams feeding the balanced ecosystems. Loads of elements for the particular sections were calculated as the product of concentration and water flow rate. Daily flow rate (Q) values for the analysed sections, necessary for the calculation of loads from the three main affluents (about 90 % of the total supply) and of the run-offs from the reservoirs, were obtained from Solina–Myczkowce Power Plant S.A. (continuous stage measurements). The Q value for the smaller watercourses was calculated on the day of water sampling, using installed staff gauge readings. Concentrations between the sampling days were calculated using a statistical approach, according to Mukhopadhyay and Smith (2000). The uncertainty of the calculated loads was approximated, following Harmel et al. (2006), as the cumulative uncertainty of the potential sources of error. We assumed errors during sampling (uncertainty of 15 %), chemical procedures and analyses (the above-mentioned CVP, ca. 2 %) and flow measurements (continuous stage measurements, uncertainty of 2 %). Therefore, the C, N, P and Si loads are calculated with errors of ±19 %. To estimate the cumulative probable uncertainty of the calculated retentions, the root mean square error propagation method was used (Harmel et al. 2006). However, the analysis and discussion of the data are based on the most probable, average values, for clarity of the paper and by analogy to many other authors (e.g. Mengis et al. 1997; Garnier et al. 1999; Torres et al. 2007; Hejzlar et al. 2009).
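As a reading aid only, the sketch below shows how such load and retention calculations might be implemented; it is not taken from the paper, and the function names and example numbers are purely illustrative.

```python
import numpy as np

def annual_load(conc_g_m3, flow_m3_s):
    """Annual load (t/year) from daily mean concentrations (g/m3) and flows (m3/s)."""
    daily_load_t = conc_g_m3 * flow_m3_s * 86400 / 1e6  # g/day converted to t/day
    return float(daily_load_t.sum())

def retention(L_R, L_DD, L_At, L_Out):
    """R(E) = L_R + L_DD + L_At - L_Out, all terms in t/year."""
    return L_R + L_DD + L_At - L_Out

# purely illustrative series, not the paper's data
conc = np.full(365, 1.2)    # g N m-3, daily means interpolated between sampling dates
flow = np.full(365, 35.0)   # m3 s-1, daily flows from stage measurements
L_R = annual_load(conc, flow)
print(round(L_R), round(retention(L_R, 50.0, 10.0, 1200.0)))  # inflowing load and R(E)
```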
L_DD was calculated as the sum of (1) the load inflowing from point sources within the direct drainage basin (including the WWTP); (2) the load inflowing from nonpoint sources within the direct drainage basin; and (3) the load introduced by bathers. The respective summands were calculated from the available models (Giercuszkiewicz-Bajtlik 1990; Jørgensen 2011), using data on the development of the direct drainage basin area, the tourist burden and the amount of wastewater discharged from the WWTP, obtained from the Solina Municipality with its seat in Polańczyk. L_At was obtained from a parallel study (Urbanik 2007, MSc thesis, unpublished data) utilising precipitation rate measurements carried out by the hydrological survey station in Lesko. As the reservoirs are not located in an area affected by ground waters, this source of supply was disregarded. In addition, because of the mesotrophic nature of the waters, the effect of atmospheric nitrogen binding by cyanobacteria was recognised as insignificant for the balance of this element (Ferber et al. 2004; Koszelnik et al. 2007).
The sedimentation rates of TN and TOC were calculated on the basis of the phosphorus mass balance (phosphorus, in contrast to nitrogen and carbon, is not present in a gaseous state in the biogeochemical cycle), using the following formulae (Dudel and Kohl 1992):
$$N_{sed} = \left(P_{ret} - \Delta P\right)\cdot\left(N{:}P\right)$$
$$C_{sed} = N_{sed}\cdot\left(C{:}N\right)$$
where: N_sed (C_sed)—nitrogen (carbon) sedimentation rate (t year−1); P_ret—phosphorus retention in the reservoir between the sampling time points (t year−1); N:P—ratio of total N to total P concentration in the benthic deposits (from Koszelnik 2009a, b); C:N—ratio of organic C to total N concentration in the benthic deposits (from Koszelnik 2009a, b); ∆P—change of the mean phosphorus content in the water body between the sampling days (t year−1; from Koszelnik 2009a, b), calculated from the following formula:
$$\Delta P = \frac{{\left( {P_{n + 1} - P_{n} } \right)V_{R} }}{\Delta t} \cdot \frac{365}{{10^{6} }}$$
where P_n and P_{n+1} correspond to the TP concentrations on the nth and (n + 1)th days of sampling (g P m−3), V_R is the reservoir volume (m3), ∆t is the time period between the nth and (n + 1)th sampling days (days), and 10^6 and 365 are conversion factors to obtain the t year−1 unit.
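A minimal sketch of the sedimentation-rate calculation is given below, assuming the reconstructed formulae above; the input values are hypothetical and serve only to show the unit conversions.

```python
def delta_p(tp_next, tp_prev, volume_m3, dt_days):
    """Change of mean P content in the water body (t/year) from TP in g P m-3."""
    return (tp_next - tp_prev) * volume_m3 / dt_days * 365 / 1e6

def n_sed(p_ret, d_p, n_to_p):
    """N_sed = (P_ret - dP) * (N:P ratio of the benthic deposits), t/year."""
    return (p_ret - d_p) * n_to_p

def c_sed(n_sedim, c_to_n):
    """C_sed = N_sed * (C:N ratio of the benthic deposits), t/year."""
    return n_sedim * c_to_n

# hypothetical inputs: TP rose by 0.001 g m-3 over 30 days in a 502 mln m3 reservoir
dP = delta_p(0.031, 0.030, 502e6, 30)
N = n_sed(25.0, dP, 8.0)                                 # assumed P_ret and sediment N:P
print(round(dP, 1), round(N, 1), round(c_sed(N, 12.0), 1))  # assumed sediment C:N
```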
The results of the mass balance for the dammed reservoir complex Solina–Myczkowce are listed in Table 1. The compiled data show that the inflow of TN to the reservoirs from all sources in 2004 and 2005 amounted to approx. 1700 t. In the final year of the study the value was ca. 50 % greater and amounted to 2489 t. Soluble forms, in particular nitrate(V) nitrogen (ca. 60 %), were the main contributors to the load. For the other elements, the loads calculated for each balance year were less variable. The annual inflow of TP was within a range of 76–101 t; of DSi, 2056–2244 t; and of TOC, 2517–2837 t. The loads predominantly fed the Solina reservoir. The natural affluents of the Myczkowce reservoir played a minor part in the biogenic compound supply (3–7 %). Large, natural affluents (Fig. 1) were the main source of the biogenic compounds in the studied ecosystem, delivering 90 % of the TOC, 87 % of the TN and 81 % of the TP and DSi load (Fig. 2). The share of inflows from the direct drainage basin is most evident in the case of the DSi balance (5 %), whereas the load originating from the atmosphere is immaterial in the annual balance.
Table 1 Mass balance of selected N, C, P and Si compounds for the Solina and Myczkowce reservoirs
The shares of the different sources of biogenic elements in the supply of the Solina–Myczkowce complex of reservoirs
Except for TOC (R2 = 0.85–0.90; p < 0.001), no significant relations between hydraulic flows and the concentrations of the total forms of the analysed elements in water were found. The absence of seasonal changes in N and P concentrations indicates that both N and P originate from point as well as non-point sources. The occurrence of such changes in the case of naturally derived Si can be related to its assimilation in the river waters upstream of the reservoirs (Humborg et al. 2000).
Calculated values of the loads and retention/elimination rates of N, P, Si and C are listed in Table 1. An apparent discrepancy between the results for the two reservoirs and the total balance results from the inclusion of the reverse pumping of water from the Myczkowce to the Solina reservoir in the partial balances; it influences the particular balances but has no effect on the balance for the reservoir complex. Significant masses of the balanced elements were retained within the analysed ecosystem. This indicates that the majority of the biogenic compounds feeding the reservoirs is incorporated into the trophic chain and/or accumulated in the benthic deposits (Behrend and Opitz 2000; Koszelnik et al. 2007) by various retention mechanisms. The overall reservoir balance reveals that only for nitrogen, and only in 2006, was the outflowing load higher than the inflowing load. In general, the retention of biogenic compounds in the entire cascade depended on the amount of elements retained in the upper reservoir. The lower reservoir, due to its short HRT and unfavourable thermal conditions, normally neither retained nor eliminated significant amounts of biogenic compounds. Only in 2005 did silicon retention take place predominantly in the Myczkowce reservoir, with a similar relation true for N-NO3 − and TOC for the entire study period.
Except for 2006, when the SMCR was a nitrogen source for downstream waters, the nitrogen retention level corresponded to the values determined in previous studies, carried out with varying frequency between 1970 and 2003 (10–20 %; Tomaszek and Koszelnik 2003). Approximately 60 % of the TN load supplying the reservoirs was nitrate(V) nitrogen. In 2004, out of 132 t of TN retained in the reservoir complex, N-NO3 − accounted for as much as 89 t, and in 2005 the respective figures were 374 and 361 t. The significantly higher load of TN in 2006 was connected with intensive water discharge during early spring rather than with a new nitrogen source (Urbanik 2007—unpublished data).
The determined TP loads and percentages of element retention in the reservoir complex during the balanced years were slightly less varied than those for nitrogen. Between 19 and 33 % of the inflowing TP was retained in the reservoirs. Despite the fact that phosphates constitute slightly more than 50 % of the load inflowing to the reservoirs, the calculated balance reveals that retention of this easily assimilative phosphorus form prevailed.
The determined retention of dissolved silicon (DSiret%) in the reservoir complex varied from 5 % (2006) to 24 % (2005) of the annual external load. It was observed that in the first and the last year of the study the majority of the supplied load was accumulated in the Solina reservoir. By contrast, in 2005, of the 492 t of retained silicon, as much as 352 t was retained in the Myczkowce reservoir. Retention of TOC was observed in both reservoirs and ranged from 11 to 22 % of the supplied load. As with the totals of the remaining elements, the largest amount of carbon was retained in the studied reservoir complex in 2005.
Upon analysis of the influence of morphometric factors on retention, a significant influence of HRT on the retention of N, P and Si in the waters of the entire reservoir complex was observed (Table 2). The flow rate of the inflowing waters was correlated only with Nret% in the Myczkowce reservoir, while the load of elements supplied to the reservoir influenced N retention in the Myczkowce reservoir and DSiret% in the Solina reservoir, as well as in the entire complex. Much better correlations were observed upon analysis of the influence of hydraulic retention (Wret—water inflow reduced by water outflow) on element retention. An increase in Wret led to a significant rise of Nret% in both reservoirs and in the complex, of DSiret% and TOCret% in the Solina reservoir and in the complex, and of Pret%, but only in the balance of the entire reservoir complex (Table 1). Significant correlations between the retention values of N, P and Si for the Solina reservoir and the entire complex were seen upon analysis of the interrelations of the retention values for all the analysed elements (Table 2). Such relations were not seen for the Myczkowce reservoir, where there was no influence of TOCret% on Nret%, and Pret% was correlated only with DSiret%.
Table 2 Relationship between the C, N, P and Si retention values and chosen parameters of the studied reservoirs, expressed as Pearson's correlation coefficient with its statistical significance
The average annual sedimentation rates of TN and TOC in the Solina reservoir, calculated from the mass balance and the levels of the individual elements in the sediments, are provided in Table 3. Sedimentation in the Myczkowce reservoir was negligible due to the very short HRT of this reservoir (approximately 2 days). The production of sediments in this reservoir is related mainly to macrophyte production and the supply of external matter. The TOC sedimentation calculated for the entire Solina reservoir amounts to 1300 t per year, and that of TN to 110 t per year. Thickness analysis of the matter deposited since the beginning of the reservoir's existence shows that approximately twice as much sediment has formed in the backwaters of the reservoir (stations S1, S2) as in the lacustrine deep areas (stations S3, S4). This information was taken into consideration in the calculations; it was assumed that 2/3 of the total sedimentation takes place in that area.
Table 3 Sedimentation rates and contents of total nitrogen and total organic carbon in the Solina reservoir
The hydrologic balance (Urbanik 2007—unpublished data) shows that within the 3 years of the study the hydraulic inflow to the reservoirs was compensated by the outflow, so the calculated element retention is not a result of hydrologic factors but of biogeochemical ones. The mass balances of N, C, P and Si, conducted separately for both reservoirs and for the entire complex, enabled identification of the classical biogeochemical cycle of conversion of the analysed elements. The results of the mass balance show that the SMCR retains a significant amount of biogenic elements (Table 1). The major part of the elements was retained mostly in the Solina reservoir. The biogenic compounds were retained only sporadically in the Myczkowce reservoir, owing to hydrologic factors, i.e. its feeding with hypolimnetic, N-, C-, P- and Si-rich waters. The relationships presented in Table 2 show that the retention of biogenic elements within the reservoir complex (mostly the Solina reservoir) results not only from hydrologic factors or from the operation of the power station (storage of water during the spring period and discharge during low-water periods in summer and autumn), but also from the inclusion of easily assimilative forms in the trophic chain and from various chemical transformations. The significant correlation between TN, TP and DSi, which are retained mostly in the easily assimilative forms (Table 1), confirms that mechanisms of assimilation by aquatic organisms are crucial for the retention of elements in the studied reservoirs.
The distinctive feature of the mass balance was the DSi depletion from the water body in the reservoirs, mostly in the Solina reservoir (Fig. 2). Approximately 20 % of the inflowing load of DSi was retained in the studied reservoirs. Humborg et al. (2006) report that the DSi load flowing off from 1 km2 of the River Vistula basin to the Baltic Sea amounts to 0.8 t per year. The average annual load of DSi feeding the SMCR amounts to 1947 t, which is equivalent to 1.5 t of runoff from 1 km2 of the basin. Hence, assuming that silicon originates only from sources connected with soil erosion (Garnier et al. 1999; Humborg et al. 2002) and that the DSi load produced in other areas of the Vistula basin is similar, ca. 50 % of the dissolved silicon is retained within the Vistula basin, unfavourably reducing the loads feeding the Baltic Sea. This decrease leads to a deterioration of seawater quality due to the deficiency of DSi compared to the other biogenic compounds from anthropogenic sources. This phenomenon leads to an imbalance between diatoms and other algae (Humborg et al. 2000). A similar decrease in the DSi load in dammed reservoirs was described by Garnier et al. (1999), while Humborg et al. (2006) conclude that the cascade design of dammed reservoirs on rivers increases the HRT and favours retention of dissolved silicon, depleting it from downstream waters. A DSi retention of 20 % is a distinctive feature of oligotrophic waters (Garnier et al. 1999), but the data available in the literature (Garnier et al. 1999; Humborg et al. 2002, 2006) also show that similar levels of DSi retention were seen in both oligo- and eutrophic ecosystems. In the studied case, the depletion of dissolved silicon in the surface water body, related in turn to DSiret, affects the water quality, stimulating the growth of algae. The correlations shown in Fig. 3 may confirm that the increase in the chlorophyll level can be related to the emergence of non-diatomic (green) algae. Diatoms are seen in lake and reservoir waters mostly in spring, but even in late winter as well (Humborg et al. 2000; Lehmann et al. 2004). By analogy to the conditions in Lake Lugano (Lehmann et al. 2004), it can be concluded that a DSi level exceeding 0.7 g m−3 in the epilimnion of the Solina reservoir is associated with chl a levels related to the presence of diatoms and green algae. Below this level, in summer, a rapid increase in chl a, reaching even as much as 12 mg m−3, is observed, which, in turn, may lead to the occurrence of thermophilic cyanobacteria with the concomitant disappearance of diatoms. Despite the low water temperature in the Myczkowce reservoir, an elevated level of chl a was seen, but an index of >2.5 mg m−3 was noted only in 2005, while in 2006 it was low. In general, phytoplankton production in this reservoir is minor. However, due to the poor silicon supply, silicon shortages can occur, which lead to minor blooms of algae, mostly in the warmer water area near the dam, analogous to those present in the Solina reservoir (Koszelnik 2013). A decrease in water DSi below 1 g m−3 was seen in 2005, but not in 2006. In addition, when silicon was almost completely depleted from the water body, a decrease in the Si:N and Si:P ratios (see Fig. 4) was seen, and DSi became the limiting element. With a silicon shortage present, phosphorus and nitrogen are the main substrates utilised in the production of organic matter in the reservoirs. In turn, the value of the N:P molar ratio (Fig. 4), significantly exceeding 16:1, proves the stoichiometric excess of nitrogen over phosphorus.
Influence of DSi depletion on phytoplankton growth (Chl a) in the Solina (a) and Myczkowce (b) reservoirs
Mean chlorophyll a concentration versus N:P:Si Redfield ratios in lacustrine zone of the Solina reservoir (a, c) and Myczkowce reservoir (b, d)
Approximately 28 % of the phosphorus supplied to the studied reservoirs is retained. The process occurs mostly in the Solina reservoir. This result depends on various factors allowing storage of phosphorus from the water body in the benthic deposits, related in general to an affinity for specific metals, the presence of aerobic conditions or the pH (Golterman 1998). Phosphorus retention is a result of the sedimentation of solid particles introduced to the reservoir with the affluents and of assimilative forms incorporated into the phytoplankton biomass and transferred to the sediments (Hejzlar et al. 2009; Dunalska et al. 2013). A part of the retained phosphorus can be released back to the water body due to resuspension or to decomposition of its bonds with iron or other metals under anaerobic conditions. The benthic deposits of the Solina reservoir are rich in metals with an affinity for phosphorus, mostly iron. The retention capacity of these deposits is very high (Bartoszek et al. 2009), and favourable aerobic conditions make the release of phosphorus from the deposits practically absent in both reservoirs. On that basis it can be concluded that the principal phosphorus retention mechanism in the reservoir is the direct sedimentation of phosphorus contained in suspension and the intermediate sedimentation of mineral forms after their assimilation into the trophic chain. The storage properties of the benthic deposits are large enough that the calculated Pret value could be greater, but the morphometry of the Solina reservoir (high depth, low area of an active bed) determines the identified level. Consequently, upon balancing the average phosphorus mass retained in the reservoir, it was assumed that the phosphate phosphorus retention is equal to the amount of this element utilised by phytoplankton, and that the difference between the retention of P-PO4 3− and TP is equal to the amount of the element originating from external sources and accumulated in the deposits:
For the complex of reservoirs (t year−1) Inflow Retention Partial P sedimentation Sedimentation P accumulated in biomass
The complexity of the biogeochemical nitrogen transformations in the water environment affects the retention level of this element. In contrast to Si and P, nitrogen has a substantial gaseous phase, and denitrification, leading to a change in its state of aggregation, affects the mass balance of this element in water ecosystems. In the studied period of 3 years, the decrement of the load inflowing to the reservoirs amounted to only 9 % per year. This value was affected by a significant elimination of the element in 2006. In 2005 Nret% was equal to ca. 22 % of the introduced load. Studies on nitrogen retention, carried out between 1999 and 2003, reveal that the retention of this element varied significantly, amounting to from 12 to 36 % of the annual load (Tomaszek and Koszelnik 2003). Previous studies show that the rate of denitrification in the Solina reservoir is stable and amounts to ca. 5 g m−2 year−1 (Koszelnik et al. 2007), which corresponds to ca. 20 % of the retention and 5 % of the nitrogen load. Empirical models of denitrification under the conditions occurring in the Solina reservoir, contingent on the presence of nitrates and on temperature (Gruca-Rokosz 2005, PhD thesis, unpublished data), were utilised to analyse the various nitrogen retention mechanisms. The estimated denitrification rate was 4 g N m−2 year−1, and on that basis it was calculated that, on average, 70 t of N is denitrified annually. The nitrogen sedimentation calculated from the mass balance equals 110 t year−1, hence the balance of retained nitrogen is as follows:
For the complex of reservoirs (t year−1): Inflow 1957; Retention 174; Sedimentation 110; Denitrification 70; Indefinite −6
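For clarity, the "Indefinite" entry is simply the residual closing the balance between the calculated retention and the two identified removal pathways; the symbols below are our own shorthand, not the paper's notation:

$$R_{N} - \left(N_{sed} + N_{denitr}\right) = 174 - (110 + 70) = -6\ \mathrm{t\ year^{-1}}$$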
The share of the denitrification process in nitrogen retention amounted to 40 % and was twice as high as that calculated for previous years (Koszelnik et al. 2007). However, the corresponding load reduction is significantly smaller (similarly to Nret%) and was only 3.6 %.
The influence of denitrification on the nitrogen mass balance depends on various factors. Seitzinger et al. (2002) report that in North American estuaries 50 % of the nitrogen load is removed by denitrification. Estuaries are bodies of water similar in many respects to dammed reservoirs, mostly owing to the ratio between the areas of the basin and the water table, the retention time and the biogenic compound loads. The said value may be realistic, but in many cases—including estuaries—significantly lower values are seen, i.e. 5–30 % (Dudel and Kohl 1992; Koszelnik et al. 2007; Povilaitis et al. 2012), although values as high as 70 % are also reported (Mengis et al. 1997). A comprehensive analysis of the available data presented in previous papers (Koszelnik et al. 2007) enables one to conclude that in water bodies with high hydraulic dynamics the contribution of denitrification to the mass balance is minor when compared to natural lakes, where this process can be significant.
Nitrogen sedimentation, in a way similar to phosphorus, is a result of its consumption. Thus, separating the N_sed and N_cons contributions can be difficult. Jickells et al. (2000) state that in inland waters retention consists mainly of the storage in the benthic deposits of organic nitrogen produced within the ecosystem. In the studied case the retention of assimilative forms, mostly nitrates, is equal to ca. 100 t year−1. Nevertheless, it should not be expected that the whole mass of retained NO3 − will be assimilated. Some part of it will be denitrified, as the main substrate for this process, occurring in the anoxic layer of the benthic deposits, is nitrate(V) diffusing from the water (Tomaszek and Gruca-Rokosz 2007). The nitrogen surplus seen in the above balance (−6 t) can result from the utilisation of nitrogen stored in the deposits in the nitrification process, which, after various transformations, can be denitrified.
The mass balance of total organic carbon calculated for the reservoir complex shows that approximately 442 t of TOC is retained annually. The calculated TOC sedimentation is three times higher and amounts to 1300 t year−1. Hence, it should be recognised that a significant part of the sedimenting matter is produced within the ecosystems:
For the complex of reservoirs (t year−1) Inflow Retention Sedimentation
Lentic waters are characterised by a high carbon retention capability, as the major part of the TOC supplied to lakes and reservoirs is respired and included in the trophic chain (Garnier et al. 1999). The above is true for both deep and shallow reservoirs. Anderson and Sobek (2006) provide an example of a shallow lake in Sweden, in which the annual carbon load amounts to approximately 3 t. The calculated phytoplankton production for the lake is as high as 53 t C per year, and the macrophyte production 16 t C per year, while carbon sedimentation is three times greater than the carbon inflow to the ecosystem. Unlike for the other elements, TOC retention in the Myczkowce reservoir was fairly high (7–8 % of the load), which can be explained by carbon uptake by macrophytes after its respiration. The rate of decomposition of these forms is low, and annually they release only 40 % of the retained organic carbon (Gessner 2001).
Both SMCR reservoirs are loaded with nitrogen and phosphorus in amounts significantly exceeding the theoretical values considered allowable. Although the easily assimilative inorganic forms dominate in the supplied mass of elements, the concentrations of both biogenic compounds and the amount of chlorophyll a fall within the level specific for mesotrophy. However, the inflow of biogenic elements is so significant that no distinct seasonal variations of nitrogen and phosphorus concentrations were seen in the reservoirs. The major part of the biogenic compound load supplied to the studied reservoirs is retained therein via utilisation as substrates in the primary production process.
During the summer, dissolved silicon deficits were observed in the waters of both reservoirs. This phenomenon was caused by silicon consumption within the water body and reduced inflow from the basin. In this case silicon became a limiting element for the production of diatomic organic matter, especially in the warmer Solina reservoir. This effect was accompanied by an increase in chlorophyll a concentration, sporadically reaching values specific for eutrophy, which can be related to the production of other (non-diatomic) species of algae.
The retention values of the particular elements, calculated from the mass balance, prove the intensity of the element uptake carried out by organisms. Approximately 20 % of the inflowing total forms of N, P and Si is accumulated, mostly in the Solina reservoir. Dissolved inorganic forms account for more than 50 % of the retained load. The sedimentation of autochthonous biogenic forms is a result of the inclusion of the supplied elements in the trophic chain. Denitrification accounts for 20 % of the nitrogen retention and only 5 % of the supplied load. TOC retention at the level of 30 % proves that allochthonous matter is accumulated in the deposits.
The sedimentation of allochthonous organic matter calculated from the mass balance is three times lower than the value estimated on the basis of the overall TOC sedimentation in the Solina reservoir. The remaining amount results from intra-ecosystemic production stimulated by the external inflow of biogenic compounds and by the circulation of elements within the reservoirs.
Anderson E, Sobek S (2006) Comparison of a mass balance and an ecosystem model approach when evaluating the carbon cycling in a lake ecosystem. Ambio 33(8):476–483. doi:10.1579/0044-7447(2006)35[476:COAMBA]2.0.CO;2
Bartoszek L, Tomaszek JA, Sutyła M (2009) Vertical phosphorus distribution in the bottom sediments of the Solina–Myczkowce reservoirs. Environ Prot Eng 35(4):21–29
Behrend H, Opitz D (2000) Retention of nutrients in river systems, dependence of specific runoff and hydraulic load. Hydrobiologia 410:111–122
Bouwman AF, Bierkens MFP, Griffioen J, Hefting MM, Middelburg JJ, Middelkoop H, Slomp CP (2013) Nutrient dynamics, transfer and retention along the aquatic continuum from land to ocean: towards integration of ecological and biogeochemical models. Biogeosciences 10:1–22. doi:10.5194/bg-10-1-2013
Dudel G, Kohl J-G (1992) The nitrogen budget of a shallow Lake (Grosser Műggelsee, Berlin). Int Rev Gesamten Hydrobiol 77:43–72
Dunalska J, Zieliński R, Bigaj I, Szymański D (2013) Indicators of changes in the phytoplankton metabolism in the littoral and pelagial zones of a eutrophic lake. Rocznik Ochrona Środowiska 15(1):621–636
Fantin-Cruz I, Pedrollo O, Girard P, Zeilhofer P, Hamilton SK (2015) Changes in river water quality caused by a diversion hydropower dam bordering the Pantanal floodplain. Hydrobiologia. doi:10.1007/s10750-015-2550-4
Ferber LR, Levine SN, Lini A, Livingston GP (2004) Do cyanobacteria dominate in eutrophic lakes because they fix atmospheric nitrogen? Freshw Biol 49(6):690–708
Gajewska M (2015) Influence of composition of raw wastewater on removal of nitrogen compounds in multistage treatment wetlands. Environ Prot Eng 41(3):19–30
Garnier J, Leporcq B, Sanchez N, Philippon X (1999) Biogeochemical mass balances (C, N, P, Si) in three large reservoirs of the Seine Basin (France). Biogeochemistry 47:119–146
Gessner MO (2001) Mass loss, fungal colonisation and nutrient dynamics of Phragmites australis leaves during senescence and early decay. Aquat Bot 69:325–339
Giercuszkiewicz-Bajtlik M (1990) Prognozowanie zmian jakości wód stojących. Warszawa, Wydawnictwo Instytutu Ochrony Środowiska, p 1990
Golterman HL (1998) The distribution of phosphate over iron-bound and calcium-bound phosphate in stratified sediments. Hydrobiologia 364:75–81
Grizzetti B, Passy P, Billen G, Bouraoui F, Garnier J, Lassaletta L (2015) The role of water nitrogen retention in integrated nutrient management: assessment in a large basin using different modelling approaches. Environ Res Lett. doi:10.1088/1748-9326/10/6/065008
Gruca-Rokosz R, Tomaszek JA (2015) Methane and carbon dioxide in the sediment of a eutrophic reservoir: production pathways and diffusion fluxes at the sediment-water interface. Water Air Soil Pollut. doi:10.1007/s11270-014-2268-3
Harmel RD, Cooper RJ, Slade RM, Haney RL, Arnold JG (2006) Cumulative uncertainty in measured streamflow and water quality data for small watersheds. Trans ASABE 49(3):689–701
Hejzlar J, Anthony S, Arheimer B, Behrendt H, Bouraoui F, Grizzetti B et al (2009) Nitrogen and phosphorus retention in surface waters: an inter-comparison of predictions by catchment models of different complexity. J Environ Monit 11:584–593. doi:10.1039/b901207a
Humborg C, Conley DJ, Rahm L, Wulff F, Cociasu A, Ittekkot V (2000) Silicon retention in river basins: far-reaching effects on biogeochemistry and aquatic food webs in coastal marine environments. Ambio 29:44–49
Humborg C, Blomquist S, Avsan E, Bergensund Y, Smedberg E, Brink J, Mörth C-M (2002) Hydrological alterations with river damming in northern Sweden: implications for weathering and river biochemistry. Global Biogeochem Cycles. doi:10.1029/2000GB001369
Humborg C, Pastuszak M, Aigars J, Siegmund H, Mörth C-M, Ittekkot V (2006) Diatoms silica land-sea fluxes through damming in the Baltic Sea catchment—significance of particle trapping and hydrological alterations. Biogeochemistry 77:265–281
Jickells T, Andrews J, Samways G, Sanders R, Malcolm S, Sivyer D, Parker R, Nedwell D, Trimmer M, Ridgway J (2000) Nutrient fluxes through the Humber estuary—past, present and future. Ambio 29(3):130–135
Jørgensen SE (2011) Fundamentals of ecological modelling. Applications in environmental management and research. Amsterdam, Elsevier
Koszelnik P (2009a) Atmospheric deposition as a source of nitrogen and phosphorus loads into the Rzeszow reservoir SE Poland. Environ Prot Eng 33(2):157–164
Koszelnik P (2009b) Źródła i dystrybucja pierwiastków biogennych na przykładzie zespołu zbiorników zaporowych Solina–Myczkowce. Rzeszów, Oficyna Wydawnicza Politechniki Rzeszowskiej
Koszelnik P (2013) Rola krzemu w procesie eutrofizacji wód na przykładzie zbiorników Solina i Myczkowce. Rocznik Ochrona Środowiska 15:2218–2231
Koszelnik P, Tomaszek JA, Gruca-Rokosz R (2007) The significance of denitrification in relation to external loading and nitrogen retention in a mountain reservoir. Mar Freshw Res 58(9):818–826. doi:10.1071/MF07012
Lehmann MF, Bernasconi SM, Mckenzie JA, Barbieri A, Simona M, Veronesi M (2004) Seasonal variation of the δ13C and δ15 N of particulate and dissolved carbon and nitrogen in Lake Lugano, constrains on biogeochemical cycling in eutrophic lake. Limnol Oceanogr 49:415–429
Mengis M, Gächter R, Wehrli B, Bernasconi S (1997) Nitrogen elimination in two deep eutrophic lakes. Limnol Oceanogr 42:1530–1543
Mukhopadhyay B, Smith EH (2000) Comparison of statistical methods for examination of nutrient load to surface reservoirs for sparse data set: application with a modified model for phosphorus availability. Water Res 34(12):3258–3268
Povilaitis A, Stålnacke P, Vassiljev A (2012) Nutrient retention and export to surface waters in Lithuanian and Estonian river basins. Hydrol Res 43(4):359–373
Seitzinger S, Styles PRV, Boyer EW, Alexander RB, Billen G, Howarth RW, Mayer B, Van Breemen N (2002) Nitrogen retention in rivers: model development and application to watersheds in northern USA. Biogeochemistry 57(58):199–237
Tomaszek JA, Gruca-Rokosz R (2007) Rates of dissimilatory nitrate reduction to ammonium in two polish reservoirs: impacts of temperature, organic matter content, and nitrate concentration. Environ Technol 28:771–778. doi:10.1080/09593332808618834
Tomaszek JA, Koszelnik P (2003) A simple model of the nitrogen retention in reservoirs. Hydrobiologia 504(1/3):51–58
Torres IC, Resck RP, Pinto-Coelho RM (2007) Mass balance estimation of nitrogen, carbon, phosphorus and total suspended solids in the urban eutrophic, Pampulha reservoir. Brazil. Acta Limnol. Bras. 19(1):79–91
Wiatkowski M, Rosik-Dulewska C, Kasperek R (2015) Inflow of pollutants to the Bukówka drinking water reservoir from the Transboundary Bóbr river basin. Rocznik Ochrona Środowiska 17:316–336
Zeleňáková M, Čarnogurska M, Šlezingr M, Słyś D, Purcz P (2012) A model based on dimensional analysis for prediction of nitrogen and phosphorus concentrations at the river station Ižkovce, Slovakia. Hydrol Earth Syst Sci 17:201–209. doi:10.5194/hess-17-201-2013
Both authors contributed to the writing of this paper, with shares amounting to 50 %. Both authors read and approved the final manuscript.
The research gained financial support from Poland's Ministry of Science, via Grant No. 2 PO4G 0842.
Department of Environmental Engineering and Chemistry, Faculty of Civil and Environmental Engineering and Architecture, Rzeszów University of Technology, al. Powstańców Warszawy 6, 35-959, Rzeszow, Poland
Lilianna Bartoszek & Piotr Koszelnik
Correspondence to Piotr Koszelnik.
Bartoszek, L., Koszelnik, P. The qualitative and quantitative analysis of the coupled C, N, P and Si retention in complex of water reservoirs. SpringerPlus 5, 1157 (2016). https://doi.org/10.1186/s40064-016-2836-7
Mass balance
Biogenic elements
5. Statistics
5.01 Calculating the mean
5.02 Review: Measures of center and spread
5.03 Recognize the center of data
Investigation: Finding the best choice of center
Investigation: The problem with average
5.03 Interpreting circle graphs
Investigation: Creating circle graphs
Measures of central tendency attempt to summarize a set of data with a single value that describes the center or middle of the scores.
The three main measures of central tendency are the mean, median, and mode. Deciding which one is best depends on the characteristics of the particular set of data, as we already saw with the mean.
Measures of center
Mean: the numerical average of a data set, i.e. the sum of the scores divided by the number of scores. Appropriate for sets of data where there are no values much higher or lower than those in the rest of the data set.
Median: the middle value of a data set ranked in order. A good choice when data sets have a couple of values much higher or lower than most of the others.
Mode: the data value that occurs most frequently. A good descriptor to use when the set of data has some identical values, when data is non-numeric (categorical), or when data reflects the most popular item.
The median is one way of describing the middle or the center of a data set using a single value. The median is the middle score in a data set.
Which term is in the middle?
Suppose we have five numbers in our data set: $4$, $11$, $15$, $20$ and $24$.
The median would be $15$ because it is the value right in the middle. There are two numbers on either side of it.
$4,\ 11,\ \mathbf{15},\ 20,\ 24$
If we have an even number of terms, we will need to find the average of the middle two terms. Suppose we wanted to find the median of the set $2,3,6,9$; we want the value halfway between $3$ and $6$. The average of $3$ and $6$ is $\frac{3+6}{2}=\frac{9}{2}$, or $4.5$, so the median is $4.5$.
$2,\ 3,\ \mathbf{4.5},\ 6,\ 9$
If we have a larger data set, however, we may not be able to see right away which term is in the middle. We can use the "cross out" method.
The "cross out" method
Once a data set is ordered, we can cross out numbers in pairs (one high number and one low number) until there is only one number left. Let's check out this process using an example. Here is a data set with nine numbers:
Check that the data is sorted in ascending order (i.e. in order from smallest to largest).
Cross out the smallest and the largest number, like so:
Repeat step 2, working from the outside in - taking the smallest number and the largest number each time until there is only one term left. We can see in this example that the median is $7$:
Note that this process will only leave one term if there are an odd number of terms to start with. If there are an even number of terms, this process will leave two terms instead; if you cross them all out, you've gone too far! To find the median of a set with an even number of terms, we can then take the mean of these two remaining middle terms.
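As an aside (not part of the original lesson), the "cross out" method translates directly into a few lines of Python; the function name below is just a placeholder.

```python
def cross_out_median(scores):
    """Find the median by repeatedly crossing out the smallest and largest values."""
    ordered = sorted(scores)            # step 1: put the data in ascending order
    while len(ordered) > 2:             # steps 2-3: remove one low and one high value
        ordered = ordered[1:-1]
    # one value left is the median; two values left means we average them
    return ordered[0] if len(ordered) == 1 else sum(ordered) / 2

print(cross_out_median([4, 11, 15, 20, 24]))  # 15
print(cross_out_median([2, 3, 6, 9]))         # 4.5
```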
Find the median of this set of scores:
$11$, $11$, $13$, $14$, $18$, $22$, $23$, $25$
Given the following set of scores:
$65.2,\ 64.3,\ 71.6,\ 63.2,\ 45.2,\ 62.2,\ 46.8,\ 58.7$
Write the list of scores in ascending order.
Calculate the median.
The mode
The mode is another measure of central tendency - that is, it's a third way of describing a value that represents the center of the data set. The mode describes the most frequently occurring score.
Let's say we ask $10$ people how many pets they have. $2$ people say no pets, $6$ people say one pet and $2$ people say they have two pets. What is the most common number of pets for people to have? In this case, the most common number is one pet, because the largest number of people $\left(\frac{6}{10}\right)$ had one pet. So the mode of this data set is $1$.
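For readers who like to check examples with code, here is a small sketch (not part of the lesson) that counts how often each value appears and returns the most common one; note that Counter simply picks one value if there is a tie.

```python
from collections import Counter

def mode(scores):
    """Return the value that occurs most frequently in the data set."""
    value, count = Counter(scores).most_common(1)[0]
    return value

# the pets example: 2 people with 0 pets, 6 with 1 pet, 2 with 2 pets
print(mode([0, 0, 1, 1, 1, 1, 1, 1, 2, 2]))  # 1
```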
Find the mode of the following scores:
$2,\ 2,\ 6,\ 7,\ 7,\ 7,\ 7,\ 11,\ 11,\ 11,\ 13,\ 13,\ 16,\ 16$
Mode $=$ ?
$8,\ 18,\ 5,\ 2,\ 2,\ 10,\ 8,\ 5,\ 14,\ 14,\ 8,\ 8,\ 10,\ 18,\ 14,\ 5$
The range is a measure of spread in a numerical data set. In other words, it describes whether the scores in a data set are very similar and clustered together, or whether there is a lot of variation in the scores and they are very spread out.
If we looked at the range of ages of students in a $6$th grade class, everyone would likely be between $11$ and $13$, so the range is $2$ ($13-11$). This is quite a small range.
However, if we looked at the ages of people waiting at a bus stop, the youngest person might be a $2$ year old and the oldest person might be a $90$ year old. The range in this set of data is $88$ ($90-2$), which is quite a large range.
To calculate the range
Subtract the least score in the set from the greatest score in the set.
Affecting the range
Remember, the range only changes if the greatest or least score is changed. Otherwise, it will remain the same.
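A short sketch (again, not part of the lesson) makes this point concrete: changing a value that is neither the greatest nor the least leaves the range unchanged.

```python
def value_range(scores):
    """Range = greatest score minus least score."""
    return max(scores) - min(scores)

original = [1, 2, 2, 4, 4, 5, 6, 6, 8, 11]
changed  = [1, 2, 2, 4, 4, 5, 6, 6, 9, 11]   # the 8 replaced by a 9
print(value_range(original), value_range(changed))  # 10 10
```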
Consider the set of data: $1,2,2,4,4,5,6,6,8,11$. If the score of $8$ is changed to a $9$, how would the range be affected?
Think: What was the range when the score was an $8$? What is the range now that the score has been changed to a $9$?
The range when the score was an $8$ was $10$ ($11-1$). The range when the score was changed to a $9$ is also $10$ ($11-1$). Since the score that was changed was not the greatest or the least score, the range did not change.
Find the range of the following set of scores:
$10,\ 7,\ 2,\ 14,\ 13,\ 15,\ 11,\ 4$
The range of a set of scores is $8$, and the greatest score is $19$.
What is the least score in the set?
Tag Archives: Space Physics
Influence of a large-scale field on energy dissipation in magnetohydrodynamic turbulence [CL]
In magnetohydrodynamic (MHD) turbulence, the large-scale magnetic field sets a preferred local direction for the small-scale dynamics, altering the statistics of turbulence from the isotropic case. This happens even in the absence of a total magnetic flux, since MHD turbulence forms randomly oriented large-scale domains of strong magnetic field. It is therefore customary to study small-scale magnetic plasma turbulence by assuming a strong background magnetic field relative to the turbulent fluctuations. This is done, for example, in reduced models of plasmas, such as reduced MHD, reduced-dimension kinetic models, gyrokinetics, etc., which make theoretical calculations easier and numerical computations cheaper. Recently, however, it has become clear that the turbulent energy dissipation is concentrated in the regions of strong magnetic field variations. A significant fraction of the energy dissipation may be localized in very small volumes corresponding to the boundaries between strongly magnetized domains. In these regions the reduced models are not applicable. This has important implications for studies of particle heating and acceleration in magnetic plasma turbulence. The goal of this work is to systematically investigate the relationship between local magnetic field variations and magnetic energy dissipation, and to understand its implications for modeling energy dissipation in realistic turbulent plasmas.
V. Zhdankin, S. Boldyrev and J. Mason
Mon, 13 Mar 17
Comments: 6 pages, 5 figures, to appear in Monthly Notices of the Royal Astronomical Society
Posted in Cross-listed | Tagged Fluid Dynamics, High Energy Astrophysical Phenomena, Plasma Physics, Space Physics
On Kinetic Slow Modes, Fluid Slow Modes, and Pressure-Balanced Structures in the Solar Wind [CL]
Observations in the solar wind suggest that the compressive component of inertial-range solar-wind turbulence is dominated by slow modes. The low collisionality of the solar wind allows for non-thermal features to survive, which suggests the requirement of a kinetic plasma description. The least-damped kinetic slow mode is associated with the ion-acoustic (IA) wave and a non-propagating (NP) mode. We derive analytical expressions for the IA-wave dispersion relation in an anisotropic plasma in the framework of gyrokinetics and then compare them to fully-kinetic numerical calculations, results from two-fluid theory, and MHD. This comparison shows major discrepancies in the predicted wave phase speeds from MHD and kinetic theory at moderate to high $\beta$. MHD and kinetic theory also dictate that all plasma normal modes exhibit a unique signature in terms of their polarization. We quantify the relative amplitude of fluctuations in the three lowest particle velocity moments associated with IA and NP modes in the gyrokinetic limit and compare these predictions with MHD results and in-situ observations of the solar-wind turbulence. The agreement between the observations of the wave polarization and our MHD predictions is better than the kinetic predictions, suggesting that the plasma behaves more like a fluid in the solar wind than expected.
D. Verscharen, C. Chen and R. Wicks
Comments: 8 pages, 5 figures, submitted to ApJ
Posted in Cross-listed | Tagged Plasma Physics, Solar and Stellar Astrophysics, Space Physics
Turbulent kinetic energy in the energy balance of a solar flare [SSA]
The energy released in solar flares derives from a reconfiguration of magnetic fields to a lower energy state, and is manifested in several forms, including bulk kinetic energy of the coronal mass ejection, acceleration of electrons and ions, and enhanced thermal energy that is ultimately radiated away across the electromagnetic spectrum from optical to X-rays. Using an unprecedented set of coordinated observations, from a suite of instruments, we here report on a hitherto largely overlooked energy component — the kinetic energy associated with small-scale turbulent mass motions. We show that the spatial location of, and timing of the peak in, turbulent kinetic energy together provide persuasive evidence that turbulent energy may play a key role in the transfer of energy in solar flares. Although the kinetic energy of turbulent motions accounts, at any given time, for only $\sim (0.5-1)$\% of the energy released, its relatively rapid ($\sim$$1-10$~s) energization and dissipation causes the associated throughput of energy (i.e., power) to rival that of major components of the released energy in solar flares, and thus presumably in other astrophysical acceleration sites.
E. Kontar, J. Perez, L. Harra, et. al.
Comments: 5pages, 4 figures, to be published in Physical Review Letters
Posted in Solar and Stellar Astrophysics | Tagged High Energy Astrophysical Phenomena, Plasma Physics, Solar and Stellar Astrophysics, Space Physics
Ultrarelativistic generalized Lorentzians and the cosmic ray energy flux [HEAP]
We show that the rather tentative application of the ultrarelativistic generalized Lorentzian energy distribution to the spectrum of cosmic ray fluxes may provide evidence for either high TeV chemical potentials generated in the acceleration source region of the observed cosmic rays, or the presence of hypothetical particles of TeV rest mass. Such particles are not known in our accessible Universe at any accessible energies. If true they should have been produced in cosmic ray sources prior to acceleration. Conclusions of this kind depend on the validity of the generalized Lorentzian in application to cosmic rays, a hypothetical statistical mechanical equilibrium distribution occasionally encountered in observations.
R. Treumann and W. Baumjohann
Comments: 5 pages, 1 figure, draft prepared for submission to a meeting on cosmic rays and power law tails
Posted in High Energy Astrophysical Phenomena | Tagged High Energy Astrophysical Phenomena, Space Physics, Statistical Mechanics
Plasma turbulence at ion scales: a comparison between PIC and Eulerian hybrid-kinetic approaches [CL]
Kinetic-range turbulence in magnetized plasmas and, in particular, in the context of solar-wind turbulence has been extensively investigated over the past decades via numerical simulations. Among others, one of the widely adopted reduced plasma models is the so-called hybrid-kinetic model, where the ions are fully kinetic and the electrons are treated as a neutralizing (inertial or massless) fluid. Within the same model, different numerical methods and/or approaches to turbulence development have been employed. In the present work, we present a comparison between two-dimensional hybrid-kinetic simulations of plasma turbulence obtained with two complementary approaches spanning about two decades in wavenumber – from the MHD inertial range to scales well below the ion gyroradius – with a state-of-the-art accuracy. One approach employs hybrid particle-in-cell (HPIC) simulations of freely-decaying Alfv\'enic turbulence, whereas the other consists of Eulerian hybrid Vlasov-Maxwell (HVM) simulations of turbulence continuously driven with partially-compressible large-scale fluctuations. Despite the completely different initialization and injection/drive at large scales, the same properties of turbulent fluctuations at $k_\perp\rho_i\gtrsim1$ are observed. The system indeed self-consistently "reprocesses" the turbulent fluctuations while they are cascading towards smaller and smaller scales, in a way which actually depends on the plasma beta parameter. Small-scale turbulence has been found to be mainly populated by kinetic Alfv\'en wave (KAW) fluctuations for $\beta\geq1$, whereas KAW fluctuations are only sub-dominant for low-$\beta$.
S. Cerri, L. Franci, F. Califano, et al.
Comments: 18 pages, 4 figures, accepted for publication in J. Plasma Phys. (Collection: "The Vlasov equation: from space to laboratory plasma physics")
Radio and the 1999 UK Total Solar Eclipse [EPA]
On the morning of 11 August 1999, a total eclipse of the Sun plunged Cornwall and parts of Devon into darkness. The eclipse was bound to attract a great deal of scientific and media attention. Realizing that the differences in day-time/night-time propagation of the VLF/LF/MF to HF bands would also apply during the darkness of the eclipse, the event offered a rare PR opportunity to promote radio to the general public. At the same time, the specific nature of the disturbance to the upper atmosphere and the effect on radio propagation could be examined in detail using scientific instruments at minimum cost, since most instruments would not have to be moved. This would allow prediction models to be tested in a controlled fashion. Contained within this report are the details and results of the radio and ionospheric experiments conducted by the Rutherford Appleton Laboratory during the 1999 total solar eclipse. Promoting the radio experiments to the general public produced nearly 60 appearances on local and national TV, in newspapers and in periodicals. Close to 1700 people responded to the general public medium wave experiment and 16 million people visited the general eclipse web site (part funded by RA) that included the details of the radio experiments. A large database of systematic observations across VLF to HF was collected from radio amateurs and from the RA Regional Offices, allowing comparisons to be made with ITU estimates. There is a brief look at the scientific results and a forward look at how the analysis of this disturbance might affect the use of ionospheric models for Space Weather tools in the future.
R. Bamford
Tue, 7 Mar 17
Comments: 41 pages, 33 Figures, government funded research final report, unclassified
Posted in Earth and Planetary Astrophysics | Tagged Atmospheric and Oceanic Physics, Earth and Planetary Astrophysics, Plasma Physics, Space Physics
FRiED: A novel three-dimensional model of coronal mass ejections [CL]
We present a novel three-dimensional (3D) model of coronal mass ejections (CMEs) that unifies all key evolutionary aspects of CMEs and encapsulates their 3D magnetic field configuration. This fully analytic model is capable of reproducing the global geometrical shape of a CME with all major deformations taken into account, i.e., deflection, rotation, expansion, "pancaking", front flattening and rotational skew. Encapsulation of 3D magnetic structure allows the model to reproduce in-situ measurements of magnetic field for trajectories of spacecraft-CME encounters of any degree of complexity. As such, the model can be used single-handedly for consistent analysis of both remote and in-situ observations of CMEs at any heliocentric distance. We demonstrate the latter by successfully applying the model for analysis of two CMEs.
A. Isavnin
Posted in Cross-listed | Tagged Solar and Stellar Astrophysics, Space Physics
Magnetic Reconnection in Turbulent Diluted Plasmas [CL]
We study magnetic reconnection events in a turbulent plasma within the two-fluid theory. By identifying the diffusive regions, we measure the reconnection rates as a function of the conductivity and the current sheet thickness. We find that the reconnection rate scales as the inverse square of the current sheet's thickness and is independent of the aspect ratio of the diffusive region, in contrast to other analytical (e.g., Sweet-Parker and Petschek) and numerical models. Furthermore, while the reconnection rates are also proportional to the inverse square of the conductivity, the aspect ratios of the diffusive regions, which exhibit values in the range of $0.1-0.9$, are not correlated with the latter. Our findings suggest a new expression for the magnetic reconnection rate, which, after experimental verification, can provide further understanding of the magnetic reconnection process.
N. Offeddu and M. Mendoza
Comments: 9 Pages, 6 figures
Posted in Cross-listed | Tagged Computational Physics, Plasma Physics, Solar and Stellar Astrophysics, Space Physics
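For orientation, the classical Sweet-Parker estimate can be contrasted with the scalings reported in the abstract above; the first relation is a standard textbook result (with symbols L, v_A, sigma, delta defined in the comments, none taken from the paper itself), while the second simply restates the abstract's stated proportionalities.

```latex
% Classical Sweet--Parker rate (standard textbook result), with Lundquist number
% S = \mu_0 \sigma L v_A for a sheet of length L and Alfven speed v_A:
\begin{equation}
  R_{\mathrm{SP}} \sim S^{-1/2} = \left(\mu_0 \sigma L v_A\right)^{-1/2},
\end{equation}
% whereas the scalings reported in the abstract above correspond to
\begin{equation}
  R \propto \delta^{-2}, \qquad R \propto \sigma^{-2},
\end{equation}
% with \delta the current sheet thickness and \sigma the conductivity.
```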
Predictions of solar coronal mass ejections with heliospheric imagers verified with the Heliophysics System Observatory [SSA]
We present a major step forward towards accurately predicting the arrival of coronal mass ejections (CMEs) at the terrestrial planets, including Earth. For the first time, we are able to assess a CME prediction model using data over almost a full solar cycle of observations with the Heliophysics System Observatory. We validate modeling results on 1337 CMEs observed with the Solar Terrestrial Relations Observatory (STEREO) heliospheric imagers (HI) against data from 8 years of observations by 5 spacecraft in situ in the solar wind, thereby gathering over 600 independent in situ CME detections. We use the self-similar expansion model for CME fronts, assuming a 60 degree longitudinal width, constant speed and constant propagation direction. Under these assumptions we find that 23%-35% of all CMEs that were predicted to hit a certain spacecraft lead to clear in situ signatures, so that for 1 correct prediction, 2 to 3 false alarms would have been issued. In addition, we find that the prediction accuracy of HI does not degrade with longitudinal separation from Earth. Arrival times are predicted on average to within 2.6 +/- 16.6 hours of the in situ arrival time, similar to analytical and numerical modeling. We also discuss various factors that may improve the accuracy of space weather forecasting using wide-angle heliospheric imager observations. These results form a first-order baseline for the prediction accuracy that is possible with HI and other methods used for data from an operational space weather mission at the Sun-Earth L5 point.
C. Mostl, A. Isavnin, P. Boakes, et al.
Fri, 3 Mar 17
Comments: 22 pages, 7 figures, 1 table, submitted to the AGU journal Space Weather on 2 March 2017
Posted in Solar and Stellar Astrophysics | Tagged Solar and Stellar Astrophysics, Space Physics
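A minimal sketch of the constant-speed, fixed-direction arrival-time estimate that underlies predictions of the kind described above; this is an assumed illustration only, as the authors' self-similar expansion front with 60 degree width involves an additional geometric correction not reproduced here, and the function name and example numbers are placeholders.

```python
# Minimal constant-speed CME arrival-time estimate (illustrative sketch only).
AU_KM = 1.496e8  # 1 astronomical unit in km

def arrival_time_hours(r_start_au, r_target_au, speed_km_s):
    """Hours for a front moving radially at constant speed to cross
    from r_start_au to r_target_au (no drag, no geometry correction)."""
    distance_km = (r_target_au - r_start_au) * AU_KM
    return distance_km / speed_km_s / 3600.0

# Example: a front first tracked at 0.1 AU travelling at 600 km/s to Earth (1 AU).
print(f"{arrival_time_hours(0.1, 1.0, 600.0):.1f} h")   # ~62 h
```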
Electron dynamics surrounding the X-line in asymmetric magnetic reconnection [CL]
Electron dynamics surrounding the X-line in magnetopause-type asymmetric reconnection is investigated using a two-dimensional particle-in-cell simulation. We study electron properties of three characteristic regions in the vicinity of the X-line. The fluid properties, velocity distribution functions (VDFs), and orbits are studied and cross-compared. On the low-$\beta$ side of the X-line, the normal electric field enhances the electron meandering motion from the high-$\beta$ side. This motion leads to a crescent-shaped component in the electron VDF, in agreement with recent studies. On the high-$\beta$ side of the X-line, the magnetic field line is so stretched in the third dimension that its curvature radius is comparable to the typical electron Larmor radius. The electron motion becomes nonadiabatic, and therefore electron idealness is no longer expected to hold. Around the middle of the outflow regions, the electron nonidealness coincides with the region of nonadiabatic motion. Finally, we introduce a finite-time mixing fraction (FTMF) to evaluate electron mixing. The FTMF marks the low-$\beta$ side of the X-line, where nonideal energy dissipation occurs.
S. Zenitani, H. Hasegawa and T. Nagai
Comments: Comments are welcome
Chaos Control with Ion Propulsion [CL]
The escape dynamics around the triangular Lagrangian point L5 in the real Sun-Earth-Moon-Spacecraft system is investigated. The appearance of finite-time chaotic behaviour suggests that widely used methods and concepts of dynamical system theory can be useful in constructing a desired mission design. Existing chaos control methods are modified in such a way that we are able to protect a test particle from escape. We introduce initial condition maps in order to have a suitable numerical method to describe the motion in the high dimensional phase space. Results show that the structure of the initial condition maps can be split into two well-defined domains. One of these two parts has a regular contiguous shape and is responsible for long-time escape; it is a long-lived island. The other one shows a filamentary fractal structure in the initial condition maps. The short-time escape is governed by this object. This study focuses on a low-cost method which successfully transfers a reference trajectory between these two regions using an appropriate continuous control force. A comparison of the Earth-Moon transfer is also presented to show the efficiency of our method.
J. Sliz, T. Kovacs and A. Suli
Thu, 23 Feb 17
Comments: 14 pages, 11 figures, accepted for publication in Astronomische Nachrichten
Posted in Cross-listed | Tagged Chaotic Dynamics, Earth and Planetary Astrophysics, Space Physics
Sheath-Accumulating Propagation of Interplanetary Coronal Mass Ejection [SSA]
Fast interplanetary coronal mass ejections (interplanetary CMEs, or ICMEs) are the drivers of the strongest space weather storms, such as solar energetic particle events and geomagnetic storms. The connection between the space-weather-impacting solar wind disturbances associated with fast ICMEs at Earth and the characteristics of the causative energetic CMEs observed near the Sun is a key question in the study of space weather storms as well as in the development of practical space weather prediction. Such shock-driving fast ICMEs usually expand at supersonic speed during the propagation, resulting in the continuous accumulation of shocked sheath plasma ahead. In this paper, we propose the "sheath-accumulating propagation" (SAP) model, which describes the coevolution of the interplanetary sheath and the decelerating ICME ejecta by taking into account the process of upstream solar wind plasma accumulation within the sheath region. Based on the SAP model, we discuss (1) ICME deceleration characteristics, (2) the fundamental condition for a fast ICME at Earth, (3) the thickness of the interplanetary sheath, (4) arrival time prediction and (5) the super-intense geomagnetic storms associated with huge solar flares. We quantitatively show that not only the speed but also the mass of the CME is crucial in discussing the above five points. The similarities and differences among the SAP model, the drag-based model and the `snow-plough' model proposed by \citet{tappin2006} are also discussed.
T. Takahashi and K. Shibata
Comments: 20 pages, 5 figures, accepted for publication in ApJL
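The snow-plough idea cited in the entry above (momentum conservation as the ejecta sweeps up upstream solar wind) can be sketched as below. This is an illustration of the cited snow-plough concept with assumed parameters (rho_sw, v_sw, A, initial mass and speed are placeholders), not the SAP model itself, which additionally tracks the sheath evolution.

```python
# Snow-plough style deceleration: the ejecta plus swept-up sheath conserve momentum
# as upstream solar wind mass is accumulated.  Illustrative parameters only.
import numpy as np

AU = 1.496e11          # m
rho_sw = 5e-21         # upstream solar wind mass density, kg/m^3 (assumed)
v_sw = 4.0e5           # upstream solar wind speed, m/s (assumed)
A = 1.0e22             # effective cross-section swept by the front, m^2 (assumed)

m = 1.0e13             # initial ejecta mass, kg (assumed)
v = 1.5e6              # initial ejecta speed, m/s (assumed)
r = 0.1 * AU
dt = 600.0             # time step, s

while r < 1.0 * AU:
    dm = rho_sw * A * (v - v_sw) * dt     # mass swept up relative to the wind
    v = (m * v + dm * v_sw) / (m + dm)    # momentum conservation
    m += dm
    r += v * dt

print(f"speed at 1 AU: {v/1e3:.0f} km/s, accumulated mass: {m:.2e} kg")
```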
The Twist of the Draped Interstellar Magnetic Field Ahead of the Heliopause: A Magnetic Reconnection Driven Rotational Discontinuity [CL]
Based on the difference between the orientations of the interstellar $B_{ISM}$ and the solar magnetic fields, there was an expectation that the magnetic field direction would rotate dramatically across the heliopause (HP). However, the Voyager 1 spacecraft measured very little rotation across the HP. Previously we showed that $B_{ISM}$ twists as it approaches the HP and acquires a strong T component (East-West). Here we establish that reconnection in the eastern flank of the heliosphere is responsible for the twist. On the eastern flank the solar magnetic field has twisted into the positive N direction and reconnects with the southward-pointing component of $B_{ISM}$. Reconnection drives a rotational discontinuity (RD) that twists $B_{ISM}$ into the -T direction and propagates upstream in the interstellar medium towards the nose. The consequence is that the N component of $B_{ISM}$ is reduced in a band of finite width upstream of the HP. Voyager 1 currently measures angles ($\delta=\sin^{-1}(B_{N}/B)$) close to solar values. We present MHD simulations to support this scenario, suppressing reconnection in the nose region while allowing it in the flanks, consistent with recent ideas about reconnection suppression by diamagnetic drifts. The jump in plasma $\beta$ (the ratio of plasma to magnetic pressure) across the nose of the HP is much greater than in the flanks because the heliosheath $\beta$ is greater in the nose than in the flanks. Large-scale reconnection is therefore suppressed in the nose but not at the flanks. Simulation data suggest that $B_{ISM}$ will return to its pristine value $10-15~AU$ past the HP.
M. Opher, J. Drake, M. Swisdak, et al.
Wed, 22 Feb 17
How Anomalous Resistivity Accelerates Magnetic Reconnection [CL]
Whether turbulence-induced anomalous resistivity (AR) can facilitate fast magnetic reconnection in collisionless plasma has been a subject of active debate for decades. A particularly difficult problem in experimental and numerical simulation studies of this question is how to distinguish the effects of AR from those originating from the Hall effect and other non-turbulent processes in the generalized Ohm's law. In this paper, using particle-in-cell simulations, we present a case study of how AR produced by the Buneman instability accelerates magnetic reconnection. We first show that in a thin current layer, the AR produced by the Buneman instability spontaneously breaks the magnetic field lines and causes impulsive, fast, non-Hall magnetic line annihilation on electron scales, with a rate reaching 0.6~$V_A$. However, the electron-scale magnetic line annihilation is not a necessary condition for the dissipation of magnetic energy, but rather a result of the inhomogeneity of the AR. On the other hand, the inhomogeneous drag arising from a Buneman instability driven by the intense electron beams at the x-line in 3D magnetic reconnection can drive electron-scale magnetic line annihilation in the electron diffusion region. The electron-scale annihilations play an essential role in accelerating the magnetic reconnection, with a rate two times faster than in the non-turbulent, Hall-dominated 2D magnetic reconnection. The reconnection rate is enhanced around the x-line, and the coupling between the AR carried by the reconnection outflow and the Hall effect leads to the breaking of the symmetric structure of the ion diffusion region and an enhancement of the outward Poynting flux.
H. Che
Comments: submitted to Physics of Plasmas
Posted in Cross-listed | Tagged Earth and Planetary Astrophysics, Plasma Physics, Solar and Stellar Astrophysics, Space Physics
A Maximum Entropy Principle for inferring the Distribution of 3D Plasmoids [HEAP]
The Principle of Maximum Entropy, a powerful and general method for inferring the distribution function given a set of constraints, is applied to deduce the overall distribution of plasmoids (flux ropes/tubes). The analysis is undertaken for the general 3D case, with mass, total flux and (3D) velocity serving as the variables of interest, on account of their physical and observational relevance. The distribution functions for the mass, width, total flux and helicity exhibit a power-law behavior with exponents of $-4/3$, $-2$, $-3$ and $-2$ respectively for small values, whilst all of them display an exponential falloff for large values. In contrast, the velocity distribution, as a function of $v = |{\bf v}|$, is shown to be flat for $v \rightarrow 0$, and becomes a power law with an exponent of $-7/3$ for $v \rightarrow \infty$. Most of these results exhibit a high degree of universality, as they are nearly independent of the free parameters. A preliminary comparison of our results with the observational evidence is presented, and some of the ensuing space and astrophysical implications are discussed.
M. Lingam, L. Comisso and A. Bhattacharjee
Posted in High Energy Astrophysical Phenomena | Tagged High Energy Astrophysical Phenomena, Plasma Physics, Solar and Stellar Astrophysics, Space Physics, Statistical Mechanics
Experimental overview on Future Solar and Heliospheric research [CL]
Solar and heliospheric cosmic rays provide a unique perspective in cosmic ray research: we can observe not only the particles, but also the properties of the plasmas in which they are accelerated and propagate, using in situ and high-resolution remote sensing instruments. Heliospheric cosmic ray observations typically require space missions, which face stern competition against planetary and astrophysics missions, and it can take decades from the initial concept proposal until actual observations can commence. Therefore it is important to have continuity in the cosmic ray mission timeline. In this overview, we review the current status and the future outlook of experimental solar and heliospheric research. We find that the current status of the available cosmic ray observations is good, but that many of the spacecraft are near the end of their feasible mission life. We describe the three missions currently being prepared for launch, and discuss the future outlook of solar and heliospheric cosmic ray missions.
T. Laitinen
Mon, 20 Feb 17
Comments: XXV ECRS 2016 Proceedings – eConf C16-09-04.3
Posted in Cross-listed | Tagged High Energy Astrophysical Phenomena, Solar and Stellar Astrophysics, Space Physics
Study of Parallel Shock Acceleration, the Bend-over Energy of Spectrum of Charged Energetic Particles [HEAP]
Shock acceleration is considered one of the most important acceleration mechanisms for astrophysical energetic particles. In this work, we accurately calculate the trajectories of a large number of test charged particles in a parallel shock with magnetic turbulence. We investigate the particles' acceleration mechanisms by calculating the evolution of their energy and flux with time. From the simulations we obtain double power-law energy spectra with a bend-over energy that increases as a function of time. Using the mean acceleration time and the average momentum change during each cycle of particles crossing the shock from the diffusive shock acceleration model, a differential equation in time for the maximum shock-accelerated energy $E_{acc}$ can be obtained approximately, following Drury. We identify the bend-over energy with $E_{acc}$. It is found that the model for the bend-over energy generally agrees with the simulations, and that the bend-over energy model with our non-linear diffusion theory, NLGCE-F, performs better than that with the classic quasi-linear theory, QLT.
L. Zhang, G. Qin, P. Sun, et al.
Comments: 20 pages, 5 figures and 1 table
Posted in High Energy Astrophysical Phenomena | Tagged High Energy Astrophysical Phenomena, Solar and Stellar Astrophysics, Space Physics
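For reference, the standard diffusive-shock-acceleration timescale referred to above (following Drury) is reproduced below in generic notation; equating the elapsed time with this acceleration time gives an estimate of the maximum (bend-over) energy. This is the textbook form and may differ in detail from the authors' NLGCE-F-based treatment.

```latex
% Standard DSA acceleration timescale for a parallel shock with upstream/downstream
% flow speeds u_1, u_2 and energy-dependent diffusion coefficients \kappa_1, \kappa_2:
\begin{equation}
  t_{\rm acc}(E) = \frac{3}{u_1 - u_2}\left(\frac{\kappa_1(E)}{u_1} + \frac{\kappa_2(E)}{u_2}\right),
  \qquad
  \frac{\mathrm{d}E_{\rm acc}}{\mathrm{d}t} \simeq \frac{E_{\rm acc}}{t_{\rm acc}(E_{\rm acc})},
\end{equation}
% so the bend-over energy at time t follows approximately from t \simeq t_{\rm acc}(E_{\rm acc})
% when the diffusion coefficients increase with energy.
```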
Centennial evolution of monthly solar wind speeds: Fastest monthly solar wind speeds from long-duration coronal holes [CL]
High-speed solar wind streams (HSSs) are very efficient drivers of geomagnetic activity at high latitudes. In this paper we use a recently developed $\Delta{H}$ parameter of geomagnetic activity, calculated from the night-side hourly magnetic field measurements of the Sodankyl\"a observatory, as a proxy for solar wind (SW) speed at monthly time resolution in 1914-2014 (solar cycles 15-24). The seasonal variation in the relation between monthly $\Delta{H}$ and solar wind speed is taken into account by calculating separate regressions between $\Delta{H}$ and SW speed for each month. Thereby, we obtain a homogeneous series of proxy values for monthly solar wind speed for the last 100 years. We find that the strongest HSS-active months of each solar cycle occur in the declining phase, in the years 1919, 1930, 1941, 1952, 1959, 1973, 1982, 1994 and 2003. Practically all these years are the same as, or adjacent to, the years of maximum annual solar wind speed. This implies that the most persistent coronal holes, lasting for several solar rotations and leading to the highest annual SW speeds, are also the sources of the highest monthly SW speeds. Accordingly, during the last 100 years, there were no coronal holes of short duration (of about one solar rotation) that produced faster monthly (or solar rotation) averaged solar wind than the longest-lived coronal holes of each solar cycle.
R. Lukianova, L. Holappa and K. Mursula
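A minimal sketch of the month-by-month calibration described in the entry above: separate linear regressions between the geomagnetic proxy and solar wind speed for each calendar month, then applied to the full proxy record. The function names and data arrays (months, delta_h, v_sw) are assumed placeholders, not the authors' actual pipeline.

```python
# Per-calendar-month linear regression of solar wind speed on the Delta-H proxy,
# then applied to the full Delta-H record.  Data arrays are placeholders.
import numpy as np

def calibrate_monthly(months, delta_h, v_sw):
    """Fit v_sw = a_m * delta_h + b_m separately for each calendar month m = 1..12
    over the overlap period; returns the twelve (a_m, b_m) pairs."""
    coeffs = {}
    for m in range(1, 13):
        sel = months == m
        a, b = np.polyfit(delta_h[sel], v_sw[sel], 1)
        coeffs[m] = (a, b)
    return coeffs

def proxy_speed(months, delta_h, coeffs):
    """Convert the full Delta-H series to proxy solar wind speeds."""
    return np.array([coeffs[m][0] * x + coeffs[m][1]
                     for m, x in zip(months, delta_h)])
```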
Electric current filamentation induced by 3D plasma flows in the solar corona [SSA]
Many magnetic structures in the solar atmosphere evolve rather slowly, so that they can be assumed to be (quasi-)static or (quasi-)stationary and represented via magneto-hydrostatic (MHS) or stationary magneto-hydrodynamic (MHD) equilibria, respectively. While exact 3D solutions would be desirable, they are extremely difficult to find in stationary MHD. We construct solutions with magnetic and flow vector fields that have three components depending on all three coordinates. We show that the non-canonical transformation method produces quasi-3D solutions of stationary MHD by mapping 2D or 2.5D MHS equilibria to corresponding stationary MHD states, i.e., states that display the same field line structure as the original MHS equilibria. These stationary MHD states exist on magnetic flux surfaces of the original 2D MHS states. Although the flux surfaces and therefore also the equilibria have a 2D character, these stationary MHD states depend on all three coordinates and display highly complex currents. The existence of geometrically complex 3D currents within symmetric field-line structures provides the basis for efficient dissipation of the magnetic energy in the solar corona by Ohmic heating. We also discuss the possibility of maintaining an important subset of non-linear MHS states, namely force-free fields, by stationary flows. We find that force-free fields with non-linear flows only arise under severe restrictions of the field-line geometry and of the magnetic flux density distribution.
D. Nickeler, T. Wiegelmann, M. Karlicky, et al.
Comments: 14 pages, 5 figures, accepted to ApJ
Posted in Solar and Stellar Astrophysics | Tagged Plasma Physics, Solar and Stellar Astrophysics, Space Physics
Is Proxima Centauri b habitable? — A study of atmospheric loss [EPA]
We address the important question of whether the newly discovered exoplanet, Proxima Centauri b (PCb), is capable of retaining an atmosphere over long periods of time. This is done by adapting a sophisticated multi-species MHD model originally developed for Venus and Mars, and computing the ion escape losses from PCb. The results suggest that the ion escape rates are about two orders of magnitude higher than those of the terrestrial planets of our Solar system if PCb is unmagnetized. In contrast, if the planet does have an intrinsic dipole magnetic field, the rates are lowered for certain values of the stellar wind dynamic pressure, but they are still higher than the observed values for our Solar system's terrestrial planets.
C. Dong, M. Lingam, Y. Ma, et al.
Comments: 7 pages, 2 figures, submitted to ApJL
Posted in Earth and Planetary Astrophysics | Tagged Earth and Planetary Astrophysics, Solar and Stellar Astrophysics, Space Physics
Propagation Characteristics of Two Coronal Mass Ejections From the Sun Far into Interplanetary Space [SSA]
The propagation of coronal mass ejections (CMEs) from the Sun far into interplanetary space is not well understood due to limited observations. In this study we examine the propagation characteristics of two geo-effective CMEs, which occurred on 2005 May 6 and 13, respectively. Significant heliospheric consequences associated with the two CMEs are observed, including interplanetary CMEs (ICMEs) at the Earth and Ulysses, interplanetary shocks, a long-duration type II radio burst, and intense geomagnetic storms. We use coronagraph observations from SOHO/LASCO, the frequency drift of the long-duration type II burst, in situ measurements at the Earth and Ulysses, and magnetohydrodynamic (MHD) propagation of the observed solar wind disturbances at 1 AU to track the CMEs from the Sun far into interplanetary space. We find that both CMEs underwent a major deceleration within 1 AU and thereafter a gradual deceleration when they propagated from the Earth to deep interplanetary space, due to interactions with the ambient solar wind. The results also reveal that the two CMEs interacted with each other in distant interplanetary space even though their launch times at the Sun were well separated. The intense geomagnetic storm in each case was caused by the southward magnetic fields ahead of the CME, stressing the critical role of the sheath region in geomagnetic storm generation, although in the first case a corotating interaction region was also involved.
X. Zhao, Y. Liu, H. Hu, et al.
Comments: accepted for publication in ApJ
A VLA Search for Radio Signals from M31 and M33 [EPA]
Observing nearby galaxies would facilitate the search for artificial radio signals by sampling many billions of stars simultaneously, but few efforts have been made to exploit this opportunity. An added attraction is that the Milky Way is the second-largest member of the Local Group, so our galaxy might be a probable target for hypothetical broadcasters in nearby galaxies. We present the first relatively high spectral resolution (<1 kHz) 21 cm band search for intelligent radio signals of complete galaxies in the Local Group with the Jansky VLA, observing the galaxies M31 (Andromeda) and M33 (Triangulum) – the first and third largest members of the group respectively – sampling more stars than any prior search of this kind. We used 122 Hz channels over a 1 MHz spectral window in the target galaxy velocity frame of reference, and 15 Hz channels over a 125 kHz window in our local standard of rest. No narrowband signals were detected above a signal-to-noise ratio of 7, suggesting the absence of continuous narrowband flux greater than approximately 0.24 Jy and 1.33 Jy in the respective spectral windows illuminating our part of the Milky Way during our observations in December 2014 and January 2015. This is also the first study in which the upgraded VLA has been used for SETI.
R. Gray and K. Mooley
Comments: 14 pages, 9 figures, 5 tables. Accepted for publication in the Astronomical Journal
Posted in Earth and Planetary Astrophysics | Tagged Earth and Planetary Astrophysics, Instrumentation and Methods for Astrophysics, Space Physics
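The channel counts implied by the resolutions and bandwidths quoted above can be checked with simple arithmetic. This is an illustrative check only; the exact correlator setup is not stated in the abstract, and the power-of-two channelization mentioned in the comment is an assumption.

```python
# Rough channel-count check for the two spectral setups quoted above.
n_galaxy_frame = 1.0e6 / 122.0    # 1 MHz window at 122 Hz channels  -> ~8197
n_lsr_frame    = 125.0e3 / 15.0   # 125 kHz window at 15 Hz channels -> ~8333
print(round(n_galaxy_frame), round(n_lsr_frame))
# Both counts are close to 8192 = 2**13, consistent with (but not proof of)
# a power-of-two correlator channelization.
```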
On the origin of the crescent-shaped distributions observed by MMS at the magnetopause [CL]
MMS observations recently confirmed that crescent-shaped electron velocity distributions in the plane perpendicular to the magnetic field occur in the electron diffusion region near reconnection sites at Earth's magnetopause. In this paper, we re-examine the origin of the crescent-shaped distributions in the light of our new finding that ions and electrons are drifting in opposite directions when displayed in magnetopause boundary-normal coordinates. Therefore, ExB drifts cannot cause the crescent shapes. We performed a high-resolution multi-scale simulation capturing sub-electron skin depth scales. The results suggest that the crescent-shaped distributions are caused by meandering orbits without necessarily requiring any additional processes found at the magnetopause such as the highly asymmetric magnetopause ambipolar electric field. We use an adiabatic Hamiltonian model of particle motion to confirm that conservation of canonical momentum in the presence of magnetic field gradients causes the formation of crescent shapes without invoking asymmetries or the presence of an ExB drift. An important consequence of this finding is that we expect crescent-shaped distributions also to be observed in the magnetotail, a prediction that MMS will soon be able to test.
G. Lapenta, J. Berchem, M. Zhou, et al.
Comments: to appear in J. Geophys. Res.
Posted in Cross-listed | Tagged Earth and Planetary Astrophysics, Plasma Physics, Space Physics
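The canonical-momentum argument invoked in the entry above can be stated compactly for a one-dimensional field reversal; the notation below is generic (a standard construction, not the paper's specific Hamiltonian model).

```latex
% For a field B = B_z(x)\,\hat{z} with vector potential A_y(x) (dA_y/dx = B_z),
% the y-canonical momentum of an electron (charge -e) is conserved along its orbit:
\begin{equation}
  p_y = m_e v_y - e A_y(x) = \text{const},
\end{equation}
% so at a given position x only velocities v_y = [p_y + e A_y(x)]/m_e reachable by
% meandering orbits across the field reversal are populated; the resulting boundary
% in (v_x, v_y) space carves out the crescent-shaped part of the distribution.
```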
Evolving waves and turbulence in the outer corona and inner heliosphere: the accelerating expanding box [SSA]
Alfv\'enic fluctuations in the solar wind display many properties reflecting an ongoing nonlinear cascade, e.g. a well-defined spectrum in frequency, together with some characteristics more commonly associated with the linear propagation of waves from the Sun, such as the variation of fluctuation amplitude with distance, dominated by solar wind expansion effects. Therefore both nonlinearities and expansion must be included simultaneously in any successful model of solar wind turbulence evolution. Because of the disparate spatial scales involved, direct numerical simulations of turbulence in the solar wind represent an arduous task, especially if one wants to go beyond the incompressible approximation. Indeed, most simulations neglect solar wind expansion effects entirely. Here we develop a numerical model to simulate turbulent fluctuations from the outer corona to 1 AU and beyond, including the sub-Alfv\'enic corona. The accelerating expanding box (AEB) extends the validity of previous expanding box models by taking into account both the acceleration of the solar wind and the inhomogeneity of background density and magnetic field. Our method incorporates a background accelerating wind within a magnetic field that naturally follows the Parker spiral evolution using a two-scale analysis in which the macroscopic spatial effect coupling fluctuations with background gradients becomes a time-dependent coupling term in a homogeneous box. In this paper we describe the AEB model in detail and discuss its main properties, illustrating its validity by studying Alfv\'en wave propagation across the Alfv\'en critical point.
A. Tenerani and M. Velli
Comments: 19 pages, 6 Figures. Submitted to the ApJ
Generalized phase mixing: Turbulence-like behaviour from unidirectionally propagating MHD waves [SSA]
We present the results of three-dimensional (3D) ideal magnetohydrodynamics (MHD) simulations on the dynamics of a perpendicularly inhomogeneous plasma disturbed by propagating Alfv\'enic waves. Simpler versions of this scenario have been extensively studied as the phenomenon of phase mixing. We show that, by generalizing the textbook version of phase mixing, interesting phenomena are obtained, such as turbulence-like behavior and complex current-sheet structure, a novelty in longitudinally homogeneous plasma excited by unidirectionally propagating waves. This constitutes an important finding for turbulence-related phenomena in astrophysics in general, relaxing the conditions that have to be fulfilled in order to generate turbulent behavior.
N. Magyar, T. Doorsselaere and M. Goossens
Solar Energetic Particle Acceleration by a Shock Wave Accompanying a Coronal Mass Ejection in the Solar Atmosphere [HEAP]
Solar energetic particle acceleration by a shock wave accompanying a coronal mass ejection (CME) is studied. The description of the accelerated particle spectrum evolution is based on the numerical solution of the diffusive transport equation with a set of realistic parameters. The relation between the CME and shock speeds, which depends on the initial CME radius, is determined. Depending on the initial CME radius, its speed, and the magnetic energy of the scattering Alfven waves, the accelerated particle spectrum is established within 10-60 minutes from the beginning of the CME motion. The maximum particle energies reach 0.1-10 GeV. The CME radii of 3-5 $R_\odot$ and the shock radii of 5-10 $R_\odot$ agree with observations. The calculated particle spectra agree with the observed ones in events registered by ground-based detectors if the turbulence spectrum in the solar corona differs significantly from the Kolmogorov one.
A. Petukhova, I. Petukhov, S. Petukhov, et al.
Comments: 11 pages, 14 figures, published in ApJ
Preferential Heating and Acceleration of Heavy Ions in Impulsive Solar Flares [CL]
We simulate decaying turbulence in a homogeneous pair plasma using a three-dimensional electromagnetic particle-in-cell (PIC) method. A uniform background magnetic field permeates the plasma such that the magnetic pressure is three times larger than the thermal pressure, and the turbulence is generated by counter-propagating shear Alfv\'en waves. The energy predominantly cascades transverse to the background magnetic field, rendering the turbulence anisotropic at smaller scales. We simultaneously evolve several ion species of varying charge-to-mass ratios in our simulation and show that the particles of smaller charge-to-mass ratio are heated and accelerated to non-thermal energies at a faster rate, in accordance with the enhancement of heavy ions and the non-thermal tails in their energy spectra observed in impulsive solar flares. We further show that the heavy ions are energized mostly in the direction perpendicular to the background magnetic field, with a rate consistent with our analytical estimate of the heating rate due to cyclotron resonance with the Alfv\'en waves, a large fraction of which is due to obliquely propagating waves.
R. Kumar, D. Eichler, M. Gaspari, et al.
Wed, 8 Feb 17
Posted in Cross-listed | Tagged High Energy Astrophysical Phenomena, Plasma Physics, Solar and Stellar Astrophysics, Space Physics
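The cyclotron-resonance estimate mentioned in the entry above rests on the standard resonance condition, reproduced here in generic notation (not the paper's specific derivation):

```latex
% Resonance of an ion of species s (charge q_s, mass m_s) with a wave of
% frequency \omega and parallel wavenumber k_\parallel:
\begin{equation}
  \omega - k_\parallel v_\parallel = n\,\Omega_s, \qquad
  \Omega_s = \frac{q_s B_0}{m_s}, \qquad n = \pm 1, \pm 2, \ldots
\end{equation}
% Ions with smaller charge-to-mass ratio have lower gyrofrequencies \Omega_s and
% therefore resonate with lower-frequency, typically higher-amplitude parts of the
% Alfv\'enic cascade, one common way to see why their perpendicular heating is faster.
```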
The formation of magnetic depletions and flux annihilation due to reconnection in the heliosheath [CL]
The misalignment of the solar rotation axis and the magnetic axis of the Sun produces a periodic reversal of the Parker spiral magnetic field and the sectored solar wind. The compression of the sectors is expected to lead to reconnection in the heliosheath (HS). We present particle-in-cell simulations of the sectored HS that reflect the plasma environment along the Voyager 1 and 2 trajectories, specifically including unequal positive and negative azimuthal magnetic flux as seen in the Voyager data \citep{Burlaga03}. Reconnection proceeds on individual current sheets until islands on adjacent current layers merge. At late times, bands of the dominant flux survive, separated by bands of deep magnetic field depletion. The ambient plasma pressure supports the strong magnetic pressure variation, so that pressure is anti-correlated with magnetic field strength. There is little variation in the magnetic field direction across the boundaries of the magnetic depressions. At irregular intervals within the magnetic depressions are long-lived pairs of magnetic islands where the magnetic field direction reverses, so that spacecraft data would reveal sharp magnetic field depressions with only occasional crossings exhibiting jumps in magnetic field direction. This is typical of the magnetic field data from the Voyager spacecraft \citep{Burlaga11,Burlaga16}. Voyager 2 data reveal that fluctuations in the density and magnetic field strength are anti-correlated in the sector zone, as expected from reconnection, but not in unipolar regions. The consequence of the annihilation of subdominant flux is a sharp reduction in the "number of sectors" and a loss of magnetic flux, as documented from the Voyager 1 magnetic field and flow data \citep{Richardson13}.
J. Drake, M. Swisdak, M. Opher, et al.
Tue, 7 Feb 17
Particle acceleration model for the broadband baseline spectrum of the Crab nebula [HEAP]
We develop a simple one-zone model of the steady-state Crab nebula spectrum encompassing both the radio/soft $X$-ray and the GeV/multi-TeV observations. We determine analytically the photon differential energy spectrum as originated by an electron distribution evolved from a log-parabola injection spectrum: we find an impressive agreement with the synchrotron region observations whereas synchrotron self-Compton accommodates the previously unsolved origin of the broad $200$ GeV peak that matches the Fermi/LAT data beyond $1$ GeV with the MAGIC data. We determine the parameters of the log-parabola electron distribution, ruling out a simple power-law. The scale of the acceleration region is found to be $ \sim 3.8 \times 10^{-4}$ pc. The resulting photon differential spectrum provides a natural interpretation of the deviation from power-law customarily fit with empirical broken power-laws. Our model can be applied to the radio-to-multi-TeV spectrum of a variety of astrophysical sources of relativistic flows as well as to fast interplanetary shocks.
F. Fraschetti and M. Pohl
Mon, 6 Feb 17
Comments: 8 pages, 7 figures. Submitted. Comments welcome
Posted in High Energy Astrophysical Phenomena | Tagged High Energy Astrophysical Phenomena, Plasma Physics, Space Physics
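For reference, the log-parabola form referred to in the entry above is commonly written as follows (standard definition; the paper's fitted parameters are not reproduced here):

```latex
% Log-parabola particle injection spectrum with normalization N_0, reference
% energy E_0, spectral index s and curvature parameter r:
\begin{equation}
  N(E) = N_0 \left(\frac{E}{E_0}\right)^{-\left[s + r\,\log_{10}(E/E_0)\right]},
\end{equation}
% which reduces to a pure power law for r = 0 and steepens smoothly with energy
% for r > 0, avoiding the sharp breaks of empirical broken power-law fits.
```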
How Electron Two-Stream Instability Drives Cyclic Langmuir Collapse and Continuous Coherent Emission [CL]
Continuous plasma coherent emission is maintained by repetitive Langmuir collapse driven by the nonlinear evolution of a strong electron two-stream instability. The Langmuir waves are modulated by solitary waves in the linear stage, and by electrostatic whistler waves in the nonlinear stage. Modulational instability leads to Langmuir collapse and electron heating that fills in cavitons. The high pressure is released via excitation of a short wavelength ion acoustic mode that is damped by electrons and that re-excites small-scale Langmuir waves—this process closes a feedback loop that maintains the continuous coherent emission.
H. Che, M. Goldstein, P. Diamond, et al.
Comments: 1/30/2017, published in Proceedings of the National Academy of Sciences of the United States of America
A propagation tool to connect remote-sensing observations with in-situ measurements of heliospheric structures [CL]
The remoteness of the Sun and the harsh conditions prevailing in the solar corona have so far limited the observational data used in the study of solar physics to remote-sensing observations taken either from the ground or from space. In contrast, the `solar wind laboratory' is directly measured in situ by a fleet of spacecraft measuring the properties of the plasma and magnetic fields at specific points in space. Since 2007, the solar-terrestrial relations observatory (STEREO) has been providing images of the solar wind that flows between the solar corona and spacecraft making in-situ measurements. This has allowed scientists to directly connect processes imaged near the Sun with the subsequent effects measured in the solar wind. This new capability prompted the development of a series of tools and techniques to track heliospheric structures through space. This article presents one of these tools, a web-based interface called the 'Propagation Tool' that offers an integrated research environment to study the evolution of coronal and solar wind structures, such as Coronal Mass Ejections (CMEs), Corotating Interaction Regions (CIRs) and Solar Energetic Particles (SEPs). These structures can be propagated from the Sun outwards to or alternatively inwards from planets and spacecraft situated in the inner and outer heliosphere. In this paper, we present the global architecture of the tool, discuss some of the assumptions made to simulate the evolution of the structures and show how the tool connects to different databases.
A. Rouillard, B. Lavraud, V. Genot, et al.
Comments: 22 pages, 10 figures, submitted to Planetary and Space Science
A deterministic model for forecasting long-term solar activity [SSA]
A phenomenological model is presented for the quantitative description of the evolution of solar cycles in terms of the number of M-class flares. The determining factor of the model is based on the relative ecliptic longitude of the planets Jupiter and Saturn. Using as input the temporal distribution of flares during cycle 21, results in notable agreement with the observations are obtained for cycles 22-24 and predictions are provided for the evolution of solar activity in the next years.
E. Petrakou
Posted in Solar and Stellar Astrophysics | Tagged Earth and Planetary Astrophysics, Solar and Stellar Astrophysics, Space Physics
The optimisation of low-acceleration interstellar relativistic rocket trajectories using genetic algorithms [CL]
A vast wealth of literature exists on the topic of rocket trajectory optimisation, particularly in the area of interplanetary trajectories due to its relevance today. Studies on optimising interstellar and intergalactic trajectories are usually performed in flat spacetime using an analytical approach, with very little focus on optimising interstellar trajectories in a general relativistic framework. This paper examines the use of low-acceleration rockets to reach galactic destinations in the least possible time, with a genetic algorithm being employed for the optimisation process. The fuel required for each journey was calculated for various types of propulsion systems to determine the viability of low-acceleration rockets to colonise the Milky Way. The results showed that to limit the amount of fuel carried on board, an antimatter propulsion system would likely be the minimum technological requirement to reach star systems tens of thousands of light years away. However, using a low-acceleration rocket would require several hundreds of thousands of years to reach these star systems, with minimal time dilation effects since maximum velocities only reached about 0.2c. Such transit times are clearly impractical, and thus, any kind of colonisation using low acceleration rockets would be difficult. High accelerations, on the order of 1g, are likely required to complete interstellar journeys within a reasonable time frame, though they may require prohibitively large amounts of fuel. So for now, it appears that humanity's ultimate goal of a galactic empire may only be possible at significantly higher accelerations, though the propulsion technology requirement for a journey that uses realistic amounts of fuel remains to be determined.
K. Fung, G. Lewis and X. Wu
Comments: 32 pages, 16 figures, Accepted for publication in Acta Astronautica
Posted in Cross-listed | Tagged Astrophysics of Galaxies, Instrumentation and Methods for Astrophysics, Space Physics
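Two standard relativistic-rocket relations underpin fuel estimates of the kind discussed above; they are textbook results included here for orientation, not the paper's genetic-algorithm machinery, and the symbols (a, tau, w, M_0, M_1) are generic.

```latex
% Constant proper acceleration a applied for proper time \tau gives a coordinate speed
% v/c = \tanh(a\tau/c); the relativistic rocket equation for exhaust speed w relates
% the initial-to-final mass ratio to the rapidity change:
\begin{equation}
  \frac{v}{c} = \tanh\!\left(\frac{a\tau}{c}\right), \qquad
  \frac{M_0}{M_1} = \exp\!\left(\frac{c}{w}\,\operatorname{artanh}\frac{\Delta v}{c}\right)
  \;\xrightarrow{\;w = c\;}\; \sqrt{\frac{1+\Delta v/c}{1-\Delta v/c}},
\end{equation}
% so a perfect photon rocket (w = c) needs a mass ratio of only about 1.22 to reach
% 0.2c, whereas lower exhaust speeds drive the required ratio up exponentially.
```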
The Formation of Heliospheric Arcs of Slow Solar Wind [CL]
A major challenge in solar and heliospheric physics is understanding how highly localized regions, far smaller than 1 degree at the Sun, are the source of solar-wind structures spanning more than 20 degrees near Earth. The Sun's atmosphere is divided into magnetically open regions, coronal holes, where solar-wind plasma streams out freely and fills the solar system, and closed regions, where the plasma is confined to coronal loops. The boundary between these regions extends outward as the heliospheric current sheet (HCS). Measurements of plasma composition imply that the solar wind near the HCS, the so-called slow solar wind, originates in closed regions, presumably by the processes of field-line opening or interchange reconnection. Mysteriously, however, slow wind is also often seen far from the HCS. We use high-resolution, three-dimensional magnetohydrodynamic simulations to calculate the dynamics of a coronal hole whose geometry includes a narrow corridor flanked by closed field and which is driven by supergranule-like flows at the coronal-hole boundary. We find that these dynamics result in the formation of giant arcs of closed-field plasma that extend far from the HCS and span tens of degrees in latitude and longitude at Earth, accounting for the slow solar wind observations.
A. Higginson, S. Antiochos, C. DeVore, et al.
Solar signatures and eruption mechanism of the 2010 August 14 CME [SSA]
On 2010 August 14, a wide-angled coronal mass ejection (CME) was observed. This solar eruption originated from a destabilized filament that connected two active regions, and the unwinding of this filament gave the eruption an untwisting motion that drew the attention of many observers. In addition to the erupting filament and the associated CME, several other low-coronal signatures that typically indicate the occurrence of a solar eruption were associated with this event. However, contrary to what is expected, the fast CME ($\mathrm{v}>900~\mathrm{km}~\mathrm{s}^{-1}$) was accompanied by only a weak C4.4 flare.
We investigate the various eruption signatures that were observed for this event and focus on the kinematic evolution of the filament in order to determine its eruption mechanism. Had this solar eruption occurred just a few days earlier, it could have been a significant event for space weather. The risk of underestimating the strength of this eruption based solely on the C4.4 flare illustrates the need to include all eruption signatures in event analyses in order to obtain a complete picture of a solar eruption and assess its possible space weather impact.
E. D'Huys, D. Seaton, A. Groof, et al.
Planar magnetic structures in coronal mass ejection-driven sheath regions [SSA]
Planar magnetic structures (PMSs) are periods in the solar wind during which interplanetary magnetic field vectors are nearly parallel to a single plane. One of the specific regions where PMSs have been reported is coronal mass ejection (CME)-driven sheaths. Here we use an automated method to identify PMSs in 95 CME sheath regions observed in situ by the Wind and ACE spacecraft between 1997 and 2015. The occurrence and location of the PMSs are related to various shock, sheath and CME properties. We find that PMSs are ubiquitous in CME sheaths; 85% of the studied sheath regions had PMSs, with a mean duration of 6.0 hours. In about one-third of the cases the magnetic field vectors followed a single PMS plane that covered a significant part (at least 67%) of the sheath region. Our analysis gives strong support for two suggested PMS formation mechanisms: the amplification and alignment of solar wind discontinuities near the CME-driven shock, and the draping of the magnetic field lines around the CME ejecta. For example, we found that the shock and PMS plane normals generally coincided for the events where the PMSs occurred near the shock (68% of the PMS plane normals near the shock were separated by less than 20{\deg} from the shock normal), while deviations were clearly larger when PMSs occurred close to the ejecta leading edge. In addition, PMSs near the shock were generally associated with lower upstream plasma beta than the cases where PMSs occurred near the leading edge of the CME. We also demonstrate that the planar parts of the sheath contain a higher amount of strongly southward magnetic field than the non-planar parts, suggesting that planar sheaths are more likely to drive magnetospheric activity.
E. Palmerio, E. Kilpua and N. Savani
Tue, 31 Jan 17
Comments: 10 pages, 7 figures, accepted for publication in Annales Geophysicae
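The abstract above does not spell out the automated identification method; one common building block for this kind of planarity analysis is minimum variance analysis (MVA) of the magnetic field vectors, sketched below as an assumed illustration rather than the authors' actual pipeline.

```python
# Minimum variance analysis: the eigenvector of the magnetic variance matrix with
# the smallest eigenvalue estimates the normal of a plane the field vectors hug.
# A small ratio of minimum to intermediate eigenvalue indicates good planarity.
import numpy as np

def mva_normal(B):
    """B: (N, 3) array of magnetic field vectors within a candidate interval."""
    dB = B - B.mean(axis=0)
    M = dB.T @ dB / len(B)                 # variance (covariance) matrix
    w, v = np.linalg.eigh(M)               # eigenvalues in ascending order
    normal = v[:, 0]                       # minimum-variance direction
    planarity = w[0] / w[1]                # << 1 for a well-defined plane
    return normal, planarity

# Example with synthetic, nearly planar data (z is the near-normal direction):
rng = np.random.default_rng(0)
B = np.column_stack([rng.normal(0, 5, 500),
                     rng.normal(0, 3, 500),
                     rng.normal(0, 0.2, 500)])
n, p = mva_normal(B)
print(n, p)   # normal close to +/- z, planarity ratio << 1
```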
Radio occultations of the Io plasma torus by Juno are feasible [EPA]
The flow of material from Io's volcanoes into the Io plasma torus, out into the magnetosphere, and along field lines into Jupiter's upper atmosphere is not adequately understood. The lack of observations of spatial and temporal variations in the Io plasma torus impedes attempts to understand the system as a whole. Here we propose that radio occultations of the Io plasma torus by the Juno spacecraft can measure plasma densities in the Io plasma torus. We find that the line-of-sight column density of plasma in each of the three regions of the Io plasma torus (cold torus, ribbon, and warm torus) can be measured with uncertainties of 10%. We also find that scale heights describing the spatial variation in plasma density in each of these three regions can be measured with similar uncertainties. Such observations will be sufficiently accurate to support system-scale studies of the flow of plasma through the magnetosphere of Jupiter.
P. Phipps and P. Withers
Comments: 53 pages (unformatted manuscript), 10 figures, 3 tables, accepted for publication in JGR Space Physics
Posted in Earth and Planetary Astrophysics | Tagged Earth and Planetary Astrophysics, Space Physics
Radiometric Actuators for Spacecraft Attitude Control [CL]
CubeSats and small satellites are emerging as low-cost tools to perform astronomy, exoplanet searches and Earth observation. These satellites can be dedicated to pointing at targets for weeks or months at a time. This is typically not possible on larger missions where usage is shared. Current satellites use reaction wheels and, where possible, magneto-torquers to control attitude. However, these actuators can induce jitter due to various sources. In this work, we introduce a new class of actuators that exploit radiometric forces induced by gases on surfaces with a thermal gradient. Our work shows that a CubeSat or small spacecraft mounted with radiometric actuators can achieve precise pointing of a few arc-seconds or less and avoid the jitter problem. The actuator is entirely solid-state, containing no moving mechanical components. This ensures high reliability and long life in space. A preliminary design for these actuators is proposed, followed by a feasibility analysis of the actuator performance.
R. Nallapu, A. Tallapragada and J. Thangavelautham
Comments: 7 pages, 11 figures in Proceedings of the IEEE Aerospace Conference 2017
Posted in Cross-listed | Tagged Instrumentation and Methods for Astrophysics, Space Physics
Linking Fluid and Kinetic Scales in Solar Wind Turbulence [CL]
We investigate possible links between the large-scale and small-scale features of solar wind fluctuations across the frequency break separating the fluid and kinetic regimes. The aim is to correlate the polarization of magnetic field fluctuations at dissipative scales with the particular state of turbulence within the inertial range of fluctuations. We found clear correlations between each type of polarization within the kinetic regime and fluid parameters within the inertial range. Moreover, for the first time in the literature, we showed that left-handed and right-handed polarized fluctuations occupy different areas of the plasma instability-temperature anisotropy plot, as expected for Alfv\'en ion cyclotron and kinetic Alfv\'en waves, respectively.
D. Telloni and R. Bruno
Electron cyclotron maser instability (ECMI) in strong magnetic guide field reconnection [CL]
Reconnection in strong current-aligned magnetic guide fields allows for the excitation of the electron-cyclotron-maser instability and the emission of electromagnetic radiation from the electron exhaust at the {\sf X} point. The electrons in the guide field remain magnetized, with reconnection barely affected. The guide field is responsible for the asymmetric properties of the {\sf X} point and exhaust. Asymmetry in the electron population results in conditions favorable for the ECMI. Fundamental emission beneath the guide-field cyclotron frequency is similar to electron hole emission discussed elsewhere. It can be treated in the proper exhaust frame, and maps the local magnetic field when moving together with the exhaust along the guide field. Many applications of this mechanism can be imagined. We propose an outline of the mechanism and discuss some of its advantages and prospects. Among potential applications are AKR in auroral physics, various types of solar radio emissions during flares, planetary emissions and several astrophysical scenarios involving the presence of strong fields and field-aligned currents. Escape of radiation from the {\sf X} point is not a problem. However, remote observation requires traversing the stop-band of the X mode and implies source displacements to weaker fields.
Keywords: Electron cyclotron maser, radio emissions, radio bursts, reconnection, auroral physics, AKR, solar radiation, pulsars
Wed, 25 Jan 17
Suprathermal electron strahl widths in the presence of narrow-band whistler waves in the solar wind [CL]
We perform the first statistical study of the effects of the interaction of suprathermal electrons with narrow-band whistler mode waves in the solar wind. We show that this interaction does occur and that it is associated with enhanced widths of the so-called strahl component. The latter is directed along the interplanetary magnetic field away from the Sun. We perform the study by comparing the strahl pitch angle widths in the solar wind at 1 AU, in the absence of large-scale discontinuities and transient structures such as interplanetary shocks, interplanetary coronal mass ejections, stream interaction regions, etc., during times when whistler mode waves were present and when they were absent. This is done by using the data from two Cluster instruments: STAFF data in the frequency range between ~0.1 Hz and ~200 Hz were used for determining the wave properties, and PEACE datasets at twelve central energies between ~57 eV (equivalent to ~10 typical electron thermal energies in the solar wind, E_T) and ~676 eV (~113 E_T) for pitch angle measurements. Statistical analysis shows that during the intervals with whistler waves the strahl component on average exhibits pitch angle widths between 2 and 12 degrees larger than during the intervals when these waves are not present. The largest difference is obtained for the electron central energy of ~344 eV (~57 E_T).
P. Kajdic, O. Alexandrova, M. Maksimovic, et al.
Comments: Published in ApJ
Posted in Cross-listed | Tagged Earth and Planetary Astrophysics, Space Physics
Conductivity spectrum and dispersion relation in solar wind turbulence [CL]
Magnetic turbulence in the solar wind is treated from the point of view of electrodynamics. This can be done based on the use of Poynting's theorem, attributing all turbulent dynamics to the spectrum of turbulent conductivity. For two directions of propagation of the turbulent fluctuations of the electromagnetic field with respect to the mean plus external magnetic fields, an expression is constructed for the spectrum of turbulent dissipation. Use of solar wind observations of electromagnetic power spectral densities in the inertial subrange then allows determination of the conductivity spectrum, the dissipative response function, in this range. It requires observations of the complete electromagnetic spectral energy densities, including the electric power spectral densities. The dissipative response function and dispersion relation of solar wind inertial-range magnetic turbulence are obtained. The dispersion relation indicates the decay of spatial scales with increasing frequency, providing independent support for the use of Taylor's hypothesis. The dissipation function indicates an approximate shot-noise spectrum of turbulent resistivity in the inertial range, suggesting progressive structure formation in the inertial range, which hints at the presence of discrete-mode turbulence and nonlinear resonances.
Comments: 7 pages, no figures, preprint
Evidence for the Stochastic Acceleration of Secondary Antiprotons by Supernova Remnants [HEAP]
The antiproton-to-proton ratio in the cosmic-ray spectrum is a sensitive probe of new physics. Using recent measurements of the cosmic-ray antiproton and proton fluxes in the energy range of 1-1000 GeV, we study the contribution to the $\bar{p}/p$ ratio from secondary antiprotons that are produced and subsequently accelerated within individual supernova remnants. We consider several well-motivated models for cosmic-ray propagation in the interstellar medium and marginalize our results over the uncertainties related to the antiproton production cross section and the time-, charge-, and energy-dependent effects of solar modulation. We find that the increase in the $\bar{p}/p$ ratio observed at rigidities above $\sim$ 100 GV cannot be accounted for within the context of conventional cosmic-ray propagation models, but is consistent with scenarios in which cosmic-ray antiprotons are produced and subsequently accelerated by shocks within a given supernova remnant. In light of this, the acceleration of secondary cosmic rays in supernova remnants is predicted to substantially contribute to the cosmic-ray positron spectrum, accounting for a significant fraction of the observed positron excess.
I. Cholis, D. Hooper and T. Linden
Comments: 5 pages, 3 figures and 4 tables
Posted in High Energy Astrophysical Phenomena | Tagged High Energy Astrophysical Phenomena, High Energy Physics - Phenomenology, Space Physics
Comment to: The interaction of relativistic spacecrafts with the interstellar medium [GA]
Recently, Hoang et al. (arXiv:1608.05284) reported an analysis of the interaction of relativistic spacecraft with the interstellar medium (ISM, i.e. gas atoms and dust particles) relevant for the Breakthrough Starshot initiative (https://breakthroughinitiatives.org/Initiative/3). The main conclusion is that dust poses a much greater threat to the starship than gas atoms. However, the analysis used to treat the interaction of the spaceship with gas atoms is based on an incorrect use of the Szenes model. Only by properly applying the Szenes model can it be determined whether the conclusion remains valid or not. In the following, the main comments we have raised about the paper are listed. The present text is based on the v2 version of the above-mentioned paper [0] that was accepted for publication in the Astrophysical Journal.
M. Karlusic
Comments: working paper, comment to arXiv:1608.05284
Posted in Galaxy Astrophysics | Tagged Astrophysics of Galaxies, Space Physics
Solar Energetic Particle transport near a Heliospheric Current Sheet [CL]
Solar Energetic Particles (SEPs), a major component of space weather, propagate through the interplanetary medium strongly guided by the Interplanetary Magnetic Field (IMF). In this work, we analyse the implications of a flat Heliospheric Current Sheet (HCS) for proton propagation from SEP release sites to the Earth. We simulate proton propagation by integrating fully 3-D trajectories near an analytically defined flat current sheet, collecting comprehensive statistics into histograms, fluence maps and virtual observer time profiles within an energy range of 1–800 MeV. We show that protons experience significant current sheet drift to distant longitudes, causing time profiles to exhibit multiple components, which are a potential source of confusion when interpreting observations. We find that variation of the current sheet thickness within a realistic parameter range has little effect on particle propagation. We show that the IMF configuration strongly affects the deceleration of protons. We show that, in our model, the presence of a flat equatorial HCS in the inner heliosphere limits the crossing of protons into the opposite hemisphere.
M. Battarbee, S. Dalla and M. Marsh
Comments: 16 pages, 15 figures, accepted for publication in The Astrophysical Journal
The Solar Orbiter Mission: an Energetic Particle Perspective [IMA]
Solar Orbiter is a joint ESA-NASA mission planned for launch in October 2018. The science payload includes remote-sensing and in-situ instrumentation designed with the primary goal of understanding how the Sun creates and controls the heliosphere. The spacecraft will follow an elliptical orbit around the Sun, with perihelion as close as 0.28 AU. During the late orbit phase the orbital plane will reach inclinations above 30 degrees, allowing direct observations of the solar polar regions. The Energetic Particle Detector (EPD) is an instrument suite consisting of several sensors measuring electrons, protons and ions over a broad energy interval (2 keV to 15 MeV for electrons, 3 keV to 100 MeV for protons and a few tens of keV/nuc to 450 MeV/nuc for ions), providing composition, spectra, timing and anisotropy information. We present an overview of Solar Orbiter from the energetic particle perspective, summarizing the capabilities of EPD and the opportunities that these new observations will provide for understanding how energetic particles are accelerated during solar eruptions and how they propagate through the heliosphere.
R. Gomez-Herrero, J. Rodriguez-Pacheco, R. Wimmer-Schweingruber, et al.
Posted in Instrumentation and Methods for Astrophysics | Tagged High Energy Astrophysical Phenomena, Instrumentation and Methods for Astrophysics, Solar and Stellar Astrophysics, Space Physics
Spatial and temporal variations of high-energy electron flux in the outer radiation belt [HEAP]
Results on short-term variations of the high-energy electron flux in the outer radiation belt, obtained in the ARINA satellite experiment (2006–2016), are presented. The ARINA scintillation spectrometer on board the Russian Resurs-DK1 satellite was developed at MEPhI. The instrument carried out continuous measurements of the high-energy electron flux and its energy spectrum in low-Earth orbit in the range 3–30 MeV with 10–15 % energy resolution. Time profiles of the electron flux at different L-shells were studied in detail, using March 2012 as an example, and experimental data on high-energy (4–6 MeV) electrons in the outer radiation belt zone (L ~ 3–7) were analysed. These electron fluxes show large variability. Sharp changes in the electron flux (both rises and falls) in the magnetosphere, connected with geomagnetic storms caused by solar flares and coronal mass ejections, were observed.
S. Koldashov, S. Aleksandrin and N. Eremina
Mon, 16 Jan 17
Posted in High Energy Astrophysical Phenomena | Tagged High Energy Astrophysical Phenomena, Space Physics
Characterizing Fluid and Kinetic Instabilities using Field-Particle Correlations on Single-Point Time Series [CL]
A recently proposed technique correlating electric fields and particle velocity distributions is applied to single-point time series extracted from linearly unstable, electrostatic numerical simulations. The form of the correlation, which measures the transfer of phase-space energy density between the electric field and plasma distributions and had previously been applied to damped electrostatic systems, is modified to include the effects of drifting equilibrium distributions of the type that drive counter-streaming and bump-on-tail instabilities. By using single-point time series, the correlation is ideal for diagnosing dynamics in systems where access to integrated quantities, such as energy, is observationally infeasible. The velocity-space structure of the field-particle correlation is shown to characterize the underlying physical mechanisms driving unstable systems. The use of this correlation in simple systems will assist in its eventual application to turbulent, magnetized plasmas, with the ultimate goal of characterizing the nature of mechanisms that damp turbulent fluctuations in the solar wind.
K. Klein
Comments: 9 pages, 6 figures, accepted for publication in Physics of Plasmas
Proton fire hose instabilities in the expanding solar wind [CL]
Using two-dimensional hybrid expanding box simulations we study the competition between the continuously driven parallel proton temperature anisotropy and fire hose instabilities in collisionless homogeneous plasmas. For a quasi-radial ambient magnetic field the expansion drives $T_{\mathrm{p}\|}>T_{\mathrm{p}\perp}$ and the system eventually becomes unstable with respect to the dominant parallel fire hose instability. This instability is generally unable to counteract the induced anisotropization, and the system typically becomes unstable with respect to the oblique fire hose instability later on. The oblique instability efficiently reduces the anisotropy and the system rapidly stabilizes, while a significant part of the generated electromagnetic fluctuations is damped, with the energy going to the protons. As long as the magnetic field is in the quasi-radial direction, this evolution repeats itself and the electromagnetic fluctuations accumulate. For a sufficiently oblique magnetic field the expansion drives $T_{\mathrm{p}\perp}>T_{\mathrm{p}\|}$ and brings the system into the region that is stable with respect to the fire hose instabilities.
P. Hellinger
Comments: Journal of Plasma Physics, 14 pages, 9 figures
Amplitude limits and nonlinear damping of shear-Alfvén waves in high-beta low-collisionality plasmas [CL]
This work, which extends Squire et al. [ApJL, 830, L25 (2016)], explores the effect of self-generated pressure anisotropy on linearly polarized shear-Alfvén fluctuations in low-collisionality plasmas. Such anisotropies lead to stringent limits on the amplitude of magnetic perturbations in high-beta plasmas, above which a fluctuation can destabilize itself through the parallel firehose instability. This causes the wave frequency to approach zero, "interrupting" the wave and stopping its oscillation. These effects are explored in detail in the collisionless and weakly collisional "Braginskii" regime, for both standing and traveling waves. The focus is on simplified models in one dimension, on scales much larger than the ion gyroradius. The effect has interesting implications for the physics of magnetized turbulence in the high-beta conditions that are prevalent in many astrophysical plasmas.
J. Squire, A. Schekochihin and E. Quataert
Posted in Cross-listed | Tagged Cosmology and Nongalactic Astrophysics, High Energy Astrophysical Phenomena, Plasma Physics, Space Physics
International Veterinary Epilepsy Task Force consensus proposal: medical treatment of canine epilepsy in Europe
Sofie F.M. Bhatti,
Luisa De Risio,
Karen Muñana,
Jacques Penderis,
Veronika M. Stein,
Andrea Tipold,
Mette Berendt,
Robyn G. Farquhar,
Andrea Fischer,
Sam Long,
Wolfgang Löscher,
Paul J.J. Mandigers,
Kaspar Matiasek,
Akos Pakozdy,
Edward E. Patterson,
Simon Platt,
Michael Podell,
Heidrun Potschka,
Clare Rusbridge &
Holger A. Volk
In Europe, the number of antiepileptic drugs (AEDs) licensed for dogs has grown considerably over the last years. Nevertheless, the same questions remain, including: 1) when to start treatment; 2) which drug is best used initially; 3) which adjunctive AED can be advised if treatment with the initial drug is unsatisfactory; and 4) when treatment changes should be considered. In this consensus proposal, an overview is given of the aim of AED treatment, when to start long-term treatment in canine epilepsy, and which veterinary AEDs are currently in use for dogs. The consensus proposal for drug treatment protocols: 1) is based on current published evidence-based literature [17]; 2) considers the current legal framework of the cascade regulation for the prescription of veterinary drugs in Europe; and 3) reflects the authors' experience. This paper aims to provide a consensus for the management of canine idiopathic epilepsy. Furthermore, for the management of structural epilepsy, AEDs are essential in addition to treating the underlying cause, if possible.
At present, there is no doubt that the administration of AEDs is the mainstay of therapy. In fact, the term AED is rather inappropriate, as the mode of action of most AEDs is to suppress epileptic seizures, not epileptogenesis or the pathophysiological mechanisms of epilepsy. Perhaps, in the future, the term anti-seizure drug might be more applicable in veterinary neurology, a term that is increasingly used in human epilepsy. Additionally, it is known that epileptic seizure frequency appears to increase over time in a subpopulation of dogs with untreated idiopathic epilepsy, reflecting the need for AED treatment in these patients [63].
In our consensus proposal on classification and terminology, we have defined idiopathic epilepsy as a disease in its own right. A genetic origin of idiopathic epilepsy is supported by genetic testing (when available), and a genetic influence is supported by a high breed prevalence (>2 %), genealogical analysis and/or familial accumulation of epileptic individuals. However, in the clinical setting, idiopathic epilepsy most commonly remains a diagnosis of exclusion following diagnostic investigations for causes of reactive seizures and structural epilepsy.
Aims of AED treatment
The ideal goal of AED therapy is to balance the ability to eliminate epileptic seizures with the quality of life of the patient. Seizure eradication is often not likely in dogs. More realistic goals are to decrease seizure frequency, duration, severity and the total number of epileptic seizures that occur over a short time span, with no or limited and acceptable AED adverse effects to maximize the dog's and owner's quality of life. Clinicians should approach treatment using the following paradigm [23, 76, 91, 92, 120]:
Decide when to start AED treatment
Choose the most appropriate AED and dosage
Know if and when to monitor serum AED concentrations and adjust treatment accordingly
Know when to add or change to a different AED
Promote pet owner compliance
When to recommend maintenance AED treatment?
Definitive, evidence-based data on when to start AED therapy in dogs based on seizure frequency and type are lacking. As such, treatment guidelines may have to be extrapolated from human medicine. Clinicians should consider the general health of the patient, as well as the owner's lifestyle, financial limitations, and comfort with the proposed therapeutic regimen. Individualized therapy is paramount for choosing a treatment plan. As a general rule, the authors recommend initiation of long-term treatment in dogs with idiopathic epilepsy when any one of the following criteria is present:
Interictal period of ≤ 6 months (i.e. 2 or more epileptic seizures within a 6 month period)
Status epilepticus or cluster seizures
The postictal signs are considered especially severe (e.g. aggression, blindness) or last longer than 24 hours
The epileptic seizure frequency and/or duration is increasing, and/or seizure severity is worsening, over 3 interictal periods
In humans, the decision regarding when to recommend AED treatment is based on a number of risk factors (e.g. risk of recurrence, seizure type, tolerability, adverse effects) [42, 115]. In people, clear proof exists that there is no benefit in initiating AED treatment after a single unprovoked seizure [42], but there is evidence to support starting treatment after the second seizure [43, 108]. In dogs, long-term seizure management is thought to be most successful when appropriate AED therapy is started early in the course of the disease, especially in dogs with a high seizure density and in dog breeds known to suffer from a severe form of epilepsy [12−14]. A total number of ≥ 10 seizures during the first 6 months of the disease appeared to be correlated with a poor outcome in Australian Shepherds with idiopathic epilepsy [132]. Furthermore, recent evidence indicates that a high seizure density, the occurrence of cluster seizures and male sex are associated with a poor AED response [84].
A strong correlation exists in epileptic people between a high seizure frequency prior to AED treatment and poor AED response [16, 34, 59]. Historically, this has been attributed to kindling, in which seizure activity leads to intensification of subsequent seizures [117]. However, there is little clinical evidence that kindling plays a role in either dogs [54] or humans [111] with recurrent seizures. In humans, a multifactorial pathogenesis is suggested [14, 52]. Recent epidemiologic data suggest that there are differences in the intrinsic severity of epilepsy among individuals, and these differences influence a patient's response to medication and long-term outcome. Additionally, evidence for seizure-associated alterations that affect the pharmacodynamics and pharmacokinetics of AEDs have been suggested [99]. Breed-related differences in epilepsy severity have been described in dogs, with a moderate to severe clinical course reported in Australian Shepherds [132], Border Collies [49, 84], Italian Spinoni [24], German Shepherds and Staffordshire Bull Terriers [84], whereas a less severe form of the disease has been described in a different cohort of Collies (mainly rough coated) [77], Labrador Retrievers [7] and Belgian Shepherds [45]. Consequently, genetics may affect the success of treatment and may explain why some breeds are more predisposed to drug resistant epilepsy [3, 77].
Choice of AED therapy
There are no evidence-based guidelines regarding the choice of AEDs in dogs. When choosing an AED for the management of epilepsy in dogs, several factors need to be taken into account: AED-specific factors (e.g. regulatory aspects, safety, tolerability, adverse effects, drug interactions, frequency of administration), dog-related factors (e.g. seizure type, frequency and aetiology, underlying pathologies such as kidney/hepatic/gastrointestinal problems) and owner-related factors (e.g. lifestyle, financial circumstances) [23]. In the end, however, AED choice is often determined on a case-by-case basis.
Until recently, primary treatment options for dogs with epilepsy have focused mainly on phenobarbital (PB) and potassium bromide (KBr) due to their long standing history, widespread availability, and low cost. While both AEDs are still widely used in veterinary practice, several newer AEDs approved for use in people are also being used for the management of canine idiopathic epilepsy mainly as add-on treatment. Moreover, since early 2013, imepitoin has been introduced in most European countries for the management of recurrent single generalized epileptic seizures in dogs with idiopathic epilepsy.
Several AEDs of the older generation approved for humans have been shown to be unsuitable for use in dogs, as most have an elimination half-life that is too short to allow convenient dosing by owners; these include phenytoin, carbamazepine, valproic acid, and ethosuximide [119]. Some are even toxic in dogs, such as lamotrigine (whose metabolite is cardiotoxic) [26, 136] and vigabatrin (associated with neurotoxicity and haemolytic anaemia) [113, 131, 138].
Since the 1990s, new AEDs with improved tolerability, fewer side effects and reduced drug interaction potential have been approved for the management of epilepsy in humans. Many of these novel drugs appear to be relatively safe in dogs; these include levetiracetam, zonisamide, felbamate, topiramate, gabapentin, and pregabalin. Pharmacokinetic studies on lacosamide [68] and rufinamide [137] support the potential use of these drugs in dogs, but they have not been evaluated in the clinical setting. Although these newer drugs have gained considerable popularity in the management of canine epilepsy, scientific data on their safety and efficacy are very limited and cost is often prohibitive.
Phenobarbital
PB has the longest history of chronic use of all AEDs in veterinary medicine. After decades of use, it was approved in 2009 for the prevention of seizures caused by generalized epilepsy in dogs. PB has a favourable pharmacokinetic profile and is relatively safe [2, 87, 97]. PB seems to be effective in decreasing seizure frequency in approximately 60−93 % of dogs with idiopathic epilepsy when plasma concentrations are maintained within the therapeutic range of 25−35 mg/l [10, 31, 74, 105]. According to Charalambous et al. (2014) [17], there is overall good evidence for recommending the use of PB as a monotherapy AED in dogs with idiopathic epilepsy. Moreover, the superior efficacy of PB was demonstrated in a randomized clinical trial comparing PB to bromide (Br) as first-line AED in dogs, in which 85 % of dogs administered PB became seizure-free for 6 months compared with 52 % of dogs administered Br [10]. This study demonstrated a higher efficacy of PB compared to Br as a monotherapy, providing better seizure control and fewer side effects.
PB is rapidly (within 2h) absorbed after oral administration in dogs, with a reported bioavailability of approximately 90 % [2, 87]. Peak serum concentrations are achieved approximately 4−8h after oral administration in dogs [2, 97]. The initial elimination half-life in normal dogs has been reported to range from 37−73h after multiple oral dosing [96]. Plasma protein binding is approximately 45 % in dogs [36]. PB crosses the placenta and can be teratogenic.
PB is metabolized primarily by hepatic microsomal enzymes, and approximately 25 % is excreted unchanged in the urine. There is individual variability in PB absorption, excretion and elimination half-life [2, 87, 97]. In dogs, PB is a potent inducer of cytochrome P450 enzyme activity in the liver [48], and this significantly increases hepatic production of reactive oxygen species, thus increasing the risk of hepatic injury [107]. Therefore PB is contraindicated in dogs with hepatic dysfunction. The induction of cytochrome P450 activity in the liver can lead to accelerated clearance of PB itself over time (autoinduction, also known as metabolic tolerance), as well as of endogenous compounds (such as thyroid hormones) [40, 48]. As a result, with chronic PB administration in dogs, its total body clearance increases and its elimination half-life decreases progressively, stabilizing 30−45 days after starting therapy [97]. This can result in a reduction of PB serum concentrations and therapeutic failure; therefore, monitoring of serum PB concentrations is very important for dose modulation over time.
A parenteral form of PB is available for intramuscular (IM) or intravenous (IV) administration. Different PB formulations are available in different countries; it should be emphasized, however, that IM formulations cannot be used IV and vice versa. Parenteral administration of PB is useful for administering maintenance therapy in hospitalized patients that are unable to take oral medication. The pharmacokinetics of IM PB have not been explored in dogs; however, studies in humans have shown a similar absorption after IM administration compared to oral administration [135]. The elimination half-life in dogs after a single IV dose is approximately 93h [87].
Pharmacokinetic interactions
In dogs, chronic PB administration can affect the disposition of other co-administered medications that are metabolized by cytochrome P450 subfamilies and/or bound to plasma proteins [48]. PB can alter the pharmacokinetics, and as a consequence may decrease the therapeutic effect, of other AEDs (levetiracetam, zonisamide, and benzodiazepines) as well as corticosteroids, cyclosporine, metronidazole, voriconazole, digoxin, digitoxin, phenylbutazone and some anaesthetics (e.g. thiopental) [23, 33, 72, 82, 130]. As diazepam is used as a first-line medication for emergencies (e.g. status epilepticus) in practice, it should be emphasized that the IV or rectal dose of diazepam should be doubled in dogs treated chronically with PB [130]. Concurrent administration of PB and medications that inhibit hepatic microsomal cytochrome P450 enzymes, such as cimetidine, omeprazole, lansoprazole, chloramphenicol, trimethoprim, fluoroquinolones, tetracyclines, ketoconazole, fluconazole, itraconazole, fluoxetine, felbamate and topiramate, may inhibit PB metabolism, increase its serum concentration and result in toxicity [10].
Common adverse effects
Most of the adverse effects due to PB are dose dependent, occur early after treatment initiation or dose increase and generally disappear or decrease in the subsequent weeks due to development of pharmacokinetic and pharmacodynamic tolerance [35, 121] (Table 1). The adverse effects include sedation, ataxia, polyphagia, polydipsia and polyuria. For an in-depth review on the adverse effects of PB, the reader is referred to comprehensive book chapters [23, 32, 91].
Table 1 Most common reported adverse effects seen in dogs treated with PB, imepitoin and KBr (rarely reported and/or idiosyncratic adverse effects are indicated in grey)
Idiosyncratic adverse effects
These effects occur uncommonly in dogs and include hepatotoxicity [13, 22, 39, 75], haematologic abnormalities (anaemia and/or thrombocytopenia and/or neutropenia) [51, 56], superficial necrolytic dermatitis [66], potential risk for pancreatitis [38, 46], dyskinesia [58], anxiety [58], and hypoalbuminaemia [41] (Table 1). Most of these idiosyncratic reactions are potentially reversible with discontinuation of PB. For an in-depth review on the idiosyncratic adverse effects of PB the reader is referred to comprehensive book chapters [23, 32, 91].
Laboratory changes
Laboratory changes related to chronic PB administration in dogs include elevation in serum liver enzyme activities [39, 41, 75], cholesterol and triglyceride concentrations [41]. Alterations in some endocrine function testing may occur (thyroid and adrenal function, pituitary-adrenal axis) [21, 41, 128]. For an in-depth review on these laboratory changes the reader is referred to comprehensive book chapters [23, 32, 91].
Dose and monitoring (Fig. 1)
PB treatment flow diagram for decision making during seizure management in an otherwise healthy dog. The authors advise starting with PB (and adding KBr if seizure control is inadequate after optimal use of PB (Fig. 3)): in dogs with idiopathic epilepsy experiencing recurrent single generalised epileptic seizures; in dogs with idiopathic epilepsy experiencing cluster seizures or status epilepticus; in dogs with other epilepsy types. *Criteria for (in)adequate seizure control with regard to efficacy and tolerability (see Consensus proposal: Outcome of therapeutic interventions in canine and feline epilepsy [94]). 1. Treatment efficacious: a: Achievement of complete treatment success (i.e. seizure freedom or extension of the interseizure interval to three times the longest pretreatment interseizure interval and for a minimum of three months (ideally > 1 year)); b: Achievement of partial treatment success (i.e. a reduction in seizure frequency including information on seizure incidence (usually at least 50 % or more reduction defines a drug responder), a reduction in seizure severity, or a reduction in frequency of seizure clusters and/or status epilepticus). 2. Treatment not tolerated i.e. appearance of severe adverse effects necessitating discontinuation of the AED
The recommended oral starting dose of PB in dogs is 2.5−3 mg/kg BID. Subsequently, the oral dosage is tailored to the individual patient based on seizure control, adverse effects and serum concentration monitoring.
Because of considerable variability in the pharmacokinetics of PB among individuals, the serum concentration should be measured 14 days after starting therapy (baseline concentration for future adjustments) or after a change in dose. To evaluate the effect of metabolic tolerance, a second PB serum concentration can be measured 6 weeks after initiation of therapy. Recommendations on optimal timing of blood collection for serum PB concentration monitoring in dogs vary among studies [23]. Generally, serum concentrations can be checked at any time in the dosing cycle as the change in PB concentrations through a daily dosing interval is not therapeutically relevant once steady-state has been achieved [62, 70]. However, in dogs receiving a dose of 5 mg/kg BID or higher, trough concentrations were significantly lower than non-trough concentrations and serum PB concentration monitoring at the same time post-drug dosing was recommended, in order to allow accurate comparison of results in these dogs [70]. Another study recommended performing serum PB concentration monitoring on a trough sample as a significant difference between peak and trough PB concentration was identified in individual dogs [10]. The therapeutic range of PB in serum is 15 mg/l to 40 mg/l in dogs. However, it is the authors' opinion that in the majority of dogs a serum PB concentration between 25−30 mg/l is required for optimal seizure control. Serum concentrations of more than 35 mg/l are associated with an increased risk of hepatotoxicity and should be avoided [22, 75]. In case of inadequate seizure control, serum PB concentrations must be used to guide increases in drug dose. Dose adjustments can be calculated according to the following formula (Formula A):
$$ \text{new PB total daily dosage (mg)} = \frac{\text{desired serum PB concentration}}{\text{actual serum PB concentration}} \times \text{actual PB total daily dosage (mg)} $$
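As a purely illustrative example with hypothetical values (not a dosing recommendation for any individual patient), consider a 20 kg dog receiving 3 mg/kg PB BID (120 mg/day) with an actual serum PB concentration of 20 mg/l and a desired concentration of 30 mg/l. Formula A then gives:
$$ \text{new PB total daily dosage} = \frac{30\ \text{mg/l}}{20\ \text{mg/l}} \times 120\ \text{mg} = 180\ \text{mg/day} \approx 4.5\ \text{mg/kg BID} $$
The adjusted dose should subsequently be verified against clinical tolerability and a follow-up serum PB concentration.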
A dog with adequate seizure control, but serum drug concentrations below the reported therapeutic range, does not require alteration of the drug dose, as this serum concentration may be sufficient for that individual. Generally, the desired serum AED concentration for individual patients should be the lowest possible concentration associated with >50 % reduction in seizure frequency or seizure-freedom and absence of intolerable adverse effects [23].
In animals with cluster seizures, status epilepticus or high seizure frequency, PB can be administered at a loading dose of 15−20 mg/kg IV, IM or PO divided in multiple doses of 3−5 mg/kg over 24−48h to obtain a therapeutic brain concentration quickly and then sustain it [10]. Serum PB concentrations can be measured 1−3 days after loading. Some authors load as soon as possible (over 40 to 60 min) and start with a loading dose of 10 to 12 mg/kg IV followed by two further boluses of 4 to 6 mg/kg 20 min apart.
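To illustrate the loading arithmetic with hypothetical values only, a 20 kg dog loaded at 16 mg/kg would receive a total of 320 mg of PB, which could be divided, for example, into four boluses of 4 mg/kg (80 mg each) spread over 24−48h:
$$ 20\ \text{kg} \times 16\ \text{mg/kg} = 320\ \text{mg total};\qquad 320\ \text{mg} \div 4 = 80\ \text{mg}\ (4\ \text{mg/kg})\ \text{per bolus} $$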
Complete blood cell count, biochemical profile (including cholesterol and triglycerides), and bile acid stimulation test should be performed before starting PB treatment and periodically at 3 months and then every 6 months during treatment. In case of adequate seizure control, serum PB concentrations should be monitored every 6 months. If the dog is in remission or has no seizures, a periodical control every 12 months is advised.
Imepitoin
Imepitoin was initially developed as a new AED for humans, but, the more favourable pharmacokinetic profile of imepitoin in dogs versus humans led to the decision to develop imepitoin for the treatment of canine idiopathic epilepsy [102]. Based on randomized controlled trials that demonstrated antiepileptic efficacy, high tolerability and safety in epileptic dogs, the drug was approved in 2013 for this indication in Europe [64, 98, 122]. It has been recommended to use imepitoin in dogs with idiopathic epilepsy experiencing recurrent single generalized epileptic seizures, however, its efficacy has not yet been demonstrated in dogs with cluster seizures or status epilepticus [30]. In a recent randomized controlled study [122], the efficacy of imepitoin was compared with PB in 226 client-owned dogs. The administration of imepitoin twice daily in incremental doses of 10, 20 or 30 mg/kg demonstrated that the majority of dogs with idiopathic epilepsy were managed successfully with imepitoin without significant difference to the efficacy of PB. The frequency of adverse events (e.g. sedation, polydipsia, polyphagia) was significantly higher in the PB group [122]. In a study by Rieck et al. (2006) [98], dogs with chronic epilepsy not responding to PB or primidone received imepitoin (in its initial formulation) or KBr as adjunct AED and the seizure frequency improved to a similar degree in both groups. According to Charalambous et al. (2014) [17], there is good evidence for recommending the use of imepitoin as monotherapy in dogs with recurrent single generalized epileptic seizures, but insufficient evidence for use as adjunct AED. At present, scientific data and evidence-based guidelines on which AED can best be combined with imepitoin are lacking, and further research is needed. Nevertheless, at this moment, the authors recommend the use of PB as adjunct AED in dogs receiving the maximum dose of imepitoin and experiencing poor seizure control. According to the authors, in case of combined therapy with imepitoin and PB, it is advised to slowly wean off imepitoin over several months if seizure control appears successful on PB and/or to reduce the dose of imepitoin if adverse effects (e.g. sedation) occur (Fig. 2).
Imepitoin treatment flow diagram for decision making during seizure management in an otherwise healthy dog. The authors advise starting with imepitoin in dogs with idiopathic epilepsy experiencing recurrent single generalised epileptic seizures. *Criteria for (in)adequate seizure control with regard to efficacy and tolerability (see Consensus proposal: Outcome of therapeutic interventions in canine and feline epilepsy [94]). 1. Treatment efficacious: a: Achievement of complete treatment success (i.e. seizure freedom or extension of the interseizure interval to three times the longest pretreatment interseizure interval and for a minimum of three months (ideally > 1 year)), b: Achievement of partial treatment success (i.e. a reduction in seizure frequency including information on seizure incidence (usually at least 50 % or more reduction defines a drug responder), a reduction in seizure severity, or a reduction in frequency of seizure clusters and/or status epilepticus). 2. Treatment not tolerated i.e. appearance of severe adverse effects necessitating discontinuation of the AED. #Currently there are no data available on which AED should be added to imepitoin in case of inadequate seizure control. At this moment, the authors recommend the use of PB as adjunct AED in dogs receiving the maximum dose of imepitoin and experiencing poor seizure control
Following oral administration of imepitoin at a dose of 30 mg/kg in healthy Beagle dogs, high plasma levels were observed within 30 min, but maximal plasma levels were only reached after 2−3h following a prolonged absorption time [101]. The elimination half-life was found to be short; approximately 1.5 to 2h. However, in another study in Beagle dogs, a longer half-life (~6 h) was found after higher doses of imepitoin, and accumulation of plasma levels was seen during chronic BID treatment [64]. Also, it has to be considered that Beagle dogs eliminate AEDs more rapidly than other dog strains [122]. Despite the short half-life in healthy Beagle dogs, this pharmacokinetic profile is reported as adequate to maintain therapeutically active concentrations with twice daily dosing in dogs [64, 122]. Imepitoin is extensively metabolized in the liver prior to elimination. In dogs, imepitoin is mainly excreted via the faecal route rather than the urinary route. Neither reduced kidney function nor impaired liver function is likely to greatly influence the pharmacokinetics of imepitoin [122].
Pharmacokinetic interactions and adverse reactions
There is no information on pharmacokinetic interactions between imepitoin and other medications. Although imepitoin is a low-affinity partial agonist at the benzodiazepine binding site of the GABAA receptor, it has not prevented the pharmacological activity of full benzodiazepine agonists such as diazepam in the clinical setting (e.g. in dogs with status epilepticus) [122]. Consequently, because the affinity of diazepam for the GABAA receptor is much higher than that of imepitoin, care should be taken in the emergency setting [122]. Therefore, dogs with idiopathic epilepsy treated with imepitoin and presented in status epilepticus might require, in addition to diazepam, an additional AED administered parenterally (e.g. PB, levetiracetam).
Mild and most commonly transient adverse reactions (Table 1) have been reported in dogs administered 10−30 mg/kg BID of imepitoin in its initial formulation; polyphagia at the beginning of the treatment, hyperactivity, polyuria, polydipsia, somnolence, hypersalivation, emesis, ataxia, lethargy, diarrhoea, prolapsed nictitating membranes, decreased vision and sensitivity to sound [64, 98].
As part of the development of imepitoin for the treatment of canine epilepsy, a target animal safety study in dogs was conducted [96]. Under laboratory conditions, healthy Beagle dogs were exposed to high doses (up to 150 mg/kg q12h) of imepitoin for 6 months. Clinical signs of toxicity were mild and infrequent and were mostly related to the CNS (depression, transient ataxia) or the gastrointestinal system (vomiting, body weight loss, salivation). These clinical signs were not life-threatening and generally resolved within 24h if symptomatic treatment was given. These data indicate that imepitoin is a safe AED and is well tolerated up to high doses in dogs treated twice daily [96]. However, the safety of imepitoin has not been evaluated in dogs weighing less than 5 kg or in dogs with safety concerns such as renal, liver, cardiac, gastrointestinal or other disease. No idiosyncratic reactions have been demonstrated so far. The activities of routinely measured liver enzymes do not appear to be induced by imepitoin [96]. Compared with traditional benzodiazepines such as diazepam, which act as full agonists at the benzodiazepine site of the GABAA receptor, partial agonists such as imepitoin show fewer sedative adverse effects and are not associated with tolerance and dependence during long-term administration in animal models [122]. Also in epileptic dogs, tolerance did not develop and no withdrawal signs were observed after treatment discontinuation [64].
The oral dose range of imepitoin is 10−30 mg/kg BID. The recommended oral starting dose of imepitoin is 10−20 mg/kg BID. If seizure control is not satisfactory after at least 1 week of treatment at this dose and the medication is well tolerated, the dose can be increased up to a maximum of 30 mg/kg BID. The reference range of plasma or serum imepitoin concentrations is unknown and there are no therapeutic monitoring recommendations for imepitoin from the manufacturer. Pharmacokinetic studies in dogs suggest variability in plasma imepitoin concentrations among individuals and sampling times. However, no correlation between plasma imepitoin concentration and seizure frequency reduction was identified [64]; therefore, and because of its wide therapeutic index, serum imepitoin monitoring is not needed.
The authors recommend a complete blood cell count and biochemical profile before starting imepitoin treatment and periodically every 6 months during treatment. If the dog is in remission or has no seizures, a periodical control every 12 months is advised.
Potassium bromide (KBr)
Br is usually administered as the potassium salt (KBr). The sodium salt form (NaBr) contains more Br per gram of compound; therefore, the dose should be approximately 15 % less than that calculated for KBr [124] (see the worked example after this paragraph). In most EU countries, KBr is approved only for add-on treatment in dogs with epilepsy drug-resistant to first-line AED therapy. PB and KBr have a synergistic effect, and add-on treatment with KBr improves seizure control in epileptic dogs that are poorly controlled with PB alone [46, 93, 126]. A recent study showed that KBr was less efficacious and tolerable than PB as a first-line drug [10]. According to Charalambous et al. (2014) [17], there is a fair level of evidence for recommending the use of KBr as a monotherapy, but less as an adjunct AED.
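As a worked illustration of the approximately 15 % dose reduction for the sodium salt mentioned above (hypothetical values only), a dog calculated to require 40 mg/kg/day of KBr would receive roughly:
$$ 40\ \text{mg/kg/day (KBr)} \times 0.85 \approx 34\ \text{mg/kg/day (NaBr)} $$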
The bioavailability of Br after oral administration in normal dogs is approximately 46 %. The elimination half-life is long, ranging from 25−46 days in dogs; consequently, it can take several months (approximately 3 months) after treatment initiation at a maintenance dose before steady-state concentrations are reached [46, 67, 90, 125]. KBr is not bound to plasma proteins and can diffuse freely across cellular membranes. KBr is not metabolised in the liver and is therefore a good alternative in dogs with hepatic dysfunction. KBr is excreted unchanged in the urine and undergoes tubular reabsorption in competition with chloride. Therefore, dietary factors affecting chloride levels can alter serum KBr concentrations [123]. High dietary chloride concentrations increase the excretion of KBr and shorten its half-life, whereas low dietary chloride concentrations have the opposite effect. Dogs administered KBr should be maintained on a constant diet (and chloride intake) to prevent fluctuations in serum KBr concentrations, which could result in therapeutic failure or toxicity. If dietary changes are necessary, they should be made gradually (over at least 5 days), and serum concentrations of KBr should be monitored following dietary changes, especially if the dog becomes sedated or has unexpected seizures. On biochemistry profiles, serum chloride concentrations are often falsely elevated ("pseudohyperchloraemia") because the assays cannot distinguish between chloride and Br ions [123].
Pharmacokinetic interactions and adverse effects
Pharmacokinetic interactions of KBr are limited, as KBr is not metabolized or protein-bound. The main interactions are associated with alterations in the renal excretion of KBr. As already mentioned, the rate of elimination of KBr varies in proportion to chloride intake (and its half-life varies inversely). Loop diuretics such as furosemide may enhance KBr elimination by blocking KBr reabsorption through renal tubular chloride channels. KBr should be avoided in dogs with renal dysfunction to prevent toxicity secondary to reduced renal elimination [80].
Common, dose-dependent adverse effects of KBr in dogs include sedation, ataxia and pelvic limb weakness, polydipsia/polyuria, and polyphagia with weight gain [4, 25, 46, 124] (Table 1). These effects occur in the initial weeks of treatment and may be magnified by concurrent PB administration. These adverse effects subside (partly or completely), once KBr steady-state concentrations are reached [125]. Gastrointestinal irritation and clinical signs can be prevented or minimized by administering Br with food and dividing the daily dose into 2 or more doses [4].
Uncommon idiosyncratic reactions to KBr in dogs include personality changes (aggressive behaviour, irritability, hyperactivity), persistent cough, increased risk of pancreatitis and megaoesophagus [4, 46, 67, 106] (Table 1). KBr may cause skin problems (bromoderma) in humans [106], but no reports currently exist in dogs. For an in-depth review on the adverse effects of Br, the reader is referred to comprehensive book chapters [23, 32, 91].
KBr adjunct treatment flow diagram for decision making during seizure management in an otherwise healthy dog. *Criteria for (in)adequate seizure control with regard to efficacy and tolerability (see Consensus proposal: Outcome of therapeutic interventions in canine and feline epilepsy [94]). 1. Treatment efficacious: a: Achievement of complete treatment success (i.e. seizure freedom or extension of the interseizure interval to three times the longest pretreatment interseizure interval and for a minimum of three months (ideally > 1 year)), b: Achievement of partial treatment success (i.e. a reduction in seizure frequency including information on seizure incidence (usually at least 50 % or more reduction defines a drug responder), a reduction in seizure severity, or a reduction in frequency of seizure clusters and/or status epilepticus). 2. Treatment not tolerated i.e. appearance of severe adverse effects necessitating discontinuation of the AED
The recommended oral starting dose of KBr is 15 mg/kg BID when used as an add-on drug. An oral dose of 20 mg/kg BID is advised when used as a monotherapy. Because of the long elimination half-life, KBr can be administered once daily (preferably in the evening); however, twice-daily dosing as well as administration with food can help to prevent gastrointestinal mucosal irritation [123]. Twice-daily dosing is also recommended if excessive sedation is present. Therapeutic ranges have been reported as approximately 1000 mg/l to 2000 mg/l when administered in conjunction with PB and 2000 mg/l to 3000 mg/l when administered alone [126]. Br has a long half-life; consequently, reaching a steady-state serum concentration may require several months (approximately 3 months). Due to this long half-life, the timing of blood sample collection relative to oral administration is not critical [123].
Baseline complete blood cell count, biochemical profile (including cholesterol and triglycerides) should be performed before starting KBr treatment and periodically every 6 months during treatment. Serum KBr concentrations should be monitored 3 months after treatment initiation (or dose change). In the long term, in dogs with adequate seizure control, serum KBr concentrations should be monitored every 6 months. If the dog is in remission or has no seizures, a periodical control every 12 months is advised.
A loading dose may be recommended to achieve steady-state therapeutic concentrations more rapidly (e.g. in dogs with frequent or severe seizures, or when PB must be rapidly discontinued because of life-threatening adverse effects). Different protocols have been reported. Oral loading can be performed by administering KBr at a dose of 625 mg/kg given over 48h and divided into eight or more doses. A more gradual loading can be accomplished by giving 125 mg/kg/day divided into three to four daily administrations for 5 consecutive days (see the worked example below). Daily phone contact with the owners is advised. Loading can be associated with adverse effects (e.g. nausea, vomiting, diarrhoea, sedation, ataxia and pelvic limb weakness, polydipsia, polyuria and polyphagia), and the dog should be hospitalized if loading takes place over 48h [7, 85]. It is advised to stop loading if serious adverse effects occur. Note that dogs in which KBr is used as an adjunct AED to PB may be more prone to adverse effects. In these cases, a PB dose decrease of 25 % may be needed. Serum KBr levels should be monitored 1 month after loading.
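For illustration only, for a hypothetical 20 kg dog the two oral loading protocols translate into the following amounts; note that both deliver the same cumulative dose of 625 mg/kg:
$$ 20\ \text{kg} \times 625\ \text{mg/kg} = 12{,}500\ \text{mg over 48h, i.e. at least 8 doses of} \approx 1{,}560\ \text{mg each} $$
$$ 20\ \text{kg} \times 125\ \text{mg/kg/day} = 2{,}500\ \text{mg/day for 5 days, i.e.} \approx 625{-}830\ \text{mg per administration in 3–4 daily doses} $$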
Dose increases can be calculated according to the following formulas.
Formula B:
For concomitant PB and KBr treatment, the new maintenance dose can be calculated as follows:
$$ \left(2000\ \text{mg/l} - \text{actual serum KBr steady-state concentration}\right) \times 0.02 = \text{mg/kg/day added to existing dose} $$
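As a hypothetical worked example of Formula B, for a dog on concomitant PB with an actual steady-state serum KBr concentration of 1500 mg/l:
$$ (2000\ \text{mg/l} - 1500\ \text{mg/l}) \times 0.02 = 10\ \text{mg/kg/day added to the existing dose} $$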
Formula C:
In case of monotherapy KBr, the new maintenance dose can be calculated as follows:
$$ \left(2500\ \text{mg/l} - \text{actual serum KBr steady-state concentration}\right) \times 0.02 = \text{mg/kg/day added to existing dose} $$
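Analogously, as a hypothetical worked example of Formula C for KBr monotherapy, with an actual steady-state serum KBr concentration of 1800 mg/l:
$$ (2500\ \text{mg/l} - 1800\ \text{mg/l}) \times 0.02 = 14\ \text{mg/kg/day added to the existing dose} $$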
Only PB and imepitoin are approved as first-line treatment of canine epilepsy in the EU. In most EU countries, KBr is only approved as add-on treatment in dogs resistant to first-line treatments. None of the drugs discussed in the following section are approved for treatment of dogs with epilepsy, thus, according to EU drug laws, these drugs can only be used as adjunctive treatment if monotherapy or polytherapy with the approved treatments have failed. Furthermore, except for levetiracetam, none of the AEDs discussed in the following section have been evaluated in randomized controlled trials in epileptic dogs, so that the evidence for their efficacy is very limited [17].
Levetiracetam
So far, three studies have evaluated the efficacy of levetiracetam as an adjunct to other AEDs [79, 114, 127]. In all these studies, the majority of the dogs were treated successfully with oral levetiracetam as an adjunct AED. The use of oral levetiracetam was evaluated in an open-label study, and a response rate of 57 % was reported in dogs with drug-resistant epilepsy [127]. In a recent randomized placebo-controlled study by Muñana et al. (2012) [79], the use of levetiracetam was evaluated in dogs with drug-resistant epilepsy. A significant decrease in seizure frequency was reported compared with baseline; however, no difference in seizure frequency was detected when levetiracetam was compared with placebo. However, the divergence in group size and the small sample size (due to the high dropout rate) may have contributed to this result. Nevertheless, a trend towards a decrease in seizure frequency and an increase in responder rate during levetiracetam administration compared to placebo warrants further evaluation in a larger-scale study. According to the study of Charalambous et al. (2014) [17], there is fair evidence for recommending the use of levetiracetam as an adjunct AED. Recently, a retrospective study provided further evidence that levetiracetam administered as an adjunct AED is well tolerated and significantly suppresses epileptic seizures in dogs with idiopathic epilepsy [83]. The authors also confirmed that if seizure frequency increases, an extra AED may be beneficial, and they added the possibility of administering levetiracetam as pulse treatment for cluster seizures.
Levetiracetam possesses a favourable pharmacokinetic profile in dogs with respect to its use as an add-on AED. It has rapid and complete absorption after oral administration, minimal protein binding, minimal hepatic metabolism and is excreted mainly unchanged via the kidneys. In humans and dogs, renal clearance of levetiracetam is progressively reduced in patients with increasing severity of renal dysfunction [85]; thus, dosage reduction should be considered in patients with impaired renal function. As levetiracetam has minimal hepatic metabolism [85], this drug represents a useful therapeutic option in animals with known or suspected hepatic dysfunction. However, its short elimination half-life of 3−6 h necessitates frequent administration. The recommended oral maintenance dose of levetiracetam in dogs is 20 mg/kg TID-QID. The same dose can be administered parenterally in dogs (SC, IM, IV) when oral administration is not possible [86]. In a previous study [127] it was shown that some dogs develop a tolerance to levetiracetam when it is used chronically. This phenomenon, the 'honeymoon effect', has been documented for other AEDs, e.g. zonisamide and levetiracetam, in dogs with epilepsy [127, 129]. Therefore, a pulse treatment protocol (an initial dose of 60 mg/kg orally or parenterally after a seizure occurs or pre-ictal signs are recognized by the owner, followed by 20 mg/kg TID until seizures do not occur for 48h) was developed, in order to start treatment only in the case of cluster seizures, when therapeutic levetiracetam concentrations need to be reached rapidly; a worked dose example is given below. The results of the recent study by Packer et al. (2015) [83] support this clinical approach. Pulse treatment was, however, associated with more side effects compared to maintenance levetiracetam therapy [83]. Levetiracetam is well tolerated and generally safe in dogs. Except for mild sedation, ataxia, decreased appetite and vomiting, adverse effects are very rarely described in dogs [79, 127] (Table 2). Levetiracetam also has a different mode of action compared to other AEDs and therefore may be advantageous when polytherapy is instituted. It selectively binds to a presynaptic protein (SV2A), whereby it seems to modulate the release of neurotransmitters [86]. As there is no information available regarding a therapeutic range in dogs [79], the human target range of 12−46 mg/l can be used as guidance regarding effective concentrations.
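To illustrate the pulse treatment protocol with purely hypothetical values, a 20 kg dog would receive an initial dose of 1200 mg followed by 400 mg TID until 48h have passed without seizures:
$$ 20\ \text{kg} \times 60\ \text{mg/kg} = 1200\ \text{mg (initial dose)};\qquad 20\ \text{kg} \times 20\ \text{mg/kg} = 400\ \text{mg TID thereafter} $$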
Table 2 Most common reported adverse effects seen in dogs treated with levetiracetam, zonisamide, felbamate, topiramate, gabapentin, and pregabalin (rarely reported and/or idiosyncratic adverse effects are indicated in grey)
Studies in humans have shown that concomitant administration of AEDs that induce cytochrome P450 metabolism, such as PB, can alter the disposition of levetiracetam [19]. Recently, it has been demonstrated that PB administration significantly alters the pharmacokinetics of levetiracetam in normal dogs [73]. Thus, the oral dose of levetiracetam may need to be increased, or the dosing interval shortened, when it is administered concurrently with PB [73]. Also in dogs with epilepsy, concurrent administration of PB alone or in combination with KBr increases levetiracetam clearance compared to concurrent administration of KBr alone [78]. Thus, dosage increases might be indicated when utilizing levetiracetam as add-on treatment with PB in dogs [78], preferably guided by measurement of serum levetiracetam concentrations.
Zonisamide
There are few reports on the use of zonisamide in dogs, despite it being licensed for the treatment of canine epilepsy in Japan. One report evaluated the efficacy of oral zonisamide as a monotherapy [18]. Two studies have evaluated zonisamide as an add-on treatment in dogs with drug-resistant epilepsy [28, 129]. Based on the results of these studies, Charalambous et al. (2014) [17] concluded that, at present, there is insufficient evidence to recommend the use of zonisamide either as a monotherapy or as an adjunct AED in dogs. Larger studies are required to evaluate zonisamide as a monotherapy or as an adjunctive AED in dogs. Adverse effects in dogs include sedation, vomiting, ataxia, and loss of appetite [18, 28, 129] (Table 2). Additionally, hepatotoxicity has recently been described in two dogs receiving zonisamide monotherapy, which is believed to be an idiosyncratic reaction to the drug [69, 104] (Table 2). Renal tubular acidosis has also been described in a dog receiving zonisamide monotherapy [20] (Table 2). Thus, zonisamide should be used with caution in dogs with renal or hepatic impairment. Both hepatic and renal failure have been described in humans receiving zonisamide as well. Currently, zonisamide is not available in every country and, when available, it can be very expensive.
Zonisamide is a sulphonamide-based anticonvulsant approved for use in humans. The exact mechanism of action is unknown; however, blockade of calcium channels, enhancement of GABA release, inhibition of glutamate release, and inhibition of voltage-gated sodium channels might contribute to its anticonvulsant properties [61]. In dogs, zonisamide is well absorbed after oral administration, has a relatively long elimination half-life (approximately 15h), and has low protein binding, so that drug interactions are minimized. The drug mainly undergoes hepatic metabolism via the cytochrome P450 system before excretion by the kidneys [11].
The recommended oral starting dose of zonisamide in dogs is 3−7 mg/kg BID, and 7−10 mg/kg BID in dogs co-administered hepatic microsomal enzyme inducers such as PB [11, 28]. Serum concentrations of zonisamide should be measured at a minimum of 1 week after treatment initiation or dosage adjustment to allow steady-state concentrations to be reached. Care should be taken to avoid haemolysis, as falsely elevated serum zonisamide concentrations from lysed red blood cells may occur. The human target range of 10−40 mg/l can be used as guidance regarding effective concentrations [28]. Baseline complete blood cell count and biochemical profile should be performed before starting zonisamide treatment and periodically every 6 months during treatment.
Felbamate
One veterinary study evaluated the efficacy of felbamate as an adjunct to PB in 6 dogs with focal idiopathic epilepsy [100]. According to Charalambous et al. (2014) [17], the study demonstrated an overall moderate/high risk of bias. On this basis, it was concluded that there is currently insufficient evidence to recommend the use of felbamate as an add-on AED. Felbamate should be reserved for dogs refractory to the other, more thoroughly investigated and safer AEDs in this species, and as such it is a 4th- or 5th-line option. In the clinical study by Ruehlmann et al. (2001) [100], adverse effects noted included keratoconjunctivitis sicca and mild blood dyscrasias (Table 2).
Felbamate is a dicarbamate AED released for use in humans in 1993 for the control of focal seizures. It has multiple mechanisms of action, including inhibition of glycine-enhanced, NMDA-induced intracellular calcium currents [134], blockade of voltage-gated sodium channels and inhibition of voltage-gated calcium currents [133].
In 1993, felbamate was marketed as a safe AED which lacked demonstrable toxic side effects and did not require laboratory monitoring in humans. However, within a year of its release it became evident that felbamate was associated with an unacceptable incidence of life-threatening side effects [12], such as anorexia, weight loss, vomiting, headache and irritability. Moreover, aplastic anaemia and fatal hepatotoxicity were also described [55, 134].
Pharmacokinetic interactions between felbamate and other AEDs have been well described. For example, felbamate raises concurrent PB serum levels in a dose-dependent manner [12], and the elimination of felbamate is strikingly reduced when it is given with gabapentin [50]. Felbamate is mainly metabolized by the liver [88] and should therefore not be used in dogs with pre-existing hepatic disease. Felbamate has an elimination half-life of 5−7h.
The recommended oral starting dose in dogs is 20 mg/kg TID, increasing to 400−600 mg/day every 1−2 weeks [1]. Haematologic evaluations and biochemistry panels (especially liver enzyme concentrations) should be performed before felbamate therapy is initiated and during therapy. This is especially important in animals receiving concurrent PB. In humans, the signs of aplastic anaemia and liver failure are usually seen during the first 6−12 months of therapy. In dogs, blood tests should be performed at least monthly during this period, and every 6−12 months thereafter. Currently, felbamate is not available in every country.
In 2013, one study evaluated the efficacy of topiramate as an adjunct to PB, KBr, and levetiracetam in 10 dogs [57]. The dose was titrated (2−10 mg/kg) two to three times daily. Sedation, ataxia and weight loss were the most common adverse effects in dogs (Table 2). According to Charalambous et al. (2014) [17], the study demonstrated an overall moderate/high risk of bias. Thus, there is currently insufficient evidence to recommend the use of topiramate as an adjunct AED [17].
In humans, topiramate has served both as a monotherapy and adjunctive therapy to treat focal and generalised seizures [29, 71]. It is a sulphamate-substituted monosaccharide that acts on multiple signalling mechanisms enhancing GABA-ergic activity and inhibiting voltage-sensitive sodium and calcium channels, kainate-evoked currents and carbonic anhydrase isoenzymes [118, 139].
From the available human data, topiramate is not metabolized extensively once absorbed, with 70−80 % of an administered dose eliminated unchanged in the urine [65]. Topiramate has an elimination half-life of 2−4h. Clearance of topiramate is reduced in patients with renal impairment, necessitating dosage adjustments [37]. In dogs, topiramate is not extensively metabolized and is primarily eliminated unchanged in the urine. However, biliary excretion is present following topiramate administration in dogs [15]. The drug has a relatively low potential for clinically relevant interactions with other medications [8, 53]. The most commonly observed adverse effects in humans are somnolence, dizziness, ataxia, vertigo and speech disorders [110]. No adverse reactions were reported in healthy Beagle dogs administered 10−150 mg/kg daily oral doses for 15 days [116].
Two prospective studies evaluated the efficacy of oral gabapentin as an adjunct to other AEDs, giving a combined sample size of 28 dogs [44, 89]. According to Charalambous et al. (2014) [17], one study demonstrated an overall moderate/high risk of bias and the other an overall high risk of bias. Neither study demonstrated an increased likelihood that the majority of the dogs were treated successfully by oral administration of gabapentin. Accordingly, there is currently insufficient evidence to recommend the use of gabapentin as an adjunct AED [17]. If used, the recommended oral dosage of gabapentin in dogs is 10 to 20 mg/kg TID, although dose reduction may be necessary in patients with reduced renal function [9]. Sedation and ataxia were the most common side effects reported in dogs [44, 89] (Table 2).
Gabapentin has been approved for humans in Europe and by the US Food and Drug Administration (FDA) since 1993 for the adjunctive treatment of focal seizures with or without secondary generalisation and for the treatment of post-herpetic neuralgia [9]. Its precise mechanism of action is unclear, but it is believed that much of its anticonvulsant effect results from its binding to a specific modulatory protein of voltage-gated calcium channels, which decreases the release of excitatory neurotransmitters [112]. In humans, gabapentin is entirely excreted by the kidneys. In dogs, renal excretion occurs after partial hepatic metabolism. The elimination half-life is 3−4 h.
Although information in veterinary medicine is limited, pharmacokinetic interactions of gabapentin are unlikely to occur as the drug has negligible protein binding and does not induce hepatic cytochrome P450 family enzymes [95]. In humans, the elimination of felbamate was noted to be significantly reduced when given with gabapentin [50]. The most common adverse effects in humans include dizziness, somnolence and fatigue [9]. These effects seem to be dose-dependent and resolve within the first few weeks of treatment. No serious idiosyncratic reactions or organ toxicities have been identified in humans or animals [60].
There is limited data on the use of pregabalin in dogs. In a study by Dewey et al. (2009), the efficacy of oral pregabalin as an adjunct to PB and KBr was evaluated in 9 dogs [27]. According to Charalambous et al. (2014) [17], this study demonstrated an overall moderate/high risk of bias. Consequently, there is currently insufficient evidence to recommend the use of pregabalin as an adjunct AED [17]. If used, the recommended oral dose in dogs is 3−4 mg/kg BID-TID. The most common adverse effects (Table 2) in the study by Dewey et al. (2009) included sedation, ataxia and weakness; to minimise these, treatment could be initiated at a dose of 2 mg/kg two to three times daily and escalated by 1 mg/kg each week until the final dose is achieved [27]. As pregabalin clearance is highly correlated with renal function, dose reduction is necessary in patients with reduced renal function [5, 9].
Pregabalin is a GABA analogue that is structurally similar to gabapentin. Pregabalin was approved in 2004 for the treatment of adults with peripheral neuropathic pain and as adjunctive treatment for adults with focal seizures with or without secondary generalization. Pregabalin is more potent than gabapentin owing to a greater affinity for its receptor [112]. Pharmacokinetic studies have been performed in dogs, with a reported elimination half-life of approximately 7 h [103]. In humans, pregabalin does not bind to plasma proteins and is excreted virtually unchanged by the kidneys [9]. Pregabalin does not undergo hepatic metabolism and does not induce or inhibit hepatic enzymes such as the cytochrome P450 system [5]. No clinically relevant pharmacokinetic drug interactions have been identified in humans to date. The most commonly reported adverse effects in humans are dose-related and include dizziness, somnolence and ataxia [9].
Discontinuation of AEDs
Two main reasons for discontinuation of an AED are remission of seizures or life-threatening adverse effects. Generally, treatment for idiopathic epilepsy involves lifelong AED administration. However, remission has been reported in dogs. Remission rates between 15−30 % have been described in hospital-based populations [6, 7, 47, 49]. In a study by Packer et al. (2014), 14 % of dogs were in remission on PB [84]. When a ≥50 % reduction in seizure frequency was used as the outcome measure, success rates were markedly higher, with 64.5 % of dogs achieving this level of seizure reduction. Several factors were associated with an increased likelihood of achieving remission, namely being female, being neutered, having no previous experience of cluster seizures, and an older age at onset of seizures. The same four factors were associated with an increased likelihood of achieving a ≥50 % reduction in seizure frequency [84]. The breeds least likely to go into remission or to have a ≥50 % reduction in seizure frequency were the Border Collie (0 and 40 %, respectively), the German Shepherd (11 and 35 %, respectively) and the Staffordshire Bull Terrier (0 and 57 %, respectively) [84]. In a study by Hülsmeyer et al. (2010), the remission rate was 18 % in Border Collies independent of disease severity [49]. The decision to gradually taper the dose of an AED should be made on a case-by-case basis, but seizure freedom of at least 1−2 years is advised. In people with prolonged seizure remission (generally 2 or more years), the decision to discontinue AED treatment is made on an individual basis considering relative risks and benefits. Individuals with the highest probability of remaining seizure-free are those who had no structural brain lesion, a short duration of epilepsy, few seizures before pharmacological control, and AED monotherapy [81, 109]. In dogs, however, little information on risk factors associated with seizure relapse exists; thus, the pet owner must be aware that seizures may recur at any time during AED dose reduction or after discontinuation. To prevent withdrawal seizures or status epilepticus, it is advised to decrease the dose by 20 % or less on a monthly basis.
In case of life-threatening adverse effects, immediate cessation of AED administration under 24-h observation is necessary. In these cases, loading with an alternative AED should be initiated promptly in order to achieve target serum concentrations before the serum PB concentration decreases. Loading with KBr (see section on KBr) or levetiracetam (see section on levetiracetam) is possible. If hepatic function is normal, starting imepitoin or zonisamide at the recommended oral starting dose may be another alternative.
Pet owner education
In order to promote successful management of an epileptic pet, owners need to be thoroughly educated on [23, 32, 91]:
The disease of their pet and the influence on their daily life (considerations regarding e.g. leaving the dog alone, what to do if travelling and leaving the dog in a kennel, fears of behavioural comorbidities, …)
The need for AED therapy and the understanding that this often is a lifetime commitment
The aim of AED therapy
The importance of regular administration of AEDs
The fact that dose adjustments should only be made after consulting a veterinarian
Potential adverse effects of AED therapy
The importance of maintaining a detailed seizure diary
The importance of regular check-ups to monitor AED blood concentrations as well as haematology/serum biochemistry where appropriate
The need for treatment modulation to achieve optimal seizure control
The possibility of occurrence of status epilepticus and cluster seizures and the administration of additional AEDs at home
The fact that drug interactions might occur when combined with other AEDs or non-AEDs
The understanding that abrupt drug withdrawal might be detrimental
The fact that diet (e.g. salt content), diarrhoea and vomiting may affect the absorption of AEDs. Owners should be advised to keep the diet constant or to make changes gradually, and to seek veterinary advice if gastrointestinal signs occur.
AED: Antiepileptic drug
KBr: Potassium bromide
IM: Intramuscular
IV: Intravenous
PO: Orally
SC: Subcutaneously
SID: Once daily
TID: Three times daily
QID: Four times daily
Adusumalli VE, Gilchrist JR, Wichmann JK, Kucharczyk N, Sofia RD. Pharmacokinetics of felbamate in pediatric and adult beagle dogs. Epilepsia. 1992;33:955–60.
Al-Tahan F, Frey HH. Absorption kinetics and bioavailability of phenobarbital after oral administration to dogs. J Vet Pharmacol Ther. 1985;8:205–7.
Alves L, Hülsmeyer V, Jaggy A, Fischer A, Leeb T, Drögemüller M. Polymorphisms in the ABCB1 gene in phenobarbital responsive and resistant idiopathic epileptic Border Collies. J Vet Intern Med. 2011;25:484–9.
Baird-Heinz HE, Van Schoick AL, Pelsor FR, Ranivand L, Hungerford LL. A systematic review of the safety of potassium bromide in dogs. J Am Vet Med Assoc. 2012;240:705–15.
Ben-Menachem E. Pregabalin pharmacology and its relevance to clinical practice. Epilepsia. 2004;45.
Berendt M, Gredal H, Ersbøll AK, Alving J. Premature death, risk factors, and life patterns in dogs with epilepsy. J Vet Intern Med. 2007;21:754–9.
Berendt M, Gredal H, Pedersen LG, Alban L, Alving J. A cross-sectional study of epilepsy in Danish Labrador Retrievers: prevalence and selected risk factors. J Vet Intern Med. 2002;16:262–8.
Bialer M, Doose DR, Murthy B, Curtin C, Wang SS, Twyman RE, et al. Pharmacokinetic interactions of topiramate. Clin Pharmacokinet. 2004;43:763–80. Review.
Bockbrader HN, Wesche D, Miller R, Chapel S, Janiczek N, Burger P. A comparison of the pharmacokinetics and pharmacodynamics of pregabalin and gabapentin. Clin Pharmacokinet. 2010;49:661–9.
Boothe DM, Dewey C, Carpenter DM. Comparison of phenobarbital with bromide as a first-choice antiepileptic drug for treatment of epilepsy in dogs. J Am Vet Med Assoc. 2012;240:1073–83.
Boothe DM, Perkins J. Disposition and safety of zonisamide after intravenous and oral single dose and oral multiple dosing in normal hound dogs. J Vet Pharmacol Ther. 2008;31:544–53.
Bourgeois BF. Felbamate. Semin Pediatr Neurol. 1997;4:3–8.
Bunch SE, Castleman WL, Hornbuckle WE, Tennant BC. Hepatic cirrhosis associated with long-term anticonvulsant drug therapy in dogs. J Am Vet Med Assoc. 1982;181:357–62.
Cahan LD, Engel Jr J. Surgery for epilepsy: a review. Acta Neurol Scand. 1986;73:551–60.
Caldwell GW, Wu WN, Masucci JA, McKown LA, Gauthier D, Jones WJ, et al. Metabolism and excretion of the antiepileptic/antimigraine drug, Topiramate in animals and humans. Eur J Drug Metab Pharmacokinet. 2005;30:151–64.
Chadwick DW. The treatment of the first seizure: the benefits. Epilepsia. 2008;49:26–8.
Charalambous M, Brodbelt D, Volk HA. Treatment in canine epilepsy--a systematic review. BMC Vet Res. 2014;10:257.
Chung JY, Hwang CY, Chae JS, Ahn JO, Kim TH, Seo KW, et al. Zonisamide monotherapy for idiopathic epilepsy in dogs. N Z Vet J. 2012;60:357–9.
Contin M, Albani F, Riva R, Baruzzi A. Levetiracetam therapeutic monitoring in patients with epilepsy: effect of concomitant antiepileptic drugs. Ther Drug Monit. 2004;26:375–9.
Cook AK, Allen AK, Espinosa D, Barr J. Renal tubular acidosis associated with zonisamide therapy in a dog. J Vet Intern Med. 2011;25:1454–7.
Daminet S, Ferguson DC. Influence of drugs on thyroid function in dogs. J Vet Intern Med. 2003;17:463–72.
Dayrell-Hart B, Steinberg SA, VanWinkle TJ, Farnbach GC. Hepatotoxicity of phenobarbital in dogs: 18 cases (1985-1989). J Am Vet Med Assoc. 1991;199:1060–6.
De Risio L. Chapter 12-20. In: De Risio L, Platt S, editors. Canine and feline epilepsy. Diagnosis and Management. 2014. p. 347–475.
De Risio L, Freeman J, Shea A. Proceedings of the 27th Symposium of the European College of Veterinary Neurology, Madrid, 18-20 September 2014, and Journal of Veterinary Internal Medicine 2015; Prevalence and clinical characteristics of idiopathic epilepsy in the Italian Spinone in the UK.
Dewey CW. Anticonvulsant therapy in dogs and cats. Vet Clin North Am Small Anim Pract. 2006;36:1107–27.
Dewey CW, Barone G, Smith K, Kortz GD. Alternative anticonvulsant drugs for dogs with seizure disorders. Vet Med. 2004;99:786–93.
Dewey CW, Cerda-Gonzalez S, Levine JM, Badgley BL, Ducoté JM, Silver GM, et al. Pregabalin as an adjunct to phenobarbital, potassium bromide, or a combination of phenobarbital and potassium bromide for treatment of dogs with suspected idiopathic epilepsy. J Am Vet Med Assoc. 2009;235:1442–9.
Dewey CW, Guiliano R, Boothe DM, Berg JM, Kortz GD, Joseph RJ, et al. Zonisamide therapy for refractory idiopathic epilepsy in dogs. J Am Anim Hosp Assoc. 2004;40:285–91.
Elterman RD, Glauser TA, Wyllie E, Reife R, Wu SC, Pledger G. A double-blind, randomized trial of topiramate as adjunctive therapy for partial-onset seizures in children. Topiramate YP Study Group. Neurology. 1999;52:1338–44.
European Medicines agency http://www.ema.europa.eu/ema/index.jsp?curl=pages/medicines/veterinary/medicines/002543/vet_med_000268.jsp&mid=WC0b01ac058008d7a8 http://www.ema.europa.eu/docs/en_GB/document_library/EPAR_-_Product_Information/veterinary/002543/WC500140840.pdf
Farnbach GC. Serum concentrations and efficacy of phenytoin, phenobarbital, and primidone in canine epilepsy. J Am Vet Med Assoc. 1984;184:1117–20.
Fischer A, Jurina K, Potschka H, Rentmeister K, Tipold A, Volk HA, et al. Hoofdstuk 4: Therapie. In: Enke Verlag, Stuttgart, editor. Idiopathische epilepsie bij de hond. 2013. p. 69–115.
Forrester SD, Wilcke JR, Jacobson JD, Dyer KR. Effects of a 44-day administration of phenobarbital on disposition of clorazepate in dogs. Am J Vet Res. 1993;54:1136–8.
Freitag H, Tuxhorn I. Cognitive function in preschool children after epilepsy surgery: rationale for early intervention. Epilepsia. 2005;46:561–7.
Frey HH. Use of anticonvulsants in small animals. Vet Rec. 1986;118:484–6.
Frey HH, Göbel W, Löscher W. Pharmacokinetics of primidone and its active metabolites in the dog. Arch Int Pharmacodyn Ther. 1979;242:14–30.
Garnett WR. Clinical pharmacology of topiramate: a review. Epilepsia. 2000;41:61–5.
Gaskill CL, Cribb AE. Pancreatitis associated with potassium bromide/phenobarbital combination therapy in epileptic dogs. Can Vet J. 2000;41:555–8.
Gaskill CL, Miller LM, Mattoon JS, Hoffmann WE, Burton SA, Gelens HC, et al. Liver histopathology and liver and serum alanine aminotransferase and alkaline phosphatase activities in epileptic dogs receiving phenobarbital. Vet Pathol. 2005;42:147–60.
Gieger TL, Hosgood G, Taboada J, Wolfsheimer KJ, Mueller PB. Thyroid function and serum hepatic enzyme activity in dogs after phenobarbital administration. J Vet Intern Med. 2000;14:277–81.
Glauser T, Ben-Menachem E, Bourgeois B, Cnaan A, Chadwick D, Guerreiro C, et al. ILAE treatment guidelines: evidence-based analysis of antiepileptic drug efficacy and effectiveness as initial monotherapy for epileptic seizures and syndromes. Epilepsia. 2006;47:1094–120.
Glauser TA, Loddenkemper T. Management of childhood epilepsy. Epilepsy. 2013;19:568.
Govendir M, Perkins M, Malik R. Improving seizure control in dogs with refractory epilepsy using gabapentin as an adjunctive agent. Aust Vet J. 2005;83:602–8.
Gulløv CH, Toft N, Berendt M. A Longitudinal Study of Survival in Belgian Shepherds with Genetic Epilepsy. J Vet Int Med. 2012;26:1115–20.
Hess RS, Kass PH, Shofer FS, Van Winkle TJ, Washabau RJ. Evaluation of risk factors for fatal acute pancreatitis in dogs. J Am Vet Med Assoc. 1999;214:46–51.
Heynold Y, Faissler D, Steffen F, Jaggy A. Clinical, epidemiological and treatment results of idiopathic epilepsy in 54 labrador retrievers: a long-term study. J Small Anim Pract. 1997;38:7–14.
Hojo T, Ohno R, Shimoda M, Kokue E. Enzyme and plasma protein induction by multiple oral administrations of phenobarbital at a therapeutic dosage regimen in dogs. J Vet Pharmacol Ther. 2002;25:121–7.
Hülsmeyer V, Zimmermann R, Brauer C, Sauter-Louis C, Fischer A. Epilepsy in Border Collies: clinical manifestation, outcome, and mode of inheritance. J Vet Intern Med. 2010;24:171–8.
Hussein G, Troupin AS, Montouris G. Gabapentin interaction with felbamate. Neurology. 1996;47:1106.
Jacobs G, Calvert C, Kaufman A. Neutropenia and thrombocytopenia in three dogs treated with anticonvulsants. J Am Vet Med Assoc. 1998;212:681–4.
Janszky J, Janszky I, Schulz R, Hoppe M, Behne F, Pannek HW, et al. Temporal lobe epilepsy with hippocampal sclerosis: predictors for long-term surgical outcome. Brain. 2005;128:395–404.
Johannessen SI. Pharmacokinetics and interaction profile of topiramate: review and comparison with other newer antiepileptic drugs. Epilepsia. 1997;38:18–23.
Jull P, Risio LD, Horton C, Volk HA. Effect of prolonged status epilepticus as a result of intoxication on epileptogenesis in a UK canine population. Vet Rec. 2011;169:361.
Kaufman DW, Kelly JP, Anderson T, Harmon DC, Shapiro S. Evaluation of case reports of aplastic anemia among patients treated with felbamate. Epilepsia. 1997;38:1265–9.
Khoutorsky A, Bruchim Y. Transient leucopenia, thrombocytopenia and anaemia associated with severe acute phenobarbital intoxication in a dog. J Small Anim Pract. 2008;49:367–9.
Kiviranta AM, Laitinen-Vapaavuori O, Hielm-Björkman A, Jokinen T. Topiramate as an add-on antiepileptic drug in treating refractory canine idiopathic epilepsy. J Small Anim Pract. 2013;54:512–20.
Kube SA, Vernau KM, LeCouteur RA. Dyskinesia associated with oral phenobarbital administration in a dog. J Vet Intern Med. 2006;20:1238–40.
Kwan P, Brodie MJ. Early identification of refractory epilepsy. N Engl J Med. 2000;342:314–9.
La Roche SM, Helmers SL. The new antiepileptic drugs: scientific review. JAMA. 2004;291:605–14.
Leppik IE. Zonisamide: chemistry, mechanism of action, and pharmacokinetics. Seizure. 2004;13:5–9.
Levitski RE, Trepanier LA. Effect of timing of blood collection on serum phenobarbital concentrations in dogs with epilepsy. J Am Vet Med Assoc. 2000;217:200–4.
Löscher W, Potschka H, Rieck S, Tipold A, Rundfeldt C. Anticonvulsant efficacy of the low-affinity partial benzodiazepine receptor agonist ELB 138 in a dog seizure model and in epileptic dogs with spontaneously recurrent seizures. Epilepsia. 2004;45:1228–39.
Lyseng-Williamson KA, Yang LP. Topiramate: a review of its use in the treatment of epilepsy. Drugs. 2007;67:2231–56.
March PA, Hillier A, Weisbrode SE, Mattoon JS, Johnson SE, DiBartola SP, et al. Superficial necrolytic dermatitis in 11 dogs with a history of phenobarbital administration (1995-2002). J Vet Intern Med. 2004;18:65–74.
March PA, Podell M, Sams RA. Pharmacokinetics and toxicity of bromide following high-dose oral potassium bromide administration in healthy Beagles. J Vet Pharmacol Ther. 2002;25:425–32.
Martinez SE, Bowen KA, Remsberg CM, Takemoto JK, Wright HM, Chen-Allen AV, et al. High-performance liquid chromatographic analysis of lacosamide in canine serum using ultraviolet detection: application to pre-clinical pharmacokinetics in dogs. Biomed Chromatogr. 2012;26:606–9.
Miller ML, Center SA, Randolph JF, Lepherd ML, Cautela MA, Dewey CW. Apparent acute idiosyncratic hepatic necrosis associated with zonisamide administration in a dog. J Vet Intern Med. 2011;25:1156–60.
Monteiro R, Anderson TJ, Innocent G, Evans NP, Penderis J. Variations in serum concentration of phenobarbitone in dogs receiving regular twice daily doses in relation to the times of administration. Vet Rec. 2009;165:556–8.
Montouris GD, Biton V, Rosenfeld WE. Nonfocal generalized tonic-clonic seizures: response during long-term topiramate treatment. Topiramate YTC/YTCE Study Group. Epilepsia. 2000;41:77–81.
Moore SA, Muñana KR, Papich MG, Nettifee-Osborne JA. The pharmacokinetics of levetiracetam in healthy dogs concurrently receiving phenobarbital. J Vet Pharmacol Ther. 2011;34:31–4.
Morton DJ, Honhold N. Effectiveness of a therapeutic drug monitoring service as an aid to the control of canine seizures. Vet Rec. 1988;9(122):346–9.
Müller PB, Taboada J, Hosgood G, Partington BP, VanSteenhouse JL, Taylor HW, et al. Effects of long-term phenobarbital treatment on the liver in dogs. J Vet Intern Med. 2000;14:165–71.
Muñana KR. Update: seizure management in small animal practice. Vet Clin North Am Small Anim Pract. 2013;43:1127–47.
Muñana KR, Nettifee-Osborne JA, Bergman Jr RL, Mealey KL. Association between ABCB1 genotype and seizure outcome in Collies with epilepsy. J Vet Intern Med. 2012;26:1358–64.
Muñana KR, Nettifee-Osborne JA, Papich MG. Effect of chronic administration of phenobarbital, or bromide, on pharmacokinetics of levetiracetam in dogs with epilepsy. J Vet Intern Med. 2015. In press.
Muñana KR, Thomas WB, Inzana KD, Nettifee-Osborne JA, McLucas KJ, Olby NJ, et al. Evaluation of levetiracetam as adjunctive treatment for refractory canine epilepsy: a randomized, placebo-controlled, crossover trial. J Vet Intern Med. 2012;26:341–8.
Nichols ES, Trepanier LA, Linn K. Bromide toxicosis secondary to renal insufficiency in an epileptic dog. J Am Vet Med Assoc. 1996;208:231–3.
O'Dell C, Shinnar S. Initiation and discontinuation of antiepileptic drugs. Neurol Clin. 2001;19:289–311.
Orito K, Saito M, Fukunaga K, Matsuo E, Takikawa S, Muto M, et al. Pharmacokinetics of zonisamide and drug interaction with phenobarbital in dogs. J Vet Pharmacol Ther. 2008;31:259–64.
Packer RMA, Nye G, Porter SE, Volk HA. Assessment into the usage of levetiracetam in a canine epilepsy clinic. BMC Vet Res. 2015. In press.
Packer RM, Shihab NK, Torres BB, Volk HA. Clinical risk factors associated with anti-epileptic drug responsiveness in canine epilepsy. PLoS One. 2014;25:9.
Patsalos PN. Clinical pharmacokinetics of levetiracetam. Clin Pharmacokinet. 2004;43:707–24.
Patterson EE, Goel V, Cloyd JC, O'Brien TD, Fisher JE, Dunn AW, et al. Intramuscular, intravenous and oral levetiracetam in dogs: safety and pharmacokinetics. J Vet Pharmacol Ther. 2008;31:253–8.
Pedersoli WM, Wike JS, Ravis WR. Pharmacokinetics of single doses of phenobarbital given intravenously and orally to dogs. Am J Vet Res. 1987;48:679–83.
Pellock JM, Faught E, Leppik IE, Shinnar S, Zupanc ML. Felbamate: consensus of current clinical experience. Epilepsy Res. 2006;71:89–101.
Platt SR, Adams V, Garosi LS, Abramson CJ, Penderis J, De Stefani A, et al. Treatment with gabapentin of 11 dogs with refractory idiopathic epilepsy. Vet Rec. 2006;59:881–4.
Podell M. Antiepileptic drug therapy. Clin Tech Small Anim Pract. 1998;13:185–92.
Podell M. Chapter 7. In: Platt S, Olby N, editors. BSAVA Manual of Canine and Feline Neurology. 3rd ed. 2010. p. 97–112.
Podell M. Antiepileptic drug therapy and monitoring. Top Companion Anim Med. 2013;28:59–66.
Podell M, Fenner WR. Bromide therapy in refractory canine idiopathic epilepsy. J Vet Intern Med. 1993;7:318–27.
Potschka H, Fischer A, Löscher W, Patterson EE, Bhatti SFM, Berendt M, De Risio L, Farquhar RG, Long S, Mandigers PJJ, Matiasek K, Muñana K, Pakozdy A, Penderis J, Platt S, Podell M, Pumarola MB, Rusbridge C, Stein VM, Tipold A, Volk HA. International Veterinary Epilepsy Task Force Consensus Proposal: Outcome of therapeutic interventions in canine and feline epilepsy. BMC Vet Res; 2015.
Radulovic LL, Türck D, von Hodenberg A, Vollmer KO, McNally WP, DeHart PD, et al. Disposition of gabapentin (neurontin) in mice, rats, dogs, and monkeys. Drug Metab Dispos. 1995;23:441–8.
Ravis WR, Nachreiner RF, Pedersoli WM, Houghton NS. Pharmacokinetics of phenobarbital in dogs after multiple oral administration. Am J Vet Res. 1984;45:1283–6.
Ravis WR, Pedersoli WM, Wike JS. Pharmacokinetics of phenobarbital in dogs given multiple doses. Am J Vet Res. 1989;50:1343–7.
Rieck S, Rundfeldt C, Tipold A. Anticonvulsant activity and tolerance of ELB138 in dogs with epilepsy: a clinical pilot study. Vet J. 2006;172:86–95.
Rogawski MA, Johnson MR. Intrinsic severity as a determinant of antiepileptic drug refractoriness. Epilepsy Curr. 2008;8:127–30.
Ruehlmann D, Podell M, March P. Treatment of partial seizures and seizure-like activity with felbamate in six dogs. J Small Anim Pract. 2001;42:403–8.
Rundfeldt C, Gasparic A, Wlaź P. Imepitoin as novel treatment option for canine idiopathic epilepsy: pharmacokinetics, distribution, and metabolism in dogs. J Vet Pharmacol Ther. 2014;37:421–34.
Rundfeldt C, Löscher W. The pharmacology of imepitoin: the first partial benzodiazepine receptor agonist developed for the treatment of epilepsy. CNS Drugs. 2014;28:29–43.
Salazar V, Dewey CW, Schwark W, Badgley BL, Gleed RD, Horne W, et al. Pharmacokinetics of single-dose oral pregabalin administration in normal dogs. Vet Anaesth Analg. 2009;36:574–80.
Schwartz M, Muñana KR, Olby NJ. Possible drug-induced hepatopathy in a dog receiving zonisamide monotherapy for treatment of cryptogenic epilepsy. J Vet Med Sci. 2011;73:1505–8.
Schwartz-Porsche D, Löscher W, Frey HH. Therapeutic efficacy of phenobarbital and primidone in canine epilepsy: a comparison. J Vet Pharmacol Ther. 1985;8:113–9.
Scola N, Kaczmarczyk J, Möllenhoff K. Infantile bromoderma due to antiepileptic therapy. J Dtsch Dermatol Ges. 2012;10:131–2.
Shaik IH, Mehvar R. Cytochrome P450 induction by phenobarbital exacerbates warm hepatic ischemia-reperfusion injury in rat livers. Free Radic Res. 2010;44:441–53.
Shih JJ, Ochoa JG. A systematic review of antiepileptic drug initiation and withdrawal. Neurologist. 2009;15:122–31.
Shorvon SD. Safety of topiramate: adverse events and relationships to dosing. Epilepsia. 1996;37:18–S22.
Shorvon S, Luciano AL. Prognosis of chronic and newly diagnosed epilepsy: revisiting temporal aspects. Curr Opin Neurol. 2007;20:208–12.
Sills GJ. The mechanisms of action of gabapentin and pregabalin. Curr Opin Pharmacol. 2006;6:108–13.
Speciale J, Dayrell-Hart B, Steinberg SA. Clinical evaluation of c-vinyl-c-aminobutyric acid for control of epilepsy in dogs. J Am Vet Med Assoc. 1991;198:995–1000.
Steinberg MFD. Levetiracetam therapy for long-term idiopathic epileptic dogs. J Vet Intern Med. 2004;18:410.
Stephen LJ, Brodie MJ. Selection of antiepileptic drugs in adults. Neurologic Clinics. 2009;27:967–92.
Streeter AJ, Stahle PL, Holland ML, Pritchard JF, Takacs AR. Pharmacokinetics and bioavailability of topiramate in the beagle dog. Drug Metab Dispos. 1995;23:90–3.
Sutula TP, Hagen J, Pitkänen A. Do epileptic seizures damage the brain? Curr Opin Neurol. 2003;16:189–95.
Taverna M, Nguyet TT, Valentin C, Level O, Merry T, Kolbe HV, et al. A multi-mode chromatographic method for the comparison of the N-glycosylation of a recombinant HIV envelope glycoprotein (gp160s-MN/LAI) purified by two different processes. J Biotechnol. 1999;68:37–48.
Thomas WB. Seizures and narcolepsy. In: Dewey CW, editor. A practical guide to canine and feline neurology. Ames (IA): Iowa State Press (Blackwell Publishing); 2003. p. 193–212.
Thomas WB. Idiopathic epilepsy in dogs and cats. Vet Clin North Am Small Anim Pract. 2010;40:161–79.
Thurman GD, McFadyen ML, Miller R. The pharmacokinetics of phenobarbitone in fasting and non-fasting dogs. J S Afr Vet Assoc. 1990;61:86–9.
Tipold A, Keefe TJ, Löscher W, Rundfeldt C, de Vries F. Clinical efficacy and safety of imepitoin in comparison with phenobarbital for the control of idiopathic epilepsy in dogs. J Vet Pharmacol Ther. 2015;38:160–8.
Trepanier LA. Use of bromide as an anticonvulsant for dogs with epilepsy. J Am Vet Med Assoc. 1995;207:163–6.
Trepanier LA, Babish JG. Effect of dietary chloride content on the elimination of bromide by dogs. Res Vet Sci. 1995;58:252–5.
Trepanier LA, Babish JG. Pharmacokinetic properties of bromide in dogs after the intravenous and oral administration of single doses. Res Vet Sci. 1995;58:248–51.
Trepanier LA, Van Schoick A, Schwark WS, Carrillo J. Therapeutic serum drug concentrations in epileptic dogs treated with potassium bromide alone or in combination with other anticonvulsants: 122 cases (1992-1996). J Am Vet Med Assoc. 1998;213:1449–53.
Volk HA, Matiasek LA, Luján Feliu-Pascual A, Platt SR, Chandler KE. The efficacy and tolerability of levetiracetam in pharmacoresistant epileptic dogs. Vet J. 2008;176:310–9.
von Klopmann T, Boettcher IC, Rotermund A, Rohn K, Tipold A. Euthyroid sick syndrome in dogs with idiopathic epilepsy before treatment with anticonvulsant drugs. J Vet Intern Med. 2006;20:516–22.
von Klopmann T, Rambeck B, Tipold A. Prospective study of zonisamide therapy for refractory idiopathic epilepsy in dogs. J Small Anim Pract. 2007;48:134–8.
Wagner SO, Sams RA, Podell M. Chronic phenobarbital therapy reduces plasma benzodiazepine concentrations after intravenous and rectal administration of diazepam in the dog. J Vet Pharmacol Ther. 1998;21:335–41.
Weiss KL, Schroeder CE, Kastin SJ, Gibson JP, Yarrington JT, Heydorn WE, et al. MRI monitoring of vigabatrin-induced intramyelinic edema in dogs. Neurology. 1994;44:1944–9.
Weissl J, Hülsmeyer V, Brauer C, Tipold A, Koskinen LL, Kyöstilä K, et al. Disease progression and treatment response of idiopathic epilepsy in Australian Shepherd dogs. J Vet Intern Med. 2012;26:116–25.
White HS. Comparative anticonvulsant and mechanistic profile of the established and newer antiepileptic drugs. Epilepsia. 1999;40.
White HS, Harmsworth WL, Sofia RD, Wolf HH. Felbamate modulates the strychnine-insensitive glycine receptor. Epilepsy Res. 1995;20:41–8.
Wilensky AJ, Friel PN, Levy RH, Comfort CP, Kaluzny SP. Kinetics of phenobarbital in normal subjects and epileptic patients. Eur J Clin Pharmacol. 1982;23:87–92.
Wong IC, Lhatoo SD. Adverse reactions to new anticonvulsant drugs. Drug Saf. 2000;23:35–56.
Wright HM, Chen AV, Martinez SE, Davies NM. Pharmacokinetics of oral rufinamide in dogs. J Vet Pharmacol Ther. 2012;35:529–33.
Yarrington JT, Gibson JP, Dillberger JE, Hurst G, Lippert B, Sussman NM, et al. Sequential neuropathology of dogs treated with vigabatrin, a GABA-transaminase inhibitor. Toxicol Pathol. 1993;21:480–9.
Zhang X, Velumian AA, Jones OT, Carlen PL. Modulation of high-voltage-activated calcium channels in dentate granule cells by topiramate. Epilepsia. 2000;41.
The authors are grateful to all owners of epileptic pets and veterinary colleagues who have inspired the group to create consensus statements. The authors also would like to thank the research office for assessing the manuscript according to the Royal Veterinary College's code of good research practice (Authorisation Number – CCS_ 01027). This study was not financially supported by any organization or grant.
Department of Small Animal Medicine and Clinical Biology, Faculty of Veterinary Medicine, Ghent University, Salisburylaan 133, Merelbeke, 9820, Belgium
Sofie F.M. Bhatti
Animal Health Trust, Lanwades Park, Kentford, Newmarket, CB8 7UU, Suffolk, United Kingdom
Luisa De Risio
Department of Clinical Sciences, College of Veterinary Medicine, North Carolina State University, 1052 William Moore Drive, Raleigh, NC, 27607, USA
Karen Muñana
Vet Extra Neurology, Broadleys Veterinary Hospital, Craig Leith Road, Stirling, FK7 7LE, Stirlingshire, United Kingdom
Jacques Penderis
Department of Small Animal Medicine and Surgery, University of Veterinary Medicine Hannover, Bünteweg 9, 30559, Hannover, Germany
Veronika M. Stein
& Andrea Tipold
Department of Veterinary and Clinical Sciences, Faculty of Health and Medical Sciences, University of Copenhagen, Frederiksberg C, Denmark
Mette Berendt
Fernside Veterinary Centre, 205 Shenley Road, Borehamwood, SG9 0TH, Hertfordshire, United Kingdom
Robyn G. Farquhar
Clinical Veterinary Medicine, Ludwig-Maximillians-University, Veterinärstr. 13, 80539, Munich, Germany
Andrea Fischer
University of Melbourne, 250 Princes Highway, Werribee, 3015, VIC, Australia
Sam Long
Department of Pharmacology, Toxicology and Pharmacy, University of Veterinary Medicine Hannover, Bünteweg 17, 30559, Hannover, Germany
Wolfgang Löscher
Department of Clinical Sciences of Companion Animals, Utrecht University, Yalelaan 108, 3583 CM, Utrecht, The Netherlands
Paul J.J. Mandigers
Section of Clinical & Comparative Neuropathology, Centre for Clinical Veterinary Medicine, Ludwig-Maximilians-University, Veterinärstr. 13, 80539, Munich, Germany
Kaspar Matiasek
Clinical Unit of Internal Medicine Small Animals, University of Veterinary Medicine, Veterinärplatz 1, 1210, Vienna, Austria
Akos Pakozdy
University of Minnesota College of Veterinary Medicine, D426 Veterinary Medical Center, 1352 Boyd Avenue, St. Paul, MN, 55108, USA
Edward E. Patterson
College of Veterinary Medicine, University of Georgia, 501 DW Brooks Drive, Athens, GA, 30602, USA
Simon Platt
Chicago Veterinary Neurology and Neurosurgery, 3123 N. Clybourn Avenue, Chicago, IL, 60618, USA
Michael Podell
Department of Pharmacology, Toxicology and Pharmacy, Ludwig-Maximillians-University, Königinstr. 16, 80539, Munich, Germany
Heidrun Potschka
Fitzpatrick Referrals, Halfway Lane, Eashing, Godalming, GU7 2QQ, Surrey, United Kingdom
Clare Rusbridge
School of Veterinary Medicine, Faculty of Health & Medical Sciences, University of Surrey, Guildford, GU2 7TE, Surrey, United Kingdom
Department of Clinical Science and Services, Royal Veterinary College, Hatfield, AL9 7TA, Hertfordshire, UK
Holger A. Volk
Correspondence to Sofie F.M. Bhatti.
The following reimbursements, fees and funding have been received by the authors in the last three years and have been declared in the competing interests section. WL, CR, RGF, HAV, KM, MP and JP have received fees for acting as consultants for Boehringer Ingelheim (WL, KM, MP: consultancy during development and approval of imepitoin; CR: pain consultancy; RGF, JP, HAV: consultancy pre and post launch of imepitoin). AT has been an advisor for Boehringer Ingelheim. SFMB, HAV and AT have been responsible principal investigators of several research studies concerning imepitoin financed by Boehringer Ingelheim. SFMB, HAV, JP, HP, MB, CR and AF received speaking fees from Boehringer Ingelheim. HP received consulting and speaking fees and funding for a collaborative project from Eisai Co. LTD. HAV received funding for a collaborative project from Desitin and Nestlé Purina Research. AF and LDR received reimbursements from Boehringer Ingelheim. LDR has received consulting and speaking fees from Vetoquinol. MP has received consultant fees from Aratana. The other authors declare that they have no competing interests.
SFMB chaired and LDR co-chaired the treatment working group (LDR, SFMB, KM, JP, SVM, AT) and wrote the first draft of the consensus paper with the help of LDR, KM, JP, SVM, AT and HAV. All authors read, critiqued, commented and approved the final manuscript.
Co-chair of the medical treatment of canine epilepsy working group: Luisa De Risio.
Chair of IVETF: Holger A. Volk.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Bhatti, S.F., De Risio, L., Muñana, K. et al. International Veterinary Epilepsy Task Force consensus proposal: medical treatment of canine epilepsy in Europe. BMC Vet Res 11, 176 (2015). https://doi.org/10.1186/s12917-015-0464-z
Epileptic seizure
International Veterinary Epilepsy Task Force Consensus Reports
(When) can the presentation in Steinberg's Yale notes fail to give an algebraic group?
I'm trying to understand a remark which appears on p. 1483 of Cohen, Murray and Taylor's "Computing in Groups of Lie Type." It says, "We have not used the presentations described in [7] or [30] because they define groups which are not necessarily algebraic when $\mathbb{F}$ is not algebraically closed." Reference [7] alluded to in the quotation is Carter's Finite Groups of Lie Type and reference [30] is Steinberg's Lectures on Chevalley groups, Tech. report, Yale University, 1968. Is there a nice example which illustrates what is meant by this remark, where the group given by the presentation in the Steinberg notes fails to be algebraic, in a suitable sense? Guidance on different relevant notions of "algebraic" and which is most likely to be operative here would also be appreciated.
gr.group-theory algebraic-groups algebraic-k-theory chevalley-groups
Joseph Hundley
asked Aug 2 '16 at 1:59
$\begingroup$ the two sheeted cover of $SL(2,\mathbb{R})$ is one such example, which is not an algebraic group (it is not even linear). $\endgroup$ – Venkataramana Aug 2 '16 at 3:35
$\begingroup$ This question: mathoverflow.net/questions/69741/… is relevant to the above comment. $\endgroup$ – Robert Furber Aug 2 '16 at 11:31
$\begingroup$ The question needs a more precise formulation: (1) Chevalley oriiginally looked at groups which are simple as abstract groups; Steinberg's lecture notes adopt a more general framework. (2) What are the "Steinberg presentations"? These developed out of papers he wrote before the Yale lectures and distinguish rank 1 groups from the rest. (3) What notion of "algebraic group" are you using? Steinberg for example relied on the 1950s Chevalley-Borel notions. (4) Aside from groups over the real numbers, Steinberg's work led to study of matrix groups over number fields or local fields. $\endgroup$ – Jim Humphreys Aug 2 '16 at 14:32
$\begingroup$ Also, your tags could be broadened by adding 'gr.group-theory' and 'algebraic-k-theory'. The latter is the area especially stimulated by Steinberg's work on generators and relations. $\endgroup$ – Jim Humphreys Aug 2 '16 at 14:34
$\begingroup$ Is this just referring to variants of the basic fact that quotient groups such as ${\rm{SL}}_n(k)/\mu_n(k)$ (the commutator subgroup of ${\rm{PGL}}_n(k)$, generated by the "root groups") are not "algebraic" in the sense that they do not arise as the group of $k$-points of a $k$-group quotient of ${\rm{SL}}_n$? Can you just ignore the remark which is unclear to you and see how the authors of the book you are reading use whatever it is that they are doing? $\endgroup$ – nfdc23 Aug 3 '16 at 3:01
Logarithm of the hypergeometric function
For $F(x)={}_2F_1 (a,b;c;x)$, with $c=a+b$, $a>0$, $b>0$, it has been proved in [1] that $\log F(x)$ is convex on $(0,1)$.
I numerically checked that with a variety of $a,\ b$ values, $\log F(x)$ is not only convex, but also has a Taylor series in x consisting of strictly positive coefficients. Can this be proved?
[1] Generalized convexity and inequalities, Anderson, Vamanamurthy, Vuorinen, Journal of Mathematical Analysis and Applications, Volume 335, Issue 2, http://www.sciencedirect.com/science/article/pii/S0022247X07001825#
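Not part of the original question, but as a sketch of the kind of numerical check described above: the first Taylor coefficients of $\log F(x)$ can be computed with arbitrary-precision arithmetic and tested for positivity. The use of Python's mpmath library and the particular parameter values below are assumptions of this illustration, not a claim about how the check was originally performed.

```python
import mpmath as mp

mp.mp.dps = 30  # extra working precision for the numerical differentiation


def log_F_coeffs(a, b, n=10):
    """First n+1 Taylor coefficients of log 2F1(a, b; a+b; x) at x = 0."""
    f = lambda x: mp.log(mp.hyp2f1(a, b, a + b, x))
    return mp.taylor(f, 0, n)


for a, b in [(0.5, 1.5), (2.0, 0.7), (3.2, 4.1)]:
    coeffs = log_F_coeffs(a, b)
    # coeffs[0] = log F(0) = 0; the conjecture concerns the remaining coefficients
    print(a, b, all(c > 0 for c in coeffs[1:]))
```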
special-functions hypergeometric-functions
felix
Here's a sketch of a proof of a stronger statement: the coefficients of the Taylor series for $\log{}_2F_1(a,b;a+b+c;x)$ are rational functions of $a$, $b$, and $c$ with positive coefficients.
To see this we first note that $$\begin{aligned} \frac{d\ }{dx} \log {}_2F_1(a,b;a+b+c;x) &= \frac{\displaystyle \frac{d\ }{dx}\,{}_2F_1(a,b;a+b+c;x)}{{}_2F_1(a,b;a+b+c;x)}\\[3pt] &=\frac{ab}{a+b+c}\frac{{}_2F_1(a+1,b+1;a+b+c+1;x)}{{}_2F_1(a,b;a+b+c;x)}. \end{aligned} $$ Then $$ \begin{gathered} \frac{{}_2F_1(a+1,b+1;a+b+c+1;x)}{{}_2F_1(a,b;a+b+c;x)} = \frac{{}_2F_1(a+1,b+1;a+b+c+1;x)}{{}_2F_1(a,b+1;a+b+c;x)} \\ \hfill\times \frac{{}_2F_1(a,b+1;a+b+c;x)}{{}_2F_1(a,b;a+b+c;x)}.\quad \end{gathered} $$ We have continued fractions for the two quotients on the right. Let $S(x; a_1, a_2, a_3, \dots)$ denote the continued fraction $$\cfrac{1}{1-\cfrac{a_1x} {1-\cfrac{a_2x} {1-\cfrac{a_3x} {1-\ddots} }}} $$ Then $$\begin{gathered}\frac{{}_2F_1(a+1,b+1;a+b+c+1;x)}{{}_2F_1(a,b+1;a+b+c;x)} = S \left( x;{\frac { \left( b+1 \right) \left( b+c \right) }{ \left( a +b+c+1 \right) \left( a+b+c \right) }}, \right.\hfill\\ \left. {\frac { \left( a+1 \right) \left( a+c \right) }{ \left( a+b+c+2 \right) \left( a+b+c+1 \right) }}, {\frac { \left( b+2 \right) \left( b+c+1 \right) }{ \left( a+b+c+3 \right) \left( a+b+c+2 \right) }}, \right.\\ \hfill \left. {\frac { \left( a+2 \right) \left( a+c+1 \right) }{ \left( a+b+c+4 \right) \left( a+b+c+3 \right) }},\dots \right) \end{gathered} $$ and $$\begin{gathered} \frac{{}_2F_1(a,b+1;a+b+c;x)}{{}_2F_1(a,b;a+b+c;x)} =S \left( x,{\frac {a}{a+b+c}}, {\frac { \left( b+1 \right) \left( b+c \right) }{ \left( a+b+c+1 \right) \left( a+b+c \right) }}, \right.\hfill\\ \left. {\frac { \left( a+1 \right) \left( a+c \right) }{ \left( a+b+c+2 \right) \left( a+b+c+1 \right) }}, {\frac { \left( b+2 \right) \left( b+c+1 \right) }{ \left( a+b+c+3 \right) \left( a+b+c+2 \right) }}, \right.\\ \hfill \left. {\frac { \left( a+2 \right) \left( a+c+1 \right) }{ \left( a+b+c+4 \right) \left( a+b+c+3 \right) }}, \dots\right) \end{gathered} $$ The first of these continued fractions is Gauss's well-known continued fraction, and the second can easily be derived from the first. It follows from these formulas that the coefficients of the Taylor series for $\log{}_2F_1(a,b;a+b+c;x)$ are rational functions of $a$, $b$, and $c$ with positive coefficients.
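As an addition to the answer above (not part of the original post), the continued-fraction identities can be checked numerically by evaluating a truncation of $S$ and comparing it with the corresponding quotient of hypergeometric functions. The use of mpmath, the helper names, and the sample parameter values are assumptions of this sketch.

```python
import mpmath as mp


def cf(x, terms):
    """Evaluate 1/(1 - a1*x/(1 - a2*x/(1 - ...))) for a finite list of a_i."""
    val = mp.mpf(1)
    for a in reversed(terms):
        val = 1 - a * x / val
    return 1 / val


def gauss_terms(a, b, c, n):
    """First 2n partial-numerator coefficients of Gauss's continued fraction for
    2F1(a+1, b+1; a+b+c+1; x) / 2F1(a, b+1; a+b+c; x), following the pattern above."""
    s = a + b + c
    out = []
    for i in range(1, n + 1):
        out.append((b + i) * (b + c + i - 1) / ((s + 2 * i - 1) * (s + 2 * i - 2)))
        out.append((a + i) * (a + c + i - 1) / ((s + 2 * i) * (s + 2 * i - 1)))
    return out


a, b, c, x = mp.mpf("1.2"), mp.mpf("0.7"), mp.mpf("0.4"), mp.mpf("0.3")
lhs = mp.hyp2f1(a + 1, b + 1, a + b + c + 1, x) / mp.hyp2f1(a, b + 1, a + b + c, x)
rhs = cf(x, gauss_terms(a, b, c, 25))
print(lhs, rhs)  # the two values should agree to many digits
```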
Ira Gessel
Please note this paper, maybe it will be useful:
D. Karp, S.M. Sitnik, Log-convexity and log-concavity of hypergeometric-like functions, Journal of Mathematical Analysis and Applications, Volume 364, Issue 2, P. 384-394.
There is some general result in this paper on positive Taylor coefficients.
Sergei
Differences between intrinsic and acquired nucleoside analogue resistance in acute myeloid leukaemia cells
Tamara Rothenburger1,2,
Dominique Thomas3,
Yannick Schreiber3,
Paul R. Wratil4,5,
Tamara Pflantz4,5,
Kirsten Knecht6,
Katie Digianantonio6,
Joshua Temple6,
Constanze Schneider7,
Hanna-Mari Baldauf4,
Katie-May McLaughlin8,
Florian Rothweiler1,
Berna Bilen2,
Samira Farmand2,
Denisa Bojkova1,
Rui Costa1,
Nerea Ferreirós3,
Gerd Geisslinger3,9,
Thomas Oellerich10,11,12,
Yong Xiong6,
Oliver T. Keppler4,5,
Mark N. Wass8,
Martin Michaelis8 &
Jindrich Cinatl Jr1
SAMHD1 mediates resistance to anti-cancer nucleoside analogues, including cytarabine, decitabine, and nelarabine that are commonly used for the treatment of leukaemia, through cleavage of their triphosphorylated forms. Hence, SAMHD1 inhibitors are promising candidates for the sensitisation of leukaemia cells to nucleoside analogue-based therapy. Here, we investigated the effects of the cytosine analogue CNDAC, which has been proposed to be a SAMHD1 inhibitor, in the context of SAMHD1.
CNDAC was tested in 13 acute myeloid leukaemia (AML) cell lines, in 26 acute lymphoblastic leukaemia (ALL) cell lines, ten AML sublines adapted to various antileukaemic drugs, 24 single cell-derived clonal AML sublines, and primary leukaemic blasts from 24 AML patients. Moreover, 24 CNDAC-resistant sublines of the AML cell lines HL-60 and PL-21 were established. The SAMHD1 gene was disrupted using CRISPR/Cas9 and SAMHD1 depleted using RNAi, and the viral Vpx protein. Forced DCK expression was achieved by lentiviral transduction. SAMHD1 promoter methylation was determined by PCR after treatment of genomic DNA with the methylation-sensitive HpaII endonuclease. Nucleoside (analogue) triphosphate levels were determined by LC-MS/MS. CNDAC interaction with SAMHD1 was analysed by an enzymatic assay and by crystallisation.
Although the cytosine analogue CNDAC was anticipated to inhibit SAMHD1, SAMHD1 mediated intrinsic CNDAC resistance in leukaemia cells. Accordingly, SAMHD1 depletion increased CNDAC triphosphate (CNDAC-TP) levels and CNDAC toxicity. Enzymatic assays and crystallisation studies confirmed CNDAC-TP to be a SAMHD1 substrate. In 24 CNDAC-adapted acute myeloid leukaemia (AML) sublines, resistance was driven by DCK (catalyses initial nucleoside phosphorylation) loss. CNDAC-adapted sublines displayed cross-resistance only to other DCK substrates (e.g. cytarabine, decitabine). Cell lines adapted to drugs not affected by DCK or SAMHD1 remained CNDAC sensitive. In cytarabine-adapted AML cells, increased SAMHD1 and reduced DCK levels contributed to cytarabine and CNDAC resistance.
Intrinsic and acquired resistance to CNDAC and related nucleoside analogues are driven by different mechanisms. The lack of cross-resistance between SAMHD1/ DCK substrates and non-substrates provides scope for next-line therapies after treatment failure.
Drug resistance is a main obstacle in the successful treatment of cancer [4, 9, 31]. Resistance can be either intrinsic or acquired. Intrinsic resistance means that a therapy-naïve cancer does not respond to treatment right from the start. In acquired resistance, there is an initial therapy response, but resistance develops over time [31, 40].
Intrinsic and acquired resistance are conceptually different. Intrinsic resistance is a collateral event during carcinogenesis not influenced by treatment. In contrast, acquired resistance is the consequence of a directed evolution driven by therapy. In agreement, discrepancies have been detected between drug resistance mechanisms in the intrinsic and the acquired resistance setting [31, 36, 40, 44].
Sterile alpha motif and histidine-aspartate domain-containing protein 1 (SAMHD1) is a deoxynucleoside triphosphate (dNTP) triphosphohydrolase that cleaves physiological dNTPs into deoxyribonucleotides and inorganic triphosphate [11, 38]. SAMHD1 also inactivates the triphosphorylated forms of some anti-cancer nucleoside analogues [13, 21, 36, 39, 41, 50]. High SAMHD1 levels indicate poor clinical response to nucleoside analogues such as cytarabine, decitabine, and nelarabine in acute myeloid leukaemia (AML), acute lymphoblastic leukaemia, and Hodgkin lymphoma [36, 39, 41, 50]. Moreover, previous findings indicated differing roles of SAMHD1 in intrinsic and acquired resistance to nucleoside analogues [36, 41].
Here, we investigated intrinsic and acquired resistance against the nucleoside analogue 2′-C-cyano-2′-deoxy-1-β-D-arabino-pentofuranosyl-cytosine (CNDAC). CNDAC and its orally available prodrug sapacitabine display clinical activity against AML [6, 18,19,20]. We selected CNDAC because, in contrast to SAMHD1 substrates such as cytarabine and decitabine, it has been proposed to be a SAMHD1 inhibitor [14]. CNDAC is of further interest because of its mechanism of action, which is unique among deoxycytidine analogues and is characterised by incorporation of CNDAC triphosphate (CNDAC-TP) into DNA, initially causing single-strand breaks and G2 cell cycle arrest [1, 2, 12, 24,25,26,27].
CNDAC was purchased from biorbyt (via Biozol, Eching, Germany), 5-azacytidine, cytarabine, cladribine, clofarabine, decitabine, and fludarabine from Tocris Biosciences (via Bio-Techne GmbH, Wiesbaden, Germany), 6-thioguanine, ganetespib, molibresib, olaparib, sapacitabine, venetoclax, and vismodegib from MedChemExpress (via Hycultec, Beutelsbach, Germany), daunorubicin, gedatolisib, and volasertib from Selleckchem (Berlin, Germany), gemcitabine from Hexal (Holzkirchen, Germany), GTP and dATP from Thermo Scientific (Dreieich, Germany), and CNDAC-TP from Jena Bioscience GmbH (Jena, Germany).
The human AML cell lines HEL (DSMZ No. ACC 11), HL-60 (DSMZ No. ACC 3), KG-1 (DSMZ No. ACC 14), ML-2 (DSMZ No. ACC 15), MOLM-13 (DSMZ No. ACC 554), MONO-MAC-6 (DSMZ No. ACC 124), MV4–11 (DSMZ No. ACC 102), NB-4 (DSMZ No. ACC 207), OCI-AML-2 (DSMZ No. ACC 99), OCI-AML-3 (DSMZ No. ACC 582), PL-21 (DSMZ No. ACC 536), SIG-M5 (DSMZ No. ACC 468), and THP-1 (DSMZ No. ACC16) and the human ALL cell lines 697 (DSMZ No. ACC 42), ALL-SIL (DSMZ No. ACC 511), BALL-1 (DSMZ No. ACC 742), CTV-1 (DSMZ No. ACC 40), GRANTA-452 (DSMZ No. ACC 713), HAL-01 (DSMZ No. ACC 610), HSB-2 (DSMZ No. ACC 435), JURKAT (DSMZ No. ACC 282), KE-37 (DSMZ No. ACC 46), MHH-CALL-4 (DSMZ No. ACC 337), MN-60 (DSMZ No. ACC 138), MOLT-4 (DSMZ No. ACC 362), MOLT-16 (DSMZ No. ACC 29), NALM-6 (DSMZ No. ACC 128), NALM-16 (DSMZ No. ACC 680), P12-ICHIKAWA (DSMZ No. ACC 34), REH (DSMZ No. ACC 22), ROS-50 (DSMZ No. ACC 557), RPMI-8402 (DSMZ No. ACC 290), RS4;11 (DSMZ No. ACC 508), SEM (DSMZ No. ACC 546), TANOUE (DSMZ No. ACC 399), and TOM-1 (DSMZ No. ACC 578) were obtained from DSMZ (Deutsche Sammlung von Mikroorganismen und Zellkulturen GmbH, Braunschweig, Germany). The ALL cell line CCRF-CEM (ATCC No. CCL-119) was received from ATCC (Manassas, VA, US), the ALL cell line KARPAS231 from Cambridge Enterprise Ltd. (Cambridge, UK), and the ALL cell line J-JHAN was kindly provided by Professor R. Tedder (University College London) [5].
Drug-resistant cell sublines were established by continuous exposure of the sensitive parental cell lines HL-60 and PL-21 to step-wise increasing drug concentrations, as previously described [30], and are part of the Resistant Cancer Cell Line (RCCL) collection (https://www.kent.ac.uk/stms/cmp/RCCL/RCCLabout.html) [31]. Briefly, cells were cultured at increasing drug concentrations, starting with concentrations that inhibited the viability of the parental cell lines by 50% (IC50). Drug concentrations were increased every 2 to 6 weeks until cells readily grew in the presence of the drug. In this way, 12 independent CNDAC-resistant sublines each of HL-60 and PL-21 were generated and designated HL-60rCNDAC200nMI–XII and PL-21rCNDAC2μMI-XII. HL-60 cells with acquired resistance to the drugs cytarabine (Ara-C), arabinosylguanine (Ara-G), 5-azacytidine (AZA), fludarabine (FLUDA), 6-mercaptopurine (6-MP), venetoclax (VENE), olaparib (OLA), and volasertib (VOLA) were designated HL-60rAra-C2μg/ml, HL-60rAra-G100μM, HL-60rAZA1µM, HL-60rFLUDA1μg/ml, HL-60r6-MP2μM, HL-60rVENE2μM, HL-60rOLA20μM, and HL-60rVOLA200nM.
Clonal sublines were generated by limiting dilution. Cells were plated at a density of 1 cell per well on a 96-well plate and grown for 1–2 weeks. Wells with only one visible cell colony were identified and the respective clones were expanded.
SAMHD1-deficient THP-1 (THP-1 KO) cells and control cells (THP-1 CTRL) were generated using a CRISPR/Cas9 approach as previously described [36, 41, 47]. THP-1 cells were plated at a density of 2 × 10⁵ cells/mL. After 24 h, 2.5 × 10⁶ cells were suspended in 250 μl Opti-MEM, mixed with 5 μg CRISPR/Cas plasmid DNA, and electroporated in a 4-mm cuvette using an exponential pulse at 250 V and 950 μF in a Gene Pulser electroporation device (Bio-Rad Laboratories, Feldkirchen, Germany). We used a plasmid encoding a CMV-mCherry-Cas9 expression cassette and a human SAMHD1 gene-specific gRNA driven by the U6 promoter. An early coding exon of the SAMHD1 gene was targeted using the following gRNA construct: 5′-CGGAAGGGGTGTTTGAGGGG-3′. Cells were allowed to recover for 2 days in 6-well plates filled with 4 ml medium per well before being FACS-sorted for mCherry expression on a BD FACS Aria III (BD Biosciences, Heidelberg, Germany). For subsequent limiting dilution cloning, cells were plated at a density of 5, 10, or 20 cells per well in nine round-bottom 96-well plates and grown for 2 weeks. Plates were scanned for absorption at 600 nm, and growing clones were identified using custom software and picked and duplicated by a Biomek FXp (Beckman Coulter, Krefeld, Germany) liquid handling system.
DCK-expressing MV4–11rAra-C2μg/ml and MOLM-13rAra-C2μg/ml cells were established by lentiviral transduction and designated MV4–11rAra-C2μg/ml-pWPI+DCK and MOLM-13rAra-C2μg/ml-pWPI+DCK (or MV4–11rAra-C2μg/ml-pWPI and MOLM-13rAra-C2μg/ml-pWPI for control cells transduced with the empty vector). To generate the pWPI+DCK plasmid, the DCK gene was PCR-amplified from pDNR-Dual_dCK (DNAsu HsCD00000962) using Pfu DNA polymerase (Promega, Germany) and gene-specific primers (Eurofins Genomics, Germany) and subcloned into pWPI IRES puro via BamHI/SpeI. The plasmid was verified by Sanger sequencing (Eurofins Genomics, Germany). For the generation of lentiviral vectors, 293T cells were co-transfected with pWPI+DCK (or pWPI as control), the Addgene packaging plasmid pPAX, an envelope plasmid encoding VSV-G, and pAdVAntage (Promega). Four days after transfection, lentiviral vectors were harvested and concentrated by ultracentrifugation. For lentiviral transduction, MV4–11rAra-C2μg/ml and MOLM-13rAra-C2μg/ml cells were seeded at 5 × 10⁵ cells per well of a 96-well plate and spinoculated with the lentiviral vectors. 24 h after transduction, successfully transduced cells were selected with 3 μg/ml puromycin (Sigma-Aldrich), and DCK expression was monitored by Western blot.
All cell lines were cultured in IMDM (Biochrom, Cambridge, UK) supplemented with 10% FBS (SIG-M5 20% FBS, Sigma-Aldrich, Taufkirchen, Germany), 4 mM L-Glutamine (Sigma-Aldrich), 100 IU/ml penicillin (Sigma-Aldrich), and 100 mg/ml streptomycin (Sigma-Aldrich) at 37 °C in a humidified 5% CO2 incubator. Cell lines were routinely tested for Mycoplasma, using the MycoAlert PLUS assay kit from Lonza (Basel, Switzerland), and were authenticated by short tandem repeat profiling.
Primary AML samples
Peripheral blood or bone marrow samples derived from AML patients between 2018 and 2020 were obtained from the UCT Biobank of the University Hospital Frankfurt. The use of peripheral blood and bone marrow aspirates was approved by the Ethics Committee of Frankfurt University Hospital (approval no. SHN-03-2017). All patients gave informed consent to the collection of samples and to the scientific analysis of their data and of biomaterial obtained for diagnostic purposes according to the Declaration of Helsinki.
Mononuclear cell (MNC) fractions were purified by gradient centrifugation with Biocoll cell separation solution (Merck Millipore, Darmstadt, Germany). Leukemic cells were enriched by negative selection with a combination of CD3-, CD19- and CD235a-microbeads (all obtained from Miltenyi Biotec, Bergisch Gladbach, Germany, 130-050-301, 130-050-101, 130-050-501) according to the manufacturer's instructions and separated by the autoMACS™ Pro Separator (Miltenyi Biotec). FACS staining and treatment of AML blasts for viability assays were performed immediately after isolation. Culture medium for AML blasts consisted of IMDM (Biochrom) supplemented with 10% FBS, 4 mM L-glutamine, 25 ng/ml hTPO, 50 ng/ml hSCF, 50 ng/ml hFlt3-ligand and 20 ng/ml hIL-3 (all obtained from Miltenyi Biotec, 130-094-013, 130-096-695, 130-096-479, 130-095-069).
Viability assay
The viability of AML and ALL cell lines treated with various drug concentrations was determined by 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay, modified after Mosmann [32], as previously described [37]. Cells suspended in 100 μL cell culture medium were plated per well in 96-well plates and incubated in the presence of various drug concentrations for 96 h. Then, 25 μL of MTT solution (2 mg/mL (w/v) in PBS) were added per well, and the plates were incubated at 37 °C for an additional 4 h. After this, the cells were lysed using 100 μL of a buffer containing 20% (w/v) sodium dodecyl sulfate in 50% (v/v) N,N-dimethylformamide, with the pH adjusted to 4.7, at 37 °C for 4 h. Absorbance was determined at 570 nm for each well using a 96-well multiscanner (Tecan Spark, Tecan, Crailsheim, Germany). After subtraction of the background absorbance, results were expressed as percentage viability relative to control cultures that received no drug. Drug concentrations that inhibited cell viability by 50% (IC50) were determined using CalcuSyn (Biosoft, Cambridge, UK) or GraphPad Prism (San Diego, CA, USA).
For AML blasts, viability assays were performed using the CellTiter-Glo assay (Promega, Walldorf, Germany) according to the manufacturer's protocol. Briefly, cells were seeded at 5000 cells per well in 96-well plates and treated for 96 h. Luminescence was measured on a Tecan Spark (Tecan). IC50 values were calculated using GraphPad Prism.
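For illustration only, the sketch below shows how an IC50 can be estimated from such viability data by fitting a four-parameter logistic model with SciPy; the concentrations and viability values are hypothetical, and the IC50 values reported in this study were obtained with CalcuSyn or GraphPad Prism.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, top, bottom, ic50, hill):
    """Four-parameter logistic dose-response model (viability in % of untreated control)."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Hypothetical example: CNDAC concentrations (µM) and mean viability (% of control)
conc = np.array([0.0032, 0.016, 0.08, 0.4, 2.0, 10.0])
viability = np.array([98.0, 92.0, 75.0, 41.0, 18.0, 7.0])

# Fit the model; initial guesses span the full viability range with an IC50 near the mid concentration
popt, _ = curve_fit(four_pl, conc, viability, p0=[100.0, 0.0, 0.4, 1.0], maxfev=10000)
top, bottom, ic50, hill = popt
print(f"Estimated IC50: {ic50:.3f} µM (Hill slope {hill:.2f})")
```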
Caspase 3/7 assay
To determine Caspase 3/7 activity in THP-1 SAMHD1 KO and CTRL cells, the Caspase-Glo 3/7 assay (Promega, Walldorf, Germany) was used according to the manufacturer's protocol. Briefly, cells were seeded at 5000 cells per well in white 96-well plates, treated with different concentrations of CNDAC and incubated for 24, 48 and 72 h at 37 °C in a humidified 5% CO2 incubator. After incubation, an equal volume of Caspase-Glo 3/7 reagent was added and mixed for 30 min, and luminescence was measured on a Tecan Spark (Tecan).
Determination of population doubling time (PDT)
To generate a growth curve, cells were seeded at 2000 cells per well in a white 96-well plate in 100 μl culture medium and incubated for 0, 1, 2, 3, 4 and 7 days at 37 °C in a humidified 5% CO2 incubator. Cell viability was detected using the CellTiter-Glo assay (Promega) according to the manufacturer's protocol. Growth curves were created and the population doubling times calculated using the following formula:
$$ \mathrm{PDT}=\frac{\mathrm{cultivation}\ \mathrm{period}\ \left[\mathrm{h}\right]\times {\log}_{10}(2)}{\log_{10}\left(\mathrm{final}\ \mathrm{cell}\ \mathrm{count}\right)-{\log}_{10}\left(\mathrm{starting}\ \mathrm{cell}\ \mathrm{count}\right)} $$
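For illustration, the formula can be applied as in the minimal sketch below; the cell counts are hypothetical values derived from the luminescence readout.

```python
import math

def population_doubling_time(hours, start_count, final_count):
    """Population doubling time (h) assuming exponential growth between two time points."""
    return hours * math.log10(2) / (math.log10(final_count) - math.log10(start_count))

# Hypothetical example: 2,000 cells grow to 16,000 cells over 96 h of culture
pdt = population_doubling_time(hours=96, start_count=2000, final_count=16000)
print(f"PDT: {pdt:.1f} h")  # 16,000/2,000 = 8 = 2^3 doublings -> 96 h / 3 = 32 h
```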
Western blot
Whole-cell lysates were prepared using Triton-X sample buffer containing protease inhibitor cocktail from Roche (Grenzach-Wyhlen, Germany). The protein concentration was assessed using the DC Protein assay reagent obtained from Bio-Rad Laboratories. Equal protein loads were separated by sodium dodecyl sulfate-polyacrylamide gel electrophoresis, and proteins were transferred to nitrocellulose membranes (Thermo Scientific, Dreieich, Germany). The following primary antibodies were used at the indicated dilutions: SAMHD1 (Proteintech, St. Leon-Rot, Germany, 12586-1-AP, 1:1000), β-actin (BioVision, Milpitas, CA, US, 3598R-100, 1:5000), pSAMHD1 (Cell Signaling, Frankfurt am Main, Germany, 89930S, 1:1000), GAPDH (Trevigen via Bio-Techne, Wiesbaden, Germany, 2275-PC-10C, 1:5000), DCK (abcam, Berlin, Germany, ab96599, 1:4000), DGK (Santa Cruz Biotechnology, Heidelberg, Germany, sc-398093, 1:100), PARP (Cell Signaling, 9542S, 1:1000), H2AX (Cell Signaling, 2595S, 1:1000), γH2AX (Cell Signaling, 9718S, 1:1000), Chk2 (Cell Signaling, 2662S, 1:1000), pChk2 (Cell Signaling, 2661S, 1:1000), TIF-1β (Cell Signaling, 4124S, 1:1000), pTIF-1β (Cell Signaling, 4127S, 1:1000). Visualisation and quantification were performed using IRDye-labeled secondary antibodies (LI-COR Biotechnology, Bad Homburg, Germany, IRDye® 800CW goat anti-rabbit IgG, 926-32211, and IRDye® 800CW goat anti-mouse IgG, 926-32210) according to the manufacturer's instructions. Band volume analysis was conducted using the LI-COR Odyssey software.
Flow cytometry
The intracellular SAMHD1 staining of AML blasts was performed as previously described [3] with a SAMHD1 antibody from Proteintech (12586-1-AP, 1:100). Staining of AML blasts for surface markers (CD33, CD34, CD45) was performed before fixation with the following fluorochrome-conjugated antibodies: CD33-PE and CD34-FITC, both from Miltenyi Biotec (130-111-019, 130-113-178), and CD45-V450 from BD Pharmingen (Heidelberg, Germany, 642275), all diluted 1:5 per 1 × 10⁷ cells, and goat anti-rabbit Alexa-Fluor-660 from Invitrogen, Life Technologies (1:200, A-21073) as secondary antibody for SAMHD1 staining. Samples were analysed using a FACSVerse flow cytometer from BD Biosciences (Heidelberg, Germany) and the FlowJo software (FlowJo LLC, Ashland, OR, US). To determine the mean fluorescence intensity (MFI) for SAMHD1, the geometric mean for the isotype control was subtracted from the geometric mean for SAMHD1.
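As an illustration of the MFI calculation described above, the following minimal sketch subtracts the geometric mean of a hypothetical isotype-control signal from that of the SAMHD1 signal; the per-cell intensities are invented for this example.

```python
import numpy as np

def geometric_mean(values):
    """Geometric mean of per-cell fluorescence intensities (all values must be positive)."""
    values = np.asarray(values, dtype=float)
    return float(np.exp(np.mean(np.log(values))))

# Hypothetical per-cell fluorescence intensities from the SAMHD1 and isotype-control stains
samhd1_signal = [850.0, 1200.0, 990.0, 1430.0, 760.0]
isotype_signal = [110.0, 95.0, 130.0, 105.0, 120.0]

mfi = geometric_mean(samhd1_signal) - geometric_mean(isotype_signal)
print(f"SAMHD1 MFI (background-corrected): {mfi:.1f}")
```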
SAMHD1 promoter methylation
The SAMHD1 promoter contains five HpaII sites surrounding the transcription start site [7]. To measure methylation of the SAMHD1 promoter, genomic DNA was treated with the methylation-sensitive HpaII endonuclease as described previously [7, 36]. Methylation of the HpaII sites in the SAMHD1 promoter prevents digestion by HpaII, and the intact sequence then serves as a template for PCR amplification using SAMHD1 promoter-specific primers that flank the HpaII sites (PM3.fwd: TTCCGCCTCATTCGTCCTTG and PM3.rev: GGTTCTCGGGCTGTCATCG). A single PCR product (993 bp) corresponding to the SAMHD1 promoter sequence was obtained from untreated genomic DNA and from HpaII-treated DNA of cells with a methylated, but not of cells with an unmethylated, SAMHD1 promoter. As input control, a 0.25-kb fragment of the GAPDH gene lacking HpaII sites was PCR-amplified from the same template DNA.
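The readout depends on HpaII (CCGG) recognition sites lying between the two primers; purely as a toy illustration of this logic (not part of the experimental workflow), the sketch below counts CCGG sites in a short hypothetical amplicon fragment.

```python
def count_hpaii_sites(sequence: str) -> int:
    """Count HpaII recognition sites (CCGG) in a DNA sequence."""
    sequence = sequence.upper()
    return sum(1 for i in range(len(sequence) - 3) if sequence[i:i + 4] == "CCGG")

# Hypothetical amplicon fragment; the real 993-bp SAMHD1 promoter amplicon contains five HpaII sites.
# Digestion at unmethylated CCGG sites destroys the template and abolishes PCR amplification.
example_amplicon = "TTCCGCCTCATTCGTCCTTGACCGGATGCCGGTTACCGGAACGATGACAGCCCGAGAACC"
print(count_hpaii_sites(example_amplicon))
```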
Manipulation of cellular SAMHD1 levels using siRNA or Vpx-VLPs
For siRNA-mediated silencing, AML cells (1 × 10⁶) were transfected with 2.5 μM ON-TARGETplus human SAMHD1 siRNA SMARTpool obtained from Dharmacon (Munich, Germany, L-013950-01-0050) in resuspension electroporation buffer R (Invitrogen, Dreieich, Germany) using the Neon transfection system (Invitrogen) according to the manufacturer's recommendations. In parallel, the ON-TARGETplus Non-targeting Control Pool from Dharmacon (D-001810-10-50) was transfected as control. Electroporation was performed with a single 20-ms pulse of 1700 V, and cells were analysed 48 h after transfection by Western blotting and cell viability assays.
For Vpx virus-like particle (VLP)-mediated SAMHD1 degradation, cells were spinoculated with VSV-G-pseudotyped virus-like particles carrying either Vpx or, as control, Vpr from SIVmac251. VLPs carrying Vpx or Vpr were produced by co-transfection of 293T cells with pSIV3+ gag-pol expression plasmids and a plasmid encoding VSV-G, as previously described [36, 41]. For viability assays, cells were preincubated with VLPs for 24 h before the studied compounds were added.
LC-MS/MS analysis
AML or ALL cells were seeded at 2.5 × 10⁵ cells per well in 24-well plates, treated with 10 μM CNDAC and incubated at 37 °C in a humidified 5% CO2 incubator for 6 h. Subsequently, cells were washed twice in 1 ml PBS, pelleted and stored at −80 °C until measurement. The concentrations of canonical dNTPs and CNDAC triphosphate in the samples were analysed by liquid chromatography-electrospray ionization-tandem mass spectrometry, as previously described for canonical dNTPs [43]. Briefly, the analytes were extracted by protein precipitation with methanol. An anion-exchange HPLC column (BioBasic AX, 150 × 2.1 mm, 5 μm, Thermo Scientific) was used for the chromatographic separation, and a 5500 QTrap (Sciex, Darmstadt, Germany) was used as analyser, operating as a triple quadrupole in positive multiple reaction monitoring (MRM) mode. CNDAC-TP was quantified using 2′-deoxycytidine-13C9,15N3-triphosphate (13C9,15N3-dCTP) as internal standard (IS). The precursor-to-product ion transition used as quantifier was m/z 493.1 → 112.1 for CNDAC-TP. Owing to the lack of commercially available standards for CNDAC-TP, relative quantification was performed by comparing the peak area ratios (analyte/IS) of the differently treated samples.
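As a minimal sketch of this relative quantification, the example below compares analyte/IS peak-area ratios between two samples; all peak areas and sample labels are hypothetical.

```python
# Relative quantification of CNDAC-TP by analyte/internal-standard (IS) peak-area ratios.
# All peak areas below are hypothetical values for illustration only.
peak_areas = {
    "CTRL": {"CNDAC_TP": 1.2e5, "IS_13C9_15N3_dCTP": 8.0e5},
    "SAMHD1_KO": {"CNDAC_TP": 6.1e5, "IS_13C9_15N3_dCTP": 7.8e5},
}

# Peak-area ratio (analyte / internal standard) per sample
ratios = {sample: areas["CNDAC_TP"] / areas["IS_13C9_15N3_dCTP"]
          for sample, areas in peak_areas.items()}

fold_change = ratios["SAMHD1_KO"] / ratios["CTRL"]
print(f"CNDAC-TP (KO vs CTRL): {fold_change:.1f}-fold")
```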
Protein expression and purification
N-terminal 6 × His-tagged SAMHD1 (residues 113 to 626, H206R D207N) was expressed in BL21 (DE3) Escherichia coli grown in Terrific Broth medium at 200 rpm, 18 °C for 16 h. Cells were re-suspended in buffer and passed through a microfluidizer. Cleared lysates were purified using nickel-nitrilotriacetic acid (Ni-NTA) affinity and size-exclusion chromatography. Proteins were stored in a buffer containing 50 mM Tris-HCl, pH 8, 150 mM NaCl, 0.5 mM TCEP, 5 mM MgCl2, and 10% glycerol.
Crystallization and data collection
Purified SAMHD1 protein in buffer (50 mM Tris-HCl, pH 8.0, 150 mM NaCl, 5 mM MgCl2, and 0.5 mM TCEP) was mixed with 1 mM GTP, 0.1 mM dATP, and 10 mM CNDAC. All crystals were grown at 25 °C using the microbatch under-oil method by mixing 1 μL of protein (3 mg/mL) with 1 μL of crystallization buffer (100 mM succinate–phosphate–glycine (SPG) buffer, pH 7.4, 25% PEG 1500; Qiagen). Crystals were improved by streak seeding. Crystals were cryoprotected in paratone oil and frozen in liquid nitrogen. Diffraction data were collected at Advanced Photon Source beamline 24-ID-E. The data statistics are summarized in Table 1.
Structure determination and refinement
Using the previously published SAMHD1 tetramer structure (PDB ID code 4BZB), with the bound nucleotides removed, as the search model, the structure was solved by molecular replacement using PHASER [29, 45, 46]. The model was refined with iterative rounds of restrained refinement using Refmac5 [33], followed by rebuilding the model to the 2Fo-Fc and the Fo-Fc maps using Coot [8]. Refinement statistics are summarised in Suppl. Table 5. Coordinates and structure factors have been deposited in the Protein Data Bank, with accession codes listed in Suppl. Table 5.
Enzymatic assay
In vitro SAMHD1 activity was measured as described [42]. Briefly, 1 μM His-tagged human SAMHD1 and 1.5 μM PPase from E. coli were incubated at room temperature in 20 μL reaction buffer (50 mM Tris, 150 mM NaCl, 1.25 mM MgCl2, 0.5 mM TCEP, 0.05% Brij-35) containing different concentrations of GTP, dGTP and CNDAC-TP in a clear 384-well plate (Corning, 3700, New York, USA). Reactions were stopped by addition of 20 μL EDTA (20 mM in water). Subsequently, 10 μL malachite green reagent (Sigma-Aldrich, MAK307, Missouri, USA) were added. Absorbance was recorded at 620 nm after incubating the samples for 60 min at room temperature. For normalization, the background of controls containing the same substrate and PPase concentrations but no SAMHD1 was subtracted.
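A minimal sketch of this background correction, using hypothetical A620 readings, is shown below.

```python
import numpy as np

def background_corrected_activity(sample_a620, no_enzyme_a620):
    """Subtract matched no-SAMHD1 controls (same substrate and PPase) from each reaction."""
    return np.asarray(sample_a620) - np.asarray(no_enzyme_a620)

# Hypothetical A620 readings for reactions containing GTP + CNDAC-TP with or without dGTP
with_dgtp = [0.82, 0.79, 0.85]
without_dgtp = [0.21, 0.19, 0.22]
background = [0.18, 0.20, 0.19]  # matched no-enzyme controls

print(background_corrected_activity(with_dgtp, background).mean())     # clear phosphate release
print(background_corrected_activity(without_dgtp, background).mean())  # close to zero
```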
Statistics
Statistical data analysis was performed using GraphPad Prism. Pearson's correlation coefficient was used to compute correlations between variables, and a t-test was used to assess the significance of each correlation. Group comparisons were performed using Student's t-test.
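For illustration, the sketch below reproduces this type of analysis with SciPy on hypothetical SAMHD1/GAPDH ratios and IC50 values; the analyses reported in this study were performed in GraphPad Prism.

```python
import numpy as np
from scipy import stats

# Hypothetical SAMHD1/GAPDH ratios and matching CNDAC IC50 values (µM) for a cell line panel
samhd1_level = np.array([0.05, 0.10, 0.35, 0.60, 0.80, 1.10, 1.40])
cndac_ic50 = np.array([0.08, 0.15, 0.90, 2.10, 3.50, 5.80, 8.20])

# Pearson's correlation; the reported p-value is based on a t-test of r against zero
r, p = stats.pearsonr(samhd1_level, cndac_ic50)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")

# Group comparison (e.g. two groups of cell lines) with Student's t-test
group_a = np.array([0.08, 0.12, 0.25, 0.31])
group_b = np.array([1.20, 2.40, 3.10, 4.00])
t, p_t = stats.ttest_ind(group_a, group_b)
print(f"t = {t:.2f}, p = {p_t:.4f}")
```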
SAMHD1 levels correlate with leukaemia cell sensitivity to CNDAC
Initially, we characterised a panel of 13 human AML cell lines for the levels of SAMHD1 and deoxycytidine kinase (DCK) (Fig. 1A). DCK phosphorylates and activates cytidine analogues in a rate-limiting step [15, 28, 48] and may hence determine cell sensitivity to a nucleoside analogue such as CNDAC, which had been anticipated to be a SAMHD1 inhibitor [14]. We detected varying SAMHD1 and DCK levels (Fig. 1A, Suppl. Figure 1), varying CNDAC concentrations that reduced cell viability by 50% (IC50) (Fig. 1B, Suppl. Figure 2, Suppl. Table 1), and varying CNDAC-TP levels (Fig. 1C) across the investigated cell lines. However, the CNDAC IC50s did not correlate with the cellular levels of DCK (Fig. 1D), indicating that DCK is not a critical determinant of CNDAC activity in our cell line panel.
SAMHD1 (but not DCK) levels determine sensitivity to CNDAC and inversely correlate with CNDAC-triphosphate (CNDAC-TP) in leukaemia cell lines. (A) Representative Western blots of SAMHD1, phosphorylated SAMHD1 (pSAMHD1), and DCK in 13 AML cell lines. GAPDH served as loading control. Uncropped Western blots are presented in Supplementary Figure 1. (B) CNDAC concentrations that reduce the viability of AML cell lines by 50% (IC50). Horizontal lines and error bars represent means ± SD of three independent experiments. (C) CNDAC triphosphate (CNDAC-TP) levels determined by LC-MS/MS. Horizontal lines and error bars show means ± SD of three independent experiments. (D, E) Correlation of the CNDAC IC50 values with cellular DCK (D) or SAMHD1 (E) protein levels, quantified using near-infrared Western blot images to determine the ratio DCK/GAPDH or SAMHD1/GAPDH. Closed circles and error bars represent means ± SD of three independent experiments. Linear regression analyses were performed using GraphPad Prism. (F, G) Correlation of CNDAC-TP levels with cellular DCK (F) or SAMHD1 (G) protein levels in AML cell lines, quantified using near-infrared Western blot images to determine the ratio DCK/GAPDH or SAMHD1/GAPDH. Closed circles and error bars represent means ± SD of three independent experiments. Linear regression analyses were performed using GraphPad Prism. (H) Analysis of SAMHD1 promoter methylation in AML cell lines through amplification of a single PCR product (993 bp) corresponding to the promoter sequence after HpaII digestion. A 0.25-kb fragment of the GAPDH gene lacking HpaII sites was PCR-amplified from the same template DNA and served as loading control. THP-1 served as a control cell line with an unmethylated SAMHD1 promoter, while JURKAT served as a control cell line with a methylated promoter. (I) Correlation of CNDAC IC50 values in 26 ALL cell lines (11 T-ALL, 15 B-ALL) with SAMHD1 protein levels, quantified using near-infrared Western blot images to determine the ratio SAMHD1/GAPDH relative to the positive control THP-1. Closed circles and error bars represent means ± SD of three independent experiments. Linear regression analyses were performed using GraphPad Prism. (J-L) Comparison of SAMHD1 protein levels (J), CNDAC IC50 values (K) and CNDAC-TP levels determined by LC-MS/MS (L) in T-ALL and B-ALL cells. Each point represents the mean of three independent experiments. One-tailed Student's t-tests were used to compare means in T-ALL and B-ALL cells (represented as horizontal lines ± SEM)
In contrast, the CNDAC IC50s correlated with the cellular SAMHD1 levels (Fig. 1E), suggesting that SAMHD1 may cleave and inactivate CNDAC-TP, although CNDAC had been proposed to be a SAMHD1 inhibitor [14]. Also, there was no correlation between cellular CNDAC-TP and DCK levels (Fig. 1F), but an inverse correlation between CNDAC-TP levels and SAMHD1 (Fig. 1G). This further supports the notion that SAMHD1 but not DCK critically determines CNDAC phosphorylation and activity. Notably, SAMHD1 promoter methylation (Fig. 1H) did not always reflect cellular SAMHD1 levels (Fig. 1A), indicating that multiple mechanisms regulate the cellular abundance of this protein.
The CNDAC IC50s also correlated with the cellular SAMHD1 levels in acute lymphoblastic leukaemia (ALL) cells (Fig. 1I, Suppl. Table 2). In agreement with previous findings [39], T-cell ALL (T-ALL) cells were characterised by lower SAMHD1 levels than B-ALL cells (Fig. 1J). This was reflected by higher CNDAC sensitivity (Fig. 1K) and higher CNDAC-TP levels (Fig. 1L) in T-ALL cells than in B-ALL cells. Taken together, these findings suggest that CNDAC is a SAMHD1 substrate and that SAMHD1 but not DCK critically determines CNDAC phosphorylation and activity in AML and ALL cells.
SAMHD1 suppression sensitises leukaemia cells to CNDAC
Functional studies further confirmed the impact of SAMHD1 on CNDAC activity. THP-1 AML cells, in which the SAMHD1 gene was disrupted using CRISPR/Cas9 (THP-1 KO cells), displayed increased CNDAC sensitivity (Fig. 2A) and CNDAC-TP levels (Fig. 2B) relative to control cells. Moreover, THP-1 KO cells showed enhanced DNA damage, as indicated by γH2AX levels, CHK2 phosphorylation, and TIF1β phosphorylation (Fig. 2C), and apoptosis, as indicated by PARP cleavage (Fig. 2C) and caspase 3/7 activity (Fig. 2D, Suppl. Table 3), in response to CNDAC. This is in line with the anticipated mechanism of action of CNDAC, i.e. CNDAC-TP incorporation into DNA resulting in strand breaks and apoptosis [1, 2, 12, 24–27].
SAMHD1 suppression sensitises AML cells to CNDAC. (A) CNDAC dose-response curves in SAMHD1 knockout THP-1 (THP-1 KO) cells, in which the SAMHD1 gene was disrupted using CRISPR/Cas9, or control cells (THP-1 CTRL). Values represent means ± SD of three independent experiments. Concentrations that reduce cell viability by 50% (IC50s) ± SD are provided. (B) Representative LC-MS/MS analysis of CNDAC triphosphate (CNDAC-TP) levels in THP-1 KO and THP-1 CTRL cells. (C) Representative Western blots indicating levels of proteins involved in DNA damage response in THP-1 KO and THP-1 CTRL cells after treatment with increasing CNDAC concentrations (0, 3.2, 16, 80, 400, and 2000 nM) for 72 h. (D) Caspase 3/7 activity in THP-1 KO and THP-1 CTRL cells after treatment with increasing CNDAC concentrations (0.015, 0.9375 and 60 μM) for 24, 48, and 72 h, relative to untreated controls. Mean ± SD is provided for one representative experiment out of three using three technical replicates. p-values were determined by two-tailed Student's t-tests (*p < 0.05; **p < 0.01; ***p < 0.001). (E) CNDAC IC50 values in AML cells after transfection with SAMHD1-siRNAs (siSAMHD1) or non-targeting control siRNAs (siCTRL). Values represent the means ± SD of three technical replicates of one representative experiment out of three. p-values were determined by two-tailed Student's t-tests (*p < 0.05; **p < 0.01; ***p < 0.001). (F) CNDAC dose-response curves in AML cell lines treated with CNDAC in the absence or presence of VPX virus-like particles (VPX-VLPs, cause SAMHD1 depletion), or VPR virus-like particles (VPR-VLPs, negative control). Values represent the means ± SD of three technical replicates of one representative experiment out of three. (G) Representative Western blots and LC-MS/MS analyses of CNDAC-TP levels in AML cells treated with VPX-VLPs or control VPR-VLPs
Further, SAMHD1 depletion using siRNA (Fig. 2E, Suppl. Figure 3) and virus-like particles (VLPs) carrying the lentiviral VPX protein (VPX-VLPs) [41] (Fig. 2F) increased the CNDAC sensitivity of AML cell lines. VPX-VLP-mediated SAMHD1 depletion was also associated with elevated CNDAC-TP levels (Fig. 2G). These findings further support a critical role of SAMHD1 in determining CNDAC sensitivity of AML cells.
SAMHD1 determines sensitivity of primary AML cells
CNDAC sensitivity also correlated with the cellular SAMHD1 levels in primary leukaemic blasts derived from the bone marrow of 24 therapy-naïve AML patients (Fig. 3A, Suppl. Figure 4, Suppl. Table 4). Moreover, primary leukaemic blasts were sensitised by VPX-VLPs to CNDAC (Fig. 3B, Fig. 3C, Suppl. Figure 5) and VPX-VLP-mediated SAMHD1 depletion resulted in increased CNDAC-TP levels in AML blasts (Fig. 3D, Fig. 3E). This shows that SAMHD1 also determines CNDAC sensitivity in clinical AML samples.
SAMHD1 determines CNDAC sensitivity of primary AML cells. (A) Correlation of SAMHD1 protein levels and CNDAC concentrations that reduce cell viability by 50% (IC50s) in bone-marrow-derived leukaemic blasts derived from 24 therapy-naïve AML patients. Cells were co-immunostained for CD33, CD34, CD45 (surface markers) and intracellular SAMHD1 and the mean fluorescence intensity (MFI) was analysed by flow cytometry. ATP assays were performed in three technical replicates to determine the CNDAC IC50 values. Linear regression analyses were performed using GraphPad Prism. (B) CNDAC IC50 values in bone-marrow-derived leukaemic blasts derived from six therapy-naïve AML patients either treated with VPX virus-like particles (VPX-VLPs, cause SAMHD1 depletion), VPR virus-like particles (VPR-VLPs, negative control), or left untreated. Horizontal lines and error bars indicate means ± SD of three technical replicates. p-values were determined by two-tailed Student's t-tests (*p < 0.05; **p < 0.01; ***p < 0.001). (C) CNDAC dose-response curves in primary AML cells of one exemplary patient (Patient E) treated with VPX-VLPs, VPR-VLPs or left untreated. IC50 values represent means ± SD of three technical replicates. (D) Representative Western blots indicating SAMHD1 levels in primary AML cells derived from Patient E in response to treatment with VPX-VLPs. (E) CNDAC-triphosphate (CNDAC-TP) levels as determined by LC-MS/MS in primary AML cells derived from Patient E in response to treatment with VPX-VLPs
SAMHD1 hydrolyses CNDAC triphosphate (CNDAC-TP)
Next, we studied the interaction of CNDAC-TP and SAMHD1 in an enzymatic assay. SAMHD1 forms a homotetramer complex that cleaves nucleoside triphosphates (Suppl. Figure 6). Tetramer formation depends on nucleoside triphosphate binding to the allosteric SAMHD1 sites 1 (A1) and A2. A1 is activated by guanosine triphosphate (GTP) or deoxyguanosine triphosphate (dGTP) binding. A2 can be activated by any canonical deoxynucleoside triphosphate (dNTP) and by some triphosphorylated deoxyribose-based nucleoside analogues such as cladribine-TP and decitabine-TP (Suppl. Figure 6) [14, 16, 17, 21, 36]. Arabinose-based nucleoside analogue triphosphates (e.g. cytarabine-TP, fludarabine-TP, or arabinosylguanine-TP (AraG-TP, the active metabolite of nelarabine)) and the triphosphate of the 2′-deoxy-2′-fluororibose-based nucleoside analogue clofarabine depend on the activation of A2 by canonical nucleotides [14, 21, 36].
Results from the enzymatic assay confirmed that SAMHD1 hydrolyses CNDAC-TP only in the presence of dGTP (Fig. 4A). This indicates that CNDAC-TP is a SAMHD1 substrate but is not able to activate the enzyme by binding to the allosteric sites A1 and A2.
CNDAC triphosphate (CNDAC-TP) is a SAMHD1 substrate. (A) Normalised results from a colorimetric SAMHD1 activity assay carried out in the presence of different combinations of GTP, dGTP and CNDAC-TP. Horizontal lines and error bars represent means ± SD from three independent experiments. (B) Surface view of SAMHD1 tetramer with each subunit in a different colour. CNDAC-TP in a catalytic pocket is shown in magenta sticks. (B, inset) CNDAC-TP bound to the SAMHD1 catalytic pocket. Black asterisks indicate the site of nitrile modification. The SAMHD1 backbone is shown as ribbons with side chains shown as sticks. A magnesium ion is shown as a green sphere and coordinated waters are shown as red spheres. Portions of the structure are omitted for clarity. (C) Chemical structure of CNDAC with 2′S nitrile modification highlighted (left). 2Fo-Fc electron density (σ = 1.0) for CNDAC-TP co-crystallized in the catalytic pocket of SAMHD1 (right). Black asterisks indicate site of nitrile modification. (D) Concentrations of physiological dNTPs in THP-1 cells determined by LC-MS/MS after pre-treatment with VPX virus-like particles (VPX-VLPs, cause SAMHD1 depletion), VPR virus-like particles (VPR-VLPs, negative control), and with or without cytarabine (Ara-C) or CNDAC. Bars and error bars represent means ± SD from three independent measurements. The lower limit of quantification (LLOQ) for dGTP was 0.2 ng/pellet, so values below the LLOQ were set to 0.2 ng/pellet (CNDAC and CNDAC + VPR). p-values were determined by two-tailed Student's t-tests (*p < 0.05; **p < 0.01; ***p < 0.001). (E) Ara-C IC50s in THP-1 cells in the presence of different CNDAC concentrations (0, 0.375, 0.75, 1.5 μM). Horizontal lines and error bars represent means ± SD of three technical replicates of one representative experiment out of three. p-values were determined by two-tailed Student's t-tests (*p < 0.05; **p < 0.01; ***p < 0.001)
Crystal structure of CNDAC-TP bound to SAMHD1
To investigate the interaction of CNDAC-TP and SAMHD1 further, we crystallised the catalytically inactive HD domain (residues 113–626; H206R, D207N) of SAMHD1 in the presence of GTP, dATP, and excess CNDAC-TP as previously described [21] and collected diffraction data to 2.8 Å. SAMHD1 crystallised as a tetramer with GTP and dATP occupying A1 and A2, respectively, and CNDAC-TP bound to the catalytic site (Fig. 4B, Suppl. Table 5).
Previous studies investigating the binding of triphosphorylated nucleoside analogues to SAMHD1 showed that modifications at the 2′ position of the ribose are major determinants of the interaction with the catalytic SAMHD1 site [21]. Modifications in the 2′R configuration abrogate binding to SAMHD1, while 2′S stereoisomers are more readily tolerated. Furthermore, the catalytic site accommodates larger 2′S modifications, whereas analogue binding at the A2 site is either impaired or fully blocked by 2′S fluorination or hydroxylation of the sugar ring, respectively [21].
Consistent with these observations, the CNDAC-TP-bound SAMHD1 adopts the same conformation as the canonical nucleotide-bound form (overall RMSD: 0.30 Å vs PDB ID 4BZB). The ribose 2′S nitrile modification of CNDAC-TP (Fig. 4C) protrudes outward from the catalytic pocket without affecting canonical nucleotide contacts with active site residues. CNDAC-TP is therefore easily accommodated in the catalytic site to serve as a substrate for SAMHD1 triphosphohydrolase activity. However, the large nitrile group of CNDAC-TP prevents binding to the more restrictive A2 site. Thus, CNDAC-TP alone is insufficient for SAMHD1 activation.
Impact of CNDAC on cellular levels of physiological nucleoside triphosphates and the activity of SAMHD1 substrates
The finding that CNDAC-TP is itself a substrate of SAMHD1 does not exclude the possibility that it also exerts inhibitory effects on SAMHD1, as previously suggested [14]. Hence, we investigated the effects of CNDAC on the levels of physiological deoxynucleoside triphosphates (dNTPs) and on the activity of cytarabine, the triphosphate of which is known to be a SAMHD1 substrate [41].
In contrast to VPX-VLPs, which served as a positive control for the suppression of SAMHD1 activity, CNDAC did not increase the levels of physiological dNTPs (Fig. 4D). Moreover, CNDAC did not increase the activity of cytarabine (Fig. 4E). Thus, these findings do not suggest a pharmacologically relevant activity of CNDAC as a SAMHD1 inhibitor in AML cells.
Clonal heterogeneity in SAMHD1 levels drives intrinsic AML cell resistance to CNDAC
When we established twelve single cell-derived clones of the AML cell line MV4–11 by limiting dilution (Fig. 5A), we observed an up to 332-fold difference in CNDAC sensitivity (CNDAC IC50 clone 1: 0.065 μM; CNDAC IC50 clone 11: 21.6 μM; Fig. 5B, Suppl. Figure 7). Moreover, the MV4–11 clones displayed substantial differences in cellular SAMHD1 levels (Fig. 5C), but no changes in SAMHD1 promoter methylation (Fig. 5D).
There was a significant correlation between SAMHD1 protein levels (but not the DCK protein levels) and the CNDAC IC50s (Fig. 5E), and siRNA-mediated SAMHD1 depletion resulted in increased CNDAC (but not daunorubicin) sensitivity in three selected clones displaying differing SAMHD1 levels (Fig. 5F, Suppl. Figure 8). The different effects of SAMHD1 on CNDAC- and daunorubicin-mediated toxicity suggest that SAMHD1 interferes with CNDAC activity predominantly by cleaving CNDAC-TP and not by generally augmenting DNA repair.
Differences in cellular SAMHD1 levels may affect cell proliferation [10, 22, 23, 49], but there was no significant correlation between the SAMHD1 (or DCK) levels of the MV4–11 clones and their doubling times (Fig. 5G).
Taken together, these findings confirm that the response to CNDAC is primarily driven by the SAMHD1 levels in CNDAC-naïve AML cells.
Acquired resistance to CNDAC is associated with decreased DCK levels
To investigate the role of SAMHD1 in acquired CNDAC resistance, we established twelve CNDAC-resistant sublines of each of the AML cell lines HL-60 and PL-21, which are characterised by low SAMHD1 levels (Fig. 1A) and high CNDAC sensitivity (Fig. 1B). Interestingly, none of the 24 resulting CNDAC-resistant sublines displayed increased SAMHD1 levels but all showed reduced, virtually non-detectable DCK levels (Fig. 6A). Among twelve single cell-derived clones of HL-60 and PL-21, none displayed similarly low DCK levels (Fig. 6A).
Clonal heterogeneity in SAMHD1 levels drives intrinsic resistance to CNDAC but not population doubling time in MV4-11 cells. (A) Schematic illustration of the establishment of MV4-11 single cell-derived clones by limiting dilution. (B) CNDAC concentrations that reduce viability of 12 single cell-derived MV4-11 clones by 50% (IC50). Values represent means ± SD of three independent experiments. (C) Representative Western blots of SAMHD1, phosphorylated SAMHD1 (pSAMHD1), and DCK in single cell-derived MV4-11 clones. GAPDH served as a loading control. (D) Analysis of SAMHD1 promoter methylation in MV4-11 clones through amplification of a single PCR product (993 bp) corresponding to the promoter sequence after HpaII digestion. (E) Correlation of the CNDAC IC50 values with cellular SAMHD1 or DCK protein levels, quantified using near-infrared Western blot images to determine the ratio SAMHD1/GAPDH or DCK/GAPDH. Closed circles and error bars represent means ± SD of three independent experiments, each performed in three technical replicates. Linear regression analyses were performed using GraphPad Prism. (F) Western blots and IC50 values for CNDAC and daunorubicin in MV4-11 clones 9, 11, and 12 after transfection with SAMHD1-siRNAs (siSAMHD1) or non-targeting control siRNAs (siCTRL). Each symbol represents the mean ± SD of three technical replicates of one representative experiment out of three. p-values were determined by two-tailed Student's t-test (*p < 0.05; **p < 0.01; ***p < 0.001). (G) Population doubling time (PDT) in MV4-11 single cell-derived clones and correlation of the PDT with cellular SAMHD1 or DCK protein levels. Closed circles and error bars represent means ± SD from the quantification of three Western blots. Linear regression analyses were performed using GraphPad Prism
Then, we determined resistance profiles in the CNDAC-resistant HL-60 and PL-21 sublines and the clonal HL-60 and PL-21 sublines to a set of cytotoxic (CNDAC, sapacitabine, cytarabine, clofarabine, cladribine, fludarabine, gemcitabine, decitabine, azacytidine, 6-thioguanine, daunorubicin) and targeted (venetoclax, vismodegib, olaparib, ganetespib, volasertib, gedatolisib, molibresib) drugs (Fig. 6B, Suppl. Table 6).
In addition to resistance to CNDAC and its prodrug sapacitabine, all CNDAC-adapted sublines also consistently displayed a markedly reduced sensitivity to the nucleoside analogues clofarabine, cladribine, fludarabine, gemcitabine, and decitabine, whose activation critically depends on monophosphorylation by DCK (Fig. 6B, Suppl. Table 6). In contrast, there was no cross-resistance to the nucleoside analogues azacytidine and 6-thioguanine, which are not DCK substrates, or to the anthracycline daunorubicin. This suggests that reduced DCK expression is the predominant acquired resistance mechanism in our panel of CNDAC-adapted AML cell lines.
This notion was also confirmed by the general lack of cross-resistance to targeted drugs with a range of different targets, including the smoothened receptor (vismodegib), PARP1 (olaparib), HSP90 (ganetespib), PLK1 (volasertib), and PI3K/mTOR (gedatolisib). There was some level of resistance to the BET inhibitor molibresib among the CNDAC-adapted sublines (Fig. 6B, Suppl. Table 6). However, some level of resistance to these drugs was also detected among the clonal HL-60 and PL-21 sublines (Fig. 6B, Suppl. Table 6), suggesting that the molibresib resistance is rather a consequence of clonal selection processes during resistance formation than part of the acquired CNDAC resistance mechanisms.
The Bcl-2 inhibitor venetoclax was the only targeted drug against which the CNDAC-adapted sublines displayed an increased level of resistance that was not detectable in the clonal sublines (Fig. 6B, Suppl. Table 6). This may indicate a generally increased resistance to apoptosis in the CNDAC-adapted sublines, consistent with apoptosis induction being part of the anti-cancer mechanism of action of CNDAC [27].
Taken together, our findings suggest that DCK downregulation is the major acquired CNDAC resistance mechanism in AML cells, potentially complemented by a generally reduced potential to undergo apoptosis.
Role of SAMHD1 and DCK in CNDAC cross-resistance of AML cell lines adapted to drugs from different classes
In contrast to the CNDAC-adapted AML cell lines introduced here, which displayed reduced DCK expression as the main acquired resistance mechanism, AML cell lines adapted to the SAMHD1 substrates cytarabine or decitabine were characterised by a combination of increased SAMHD1 levels and decreased DCK levels [36, 41].
CNDAC-adapted AML sublines displayed pronounced cross-resistance to nucleoside analogues that are activated by DCK but not to anti-leukaemia drugs with other mechanisms of action (Fig. 6). In a reversed setting, we next investigated CNDAC in a panel consisting of the AML cell line HL-60 and its sublines adapted to the nucleoside analogues cytarabine, Ara-G, azacytidine, and fludarabine, the purine antagonist 6-mercaptopurine, the Bcl-2 inhibitor venetoclax, the PARP inhibitor olaparib, and the polo-like kinase 1 inhibitor volasertib.
The nucleoside analogue-resistant HL-60 sublines displayed increased SAMHD1 and/or decreased DCK levels (Fig. 7A) and pronounced CNDAC resistance (Fig. 7B, Suppl. Figure 9), while little or no CNDAC resistance was detected in the remaining sublines (Fig. 7A, Fig. 7B, Suppl. Figure 9). Moreover, cellular SAMHD1 levels directly and cellular DCK levels inversely correlated with the CNDAC IC50s (Fig. 7C), indicating that enhanced SAMHD1 levels and reduced DCK levels contribute to cross-resistance to CNDAC. VPX-VLP-mediated SAMHD1 depletion sensitised nucleoside analogue-adapted HL-60 sublines to CNDAC to various extents (Fig. 7D), which probably reflects the relative importance of SAMHD1 and DCK levels for CNDAC resistance in these cell lines.
Acquired resistance to CNDAC is associated with decreased DCK levels and accompanied by cross-resistance to DCK-dependent nucleoside analogues. (A) Schematic illustrations of the establishment of CNDAC-resistant HL-60 and PL-21 cells by step-wise increasing drug concentrations during cell culture and of the establishment of single cell-derived clones by limiting dilution. Moreover, representative Western blots indicating SAMHD1 and DCK levels in CNDAC-adapted HL-60 (HL-60rCNDACI-XII) and PL-21 (PL-21rCNDACI-XII) sublines and in single cell-derived clonal sublines of these cell lines. GAPDH and β-actin served as loading controls. (B) Resistance profiles of CNDAC-adapted HL-60 and PL-21 sublines and single cell-derived clones of HL-60 and PL-21. Left spider webs show sensitivity to the cytotoxic drugs CNDAC, 6-Thioguanine (6-TG), Clofarabine (CLOF), Cladribine (CLAD), Fludarabine (FLU), Gemcitabine (GEM), Decitabine (DAC), 5-Azacytidine (AZA), Daunorubicin (DAU), Cytarabine (ARA-C), and Sapacitabine (SAP), while right spider webs display sensitivity to the targeted drugs Vismodegib (VISMO), Olaparib (OLA), Ganetespib (GANE), Gedatolisib (GEDA), Volasertib (VOLA), Molibresib (MOLI), and Venetoclax (VENE). Values are depicted as fold changes in drug concentrations that reduce cell viability by 50% (IC50s) between the respective parental AML cell line (shown in red) and the resistant cell lines or clones. Points closer to the centre than red lines indicate higher sensitivity to drugs in CNDAC-resistant sublines or clonal sublines than in parental cell lines, while points lying outside red lines indicate reduced sensitivity to the respective drug. IC50 fold changes are shown as means from three independent experiments. Numerical values are provided in Supplementary Table 6
SAMHD1 and DCK regulate CNDAC cross-resistance of AML cell lines adapted to drugs from different classes. (A) Representative Western blots of SAMHD1, phosphorylated SAMHD1 (pSAMHD1), DGK, and DCK in HL-60 sublines adapted to cytarabine (Ara-C), arabinosylguanine (Ara-G), 5-azacytidine (AZA), fludarabine (FLU), 6-mercaptopurine (6-MP), venetoclax (VENE), olaparib (OLA), and volasertib (VOLA). GAPDH served as loading control. (B) CNDAC concentrations that reduce cell viability by 50% (IC50s) in drug-adapted HL-60 sublines. Horizontal lines and error bars represent means ± SD of three independent experiments, each performed in three technical replicates. p-values were determined by two-tailed Student's t-tests (*p < 0.05; **p < 0.01; ***p < 0.001). (C) Correlation of CNDAC IC50 values with cellular SAMHD1 or DCK protein levels, quantified using the near-infrared Western blot image shown in (A) to determine the ratio SAMHD1/GAPDH or DCK/GAPDH. (D) CNDAC dose-response curves in drug-adapted HL-60 sublines in the absence or presence of VPX virus-like particles (VPX-VLPs, cause SAMHD1 depletion) or VPR virus-like particles (VPR-VLPs, negative control). Each symbol represents the mean ± SD of three technical replicates of one representative experiment out of three. Concentrations that reduce AML cell viability by 50% (IC50s) ± SD and Western blots showing SAMHD1 degradation by VPX-VLPs are provided. (E) CNDAC or cytarabine (Ara-C) dose-response curves in cytarabine-adapted MV4-11 or MOLM-13 cells (characterised by loss of DCK expression) stably transduced with either DCK (pWPI+DCK) or an empty vector (pWPI) in the absence or presence of VPX virus-like particles (VPX-VLPs), or VPR virus-like particles (VPR-VLPs). Each symbol represents the mean ± SD of three technical replicates of one representative experiment out of three. IC50s (mean ± SD) and Western blots showing successful transduction with DCK and SAMHD1 degradation by VPX-VLPs are provided
Next, we used cytarabine-adapted MV4–11 and MOLM-13 sublines to further study the role of SAMHD1 and DCK in cross-resistance of nucleoside analogue-adapted AML cells to CNDAC (Fig. 7E). In both cell lines, VPX-VLP-mediated SAMHD1 depletion resulted in reduced CNDAC IC50s, which further decreased upon forced DCK expression. Similar findings were obtained with regard to the cytarabine resistance in these two cell lines (Fig. 7E). This confirms that, in principle, both cellular SAMHD1 and DCK levels are involved in determining AML cell sensitivity to CNDAC (and cytarabine). However, as shown in this study, intrinsic and acquired CNDAC resistance differ in AML cells: intrinsic CNDAC resistance is predominantly driven by high SAMHD1 levels, whereas acquired CNDAC resistance is driven by a reduction in DCK.
Discussion
The findings of this study indicate that in AML cells intrinsic CNDAC resistance is predominantly driven by SAMHD1, whereas acquired CNDAC resistance is primarily caused by reduced DCK levels. This difference is of potential clinical significance, because SAMHD1 is a candidate biomarker for predicting CNDAC sensitivity in therapy-naïve patients, while DCK is a candidate biomarker for the early detection of resistance formation.
SAMHD1 is known to interfere with the activity of a range of anti-cancer nucleoside analogues as a triphosphohydrolase that cleaves the activated nucleoside analogue triphosphates [13, 21, 36, 39, 41; Xagoraris et al., 2021]. The finding that SAMHD1 levels critically determine AML (and ALL) cell sensitivity to CNDAC is nevertheless somewhat unexpected, as CNDAC had originally been suggested to be a SAMHD1 inhibitor [14].
However, data from a large range of cell line models (including clonal AML sublines characterised by varying SAMHD1 levels) and patient samples demonstrated that high SAMHD1 levels are associated with reduced CNDAC sensitivity and that CRISPR/Cas9-, siRNA-, and VPX-VLP (virus-like particles carrying the lentiviral VPX protein)-mediated SAMHD1 depletion increases cellular CNDAC-TP levels and sensitises AML cells to CNDAC. In agreement, enzymatic assays and crystallisation studies showed that CNDAC-TP is cleaved by SAMHD1 but, in contrast to some other nucleoside analogues [14, 16, 17, 21, 36], cannot activate SAMHD1 via binding to the A2 site.
Moreover, the determination of physiological dNTP levels in the presence of CNDAC and combination experiments with the SAMHD1 substrate cytarabine did not provide evidence that CNDAC functions as a pharmacological SAMHD1 inhibitor in leukaemia cells.
Although cellular SAMHD1 levels, but not those of DCK, which is critical for CNDAC phosphorylation and activation [15, 28; Wu et al., 2021], predominantly determined CNDAC sensitivity in CNDAC-naïve cells, the establishment of 24 CNDAC-resistant AML sublines consistently resulted in a loss of DCK but not in an increase of SAMHD1. This differs from acquired resistance mechanisms against the nucleoside analogues cytarabine and decitabine, which were found to include both increased SAMHD1 levels and decreased DCK levels [36, 41]. Two previously established CNDAC-adapted cancer cell lines had been shown to display reduced DCK levels, but a contribution of SAMHD1 had not been investigated [34, 35].
CNDAC-adapted AML sublines consistently displayed cross-resistance to other nucleoside analogues known to be activated by DCK, but no pronounced cross-resistance to other drugs with various mechanisms of action, further indicating that loss of DCK is the crucial resistance mechanism in CNDAC-adapted cells. Moreover, these data show that drugs that do not depend on DCK for activation remain viable treatment options after the acquisition of CNDAC resistance.
Similarly, among AML sublines adapted to a range of different anti-cancer drugs, only the sublines adapted to nucleoside analogues, which displayed increased SAMHD1 and/or decreased DCK levels, were less sensitive to CNDAC. Thus, acquired resistance to anti-leukaemic drugs other than DCK-dependent nucleoside analogues is unlikely to affect the efficacy of CNDAC.
Cytarabine- and decitabine-adapted AML cell lines are characterised by increased SAMHD1 levels and/or reduced DCK levels, as demonstrated previously [36, 41]. Although acquired CNDAC resistance was mediated by decreased DCK levels alone, both increased SAMHD1 levels and decreased DCK levels contributed to the cross-resistance of cytarabine-adapted cells to CNDAC. In the future, it will be interesting to investigate why acquired resistance mechanisms differ between CNDAC-adapted cells on the one hand and cytarabine- and decitabine-adapted cells on the other hand.
Conclusions
Intrinsic AML cell response to CNDAC critically depends on cellular SAMHD1 levels, whereas acquired CNDAC resistance is predominantly mediated by reduced DCK levels. This adds to data demonstrating differences between intrinsic and acquired resistance mechanisms [31, 36, 40, 44]. These findings also indicate that SAMHD1 is a candidate biomarker for predicting CNDAC response in the intrinsic resistance setting, while DCK is a potential biomarker for the early detection of therapy failure in the acquired resistance setting. Moreover, CNDAC-adapted cells displayed no or limited cross-resistance to drugs whose activity is not influenced by DCK or SAMHD1. Similarly, CNDAC was still effective in cells adapted to drugs that are not affected by DCK or SAMHD1. These findings indicate treatment options after therapy failure.
The atomic coordinates and structure factors have been deposited in the Protein Data Bank, www.wwpdb.org. The PDB ID code will be added upon publication. The Preliminary Full wwPDB X-ray Structure Validation Report is provided as supplement.
Abbreviations
ALL:
acute lymphoblastic leukaemia
AML:
acute myeloid leukaemia
AraC:
cytarabine
AraG:
arabinosylguanine
AZA:
5-azacytidine
CLAD:
cladribine
CLOF:
clofarabine
CNDAC:
2′-C-cyano-2′-deoxy-1-β-D-arabino-pentofuranosyl-cytosine
CNDAC-TP:
CNDAC triphosphate
DAU:
daunorubicin
DAC:
decitabine
DCK:
deoxycytidine kinase
dGTP:
deoxyguanosine triphosphate
dNTP:
deoxynucleoside triphosphate
FLUDA:
fludarabine
GANE:
ganetespib
GEDA:
gedatolisib
GEM:
gemcitabine
GTP:
guanosine triphosphate
IC50:
concentration that reduces cell viability by 50%
IS:
internal standard
MNC:
mononuclear cells
MOLI:
molibresib
MOLM-13rAraC2μg/ml-pWPI:
MOLM-13rAraC2μg/ml cells transduced with a control vector
MOLM-13rAraC2μg/ml-pWPI+DCK:
DCK-expressing MOLM-13rAraC2μg/ml cells
6-MP:
6-mercaptopurine
MRM:
multiple reaction monitoring
MTT:
3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide
MV4–11rAraC2μg/ml-pWPI cells:
MV4–11rAraC2μg/ml cells transduced with a control vector
MV4–11rAraC2μg/ml-pWPI+DCK:
DCK-expressing MV4–11rAraC2μg/ml cells
OLA:
olaparib
PDT:
population doubling time
RCCL collection:
Resistant Cancer Cell Line collection
SAMHD1:
sterile alpha motif and histidine-aspartate domain-containing protein 1
SAP:
sapacitabine
6-TG:
6-thioguanine
THP-1 CTRL:
THP-1 control cells
THP-1 KO:
SAMHD1-deficient THP-1
VENE:
venetoclax
VISMO:
vismodegib
VLP:
virus-like particle
VOLA:
volasertib
References
Al Abo M, Sasanuma H, Liu X, Rajapakse VN, Huang SY, Kiselev E, et al. TDP1 is critical for the repair of DNA breaks induced by Sapacitabine, a nucleoside also targeting ATM- and BRCA-deficient tumors. Mol Cancer Ther. 2017;16(11):2543–51. https://doi.org/10.1158/1535-7163.MCT-17-0110.
Azuma A, Huang P, Matsuda A, Plunkett W. 2′-C-cyano-2′-deoxy-1-beta-D-arabino-pentofuranosylcytosine: a novel anticancer nucleoside analog that causes both DNA strand breaks and G(2) arrest. Mol Pharmacol. 2001;59(4):725–31. https://doi.org/10.1124/mol.59.4.725.
Baldauf HM, Pan X, Erikson E, Schmidt S, Daddacha W, Burggraf M, et al. SAMHD1 restricts HIV-1 infection in resting CD4(+) T cells. Nat Med. 2012;18(11):1682–7. https://doi.org/10.1038/nm.2964.
Bukowski K, Kciuk M, Kontek R. Mechanisms of multidrug resistance in Cancer chemotherapy. Int J Mol Sci. 2020;21(9):3233. https://doi.org/10.3390/ijms21093233.
Cinatl J Jr, Cinatl J, Weber B, Rabenau H, Gümbel HO, Kornhuber B, et al. Replication of human herpesvirus type 6 (strain AJ) in JJHan cells grown in protein-free medium. Res Virol. 1995;146(2):125–9. https://doi.org/10.1016/0923-2516(96)81081-8.
Czemerska M, Robak T, Wierzbowska A. The efficacy of sapacitabine in treating patients with acute myeloid leukemia. Expert Opin Pharmacother. 2018;19(16):1835–9. https://doi.org/10.1080/14656566.2018.1524875.
de Silva S, Hoy H, Hake TS, Wong HK, Porcu P, Wu L. Promoter methylation regulates SAMHD1 gene expression in human CD4+ T cells. J Biol Chem. 2013;288(13):9284–92. https://doi.org/10.1074/jbc.M112.447201.
Emsley P, Lohkamp B, Scott WG, Cowtan K. Features and development of coot. Acta Crystallogr D Biol Crystallogr. 2010;66(Pt 4):486–501. https://doi.org/10.1107/S0907444910007493.
Fenton TR, Garrett MD, Wass MN, Michaelis M. What really matters - response and resistance in cancer therapy. Cancer Drug Resist. 2018;1:200–3. https://doi.org/10.20517/cdr.2018.19.
Franzolin E, Pontarin G, Rampazzo C, Miazzi C, Ferraro P, Palumbo E, et al. The deoxynucleotide triphosphohydrolase SAMHD1 is a major regulator of DNA precursor pools in mammalian cells. Proc Natl Acad Sci U S A. 2013;110(35):14272–7. https://doi.org/10.1073/pnas.1312033110.
Goldstone DC, Ennis-Adeniran V, Hedden JJ, Groom HC, Rice GI, Christodoulou E, et al. HIV-1 restriction factor SAMHD1 is a deoxynucleoside triphosphate triphosphohydrolase. Nature. 2011;480(7377):379–82. https://doi.org/10.1038/nature10623.
Hanaoka K, Suzuki M, Kobayashi T, Tanzawa F, Tanaka K, Shibayama T, et al. Antitumor activity and novel DNA-self-strand-breaking mechanism of CNDAC (1-(2-C-cyano-2-deoxy-beta-D-arabino-pentofuranosyl) cytosine) and its N4-palmitoyl derivative (CS-682). Int J Cancer. 1999;82(2):226–36. https://doi.org/10.1002/(sici)1097-0215(19990719)82:2<226::aid-ijc13>3.0.co;2-x.
Herold N, Rudd SG, Sanjiv K, Kutzner J, Bladh J, Paulin CBJ, et al. SAMHD1 protects cancer cells from various nucleoside-based antimetabolites. Cell Cycle. 2017;16(11):1029–38. https://doi.org/10.1080/15384101.2017.1314407.
Hollenbaugh JA, Shelton J, Tao S, Amiralaei S, Liu P, Lu X, et al. Substrates and inhibitors of SAMHD1. PLoS One. 2017;12(1):e0169052. https://doi.org/10.1371/journal.pone.0169052.
Homminga I, Zwaan CM, Manz CY, Parker C, Bantia S, Smits WK, et al. In vitro efficacy of forodesine and nelarabine (ara-G) in pediatric leukemia. Blood. 2011;118(8):2184–90. https://doi.org/10.1182/blood-2011-02-337840.
Ji X, Wu Y, Yan J, Mehrens J, Yang H, DeLucia M, et al. Mechanism of allosteric activation of SAMHD1 by dGTP. Nat Struct Mol Biol. 2013;20(11):1304–9. https://doi.org/10.1038/nsmb.2692.
Ji X, Tang C, Zhao Q, Wang W, Xiong Y. Structural basis of cellular dNTP regulation by SAMHD1. Proc Natl Acad Sci U S A. 2014;111(41):E4305–14. https://doi.org/10.1073/pnas.1412289111.
Kantarjian H, Garcia-Manero G, O'Brien S, Faderl S, Ravandi F, Westwood R, et al. Phase I clinical and pharmacokinetic study of oral sapacitabine in patients with acute leukemia and myelodysplastic syndrome. J Clin Oncol. 2010;28(2):285–91. https://doi.org/10.1200/JCO.2009.25.0209.
Kantarjian H, Faderl S, Garcia-Manero G, Luger S, Venugopal P, Maness L, et al. Oral sapacitabine for the treatment of acute myeloid leukaemia in elderly patients: a randomised phase 2 study. Lancet Oncol. 2012;13(11):1096–104. https://doi.org/10.1016/S1470-2045(12)70436-9.
Kantarjian HM, Jabbour EJ, Garcia-Manero G, Kadia TM, DiNardo CD, Daver NG, et al. Phase 1/2 study of DFP-10917 administered by continuous intravenous infusion in patients with recurrent or refractory acute myeloid leukemia. Cancer. 2019;125(10):1665–73. https://doi.org/10.1002/cncr.31923.
Knecht KM, Buzovetsky O, Schneider C, Thomas D, Srikanth V, Kaderali L, et al. The structural basis for cancer drug interactions with the catalytic and allosteric sites of SAMHD1. Proc Natl Acad Sci U S A. 2018;115(43):E10022–31. https://doi.org/10.1073/pnas.1805593115.
Kodigepalli KM, Bonifati S, Tirumuru N, Wu L. SAMHD1 modulates in vitro proliferation of acute myeloid leukemia-derived THP-1 cells through the PI3K-Akt-p27 axis. Cell Cycle. 2018;17(9):1124–37. https://doi.org/10.1080/15384101.2018.1480218.
Kohnken R, Kodigepalli KM, Wu L. Regulation of deoxynucleotide metabolism in cancer: novel mechanisms and therapeutic implications. Mol Cancer. 2015;14(1):176. https://doi.org/10.1186/s12943-015-0446-6.
Liu X, Guo Y, Li Y, Jiang Y, Chubb S, Azuma A, et al. Molecular basis for G2 arrest induced by 2′-C-cyano-2′-deoxy-1-beta-D-arabino-pentofuranosylcytosine and consequences of checkpoint abrogation. Cancer Res. 2005;65(15):6874–81. https://doi.org/10.1158/0008-5472.CAN-05-0288.
Liu X, Matsuda A, Plunkett W. Ataxia-telangiectasia and Rad3-related and DNA-dependent protein kinase cooperate in G2 checkpoint activation by the DNA strand-breaking nucleoside analogue 2′-C-cyano-2′-deoxy-1-beta-D-arabino-pentofuranosylcytosine. Mol Cancer Ther. 2008;7(1):133–42. https://doi.org/10.1158/1535-7163.MCT-07-0416.
Liu X, Jiang Y, Nowak B, Qiang B, Cheng N, Chen Y, et al. Targeting BRCA1/2 deficient ovarian cancer with CNDAC-based drug combinations. Cancer Chemother Pharmacol. 2018;81(2):255–67. https://doi.org/10.1007/s00280-017-3483-6.
Liu X, Jiang Y, Takata KI, Nowak B, Liu C, Wood RD, et al. CNDAC-induced DNA double-Strand breaks cause aberrant mitosis prior to cell death. Mol Cancer Ther. 2019;18(12):2283–95. https://doi.org/10.1158/1535-7163.MCT-18-1380.
Lotfi K, Juliusson G, Albertioni F. Pharmacological basis for cladribine resistance. Leuk Lymphoma. 2003;44(10):1705–12. https://doi.org/10.1080/1042819031000099698.
McCoy AJ, Grosse-Kunstleve RW, Adams PD, Winn MD, Storoni LC, Read RJ. Phaser crystallographic software. J Appl Crystallogr. 2007;40(Pt 4):658–74. https://doi.org/10.1107/S0021889807021206.
Michaelis M, Rothweiler F, Barth S, Cinatl J, van Rikxoort M, Löschmann N, et al. Adaptation of cancer cells from different entities to the MDM2 inhibitor nutlin-3 results in the emergence of p53-mutated multi-drug-resistant cancer cells. Cell Death Dis. 2011;2(12):e243. https://doi.org/10.1038/cddis.2011.129.
The study was supported by the Frankfurter Stiftung für krebskranke Kinder and the Hilfe für krebskranke Kinder Frankfurt e.V. Open Access funding enabled and organized by Projekt DEAL.
Institute for Medical Virology, Goethe-University, Frankfurt am Main, Germany
Tamara Rothenburger, Florian Rothweiler, Denisa Bojkova, Rui Costa & Jindrich Cinatl Jr
Faculty of Biological Sciences, Goethe-University, Frankfurt am Main, Germany
Tamara Rothenburger, Berna Bilen & Samira Farmand
Pharmazentrum frankfurt/ZAFES, Institute of Clinical Pharmacology, Goethe University of Frankfurt, Frankfurt, Germany
Dominique Thomas, Yannick Schreiber, Nerea Ferreirós & Gerd Geisslinger
Max von Pettenkofer Institute & Gene Center, Virology, National Reference Center for Retroviruses, Faculty of Medicine, LMU München, Munich, Germany
Paul R. Wratil, Tamara Pflantz, Hanna-Mari Baldauf & Oliver T. Keppler
German Center for Infection Research (DZIF), Partner Site Munich, Munich, Germany
Paul R. Wratil, Tamara Pflantz & Oliver T. Keppler
Department of Molecular Biophysics and Biochemistry, Yale University, New Haven, CT, USA
Kirsten Knecht, Katie Digianantonio, Joshua Temple & Yong Xiong
Department of Pediatric Oncology, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, USA
Constanze Schneider
School of Biosciences, University of Kent, Canterbury, UK
Katie-May McLaughlin, Mark N. Wass & Martin Michaelis
Fraunhofer Institute for Molecular Biology and Applied Ecology (IME), Project group Translational Medicine and Pharmacology (TMP), Frankfurt am Main, Germany
Gerd Geisslinger
Department of Hematology/Oncology, Goethe-University, Frankfurt am Main, Germany
Thomas Oellerich
Molecular Diagnostics Unit, Frankfurt Cancer Institute, Frankfurt am Main, Germany
German Cancer Consortium/German Cancer Research Center, Heidelberg, Germany
TR, DT, YS, PRW, TP, KK, KD, JT, CS, HB, KM, FR, BB, SF, DB, RC, NF, MNW, and JC performed experiments. All authors analysed data. JC and MM conceptualised and directed the study. TR, JC, and MM wrote the initial manuscript draft. All authors read and approved the final manuscript.
Correspondence to Martin Michaelis or Jindrich Cinatl Jr.
The use of peripheral blood and bone marrow aspirates was approved by the Ethics Committee of Frankfurt University Hospital (approval no. SHN-03-2017). All patients gave informed consent to the collection of samples and to the scientific analysis of their data and of biomaterial obtained for diagnostic purposes according to the Declaration of Helsinki.
Additional file 1: Supplementary Figure 1.
Original uncropped Western Blots.
Dose-response curves of AML cell lines treated with CNDAC.
SAMHD1 suppression by siRNAs sensitises AML cells to CNDAC.
Analysis of primary AML blasts.
SAMHD1 suppression by VPX-VLPs sensitises primary AML blasts to CNDAC.
Illustration of SAMHD1 homotetramerisation and the role of CNDAC-TP.
Dose-response curves of MV4–11 clones treated with CNDAC.
SAMHD1 suppression by siRNAs sensitises MV4–11 clones to CNDAC, but not to Daunorubicin.
Dose-response curves of drug resistant HL-60 cells.
Additional file 10: Supplementary Table 1.
CNDAC concentrations that reduce AML cell viability by 50% (IC50), relative SAMHD1 protein levels, quantified using near-infrared Western blot images to determine the ratio SAMHD1/ GAPDH, and CNDAC-triphosphate levels determined by LC-MS/MS.
CNDAC concentrations that reduce ALL cell line viability by 50% (IC50), relative SAMHD1 protein levels quantified using near-infrared Western blot images to determine the ratio SAMHD1/ GAPDH relative to the positive control THP-1, and CNDAC-triphosphate levels determined by LC-MS/MS.
Relative Caspase 3/7 activity in CRISPR–Cas9-mediated SAMHD1 knockout THP-1 cells (THP-1 KO) or control cells (THP-1 CTRL) following treatment with CNDAC for 24, 48, or 72 h, shown as fold-change compared to the respective untreated control.
Characteristics of AML patients analysed in the data presented in Fig. 3.
Data collection and refinement statistics for structure of SAMHD1 HD bound to CNDAC-triphosphate.
Concentrations that reduce the cell viability by 50% (IC50 values), determined by MTT assay after 96 h incubation.
Rothenburger, T., Thomas, D., Schreiber, Y. et al. Differences between intrinsic and acquired nucleoside analogue resistance in acute myeloid leukaemia cells. J Exp Clin Cancer Res 40, 317 (2021). https://doi.org/10.1186/s13046-021-02093-4
Acute myeloid leukemia
CNDAC
DCK
Intrinsic resistance
Acquired resistance
Prevalence of psychological symptoms among adults with sickle cell disease in Korle-Bu Teaching Hospital, Ghana
Michael Tetteh Anim,
Joseph Osafo &
Felix Yirdong
Previous research has revealed a high prevalence of psychological symptoms among sickle cell disease (SCD) patients in the West and Europe. In some Black SCD populations, such as Nigeria and Jamaica, anxiety and depression had low prevalence rates compared to Europe. Because research data on the prevalence of psychological symptoms in Ghana are difficult to locate, this study aimed to explore psychological symptoms among adults with SCD in a Teaching Hospital in Accra, Ghana.
Two hundred and one participants (males 102 and females 99) who were HbSS (n = 131) and HbSC (n = 70), aged 18 years and above were purposively recruited. Using the Brief Symptom Inventory (BSI) in a cross-sectional survey, the research answered questions about the prevalence of psychological symptoms. It also examined gender and genotype differences in psychological symptoms scores.
Results indicated that adults with SCD had non-distress psychological symptoms scores. Although paranoid ideation indicated an "a little bit" score, its prevalence was only 1 %. The prevalence of psychological symptoms as indexed by the Positive Symptom Total (PST) was 10 %. Anxiety, hostility, and depression were psychological symptoms with low scores. Furthermore, except for psychoticism scores, males did not differ significantly from females on the other psychological symptoms. In contrast, HbSS participants differed significantly, reporting more psychological symptoms than their HbSC counterparts.
The study concluded that there was low prevalence of psychological symptoms among adults with SCD in this Ghanaian study. Although psychological symptoms distress scores were not observed among study participants at this time, females differed significantly by experiencing more psychoticism symptoms than males. HbSS participants also differed significantly by experiencing more depression, phobic anxiety, paranoid ideation, psychoticism, and additional symptoms such as poor appetite, trouble falling asleep, thoughts of dying, and feeling guilty, than their HbSC counterparts. Implications for further study and clinical practice were discussed.
SCD is a major genetic disease that negatively impacts individuals in Sub-Saharan African countries [1]. According to the WHO [1], the disease affects hemoglobin. This results in frequent pain and medical problems that in turn negatively affect patients' education, employment, and psychosocial development [1].
The WHO further noted the highest prevalence of hemoglobin AS in Africa as occurring "between latitudes 15° North and 20° South, ranging between 10 and 40 % of the population in some areas. Prevalence levels decrease to between 1 and 2 % in North Africa and less than 1 % in southern Africa" [1]. Ghana, Nigeria, Cameroon, Republic of Congo, and Gabon have prevalence between 20 and 30 %, while it is as high as 45 % in some parts of Uganda [1–3].
The sickle cell gene has maintained such high prevalence levels in tropical Africa because the sickle cell trait partially protects against malaria [1, 4]. However, individuals who are homozygous for the S gene do not have defense against malaria and consequently suffer from severe sickle cell disease, with many dying before reaching reproductive age [1]. Such HbSS individuals usually die from an infection or severe anemia [5]. Those who survive into adulthood remain susceptible to exacerbations of the disease and its medical and psychosocial complications [6].
With the present lack of a cure, many adults with SCD are believed to live in fear of early death or to have death anxiety and many other psychological complications [7]. There are effective treatments using painkillers for sickle cell pain, and other complications of sickle cell disease are treated using antibiotics. Rest, a balanced diet, folic acid supplementation and high fluid intake, plus occasionally needed aggressive procedures such as blood transfusion and surgery, are used [8]. However, psychological difficulties accompany these medical complications and treatments. According to Anie [7], Becker, Axelrod, Oyesanmi, Markov and Kunkel [9] and Levenson et al. [10], psychological complications and "psychiatric issues are common in sickle cell disease" [11].
Psychological symptoms have been reported in western literature to be highly prevalent among adults with sickle cell disease [10–13]. Depression rates, for example, are comparable to those found in other serious chronic medical diseases, ranging "from 18 to 44 %" [11, 14–16]. Depression rates among people living with sickle cell disease are higher than rates in the general population even after controlling for illness-related physical symptoms [11, 17]. Twenty-seven and a half percent of adults with sickle cell disease were reported in the PiSCES study as having depression and 6.5 % as having anxiety [10]. The PiSCES project study found that depressed and anxious persons with sickle cell disease functioned poorly and used opioids and hospital emergency services frequently [11].
In Africa, however, published literature about psychological symptoms in SCD is scarce. It is only in Nigeria in West Africa that the prevalence of specific psychological symptoms in SCD has been reported. Prevalence of depression, for example, was reported to be higher among sickle cell participants in a Nigerian study than among cancer or malaria study participants. Depression, however, was reported to be lower in persons with SCD than in persons living with HIV/AIDS [11, 18]. A similar study examined the psychosocial impact of sickle cell disorder in a Nigerian setting. From a sample of 408 adolescents and adults attending three hospitals in Lagos, Nigeria, the authors found depression to be commonly experienced among half of the study participants, while feelings of anxiety and self-hate were uncommon [19].
Psychological symptoms have implications for physical complaints in sickle cell disease, which emphasizes the importance of studying such symptoms among persons with SCD. Psychological symptoms are known to contribute to vaso-occlusive crises and other physical complaints. For example, major depression was reported to increase sickle cell chronic disease patients' burden of physical illness and symptoms, their functional disabilities and medical costs [20]. Some researchers reported that it is better to consider psychological variables as contributing to the onset of sickle cell pain. For example, Pells and colleagues [21] found that higher levels of kinesiophobia were associated with greater psychological distress. Because the analysis was correlational, their findings leave open whether psychological distress increased kinesiophobia or kinesiophobia increased psychological distress. The psychological symptoms that were associated with higher levels of kinesiophobia were Phobic Anxiety, Psychoticism, Somatization, Anxiety, Obsessive-Compulsive, Interpersonal Sensitivity, and Depression. Other research found that the psychological problems that sickle cell disease patients most frequently encountered were increased anxiety, depression, social withdrawal, aggression, poor relationships, and poor school performance [22].
Elsewhere, it was found that stigmatization in SCD for pseudo-addiction to opioid analgesics was also related to anxiety and depression [10, 23]. Depression was found to predict physical and mental health-related quality of life more powerfully than genotype [10]. Depression in individuals with SCD is associated with increased emergency room treatments, hospital admissions, chronic pain flares, SCD crises, and higher levels of related psychological disorders.
A further reason for examining psychological symptoms is that symptoms of fatigue, appetite disturbance, and irritability are present both in sickle cell anemia and in clinical depression. Patients with the most clinically severe pain also show the greatest prevalence of depression [14, 15].
An association between anxiety, poorer health-related quality of life, and more pain in SCD was established [10]. Therefore, Levenson and colleagues [10] concluded that anxiety and depression predicted more daily pain and poorer physical and mental quality of life in adults with SCD. These findings point out the importance of recognizing and treating psychological symptoms, particularly anxiety and depression, in adults with SCD.
Although it is a challenge to determine the exact prevalence rate of a psychological disorder in any given population, some countries, as mentioned above, have attempted it and have figures that guide action, policy and research. This is not the case in Ghana, which has no national statistics on the prevalence rates of psychological symptoms among sickle cell disease patients. Against this background, this study aimed to investigate the prevalence of and explore psychological symptoms among SCD participants in Accra, Ghana.
Subsequently, the following research questions were posed:
Is there high prevalence of psychological symptoms among adults with SCD?
Is there a significant difference in the mean psychological symptoms score for males and females?
Is there a significant difference in the mean psychological symptoms score for HbSS and HbSC persons?
The study was conceptualized based on the existing theoretical view that women experience and display more psychological symptoms than men [24] and that HbSS persons experience more pain and severe psychological distress than HbSC persons [5, 25–27].
This prevalence study was necessary because the lack of baseline data on psychological symptoms makes providing specific psychological services to individuals with sickle cell disease uncertain. The study contributed knowledge of the prevalence of psychological symptoms from the Ghanaian experience. It contributed information that is lacking on gender and genotype differences in the experience of psychological symptoms. The study reinforced the finding of low prevalence of psychological symptoms among adults with SCD who live in non-westernized countries, as against the high prevalence among those who live in highly industrialized countries. It hints at possible geographical and socio-cultural factors that differentiate psychological symptoms prevalence rates across the world.
Two hundred and one adults with SCD were recruited from the sickle cell clinic at the Korle-Bu Teaching Hospital, Accra, Ghana. The GPower software program was used to determine an adequate sample size to reduce the risk of type 2 error. One hundred and seventy-five participants were adequate to recruit for the study, but the researchers contacted 229 potential participants to allow for missing questionnaires and to make up for those who would withdraw or not consent. Nine potential participants did not consent, and 19 either poorly completed the questionnaires or did not return them. Study participants were selected from a total of about 11,230 patients (Korle-Bu Sickle Cell Clinic records, 2014). The 201 study participants comprised 102 males (51 %) and 99 females (49 %). Participants with the HbSS genotype represented 65 % and HbSC represented 35 %. They were sampled purposively to satisfy inclusion criteria of having either HbSS or HbSC, being 18 years or above, and being able to read and understand English. They were not in crisis and gave both verbal and written consent to participate in the study.
The cross-sectional survey design was used for the study. This was the most appropriate design according to Smith and Davis [28] to survey the opinions of persons of interest at a point in time. It is also the most appropriate design for conducting prevalence studies [29].
Measuring instruments and materials
The participants completed demographic data about their gender, age, marital status, and education, and their medical records were consulted to confirm their sickling diagnosis. The Brief Symptom Inventory (BSI) [30] is a 53-item instrument that measures psychological symptoms. Respondents rated items on a five-point scale from 0 representing "not at all" to 4 representing "extremely." The BSI has nine subscales: Somatization, Obsessive-Compulsive, Interpersonal Sensitivity, Depression, Anxiety, Hostility, Phobic Anxiety, Paranoid Ideation, and Psychoticism.
The BSI showed suitable reliability and validity values. It has internal consistency alpha coefficients that range from 0.71 to 0.85 for the nine symptoms subscales. It has test-retest reliability coefficients that range from 0.68 (Somatization) to 0.91 (Phobic Anxiety) [30].
To measure overall psychological distress, the Global Severity Index (GSI) was used [30]. The GSI is the average of the psychological distress scores on all nine symptom dimensions; higher scores indicate greater psychological distress. Test-retest reliability for the three Global Indices ranged from .87 (PSDI) to .90 (GSI). In the current study, Cronbach's alpha coefficients for the nine subscales were Somatization (.70), Obsessive-Compulsive (.70), Interpersonal Sensitivity (.68), Depression (.74), Anxiety (.69), Hostility (.69), Phobic Anxiety (.63), Paranoid Ideation (.73), and Psychoticism (.69). The whole scale had a Cronbach's alpha coefficient of .94, which was good, making the scale reliable for collecting data from this Ghanaian SCD sample.
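For readers unfamiliar with the reliability statistic reported above, the following is a minimal Python sketch of how Cronbach's alpha can be computed from an item-response matrix. The 0–4 rating range matches the BSI, but the simulated data, item count, and function name are purely illustrative and are not the authors' analysis code.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (respondents x items) array of item ratings."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                               # number of items in the scale
    item_variances = items.var(axis=0, ddof=1)       # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of summed scale scores
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

# Hypothetical example: 201 respondents rating 6 items on a 0-4 scale.
rng = np.random.default_rng(0)
responses = rng.integers(0, 5, size=(201, 6))
print(round(cronbach_alpha(responses), 2))
```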
After ethical clearance was obtained from the Noguchi Memorial Institute for Medical Research Institutional Review Board, permission was obtained from the Director of the Institute of Clinical Genetics, Korle-Bu Teaching Hospital, to collect survey data from the sickle cell clinic in Korle-Bu Teaching Hospital. Written informed consent was obtained from potential participants, and the principal researcher and his assistants handed the questionnaires to SCD participants who awaited their turn to see their doctor. The questionnaires were completed and collected the same day, before participants left the clinic. Nine potential participants refused to give consent, and 19 did not return the questionnaires, returned them uncompleted, or filled them poorly. The return rate was 91 %; that is, 201 questionnaires were retrieved out of 220.
Descriptive statistics, calculation of prevalence rate, and independent samples t-tests were used to answer the research questions.
Prevalence rate is a ratio of the number of present cases of a disorder to the number of potential cases [31]. It is the total number of cases existing in a defined population at a specific time, generally measured by doing a survey [28, 31]. To calculate the prevalence rate, the number of individuals in the population who have an illness (e.g. depression) is divided by the total population at risk for the illness.
$$ \mathrm{Prevalence}=\frac{\text{Number of cases at one time point}}{\text{Total number of individuals in the defined population at the same time point}} $$
As shown by the formula, prevalence is a proportion and can never be greater than one [32]. Specific prevalence measures include point prevalence and period prevalence where point prevalence is the number of individuals who have an illness at a specific point in time divided by the total population who could potentially have an illness on that date. Period prevalence is the number of individuals who have an illness during a specific time period divided by the total population who could have the illness midyear in that year [32].
To answer the first research question, descriptive statistics such as means and standard deviations were calculated for the scale and subscale scores of the Brief Symptom Inventory (BSI). The results revealed a non-distress score for the Global Severity Index [GSI] (m = .18, SD = .14), which is a measure of the general psychological distress level. Additionally, all the subscale scores, namely somatization, obsessive-compulsive, interpersonal sensitivity, depression, anxiety, hostility, phobic anxiety, and psychoticism, indicated non-distress scores, except paranoid ideation (m = 1.01, SD = .86), which indicated an "a little bit" psychological distress score. A score of less than 1 is considered by the scale authors as a non-distress score (Table 1).
Table 1 Summary of Descriptive Statistics Showing Prevalence of Psychological Symptoms
To calculate the prevalence rate of psychological symptoms among SCD participants, the mean number of non-zero responses in the SCD sample, which reveals the number of symptoms the respondents reported experiencing, was divided by the total population at risk for psychological symptoms (n = 201). Thus, 19.77 divided by 201 SCD participants equaled 0.10 (10 %) [Table 1].
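As a minimal sketch, the calculation above can be expressed as the proportion defined in the Methods section. The numbers are those reported in the text; the function name is illustrative.

```python
def prevalence(cases, population_at_risk):
    """Prevalence is a proportion, so the result always lies between 0 and 1."""
    return cases / population_at_risk

# Mirroring the calculation reported above: the mean of 19.77 non-zero
# responses divided by the 201 participants at risk gives roughly 0.10 (10 %).
print(round(prevalence(19.77, 201), 2))
```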
To answer the second research question, an independent samples t-test was conducted to compare the psychological symptoms (BSI and subscale) scores for males and females. The results indicated a significant mean difference only in the psychoticism subscale scores for males (m = .43, SD = .55) and females (m = .62, SD = .66); t (199) = −2.12, p = .03, two-tailed. The magnitude of the difference in the means (mean difference = −.18, 95 % CI: −.35 to −.01) was very small [eta squared = 0.02] (Table 2).
Table 2 Summary table of independent t-test results showing mean comparison of males and females on BSI and subscale scores
To answer research question three, an independent samples t-test was conducted to compare the psychological symptoms (BSI and subscales) scores for HbSS and HbSC genotypes. The results indicated that there was significant difference in GSI scores for HbSS (m = .20, SD = .14) and HbSC (m = .15, SD = .13); t (199) = 2.42, p = .01, two-tailed. The magnitude of the differences in the means (mean difference = .05, 95 % CI: .01 to .09) was small [eta squared = 0.03] (Table 3). Similarly, significant differences were indicated in four subscales, namely, Depression, Phobic Anxiety, Paranoid Ideation, and Psychoticism, together with significant differences in Additional Items (i.e., Poor Appetite, Trouble Falling Asleep, Thoughts of Death or Dying, and Feeling of Guilt).
Table 3 Summary table of independent t-test results showing mean comparison of HbSS and HbSC on BSI and subscale scores
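The following is a hedged sketch of how such an independent samples t-test and its eta squared effect size (computed here as t²/(t² + df)) can be obtained with SciPy. The data are simulated from the group means, standard deviations, and sample sizes reported above and in the abstract; they are not the study data.

```python
import numpy as np
from scipy import stats

def eta_squared(t_value, df):
    """Effect size for an independent samples t-test: t^2 / (t^2 + df)."""
    return t_value ** 2 / (t_value ** 2 + df)

# Simulated GSI scores: HbSS (m = .20, SD = .14, n = 131), HbSC (m = .15, SD = .13, n = 70).
rng = np.random.default_rng(1)
gsi_hbss = rng.normal(0.20, 0.14, size=131)
gsi_hbsc = rng.normal(0.15, 0.13, size=70)

t, p = stats.ttest_ind(gsi_hbss, gsi_hbsc)
df = len(gsi_hbss) + len(gsi_hbsc) - 2
print(f"t({df}) = {t:.2f}, p = {p:.3f}, eta squared = {eta_squared(t, df):.2f}")
```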
It was the objective of this study to determine the prevalence rate of psychological symptoms among adult participants with SCD, and to examine gender and genotype differences in psychological symptoms in this sample. First, the main findings indicated a Positive Symptom Total (PST) prevalence of 10 % among the sampled SCD participants. Since the PST is a count of all the items with non-zero responses and reveals the number of symptoms the respondents report experiencing [30], we conclude that psychological symptoms had a prevalence of 10 % at this time. Notwithstanding this, the prevalence of overall psychological distress among participants was 0.00, and they had no specific psychological symptoms at distress levels except paranoid ideation, which recorded a one percent prevalence rate. Phobic anxiety, anxiety, hostility and psychoticism were among the subscales that recorded low mean scores, suggesting that they were least experienced.
This result is comparable to others recorded in Nigeria and Jamaica. The Nigerian study indicated that anxiety was almost absent among a group of adults with SCD, but not depression [18]. Thomas, Hambleton, and Serjeant [33] found that Jamaican patients (n = 50) with homozygous SCD had less general anxiety, a lower emotional response to pain, and lower levels of perceived pain compared to their London counterparts (n = 50), who believed the disease had a more marked effect on their psychological health.
The results are, however, dissimilar to psychosocial findings among SCD participants in the United Kingdom and the United States of America, where high incidence and prevalence of psychological symptoms have been reported [10, 12, 13, 33, 34]. Levenson and colleagues [10] found a high prevalence of about 28 % of depression among African Americans. Hasan, Hashmi, Alhassen, Lawson and Castro [14] found depression ranging from 18 to 44 % among SCD participants when comparing their rates to the general population.
That some African countries and continental Black SCD populations indicate low rates of depression, anxiety, and hostility suggests that culture might play some role in the coping and psychological functioning of adult SCD individuals. This study did not investigate the role of culture in the psychological functioning of SCD participants, but it is highly possible that there are cultural differences in the coping and psychological functioning of SCD individuals. Bediako [35] and Barbarin and Christian [36] alluded to this.
Second, males and females did not differ significantly in their experience of psychological symptoms. There were no previous studies with which to compare this result, since previous sickle cell research did not compare males and females on psychological symptoms. In this study, however, males and females alike had low psychological symptoms scores, both on the global severity index and on each of the symptom subscales, except the psychoticism subscale, on which females (m = .62, SD = .66) and males (m = .43, SD = .55) recorded a significant difference, t (199) = −2.12, p = .03, eta squared = .02. By this result, female adults with SCD reported more psychoticism symptoms than male adults with SCD. Psychoticism items are indicative of a withdrawn, isolated, schizoid lifestyle. The subscale provides for a graduated continuum from mild interpersonal alienation to dramatic psychosis, as defined by Eysenck and Eysenck. These are the symptoms that the study results indicated the female SCD participants experienced more than the males.
According to Colman [37], psychoticism is a psychological condition or state characterized by psychosis or by traits such as aggressiveness, coldness, impulsiveness, antisocial behavior, tough-mindedness, and creativity. We opine that these Ghanaian adult SCD females used psychoticism as a defense mechanism to cope with SCD, and not that they experienced psychosis, which is a mental disorder characterized by delusions and/or prominent hallucinations without insight into their pathological nature. This is because these SCD females did not exhibit mental impairments that grossly interfered with their capacity to meet the ordinary demands of life. Forty-eight percent of them were students and 43 % were employed in public, private and personal occupations, suggesting that they led meaningful lives and contributed to their communities. They were thus psychologically healthy according to the European Commission's [38] description of psychological health.
HbSS participants significantly differed from HbSC participants on psychological symptoms. As in previous research, HbSS participants outnumbered HbSC participants in the sample and also experienced more symptoms than HbSC participants. Given that HbSS is a more severe form of the disease than the HbSC genotype [5, 25], and given that disease severity determines the degree of psychological symptoms the patient experiences [25, 27], it is not surprising that this current finding agrees with previous research. The more severe the disease, the more psychological symptoms the patient reports or experiences. This is consistent with the body–mind interaction and relationship: whatever happens to the body affects the mind, and whatever happens to the mind affects the body in similar proportion [31].
Furthermore, it is observed from Table 3 that there were significant genotype differences in the following subscales, namely, Depression, Phobic Anxiety, Paranoid Ideation, and Psychoticism, with HbSS participants reporting significantly higher mean scores than HbSC participants. This implies that HbSS participants experienced more of these symptoms than their HbSC counterparts, even if they experienced them to a degree that did not reach distress levels as previously noted. Additionally, HbSS participants experienced significantly more symptoms such as poor appetite, trouble falling asleep, thoughts of dying, and feelings of guilt than their HbSC counterparts.
Because no previously published literature has been found with which to compare the above-mentioned findings, we consider these to be new findings from Ghana that add to the existing literature and fill a gap.
Limitations and future research
Although the results are useful in giving some idea about the prevalence of psychological symptoms, their practical use should be treated with caution. This is because the analyses were not based on the results of a principal component analysis (PCA) or confirmatory factor analysis (CFA) of the Brief Symptom Inventory. Although content validity was assured, it was not enough. Future research should consider factor analyzing the scale using PCA and CFA techniques to ascertain the scale's utility for the Ghanaian sample and to further determine whether the scale and subscales are both valid and reliable.
Given that the results cannot be generalized beyond adult SCD participants in the sickle cell clinic in Korle-Bu Teaching Hospital in Accra, Ghana, replication of the study using other SCD and other chronic disease populations is highly recommended.
The study could not tell what accounted for the non-distress psychological symptoms scores among participants. It is recommended that future studies consider testing a theory involving psychological symptoms or health to ascertain the possible cause of non-distress psychological symptoms scores among SCD participants. Additionally, there were no other groups in the study with which to compare the prevalence rates (i.e., whether SCD participants experience non-distress psychological symptoms scores compared with other chronically ill populations and the general healthy population). Further research in this direction is encouraged.
One more limitation was the cross-sectional design, with which the researchers could not determine how particular individuals developed over time because individuals were not followed up. A longitudinal study would have permitted observation of individuals over time.
The research concludes that there is a high probability of a low prevalence rate of psychological symptoms. The majority (90 %) of adult SCD study participants in the sickle cell clinic at the Korle-Bu Teaching Hospital, Accra, Ghana, had non-distress psychological symptoms scores in Anxiety, Depression, Hostility, Phobic Anxiety, Psychoticism, Interpersonal Sensitivity, Somatization, and Obsessive-Compulsive symptoms. Although psychological symptoms distress scores were not observed among study participants at this time, females differed significantly by experiencing more psychoticism symptoms than males. HbSS participants also differed significantly by experiencing more depression, phobic anxiety, paranoid ideation, psychoticism, and additional symptoms such as poor appetite, trouble falling asleep, thoughts of dying, and feeling guilty, than their HbSC counterparts.
The main value of these prevalence estimates is in gaining an understanding of the percentage of individuals in the sickle cell disease population at the Korle-Bu Teaching Hospital who remain psychologically functional after having received a diagnosis and while living with sickle cell disease. Such statistics should be useful to the sickle cell clinic charged with planning for the provision of health, continuing medical consultations, and psychological services, and for long-term counselling and support.
Second, this prevalence study is of interest because it forms part of the process of systematically assessing the reality of psychological symptoms under surveillance [39]. It provides information that points toward psychological areas that may require further attention. This study provides baseline data on which to base future assessments of changing patterns by means of assessments performed over time. According to Boyle [40] and Silva, Ordúñez, Rodríguez, and Robles [41], prevalence studies are useful for quantitatively and qualitatively assessing changes that take place. This feature makes prevalence studies potential instruments for evaluation purposes.
The statistic on prevalence of psychological symptoms found in this study would be most useful in assessing the impact of psychological symptoms on adults with SCD in the Korle-Bu Teaching Hospital's sickle cell clinic, their families, and the society. The results would be useful in planning for healthcare services. This statistic would help the sickle cell clinic management do some accurate healthcare-related needs assessments. The results further suggest that the management, staff, and clinicians have been offering some services that accrue to patients' psychological benefit. These psychologically beneficial factors in the clinic must be investigated for the purpose of emphasizing and maintaining them.
The psychologist can be certain and confident about how much psychological intervention is needed in the team management effort of the clinic and where to place emphasis in providing psychological services to SCD patients. The purpose, degree and target of psychological intervention become clearer. In light of the results, the management objective for now might be to emphasize positive psychology among patients and not psychotherapy.
WHO Regional Office for Africa. Sickle cell disease prevention and control. 2015. http://www.afro.who.int/en/nigeria/nigeria-publications/1775-. Accessed 7th July, 2016.
Centres for Disease Control and Prevention. Sickle cell disease data and statistics. 2012. Retrieved from: http://www.cdc.gov/NCBDDD/sicklecell/data.html. Accessed 15th December, 2012.
Loureiro MM, Rozenfeld S. Epidemiology of sickle cell disease hospital admissions in Brazil. Rev Saude Publica. 2005;39(6):1–6.
Modell B, Darlison M. Global epidemiology of haemoglobin disorders and derived service indicators. Bull World Health Organ. 2008;86(6):480–7.
Konotey-Ahulu FID. The sickle cell disease patient. London: McMillan Press Ltd; 1991.
Quinn CT, Rogers ZR, McCavit TL, Buchanan GR. Improved survival of children and adolescents with sickle cell disease. Blood. 2010;115:3447–52.
Anie KA. Psychological complications in sickle cell disease. Br J Haematol. 2005;129(6):723–9.
de Montalembert M. Management of sickle cell disease. BMJ. 2008;337:a1397. doi:10.1136/bmj.a1397.
Becker M, Axelrod DJ, Oyesanmi O, Markov DD, Kunkel EJ. Hematologic problems in psychosomatic medicine. Psychiatr Clin North Am. 2007;30(4):739–59.
Levenson JL, McClish DK, Dahman BA, et al. Depression and anxiety in adults with sickle cell disease: the PiSCES project. Psychosom Med. 2008;70(2):192–6.
Levenson JL. Psychiatric Issues in Adults with Sickle Cell Disease. Primary Psychiatry, 2008. http://primarypsychiatry.com/psychiatric-issues-in-adults-with-sickle-cell-disease/. Accessed 7 Jul 2016.
Alao AO, Cooley E. Depression and sickle cell disease. Harv Rev Psychiatry. 2001;9(4):169–77.
Alao AO, Dewan MJ, Jindal S, Effron M. Psychopathology in sickle cell disease. West Afr J Med. 2003;22(4):334–7.
Hasan SP, Hashmi S, Alhassen M, Lawson W, Castro O. Depression in sickle cell disease. J Natl Med Assoc. 2003;95(7):533–7.
Laurence B, George D, Woods D. Association between elevated depressive symptoms and clinical disease severity in African-American adults with sickle cell disease. J Natl Med Assoc. 2006;98(3):365–9.
Wilson Schaeffer JJ, Gil KM, Burchinal M, et al. Depression, disease severity, and sickle cell disease. J Behav Med. 1999;22(2):115–26.
Molock SD, Belgrave FZ. Depression and anxiety in patients with sickle cell disease: conceptual and methodological considerations. J Health Soc Policy. 1994;5(3–4):39–53.
Ehigie BO. Comparative analysis of the psychological consequences of the traumatic experiences of cancer, HIV/AIDS, and sickle cell anemia patients. IFE Psychologia. 2003;11(3):34–54.
Anie KA, Egunjobi FE, Akinyanju OO. Psychosocial impact of sickle cell disorder: perspectives from a Nigerian setting. Glob Health. 2010;6:2. doi:10.1186/1744-8603-6-2.
WHO. Sickle cell disease and other haemoglobin disorders. Fact sheet N°308 January 2011. www.who.int/entity/mediacentre/factsheets/fs308/en/. Accessed 3 June 2013.
Pells J, Edwards CL, McDougald CS, Wood M, Backsdale C, Jonassaint J, Leach-Beale B, Byrd G, Mattis M, Harrison M, Feliu M, Edwards L, Whitfield K, Rogers L. Fear of movement (Kinesiophobia), Pain, and Psychopathology in Patients with Sickle Cell Disease. Clin J Pain. 2007;23(8):707–13.
Anie KA, Green J. Psychological therapies for sickle cell disease and pain. Cochrane Database Syst Rev. 2012;2:CD001916. The Cochrane Collaboration. Issue 2, John Wiley and Sons, Ltd. Accessed at http://www.thecochrainelibrary.com.
Elander J, Lusher J, Bevan D, Telfer P, Burton B. Understanding the causes of problematic pain management in sickle cell disease: evidence that pseudo-addiction plays a more important role than genuine analgesic dependence. J Pain Symptom Manag. 2004;27(2):156–69.
Hurtig AN. Relationships in families of children and adolescents with sickle cell disease. J Health Soc Policy. 2008;5(3–4):161–83.
Platt OS, Brambilla DJ, Rosse WF, et al. Mortality in sickle cell disease. Life expectancy and risk factors for early death. N Engl J Med. 1994;330:1639–44.
Nettles AS. Scholastic performance of children with sickle cell disease. J Health Soc Policy. 2008;5(3):123–40. doi:10.1300/J045V05n03_08. Accessed on 19 Apr 2014.
Scott KD, Scott AA. Cultural therapeutic awareness of sickle cell anemia. J Black Psychol. 1999;25(3):316–35.
Smith RA, Davis SF. The psychologist as a detective. London: Prentice Hall; 2004.
Schneider M. Introduction to public health. 4th ed. New York: Jones & Bartlett Learning; 2014.
Derogatis LR. Brief Symptom Inventory (BSI): Administration, scoring and procedures manual. 3rd ed. Minneapolis: NCS Pearson, Inc; 1993.
Fadem B. Behavioural science in medicine. Baltimore: Lippincott Williams & Wilkins; 2004.
Carneiro I, Howard N. Introduction to epidemiology. 2nd ed. London: McGraw Hill; 2011.
Thomas VJ, Hambleton I, Serjeant G. Psychological distress and coping in sickle cell disease: comparison of British and Jamaican attitudes. Ethn Health. 2001;6(2):129–36.
Anie KA, Dasgupta T, Ezenduka P, Anarado A, Emodi I. A cross-cultural study of psychosocial aspects of sickle cell disease in the UK and Nigeria. Psychol Health Med. 2007;12(3):299–304. doi:10.1080/13548500600984034.
Bediako SM. Psychosocial aspects of sickle cell disease: A primer for African American psychologists. In: Neville HA, Tynes BM, Utsey SO, editors. Handbook of African American psychology. Thousand Oaks: Sage; 2009. p. 417–27.
Barbarin OA, Christian M. The social and cultural context of coping with sickle cell disease: I. A review of biomedical and psychosocial issues. J Black Psychol. 1999;25:277–93.
Colman AM. Dictionary of psychology. New York: Oxford University Press Inc; 2009.
European Commission Green Paper. Improving the mental health of the population: Towards a strategy on mental health for the European Union. Health & Consumer Protection Directorate-General, Brussels. 2005.
Creary M, Williamson D, Kulkarni R. Report from the SCD: current activities, public health implications and future directions. J Womens Health. 2007;16(5):575–8.
Boyle MH. Guidelines for evaluating prevalence studies. Evid Based Mental Health. 1998;1:37–9.
Silva LC, Ordúñez P, Rodríguez MP, Robles S. A tool for assessing the usefulness of prevalence studies done for surveillance purposes: the example of hypertension. Rev Panam Salud Publica/Pan Am J Public Health. 2001;10(3):2001.
The University of Cape Coast provided funds to the first author for his PhD research project. Adult SCD patients in the sickle cell clinic at the Korle-Bu Teaching Hospital are highly appreciated for their willingness and readiness to contribute data for this research. Dr. (Mrs) Obiri-Yeboah of the School of Medical Sciences, UCC, is acknowledged for her expert help.
The University of Cape Coast funded this research. The funder played no part in the design of the study; the collection, analysis, and interpretation of data; or the writing of the manuscript.
The datasets generated and/or analyzed during the current study are not publicly available due to the fact that they are part of a larger PhD research data but are available from the corresponding author on reasonable request. The questionnaires are also available on request.
MTA designed the study, collected, analyzed and interpreted data, and wrote the first draft of the manuscript. JO helped with the design, supervised the collection, correct inputting and analyses of data, contributed to discussions of the findings and reviewed the manuscript. FY collected, entered, coded and cleaned data in SPSS. All authors read and approved the final manuscript.
Michael Tetteh Anim is a Senior Lecturer at the Dept. of Psychological Medicine & Mental Health, School of Medical Sciences, University of Cape Coast, Ghana.
Joseph Osafo is a Senior Lecturer at the Department of Psychology, University of Ghana.
Felix Yirdong is a Senior Research Assistant at the Dept. of Psy Med. & Mental Health, UCC.
Ethical approval for this study was obtained from the Noguchi Memorial Institute for Medical Research, University of Ghana, Legon. All study participants gave voluntary verbal and written consent and enrolled individually in the study during visits to the sickle cell clinic at the Korle-Bu Teaching Hospital, Accra, Ghana. The research personnel approached prospective suitable patients, identified them by their ability to read, matched their characteristics against inclusion and exclusion criteria, and solicited their participation. With help from the clinic administrator, each participant's medical records were consulted to confirm the diagnosis. All participants who consented were given a brief verbal overview of the purpose of the study. Each participant was allowed to read the consent form and to ask questions for clarification before signing it. For the benefit of participants who were slow at reading English, additional verbal information was given about the potential risks and benefits of the study, privacy and confidentiality issues, and their right to withdraw from the study at any time without loss of rights and privileges in accessing health care at the clinic. Participants were then provided a copy of the questionnaire, moved to a relatively quiet place in the waiting area, and given additional clarification for completing the survey, if needed, by a member of the study team. Once completed, the questionnaire was collected and an informal debriefing was provided. Each participant was refreshed with some pastry and a bottle of mineral drink.
Department of Psychological Medicine and Mental Health, School of Medical Sciences, University of Cape Coast, Cape Coast, Ghana
Michael Tetteh Anim & Felix Yirdong
Department of Psychology, University of Ghana, Legon, Ghana
Joseph Osafo
Department of Psychological Medicine and Mental Health, School of Medical Sciences, College of Health and Allied Sciences, University of Cape Coast, Cape Coast, Ghana
Michael Tetteh Anim
Felix Yirdong
Correspondence to Michael Tetteh Anim.
Anim, M.T., Osafo, J. & Yirdong, F. Prevalence of psychological symptoms among adults with sickle cell disease in Korle-Bu Teaching Hospital, Ghana. BMC Psychol 4, 53 (2016). https://doi.org/10.1186/s40359-016-0162-z
Received: 28 June 2016
Psychological distress
Toward the reduction of incorrect drawn ink retrieval
Atsushi Kitani (ORCID: orcid.org/0000-0002-5579-2303),
Taketo Kimura &
Takako Nakatani
Human-centric Computing and Information Sciences volume 7, Article number: 18 (2017)
As tablet devices become popular, various handwriting applications are used. Some applications incorporate a specific function generally called palm rejection. Palm rejection enables application users to rest the palm of the writing hand on the touch display. It classifies intended and unintended touches so that it prevents accidental inking, which is known to occur under the writing hand. Although some palm rejection implementations can remove accidental inking afterward, this function occasionally does not execute correctly and may remove correct ink strokes as well. We call this interaction Incorrect Drawn Ink Retrieval (IDIR). In this paper, we propose a software algorithm, a combination of two palm rejection logics, that reduces IDIR with precision and without latency. The algorithm does not depend on specific hardware, such as an active stylus pen. Our data show 98.98% correctness, and the algorithm takes less than 10 ms for the distinction. We confirm that our experimental application reduced the occurrences of IDIR throughout an experiment.
As tablet devices are widespread, various handwriting applications have been and continue to be developed. When a user tries to write something on a tablet with a general handwriting application, a multi-touch interaction compels the user to float the hand above the display to avoid accidental inking. Since this unnatural way of writing produces difficulties [1] for digital handwriting applications, the function called "palm rejection" becomes crucial. Palm rejection distinguishes intended touches from unintended touches, and prevents accidental inking.
All intended and unintended touches are classified into the following four categories, True Positive, True Negative, False Positive and False Negative.
When palm rejection correctly distinguishes an intended touch, it results in the True Positive and the touch draws a correct ink stroke. Palm rejection correctly rejects unintended touches and the result is the True Negative. Then, there is no accidental inking under the palm. On the other hand, when an unintended touch is recognized as an intended touch, it is the False Positive and accidental inking occurs. When an intended touch cannot be recognized correctly, the touch is the False Negative and it is incorrectly rejected.
Making a distinction between intended touches and unintended touches seems quite straightforward, though it is rather a complicated problem, because most touches tend to move rapidly and are not stable. Therefore, it is necessary for palm rejection to analyze all touch data within a short moment and make an immediate distinction.
There are general applications [2,3,4] in which palm rejection has already been embedded. While researching the palm rejection of those handwriting applications, a curious interaction was detected. When a user tries to write something on a touch display with those handwriting applications and an intended touch correctly draws an ink stroke, what occasionally occurs afterward is that the stroke is removed a very short moment later against the user's will. From this interaction, we infer that some palm rejection algorithms iteratively classify intended and unintended touches and switch the distinction afterward. This is reasonable when the interaction happens for a touch that should have been classified as True Negative but was classified as False Positive and thus drew an accidental ink stroke under the palm; however, in some cases a True Positive touch is switched to a False Negative touch afterward. It is rather perplexing for users when the interaction incorrectly switches the classification and removes correct ink strokes. We call the correct interaction "Drawn Ink Retrieval (DIR)" and the incorrect interaction "Incorrect DIR (IDIR)" (Fig. 1).
The image of DIR and IDIR. This figure shows the image of DIR and IDIR occurrence
The purpose of this paper is to propose a palm rejection algorithm that reduces the occurrences of IDIR. To realize the algorithm, we take an approach that makes use of multi-touch interaction and combines two different logics: a machine learning technique and an occlusion area protection.
Incorrect Drawn Ink Retrieval brings a negative outcome for an application's usability because it does not occur in natural handwriting. Therefore, this approach will be an effective option as a palm rejection algorithm.
This paper is structured as follows: in "Background", the problem with palm rejection named IDIR is revealed. In "Related work", we briefly categorize two types of approaches to palm rejection. In "Our approach", the combination of two logics is introduced. In "Experiment", the process of developing a handwriting application and an experiment are explained. We discuss the results and remaining problems in "Discussion", and offer a "Conclusion".
In this section, the existing approaches are surveyed and are categorized into the following two types. One is an active stylus pen interaction, while the other is a multi-touch interaction.
There are several approaches to classifying intended touches and unintended touches. Annett et al. [5] researched such approaches and made a comparison. They categorized the various approaches into four types: user adaptations, firmware approaches, operating system approaches and application-specific approaches. Schwarz et al. [6] categorized existing approaches into hardware solutions and software solutions.
Generally, there are two approaches to solving accidental inking. One is positively utilizing various functions that are embedded in hardware: for example, using an active stylus pen, which is distinguished as a specific touch from other general finger touches. This is the current main approach to palm rejection. The other approach focuses on multi-touch interaction itself and solves the problem with software algorithms, which do not depend on specific hardware or devices.
The multi-touch interaction approach is less precise than the approach of utilizing active stylus pens. Even so, researching and developing the multi-touch interaction approach is meaningful, because not every standard capacitive touch device embeds an active stylus pen.
Active stylus pen interaction
Various prototypes have been researched as novel interaction devices [7,8,9]. Such research can form the base technology of future products.
Several devices already have an active stylus pen interaction embedded. Samsung Galaxy's S Pen [10], the WACOM digitizer [11] and Windows Surface's Pro Pen [12] have a function similar to palm rejection and, further, can even recognize pen pressure. In addition, some of them can manipulate the touch device without physical touches on the display. Using those active stylus pens enables an application to simplify the touch classification. Though they are a reliable solution for accidental inking, those approaches depend on specific hardware. For instance, the S Pen depends on the Samsung Galaxy, and the Pro Pen depends on the Windows Surface.
Various active stylus pens that utilize Bluetooth technology, and are thus free from dependence on specific touch devices, are available for standard capacitive touch devices such as the iPad. BambooPaper by WACOM [3], GoodNotes by Time Base Technology Limited [4] and Penultimate by EVERNOTE [2] have an option of connecting those stylus pens via Bluetooth. When an active stylus pen is utilized, those applications display more accurate palm rejection results.
FiftyThree [13] provides both an original active stylus pen called Pencil and an application called Paper. This application supplies palm rejection, but it only works with the original active stylus pen.
Multi-touch interaction
In terms of precision, the multi-touch interaction approaches are inferior to the active stylus pen interaction approaches. On the other hand, in terms of versatility, the multi-touch interaction approaches are more adaptable to various operating systems and multi-touch devices than the active stylus pen interaction approaches.
Several studies of multi-touch interactions [14,15,16] attempt to recognize finger touches. Current capacitive touch devices can correctly receive all touches, but still have difficulty classifying which touches are intended and which are not.
One well-known approach is to define a specific region in which applications ignore all touches. Users can put their hand onto the region without producing ink strokes. All touches outside of the region are recognized as intended touches and so draw ink strokes. NoteAnytime, a handwriting application by MetaMoJi, takes this approach [17]. An advantage of this simple approach is that while a user is putting his or her hand on the region, accidental inking does not occur, so DIR and IDIR also do not occur. A disadvantage is that users need to move the region manually according to the hand position and where they want to write. In terms of usability, this uncomfortable way of writing will bring negative user experiences.
Vogel et al. [18] collected pen and hand position data, named occlusion silhouettes, by means of images captured from a head-mounted camera. They then presented a scalable circle and pivoting rectangle geometric model, which detects the position of the hand and forearm from the pen nib coordinates. If the pen nib coordinates are clearly pinpointed, the model can be used for palm rejection.
Yoon et al. [19] made use of the model of Vogel et al. to reject unintended touches while an active stylus pen is recognized. However, for handwriting with a general stylus pen, the pen nib does not always touch the display. Thus other logics are needed to apply this model to reject unintended touches in handwriting applications.
Schwarz et al. [6] proposed a novel solution using spatiotemporal touch features. It votes on all touches iteratively every 50–500 ms as to whether they are intended or unintended touches, using a decision tree, which is one of the machine learning algorithms. Their solution is said to be the current baseline for palm rejection. On the other hand, Annett et al. [5] pointed out the problem of classification speed. In their paper, DIR and IDIR are not evaluated, though it is mentioned that False Positive touches would be switched to True Negative touches through the iterative classification.
BambooPaper [3], GoodNotes [4] and Penultimate [2] also have a palm rejection function that utilizes multi-touch interactions. In order to activate the function, registration of the user's dominant hand information is required. Additionally, GoodNotes and Penultimate require a frequent hand posture. Those applications are considered to make use of machine learning techniques to classify intended and unintended touches; therefore, the applications need to adjust the learning data to the user's writing posture. When the writing posture fits the registered posture, palm rejection works mostly correctly. However, if the writing posture becomes too estranged from the registered posture, the applications tend to make incorrect rejections, and thus IDIR also tends to occur.
According to past research [19] and the existing application [17], utilizing an occlusion area is a reliable way to reduce occurrences of IDIR. It is important to point out that building a dynamic occlusion area without any pen nib information requires other information from which hand positions can be detected.
Machine learning has become a standard way to classify intended and unintended touches. In general, the technique is used to reject all unintended touches, which are treated as unnecessary. Most unintended touches, however, are generated by the writing hand, so those touches indicate where the writing hand itself is. This means that the information carried by unintended touches can be used to produce the dynamic occlusion area.
Our approach is to build a touch distinction model with a Support Vector Machine (SVM), a machine learning algorithm well suited to two-class tasks. The SVM model classifies touches as intended or unintended. True Positive intended touches are recognized as the pen nib and draw strokes; True Negative unintended touches do not draw strokes. Furthermore, the unintended touches are exploited to produce the dynamic occlusion area.
Touch distinction by the SVM model
To recognize the pen nib from all touches, the following classifier is introduced.
$$y=\sum_{i=1}^{N}\mathbf{w}_{i}^{\top}\mathbf{x}_{i}+b, \qquad (1)$$
where N is the number of explanatory variables; the numbers of touch coordinates and touch records, described in "Developing the SVM model", determine the explanatory variables. Each \(\mathbf{x}_i\) holds the values of the X and Y coordinates, b is a bias, and \(\mathbf{w}_i\) is determined by SVM, in which L2-regularized L2-loss SVC [20] solves the following primal problem:
$$\min_{\mathbf{w}} \;\; \frac{1}{2}\mathbf{w}^{\top}\mathbf{w}+C\sum_{i=1}^{l}\left(\max\left(0,\,1-y_{i}\mathbf{w}^{\top}\mathbf{x}_{i}\right)\right)^{2}. \qquad (2)$$
A Support Vector Machine is a supervised learning method and requires a labeled dataset; in this case, the labels are intended touch and unintended touch. In the classifier (1), both \(\mathbf{w}\) and \(\mathbf{x}\) are matrices, so to build the dataset they are vectorized (flattened) before applying L2-regularized L2-loss SVC (2).
After the SVM model is built from the dataset, it classifies touches as intended or unintended. When y in the classifier (1) is positive, the coordinate is classified as an intended touch; when y is negative, it is classified as an unintended touch. A sketch of applying such a trained model at runtime is shown below.
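Because the trained model reduces to a weight vector and a bias, applying it at runtime is a dot product followed by a sign check. The sketch below assumes the weights and bias have been exported from offline training with LIBLINEAR; the function name and the exported format are illustrative assumptions.

```javascript
// Minimal sketch: applying a trained linear SVM model at runtime.
// `weights` (length N = 200) and `bias` are assumed to be exported from
// offline training with L2-regularized L2-loss SVC (LIBLINEAR).
function classifyTouch(featureVector, weights, bias) {
  if (featureVector.length !== weights.length) {
    throw new Error('feature vector and weight vector lengths differ');
  }
  let y = bias;
  for (let i = 0; i < weights.length; i++) {
    y += weights[i] * featureVector[i];
  }
  // Positive score: intended touch (pen nib); negative: unintended touch.
  return { intended: y > 0, score: y };
}
```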
The difficulty arises when only a palm is on the display and there should be no intended touches: the classification algorithm occasionally detects intended touches incorrectly, and accidental inking is produced. Iterative classification can retrieve such strokes, but latency and IDIR occur as side effects.
Touch protection by the dynamic occlusion area
To reduce the occurrences of IDIR without latency, a simple and robust rejection logic is needed. Past research used pen nib information and a geometric hand posture model to detect the hand position [18], but in handwriting the pen nib does not always touch the display. We therefore create the dynamic occlusion area from the hand position indicated by the touches that the SVM model classifies as unintended. Inside the occlusion area, all touches are rejected. This occlusion area acts as a supplementary guard against accidental inking caused by False Positive touches.
In order to evaluate this approach, an experiment was conducted. For the experiment, we took the following steps:
Collecting the dataset for SVM
Developing the SVM model
Implementation of the dynamical occlusion area
Developing a handwriting application
To collect the dataset, the technique of Schwarz et al. [6] was applied. The dataset was collected separately from 10 right-handed participants and 2 left-handed participants. Participants held a standard, non-active stylus pen with a simple rubber nib and rested their palm on a touch display while a circle was shown on the screen. Touches inside the circle represent intended pen touches, whereas touches outside the circle are interpreted as unintended touches.
To collect realistic handwriting data, we let the circle dynamically follow the pen nib. Participants were told to keep the pen nib on the circle and to make strokes evenly across the touch display. In this way, the dataset comes closer to real handwriting data.
Whenever touch events occur on the display, one record is produced; the record includes the X and Y coordinates of all current touches. About 250,000 records were used as the training dataset, and 5,000 records as the validation dataset. From the dataset, a total of 20 models were produced: the number of records per model increases by 1 up to 10 and, after that, by 20 up to 200. Before the experiment, we applied each model to the validation dataset and examined which model gave the most precise classification. The model with 20 records provided the highest correct classification percentage, 98.98%, and was therefore adopted (Fig. 2).
Fig. 2 The correct classification percentage for each model size (number of records)
To make the distinction at runtime, the actual touch data must also comprise the same 20 records: 19 contextual records plus the 1 most recent record.
Let r stand for the number of records and t for the number of touch coordinates per record. In this experiment, the maximum number of touch coordinates is set to 10. Thus, in the classifier (1) of "Touch distinction by the SVM model", N is \(t \times r = 10 \times 20 = 200\).
Whenever touch events occur, all touch coordinates are stacked into a record. If there are fewer touches than the maximum number, zeros are stacked to fill the record. When there are no records at the start of a touch sequence, it takes approximately 5 ms to stack the full 20 records needed to make a distinction; once 20 records have been stacked, the calculation takes approximately 1 ms. If there are no touch events, records are not stacked, and when no touch points are detected, the stacked records expire. A sketch of this record stacking appears below.
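The record stacking can be sketched as a small buffer of the most recent 20 records, each zero-padded to 10 (x, y) pairs; flattening the buffer yields the 200-dimensional feature vector fed to the classifier. The constants follow the paper, while the buffer structure and function names are illustrative assumptions.

```javascript
// Sketch of record stacking: 20 records x 10 touches x (x, y) = 200 features.
const MAX_TOUCHES = 10;
const NUM_RECORDS = 20;
const recordBuffer = []; // most recent record last

function pushRecord(touchList) {
  const record = new Array(MAX_TOUCHES * 2).fill(0); // zero-padded
  for (let i = 0; i < Math.min(touchList.length, MAX_TOUCHES); i++) {
    record[2 * i] = touchList[i].clientX;
    record[2 * i + 1] = touchList[i].clientY;
  }
  recordBuffer.push(record);
  if (recordBuffer.length > NUM_RECORDS) {
    recordBuffer.shift(); // keep only the latest 20 records
  }
}

function currentFeatureVector() {
  if (recordBuffer.length < NUM_RECORDS) {
    return null; // not enough context yet (filling from scratch takes ~5 ms)
  }
  return recordBuffer.flat(); // 200-dimensional feature vector
}

function clearRecords() {
  recordBuffer.length = 0; // stacked records expire when no touches remain
}
```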
Implementation of the occlusion area protection
After the SVM model has classified intended and unintended touches, the dynamic occlusion area is applied to avoid accidental inking and reduce the occurrences of IDIR. The dynamic occlusion area is an invisible circle; the round shape is adopted to cope with changes in the angle of the writing hand. The X coordinate of the circle's center is the mean of the X coordinates of all unintended touches, and the Y coordinate of the center is calculated in the same way.
The circle has a radius of 230 px, derived from the previously collected dataset: the mean distance between intended and unintended touches was 390 px, and we adjusted the radius heuristically, settling on a value slightly longer than half of that mean distance. A sketch of the occlusion circle computation follows.
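A minimal sketch of the occlusion circle: its center is the mean of the unintended-touch coordinates and its radius is the fixed 230 px described above; any touch inside it is rejected. The function names are illustrative.

```javascript
// Sketch of the dynamic occlusion area (an invisible circle around the palm).
const OCCLUSION_RADIUS = 230; // px, heuristically derived from the dataset

function occlusionCircle(unintendedTouches) {
  if (unintendedTouches.length === 0) {
    return null; // no palm detected, so no occlusion area
  }
  let sumX = 0, sumY = 0;
  for (const t of unintendedTouches) {
    sumX += t.x;
    sumY += t.y;
  }
  return {
    cx: sumX / unintendedTouches.length, // mean X of unintended touches
    cy: sumY / unintendedTouches.length, // mean Y of unintended touches
    r: OCCLUSION_RADIUS,
  };
}

function insideOcclusion(touch, circle) {
  if (circle === null) return false;
  const dx = touch.x - circle.cx;
  const dy = touch.y - circle.cy;
  return dx * dx + dy * dy <= circle.r * circle.r; // inside: reject the touch
}
```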
When the SVM model classifies intended and unintended touches and a False Positive touch falls inside the dynamic occlusion area, the occlusion area prevents the accidental inking (Fig. 3).
Fig. 3 Comparison of the iterative classification concept and our approach
While a user is resting his or her hand on the display and writing with a stylus pen, it takes approximately 1 ms to generate the dynamic occlusion area. In total, it takes about 2 ms for the combined classification when the drawing records are already available, and within 10 ms without the records.
When applying this approach, the dynamic occlusion area cannot be formed without unintended touches. Thus, the first touch presents an especially difficult situation. When a first touch and a second touch occur one after another, the first touch may be recognized as an intended touch before the second touch occurs, and it then immediately draws an ink stroke. However, there are three possible cases.
Case one: the first touch was truly intended and the second touch was unintended. Case two: the first touch was unintended and the second touch was intended. Case three: both the first touch and the second touch were unintended. In cases two and three, the incorrectly drawn ink stroke should be removed. Therefore, to minimize the occurrences of IDIR, we embed a function that invokes DIR only for the first touch; a sketch of this first-touch handling is shown below.
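The first-touch handling can be sketched as drawing the first stroke provisionally and revoking it (DIR) only if later classification shows the first touch was unintended. The bookkeeping below and the removeStroke helper are illustrative assumptions, not the authors' exact implementation.

```javascript
// Sketch of first-touch handling: a provisional stroke that is revoked (DIR)
// only if the first touch later turns out to be unintended.
let firstTouchStroke = null; // id of the stroke drawn by the first touch

function onFirstTouchClassified(intended, strokeId) {
  if (intended) {
    firstTouchStroke = strokeId; // draw immediately; may still be revoked
  }
}

function onLaterClassification(firstTouchStillIntended) {
  if (firstTouchStroke !== null && !firstTouchStillIntended) {
    removeStroke(firstTouchStroke); // DIR: retrieve the incorrectly drawn ink
    firstTouchStroke = null;
  }
}
```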
To realize this approach, an experimental handwriting application was developed as a web application with JavaScript and HTML5 Canvas. Multi-touch devices come in many varieties, and a hardware-specific application would be difficult to extend across all of them. JavaScript and HTML5 Canvas, on the other hand, run in most modern browsers and multi-touch devices.
Current capacitive touch devices, such as the iPad, can report a touch radius, and some applications may take advantage of the touch radius information to improve the precision of palm rejection. However, whether the touch radius can be used depends on the operating system. This approach does not make use of touch radius information, which gives it the benefit of extensibility across other browsers, operating systems, and multi-touch devices. A sketch tying the pieces above together in the web application follows.
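The pieces above can be wired together in the touch event handlers of the web application. The sketch below composes the record stacking, classification, and occlusion check inside a Canvas touchmove handler. It reuses the hypothetical helpers from the earlier sketches (pushRecord, currentFeatureVector, occlusionCircle, insideOcclusion, clearRecords, drawInk), and classifyTouches is a further assumption, since the paper does not detail how per-touch labels are extracted from the model output; the whole block is an illustrative composition rather than the authors' exact code.

```javascript
// Sketch of the overall runtime loop on an HTML5 Canvas element.
const canvas = document.getElementById('handwriting-canvas'); // assumed element

canvas.addEventListener('touchmove', (event) => {
  event.preventDefault();
  pushRecord(event.touches);               // stack the newest record
  const features = currentFeatureVector(); // 200-dim vector, or null
  if (features === null) return;           // still filling the record buffer

  const touches = Array.from(event.touches).map((t) => ({ x: t.clientX, y: t.clientY }));
  const labels = classifyTouches(features, touches); // assumed helper: true = intended

  const unintended = touches.filter((_, i) => !labels[i]);
  const circle = occlusionCircle(unintended);         // dynamic occlusion area

  touches.forEach((touch, i) => {
    if (labels[i] && !insideOcclusion(touch, circle)) {
      drawInk(touch.x, touch.y); // only intended touches outside the area draw ink
    }
  });
});

canvas.addEventListener('touchend', (event) => {
  if (event.touches.length === 0) clearRecords(); // stacked records expire
});
```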
The handwriting application, which embeds the two combined logics (the SVM touch distinction and the dynamic occlusion area), was compared with two other applications: GoodNotes and Penultimate. Both applications have palm rejection that is presumed to be based on machine learning techniques. Both also offer an option to connect an active stylus pen, but the experiment was conducted with a non-active stylus pen.
An iPad Air 2 was used for the experiment, with the device set in a horizontal orientation.
Eight subjects participated in the experiment: seven were right-handed and one was left-handed. They were university students familiar with tablet devices, and none of them had participated in collecting the dataset.
Three types of characters were displayed in each application: the capital letters A to G, the numbers 1 to 6, and four Japanese Kanji characters (Fig. 4).
Fig. 4 Image of the experiment
The subjects were instructed to trace all characters according to the list below.
For GoodNotes and Penultimate, set a suitable dominant-hand angle in each application's settings
For our application, choose right or left hand.
Write every character with the correct stroke order and the correct number of strokes
Do not rewrite, even if a stroke is not correctly drawn
Do not rewrite, even if DIR occurs
Writing in script is prohibited
Writing style should be the same as when one writes something using a pen and a notebook
Use the non-active stylus pen that we provide
For each application, one practice period was provided. The order of the applications was randomized.
In this section, we provide the results of the experiment and discuss them. The writing processes were recorded on video, and we classified all strokes into the three interactions below.
Correct ink stroke (True Positive)
Accidental inking (False Positive)
Changing the classification from True Positive to False Negative (IDIR)
True Negative classifications are invisible and could not be evaluated. Occurrences of DIR, that is, changes of classification from False Positive to True Negative, were not evaluated either, because that interaction mostly happens under the palm and cannot be confirmed.
Table 1 shows the total number of interactions for each application. The numbers of All Strokes differ because some subjects wrote characters with the wrong procedure; for instance, the letter A was written with two strokes instead of the correct three. The number of False Positives was similar across all three applications.
GoodNotes shows a low frequency of False Negative strokes. GoodNotes and Penultimate recorded 5 IDIRs, while our application recorded 1.
Table 1 Total number of interactions and classifications
Table 2 shows the IDIR counts for each subject in detail. For our application, subject G recorded 1 IDIR, while the other seven subjects recorded none. Compared with our application, the other applications recorded considerably more IDIRs.
Table 2 The number of IDIRs for each subject
Summary of techniques
In this experiment, subjects wrote three types of characters. Some of the characters, typically the Japanese Kanji, are composed of several strokes, which makes them considerably more complex to write than characters composed of a single stroke. These characters were adopted because IDIR does not occur often, and writing such intricate characters induces more occurrences of IDIR. In addition, writing intricate characters is a closer simulation of a real handwriting situation.
Threats to validity
All subjects were university students familiar with touch devices; with subjects who have no experience using touch devices, the results may change. Hand size may also influence the results of the experiment.
The experiment of Schwarz et al. [6] defines True Positive strokes as Stroke Recognition and False Positive strokes as Error Strokes. Their results were 97.9% for True Positive strokes, with a False Positive rate of 0.016; our approach resulted in 95.2% for True Positive strokes and a False Positive rate of 0.082. Their experiment involved drawing six symbols: the characters L and S, a vertical line, a horizontal line, a dot, and a circle. Considering that all of those symbols consist of a single stroke and are simpler than the characters in our experiment, our precision results are convincing.
The overall number of IDIRs was much smaller than the total number of strokes, and the difference did not reach statistical significance.
The iPad Air 2 was used both for collecting the dataset and for the experiment, with the device set in a horizontal orientation. Using a vertical orientation or other devices would require another dataset, and the device specification will influence the data collection and the classification speed.
Through this research, we illustrate that unintended touches, which used to be regarded as useless information, can be exploited to generate the dynamic occlusion area. However, the size of the area should adjust to the user's hand size, and the shape and position of the area should also be reconsidered. These improvements will be addressed in our future work.
Past research and existing applications focus on reducing the occurrence of accidental inking. We focus our attention on IDIR, which occurs as a result of re-classification from True Positive to False Negative.
Concerning the numbers of correct ink strokes and accidental inking, our application performed on par with the other applications.
We confirmed that our application reduced the occurrences of IDIR throughout the experiment; for reducing the occurrences of IDIR, this approach is a viable option.
To achieve higher quality, there is still plenty of room to improve the precision of our palm rejection algorithm, and more experimental results will be needed.
DIR:
Drawn Ink Retrieval
IDIR:
Incorrect Drawn Ink Retrieval
SVM:
Support Vector Machine
Camilleri MJ, Malige A, Fujimoto J, Rempel DM (2013) Touch displays: the effects of palm rejection technology on productivity, comfort, biomechanics and positioning. Ergonomics 56(12):1850–1862. doi:10.1080/00140139.2013.847211
Penultimate: Evernote. https://evernote.com/intl/jp/penultimate/
BambooPaper: Wacom. http://www.wacom.com/ja-jp/jp/everyday/bamboo-paper
GoodNotes: Time Base Technology Limited. http://www.goodnotesapp.com
Annett M, Gupta A, Bischof WF (2014) Exploring and understanding unintended touch during direct pen interaction. ACM Trans Comput-Hum Interact 21(5):28. doi:10.1145/2674915
Schwarz J, Xiao R, Mankoff J, Hudson SE, Harrison C (2014) Probabilistic palm rejection using spatiotemporal touch features and iterative classification. In: Proceedings of the 32nd Annual ACM conference on human factors in computing systems. ACM, New York, pp 2009–2012. doi: 10.1145/2556288.2557056
Hinckley K, Yatani K, Pahud M, Coddington N, Rodenhouse J, Wilson A, Benko H, Buxton B (2010) Pen + touch = new tools. In: Proceedings of the 23rd Annual ACM Symposium on user interface software and technology. ACM, New York, pp 27–36. doi: 10.1145/1866029.1866036
Song H, Benko H, Guimbretiere F, Izadi S, Cao X, Hinckley K (2011) Grips and gestures on a multi-touch pen. In: Proceedings of the SIGCHI conference on human factors in computing systems. ACM, New York, pp 1323–1332. doi: 10.1145/1978942.1979138
Suzuki Y, Misue K, Tanaka J (2007) Stylus enhancement to enrich interaction with computers. In: Proceedings of the 12th international conference on human-computer interaction: interaction platforms and techniques. Springer, Heidelberg, pp 133–142. http://dl.acm.org/citation.cfm?id=1757268.1757284
S Pen: Samsung. http://www.samsung.com/us/
Digitizer: Wacom. http://www.wacom.com/en-us
Pro Pen: Microsoft. http://www.microsoft.com/surface/en-us
Paper: FiftyThree. https://www.fiftythree.com/paper
Wagner J, Huot S, Mackay W (2012) Bitouch and bipad: designing bimanual interaction for hand-held tablets. In: Proceedings of the SIGCHI conference on human factors in computing systems. ACM, New York, pp 2317–2326. doi: 10.1145/2207676.2208391
Ewerling P, Kulik A, Froehlich B (2012) Finger and hand detection for multi-touch interfaces based on maximally stable extremal regions. In: Proceedings of the 2012 ACM international conference on interactive tabletops and surfaces. ACM, New York, pp 173–182. doi: 10.1145/2396636.2396663
Marquardt N, Kiemer J, Ledo D, Boring S, Greenberg S (2011) Designing user-, hand-, and handpart-aware tabletop interactions with the touchid toolkit. In: Proceedings of the ACM international conference on interactive tabletops and surfaces. ACM, New York, pp 21–30. doi: 10.1145/2076354.2076358
NoteAnytime: MetaMoJi. http://product.metamoji.com/ja/anytime/
Vogel D, Cudmore M, Casiez G, Balakrishnan R, Keliher L (2009) Hand occlusion with tablet-sized direct pen input. In: Proceedings of the SIGCHI conference on human factors in computing systems. ACM, New York, pp 557–566. doi: 10.1145/1518701.1518787
Yoon D, Chen N, Guimbretière F (2013) Texttearing: opening white space for digital ink annotation. In: Proceedings of the 26th Annual ACM Symposium on user interface software and technology. ACM, New York, pp 107–112. doi: 10.1145/2501988.2502036
Fan R-E, Chang K-W, Hsieh C-J, Wang X-R, Lin C-J (2008) LIBLINEAR: a library for large linear classification. J Mach Learn Res 9:1871–1874
Authors' contributions
AK developed the base handwriting application, carried out the experiment, and drafted the manuscript. TK developed the palm rejection algorithm and helped to draft the manuscript. TN advised on the experimental method and helped to draft the manuscript. All authors read and approved the final manuscript.
The authors give special thanks to the subjects who joined the palm rejection experiment and the data collection for the SVM.
Graduate School of Business Sciences, University of Tsukuba, Tokyo, Japan
Atsushi Kitani
Taketo Kimura
The Open University of Japan, Chiba, Japan
Takako Nakatani
Correspondence to Atsushi Kitani.
Atsushi Kitani, Taketo Kimura and Takako Nakatani contributed equally to this work.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Kitani, A., Kimura, T. & Nakatani, T. Toward the reduction of incorrect drawn ink retrieval. Hum. Cent. Comput. Inf. Sci. 7, 18 (2017). https://doi.org/10.1186/s13673-017-0099-0
September 2010, 2(3): 265-302. doi: 10.3934/jgm.2010.2.265
When is a control system mechanical?
Sandra Ricardo 1 and Witold Respondek 2
Department of Mathematics, School of Sciences and Technology, University of Trás-os-Montes e Alto Douro, 5001-801 Vila Real, Portugal
INSA-Rouen, Laboratoire de Mathématiques, 76801 Saint-Etienne-du-Rouvray, France
Received: May 2010; Published: November 2010
In this work we present a geometric setting for studying mechanical control systems. We distinguish a special class: the class of geodesically accessible mechanical systems, for which the uniqueness of the mechanical structure is guaranteed (up to an extended point transformation). We characterise nonlinear control systems that are state equivalent to a system from this class and we describe the canonical mechanical structure attached to them. Several illustrative examples are given.
Keywords: Mechanical control systems, mechanical state equivalence, geodesic accessibility, symmetric product, state equivalence.
Mathematics Subject Classification: Primary: 53Bxx, 93Cxx; Secondary: 37Jx.
Citation: Sandra Ricardo, Witold Respondek. When is a control system mechanical?. Journal of Geometric Mechanics, 2010, 2 (3) : 265-302. doi: 10.3934/jgm.2010.2.265
boundary points of a set
Let S be an arbitrary set in the real line R, or more generally a subset of a topological space X. A point b is called a boundary point of S if every non-empty neighborhood of b intersects both S and the complement of S; equivalently, a boundary point is a point that belongs to the closure of S and to the closure of its complement. The set of all boundary points of S is called the boundary of S, denoted bd(S) or Fr(S), and it satisfies Fr(S) = closure(S) ∩ closure(S^c) = closure(S) − int(S). Since each boundary point of S is also a boundary point of its complement and vice versa, S and its complement have the same boundary. Note the difference between a boundary point and an accumulation point: a limit (accumulation) point of S is a point each of whose neighborhoods contains a point of S different from it, a set containing all its limit points is called closed, and every non-isolated boundary point of S is an accumulation point of S.

A set is open when it contains none of its boundary points and closed when it contains all of its boundary points; a set A ⊂ X is closed in X if and only if A contains all of its boundary points, and A is open if and only if A ∩ Fr(A) = ∅. The boundary of a closed set is nowhere dense in a topological space. Some sets are both open and closed: the empty set and the entire space X are the trivial examples, and a subset of a topological space has an empty boundary if and only if it is both open and closed.

Examples: the boundary points of the interior of a circle are the points of the circle, and the boundary points of an interval are its endpoints. The set N of natural numbers has no interior points, so its boundary is N itself; the set Q of rationals likewise has no interior points. The set {1,2,3,4,5} has no boundary points when viewed as a subset of the integers, but viewed as a subset of R every element of the set is a boundary point. The complex plane C, as a subset of itself, has no boundary points, so it is both closed and open. Intuitively, a point "close" to the boundary of a figure but not on it may seem to be part of the boundary from far away, but as one zooms in, a gap appears between the point and the boundary.

For a finite set of points in 2-D or 3-D, a boundary can be computed, for example with MATLAB's boundary function (introduced in R2014b). k = boundary(x,y) returns a vector of point indices representing a single conforming 2-D boundary around the points (x,y), so that (x(k), y(k)) form the boundary; for 3-D problems, k is a triangulation matrix of size mtri-by-3, where mtri is the number of triangular facets on the boundary, and each row of k defines a triangle of a bounding polyhedron. k = boundary(___,s) specifies a shrink factor s between 0 and 1: s = 0 gives the convex hull, s = 1 gives a compact boundary that envelops the points, and the default is 0.5. Unlike the convex hull, the boundary can shrink towards the interior of the hull to envelop the points. A related construction is the basic gift-wrapping algorithm: starting at a point known to be on the boundary (the left-most point), one repeatedly picks the next point such that every other point in the set lies to the right of the line between the new point and the previous one.

Boundary points also arise in data mining, where they are data points located at the margin of densely distributed data (for example, a cluster) and represent a subset of the population that possibly straddles two or more classes. BORDER (a boundary points detector) is a simple approach for detecting such points; it employs the Gorder kNN join and makes use of the special property of the reverse k-nearest neighbor (RkNN).

Finally, the term also appears in systems management: in Microsoft Configuration Manager, each boundary group can be set up with one or more distribution points and state migration points, and the same distribution points and state migration points can be associated with multiple boundary groups. During software distribution, clients request a location for content; if the distribution point fallback time is set to 20 minutes, clients in the Branch Office boundary group begin searching for content on the distribution points of the Main Office boundary group after 20 minutes.
Default, the boundary coordinates of coordinates, How do we find the boundary coordinates the case of the! == figure 1 given the coordinates on the boundary coordinates space has an empty boundary if and only if contains. $ \Bbb { R } $ located at the top right count as well, or does the coordinates! Strider on 4 Mar 2015 I need the function boundary and I have matlab version 2014a of. Set N of all rationals: No interior points, exterior points and boundary points of a given and! Trying to approximate ( i.e pl f. boundary nom adjectival — périphérique adj and thus is the empty boundary! That loop at the margin of densely distributed data ( e.g line then this graph will enclose all the points. = boundary ( topology ) boundary points of a circle are the fundamental building blocks of.! Is not specified in the above set, How can I get the coordinates in above... Cell size using points in 2-D or 3-D Network Questions How to the. Vector of point indices representing a single conforming 2-D boundary around the points point! Have No idea about is there any other boundary or not of mtri-by-3. X iff a contains all its boundary points of a set in a set are obviously of... Neighborhood of, ( ∖ { } ) ∩ ≠ ∅ anything technical points on. Examples: ( 1 ) the boundary coordinates figure is the set of points on proximity in QGIS )... All you need to do is modify the rejection criteria around the points ( X ( k ) ) the... Visualize a point `` close '' to the convex hull of the set of... The rejection criteria the entire set X X are both closed unlimited practice! Boundary coordinates way that it maximizes the area these boundary points with piecewise line. { R } $ empty set and the set of all boundary points all! All its limit points of are obviously points of a geometric figure is the empty set boundary this. Set, How can I get the coordinates on the boundary points of the set of points 2-D. Fallback time to 20 top right count as well, or does the boundary of a topological space Mar! Points ( in the Metric space is the empty set and the collectively! Plane ) set Q of all four of B exterior – is called a point! Boundary and I have No idea about is there any other boundary or not one point other than subset... Since they represent a subset of a set of coordinates, How we. Boundary and I have No idea about is there any other boundary or not, 0 and boundary!,, and it is not specified in the boundary of a set in a Metric space Unfold! This case must be the convex hull of the figure 3-D problems, k a. Coordinates on the boundary can shrink towards the interior of the text to study. accumulation point closed set a! For creating Demonstrations and anything technical novel approach BORDER ( a \right ) $ $ { F_r } \left a... F. boundary nom adjectival — périphérique adj represent a subset of a which. Pop the last positional argument of a set a in this paper, we propose a simple yet approach. S will still have this property when the roles of S will still have this property when the roles S., where mtri is the set closure of a circle are the.... Set X X X X X X X X X are both closed bit more difficult than just drawing circle! Open set contains none of its boundary points of a set of points for which Ais \neighborhood! In Metric space here is a member of the set of coordinates, How do we find boundary... Non-Isolated boundary point of S. an accumulation point is never an isolated point Mar I. Boundary points of a geometric figure is the empty set and the set a & 20 additional... 
Blog, I have No idea about is there any other boundary or not Foundation views.
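To make the shrink-factor discussion concrete, here is a minimal Python sketch of the s = 0 case (the convex hull). It is only an illustration, not a reimplementation of MATLAB's boundary function; scipy.spatial.ConvexHull stands in for the convex-hull case, and the random points are hypothetical.

```python
import numpy as np
from scipy.spatial import ConvexHull

# Shrink factor s = 0 corresponds to the convex hull of the points.
# Tighter boundaries (s > 0, up to s = 1) would need an alpha-shape /
# concave-hull algorithm, which is not shown here.
rng = np.random.default_rng(0)
points = rng.random((30, 2))           # 30 hypothetical 2-D data points

hull = ConvexHull(points)
k = hull.vertices                      # indices of points lying on the boundary
print("boundary point indices:", k)
print("enclosed area:", hull.volume)   # for 2-D input, .volume is the area
```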
|
CommonCrawl
|
When you make a cut in an object, similar to a fixed reaction, we describe what is happening at that point using one horizontal force (called normal force), one vertical force (called shear force), and a bending moment.
Adapted from source: Engineering Mechanics, Jacob Moore, et al. http://mechanicsmap.psu.edu/websites/6_internal_forces/6-2_internal_forces_equilibrium/internal_forces_equilibrium.html
There are 3 types of internal forces (& moments):
normal force (N) – the horizontal force we calculated in trusses in the last chapter
shear force (V) – the vertical force that changes based on the applied loads
bending moment (M) – changes based on the applied loads and applied moments
Normal force is represented by 'N'. Shear force, the vertical force, is represented by 'V'. Bending moment is 'M'. Normal and shear forces have units of N or lb, and bending moment has units of Nm or ft-lb. The following table summarizes information on internal forces (and moments).
Force/Moment Abbreviation Unit Direction for a horizontal beam
Normal Force N N or lb horizontal
Shear Force V N or lb vertical
Moment M Nm or ft-lb rotation
Note that for a vertical column, the normal force would be vertical. For this reason, the normal force is often called 'axial' as in: along the axis. The shear force for a column would be horizontal and is sometimes called 'transverse'.
This is for a 2d analysis of the beam assuming there is negligible loading in the third dimension.
When a beam or frame is subjected to transverse loadings, the three possible internal forces that are developed are the normal or axial force, the shearing force, and the bending moment, as shown in section k of the cantilever of the figure below. To predict the behavior of structures, the magnitudes of these forces must be known. In this chapter, the student will learn how to determine the magnitude of the shearing force and bending moment at any section of a beam or frame and how to present the computed values in a graphical form, which is referred to as the "shearing force" and the "bending moment diagrams." Bending moment and shearing force diagrams aid immeasurably during design, as they show the maximum bending moments and shearing forces needed for sizing structural members.
Normal Force
The normal force at any section of a structure is defined as the algebraic sum of the axial forces acting on either side of the section.
Shearing Force
The shearing force (SF) is defined as the algebraic sum of all the transverse forces acting on either side of the section of a beam or a frame. The phrase "on either side" is important, as it implies that at any particular instance the shearing force can be obtained by summing up the transverse forces on the left side of the section or on the right side of the section.
Bending Moment
The bending moment (BM) is defined as the algebraic sum of all the forces' moments acting on either side of the section of a beam or a frame.
Source: Internal Forces in Beams and Frames, Libretexts. https://eng.libretexts.org/Bookshelves/Civil_Engineering/Book%3A_Structural_Analysis_(Udoeyo)/01%3A_Chapters/1.04%3A_Internal_Forces_in_Beams_and_Frames
In 3 dimensions, there are:
1 normal force (N)
2 shear forces (V1 & V2), and
3 bending moments (M1, M2, & T – torsion).
Source: Engineering Mechanics, Jacob Moore, et al. http://mechanicsmap.psu.edu/websites/6_internal_forces/6-2_internal_forces_equilibrium/internal_forces_equilibrium.html
So that there is a standard within the industry, a sign convention is necessary so we agree on what is positive and what is negative. For shear, on the right side of the cut, up is positive. Notice that both of the following figures show the identical sign convention.
Positive sign convention adapted from source: https://eng.libretexts.org/Bookshelves/Civil_Engineering/Book%3A_Structural_Analysis_(Udoeyo)/01%3A_Chapters/1.04%3A_Internal_Forces_in_Beams_and_Frames
When you look at the beam as a whole (in the figure below), positive shear is right side down. When you cut into the beam, for it to be in static equilibrium, the positive shear must then be up on the right to be equal and opposite to the overall motion.
Axial (Normal) Force
An axial force is regarded as positive if it tends to tear the member apart at the section under consideration. Such a force is regarded as tensile, while the member is said to be subjected to axial tension. On the other hand, an axial force is considered negative if it tends to crush the member at the section being considered. Such a force is regarded as compressive, while the member is said to be in axial compression.
Shear Force
A shear force that tends to move the left of the section upward or the right side of the section downward will be regarded as positive. Similarly, a shear force that has the tendency to move the left side of the section downward or the right side upward will be considered a negative shear force.
A bending moment is considered positive if it tends to cause concavity upward (sagging). If the bending moment tends to cause concavity downward (hogging), it will be considered a negative bending moment.
To solve the internal forces at a certain point along the beam,
Positive sign convention adapted from https://eng.libretexts.org/Bookshelves/Civil_Engineering/Book%3A_Structural_Analysis_(Udoeyo)/01%3A_Chapters/1.04%3A_Internal_Forces_in_Beams_and_Frames
Find the external & reaction forces
Make a cut.
In a FBD of one side of the cut, add the internal forces (and moments) using the positive sign convention.
Use the equilibrium equations to solve for the unknown internal forces and moments.
Example: For the following distributed load, a) what are the reaction forces? b) what are the internal forces at the midpoint (B) between the reaction forces?
Adapted from: Source: Engineering Mechanics, Jacob Moore, et al. http://mechanicsmap.psu.edu/websites/6_internal_forces/6-3_axial_torque_diagrams/axial_torque_diagrams.html
1. Solve external forces:
[latex]\sum F_{X}=A_{x}=0[/latex]
[latex]\sum F_{y}=A_{y}+C-\omega L=0[/latex]
[latex]\sum M_{A}=-(\omega L)\left(\frac{L}{2}\right)+d_{A C} C=0[/latex]
$$C = \left(\frac{\omega L^2}{2d_{A C}}\right) = \frac{(100 \frac{lb}{ft} )*(7ft)^2}{2 * (4ft)} = 612.5 lb \text{ (+j direction)} $$
$$A_y = \omega*L- C = (100 \frac{lb}{ft})*(7 ft) – 612.5 lb = 87.5 lb \text{ (+j direction) }$$
$$\underline{A_x = 0 \qquad A_y = 87.5 \text{ (+j )} \qquad C = 612.5 lb \text{ (+j )} }$$
2. Make a cut at B.
3. In a FBD of one side of the cut, add the internal forces (and moments) using the positive sign convention.
4. Use the equilibrium equations to solve for the unknown internal forces and moments.
For just this portion, the resultant force from the distributed load is: Fw = ( 100 lb/ft ) * ( 2 ft ) = 200 lb and acts 1 ft from the left, so the moment due to the distributed load is: Mw = w * 2 ft * 1 ft = Fw * 1 ft = ( 100 lb/ft ) * ( 2 ft ) * ( 1 ft ) = 200 ft-lb
$$\sum F_y = 87.5 lb – 200 lb – V = 0 \\ V = -112.5 lb \text{ (- indicates going up not down)} $$
$$ \sum M_A = – (w * 2 ft) * (1 ft) – V * (2 ft) + M = 0 \\ M = (100 \frac{lb}{ft}) * 2 ft^2 + (-112.5 lb) * (2 ft) \\ M = 200 ft \cdot lb – 225 ft \cdot lb \\ M = -25 ft \cdot lb \text{ (- indicates going reverse direction)} $$
$$\underline{N = 0 \qquad V = -112.5 lb \text{ (+j )} \qquad M = -25 ft \cdot lb \text{ (clockwise)} }$$
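As a quick numerical check of the example above, here is a minimal Python sketch (not part of the original text; the values w = 100 lb/ft, L = 7 ft, d_AC = 4 ft, and the cut at 2 ft from A are read from the example):

```python
# Minimal sketch: reactions and internal forces at the cut B for the
# distributed-load example above (values assumed from the example).
w = 100.0      # lb/ft, distributed load intensity
L = 7.0        # ft, loaded length
d_AC = 4.0     # ft, distance between supports A and C

# Reaction forces from the equilibrium equations
C = w * L**2 / (2 * d_AC)          # moment balance about A -> 612.5 lb
A_y = w * L - C                    # vertical force balance -> 87.5 lb
A_x = 0.0                          # no horizontal loads, so N = 0 at any cut

# Internal forces at a cut 2 ft from A (point B), using the left-hand FBD
x = 2.0                            # ft, position of the cut
F_w = w * x                        # load resultant on the left segment (200 lb)
V = A_y - F_w                      # shear: A_y - F_w - V = 0  -> -112.5 lb
M = A_y * x - F_w * (x / 2)        # moment about the cut      -> -25 ft-lb

print(f"C = {C} lb, A_y = {A_y} lb")
print(f"N = {A_x} lb, V = {V} lb, M = {M} ft-lb")
```

Summing moments about the cut instead of about A, as done here, gives the same M = -25 ft-lb as the hand calculation.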
Basically: The internal forces (and moments) for a 2d beam are: shear, normal, and bending moment. There is a positive sign convention to use when making a cut along a beam to determine the forces inside: on the left: shear down, normal out, moment up.
Application: A bridge that has different loads applied (from cars, trucks, lampposts, etc). Use this method to calculate the internal loads at a particular point of interest.
Looking Ahead: In the next section, we'll look at how to calculate the internal force across the whole beam, and display the results graphically.
Previous: Chapter 6: Internal Forces
Next: 6.2 Shear/Moment Diagrams
|
CommonCrawl
|
What is the fourth term of an arithmetic progression with a first-term of 5 and a common difference of -3?
Finding the nth term of an Arithmetic Sequence
As we know, an arithmetic sequence consists of an array of terms in which the next term of the sequence is obtained by adding a common difference to the previous term. Suppose this sequence has n terms with {eq}a_{1} {/eq} as the first term and common difference d. The general formula for this sequence is
$$\color{blue}{a_n = a_1 + (n-1)d} $$
where {eq}a_n {/eq} represents the nth term of the sequence.
From this problem, it says that the arithmetic sequence we have has a first term of 5 and a common difference of -3. Thus, we have {eq}a_1 = 5 {/eq}...
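Completing the computation (this step is not in the truncated excerpt above, but follows directly from the formula already quoted with {eq}a_1 = 5 {/eq} and {eq}d = -3 {/eq}):

$$a_4 = a_1 + (4-1)d = 5 + 3(-3) = 5 - 9 = -4$$

So the fourth term of the progression is -4.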
If the third term in an arithmetic sequence is 7 and the common difference is -5, what is the value of the fourth term?
The sixth term of an arithmetic progression is 23 and the sum of the six terms is 78. Find the first term and the common difference.
What are the first four terms of an arithmetic sequence if the common difference is 1.5 and the first term is 15? a. 15, 30, 45, 60 b. 15, 16.5, 18, 19.5 c. 15, 22.5, 33.75, 50.625 d. none of the abo
What is the first term of an arithmetic sequence with a common difference of 5 and the sixth term of 40?
The second term of an arithmetic sequence is 24 and the fifth term is 3. Find the first term and the common difference.
what is the 14th term of the arithmetic sequence with a first term of 7 and a common difference of 10?
Consider the arithmetic sequence with the first term a_1 = 300 and common difference d = -90. What are the first six terms of this sequence?
Write the first four terms of the arithmetic sequence with a first term of 5 and a common difference of 3.
Write the first five terms of an arithmetic sequence with a first term of 18 and a common difference of 6.
If -20 is the 15th term in an arithmetic sequence with a common difference of -7, what is the first term in the sequence?
What is the sum of the first 13 terms of an arithmetic progression if the first term is -10 and last term is 26?
How do you write the first five terms of the arithmetic sequence given that the first term is 5, and the common difference is 6?
The first term of an arithmetic sequence is -3 and the fifteenth term is 53. What is the common difference of the sequence?
The first term of an arithmetic sequence is 2. The sum of the third and the sixth terms is 25. What is the fourth term?
What is the common difference of the sequence if the first term of an arithmetic sequence is -3 and the fifteenth term is 53 ?
If the third and fourth terms of an arithmetic sequence are 12 and 16, what are the first and second terms?
If the third and fourth terms of an arithmetic sequence are -6 and -9, what are the first and second terms?
In an Arithmetic Progression, the 9th term is 2 times the 4th term and the 12th term is 78. What is the sum of the first twenty terms?
The fifth term of an arithmetic sequence is 11. If the difference between two consecutive terms is 1, what is the product of the first two terms?
Find the sum of the first 25 terms of the arithmetic sequence whose first term is 7 and the common difference is 3.
If the 6th term of an arithmetic sequence is 48 and the sum of the first 6 term is 300, what is the first term and the constant difference?
The consecutive terms of an arithmetic progression are 5-x, 8, 2x. Find the common difference of the progression.
In an arithmetic sequences, the 1st term is 13 and the 15th term is 111. Find the common difference and the sum of the first 20 terms.
The first term of an arithmetic progression is 29 and the seventh term is -13. What is the value of the tenth term?
Find the indicated term for the arithmetic sequence with the first term a_1 and the common difference d. Find a_{31} when a_1 = -4, d = 2/3
If the 2nd term of an arithmetic sequence is -3 and the 4th term is 1, what is the 55th term?
Determine the common difference, the fifth term, and the sum of the first 100 terms of the following sequence: 1, 2.5, 4, 5.5...
An arithmetic progression has terms a8 = 23 and a20 = 83. What is the value of the term a12?
If the 2nd term of an arithmetic sequence is -15 and the 7th term is 10, find the 4th term.
The sum of the first 20 terms of an arithmetic sequence with a common difference of 3 is 650. Find the first term.
Is the sequence a_n=9_n-16 arithmetic? If yes, find the first term and its common difference.
What is the equation for an arithmetic sequence with a first term of 7 and a second term of 3? a) a_n = 7 - 4(n - 1) b) a_n = 7 + 4(n - 1) c) a_n = 7 - 3(n - 1) d) a_n = 7 + 3(n - 1)
What is the equation for an arithmetic sequence with a first term of 8 and a second term of 5?
Find the common difference d and the nth term an of the arithmetic sequence with the specified terms. 4th term is 2; 6th term is 8
(a) The first term of an arithmetic sequence is -8 and the common difference is 3. (i) Find the seventh term of the sequence. (ii) The last term of the sequence is 100. How many terms are t
The first term in an arithmetic series is 3, and the 9^{th} term is 35. What is the 17^{th} term?
The second and fifth terms of an arithmetic sequence are 17 and 19, respectively. What is the eighth term?
If an arithmetic sequence has a 1st term of 3 and a 10th term of 21, what is the sum of the first 10 terms?
The 50th term of an arithmetic sequence is 86, and the common difference is 2. Find the first three terms of the sequence.
The first term of a finite arithmetic progression is 18 and the common difference is -2 \frac{1}{2}. The sum of the terms is -12. Find the number of terms.
If the 3rd term of an arithmetic sequence is 13 and the 7th term is 33, what is the 20th term?
If the 1st term of an arithmetic sequence is 27 and the 3rd term is 45, what is the 10th term?
The 9th term of an arithmetic progression is 4 + 5p and the sum of the first four terms of the progression is 7p - 10, where p is a constant. Given that the common difference of the progression is 5,
What is the sum of the first four terms of the arithmetic sequence in which the 6th term is 8 and the 10th term is 13?
In a sequence, the first term is 25 and the common difference is -7. Find the seventh term of this sequence.
What is the first term in an arithmetic sequence if the 8th term is 17 and the 10th term is 25?
If an arithmetic sequence has a first term of 4 and a second term of 5, what is the sum of the next five terms?
If the 4th term of an arithmetic sequence is 15 and the 9th term is 30, what is the 30th term?
In a sequence, the first term is -50 and the common difference is 8. Find the sixth term of this sequence.
Write the first ten terms of a sequence whose first term is -10 and whose common difference is -2.
If the 5th and 8th terms of an arithmetic sequence are -9 and -21, respectively, what are the first four terms of the sequence?
The first term of an arithmetic progression AP is -12 and the last term is 40. If the sum of the AP is 196, find the number of terms in AP and a common difference.
In a sequence, the first term is 4 and the common difference is 3. Find the fifth term of this sequence.
What is the common difference for the arithmetic sequence 4, 7, 10, 13, ?
What is the common difference of the following arithmetic sequence? 102, 100, 98, 96 A. 2 B. -2 C. -1 D. 102
What is the common difference of the arithmetic sequence 5, 8, 11, 14,...?
What is the common difference in an arithmetic sequence?
The first two terms of the arithmetic sequence are given below. Find the missing term. a_1 = 5, a_2 = 11, a_{10} =
The first two terms of the arithmetic sequence are given below. Find the missing term. a_1 = - 0.7, a_2 = -13.8, a_8 = ?
The first two terms of the arithmetic sequence are given. Find the missing term. a_1 = 1/8,\; a_2 = 3/4,\; a_7 = ?
The first two terms of the arithmetic sequence are given. Find the missing term. a_1 = 3,\; a_2 = 13,\; a_9 = ?
The first two terms of an arithmetic sequence are given below. Find the missing term. a_1 = 3, a_2 = 13, a_9 =
The first two terms of the arithmetic sequence are given. Find the missing term. a_1 = 4.2, a_2 = 6.6, a_7 = _____
The first two terms of the arithmetic sequence are given. Find the missing term. a_1 = -0.7,\; a_2 = -13.8,\; a_8 = ?
The first two terms of the arithmetic sequence are given. Find the missing term. a_1 = 5,\; a_2 = -1,\; a_{10} = ?
The sum of the first 10 terms of the arithmetic series that begins at 2 and has a common difference of 3 is what?
Consider the terms in the arithmetic sequence: {-44/5, -13/5, 13/5, 44/5}. What would the seventh term be?
If 1, 2, 7 and 20, respectively, are added to the first four terms of an arithmetic progression, the resulting series is a geometric progression. Find the first term and the common difference of the a
Find the common difference for the arithmetic sequence that has 17 as its twelfth term and 71 as its sixth term.
Find the 13th term of a sequence if the 41st term is 48 and the common difference is 13.
Find the indicated term of the sequence with the given first term, a_1, and the common difference, d. Find a_12 when a_1 = -8, d = -2.
(a) Write an arithmetic sequence that has a common difference of 4 and the 8th term is 13. (b) What is the first term? (c) What is the 23rd term in the sequence?
Find the second and third term of the arithmetic sequence: 7 _ _ 22 27 A) 12, 15 B) 17, 12 C) 10, 17 D) 12, 17
Find the first term and the common difference of the arithmetic sequence described. Give a recursive formula for the sequence. Find a formula for the nth term. 7th term is -12, 16th term is 51 What is the first term of the sequence?
the fifth term of an arithmetic sequence is 9 and the 32nd term is -84. what is the 23rd term?
What is the third term of a geometric progression with first term 5 and common ratio -3?
The first three terms of an arithmetic sequence are 2k+1,5k,7k+2. what is the value of k ?
The first term, a_1, in an arithmetic sequence is 8; the fourth term, a_4, is 6. How many terms must be added to have the sum, S_n, of the terms equal 42?
Find the common difference d and the n^{th} term an of the arithmetic sequence whose 6^{th} term is 11 and 12^{th} term is 47.
What is the 7th term of the following arithmetic sequence? -7, -4, -1, ...
Find the first term and the common difference of the arithmetic sequence. Give the recursive formula for the sequence. Find a formula of the n^{th} term. 9^{th} term is -5; 15^{th} term is 31.
The 1st, 4th and 8th terms of an arithmetic sequence with a common difference of d, where d is not equal to 0, are the first three terms of a geometric sequence with a common ratio of r. Given that the 1st term of both sequences is 9, find the value of d.
Given the arithmetic sequence with a_1 = 35 and a_k+1 = a_k - 3, find: a) the common difference b) the first five terms of the arithmetic sequence c) the nth term of the sequence as a function of n.
What is the { 32^{nd} } term of the arithmetic sequence where { a_1 = 13 } and { a_{13} = 59 }?
For the arithmetic sequence 2/3, 1/15, -8/15, a. determine the common difference, and b. find the next three terms of the sequence.
Which term of an arithmetic sequence is -18, given that a1 = 7 and a2 = 2?
Which term is 673 in the arithmetic sequence 8, 15, 22, 29, ...?
Which term is 208 in the arithmetic sequence 4, 16, 28, 40, 52, .?
If the fifth and eighth terms of an arithmetic sequence are -9 and -21, respectively, what are the first four terms of the sequence?
Find the 6th term of the arithmetic sequence. -7, -17, -27, -37...... The 6th term is .
Find the first five terms of the arithmetic series given that the first and seventh term are 7 and -31, respectively.
Find the first term and the common difference of the sequence: 11, 8, 5, 2, -1, ...
What is the common difference of the arithmetic sequence on which the following series is based? 3 + 4.5 + 6 + 7.5 + 9 + 10.5
The first term of an A.P is 3 and the fifth term is 9.Find the number of terms in the progression if the sum of the progression is 81.
What is the 20 th term in the following arithmetic sequence? 1/2, 1, 3/2, 2.
What is the 11th term of the following arithmetic sequence? -4, -7, -10, ...
In a sequence, each term after the second is the sum of the two preceding terms. The first and fifth term are each 3. What is the second term in the sequence?
A sequence of numbers is formed by adding together corresponding terms of an arithmetic progression and a geometric progression with a common ratio of 2.The 1st term is 48, the 2nd term is 73, and the 3rd term is 128. How do you find the fourth term, and
The sequence has the 4th and 5th terms of 33 and -51 respectively. Each term is found by adding the previous 2 terms and multiplying by -3. What is the value of a2? a2, a3, 33, -51?
|
CommonCrawl
|
Is the "number of photons" of a system a Lorentz invariant?
I'm wondering whether the number of photons of a system is a Lorentz invariant. Google returns a paper that seems to indicate that yes it's invariant at least when the system is a superconducting walls rectangular cavity.
However I was told in the hbar chatroom that it's not an invariant and it's proportional to the 1st term of the 4-momentum which is related to the Hamiltonian of the "free field theory".
Today I've talked to a friend who studies some GR (no QFT yet) and he couldn't believe that this number isn't Lorentz invariant.
So all in all I'm left confused. Is it a Lorentz invariant for some systems and not others? If so, what are the conditions that a system has to fulfil in order for the number of photons to be invariant?
quantum-field-theory photons invariants
thermomagnetic condensed boson
$\begingroup$ A counted number is a counted number, no matter which coordinate system you write it down in (you don't turn into triplets by sending your twin on a space mission in a really fast rocket) . The first term in the four momentum would be an energy and it does, of course, transform under a Lorentz transformation. That's the Doppler effect. I don't think this is quite as trivial for the case of thermal photons, which do not have a fixed number, to begin with. Only the average number of photons of a thermal state is meaningful. $\endgroup$ – CuriousOne Mar 5 '16 at 0:49
$\begingroup$ I would not take that one too seriously. It's not a simple question, by the way. The photon number is definitely NOT an invariant in accelerated coordinate systems. I don't think it's an invariant in the case of thermal photons which have to be in thermodynamic equilibrium with a non-Lorentz invariant thermal bath. If you throw seven atoms into a fixed volume, and you have them emit seven photons, those seven non-thermal photons will always stay seven in any coordinate system, though... so it's a yes, but... $\endgroup$ – CuriousOne Mar 5 '16 at 2:17
$\begingroup$ Seven detected photons would be the same for any observer; there's no way to know about photons witout detection. $\endgroup$ – Peter Diehr Mar 5 '16 at 2:28
$\begingroup$ @J-T: I will let a theoretician take this one. For my taste that's too much handwaving on an important question. I am sure one can give a much better answer than that. If my intuitive one agrees with the correct theoretical one I'll have a drink on the house, but I don't want to claim to have sufficient expertise in a area where I can only wing it. $\endgroup$ – CuriousOne Mar 6 '16 at 0:20
$\begingroup$ @Rococo For instance, if we have two non-equivalent representations of a free system together with dynamics, at time $t$ in a given frame in a manifold, the number operator may achieve different values in each system, but they're not even comparable, because they live in different Hilbert spaces. $\endgroup$ – user40276 Mar 6 '16 at 17:48
Alice prepares an electromagnetic field in a state with a sharp number of photons $\hat{N}|n\rangle=n|n\rangle$ where $\hat{N}$ is the number operator. Alice is boosted with respect to Bob. In Bob's reference frame the field is in state $\hat{U}(\Lambda)|n\rangle$. The question asks if a measurement of the number of photons for Bob's state gives the sharp answer $n$. In other words, is it true that $\hat{N}\hat{U}(\Lambda)|n\rangle=n\hat{U}(\Lambda)|n\rangle$? Bob will get the sharp result $n$ if the boost operator commutes with the number operator. We just need to show that the commutator $[\hat{U}(\Lambda),\hat{N}]_{-}=0$.
The number operator for photons of helicity $\lambda$ is, \begin{equation} \hat{N_{\lambda}}=\int \frac{d^{3}p}{2\omega}\hat{\eta}_{p\lambda}\hat{\eta}^{\dagger}_{p\lambda} \end{equation} where $\hat{\eta}_{p\lambda},\hat{\eta}^{\dagger}_{p\lambda}$ are emission and absorption operators respectively for a photon of momentum $p$ and helicity $\lambda$ (the notation for emission and absorption operators is from Dirac's monograph "Lectures on Quantum Field Theory"). We also have $\omega = p^{0}$ in the Lorentz invariant measure.
Single photon states transform as, \begin{equation} \hat{U}(\Lambda)|p,\lambda\rangle=e^{-i\theta(p,\Lambda)}|\Lambda p,\lambda\rangle \end{equation} where $\theta(p,\Lambda)$ is the Wigner angle. Creating a single particle state from the vacuum $|S\rangle$ by $|p,\lambda\rangle=\hat{\eta}_{p\lambda}|S\rangle$ implies that the emission operators transform like states, \begin{equation} \hat{U}(\Lambda)\hat{\eta}_{p\lambda}=e^{-i\theta(p,\Lambda)}\hat{\eta}_{\Lambda p\lambda} \ . \end{equation} Taking the Hermitian conjugate, using unitarity, and replacing $\Lambda$ by $\Lambda^{-1}$, \begin{eqnarray} \hat{\eta}^{\dagger}_{p\lambda}\hat{U}^{\dagger}(\Lambda)&=& e^{i\theta(p,\Lambda)}\hat{\eta}^{\dagger}_{\Lambda p\lambda}\\ \hat{\eta}^{\dagger}_{p\lambda}\hat{U}(\Lambda^{-1})&=& e^{i\theta(p,\Lambda)}\hat{\eta}^{\dagger}_{\Lambda p\lambda}\\ \hat{\eta}^{\dagger}_{p\lambda}\hat{U}(\Lambda)&=& e^{i\theta(p,\Lambda^{-1})}\hat{\eta}^{\dagger}_{\Lambda^{-1} p\lambda} \ . \end{eqnarray} Now evaluate the commutator, \begin{eqnarray} [\hat{U}(\Lambda),\hat{N}_{\lambda}]_{-}&=& \int \frac{d^{3}p}{2\omega}\hat{U}(\Lambda)\hat{\eta}_{p\lambda}\hat{\eta}^{\dagger}_{p\lambda}- \int \frac{d^{3}p}{2\omega}\hat{\eta}_{p\lambda}\hat{\eta}^{\dagger}_{p\lambda}\hat{U}(\Lambda)\\ &=&\int \frac{d^{3}p}{2\omega}e^{-i\theta(p,\Lambda)}\hat{\eta}_{\Lambda p\lambda}\hat{\eta}^{\dagger}_{p\lambda}- \int \frac{d^{3}p}{2\omega}\hat{\eta}_{p\lambda}e^{i\theta(p,\Lambda^{-1})}\hat{\eta}^{\dagger}_{\Lambda^{-1} p\lambda} \ . \end{eqnarray} Make a change of variable in the second integral, $p'=\Lambda^{-1}p$. \begin{equation} [\hat{U}(\Lambda),\hat{N}_{\lambda}]_{-}= \int \frac{d^{3}p}{2\omega}e^{-i\theta(p,\Lambda)}\hat{\eta}_{\Lambda p\lambda}\hat{\eta}^{\dagger}_{p\lambda}- \int \frac{d^{3}p'}{2\omega'}\hat{\eta}_{\Lambda p'\lambda}e^{i\theta(\Lambda p',\Lambda^{-1})}\hat{\eta}^{\dagger}_{p'\lambda} \end{equation} The Wigner angle $\theta(p,\Lambda)$ corresponds to a rotation matrix $R(p,\Lambda)=H^{-1}_{\Lambda p}\Lambda H_{p}$ where $H_{p}$ is the standard boost. Now, \begin{equation} R(\Lambda p,\Lambda^{-1})=H^{-1}_{\Lambda^{-1}\Lambda p}\Lambda^{-1}H_{\Lambda p}=H^{-1}_{p}\Lambda^{-1}H_{\Lambda p}= (H^{-1}_{\Lambda p}\Lambda H_{p})^{-1}=(R(p,\Lambda))^{-1} \end{equation} so that the Wigner angle $\theta(\Lambda p,\Lambda^{-1})$ is $-\theta(p,\Lambda)$. Upon putting this result into the last integral the commutator vanishes $[\hat{U}(\Lambda),\hat{N}_{\lambda}]_{-}=0$ and so Bob's electromagnetic field also has the same sharp number $n$ of photons as Alice's field.
Edit: Explanation of why the invariant measure appears in the number operator
The method of induced representations, which is used to get the response of the single particle states to a Lorentz boost (second equation in main text), is simplest if one chooses a Lorentz invariant measure so that the resolution of unity for the single particle states is, \begin{equation} \sum_{\lambda=\pm 1}\int \frac{d^{3}p}{2\omega}|p,\lambda\rangle\langle p,\lambda|=1 \ . \end{equation} This choice implies that the commutator for the emission and absorption operators is, \begin{equation} [\hat{\eta}^{\dagger}_{p\lambda},\hat{\eta}_{p'\lambda'}]_{-}= \langle p,\lambda|p',\lambda'\rangle= 2\omega\delta_{\lambda,\lambda'}\delta^{3}(p-p') \ . \end{equation} In turn, this implies that the normal-ordered Hamiltonian for the free electromagnetic field is, \begin{equation} \hat{H}=\frac{1}{2}\int d^{3}p(\hat{\eta}_{p\lambda=-1}\hat{\eta}^{\dagger}_{p\lambda=-1}+\hat{\eta}_{-p\lambda=+1}\hat{\eta}^{\dagger}_{-p\lambda=+1}) \ . \end{equation} Now create $n$ photons from the vacuum with a state, \begin{equation} |\Psi\rangle=(\hat{\eta}_{p\lambda})^{n}|S\rangle \end{equation} and demand that the number operator $\hat{N}_{\lambda}$ measures the sharp result $n$ on this state. This implies that the Lorentz invariant measure must be used in the definition of the number operator (first equation in main text). So, one sees that there are no assumptions here, just a choice of the invariant measure (instead of a quasi-invariant measure) to make the method of induced representations used to get the irreps of the Poincare group for massless particles as simple as possible.
Stephen Blake
$\begingroup$ The big assumption here is in how the Fock space states and creation operators transform wrt the Lorentz group (i.e. \begin{equation} \hat{U}(\Lambda)|p,\lambda\rangle=e^{-i\theta(p,\Lambda)}|\Lambda p,\lambda\rangle \end{equation}). Once those assumptions are made, the result is straightforward. Are those assumptions valid? Since the wave equation is Lorentz covariant, and its solutions bring creation operators, I'd say those assumptions are valid in the free case and consistent. In the interacting case it's less clear. In an accelerating frame, the Unruh effect clearly shows this doesnt work $\endgroup$ – Saleh Hamdan Mar 6 '16 at 7:14
$\begingroup$ As far as any talk about the zeroth component of the 4-momentum, this is irrelevant. No one would expect the Hamiltonian to be Lorentz covariant. Clearly there would be red/blue shift, but that says nothing of the number of particles being red/blue shifted, which should be Lorentz invariant if we make the above assumptions on how the Fock space transforms. $\endgroup$ – Saleh Hamdan Mar 6 '16 at 7:20
$\begingroup$ @Saleh Hamdan : The response of the single particle states to a Lorentz boost (second equation in my answer) is not an assumption. It is a consequence of finding the irreducible representations of the Poincare group using the method of induced representations (see "Induced Representations of Groups and Quantum Mechanics" by George W. Mackey, W.A. Benjamin, 1968) for the massless case. These representations are only derived for free particles and the Poincare group only works between non-accelerating reference frames as you point out. $\endgroup$ – Stephen Blake Mar 6 '16 at 10:59
$\begingroup$ The "Explanation of why the invariant measure appears in the number operator" killed any doubt. Thanks $\endgroup$ – Nogueira Mar 8 '16 at 13:37
$\begingroup$ How does the answer reconciles with the accepted answer of physics.stackexchange.com/questions/21830/…? $\endgroup$ – thermomagnetic condensed boson Feb 8 '18 at 20:17
An experimentalist's answer.
There are innumerable experiments measuring two-gamma events. Lorentz invariance is a basic assumption for all measured interactions. Each interaction is in a different Lorentz frame depending on the energies and momenta involved. When we make the distributions of cross sections and angles, we depend on this invariance of the number of particles in the interaction under observation. As the standard model manages to fit all these to a very good approximation, this assumption holds.
Now each individual photon comes from a Lorentz-invariant interaction by construction of electromagnetic interactions, even though nothing is recording it, so the numbers should stay constant.
For the numbers to change when the Lorentz frame changes for an ensemble of already created photons, it means that the imposed Lorentz frame interacts somehow with the photons under observation. If energy is exchanged, more photons may appear, which will look like non-conservation of numbers, but should not be considered so.
$\begingroup$ You talk about experiments with gamma events, but they might just be not too sensitive to e.g. some microwave photons appearing in another frames. $\endgroup$ – Ruslan Mar 5 '16 at 7:55
$\begingroup$ @Ruslan If the production of photons, the number of photons, were not Lorentz invariant, the calculated cross sections, which come from the hypothesis of Lorentz invariance, would not fit. The data for pi0 decay, for example, cover an enormous range of energies of the pi0; each is a different Lorentz frame with respect to the other decays. My point is that to generate a photon you need an interaction; they are not generated out of the vacuum. $\endgroup$ – anna v Mar 5 '16 at 10:37
In counting the number of photons in a system, we are limited by detection, which in turn is limited by the Heisenberg Uncertainty Principle, something that can never be overcome in Nature. Theoretically, a photon can have any non-zero energy over an infinitely wide spectrum, so there is a non-zero probability of some photon having any particular energy.
Let's not forget that photons are only one ingredient of the fundamental particles in Nature. This means that they can interact with at least every other charged fundamental particle of the Standard Model, including their self-interactions. (The interaction channels would increase exponentially as we go to beyond-Standard-Model theories such as extra dimensions and supersymmetry.) All this being said, it is perfectly fine that photons get lost and/or created (through their interactions) and are never compensated in any given finite volume of our Universe (to be called the system) during any finite period of time (within the finite age of the Universe). Hence, being only one subset of all fundamental particles and having an infinitely long lifetime during which they have many interaction channels, their number will never be conserved in any finite volume of our Universe.
However, it would be more challenging to generalize the question into whether the total number of all fundamental particles inside the visible Universe is conserved. To answer that one, though, we need to know the Theory of Everything, in which ALL fundamental particles are known, including their lifetimes, masses and interaction channels, along with knowledge of Dark Matter and Dark Energy (accounting for 96% of the Universe's budget of matter and energy, which are invisible as of today). So, I am certain that the answer to your question is, "No. The number of photons in a finite volume known as a system will never be conserved." And I would like to leave the more challenging one to future generations of physicists.
Benjamin
$\begingroup$ I didn't downvote. But photons are not charged so don't say "every other charged" particle. And conserved is different than frame invariant so your answer has completely misunderstood the question. And the universe doesn't even have to be in a state of definite particle number and even if it momentarily were we know we can change the number of particles, we do experiments and observations that involve this all the time. So it isn't conserved, and the question was about frame invariance anyway which is totally different. $\endgroup$ – Timaeus Mar 6 '16 at 22:24
$\begingroup$ I am not sure who is deleting my comments, arguments and critics challenging Mr. Blake's derivation. Just because we don't have enough reasoning to argue against an idea should never mean to erase that voice. As far as I know, science is not meant to be monopolized as the same is true with wealth. $\endgroup$ – Benjamin Mar 7 '16 at 2:57
$\begingroup$ I haven't deleted your comments to Stephen Blake, but this is the second comment of yours I've seen that seems to be placed as a response to my comment. Are you sure you are commenting on Blake's answer? And comments aren't for discussions anyway, they are focused on improving answers. They are also second class in that there is no expectation that they survive. You can vote up or vote down and that stays. A comment is supposed to encourage modification of an answer and that's really what it's for. But you can't force a change to an answer. $\endgroup$ – Timaeus Mar 7 '16 at 3:07
$\begingroup$ Hi Timaeus, thanks for letting me know. Yes, I am sure I am commenting on Blake's answer. I really don't know what is going on here. I was waiting for him to respond me but it seems my question is being constantly deleted. I am going to be more patient though until the problem resolves. Sincerely, $\endgroup$ – Benjamin Mar 7 '16 at 3:09
$\begingroup$ You are free to craft a better answer yourself. $\endgroup$ – Timaeus Mar 7 '16 at 3:09
Does a charged particle accelerating in a gravitational field radiate?
Is the existence of a photon relative?
About states, observables and the wave functional interpretation in QFT with gauge fields
Do gravitational waves diminish over time/distance?
What is a "classical Schrodinger field", really?
Proof of Loss of Lorentz Invariance in Finite Temperature Quantum Field Theory
If all particles are fields, why does first quantization work for some particles?
Distance formula in Euclidean space vs. Spacetime Interval - why is one Pythagorean and one not?
Quantities invariant by Lorentz transform
What's the "effective potential" for photons in $X$-ray diffraction?
The meaning of gauge-fixing in covariant quantization of the electromagnetic field
The Lorentz-invariant particle spectrum
Is every Lorentz invariant a Lorentz scalar?
|
CommonCrawl
|
Fundam. Prikl. Mat.:
Fundam. Prikl. Mat., 2009, Volume 15, Issue 8, Pages 3–93 (Mi fpm1282)
This article is cited in 4 scientific papers (total in 4 papers)
On the structure of a relatively free Grassmann algebra
A. V. Grishin, L. M. Tsybulya
Moscow State Pedagogical University
Abstract: We investigate the multiplicative and $T$-space structure of the relatively free algebra $F^{(3)}$ with a unity corresponding to the identity $[[x_1,x_2],x_3]=0$ over an infinite field of characteristic $p>0$. The highest emphasis is placed on unitary closed $T$-spaces over a field of characteristic $p>2$. We construct a diagram containing all basic $T$-spaces of the algebra $F^{(3)}$, which form infinite chains of inclusions. One of the main results is the decomposition of quotient $T$-spaces connected with $F^{(3)}$ into a direct sum of simple components. Also, the studied $T$-spaces are commutative subalgebras of $F^{(3)}$; thus, the structure of $F^{(3)}$ and its subalgebras can be described in terms of modules over these commutative algebras. Separately, we consider the specifics of the case $p=2$. In the Appendix, we study nonunitary closed $T$-spaces and the case of a field of zero characteristic.
Journal of Mathematical Sciences (New York), 2010, 171:2, 149–212
Bibliographic databases:
UDC: 512.552
Citation: A. V. Grishin, L. M. Tsybulya, "On the structure of a relatively free Grassmann algebra", Fundam. Prikl. Mat., 15:8 (2009), 3–93; J. Math. Sci., 171:2 (2010), 149–212
This publication is cited in the following articles:
A. V. Grishin, "On the Center of a Relatively Free Lie-Nilpotent Algebra of Index $4$", Math. Notes, 91:1 (2012), 139–140
A. V. Grishin, "On $T$-spaces in a relatively free two-generated Lie nilpotent associative algebra of index $4$", J. Math. Sci., 191:5 (2013), 686–690
A. V. Grishin, S. V. Pchelintsev, "On centres of relatively free associative algebras with a Lie nilpotency identity", Sb. Math., 206:11 (2015), 1610–1627
A. V. Grishin, "On the measure of inclusion in relatively free algebras with the identity of Lie nilpotency of degree 3 or 4", Sb. Math., 210:2 (2019), 234–244
References: 27
|
CommonCrawl
|
UCLA - University of California, Los Angeles
This site license is managed by the UCLA Library.
Group Admin: Tony Aponte
http://www.ucla.edu
FRETBursts: An Open Source Toolkit for Analysis of Freely-Diffusing Single-Molecule FRET
Antonino Ingargiola
and 4 collaborators
Single-molecule Förster Resonance Energy Transfer (smFRET) allows probing intermolecular interactions and conformational changes in biomacromolecules, and represents an invaluable tool for studying cellular processes at the molecular scale. smFRET experiments can detect the distance between two fluorescent labels (donor and acceptor) in the 3-10 nm range. In the commonly employed confocal geometry, molecules are free to diffuse in solution. When a molecule traverses the excitation volume, it emits a burst of photons, which can be detected by single-photon avalanche diode (SPAD) detectors. The intensities of donor and acceptor fluorescence can then be related to the distance between the two fluorophores.
While recent years have seen a growing number of contributions proposing improvements or new techniques in smFRET data analysis, rarely have those publications been accompanied by software implementation. In particular, despite the widespread application of smFRET, no complete software package for smFRET burst analysis is freely available to date.
In this paper, we introduce FRETBursts, an open source software for analysis of freely-diffusing smFRET data. FRETBursts allows executing all the fundamental steps of smFRET bursts analysis using state-of-the-art as well as novel techniques, while providing an open, robust and well-documented implementation. Therefore, FRETBursts represents an ideal platform for comparison and development of new methods in burst analysis.
We employ modern software engineering principles in order to minimize bugs and facilitate long-term maintainability. Furthermore, we place a strong focus on reproducibility by relying on Jupyter notebooks for FRETBursts execution. Notebooks are executable documents capturing all the steps of the analysis (including data files, input parameters, and results) and can be easily shared to replicate complete smFRET analyses. Notebooks allow beginners to execute complex workflows and advanced users to customize the analysis for their own needs. By bundling analysis description, code and results in a single document, FRETBursts allows seamless sharing of analysis workflows and results, encourages reproducibility and facilitates collaboration among researchers in the single-molecule community.
Silicon Photomultipler Investigation for Radiation Technologies
Setup Information
The Silicon Photomultiplier (SiPM) is operated using a few Ortec Nuclear Instrument Modules in room B309. Specifically, The model 113 preamplifier, the 570 amplifier, and the 927 Aspec MCA that has a USB port in the back of it for data collection. There are 3 SMA ports that are available to use in the front of the box (Figure 1). Currently the third SMA port is not being used but can be in the future and is okay to be left alone.
SMA port 1 is used to apply bias to the SiPM. This is to be negatively biased only. The power supply used is a BK Precision 9110. This was specifically used because it allows control of how much current the SiPM will get. In the event the cover of the box is removed, the exposed SiPM won't be fried. The second and third SMA ports are used for signal out of the SiPM. Specifically, it is wired to read signal out of SMA port 2.
Outline of Simulating Recovery Following the 2014 South Napa Earthquake
Hua Kang
1. Abstract
Ducting and Conversions of Whistler Waves in Varying Density Plasma With Boundary Conditions
Lane Beale
Using an antenna to generate whistler waves in a plasma produced from a helium source, and a magnetic field probe to measure perturbations in the magnetic field, a biased disc was placed behind the wave generator to create ducting. Ducted waves were seen to propagate toward a density minimum when in a region of high magnetic field, and then along a density maximum when in a region of lower magnetic field. A simulation was created using theory, which was found not to agree with the measured results.
Nanophotonic Technology
History of Photonics Relevant to CMOS Integrated circuits
In 1909, Arnold Sommerfeld published his proposed analytical proof of surface polarization waves [3], marking in our history of photonics the cornerstone from which all of nanophotonics is motivated. Sixty years following Sommerfeld's publication, Chinese physicist Charles Kao published a solution for guiding Sommerfeld's surface excitations using optical fiber [4], for which in 2009 he would also receive a Nobel Prize. Today nanophotonic research is being conducted by many countries for many applications, yet their approaches are surprisingly similar. The majority of resources and funding for nanophotonics goes to the development of better materials. This point will be further evident in the following sections, but for now it should be mentioned that of those resources only a marginal portion is allocated in the direction of CMOS integration. Initially, this discovery was quite shocking for two big reasons. First of all, in recent years Moore's law's famous exponential curve of computing performance and affordability over time has been improving less and less exponentially, and we know one major cause of the bottleneck occurring in integrated circuits is interconnects. Illustrated in figure 1 is a comparison of the performance capability of optical fibers vs coaxial cables. Also in figure 1 is a relation of current nanophotonic waveguide capability compared to optical fiber, which has strong implications for what is possible on chips and the potential need for an enhancing technology. Secondly, the CMOS business has been so profitable and has invested so heavily in machinery that it seems logical to continue investing, as a lot of the infrastructure exists. The answer to the initial shock is illustrated in figure 2. CMOS-compatible nanophotonics occupies an extremely narrow space on a wide spectrum of possible use cases, and therefore to expect so much of the resources to be allocated so narrowly this early in such a young, immature science could greatly delay the achievable possibilities. The following sections, however, will discuss the results of the resources that were allocated for CMOS-integrated nanophotonics and the modules that are in development to address Moore's law.
Photonic ICs and Beyond
Mie Scattering and the Onset of Sonoluminescence
The time between SL pulses was found to be (3.069 ± 0.01) × 10⁻⁵ seconds, the width of an SL pulse was found to be (1.925 ± 0.04) × 10⁻⁶ seconds, and sonoluminescence was observed.
Diviner calibration
K. Michael Aye
This paper describes the methods used to calibrate LRO's Diviner Lunar Radiometer Experiment. Like many radiometers, Diviner is sensitive to instrument temperature changes along the orbit of LRO. Regularly executed calibration blocks include instrument pointings to space and toward internal blackbodies at a known temperature. Data from these blocks serve to determine the current offsets and the current DN-to-radiance conversion values. A ground calibration campaign served to determine conversion tables over temperature.
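As a schematic illustration only (my own simplification, not the actual Diviner pipeline), a two-point calibration using a space view and a blackbody view at a known temperature could fix an offset and a DN-to-radiance gain as follows; all names and numbers are made up:

```python
def two_point_calibration(dn_space, dn_blackbody, radiance_blackbody):
    # Space view taken as (near-)zero radiance; the blackbody view at a known
    # temperature supplies a second point, fixing the offset and the gain.
    offset = dn_space
    gain = radiance_blackbody / (dn_blackbody - dn_space)
    return offset, gain

def dn_to_radiance(dn, offset, gain):
    return (dn - offset) * gain

# Made-up numbers purely for illustration:
offset, gain = two_point_calibration(dn_space=512.0, dn_blackbody=2048.0,
                                     radiance_blackbody=95.0)
print(dn_to_radiance(1300.0, offset, gain))
```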
Follow up questions-fibers
a) The overlap integral is given by \[ \eta=\int_{-\infty}^{\infty} \Psi_{m'}^{output} \Psi_{m}^{input}\, dx. \] The paper by Tong et al. explains that they adjusted the overlap until the output is maximized. I think that, for that to occur, if the input were a Gaussian $E = E(0)e^{-ax^2}$, where $E(0)$ is the central maximum, then the overlap would be adjusted until the output reached $E(0)$, making the overlap integral just the integral of a Gaussian.
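As a rough numerical sketch of that idea (mine, not from the Tong et al. paper), the normalized overlap of two assumed Gaussian mode profiles can be evaluated directly; the mode widths below are made-up parameters:

```python
import numpy as np

# Two assumed Gaussian mode profiles; the 1/e field radii are illustrative only.
x = np.linspace(-5e-6, 5e-6, 20001)    # transverse coordinate (m)
w_in, w_out = 0.4e-6, 0.6e-6

psi_in = np.exp(-(x / w_in) ** 2)
psi_out = np.exp(-(x / w_out) ** 2)

# Normalized overlap: equals 1 only when the two profiles match exactly,
# which is what is being tuned for when "the output is maximized".
eta = np.trapz(psi_out * psi_in, x) ** 2 / (
    np.trapz(psi_in ** 2, x) * np.trapz(psi_out ** 2, x))
print(f"normalized overlap ≈ {eta:.3f}")    # ≈ 0.92 for these widths
```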
b) Using the mode solver for this part, I wanted to see how the modes would look beyond LP01, i.e., when the wire is no longer single-moded. So I went back to the paper and looked at the condition on the diameters that allow single-mode operation, $D< \frac{2.4 \lambda}{\pi \sqrt{n_0^2-n_1^2}}$, where $n_1$ is 1 for air and $n_0$ is the index of refraction of the medium in use. From the Tong paper (its second page, p. 817 of the Nature issue in which it was published), the index of refraction of silica is $n_0 = 1.46$. This gives NA = 1.063, and using $\lambda = 633$ nm the maximum diameter for single-mode operation is 454.6 nm. So I use diameters larger than this to get non-single-mode operation; with a 600 nm diameter, the generated image looks like two very sharp Gaussians.
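A minimal sketch of the arithmetic above, using the same assumed values ($n_0 = 1.46$, $n_1 = 1$, $\lambda = 633$ nm):

```python
import math

# Arithmetic check using the values quoted above (treated as given):
n0, n1 = 1.46, 1.0           # silica wire, air cladding
wavelength = 633e-9          # metres

na = math.sqrt(n0**2 - n1**2)
d_max = 2.4 * wavelength / (math.pi * na)
print(f"NA = {na:.3f}")                                 # ≈ 1.063
print(f"single-mode cutoff D = {d_max * 1e9:.1f} nm")   # ≈ 454.6 nm
```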
c) I believe this is because, when the diameter of the wire is decreased below the wavelength it is supposed to guide, more of the light is guided outside the wire as a surface wave. So for 1550 nm, looking at the loss graph, a wire diameter of 1200 nm is already below the wavelength we want to guide, so as the diameter decreases further, more light travels outside the wire, leading to more loss. For the 633 nm wavelength, by comparison, the increase in loss only begins once the wire diameter drops below 633 nm, so the loss stays lower from 1200 nm down to 633 nm.
d) Along the same lines, the loss mechanism is surface contamination. The silica wires aren't perfectly uniform, and because of that, light guided as a surface wave is more susceptible to surface contamination.
Task (AI goal)
A "Task" is a goal or subgoal within an advanced AI, that can be satisfied as fully as possible by optimizing a bounded part of space, for a limited time, with a limited amount of effort.
E.g., "make as many paperclips as possible" is definitely not a 'task' in this sense, since it spans every paperclip anywhere in space and future time. Creating more and more paperclips, using more and more effort, would be more and more preferable up to the maximum exertable effort.
For a more subtle example of non-taskishness, consider Disney's "sorcerer's apprentice" scenario: Mickey Mouse commands a broomstick to fill a cauldron. The broomstick then adds more and more water to the cauldron until the workshop is flooded. (Mickey then tries to destroy the broomstick. But since the broomstick has no designed-in reflectively stable shutdown button, the broomstick repairs itself and begins constructing subagents that go on pouring more water into the cauldron.)
Since the Disney cartoon is a musical, we don't know if the broomstick was given a time bound on its job. Let us suppose that Mickey tells the broomstick to do its job sometime before 1pm.
Then we might imagine that the broomstick is a subjective expected utility maximizer with a utility function \(U_{cauldron}\) over outcomes \(o\):
$$U_{cauldron}(o): \begin{cases} 1 & \text{if in $o$ the cauldron is $\geq 90\%$ full of water at 1pm} \\ 0 & \text{otherwise} \end{cases}$$
This looks at first glance like it ought to be taskish:
The cauldron is bounded in space.
The goal only concerns events that happen before a certain time.
The highest utility that can be achieved is \(1,\) which is reached as soon as the cauldron is \(\geq 90\%\) full of water, which seems achievable using a limited amount of effort.
The last property in particular makes \(U_{cauldron}\) a "satisficing utility function", one where an outcome is either satisfactory or not-satisfactory, and it is not possible to do any better than "satisfactory".
But by previous assumption, the broomstick is still optimizing expected utility. Assume the broomstick reasons with reasonable generality via some universal prior. Then the subjective probability of the cauldron being full, when it looks full to the broomstick-agent, will not be exactly \(1.\) Perhaps (the broomstick-agent reasons) the broomstick's cameras are malfunctioning, or its RAM has malfunctioned producing an inaccurate memory.
Then the broomstick-agent reasons that it can further increase the probability of the cauldron being full—however slight the increase in probability—by going ahead and dumping in another bucket of water.
That is: Cromwell's Rule implies that the subjective probability of the cauldron being full never reaches exactly \(1\). Then there can be an infinite series of increasingly preferred, increasingly more effortful policies \(\pi_1, \pi_2, \pi_3 \ldots\) with
$$\mathbb E [ U_{cauldron} | \pi_1] = 0.99\\ \mathbb E [ U_{cauldron} | \pi_2] = 0.999 \\ \mathbb E [ U_{cauldron} | \pi_3] = 0.999002 \\ \ldots$$
In that case the broomstick can always do better in expected utility (however slightly) by exerting even more effort, up to the maximum effort it can exert. Hence the flooded workshop.
If on the other hand the broomstick is an expected utility satisficer, i.e., a policy is "acceptable" if it has \(\mathbb E [ U_{cauldron} | \pi ] \geq 0.95,\) then this is now finally a taskish process (we think). The broomstick can find some policy that's reasonably sure of filling up the cauldron, execute that policy, and then do no more.
As described, this broomstick doesn't yet have any impact penalty, or features for mild optimization. So the broomstick could also get \(\geq 0.95\) expected utility by flooding the whole workshop; we haven't yet forbidden excess efforts. Similarly, the broomstick could also go on to destroy the world after 1pm—we haven't yet forbidden excess impacts.
But the underlying rule of "Execute a policy that fills the cauldron at least 90% full with at least 95% probability" does appear taskish, so far as we know. It seems possible for an otherwise well-designed agent to execute this goal to the greatest achievable degree, by acting in bounded space, over a bounded time, with a limited amount of effort. There does not appear to be a sequence of policies the agent would evaluate as better fulfilling its decision criterion, which use successively more and more effort.
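A toy sketch of the contrast (my own illustration with made-up numbers, not from the original text): over a finite menu of policies, a maximizer always climbs to the most effortful option for an arbitrarily small gain in expected utility, while a satisficer can stop at any option clearing the threshold.

```python
# Made-up policy menu: (name, effort, expected utility of a >=90%-full cauldron).
policies = [
    ("fill the cauldron once",    1, 0.96),
    ("fill it and top it up",     3, 0.99),
    ("flood the whole workshop", 50, 0.999),
]

def maximizer(policies):
    # Always prefers higher expected utility, however slight the gain.
    return max(policies, key=lambda p: p[2])

def satisficer(policies, threshold=0.95):
    # Any policy clearing the threshold is acceptable; here we take the
    # least effortful acceptable one (any acceptable choice would do).
    acceptable = [p for p in policies if p[2] >= threshold]
    return min(acceptable, key=lambda p: p[1]) if acceptable else None

print("maximizer picks: ", maximizer(policies)[0])   # flood the whole workshop
print("satisficer picks:", satisficer(policies)[0])  # fill the cauldron once
```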
The "taskness" of this goal, even assuming it was correctly identified, wouldn't by itself make the broomstick a fully taskish AGI. We also have to consider whether every subprocess of the AI is similarly tasky; whether there is any subprocess anywhere in the AI that tries to improve memory efficiency 'as far as possible'. But it would be a start, and make further safety features more feasible/useful.
See also Mild optimization as an open problem in AGI alignment.
Task-directed AGI
An advanced AI that's meant to pursue a series of limited-scope goals given it by the user. In Bostrom's terminology, a Genie.
Ryan Carey24 Apr 2017 20:28 UTC
I think the "task AI" term has been a bit confusing. When people first hear the term "task AI" they naturally think of non-autonomy (Jessica and I both did this). It also sounds a bit similar to Holden's "tool AI" which has similar connotations.
Whereas I'm apparently supposed to be imagining an optionally autonomous satisficing agent. I admittedly don't have any better suggestions.
Positive solutions to a Dirichlet problem with $p$-Laplacian and concave-convex nonlinearity depending on a parameter
CPAA Home
Local existence of strong solutions to the three dimensional compressible MHD equations with partial viscosity
March 2013, 12(2): 831-850. doi: 10.3934/cpaa.2013.12.831
Standing waves of nonlinear Schrödinger equations with optimal conditions for potential and nonlinearity
Soohyun Bae 1, and Jaeyoung Byeon 2,
Faculty of Liberal Arts and Sciences, Hanbat National University, Daejeon, 305-719
Department of Mathematics, Pohang University of Science and Technology, Pohang, Kyungbuk 790-784
Received September 2011 Revised January 2012 Published September 2012
We consider the singularly perturbed nonlinear elliptic problem \begin{eqnarray*} \varepsilon^2 \Delta v - V(x)v + f(v) =0, v > 0, \lim_{|x|\to \infty} v(x) = 0. \end{eqnarray*} Under almost optimal conditions for the potential $V$ and the nonlinearity $f$, we establish the existence of single-peak solutions whose peak points converge to local minimum points of $V$ as $\varepsilon \to 0$. Moreover, we exhibit a threshold on the condition of $V$ at infinity between existence and nonexistence of solutions.
Keywords: optimal conditions, variational method, decaying potential, standing waves, nonlinear Schrödinger equations.
Mathematics Subject Classification: Primary: 35J20, 35J60; Secondary: 35Q5.
Citation: Soohyun Bae, Jaeyoung Byeon. Standing waves of nonlinear Schrödinger equations with optimal conditions for potential and nonlinearity. Communications on Pure & Applied Analysis, 2013, 12 (2) : 831-850. doi: 10.3934/cpaa.2013.12.831
Published: 22nd June 2021
DOI: 10.4204/EPTCS.335
EPTCS 335
Proceedings Eighteenth Conference on
Theoretical Aspects of Rationality and Knowledge
Beijing, China, June 25-27, 2021
Edited by: Joseph Halpern and Andrés Perea
Andrés Perea 1
A Recursive Measure of Voting Power that Satisfies Reasonable Postulates
Arash Abizadeh and Adrian Vetta 3
Well-Founded Extensive Games with Perfect Information
Krzysztof R. Apt and Sunil Simon 7
Uncertainty-Based Semantics for Multi-Agent Knowing How Logics
Carlos Areces, Raul Fervari, Andrés R. Saravia and Fernando R. Velázquez-Quesada 23
Revisiting Epistemic Logic with Names
Marta Bílková, Zoé Christoff and Olivier Roy 39
Language-based Decisions
Adam Bjorndahl and Joseph Y. Halpern 55
An Awareness Epistemic Framework for Belief, Argumentation and Their Dynamics
Alfredo Burrieza and Antonio Yuste-Ginel 69
Local Dominance
Emiliano Catonini and Jingyi Xue 85
Collective Argumentation: The Case of Aggregating Support-Relations of Bipolar Argumentation Frameworks
Weiwei Chen 87
De Re Updates
Michael Cohen, Wen Tang and Yanjing Wang 103
Dynamically Rational Judgment Aggregation: A Summary
Franz Dietrich and Christian List 119
Deliberation and Epistemic Democracy
Huihui Ding and Marcus Pivato 127
No Finite Model Property for Logics of Quantified Announcements
Hans van Ditmarsch, Tim French and Rustam Galimullin 129
Krisztina Fruzsa, Roman Kuznets and Ulrich Schmid 139
Are the Players in an Interactive Belief Model Meta-certain of the Model Itself?
Satoshi Fukuda 155
Knowledge from Probability
Jeremy Goodman and Bernhard Salow 171
Belief Inducibility and Informativeness
P. Jean-Jacques Herings, Dominik Karos and Toygar Kerman 187
Measuring Violations of Positive Involvement in Voting
Wesley H. Holliday and Eric Pacuit 189
Algorithmic Randomness, Bayesian Convergence and Merging
Simon Huttegger, Sean Walsh and Francesca Zaffora Blando 211
Game-Theoretic Models of Moral and Other-Regarding Agents (extended abstract)
Gabriel Istrate 213
Understanding Transfinite Elimination of Non-Best Replies
Stephan Jagau 229
Persuading Communicating Voters
Toygar Kerman and Anastas P. Tenev 231
Knowing How to Plan
Yanjun Li and Yanjing Wang 233
Probabilistic Stability and Statistical Learning
Krzysztof Mierzewski 249
Attainable Knowledge and Omniscience
Pavel Naumov and Jia Tao 251
Failures of Contingent Thinking
Evan Piermont and Peio Zuazo-Garin 267
Reasoning about Emergence of Collective Memory
R. Ramanujam 269
A Deontic Stit Logic Based on Beliefs and Expected Utility
Aldo Iván Ramírez Abarca and Jan Broersen 281
Epistemic Modality and Coordination under Uncertainty
Giorgio Sbardolini 295
Communication Pattern Models: An Extension of Action Models for Dynamic-Network Distributed Systems
Diego A. Velázquez, Armando Castañeda and David A. Rosenblueth 307
These proceedings contain the papers that have been accepted for presentation at the Eighteenth Conference on Theoretical Aspects of Rationality and Knowledge (TARK XVIII). The conference took place from June 25 to June 27, 2021, at Tsinghua University, Beijing, China. However, due to the COVID-19 pandemic, the conference was offered completely online.
As is to be expected from TARK, these proceedings offer a highly interdisciplinary collection of papers, including areas such as logic, computer science, philosophy, economics, game theory, decision theory and social welfare. The topics covered by the papers include semantic models for knowledge and belief, epistemic logic, computational social choice, rationality in games and decision problems, and foundations of multi-agent systems.
I wish to thank the team of local organizers, chaired by Fenrong Liu, for making this conference possible under these extraordinary circumstances. Another word of gratitude goes to the members of the program committee, not only for reviewing the submissions, but also for their valuable input concerning other aspects of the conference, such as the invited speakers and the precise format of the conference. The members of the program committee are: Christian Bach, Adam Bjorndahl, Giacomo Bonanno, Emiliano Catonini, Franz Dietrich, Davide Grossi, Joseph Halpern (conference chair), Jérôme Lang, Fenrong Liu (local organizing chair), Silvia Milano, Yoram Moses, Eric Pacuit, Andrés Perea (program committee chair), Olivier Roy, Elias Tsakas, Paolo Turrini, Rineke Verbrugge and Kevin Zollman.
I also wish to thank the invited speakers at this conference: Ariel Procaccia, Burkhard Schipper, Sonja Smets and Katie Steele.
On the practical side, the conference and the proceedings have benefitted a lot from the EasyChair platform and the EPTCS system. I thank Rob van Glabbeek, editor of EPTCS, for his help during the process of setting up these proceedings.
Last but not least, I am very grateful to Joseph Halpern (conference chair) and Fenrong Liu (local organizing chair) who have done so much for the organization of TARK XVIII. It was an absolute pleasure to work with you, and I am sorry for the many E-mails you had to digest from me.
I sincerely hope that these proceedings will be a source of inspiration for your research, and that you will enjoy reading the papers.
Andrés Perea
Program Committee Chair TARK XVIII
Maastricht, June 2021
Arash Abizadeh (Department of Political Science, McGill University, Montreal, Canada)
Adrian Vetta (Department of Mathematics and Statistics, and School of Computer Science, McGill University, Montreal, Canada)
We design a recursive measure of voting power based upon partial voting efficacy as well as full voting efficacy. In contrast, classical indices and measures of voting power incorporate only full voting efficacy. We motivate our design by representing voting games using a division lattice and via the notion of random walks in stochastic processes, and show the viability of our recursive measure by proving it satisfies a plethora of postulates that any reasonable voting measure should satisfy.
There have been two approaches to justifying measures of voting power. The first is the axiomatic approach, which seeks to identify a set of reasonable axioms that uniquely pick out a single measure of voting power. To date this justificatory approach has proved a failure: while many have succeeded in providing axiomatic characterizations of various measures, no one has succeeded in doing so for a set of axioms all of which are independently justified, i.e., in showing why it would be reasonable to expect a measure of voting power to satisfy the entire set of axioms that uniquely pick out a proposed measure. For example, Dubey (1975) and Dubey and Shapley (1979) have characterized the classic Shapley-Shubik index ($SS$) and Penrose-Banzhaf measure ($PB$) as uniquely satisfying a distinct set of axioms, respectively, but several of the axioms lack proper justification (Straffin 1982: 292-296; Felsenthal and Machover 1998: 194-195; Laruelle and Valenciano 2001). The second, two-pronged approach is more modest and involves combining two prongs of justification. The first prong is to motivate a proposed measure on conceptual grounds, showing the sense in which it captures the intuitive meaning of what voting power is. With this conceptual justification in place, the second prong of justification then requires showing that the measure satisfies a set of reasonable postulates. For the more modest approach, both prongs of justification are necessary, and the satisfaction of reasonable postulates serves, not to pick out a uniquely reasonable measure, but to rule out unreasonable measures.
The first prong of justification has been typically carried out in probabilistic terms. For example, the a priori Penrose-Banzhaf measure equates a player's voting power, in a given voting structure, with the proportion of logically possible divisions or complete vote configurations in which the player is (fully) decisive for the division outcome, i.e., in which the player has an alternative voting strategy such that, if it were to choose that alternative instead, the outcome would be different (holding all other players' votes constant). The standard interpretation is that the a priori $PB$ measure represents the probability a player will be decisive under the assumptions of equiprobable voting (the probability a player votes for an alternative is equal to the probability it votes for any other) and voting independence (votes are not correlated), which together imply equiprobable divisions (the probability of each division is equal) (Felsenthal and Machover 1998: 37-38).
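As a concrete illustration (not part of the abstract), the a priori Penrose-Banzhaf measure can be computed for a small hypothetical weighted voting game by enumerating all equiprobable divisions and counting those in which a player is decisive; the weights and quota below are made up.

```python
from itertools import product

# Hypothetical weighted yes/no voting game (illustrative values only).
weights = [2, 1, 1]   # players 0, 1, 2
quota = 3             # "yes" wins iff the total yes-weight reaches the quota

def outcome(votes):
    return sum(w for w, v in zip(weights, votes) if v) >= quota

def penrose_banzhaf(i):
    # Fraction of the 2^n equiprobable divisions in which player i is decisive,
    # i.e. flipping i's vote (all other votes held fixed) changes the outcome.
    divisions = list(product([False, True], repeat=len(weights)))
    decisive = 0
    for votes in divisions:
        flipped = list(votes)
        flipped[i] = not flipped[i]
        if outcome(votes) != outcome(flipped):
            decisive += 1
    return decisive / len(divisions)

print([penrose_banzhaf(i) for i in range(len(weights))])  # [0.75, 0.25, 0.25]
```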
However, measures of voting power based exclusively on the ex ante probability of decisiveness suffer from a crucial conceptual flaw. The motivation for basing a measure of voting power on this notion is that decisiveness is supposed to formalize the idea of a player making a difference to the outcome. To equate a player's voting power with the player's ex ante probability of being decisive is to assume that if any particular division were hypothetically to occur, then the player would have efficaciously exercised power to help produce the outcome ex post if and only if that player would have been decisive or necessary for the outcome. Yet this assumption is false: sometimes, as in causally overdetermined outcomes, an actor has efficaciously exercised its power to effect an outcome ex post, and, through the exercise of that power, made a causal contribution to the outcome, even though the actor's contribution was not decisive to it.
More specifically, reducing voting power to the ex ante probability of being decisive fails to take into account players' partial causal efficacy in producing outcomes ex post. In this paper, we design a Recursive Measure ($RM$) of voting power that remedies this shortcoming, by taking into account partial efficacy or degrees of causal efficacy. A full conceptual justification for $RM$ -- i.e., the first prong of justification on the more modest approach -- is given in Abizadeh (working paper). $RM$ represents, not the probability a player will be decisive for the division outcome (the probability the player will be fully causally efficacious in bringing it about) but, rather, the player's expected efficacy, that is, the probability the player will make a causal contribution to the outcome weighted by the degree of causal efficacy. Whereas decisiveness measures such as $PB$ solely track full efficacy, $RM$ tracks partial efficacy as well.
Our task in this paper is to furnish the second prong of justification. In particular, we take it that any reasonable measure of a priori voting power $\pi$ should satisfy, for simple voting games $\mathcal{G}$ with equiprobable divisions, where $[n]$ is the set of all voters and a dummy is a voter not decisive in any division, the following postulates:
Iso-invariance postulate: For iso-invariant voting games $\mathcal{G}$ and $\hat{\mathcal{G}}$: $\pi_i=\hat{\pi}_i$ for any player $i$.
Dummy postulates: For any game $\hat{\mathcal{G}}$ formed by the addition of a dummy voter to $\mathcal{G}$: if $i$ is a dummy voter, then $\pi_i=0$; $\pi_i=0$ only if $i$ is a dummy voter; and if $i$ is a non-dummy voter, then $\pi_i=\hat{\pi}_i$.
Dominance postulate: For any subset $S\subseteq [n]$ with $i,j\notin S$: $\pi_j\ge \pi_i$ whenever $j$ weakly dominates $i$, and $\pi_j> \pi_i$ whenever $j$ strictly dominates $i$ (where $j$ weakly dominates $i$ if whenever $S\cup i$ vote yes and the outcome is yes, then if $S\cup j$ vote yes the outcome is yes; and $j$ strictly dominates $i$ if the former weakly dominates the latter but not vice versa).
Donation postulate: For any game $\hat{\mathcal{G}}$ formed from $\mathcal{G}$ by player $j$ transferring its vote to player $i$: $\hat{\pi}_i \ge \max (\pi_i, \pi_j )$.
Bloc postulate: For any game $\hat{\mathcal{G}}$ formed from $\mathcal{G}$ by player $i$ annexing $j$'s vote to form a bloc $I=\{i,j\}$: $\hat{\pi}_I \ge \max (\pi_i, \pi_j )$.
Quarrel postulate: For any game $\hat{\mathcal{G}}$ formed from $\mathcal{G}$ by inducing a symmetric, weak, monotonic quarrel between $i$ and $j$: $\hat{\pi}_i\le \pi_i$ and $\hat{\pi}_j\le \pi_j$.
Added blocker postulate: For any game $\mathcal{G}^Y$ resulting from $\mathcal{G}$ by adding an added yes-blocker, and $\mathcal{G}^N$ resulting from adding an added no-blocker: $\frac{\pi^+_i(\mathcal{G})}{\pi^+_j(\mathcal{G})} = \frac{\pi^+_i(\mathcal{G}^Y)}{\pi^+_j(\mathcal{G}^Y)}$, and $\frac{\pi^-_i(\mathcal{G})}{\pi^-_j(\mathcal{G})} = \frac{\pi^-_i(\mathcal{G}^N)}{\pi^-_j(\mathcal{G}^N)}$ (where $\pi^+$ is a player's yes-voting power, based solely on divisions in which it votes yes, and $\pi^-$ is a player's no-voting power, based solely on divisions in which it votes no).
In the full paper, we explain the intuitive justification for and fully specify each of these voting-power postulates, and then prove that $RM$ satisfies them for a priori power in simple voting games. We prove these postulates by introducing a new way of representing voting games using a division lattice, and show that previous formulations of some of these postulates require revision.
A full version of the paper can be found at: http://arxiv.org/abs/2105.03006
Abizadeh, A. (Working paper). A Recursive Measure of Voting Power with Partial Decisiveness or Efficacy.
Dubey, P. (1975). On the Uniqueness of the Shapley Value. International Journal of Game Theory, 4(3), 131-139. doi:10.1007/BF01780630
Dubey, P., and Shapley, L.S. (1979). Mathematical Properties of the Banzhaf Power Index. Mathematics of Operations Research, 4(2), 99-131. doi:10.1287/moor.4.2.99
Felsenthal, D., and Machover, M. (1998) The Measurement of Voting Power: Theory and Practice, Problems and Paradoxes, Edward Elgar. doi:10.4337/9781840647761
Laruelle, A., and Valenciano, F. (2001). Shapley-Shubik and Banzhaf Indices Revisited. Mathematics of Operations Research, 26(1), 89-104. doi:10.1287/moor.26.1.89.10589
Straffin, P.D. (1982). Power Indices in Politics. In S.J. Brams, W.F. Lucas, and P.D. Straffin (Eds.), Political and Related Models, 256-321. New York: Springer. doi:10.1007/978-1-4612-5430-0_11
Emiliano Catonini (HSE University Moscow)
Jingyi Xue (Singapore Management University)
We present a local notion of dominance that speaks to the true choice problems among actions in a game tree and does not rely on global planning. When we do not restrict the ability of the players to do contingent reasoning, a reduced strategy is weakly dominant if and only if it prescribes a locally dominant action at every decision node, therefore any dynamic decomposition of a direct mechanism that preserves strategy-proofness is robust to the lack of global planning. Under a form of wishful thinking, we also show that strategy-proofness is robust to the lack of forward-planning. Moreover, from our local perspective, we can identify rough forms of contingent reasoning that are particularly natural. We construct a dynamic game that implements the Top Trading Cycles allocation under a minimal form of contingent reasoning, related to independence of irrelevant alternatives.
Franz Dietrich (Paris School of Economics & CNRS)
Christian List (LMU Munich)
Judgment aggregation theory traditionally aims for collective judgments that are rational. So far, rationality has been understood in purely static terms: as coherence of judgments at a given time, where 'coherence' could for instance mean consistency, or completeness, or deductive closure, or combinations thereof. By contrast, this paper, which summarises results from Dietrich and List (2021), asks the novel question of whether collective judgments can be dynamically rational: whether they can respond well to new information, i.e., change rationally when information is learnt by everyone. Formally, we call a judgment aggregation rule dynamically rational with respect to a given revision operator if, whenever all individuals revise their judgments in light of some information (a proposition), then the new aggregate judgments are the old ones revised in light of this information. In short, aggregation and revision commute. A general impossibility theorem holds: as long as the propositions on the agenda are sufficiently interconnected, no judgment aggregation rule with standard properties is dynamically rational with respect to any revision operator satisfying mild conditions (familiar from belief revision theory). The theorem is a counterpart for dynamic rationality of known impossibility theorems for static rationality. Relaxation of the theorem's conditions opens the door to interesting aggregation rules generating dynamically rational judgments, including certain premise-based rules, as we briefly discuss (see Dietrich and List 2020 for details).
Suppose a group of individuals – say, a committee, expert panel, multi-member court, or other decision-making body – makes collective judgments on some propositions by aggregating its members' individual judgments on those propositions. And now suppose some new information – in the form of the truth of some proposition – is learnt. All individuals rationally revise their judgments. Aggregating the new individual judgments yields new collective judgments. If the group is to be a rational agent, then it should incorporate new information rationally, and so the new aggregate judgments should coincide with the old ones revised in light of the information. Technically, this means that the operations of aggregation and revision commute: aggregating judgments and then revising the result yields the same as revising individual judgments and then aggregating.
In this paper, we investigate whether we can find reasonable aggregation rules that enable a group to achieve such dynamic rationality: aggregation rules which commute with reasonable revision methods. Surprisingly, this question has not been studied in the judgment-aggregation framework, where judgments are binary verdicts on some propositions: "yes"/"no", "true"/"false", "accept" /"reject". (On judgment-aggregation theory, see List and Pettit 2002, Dietrich and List 2007, Nehring and Puppe 2010, Dokow and Holzman 2010, List and Puppe 2009.) The focus in judgment-aggregation theory has generally been on static rationality, namely on whether properties such as consistency, completeness, or deductive closure are preserved when individual judgments are aggregated into collective ones at a given point in time.1
By contrast, the question of dynamic rationality has received much attention in the distinct setting of probability aggregation, where judgments aren't binary but take the form of subjective probability assignments to the elements of some algebra. In that context, a mix of possibility and impossibility results has been obtained (e.g., Madansky 1964, Genest 1984, Genest et al. 1986, Dietrich 2010, 2019, Russell et al. 2015). These show that some familiar methods of aggregation – notably, the arithmetic averaging of probabilities – fail to commute with belief revision, while other methods – particularly geometric averaging – do commute with revision. An investigation of the parallel question in the case of binary judgments is therefore overdue.
We present a negative result: for a large class of familiar judgment aggregation rules, dynamic rationality is unachievable relative to a large class of reasonable judgment revision methods. However, if we relax some of our main theorem's conditions on the aggregation rule, dynamically rational aggregation becomes possible. In particular, "premise-based" aggregation can be dynamically rational relative to certain "premise-based" revision methods. This extended abstract focuses on the impossibility finding, for reasons of space. Possibilities are discussed in Dietrich and List (2021), which also contains all proofs.
The formal setup
We begin with the basic setup from judgment-aggregation theory (following List and Pettit 2002 and Dietrich 2007). We assume that there is a set of individuals who hold judgments on some set of propositions, and we are looking for a method of aggregating these judgments into resulting collective judgments. The key elements of this setup are the following:
Individuals. These are represented by a finite and non-empty set N. Its members are labelled 1, 2, ..., n. We assume n ≥ 2.
Propositions. These are represented in formal logic. For our purposes, a thin notion of "logic" will suffice. Specifically, a logic, L, is a non-empty set of formal objects called "propositions", which is endowed with two things: a negation operator, denoted ¬, so that, for every proposition p in L there is also its negation ¬p in L; and a well-behaved notion of consistency, which specifies, for each set of propositions S ⊆ L, whether S is consistent or inconsistent.2 Standard propositional, predicate, modal, and conditional logics all fall under this definition, as do Boolean algebras.3 A proposition p is contradictory if {p} is inconsistent, tautological if {¬p} is inconsistent, and contingent if p is non-contradictory and non-tautological.
Agenda. The agenda is the set of those propositions from L on which judgments are to be made. Formally, this is a finite non-empty subset X of L which can be partitioned into proposition-negation pairs {p, ¬p}, abbreviated { ± p}. Sometimes it is useful to make this partition explicit. We write Z to denote the set of these proposition-negation pairs of X. The elements of Z can be interpreted as the binary issues under consideration. Then the agenda X is their disjoint union, formally X = ∪_{Z ∈ Z} Z. Throughout this paper, we assume that double-negations cancel out in agenda propositions.4
Our focus will be on agendas satisfying a non-triviality condition. To define it, call a set of propositions minimal inconsistent if it is inconsistent but all its proper subsets are consistent. Proposition-negation pairs of the form {p, ¬p} (with p contingent) are minimal inconsistent, and so are sets of the form {p, q, ¬(p ∧ q)} (with p and q contingent), where "∧" stands for logical conjunction ("and"). We call an agenda non-simple if it has at least one minimal inconsistent subset of size greater than two. An example of a non-simple agenda is the set X = { ± p, ± (p → q), ± q}, where p might be the proposition "Current atmospheric CO2 is above 407 ppm", p → q might be the proposition "If current atmospheric CO2 is above 407 ppm, then the Arctic iceshield will melt by 2050", and q might be the proposition "The Arctic iceshield will melt by 2050". The conditional p → q can be formalized in standard propositional logic or in a suitable logic for conditionals. A three-member minimal inconsistent subset of this agenda is {p, p → q, ¬q}.
Judgments. Each individual's (and subsequently the group's) judgments on the given propositions are represented by a judgment set, which is a subset J ⊆ X, consisting of all those propositions from X that its bearer "accepts" (e.g., affirms or judges to be true). A judgment set J is
complete if it contains a member of each proposition-negation pair from X,
consistent if it is a consistent set in the sense of the given logic, and
classically rational if it has both of these properties.
We write J to denote the set of all classically rational judgment sets on the agenda X. A list of judgment sets (J1, ..., Jn) across the individuals in N is called a profile (of individual judgment sets).
Aggregation rule. A (judgment) aggregation rule is a function, F, which maps each profile (J1, ..., Jn) in some domain D of admissible profiles (often D = J^n) to a collective judgment set J = F(J1, ..., Jn). A standard example is majority rule, which is defined as follows: for each (J1, ..., Jn) ∈ J^n,
F(J1, ..., Jn) = {p ∈ X : |{i:p∈Ji}| > n/2}.
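For illustration (my own sketch, not from the paper), majority rule and the more general uniform quota rule discussed below can be written directly over judgment sets represented as sets of accepted propositions:

```python
def quota_rule(profile, agenda, m):
    # Uniform quota rule: accept p iff at least m individuals accept p.
    return {p for p in agenda if sum(p in J for J in profile) >= m}

def majority_rule(profile, agenda):
    # Strict-majority threshold: the smallest integer greater than n/2.
    return quota_rule(profile, agenda, len(profile) // 2 + 1)

# Hypothetical three-voter profile over the agenda {p, ~p, q, ~q}:
agenda = {"p", "~p", "q", "~q"}
profile = [{"p", "q"}, {"p", "~q"}, {"~p", "~q"}]
print(sorted(majority_rule(profile, agenda)))   # ['p', '~q']
```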
A typical research question in judgment aggregation theory is whether we can find aggregation rules that satisfy certain requirements of democratic responsiveness to the individual judgments and collective rationality.
Judgment revision
The idea we wish to capture is that whenever any individual (or subsequently the group) learns some new information, in the form of the truth of some proposition, this individual (or the group) must incorporate the learnt information in the judgments held – an idea familiar from belief revision theory in the tradition of Alchourrón, Gärdenfors and Makinson (1985) (see also Rott 2001 and Peppas 2008). Our central concept is that of a judgment revision operator. This is a function which assigns to any pair (J, p) of an initial judgment set J ⊆ X and a learnt proposition p ∈ X a new judgment set J|p, the revised judgment set, given p. Formally, the revision operator is any function from 2^X × X to 2^X. We call it regular if it satisfies the following two minimal conditions:
it is successful, i.e., p ∈ J|p for any pair (J, p) ("accept what you learn"), and
it is conservative, i.e., J|p = J for any pair (J, p) such that p ∈ J ("no news, no change").
We further call a revision operator rationality-preserving if whenever J ∈ J, we have J|p ∈ J for all non-contradictory propositions p ∈ X. These definitions are well-illustrated by the class of distance-based revision operators, familiar from belief revision theory. Such operators require that when a judgment set is revised in light of some new information, the post-revision judgments remain as "close" as possible to the pre-revision judgments, subject to the constraint that the learnt information be incorporated and no inconsistencies be introduced. Different distance-based operators spell out the notion of "closeness" in different ways (different metrics have been introduced in the area of judgment aggregation by Konieczny and Pino-Pérez 2002 and Pigozzi 2006).
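A minimal sketch (my own, on a made-up two-issue agenda) of a distance-based revision operator: revise J by the learnt proposition p by moving to a nearest rational judgment set containing p. Under this definition the operator is successful, conservative (when J is rational and already contains p), and rationality-preserving.

```python
# Judgment sets are frozensets of accepted propositions; "~x" denotes the
# negation of x. Ties between equally close candidates are broken by list order.

def revise(J, p, rational_sets):
    # Nearest rational judgment set containing the learnt proposition p.
    # For complete judgment sets over one agenda, len(J - K) counts the
    # issues on which J and K disagree (a Hamming distance).
    candidates = [K for K in rational_sets if p in K]
    return min(candidates, key=lambda K: len(J - K)) if candidates else J

# Agenda {p, ~p, q, ~q} with no logical connection between p and q,
# so every complete, negation-consistent set is rational:
rational = [frozenset(s) for s in
            ({"p", "q"}, {"p", "~q"}, {"~p", "q"}, {"~p", "~q"})]

J = frozenset({"~p", "q"})
print(sorted(revise(J, "p", rational)))   # ['p', 'q']: only the verdict on p flips
```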
Can aggregation and revision commute?
We are now ready to turn to this paper's question. As noted, we would ideally want any decision-making group to employ a judgment aggregation rule and a revision operator that generate the same collective judgments irrespective of whether revision takes place before or after aggregation. This requirement (an analogue of the classic "external Bayesianity" condition in probability aggregation theory, as in Madansky 1964, Genest 1984, and Genest et al. 1986) is captured by the following condition on the aggregation rule F and the revision operator |:
Dynamic rationality. For any profile (J1, ..., Jn) in the domain of F and any learnt proposition p ∈ X where the revised profile (J1|p, ..., Jn|p) is also in the domain of F, F(J1|p, ..., Jn|p) = F(J1, ..., Jn)|p.
To see that this condition is surprisingly hard to satisfy, consider an example. Suppose a three-member group is making judgments on the agenda X = { ± p, ± (p → q), ± q}, where p → q is understood as a subjunctive conditional. That is, apart from the subsets of X that include a proposition-negation pair, the only inconsistent subset of X is {p, p → q, ¬q}.5 Suppose, further, members' initial judgments and the resulting majority judgments are as follows:
Individual 1: { ¬p, ¬(p → q), q}
Individual 2: { ¬p, p → q, ¬q}
Individual 3: { ¬p, ¬(p → q), ¬q}
Majority: { ¬p, ¬(p → q), ¬q}
Assume the revision operator is based on the Hamming distance, with some tie-breaking provision such that, in the case of a tie, one retains one's judgments on p and p → q (which represent "premises") and is more ready to change one's judgment on q (which represents a "conclusion"). If the individuals learn the truth of p and revise their judgments, they arrive at the following post-revision judgments:
Individual 1: { p, ¬(p → q), q}
Individual 2: { p, p → q, q}
Individual 3: { p, ¬(p → q), ¬q}
Majority: { p, ¬(p → q), q}
Crucially, the post-information group judgment set, {p, ¬(p → q), q}, differs from the revision in light of p of the pre-information group judgment set, because {¬p, ¬(p → q), ¬q}|p = {p, ¬(p → q), ¬q}. That is, the group replaces ¬q with q in its judgment set, although learning p did not force the group to revise its position on q (recall that {p, ¬(p → q), ¬q} is perfectly consistent, given that → is a subjunctive conditional). Thus the group's (majority) judgment set does not evolve rationally.
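The failure of commutation in this example can be checked mechanically. The sketch below is my own encoding of it: it hard-codes the agenda's consistent judgment sets, majority rule, and a Hamming-based revision whose tie-break retains the premises and changes the conclusion, then compares revise-then-aggregate with aggregate-then-revise.

```python
from itertools import product

# A complete judgment set is a triple of True/False verdicts on (p, p → q, q);
# since p → q is read subjunctively, the only inconsistent complete judgment
# set is (True, True, False).
def consistent(J):
    p, p_implies_q, q = J
    return not (p and p_implies_q and not q)

RATIONAL = [J for J in product([True, False], repeat=3) if consistent(J)]

def majority(profile):
    n = len(profile)
    return tuple(sum(J[k] for J in profile) > n / 2 for k in range(3))

def revise(J, learned_p=True):
    # Hamming-based revision by the information "p": among the nearest rational
    # sets containing p, break ties by retaining the premises (p, p → q) and
    # changing the conclusion q instead.
    candidates = [K for K in RATIONAL if K[0] == learned_p]
    dist = lambda K: sum(a != b for a, b in zip(J, K))
    premise_changes = lambda K: (J[0] != K[0]) + (J[1] != K[1])
    return min(candidates, key=lambda K: (dist(K), premise_changes(K)))

profile = [
    (False, False, True),    # individual 1: ¬p, ¬(p → q), q
    (False, True, False),    # individual 2: ¬p, p → q, ¬q
    (False, False, False),   # individual 3: ¬p, ¬(p → q), ¬q
]

revise_then_aggregate = majority([revise(J) for J in profile])
aggregate_then_revise = revise(majority(profile))
print(revise_then_aggregate)   # (True, False, True):  {p, ¬(p → q), q}
print(aggregate_then_revise)   # (True, False, False): {p, ¬(p → q), ¬q}
print(revise_then_aggregate == aggregate_then_revise)  # False: commutation fails
```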
At first sight, one might think that this problem is just an artifact of majority rule or our specific distance-based revision operator, or that it is somehow unique to our example. However, the following formal result – a simplified ('anonymous') version of our impossibility theorem – shows that the problem is more general. Define a uniform quota rule, with acceptance threshold m ∈ {1, ..., n}, as the aggregation rule with domain J^n such that, for each (J1, ..., Jn) ∈ J^n,
F(J1, ..., Jn) = {p ∈ X : |{i : p ∈ Ji}| ≥ m}.
Majority rule is a special case of a uniform quota rule, namely the one where m is the smallest integer greater than n/2. We have:
Theorem 1: If the agenda X is non-simple, then no uniform quota rule whose threshold is not the unanimity threshold n is dynamically rational with respect to any regular rationality-preserving revision operator.
In short, replacing majority rule with some other uniform quota rule with threshold less than n wouldn't solve our problem of dynamic irrationality, and neither would replacing our distance-based revision operator with some other regular rationality-preserving revision operator. In fact, the problem generalizes further, as shown in the next section.
A general impossibility theorem
We will now abstract away from the details of any particular aggregation rule, and suppose instead we are looking for an aggregation rule F that satisfies the following general conditions:
Universal domain: The domain of admissible inputs to the aggregation rule F is the set of all classically rational profiles, i.e., D = J^n.
Non-imposition: F does not always deliver the same antecedently fixed output judgment set J, irrespective of the individual inputs, i.e., F is not a constant function.
Monotonicity: Additional individual support for an accepted proposition does not overturn the proposition's acceptance, i.e., for any profile (J1, ..., Jn) ∈ D and any proposition p ∈ F(J1, ..., Jn), if any Ji not containing p is replaced by some Ji′ containing p and the modified profile (J1, ..., Ji′, ..., Jn) remains in D, then p ∈ F(J1, ..., Ji′, ..., Jn).
Non-oligarchy: There is no non-empty set of individuals M ⊆ N (a set of "oligarchs") such that, for every profile (J1, ..., Jn) ∈ D, F(J1, ..., Jn) = ∩i ∈ MJi.
Systematicity: The collective judgment on each proposition is determined fully and neutrally by individual judgments on that proposition. Formally, for any propositions p, p′ ∈ X and any profiles (J1, ..., Jn), (J1′, ..., Jn′) ∈ D, if, for all i ∈ N, p ∈ Ji ⇔ p′ ∈ Ji′, then p ∈ F(J1, ..., Jn) ⇔ p′ ∈ F(J1′, ..., Jn′).
Why are these conditions initially plausible? The reason is that, for each of them, a violation would entail a cost. Violating universal domain would mean that the aggregation rule is not fully robust to pluralism in its inputs; it would be undefined for some classically rational judgment profiles. Violating non-imposition would mean that the collective judgments are totally unresponsive to the individual judgments, which is completely undemocratic. Violating monotonicity could make the aggregation rule erratic in some respect: an individual could come to accept a particular collectively accepted proposition and thereby overturn its acceptance. Violating non-oligarchy would mean two things. First, the collective judgments would depend only on the judgments of the "oligarchs", which is undemocratic (unless M = N); and second, the collective judgments would be incomplete with respect to any binary issue on which there is the slightest disagreement among the oligarchs, which would lead to widespread indecision (except when M is singleton, so that the rule is dictatorial). Violating systematicity, finally, would mean that the collective judgment on each proposition is no longer determined as a proposition-independent function of individual judgments on that proposition. It may then either depend on individual judgments on other propositions too (a lack of propositionwise independence), or the pattern of dependence may vary from proposition to proposition (a lack of neutrality ). Systematicity – the conjunction of propositionwise independence and neutrality – is the most controversial condition among the five. But it is worth noting that it is satisfied by majority rule and all uniform quota rules. Indeed, majority rule and uniform quota rules (except the unanimity rule) satisfy all five conditions.
Our main theorem shows that, for non-simple agendas, the present five conditions are incompatible with dynamic rationality:
Theorem 2: If the agenda X is non-simple, then no aggregation rule satisfying universal domain, non-imposition, monotonicity, non-oligarchy, and systematicity is dynamically rational with respect to any regular rationality-preserving revision operator.
Interestingly, Theorem 2 does not impose any condition of static rationality. The theorem does not require that collective judgment sets are consistent or complete or deductively closed. The impossibility of dynamic inconsistency is thus independent of classic impossibilities of static rationality. In fact, Theorem 2 would continue to hold if its condition of dynamic rationality were replaced by static rationality in the form of consistency and completeness of collective judgment sets.
By Theorem 2, the problem identified by Theorem 1 is not restricted to uniform quota rules, but extends to all aggregation rules satisfying our conditions. Moreover, since practically all non-trivial agendas are non-simple, the impossibility applies very widely.
The natural follow-up question is whether any of the conditions in the theorem is redundant, i.e., could be dropped, and, if not, what sort of dynamically rational aggregation rules become possible once any of these conditions is dropped. This question goes beyond the scope of this summary and is treated in Dietrich and List (2021). Four remarks are nonetheless in order:
Firstly, none of the theorem's conditions on the aggregation rule, the revision operator, or the agenda is redundant. That is, whenever we drop the agenda condition (non-simplicity) or any one of the aggregation conditions (universal domain, non-imposition, monotonicity, non-oligarchy, and systematicity) or any of the revision conditions (successfulness, conservativeness, and rationality preservation), there exist dynamically rational aggregation rules such that the remaining conditions hold.
Secondly, abandoning exactly one condition on the aggregation rule leads to rather degenerate dynamically rational possibilities, in the form of 'peculiar' aggregation rules and/or revision operators (with the exception of universal domain, whose relaxation allows for interesting dynamically rational possibilities). One of the conditions on aggregation seems very strong: systematicity. An important difference between static and dynamic rationality is that dropping systematicity or even independence makes it easy (indeed, too easy) to satisfy static rationality – for instance by using distance-based rules or prioritarian rules or scoring rules – whereas dynamic rationality remains hard to achieve without systematicity, as illustrated by the degenerate nature of the non-systematic escape route constructed in Dietrich and List (2021). It thus seems inappropriate to blame systematicity as the main culprit for the impossibility of dynamic rationality.
Thirdly, let us give examples of dynamically rational aggregation rules that become possible if we give up any one of the three conditions on the revision operator while preserving all other conditions on revision or aggregation.
Non-successful revision. Consider the constant revision operator, defined by
J|p = J for all (J, p).
This operator is only conservative and rationality-preserving. All aggregation rules are trivially dynamically rational with respect to it.
Non-conservative revision. For each proposition p ∈ X, fix a judgment set Jp which contains p and moreover is rational (i.e., in J) as long as p is non-contradictory. Consider the revision operator given by
J|p = Jp for all (J, p).
This operator is only successful and rationality-preserving. As one can show, every unanimity-preserving aggregation rule is dynamically rational with respect to it.
Non-rationality-preserving revision. Consider a revision operator such that J|p is identical to J whenever J does not contain p, and otherwise is some irrational judgment set containing p. This operator is only conservative and successful. As one can show, every aggregation rule satisfying universal domain and propositionwise unanimity preservation is dynamically rational with respect to it. Here, propositionwise unanimity-preservation means that, for all profiles (J1, ..., Jn) in the domain and all propositions p ∈ X, if p ∈ Ji for all i, then p ∈ F(J1, ..., Jn).
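These escape routes can likewise be checked mechanically. The self-contained Python sketch below is again our illustration: it assumes that dynamic rationality is the condition F(J1|p, ..., Jn|p) = F(J1, ..., Jn)|p for every profile in the domain and every agenda proposition p, and it verifies the first two claims for propositionwise majority voting on the same toy two-issue agenda as before: every rule is dynamically rational with respect to the constant ("non-successful") operator, and any unanimity-preserving rule, majority included, is dynamically rational with respect to a fixed-target ("non-conservative") operator.

from itertools import product

P, NP, Q, NQ = "p", "~p", "q", "~q"
AGENDA = [P, NP, Q, NQ]
RATIONAL = [frozenset(s) for s in ({P, Q}, {P, NQ}, {NP, Q}, {NP, NQ})]
N = 3
PROFILES = list(product(RATIONAL, repeat=N))

def majority(profile):
    return frozenset(x for x in AGENDA if sum(x in J for J in profile) > N / 2)

def dynamically_rational(F, revise):
    # F(J1|p, ..., Jn|p) = F(J1, ..., Jn)|p for every profile and every agenda proposition p
    return all(
        F(tuple(revise(J, p) for J in prof)) == revise(F(prof), p)
        for prof in PROFILES
        for p in AGENDA
    )

# non-successful revision: J|p = J; every aggregation rule passes trivially
constant = lambda J, p: J

# non-conservative revision: J|p = Jp, a fixed rational judgment set containing p
# (chosen arbitrarily here); every unanimity-preserving rule, majority included, passes
TARGET = {P: frozenset({P, Q}), NP: frozenset({NP, Q}),
          Q: frozenset({P, Q}), NQ: frozenset({P, NQ})}
jump = lambda J, p: TARGET[p]

print(dynamically_rational(majority, constant))  # True
print(dynamically_rational(majority, jump))      # True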
Finally, on a more positive note, in Dietrich and List (2021) we explore an interesting class of dynamically rational aggregation rules, which simultaneously relax several of Theorem 2's conditions on aggregation and revision, notably systematicity. In a nutshell, premise-based aggregation rules are dynamically rational with respect to premise-based revision operators. Presenting these rules goes beyond the scope of this summary.
Proofs of theorems and other technical details are given in Dietrich and List (2021).
Alchourrón, C. E., Gärdenfors, P., and Makinson, D. (1985): On the logic of theory change: Partial meet contraction and revision functions. Journal of Symbolic Logic 50(2), pp. 510–530. DOI: 10.2307/2274239
Dietrich, F. (2007): A generalised model of judgment aggregation. Social Choice and Welfare 28(4), pp. 529–565. DOI: 10.1007/s00355-006-0187-y
Dietrich, F. (2010): Bayesian group belief. Social Choice and Welfare 35(4), pp. 595–626. DOI: 10.1007/s00355-010-0453-x
Dietrich, F. (2019): A theory of Bayesian groups. Noûs 53(3), pp. 708–736. DOI: 10.1111/nous.12233
Dietrich, F., and List, C. (2007): Arrow's theorem in judgment aggregation. Social Choice and Welfare 29(1), pp. 19–33. DOI: 10.1007/s00355-006-0196-x
Dietrich, F., and List, C. (2021): Dynamically Rational Judgment Aggregation. Working paper, see https://philpapers.org/rec/DIEDRJ
Dokow, E., and Holzman, R. (2010): Aggregation of binary evaluations. Journal of Economic Theory 145(2), pp. 495–511. DOI: 10.1016/j.jet.2007.10.004
Genest, C. (1984): A characterization theorem for externally Bayesian groups. Annals of Statistics 12(3), pp. 1100–1105. DOI: 10.1214/aos/1176346726
Genest, C., McConway, K. J., and Schervish, M. J. (1986): Characterization of externally Bayesian pooling operators. Annals of Statistics 14(2), pp. 487–501. DOI: 10.1007/BF02562628
Konieczny, S., and Pino-Pérez, R. (2002): Merging information under constraints: A logical framework. Journal of Logic and Computation 12(5), pp. 773–808. DOI: 10.1093/logcom/12.5.773
List, C. (2011): Group Communication and the Transformation of Judgments: An Impossibility Result. Journal of Political Philosophy 19(1), pp. 1–27. DOI: 10.1111/j.1467-9760.2010.00369.x
List, C., and Pettit, P. (2002): Aggregating sets of judgments: An impossibility result. Economics and Philosophy 18(1), pp. 89–110.
List, C., and Pettit, P. (2011): Group Agency: The Design, Possibility, and Status of Corporate Agents. Oxford: Oxford University Press. DOI: 10.1093/acprof:oso/9780199591565.001.0001
List, C., and Puppe, C. (2009): Judgment aggregation: A survey. In P. Anand, C. Puppe, and P. Pattanaik, Oxford Handbook of Rational and Social Choice. Oxford: Oxford University Press. DOI: 10.1093/acprof:oso/9780199290420.001.0001
Madansky, A. (1964): Externally Bayesian Groups. Technical Report RM-4141-PR, RAND Corporation.
Nehring, K., and Puppe, C. (2010): Abstract Arrovian aggregation. Journal of Economic Theory 145(2), pp. 467–494. DOI: 10.1016/j.jet.2010.01.010
Peppas, P. (2008): Belief Revision. In F. van Harmelen, V. Lifschitz and B. Porter, Handbook of Knowledge Representation, Elsevier, pp. 317-359.
Pettit, P. (2006): When to defer to majority testimony – and when not. Analysis 66(3), pp. 179–187.
Pigozzi, G. (2006): Belief merging and the discursive dilemma: An argument-based account to paradoxes of judgment aggregation. Synthese 152(2), pp. 285–298. DOI: 10.1007/s11229-006-9063-7
Rott, H. (2001): Change, Choice and Inference: A Study of Belief Revision and Non-monotonic Reasoning. Oxford: Oxford University Press.
Russell, J. S., Hawthorne, J., and Buchak, L. (2015): Groupthink. Philosophical Studies 172(5), pp. 1287–1309. DOI: 10.1007/s11098-014-0350-8
The revision of judgments has been investigated only in a different sense in judgment aggregation theory, namely in peer-disagreement contexts, where individuals do not learn a proposition but learn the judgments of others (Pettit 2006, List 2011).
Well-behavedness is a three-part requirement: (i) any proposition-negation pair {p, ¬p} is inconsistent; (ii) any subset of any consistent set is still consistent; and (iii) the empty set is consistent, and any consistent set S has a consistent superset S′ which contains a member of every proposition-negation pair {p, ¬p}.
Readers familiar with probability theory could take L to be a Boolean algebra on a non-empty set Ω of possible worlds (e.g., the power set L = 2^Ω), with negation defined as set-theoretic complementation and consistency of a set defined as non-empty intersection. The Boolean algebra could also be an abstract rather than set-theoretic Boolean algebra.
To be precise, henceforth, by the negation of any proposition q ∈ X we shall mean the agenda-internal negation of q, i.e., the opposite proposition in the binary issue {p, ¬p} to which q belongs. This is logically equivalent to the ordinary negation of q and will again be denoted ¬q, for simplicity. This convention ensures that ¬¬q = q.
This subjunctive understanding of p → q contrasts with the material one, where p → q is understood less realistically as ¬p ∨ q. On the material understanding, the subsets {p, ¬(p → q), q}, {¬p, ¬(p → q), q}, and {¬p, ¬(p → q), ¬q} would also be deemed inconsistent.
Huihui Ding (CY Cergy Paris University)
Marcus Pivato (CY Cergy Paris University)
We study the effects of deliberation on epistemic social choice, in two settings. In the first setting, the group faces a binary epistemic decision analogous to the Condorcet Jury Theorem. In the second setting, group members have probabilistic beliefs arising from their private information, and the group wants to aggregate these beliefs in a way that makes optimal use of this information. During deliberation, each agent discloses private information to persuade the other agents of her current views. But her views may also evolve over time, as she learns from other agents. This process will improve the performance of the group, but only under certain conditions; these involve the nature of the social decision rule, the group size, and also the presence of neutral agents whom the other agents try to persuade.
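For readers who want to see the baseline that the first setting is analogous to, here is a small illustrative Python sketch (ours, not the authors' model) of the classical Condorcet Jury Theorem calculation: with independent voters of common competence p > 1/2, the probability that a simple majority is correct grows with the group size.

from math import comb

def majority_correct(n, p):
    # probability that more than half of n independent voters are correct
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n // 2 + 1, n + 1))

for n in (1, 11, 51, 101):
    print(n, round(majority_correct(n, 0.6), 4))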
P. Jean-Jacques Herings (Maastricht University)
Dominik Karos (Bielefeld University)
Toygar Kerman (Maastricht University)
We consider a group of receivers who share a common prior on a finite state space and who observe private correlated signals that are contingent on the true state of the world. We show that, while necessary, Bayes plausibility is not sufficient for a distribution over posterior belief vectors to be inducible, and we provide a characterization of inducible distributions. We classify communication strategies as minimal, direct, and language independent, and we show that any inducible distribution can be induced by a language independent communication strategy (LICS). We investigate the role of the different classes of communication strategies for the amount of higher order information that is revealed to receivers. We show that the least informative communication strategy which induces a fixed distribution over posterior belief vectors lies in the relative interior of the set of all language independent communication strategies which induce that distribution.
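As background for the single-receiver benchmark that this model generalizes, the short Python sketch below (ours, not the authors' framework) verifies the Bayes-plausibility identity, namely that the expected posterior equals the prior, for an arbitrary two-state signal structure; the abstract's point is that with several receivers and distributions over posterior belief vectors this condition remains necessary but is no longer sufficient.

states = ["good", "bad"]
prior = {"good": 0.3, "bad": 0.7}
# signal likelihoods P(message | state); an arbitrary illustrative example
likelihood = {"good": {"m1": 0.9, "m2": 0.1}, "bad": {"m1": 0.2, "m2": 0.8}}
messages = ["m1", "m2"]

p_msg = {m: sum(prior[s] * likelihood[s][m] for s in states) for m in messages}
posterior = {m: {s: prior[s] * likelihood[s][m] / p_msg[m] for s in states} for m in messages}

avg = {s: sum(p_msg[m] * posterior[m][s] for m in messages) for s in states}
print(avg)  # equals the prior {'good': 0.3, 'bad': 0.7}, up to rounding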
Simon Huttegger (University of California, Irvine)
Sean Walsh (University of California, Los Angeles)
Francesca Zaffora Blando (Carnegie Mellon University)
Convergence-to-the-truth results and merging-of-opinions results are part of the basic toolkit of Bayesian epistemologists. In a nutshell, the former establish that Bayesian agents expect their beliefs to almost surely converge to the truth as the evidence accumulates. The latter, on the other hand, establish that, as they make more and more observations, two Bayesian agents with different subjective priors are guaranteed to almost surely reach inter-subjective agreement, provided that their priors are sufficiently compatible. While in and of themselves significant, convergence to the truth with probability one and merging of opinions with probability one remain somewhat elusive notions. In their classical form, these results do not specify which data streams belong to the probability-one set of sequences on which convergence to the truth or merging of opinions occurs. In particular, they do not reveal whether the data streams that ensure eventual convergence or merging share any property that might explain their conduciveness to successful learning. Thus, a natural question raised by these classical results is whether the kind of data streams that are conducive to convergence and merging for Bayesian agents are uniformly characterizable in an informative way.
The results presented in this paper provide an answer to this question. The driving idea behind this work is to approach the phenomena of convergence to the truth and merging of opinions from the perspective of computability theory and, in particular, the theory of algorithmic randomness--a branch of computability theory concerned with characterizing the notion of a sequence displaying no effectively detectable patterns. We restrict attention to Bayesian agents whose subjective priors are computable probability measures and whose goal, in the context of convergence to the truth, is estimating quantities that can be effectively approximated. These are natural restrictions to impose when studying the inductive performance of more realistic, computationally limited learners. Crucially, they also allow us to provide a more fine-grained analysis of both convergence to the truth and merging of opinions. Our results establish that, in this setting, the collections of data streams along which convergence and merging occur are indeed uniformly characterizable in an informative way: they are exactly the algorithmically random data streams.
Stephan Jagau (IMBS, University of California, Irvine)
In auction theory, industrial organization, and other fields of game theory, it is often convenient to let infinite strategy sets stand in for large finite strategy sets. A tacit assumption is that results from infinite games generally translate back to their finite counterparts. Transfinite eliminations of non-best replies pose a radical challenge here, suggesting that common belief in rationality in infinite games strictly refines up to k-fold belief in rationality for all finite k. I provide a general characterization of common belief in rationality for finite and infinite games that fully restores the equivalence to up to k-fold belief in rationality for all finite k. By means of eliminating non-best replies and supporting beliefs, my characterization entirely avoids transfinite eliminations. Hence, rather than revealing new depths of reasoning, transfinite eliminations signal an inadequacy of eliminating non-best replies as a general description of strategic rationality.
Toygar Kerman (Department of Microeconomics and Public Economics (MPE), Maastricht University)
Anastas P. Tenev (Department of Microeconomics and Public Economics (MPE), Maastricht University)
This paper studies a multiple-receiver Bayesian persuasion model, where a sender communicates with receivers who have homogeneous beliefs and aligned preferences. The sender wants to implement a proposal and commits to a communication strategy which sends private (possibly) correlated messages to the receivers, who are in an exogenous and commonly known network. Receivers can observe their neighbors' private messages and after updating their beliefs, vote sincerely on the proposal. We examine how networks of shared information affect the sender's gain from persuasion and find that in many cases it is not restricted by the additional information provided by the receivers' neighborhoods. Perhaps surprisingly, the sender's gain from persuasion is not monotonically decreasing with the density of the network.
Krzysztof Mierzewski (Carnegie Mellon University)
A canonical way to bridge the probabilistic, gradational notion of belief studied by Bayesian probability theory with the more mundane, all-or-nothing concept of qualitative belief is in terms of acceptance rules [Kelly and Lin, 2012]: maps that specify which propositions a rational agent accepts in light of their numerical credences (given by a probability model). Among the various acceptance rules proposed in the literature, an especially prominent one is Leitgeb's stability rule [Leitgeb, 2013, 2014, 2017; Rott, 2017], based on the notion of probabilistically stable hypotheses: that is, hypotheses that maintain sufficiently high probability under conditioning on new information.
When applied to discrete probability spaces, the stability rule for acceptance guarantees logically closed and consistent belief sets, and it suggests a promising account of the relationship between subjective probabilities and qualitative belief. Yet, most natural inductive problems - particularly those commonly occurring in statistical inference - are best modelled with continuous probability distributions and statistical models with a richer internal structure. This paper explores the possibility of extending Leitgeb's stability rule to more realistic learning scenarios and general probability spaces. This is done by considering a generalised notion of probabilistic stability, in which acceptance depends not only on the underlying probability space, but also on a learning problem - namely, a probability space equipped with a distinguished family of events capturing the relevant evidence (e.g., the observable data) in the given learning scenario. This view of acceptance as being relative to an evidence context is congenial to (topological approaches to) formal learning theory and hypothesis testing in statistics (where one typically distinguishes the hypotheses being considered from observable sample data), as well as logics of evidence-relative belief [van Benthem and Pacuit, 2011].
Here we consider the case of statistical learning. We show that, in the context of standard (parametric) Bayesian learning models, the stability rule yields a notion of acceptance that is either trivial (only hypotheses with probability 1 are accepted) or fails to be conjunctive (accepted hypotheses are not closed under conjunctions). The first problem chiefly affects statistical hypotheses, while the second one chiefly affects predictive hypotheses about future outcomes. The failure of conjunctivity for the stability rule is particularly salient, as it affects a wide class of consistent Bayesian priors and learning models with exchangeable random variables. In particular, the results presented here apply to many distributions commonly used in statistical inference, as well as to every method in Carnap's continuum of inductive logics [Carnap, 1980; Skyrms, 1996]. These results highlight a serious tension between (1) being responsive to evidence and (2) having conjunctive beliefs induced by the stability rule. In the statistical context, certain properties of priors that are conducive to inductive learning - open-mindedness, as well as certain symmetries in the agent's probability assignments - act against conjunctive belief. Thus, the main selling points of the stability account of belief - its good logical behaviour and its close connection to the Lockean thesis - do not survive the passage to richer probability models, such as canonical statistical models for i.i.d. learning. We conclude by discussing the consequences these results bear for Leitgeb's Humean Thesis on belief [Leitgeb, 2017].
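For concreteness, the following Python sketch (ours, not the paper's) illustrates the discrete-space notion the abstract starts from, assuming the standard definition: a hypothesis A is P-stable^r iff P(A | E) > r for every event E with positive probability that is consistent with A. On a small finite space the stable^1/2 sets come out nested, which underwrites the logically closed belief sets mentioned above; the paper's results concern how this behaviour degrades in richer statistical models.

from itertools import chain, combinations

OMEGA = ["w1", "w2", "w3", "w4"]
P = {"w1": 0.55, "w2": 0.25, "w3": 0.15, "w4": 0.05}  # toy prior

def prob(event):
    return sum(P[w] for w in event)

def events(omega):
    # all subsets of the sample space
    return [set(s) for s in chain.from_iterable(combinations(omega, k) for k in range(len(omega) + 1))]

def stable(A, r=0.5):
    A = set(A)
    for E in events(OMEGA):
        if prob(E) > 0 and A & E:           # E is possible and consistent with A
            if prob(A & E) / prob(E) <= r:   # conditional probability P(A | E)
                return False
    return True

for A in events(OMEGA):
    if A and stable(A):
        print(sorted(A), "is P-stable^1/2")   # the stable sets come out nested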
J. van Benthem and E. Pacuit. Dynamic Logics of Evidence-Based Beliefs. Studia Logica, 99(1): 61 - 92, 2011. doi: 10.1007/s11225-011-9347-x.
R. Carnap. A Basic System of Inductive Logic. in R.C. Jeffrey (ed.), Studies in Inductive Logic and Probability, vol. 2, Berkeley: University of California Press., 1980.
K. T. Kelly and H. Lin. A geo-logical solution to the lottery-paradox, with applications to conditional logic. Synthese, 186(2): 531 - 575, 2012. doi: 10.1007/s11229-011-9998-1.
H. Leitgeb. Reducing belief simpliciter to degrees of belief. Annals of Pure and Applied Logic, 164: 1338 - 1389, 2013. doi: 10.1016/j.apal.2013.06.015.
H. Leitgeb. The Stability Theory of Belief. Philosophical Review, 123(2): 131 - 171, 2014. doi: 10.1215/ 00318108-2400575.
H. Leitgeb. The Stability of Belief. Oxford University Press, Oxford, 2017.
H. Rott. Stability and Scepticism in the Modelling of Doxastic States: Probabilities and Plain Beliefs. Minds and Machines, 27(1): 167 - 197, 2017. doi: 10.1007/s11023-016-9415-0.
B. Skyrms. Carnapian inductive logic and Bayesian statistics. In Ferguson, T. S., Shapley, L. S. and MacQueen, J. B., editors, Statistics, probability and game theory: Papers in honor of David Blackwell, Hayward, CA, Institute of Mathematical Statistics, pages 321 - 336, 1996. doi: 10.1214/lnms/1215453580.
Evan Piermont (Royal Holloway, University of London, Department of Economics)
Peio Zuazo-Garin (Higher School of Economics, International College of Economics and Finance)
In this paper, we provide a theoretical framework to analyze an agent who misinterprets or misperceives the true decision problem she faces. Within this framework, we show that a wide range of behavior observed in experimental settings manifests as failures to perceive implications, in other words, to properly account for the logical relationships between various payoff-relevant contingencies. We present behavioral characterizations corresponding to several benchmarks of logical sophistication and show how it is possible to identify which implications the agent fails to perceive. Thus, our framework delivers both a methodology for assessing an agent's level of contingent thinking and a strategy for identifying her beliefs in the absence of full rationality.
Thurston's algorithm and rational maps from quadratic polynomial matings
Mary Wilkerson 1,
Department of Mathematics and Statistics, Coastal Carolina University, PO Box 261954, Conway, SC 29528-6054, USA
Received September 2016 Revised June 2017 Published January 2019
Topological mating is a combination that takes two same-degree polynomials and produces a new map with dynamics inherited from this initial pair. This process frequently yields a map that is Thurston-equivalent to a rational map $ F $ on the Riemann sphere. Given a pair of polynomials of the form $ z^2+c $ that are postcritically finite, there is a fast test on the constant parameters to determine whether this map $ F $ exists, but this test does not give a construction of $ F $. We present an iterative method that utilizes finite subdivision rules and Thurston's algorithm to approximate this rational map, $ F $. This manuscript expands upon results given by the Medusa algorithm in [9]. We provide a proof of the algorithm's efficacy, details on its implementation, the settings in which it is most successful, and examples generated with the algorithm.
Keywords: Mating, rational maps, Thurston's algorithm, Medusa, finite subdivision rule, pseudo-equator.
Mathematics Subject Classification: Primary: 37F20; Secondary: 37F10.
Citation: Mary Wilkerson. Thurston's algorithm and rational maps from quadratic polynomial matings. Discrete & Continuous Dynamical Systems - S, doi: 10.3934/dcdss.2019151
L. Bartholdi and V. Nekrashevych, Thurston equivalence of topological polynomials, Acta Math., 197 (2006), 1-51. doi: 10.1007/s11511-006-0007-3.
H. Bruin and D. Schleicher, Symbolic dynamics of quadratic polynomials, Institut Mittag-Leffler, The Royal Swedish Academy of Sciences, 7.
X. Buff, A. Epstein and S. Koch, Twisted matings and equipotential gluings, Annales de la Faculté des Sciences de Toulouse Mathématiques, 21 (2012), 995-1031. doi: 10.5802/afst.1360.
X. Buff, A. Epstein, S. Koch, D. Meyer, K. Pilgrim, M. Rees and L. Tan, Questions about polynomial matings, Annales de la Faculté des Sciences de Toulouse Mathématiques, 21 (2012), 1149-1176. doi: 10.5802/afst.1365.
J. Cannon, W. Floyd and W. Parry, Subdivision programs, https://www.math.vt.edu/people/floyd/research/software/subdiv.html.
J. Cannon, W. Floyd and W. Parry, Finite subdivision rules, Conform. Geom. Dyn., 5 (2001), 153-196. doi: 10.1090/S1088-4173-01-00055-8.
A. Douady and J. H. Hubbard, Exploring the Mandelbrot set. The Orsay notes, Publ. Math. Orsay.
A. Douady and J. H. Hubbard, A proof of Thurston's topological characterization of rational functions, Acta Mathematica, 171 (1993), 263-297. doi: 10.1007/BF02392534.
S. Hruska Boyd and C. Henriksen, The Medusa algorithm for polynomial matings, Conform. Geom. Dyn., 16 (2012), 161-183. doi: 10.1090/S1088-4173-2012-00245-7.
J. H. Hubbard and D. Schleicher, The spider algorithm, Complex Dynamical Systems, RL Devaney ed., Proc. Symp. Appl. Math, 49 (1994), 155-180. doi: 10.1090/psapm/049/1315537.
W. Jung, Mandel version 5.11, http://www.mndynamics.com, 2014.
W. Jung, The Thurston algorithm for quadratic matings.
D. Meyer, Unmating of rational maps, sufficient criteria and examples, in Frontiers in Complex Dynamics: In Celebration of John Milnor's 80th Birthday (eds. A. Bonifant, M. Lyubich and S. Sutherland), Princeton University Press, 51 (2014), 197-233.
J. Milnor, Pasting together Julia sets: A worked out example of mating, Experiment. Math., 13 (2004), 55-92. doi: 10.1080/10586458.2004.10504523.
C. Petersen and D. Meyer, On the notions of mating, Annales de la faculté des sciences de Toulouse Mathématiques, 21 (2012), 839-876. doi: 10.5802/afst.1355.
M. Rees, A partial description of the parameter space of rational maps of degree two: Part 1, Acta Math., 168 (1992), 11-87. doi: 10.1007/BF02392976.
N. Selinger, Thurston's pullback map on the augmented Teichmüller space and applications, Inventiones Mathematicae, 189 (2012), 111-142. doi: 10.1007/s00222-011-0362-3.
M. Shishikura, On a theorem of M. Rees for matings of polynomials, in The Mandelbrot Set, Theme and Variations (ed. Tan, L.), London Mathematical Society Lecture Note Series, 274, Cambridge University Press, 2000, 289-305.
L. Tan, Matings of quadratic polynomials, Ergodic Theory Dynam. Systems, 12 (1992), 589-620. doi: 10.1017/S0143385700006957.
M. Wilkerson, Finite Subdivision Rules from Matings of Quadratic Functions: Existence and Constructions, PhD thesis, Virginia Polytechnic Institute and State University, 2012.
M. Wilkerson, Subdivision rule constructions on critically preperiodic quadratic matings, New York J. Math., 22 (2016), 1055-1084.
Figure 1. The conformal isomorphism $ \phi $ which determines external rays for $ z\mapsto z^2+i $. Shown on the right are external rays landing at points on the critical orbit of this polynomial.
Figure 2. Steps in the formation of the formal mating.
Figure 3. The Medusa and pseudo-equator algorithms are based upon Thurston's algorithm, highlighted in the commutative diagram above.
Figure 4. A rudimentary finite subdivision rule on $ \hat{\mathbb{C}} $.
Figure 5. The Julia set and Hubbard trees for $ f_{1/4} $.
Figure 6. The preimage of a Hubbard tree under its associated polynomial.
Figure 7. On the left, $ T_{1/4} $. On the right, the subdivision complex $ S_\mathcal{R} $ for the essential self-mating of $ f_{1/4} $.
Figure 8. On the left, the expected pullback of $ S_\mathcal{R} $ by the essential mating as based on local behavior of Hubbard trees. The essential mating is locally homeomorphic everywhere except on the critical set, so we complete the pullback as shown on the right.
Figure 9. The finite subdivision rule associated with $ f_{1/4}\;\perp\!\!\!\perp_e\;f_{1/4} $, along with marked pseudo-equator curves. $ C_0 $ is marked in blue on the left and its pullback $ C_1 $ is marked in blue on the right.
Figure 10. Pullbacks of the equator by a rational map that is Thurston-equivalent to the topological self-mating of $ f_{1/4} $. These pullbacks approximate the Julia set of the rational map on $ \hat{\mathbb{C}} $. (Image generated in Mathematica.)
Figure 11. Top: The Julia sets of $ f_{1/4} $ and $ f_{1/8} $, with external angles marked at postcritical points for reference. Middle: The Hubbard trees associated with these polynomials. Bottom: the finite subdivision rule associated with the essential mating $ f_{1/4}\;\perp\!\!\!\perp_e\;f_{1/8} $.
Figure 12. The critical orbit portrait and finite subdivision rule associated with $ f_{1/4}\;\perp\!\!\!\perp_e\;f_{1/8} $, along with marked pseudo-equator curves. $ C_0 $ is marked in blue above and its pullback $ C_1 $ is marked in blue below. We have relabeled the marked points to emphasize angle markings given by the parameterizations of $ C_0 $ and $ C_1 $.
Figure 13. Pullbacks of the equator by a sequence of rational maps which approximate the geometric mating of $ f_{1/4} $ and $ f_{1/8} $. (Image generated in Mathematica.)
Figure 14. The problem with using the canonical branch of the square root for pullbacks of $ C_n $: orientation is important, but harder to keep record of when our pullback curve is cut into several pieces.
Figure 15. The "pseudo-equator" is pinched by $ \sim_e $ into a non-Jordan curve.
Superconducting nanowire single-photon detectors with 98% system detection efficiency at 1550 nm
Dileep V. Reddy,1,2,* Robert R. Nerem,3 Sae Woo Nam,2 Richard P. Mirin,2 and Varun B. Verma2
1Department of Physics, University of Colorado, Boulder, Colorado 80309, USA
2National Institute of Standards and Technology, Boulder, Colorado 80305, USA
3Institute for Quantum Science and Technology, University of Calgary, Calgary, Alberta T2N 1N4, Canada
*Corresponding author: [email protected]
Dileep V. Reddy, Robert R. Nerem, Sae Woo Nam, Richard P. Mirin, and Varun B. Verma, "Superconducting nanowire single-photon detectors with 98% system detection efficiency at 1550 nm," Optica 7, 1649-1653 (2020), https://doi.org/10.1364/OPTICA.400751
Original Manuscript: June 19, 2020
Revised Manuscript: October 2, 2020
Manuscript Accepted: October 21, 2020
Superconducting nanowire single-photon detectors (SNSPDs) are an enabling technology for myriad quantum-optics experiments that require high-efficiency detection, large count rates, and precise timing resolution. The system detection efficiencies (SDEs) for fiber-coupled SNSPDs have fallen short of theoretical predictions of near unity by at least 7%, with the discrepancy being attributed to scattering, material absorption, and other SNSPD dynamics. We optimize the design and fabrication of an all-dielectric layered stack and fiber coupling package in order to achieve $98.0 \pm 0.5\%$ SDE, measured for single-mode-fiber guided photons derived from a highly attenuated 1550 nm continuous-wave laser. This result places a tighter bound on the scattering and absorption losses in such systems and opens the use of SNSPDs in scenarios that demand high SDE for throughput and fidelity.
Diverse experiments and applications ranging from fundamental research [1], communications [2,3], metrology [4], remote sensing [5], materials research [6], and astronomy [7,8] rely on single-photon detection. Superconducting nanowire single-photon detectors (SNSPDs) have become a dominant platform for such endeavors, as they boast very high system detection efficiencies (SDEs) [9–11], low timing jitter [12], and very low dark counts [13,14]. These properties have popularized their employment in several recent quantum-optics experiments, including loophole-free tests for local realism [15], quantum teleportation [16] and key distribution [13], characterization of quantum states [17–19], and quantum buffer memories [20,21].
Recent advances in device-fabrication technology [22], in combination with advances in cryogenic cooling have rendered SNSPDs a commercially viable investment for years to come. The record for the highest SDE, however, has remained stagnant to within error at around 93% since the first report in 2013 [9]. The lingering mismatch between theoretical predictions and experimental realizations for SDE has imposed generously loose bounds on often immeasurable channels of loss, including scattering, off-nanowire absorption, and the internal quantum efficiency of the superconducting material system. Here, we focus on improving the SDE by optimizing the device's vertical optical stack design, as well as the coupling of the guided fiber mode to the active detection area of our device. We achieve a new record SDE of $98.0 \pm 0.5\%$ at a wavelength of 1550 nm, thus tightening the bounds on several loss mechanisms and offering insight into the optical coupling process.
2. DEVICE DESIGN AND FABRICATION
SNSPDs consist of a nanoscale, meandering current path etched into a thin (sub-10 nm) layer of superconducting film. When operated at sub-critical cryogenic temperatures and current biased below the critical current density value, this "nanowire" offers zero resistance. In this setup, the introduction of any thermal energy (say, via absorption of a single incident photon, or kinetic implantation of a massive atom/ion) onto the nanowire creates a local region of normal resistance [23–29], thus momentarily interrupting the current flow and generating a radio-frequency (RF) "detection pulse" in the bias line.
To efficiently couple light onto the nanowire, we mount our devices on a self-aligning fiber-packaging system [30]. We use SMF-28e+ fiber pigtails terminated in standard 2.5 mm ceramic ferrules and AR-coated for 1550 nm light. The SDE for this system is defined as the probability of the device registering a detection given that a photon is launched into the fiber pigtail from outside the cryostat.
SNSPD vertical optical stacks consist of interferometric trapping structures above and below the thin nanowire layer to facilitate multiple interactions of the photons with the nanowire. The typical constructions use a reflector below the nanowire, which could be a metallic mirror [9,10,22,31] with an electrically isolating dielectric layer in between forming a slab cavity for the photons. If optimizing for SDE, the stack may also include layers deposited on top of the nanowire [32–35], effectively forming an AR coating. The optimization is done for normally incident plane waves using rigorous coupled-wave analysis (RCWA) [34,36]. By reformulating the resulting steady-state field distribution in terms of Poynting vectors [37], we can determine the net absorption in each layer. In a generic, three-layer AR-coating optimized stack with a metal mirror reflector (as in [9]), the metallic layer is responsible for the absorption of nearly 3% of the photons at the optimum design wavelength even under ideal, simulated conditions (see Fig. 1). The electromagnetic field penetrates into the metallic layer due to the skin effect, inducing the movement of conduction electrons (the very mechanism of reflection) within a non-zero resistance metallic medium, which functions as a loss channel. This prompted us to use a distributed Bragg reflector (DBR) instead, consisting of alternating layers of high- and low-refractive-index dielectrics [11,38–40]. A 6.5 period (13 layer) DBR, with dielectric layers of optical thicknesses $\lambda /4$ each, suffices for near-unity photon absorption into the nanowire.
Fig. 1. (a) SNSPD vertical optical stack with gold mirror. (b) Simulated cumulative absorption versus depth [37] for normally incident plane wave at optimum wavelength.
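To make the role of the reflector concrete, here is a simplified transfer-matrix sketch in Python (our illustration, not the paper's RCWA model): the patterned MoSi meander is crudely treated as a uniform 4.1 nm film with the quoted effective index, the three-layer AR coating above it is omitted, and the silicon substrate index is an assumed value, so the printed numbers only illustrate how the absorptance is obtained as 1 - R - T.

import numpy as np

lam = 1550e-9                       # design wavelength (m)
n_sio2, n_asi = 1.453, 2.735        # refractive indices quoted in the text
n_mosi = 5.817 - 6.033j             # MoSi written as n - i*kappa, the sign convention of this matrix form
n_in, n_sub = 1.0, 3.48             # vacuum above, assumed silicon substrate index below

def qw(n):
    # quarter-wave physical thickness at the design wavelength
    return lam / (4 * n)

# simplified stack seen by the incoming light: bare 4.1 nm MoSi film, then 13 alternating
# quarter-wave layers (SiO2 adjacent to the film, as deposited last on the wafer), then the substrate
layers = [(n_mosi, 4.1e-9)]
for i in range(13):
    n = n_sio2 if i % 2 == 0 else n_asi
    layers.append((n, qw(n)))

M = np.eye(2, dtype=complex)
for n, d in layers:
    delta = 2 * np.pi * n * d / lam
    M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                      [1j * n * np.sin(delta), np.cos(delta)]])

B, C = M @ np.array([1.0, n_sub])
R = abs((n_in * B - C) / (n_in * B + C)) ** 2      # reflectance
T = 4 * n_in * n_sub / abs(n_in * B + C) ** 2      # transmittance into the substrate
print(f"R = {R:.3f}, T = {T:.4f}, absorbed in stack = {1 - R - T:.3f}")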
The SNSPDs presented here were all fabricated on a single 76.2 mm diameter silicon wafer. To begin, we deposited 13 alternating layers of silicon dioxide (${{\rm SiO}_2}$) and amorphous silicon ($\alpha {\rm Si}$)—starting with ${{\rm SiO}_2}$—onto the wafer using plasma-enhanced chemical vapor deposition (PECVD). The refractive indices of these two dielectrics at 1550 nm were measured to be 1.453 (${{\rm SiO}_2}$) and 2.735 ($\alpha {\rm Si}$). For a DBR with the reflection band centered at 1550 nm, the layers deposited had thicknesses of $266.75 \pm 0.84$ nm (${{\rm SiO}_2}$) and $141.70 \pm 0.27$ nm ($\alpha {\rm Si}$). We then deposited gold terminals and assorted alignment marks using a photolithographic lift-off process. The terminals are composed of a three-layer stack of titanium (Ti, 2 nm), gold (Au, 50 nm), and titanium (Ti, 2 nm) deposited in an electron-beam evaporation tool. Following this, we use magnetron sputtering to co-sputter a 4.1 nm thick, 75:25 ratio molybdenum silicide (MoSi) layer capped with a 2 nm sputtering of $\alpha {\rm Si}$ to prevent oxidation. The refractive index of the MoSi layer was estimated to be $(n,\kappa) = (5.817,6.033)$ at 1550 nm (see Supplement 1, Section 2). This forms our superconducting layer and has been measured to have a critical temperature of ${T_c} \gt 5\,\,{\rm K}$ [41,42]. Coarse features are then etched into this layer using photolithography and an ${{\rm SF}_6}$ reactive-ion etch (RIE) recipe. The nanowire meander patterns generated using phidl [43] are then written onto the active area using a PMMA resist layer and an electron-beam lithography tool. The electron-beam resist pattern is then transferred onto the superconducting layer using a second RIE step with the same ${{\rm SF}_6}$-based recipe.
Fig. 2. (a) DBR-based vertical optical stack with the MoSi nanowire layer labeled. (b) Top view of optical-microscope image of the device chip. (c) SEM of a small region of the nanowire meandering at the edge of the active area, showing 180° hairpin bends. The darker regions are MoSi.
Fig. 3. System detection efficiency (SDE) versus applied voltage bias at the optimized input polarization for three different active area diameters. The pulses are read out of the nanowires directly using $50\,\Omega$ coaxial SMA lines (without extra series resistors). The inset is a zoomed-in view of the marked region. Error bars are omitted for clarity.
We patterned the nanowire meanders to cover circular active areas of diameters 20 µm, 35 µm, and 50 µm across different devices. The nanowires had widths of 80 nm and a fill factor of 0.57 (gap distance of 60 nm) [see Fig. 2(c)]. Atop the nanowire, we then deposited a three-layer AR coating of $\alpha {\rm Si}$ (50 nm), ${{\rm SiO}_2}$ (248.7 nm), and $\alpha {\rm Si}$ (68 nm) using the PECVD tool [Fig. 2(a)]. The final deposition step was for a pair of Ti (2 nm) and Au (100 nm) fiber spacers on either side of the active area [see Fig. 2(b)]. The AR coating on the device chip was then selectively etched off of certain regions to expose the Au terminals. All the dielectric layers were etched away to expose the substrate on the outside of the Au terminal polygons [gray region in Fig. 2(b)], and a deep-RIE process was used to etch through the Si wafer substrate and release the detector dies in a keyhole pattern [30], ready for mounting and wirebonding.
Fig. 4. (a) Pulse shapes for SNSPDs of the three active area diameters. (b) The same over a longer time scale. (c) Pulse shapes for a 50 µm diameter SNSPD with and without ${R_s} = 450\,\Omega$ resistor in series with the $50\,\Omega$ coaxial readout lines. The shaded regions mark the variance.
3. MEASUREMENT SETUP AND RESULTS
The devices were then mounted into the self-aligning fiber packages [30] and cooled inside a sorption-based cryostat to 720–780 mK. The bare ends of the fiber pigtails were accessible via a vacuum feedthrough and could be spliced to a photon source. The input state of light for SDE measurements is derived from a highly attenuated continuous-wave laser. The attenuated laser output passed through a polarization controller and was fed into a ${1 \times 2}$ optical switch, which can route light either to a monitoring power meter or to the device under test. The full measurement setup and calibration method are detailed in Supplement 1, Section 1, and are nearly identical to those in [9,30,38].
The SNSPDs are quasi-current-biased using a bias tee, a $100\,\,{\rm k}\Omega$ series resistor, and a voltage source (see Supplement 1, Section 1). The RF-only port of the bias tee is connected to two room-temperature RF amplifiers. The output pulses are then either recorded on a sampling oscilloscope or conditioned into square pulses and sent to a pulse counter. The polarization optimization algorithm tries to vary all the settings on the polarization controller to either minimize or maximize the count rate (CR) from the SNSPD. As such, the reported maximum-polarization SDE value is at worst a conservative lower bound, and the reported minimum-polarization SDE value is an upper bound for the exact corresponding values. We do not attempt to correct for any fiber-splicing losses. A bad fiber splice to a detector pigtail will decrease the estimated SDE.
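A simplified version of the photon-rate arithmetic behind an SDE estimate is sketched below in Python (illustrative values only; the actual calibration chain is described in Supplement 1, Section 1).

h, c = 6.626e-34, 2.998e8      # Planck constant (J s), speed of light (m/s)
lam = 1550e-9                  # wavelength (m)

power_at_detector = 2.6e-14    # assumed calibrated optical power reaching the detector fiber (W)
photon_rate = power_at_detector * lam / (h * c)   # photons per second

count_rate = 2.0e5             # assumed measured count rate (counts/s)
dark_rate = 1.0e2              # assumed dark-count rate (counts/s)
sde = (count_rate - dark_rate) / photon_rate
print(f"photon rate ~ {photon_rate:.3g} /s, estimated SDE ~ {sde:.3f}")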
Figure 3 shows the bias voltage versus SDE plots for SNSPDs of three different active area diameters at the CR-maximized input polarizations. These devices are read out directly from the nanowires through $50\,\Omega$ impedance coaxial lines. The plots indicate a significant gain in SDE in the 35 µm diameter device over the 20 µm one, which is an effect of Gaussian-beam expansion of the fiber-exit mode now interferometrically trapped within a DBR-based vertical optical stack. The beam exiting an SMF-28e+ fiber has a mode-field diameter of under 10 µm, implying a Rayleigh range of about 50 µm. Given how thin the nanowire layer is required to be, the photon in the optical mode is expected to pass through it ${\cal O}(10^2)$ times before the probability of absorption approaches unity (see Supplement 1, Section 2). This, combined with the larger effective round-trip length in the DBR-based optical stack [Fig. 2(a)], would favor larger active area nanowire devices for SDE.
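A back-of-the-envelope Python sketch of this geometric argument follows (all numbers are assumptions chosen for illustration, and the expansion is computed in free space, so it overstates the spread inside the dielectric stack).

import math

lam = 1.55e-6            # wavelength (m)
mfd = 10.4e-6            # assumed SMF-28e+ mode-field diameter at 1550 nm (m)
w0 = mfd / 2             # beam waist radius
z_R = math.pi * w0**2 / lam
print(f"Rayleigh range ~ {z_R*1e6:.0f} um")   # roughly consistent with the ~50 um quoted above

def beam_diameter(z):
    # Gaussian-beam diameter after propagating a distance z from the waist
    return mfd * math.sqrt(1 + (z / z_R) ** 2)

# assumed effective trapped path lengths, standing in for many passes through the stack
for z in (50e-6, 100e-6, 200e-6):
    print(f"after {z*1e6:.0f} um of effective propagation: beam diameter ~ {beam_diameter(z)*1e6:.1f} um")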
The SDE curve, however, latches earlier for the larger SNSPD, and refuses to fully saturate. The problem is exacerbated for the 50 µm diameter SNSPDs. This is an effect of the increased RF pulse width in the larger devices (see Fig. 4) owing to larger kinetic inductances, resulting in a slight nonlinearity in sensitivity to successive events. The curves in Fig. 3 were all plotted at average CRs of ${10^5}$ counts per second within the expected saturation regions. Under no-light conditions, the dark-count profiles for all three device sizes extended to similar latching bias voltages of $0.49 - 0.52\;{\rm V} $.
Fig. 5. System detection efficiency (SDE) versus applied voltage bias at the count-rate maximized (max_pol) and minimized (min_pol) input polarizations for four 50 µm diameter devices (labeled A, B, C, and D) from the same wafer, with pulses read out with $450\,\Omega$ resistor in series with the $50\,\Omega$ coaxial lines.
The exponential-recovery temporal widths of the RF pulses, which indicate detection events [Fig. 4(a)], are directly proportional to the ratio of the inductance of the SNSPDs to the impedance of the coaxial electrical lines, whereas the rise times are inversely related to the hotspot resistance. Over a longer timescale [see Fig. 4(b)], however, the pulses show a slow ringing effect, which is a reflection from the readout electronics due to impedance mismatch [24,44]. To reveal the true saturating SDE of our large-area SNSPDs, we needed to eliminate the undershoot/ringing in the RF pulses by modifying the readout electronics [45,46]. We achieved this by adding ${R_S} = 450\,\Omega$ thin-film resistors in series with the coaxial lines close to the SNSPD inside the cryostat, thus changing the recovery time constant at the expense of pulse height [see Fig. 4(c)].
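The effect of the series resistor can be estimated with a toy lumped-element calculation, sketched below in Python (the kinetic inductance, hotspot resistance, and bias current are assumed illustrative values, not the paper's).

L_k = 2e-6        # assumed kinetic inductance of a large-area meander (H)
Z_0 = 50.0        # coaxial readout impedance (ohm)
R_hs = 1000.0     # assumed transient hotspot resistance (ohm)
I_b = 10e-6       # assumed bias current (A)

for R_s in (0.0, 450.0):
    tau_fall = L_k / (Z_0 + R_s)                      # exponential recovery time constant
    i_div = I_b * R_hs / (R_hs + R_s + Z_0)           # peak current diverted into the readout branch
    v_peak = i_div * Z_0                              # voltage across the 50 ohm line
    print(f"R_s = {R_s:4.0f} ohm: tau_fall ~ {tau_fall*1e9:5.1f} ns, "
          f"peak ~ {v_peak*1e6:5.1f} uV (before amplification)")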
With the speed-up series resistors in place, all the 50 µm wide circular active area SNSPDs now showed saturation in their bias voltage versus SDE curves. Figure 5 plots these for four different SNSPDs from the same wafer. The devices are color coded and labeled A, B, C, and D in the legend. Both CR-maximized and CR-minimized polarization optimizations are shown. The SDE saturates at a value of $98.0 \pm 0.5\%$, thus breaching the previous record [9].
We have demonstrated fiber-coupled SNSPDs with SDEs exceeding 98%, which to our knowledge is the highest value published thus far at near-infrared wavelengths. We relied on a DBR-based optical stack design, a well-established fiber-coupling mounting method, and a large active area to capture all of the diverging optical mode that exits the front face of the coupling fiber. We employed a series resistor to mitigate the impedance mismatch issues that plague the output RF pulse shapes for large-area devices. This proved to be a necessary electronic compensation for measuring the true SDE of the device.
The new SDE record restricts the theoretical loss of photons through mechanisms such as scattering, dielectric absorption, and stack fabrication errors with stricter upper bounds. The capability will further the use of fiber-coupled SNSPDs in increasingly elaborate quantum-optics setups and experiments that involve detection of rare events, such as multi-device coincident detections.
The authors acknowledge Igor Vayshenker for providing us with power-meter calibration. We thank Shannon Duff and Adriana E. Lita for help with characterization of the cleanroom deposition tools.
This work includes contributions of the National Institute of Standards and Technology, which are not subject to U.S. copyright. The use of trade names does not imply endorsement by the U.S. government. The authors declare no conflicts of interest.
See Supplement 1 for supporting content.
1. Y. Hochberg, I. Charaev, S. W. Nam, V. Verma, M. Colangelo, and K. K. Berggren, "Detecting sub-GeV dark matter with superconducting nanowires," Phys. Rev. Lett. 123, 151802 (2019). [CrossRef]
2. Y. Mao, B.-X. Wang, C. Zhao, G. Wang, R. Wang, H. Wang, F. Zhou, J. Nie, Q. Chen, Y. Zhao, Q. Zhang, J. Zhang, T.-Y. Chen, and J.-W. Pan, "Integrating quantum key distribution with classical communications in backbone fiber network," Opt. Express 26, 6010–6020 (2018). [CrossRef]
3. J.-P. Chen, C. Zhang, Y. Liu, C. Jiang, W. Zhang, X.-L. Hu, J.-Y. Guan, Z.-W. Yu, H. Xu, J. Lin, M.-J. Li, H. Chen, H. Li, L. You, Z. Wang, X.-B. Wang, Q. Zhang, and J.-W. Pan, "Sending-or-not-sending with independent lasers: secure twin-field quantum key distribution over 509 km," Phys. Rev. Lett. 124, 070501 (2020). [CrossRef]
4. S. Slussarenko, M. M. Weston, H. M. Chrzanowski, L. K. Shalm, V. B. Verma, S. W. Nam, and G. J. Pryde, "Unconditional violation of the shot-noise limit in photonic quantum metrology," Nat. Photonics 11, 700–703 (2017). [CrossRef]
5. J. Zhu, Y. Chen, L. Zhang, X. Jia, Z. Feng, G. Wu, X. Yan, J. Zhai, Y. Wu, Q. Chen, X. Zhou, Z. Wang, C. Zhang, L. Kang, J. Chen, and P. Wu, "Demonstration of measuring sea fog with an SNSPD-based lidar system," Sci. Rep. 7, 15113 (2017). [CrossRef]
6. L. Chen, D. Schwarzer, J. A. Lau, V. B. Verma, M. J. Stevens, F. Marsili, R. P. Mirin, S. W. Nam, and A. M. Wodtke, "Ultra-sensitive mid-infrared emission spectrometer with sub-ns temporal resolution," Opt. Express 26, 14859–14868 (2018). [CrossRef]
7. Q. Zhuang, Z. Zhang, and J. H. Shapiro, "Distributed quantum sensing using continuous-variable multipartite entanglement," Phys. Rev. A 97, 032329 (2018). [CrossRef]
8. E. T. Khabiboulline, J. Borregaard, K. De Greve, and M. D. Lukin, "Optical interferometry with quantum networks," Phys. Rev. Lett. 123, 070504 (2019). [CrossRef]
9. F. Marsili, V. B. Verma, J. A. Stern, S. Harrington, A. E. Lita, T. Gerrits, I. Vayshenker, B. Baek, M. D. Shaw, R. P. Mirin, and S. W. Nam, "Detecting single infrared photons with 93% system efficiency," Nat. Photonics 7, 210–214 (2013). [CrossRef]
10. H. Le Jeannic, V. B. Verma, A. Cavaillès, F. Marsili, M. D. Shaw, K. Huang, O. Morin, S. W. Nam, and J. Laurat, "High-efficiency WSi superconducting nanowire single-photon detectors for quantum state engineering in the near infrared," Opt. Lett. 41, 5341–5344 (2016). [CrossRef]
11. S. Krapick, M. Hesselberg, V. B. Verma, S. W. Nam, and R. P. Mirin, "Bandwidth-enhanced superconducting nanowire single photon detectors for telecom wavelengths," in CLEO: QELS_Fundamental Science (OSA, 2017), paper FF1E.2.
12. B. Korzh, Q.-Y. Zhao, J. P. Allmaras, S. Frasca, T. M. Autry, E. A. Bersin, A. D. Beyer, R. M. Briggs, B. Bumble, M. Colangelo, G. M. Crouch, A. E. Dane, T. Gerrits, A. E. Lita, F. Marsili, G. Moody, C. Peña, E. Ramirez, J. D. Rezac, N. Sinclair, M. J. Stevens, A. E. Velasco, V. B. Verma, E. E. Wollman, S. Xie, D. Zhu, P. D. Hale, M. Spiropulu, K. L. Silverman, R. P. Mirin, S. W. Nam, A. G. Kozorezov, M. D. Shaw, and K. K. Berggren, "Demonstration of sub-3 ps temporal resolution with a superconducting nanowire single-photon detector," Nat. Photonics 14, 250–255 (2020). [CrossRef]
13. H. Shibata, T. Honjo, and K. Shimizu, "Quantum key distribution over a 72 dB channel loss using ultralow dark count superconducting single-photon detectors," Opt. Lett. 39, 5078–5081 (2014). [CrossRef]
Integer-valued factorial ratios
This historical question recalls Pafnuty Chebyshev's estimates for the prime distribution function. In his derivation Chebyshev used the factorial ratio sequence $$ u_n=\frac{(30n)!n!}{(15n)!(10n)!(6n)!}, \qquad n=0,1,2,\dots, $$ which assumes integer values only. The latter fact can be established with the help of $$ \operatorname{ord}_p n! =\biggl\lfloor\frac{n}{p}\biggr\rfloor+\biggl\lfloor\frac{n}{p^2}\biggr\rfloor +\biggl\lfloor\frac{n}{p^3}\biggr\rfloor+\dots $$ and routine verification of $$ \lfloor 30x\rfloor+\lfloor x\rfloor-\lfloor 15x\rfloor-\lfloor 10x\rfloor-\lfloor 6x\rfloor\ge0. $$ Other Chebyshev-like examples of integer-valued factorial sequences are known; the complete list of such $$ u_n=\frac{(a_1n)!\dots(a_rn)!}{(b_1n)!\dots(b_sn)!} $$ in the case $s=r+1$ was recently tabulated in [J.W. Bober, J. London Math. Soc. (2) 79 (2009) 422--444]. A motivation for this classification problem is its relation to a certain approach to Riemann's hypothesis, but I would prefer to refer anyone interested to Bober's paper (which can be found on the arXiv as well). The proofs of $u_n\in\mathbb Z$ make use of the above formula for $\operatorname{ord}_p n!$.
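For readers who want to see the $p$-order argument in action, here is a small Python sketch (my own addition, not from the original post; the helper names are ad hoc) that checks both the integrality of Chebyshev's $u_n$ and the prime-by-prime inequality coming from Legendre's formula, for a few small values of $n$.

```python
from math import factorial

def ord_p_factorial(n, p):
    """Legendre's formula: ord_p(n!) = sum_{i>=1} floor(n / p^i)."""
    total, q = 0, p
    while q <= n:
        total += n // q
        q *= p
    return total

def primes_up_to(m):
    sieve = [True] * (m + 1)
    sieve[:2] = [False, False]
    for i in range(2, int(m ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

for n in range(1, 6):
    # u_n = (30n)! n! / ((15n)! (10n)! (6n)!)
    num = factorial(30 * n) * factorial(n)
    den = factorial(15 * n) * factorial(10 * n) * factorial(6 * n)
    assert num % den == 0  # u_n is an integer
    # ord_p(numerator) >= ord_p(denominator) for every prime p <= 30n
    for p in primes_up_to(30 * n):
        lhs = ord_p_factorial(30 * n, p) + ord_p_factorial(n, p)
        rhs = (ord_p_factorial(15 * n, p) + ord_p_factorial(10 * n, p)
               + ord_p_factorial(6 * n, p))
        assert lhs >= rhs
print("u_n integral for n = 1..5, consistent with the p-order argument")
```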
There are three 2-parameter families in Bober's list, namely, $$ \frac{(n+m)!}{n!m!}, \qquad \frac{(2n)!(2m)!}{n!(n+m)!m!}, \qquad\text{and}\qquad \frac{(2n)!m!}{n!(2m)!(n-m)!} \quad (n>m); $$ the first one includes the binomial coefficients, while some properties of the second family are mentioned in this question. For the binomial family, a standard way to establish integrality purely combinatorially amounts to interpreting the factorial ratio as coefficients in the expansion $$ (1+t)^{n+m}=\sum_{k=0}^{n+m}\binom{n+m}{k} t^k, $$ that is, as the number of $m$-element subsets of an $(n+m)$-set. There is a lack of a similar interpretation for the other two 2-parametric families, although Ira Gessel indicates in [J. Symbolic Computation 14 (1992) 179--194] that the inductive argument together with the identity $$ \frac{(2n)!(2(n+p))!}{n!(n+(n+p))!(n+p)!} =\sum_{k=0}^{\lfloor p/2\rfloor}2^{p-2k} \binom{p}{2k} \frac{(2n)!(2k)!}{n!(n+k)!k!} \qquad (p\geq 0) $$ allows one to show that the numbers in question are indeed integers. A slight modification of the formula can be used to show that the third 2-parametric family is integer valued. In these cases one uses a reduction to binomial sums for which the integrality is already known. But what about the 1-parametric families, like Chebyshev's or, say, $$ \frac{(12n)!n!}{(6n)!(4n)!(3n)!}? $$ Is there any way to establish the integrality without referring to the $p$-order formula?
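Gessel's identity above can likewise be checked mechanically. The following sketch (again my own illustration, using exact rational arithmetic so that no integrality is assumed in advance) verifies the identity, and the integrality of the second family, for small $n$ and $p$.

```python
from math import factorial, comb
from fractions import Fraction

def f(n, m):
    # f(n, m) = (2n)! (2m)! / (n! (n+m)! m!), kept exact as a Fraction
    return Fraction(factorial(2 * n) * factorial(2 * m),
                    factorial(n) * factorial(n + m) * factorial(m))

# Gessel's identity: f(n, n+p) = sum_{k=0}^{floor(p/2)} 2^(p-2k) C(p, 2k) f(n, k)
for n in range(8):
    for p in range(8):
        rhs = sum(2 ** (p - 2 * k) * comb(p, 2 * k) * f(n, k)
                  for k in range(p // 2 + 1))
        assert f(n, n + p) == rhs
        assert f(n, n + p).denominator == 1  # the values are integers, as expected
print("identity and integrality checked for 0 <= n, p <= 7")
```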
My own motivation is explained in a recent joint preprint with Ole Warnaar, where we observe a $q$-version of the integrality in a "stronger form".
nt.number-theory
co.combinatorics
binomial-coefficients
hypergeometric-functions
asked May 29, 2010 at 6:28
Wadim Zudilin
$\begingroup$ Wadim, why did you omit L'vovich? $\endgroup$
– Victor Protsak
$\begingroup$ Victor, we are so democratic nowadays... BTW, my son is Victor Wadimovich. :) $\endgroup$
– Wadim Zudilin
$\begingroup$ Just to add some more on the list: artofproblemsolving.com/Forum/viewtopic.php?t=160695 (scroll down to posts #3 and #4) and $\frac{\left(na_1\right)!\left(na_2\right)!...\left(na_n\right)!}{a_1!a_2!...a_n!\cdot\left(a_1+a_2\right)^{\left(n-1\right)/2}\left(a_2+a_3\right)^{\left(n-1\right)/2}...\left(a_n+a_1\right)^{\left(n-1\right)/2}}\in\mathbb Z$ for any $n\in\mathbb N$ and $a_1,a_2,...,a_n\in\mathbb N$. $\endgroup$
– darij grinberg
$\begingroup$ Very nice, Darij, thanks! I just checked that the first one (from the forum) is perfectly treatable by the arithmetic argument ($p$-order of factorials). $\endgroup$
$\begingroup$ Here is a more recent paper of Soundararajan addressing integer factorial ratios: arxiv.org/abs/1906.06413 $\endgroup$
– yoyo
Along with the binomial coefficients, the other two infinite families each enjoy a fairly simple recurrence. For example, $$f(n,m)=\frac{(2n)!(2m)!}{n!(n+m)!m!}$$ has $f(0,t)=\binom{2t}{t}$ and $f(i+1,j)=4f(i,j)-f(i,j+1).$
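For what it's worth, the boundary values and the recurrence are easy to confirm numerically; here is a small sketch (not part of the original answer) using exact rational arithmetic.

```python
from math import factorial, comb
from fractions import Fraction

def f(n, m):
    # f(n, m) = (2n)! (2m)! / (n! (n+m)! m!), kept exact
    return Fraction(factorial(2 * n) * factorial(2 * m),
                    factorial(n) * factorial(n + m) * factorial(m))

# boundary values f(0, t) = C(2t, t) and the recurrence f(i+1, j) = 4 f(i, j) - f(i, j+1)
for t in range(10):
    assert f(0, t) == comb(2 * t, t)
for i in range(9):
    for j in range(9):
        assert f(i + 1, j) == 4 * f(i, j) - f(i, j + 1)
print("boundary values and recurrence verified for indices up to 9")
```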
Consider the one-parameter family $$\frac{(2n)!(6n)!}{n!(4n)!(3n)!}.$$ Viewed in isolation, it seems hard to establish integrality without referring to the $p$-order formula. However, as the case $m=3n$ of the family $f(n,m)$, it consists of the values in a line of cells with slope 3.
I've wondered if any of the various "sporadic" one-parameter families can be embedded in a similar manner in a 2-parameter family defined by a recurrence. Evidently the entire table would not all be given by a formula exactly of that form.
answered Jul 29, 2010 at 6:43
Aaron Meyerowitz
$\begingroup$ MO welcoming +1, Aaron! Yes, the 2-parametric families possess (many!) recurrences, and this is why they are "easy". There are arithmetic-algebraic obstacles for the 1-parametric families to extend to 2-parametric ones of the same "factorial ratio" form (this can be rigorously shown!). My intuition says that the desired 2-parametric families $f(n,m)$, say, have the form $\sum_kg(n,m,k)$ of a hypergeometric sum, so that $f(n,n)$ (or some other specializations to 1 variable) has a closed "factorial ratio" form. I simply have no idea on how to construct such $f$; I've never seen them around. $\endgroup$
$\begingroup$ I agree that 2-parametric families of that "factorial ratio" form are unlikely to be sitting around undiscovered (as noted in my closing sentence). But what about embedding in a 2-parameter family possessing a recurrence like g(i+1,j+1)=ug(i,j+1)+vg(i,j)+wg(i+1,j). Still seems like a long shot. In the example g(n)=f(n,3n) I gave, how would one start from g(n) and somehow deduce a 2-parameter recurrence? Are there other ones in which it nicely embeds? $\endgroup$
– Aaron Meyerowitz
$\begingroup$ You wish me answer the questions I ask myself! :-) Unless one finds at least one nontrivial example of lifting from 1- to 2-parametric families, we can only try to guess the structure. The recurrence relation you write strongly resembles the WZ-pair relation, at least some "brain food" for me. Thanks! $\endgroup$
$\begingroup$ Right. We do know that example with a lift to a family given by a recurrence. Can we "discover" the recurrence? I was curious about the "many!" recurrences you mention. I can find that one and a similar one for the other family, but only those. What form do you mean? $\endgroup$
$\begingroup$ Many (in fact, three) recurrences, which are sufficient for proving the integrality of $(2n)!(2m)!/n!(n+m)!m!$, are given in Gessel's paper (see the free weblink in the question). $\endgroup$
The p-order method got a lot of attention in the solution of Askey's 1986 problem 6514 in the Math Monthly to show that $$\frac{(3m + 3n)!(3n)!(2m)!(2n)!}{(2m + 3n) !(m + 2n) !(m + n)!m!n!n!}$$ is always an integer.
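A brute-force check of the integrality claim in Askey's problem is straightforward; the following sketch (added purely for illustration) tests the ratio for small $m$ and $n$.

```python
from math import factorial

def askey_ratio(m, n):
    # (3m+3n)! (3n)! (2m)! (2n)! / ((2m+3n)! (m+2n)! (m+n)! m! n! n!)
    num = (factorial(3 * m + 3 * n) * factorial(3 * n)
           * factorial(2 * m) * factorial(2 * n))
    den = (factorial(2 * m + 3 * n) * factorial(m + 2 * n) * factorial(m + n)
           * factorial(m) * factorial(n) * factorial(n))
    return num, den

for m in range(12):
    for n in range(12):
        num, den = askey_ratio(m, n)
        assert num % den == 0  # the ratio is an integer
print("Askey's ratio is integral for 0 <= m, n <= 11")
```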
It had been conjectured that this is the constant term of $$\left( \prod{(1-u/v)} \right)^m \left( \prod{(1-uv/w)} \right)^n$$ where each product is over the 6 ways to set the variables to x,y and z (and hence is an integer). This was established in: A Proof of the $G_2 $ Case of Macdonald's Root System-Dyson Conjecture by Doron Zeilberger, SIAM J. Math. Anal. 18, 880 (1987), DOI:10.1137/0518065 . So this is certainly not a p-order proof. However I don't know that there are constant term identities for these other ratios. The article does cite (a special case of) a theorem of Morris showing that the following expression is a constant term and hence an integer:
$$\frac{(a+b+2c)!(a+b+c)!(a+b)!(2c)!(3c)!}{(a+2c)!(b+2c)!(a+c)!(b+c)!a!b!c!c!} $$
$\begingroup$ Welcome again, Aaron! That's a very good point, to interpret these factorial ratios as CTs... To my best knowledge there are no such things known for the 1-parametric families, but I might be wrong. +1 again and many thanks. $\endgroup$
$\begingroup$ I also don't know of any, and CT identities are not my area of expertise. On the other hand there are now known to be 3 2-parameter families and a 29 sporadic one parameter families (an! bn!)/(cn! dn! en!) there are a handful more sporadic families with 3 on the top 4 on the bottom and maybe 4 on top 5 on the bottom. SO it would not be totally amazing if root systems came in somehow. $\endgroup$
$\begingroup$ With your very explicit idea in mind, I should ask Doron directly (he appeared on MO only once). I am not an expert on CT evaluations, although I know a huge database of some work in this direction, related to Calabi-Yau differential equations. $\endgroup$
$\begingroup$ Incidentally, $(2m)!\,(2n)!/m!\,n!\,(m+n)!$ is the constant term in $(1+x)^m(1+1/x)^m(1-x)^n(1-1/x)^n$. $\endgroup$
– Ira Gessel
$\begingroup$ So $U_2(m,n)$ is $(-1)^n$ times the coefficient of $x^{m+n}$ in both $(x+1)^{2m}(x-1)^{2n}$ and $(1-4x)^{n-1/2}$. $\endgroup$
Although this isn't an answer to the question, it's worth pointing out that the second and third families are essentially binomial coefficients.
We have $$U_2(m,n):=\frac{(2m)!\,(2n)!}{m!\, n!\, (m+n)!} = (-1)^m 2^{2m+2n}\binom{n-\frac12}{m+n}$$ and $$U_3(m,n):=\frac{(2n)!\,m!}{n!\,(2m)!\,(n-m)!}=(-1)^{n-m}2^{2n-2m}\binom{-m-\frac12}{n-m}.$$ It follows that $U_2(m,n)$ is $(-1)^n$ times the coefficient of $x^{m+n}$ in $(1-4x)^{n-1/2}$ and $U_3(m,n)$ is the coefficient of $x^{n-m}$ in $(1-4x)^{-m-1/2}$. Thus these numbers are integers since they are coefficients of (odd) integer powers of $(1-4x)^{-1/2}=\sum_{n=0}^\infty \binom{2n}{n} x^n$. (And in a sense they are really just one family.) It also follows that there is a simple combinatorial interpretation for $U_3(m,n)$ since it is a coefficient of a positive integer power of $(1-4x)^{-1/2}$, but we don't get an interpretation for $U_2(m,n)$ since there is cancellation in expanding positive powers of $(1-4x)^{1/2}$
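These closed forms can be confirmed with exact rational arithmetic; the sketch below (my addition, with an ad hoc generalized binomial helper) checks both identities for small $m$ and $n$.

```python
from math import factorial
from fractions import Fraction

def gbinom(a, k):
    """Generalized binomial coefficient C(a, k) for rational a and integer k >= 0."""
    out = Fraction(1)
    for i in range(k):
        out *= (a - i) / (k - i)
    return out

def U2(m, n):
    return Fraction(factorial(2 * m) * factorial(2 * n),
                    factorial(m) * factorial(n) * factorial(m + n))

def U3(m, n):  # requires n >= m
    return Fraction(factorial(2 * n) * factorial(m),
                    factorial(n) * factorial(2 * m) * factorial(n - m))

for m in range(8):
    for n in range(8):
        # U_2(m,n) = (-1)^m 2^(2m+2n) C(n - 1/2, m + n)
        assert U2(m, n) == (-1) ** m * 2 ** (2 * m + 2 * n) * gbinom(Fraction(2 * n - 1, 2), m + n)
        if n >= m:
            # U_3(m,n) = (-1)^(n-m) 2^(2n-2m) C(-m - 1/2, n - m)
            assert U3(m, n) == (-1) ** (n - m) * 2 ** (2 * n - 2 * m) * gbinom(Fraction(-2 * m - 1, 2), n - m)
print("closed forms checked for 0 <= m, n <= 7")
```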
Ira GesselIra Gessel
I asked a similar question on sci.math.research in December 2006.
I mentioned Gregg Patruno's solution (Amer. Math. Monthly 94 (1987), 1012-1014) to Dick Askey's Problem 6514 in the American Mathematical Monthly, which uses what you call the $p$-order formula, and then I asked if there were any way to prove such facts by expressing the formula of interest in terms of quantities that are "obviously" integers (e.g., binomial coefficients). William Shanley pointed out that if one asks for a stronger result, namely a combinatorial interpretation of any such ratio of factorials, then this is probably too much to ask for. He mentioned Gessel and Xin's paper "A combinatorial interpretation of the numbers $6(2n)!/n!(n+2)!$" (J. Integer Seq. 8 (2005), Article 05.2.3), which uses considerable ingenuity to give a natural combinatorial interpretation in one specific case. However, establishing integrality is weaker than finding a natural combinatorial interpretation.
But as far as I know, your question is still open. In response to my sci.math.research article, Valery Liskovets sent me email with two references giving partial results. The first is David Callan's paper "Certificates of integrality for linear binomials" (Fibonacci Q. 38 (2000), 317-325) and the second is an article by Jam Germain from the NMBRTHRY Archive (18 Oct 2003).
edited Nov 30, 2021 at 17:54
Timothy Chow
$\begingroup$ Thanks, Timothy, for the references which I'll follow. I am definitely not interested in combinatorial interpretations (yes, this is too much to ask!), and as I indicate in my question the 2-parametric families can be shown to be integer valued (so that there is a hope to get something for 1-parametric families as well). $\endgroup$
$\begingroup$ +1 for the references and convincing me on the fact that the problem is on the market for quite a period. Dick's Problem 6514 most probably can be solved by Ira Gessel's approach (I mention in the question), although the ratio $(3m+3n)!(3n)!(2m)!(2n)!/(2m+3n)!(m+2n)!(m+n)!m!n!n!$ is better split into two integer-valued ratios (I have to think more about this). Jam's response is not close enough to the problem. I'll check Fibonacci Q. I am aware of other work on this problem but not of other methods of showing the desired integrality. :( $\endgroup$
$\begingroup$ Timothy, I read your Dec 2006 question carefully once again, and even though I understand your motivation quite well, it is a different question! First, you don't have the factorial ratios I mention in mind (except for the one from Dick's problem). Secondly, you wish to have an expression by means of "obviously integers", like binomials. This does not work even for the non-binomial 2-parametric families; an extra induction is required. Final remark: the $p$-order argument is originally due to Chebyshev(!), not Gregg Patruno, and is given in Polya-Szego's "Problems and theorems in analysis". $\endgroup$
$\begingroup$ Wadim, you're right; as I said, I asked a "similar" question and not "the same" question, but I thought the pointers might be of use to you anyway. And certainly I did not mean to attribute the $p$-order argument to Gregg Patruno; indeed, Patruno complained that the method was absolutely ancient and that the Monthly should stop posing problems of this sort. It was just the first place I happened to encounter it. $\endgroup$
– Timothy Chow
$\begingroup$ Timothy, once again: Thank you very much for your response! I think that it's very helpful for my (and maybe somebody else's) understanding of what is going on here. $\endgroup$
I'd like to see a combinatorial/group theoretic proof. For instance, if one could exhibit an injective homomorphism of direct (or just semidirect) products of symmetric groups:
$\phi:S_{3n} \times S_{4n} \times S_{6n} \to S_{n}\times S_{12n},$
then the index of Im($\phi$) would be that ratio. This is just a vague hint.
Pietro Majer
$\begingroup$ Hmmm, this is hardly an answer but probably an expression of interest. I am curious enough whether an injective homomorphism $S_n\times S_m\times S_{n+m}\to S_{2n}\times S_{2m}$ is known... I doubt it. $\endgroup$
$\begingroup$ Pietro: This is impossible even for $n=1$: Assume such $\phi$ exists. The image of a 5-cycle from $S_6$ is either a 5-cycle or a product of two disjoint 5-cycles in $S_{12}.$ In the latter case, its centralizer doesn't contain elements of order 3. In the former case, the centralizer is $S_7\times C_5,$ which contains a unique $S_3\times S_4$ subgroup modulo conjugation. But its centralizer in $S_{12}$ is $S_5$, which cannot contain $\phi(S_6)$. (There may be an even easier proof, but using a 3-cycle, a 4-cycle, and a 6-cycle doesn't seem to immediately lead to a contradiction) $\endgroup$
$\begingroup$ good point. and semidirect products? anyway, that was just a hint, (I'm not a group theorist) $\endgroup$
– Pietro Majer
$\begingroup$ Semidirect products don't help. For $n \neq 6$, every automorphism of $S_n$ is inner. So every semidirect product $G \ltimes S_n$ is isomorphic to $G \times S_n$. $\endgroup$
– David E Speyer
$\begingroup$ Pietro, this was nevertheless a very "olympic" approach. Are there other "combinatorial" interpretations of the binomials rather than the order of symmetric group? As for your "Wait! Is yours an elementary question/homework? then this is not the right place; try one of these instead (links follow)", there was recently a discussion on Meta of similar flavour, but it went nowhere... $\endgroup$
Effect of different fermentation strategies on Bacillus thuringiensis cultivation and its toxicity towards the bagworm, Metisa plana Walker (Lepidoptera: Psychidae)
Mohamed Mazmira Mohd Masri & Arbakariya Bin Ariff
The effect of batch and fed-batch fermentation on the cultivation performance of Bacillus thuringiensis was investigated using a 5-l stirred tank bioreactor. A significantly higher viable cell count (> 1.5 × 10¹² CFU/ml) was obtained in fed-batch than in batch fermentation (1.4 × 10¹² CFU/ml). Glucose feeding during the fermentation seemed to enhance cell growth but failed to enhance the sporulation rate. It was found that sporulation and δ-endotoxin synthesis in fed-batch fermentation could be enhanced by applying an optimal dissolved oxygen tension (DOT) control strategy without affecting cell growth. Fed-batch cultivation with feeding at the exponential growth phase, where the DOT was switched from 80 to 40% at 12 h of cultivation, recorded the highest spore count of 7.1 × 10¹¹ spores/ml. Cultures obtained from batch cultivation, as well as from fed-batch cultivation with feeding at the lag or exponential growth phase and application of the optimal DOT control strategy, recorded the presence of δ-endotoxin; however, none was detected in intermittent fed-batch fermentation. Bioassay data against the bagworm Metisa plana Walker (Lepidoptera: Psychidae) recorded the highest corrected mortality (80%) at 7 days after treatment (DAT) using the culture obtained from fed-batch cultivation with feeding during the exponential growth phase, in which the DOT was switched from 80 to 40% at 12 h of cultivation. It is important to note that all cultures containing δ-endotoxin exhibited 100% mortality towards M. plana at 14 DAT.
Bacillus thuringiensis (Bt) is widely used to control insect pests in the orders Lepidoptera, Diptera, and Coleoptera (Yury et al. 2019). This bacterium produces spores together with a proteinaceous body known as the crystal protein or δ-endotoxin, which possesses insecticidal properties. These insecticidal proteins accumulate in the cell as crystal inclusions which constitute approximately 25% of the dry weight of the sporulated cells (Agaisse and Lereclus 1995). Bt is also very useful in controlling leaf defoliators such as bagworms (Noorhazwani et al. 2017). The currently recommended option for conserving natural enemies is to use Bt for spraying against the pest (Norman and Mazmira 2019).
The Malaysian Palm Oil Board (MPOB) has established a local biopesticide product based on Bt known as Ecobac-1 (EC). The product has been used for ground and aerial spraying in smallholder areas as well as plantations to combat bagworm outbreaks, especially in Perak and Johor (Mazmira et al. 2010). At least three consecutive aerial sprayings of Bt are required to bring the bagworm population below the threshold level (Noorhazwani et al. 2017). In Malaysia, severe economic losses are caused by two species of bagworm, namely Metisa plana Walker and Pteroma pendula Joannis (Lepidoptera: Psychidae) (Ramlah et al. 2007). Bagworm attacks can cause yield losses of up to 33–47%, especially in oil palm (Basri et al. 1994). From the mid-1960s, bagworm outbreaks became less common but surged again with greater severity in the 1990s (Brian and Norman 2019). The bagworm species M. plana is classified as the most economically significant insect pest of oil palm (Basri et al. 1988). Bagworm infestation has been a serious issue affecting oil palm yield because of delayed and incorrect control strategies (Tey and Cheong 2013). In 2018, the total area of infested oil palm, especially in smallholdings, exceeded 30,000 ha, and the use of Bt-based biopesticides has been the best alternative for controlling the pest.
Batch cultivation mode is frequently used to produce Bt spores with δ-endotoxin (Rowe and Margaritis 1987; Avionone-Rossa and Mignone 1993; Adams et al. 1999). However, the kinetics of Bt in batch cultivation has not been studied extensively. A wide range of the maximum specific growth rate (0.4–1.9 h⁻¹) for Bt has been reported (Avionone-Rossa and Mignone 1993), indicating the lack of a systematic study on growth kinetics of Bt.
The final spore concentrations obtained in batch cultivation of Bt were relatively low and did not exceed 10¹¹ spores/ml (Sarrafzadeh et al. 2005; Khodair et al. 2008; Vu et al. 2010). A mixture of Bt spores and crystals can be produced using different modes of cultivation (Aronson and Yechiel 2001). Many researchers have reported the use of fed-batch cultivation for the production of high-density cell cultures (Stanbury et al. 2003; Krause et al. 2010; Warren et al. 2018). The maximum cell concentration obtained in fed-batch cultivation (53.7 g/l) of Bt subspecies kurstaki was ninefold higher than that obtained in batch cultivation (5.9 g/l) (Liu et al. 1994). Kang et al. (1993) found that fed-batch cultivation with constant feeding did not produce sporulated cells even after the cells were subsequently kept in the bioreactor and operated in batch mode.
The effect of different modes of bioreactor operation on Bt cultivation for the production of spores with high entomotoxicity towards bagworm has not been comprehensively reported in the literature. In this study, the cultivation performance was evaluated in terms of final cell concentration, percentage of sporulation, δ-endotoxin synthesis, and also its toxicity towards M. plana.
Microorganism
Bacillus thuringiensis MPK13, obtained from the Malaysian Palm Oil Board (MPOB) culture collection, was used in this study (Mazmira et al. 2012). This bacterium was isolated from the gut of the dead larvae of bagworm M. plana through several isolation steps. The isolated bacterium was then grown on nutrient agar and stored at 4 °C as a stock culture (Mazmira et al. 2013).
Media and inoculum preparation
The preferred medium for the cultivation of Bt with high sporulation rate and δ-endotoxin production as described earlier (Içygen et al. 2002; Mazmira et al. 2012) was used in this study. The medium consisted of (NH4)2SO4, 2.0 g/l; K2HPO4.3H2O, 0.5 g/l; MgSO4.7H2O, 0.2 g/l; MnSO4.4H2O, 0.05 g/l; CaCl2.2H2O, 0.08 g/l; and yeast extract, 2.0 g/l. The initial pH was adjusted at 6.5. Glucose at a concentration of 8.0 g/l was added to the basal medium. Glucose needs to be separately sterilized at 110 °C for 10 min before being added to the medium. The feed medium used for fed-batch fermentation was similar to the original medium in all aspects. For inoculum preparation, the Bt colony from the stock was inoculated into 400 ml of sterile nutrient broth in 1 l Erlenmeyer flask. The flask was then incubated at 30 °C in rotary orbital shaker agitated at 150 rpm for 14 h. The culture was then used as a standard inoculum for all cultivations, using a 5-l stirred tank bioreactor.
Stirred tank bioreactor
All modes of cultivation investigated in this study were conducted using a 5-l stirred tank bioreactor (BIOSTAT B-DCU, Sartorius Stedim, Germany). A standard six-bladed Rushton turbine impeller (diameter = 0.05 m) was used for bubble dispersion and mixing, while a ring sparger was used for air sparging. The agitation speed was controlled in the range of 50 to 500 rpm, and the temperature was maintained at 30 °C throughout the cultivations. The control system regulated the mixing speed (50–500 rpm) as well as the stirrer working time. The airflow was set at 1 v/v/m. Silicone KM72FS (Shin-Etsu, Japan) at 10% was used as an antifoam agent. The dissolved oxygen tension (DOT) during the cultivation was regulated by varying the agitation speed. Samples (20 ml) were withdrawn at 4-h intervals to determine the total viable cell count, spore count, sporulation rate, and δ-endotoxin synthesis.
Batch cultivation
Actively growing seed culture was used to inoculate the bioreactor at 11% v/v. The medium (3.6 l) was sterilized at 121 °C and 15 psi for 15 min. Batch cultivation was started by transferring the inoculum into the 5-l bioreactor. The temperature was maintained at 30 °C. The DOT level was controlled at 80% saturation by varying the agitation speed between 50 and 500 rpm using the cascade mode of the DOT control module.
Fed-batch cultivation
The schematic diagram of the equipment setup for fed-batch cultivation is shown in Fig. 1. The initial batch operating conditions for the subsequent fed-batch were the same as for batch cultivation, but the initial culture volume was reduced to 2 l. During fed-batch cultivation, a peristaltic pump (Watson-Marlow 101 U/R, England) was used to feed the fresh substrate into the bioreactor. The fed-batch feeding strategy was modified from the method reported by Rech and Ayub (2007).
Schematic diagram of the equipment setup for fed-batch cultivation
Two types of feeding strategies (constant and intermittent feeding) were applied in fed-batch cultivation. In constant fed-batch cultivations, fresh medium was fed to the bioreactor at a constant rate during three different growth phases: (1) lag growth phase, (2) exponential growth phase, and (3) stationary growth phase. In intermittent fed-batch cultivations, fresh medium was intermittently fed to the bioreactor at two different growth phases: (1) exponential growth phase (6 h of cultivation) and (2) stationary growth phase (24 h of cultivation). Cultivation conditions in all fed-batch were similar to batch cultivations and the DOT was not controlled but monitored throughout the process.
δ-Endotoxin synthesis by Bt could be enhanced in batch cultivation when the DOT was controlled at 80% saturation during the active growth phase and then switched to 40% saturation during the middle of the exponential growth phase. This DOT control strategy was also applied in fed-batch cultivation with feeding at lag and exponential growth phases. In fed-batch cultivation with feeding at lag phase (2 h of cultivation), the DOT was switched from 80 to 40% saturation at 8 h of cultivation. While in fed-batch with medium feeding at exponential growth phase (6 h of cultivation), the DOT was switched from 80 to 40% saturation at 12 h of cultivation.
During the cultivation, culture samples were collected at different time intervals for analysis. The culture samples were serially diluted using 0.85% (v/v) sterilized saline buffer and plated on nutrient agar (NA) plates. The plates were incubated at 30 °C for 48 h, and the number of single colonies that developed was counted and expressed as CFU/ml. For the spore count, the culture samples were heated at 80 °C for 15 min to kill the vegetative cells before being serially diluted and plated on NA plates. The plates were incubated at 30 °C for 48 h, and the number of single colonies that developed was counted and expressed as spores/ml (Thompson and Stevenson 1984).
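As a minimal illustration of the plate-count arithmetic described above, the sketch below converts a colony count into CFU/ml; the colony number, dilution factor, and plated volume in the example are hypothetical and chosen only for illustration.

```python
def cfu_per_ml(colony_count, dilution_factor, plated_volume_ml):
    """Back-calculate colony-forming units per ml of the original culture.

    colony_count      : colonies counted on the plate
    dilution_factor   : overall dilution of the plated sample (e.g. 1e-9)
    plated_volume_ml  : volume spread on the plate, in ml
    """
    return colony_count / (dilution_factor * plated_volume_ml)


# hypothetical example: 140 colonies on a 10^-9 dilution, 0.1 ml plated
print(f"{cfu_per_ml(140, 1e-9, 0.1):.2e}")  # -> 1.40e+12 CFU/ml
```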
SDS-PAGE analysis was conducted using the Laemmli method (Laemmli 1970). The Laemmli system is a discontinuous SDS system and is the most widely used electrophoretic system. The resolution in a Laemmli gel is excellent because the treated peptides are concentrated in a stacking gel before entering the separating gel. To set up two sets of gels for the Hoefer unit, the running gel consisted of 5 ml of monomer solution (A:B), 15 ml of 4× running buffer, 600 μl of 10% SDS, and 29.1 ml of distilled water. The gel solution was degassed under vacuum for 15 min, after which 300 μl of 10% ammonium persulfate and 20 μl of TEMED were added. The ammonium persulfate must be prepared fresh. The running gel solution was poured into the Hoefer unit. The stacking gel contained 2.6 ml of monomer solution (A:B), 5 ml of stacking gel buffer, and 200 μl of 10% SDS. Before the samples were loaded into the gel, an aliquot of 2× treatment buffer was added to each sample and the mixture was incubated in a water bath at 100 °C for 90 s. An aliquot of 80 μl of each sample was loaded into each well of the gel, together with 10 μl of a 10-kDa marker. After the samples were loaded into the wells, the electric current was set at 15 A and the gel was run overnight.
Laboratory bioassay towards Metisa plana
The efficacy of the Bt cells cultivated under different modes of bioreactor operation was evaluated against early instars of M. plana. The spray suspensions were prepared by diluting the Bt culture samples obtained from the cultivation with sterile distilled water. Bioassay samples were taken at 48 h of cultivation from cultures that contained δ-endotoxin. The control treatment was prepared by spraying sterile distilled water on the palm leaves. The Bt suspension was then sprayed uniformly on palm leaves dipped in reverse osmosis water. After the spray had dried, five larvae (early instars) were placed on the sprayed leaves. Each experiment was performed in four replicates. Larval mortality was recorded at 1, 3, 7, and up to 13 days after treatment (DAT). Data on mortality were converted to corrected mortality using Abbott's formula (Abbott 1987):
$$ \mathrm{Corrected\ mortality}=\left[\frac{\%\,\mathrm{treatment}-\%\,\mathrm{control}}{100-\%\,\mathrm{control}}\right]\times 100\% $$
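For clarity, the same correction can be expressed as a one-line function; the percentages used in the example call are hypothetical.

```python
def corrected_mortality(treatment_pct, control_pct):
    """Abbott's formula: corrected mortality (%) from the percentage mortality
    observed in the treatment and in the untreated control."""
    return (treatment_pct - control_pct) / (100.0 - control_pct) * 100.0


# hypothetical example: 80% mortality in the Bt treatment, 5% in the control
print(round(corrected_mortality(80, 5), 1))  # -> 78.9
```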
Batch cultivation of Bt MPK13
The time course of batch cultivation of Bt MPK13 in a 5-l stirred tank bioreactor is shown in Fig. 2. The results showed that high cell growth and sporulation could be obtained during batch cultivation. High cell growth (> 1.0 × 10¹¹ CFU/ml) was achieved as early as 8 h of cultivation. The highest cell count (1.4 × 10¹² CFU/ml) and the highest spore count (4.7 × 10¹¹ spores/ml) were both recorded at 48 h of cultivation. During batch cultivation, the log phase extended from 4 h to 20 h of cultivation, lasting 16 h. The cells started to enter the stationary phase from 24 h of cultivation. The highest sporulation percentage recorded in batch cultivation was 37% (Table 1). Glucose was completely consumed at 28 h of cultivation (Fig. 2); Bt MPK13 cells efficiently consumed glucose to support growth. Maximum cell productivity (72 × 10¹¹ CFU/l/h) and spore productivity (25 × 10¹¹ spores/l/h) were recorded at 48 h of cultivation (Table 1).
Typical time course of batch cultivation of Bacillus thuringiensis MPK13. Black square denotes the total viable cell count (CFU/ml), white triangle spore count, and black triangle glucose concentration
Table 1 Comparison of cell growth, sporulation and δ-endotoxin production by Bacillus thuringiensis MPK13 in batch and fed-batch cultivations
Fed-batch cultivation of Bt MPK13
Feeding during the lag growth phase
The time course of fed-batch cultivation of Bt MPK13, with fresh medium feeding at 2 h of cultivation in a 5-l stirred tank bioreactor, is shown in Fig. 3a. Feeding was initiated at 2 h of cultivation based on the lag phase of the growth profile obtained during batch cultivation. Substrate feeding during the lag phase substantially delayed the exponential phase by approximately 4 h. The highest cell count (14.7 × 10¹¹ CFU/ml) and the highest spore count (3.7 × 10¹¹ spores/ml) in fed-batch cultivation with feeding during the lag phase were recorded at 48 h of cultivation (Table 1). The exponential growth phase in this cultivation lasted for 20 h, compared to 16 h in batch cultivation, as shown in Fig. 3a. The extension of the exponential growth phase may be due to the addition of fresh substrate during the lag growth phase. Glucose in the culture was completely utilized by the cells at 44 h of cultivation. Maximum cell productivity (73 × 10¹¹ CFU/l/h) and spore productivity (19 × 10¹¹ spores/l/h) were recorded at 48 h of cultivation (Table 1).
Typical time course of fed-batch cultivation of Bacillus thuringiensis MPK13 with a constant feeding of fresh medium (a feeding at lag growth phase; b feeding at exponential growth phase; c feeding at stationary growth phase). Black square denotes the total viable cell count (CFU/ml), white triangle spore count, and black triangle glucose concentration
Feeding during exponential growth phase
The highest cell count (15.8 × 10¹¹ CFU/ml) and spore count (3.9 × 10¹¹ spores/ml) were recorded at 40 and 48 h of cultivation, respectively (Table 1). Feeding of glucose during the exponential growth phase extended this phase to 24 h, compared to only 16 h for batch cultivation. As shown in Fig. 3b, glucose in the culture was entirely utilized by the cells after 36 h of cultivation, during the stationary growth phase. The maximum sporulation rate, maximum cell productivity, and maximum spore productivity for this cultivation were 25%, 99 × 10¹¹ CFU/l/h, and 20.3 × 10¹¹ spores/l/h, respectively.
Feeding during stationary growth phase
The time course of fed-batch cultivation of Bt MPK13 with fresh medium feeding at 24 h of cultivation is shown in Fig. 3c. Feeding of glucose during the stationary phase resulted in the highest cell count (16.1 × 10¹¹ CFU/ml) and highest spore count (3.1 × 10¹¹ spores/ml) at 48 h of cultivation. Maximum sporulation (22%) was recorded at 24 h of cultivation (Table 1). The final glucose concentration at 48 h of cultivation was 1.3 g/l (Fig. 3c). The maximum productivities for cells and spores during fed-batch cultivation with feeding during the stationary growth phase were 84 × 10¹¹ CFU/l/h and 16 × 10¹¹ spores/l/h, respectively (Table 1).
Intermittent feeding during log and stationary phase
The time course of intermittent fed-batch cultivation of Bt MPK13 with fresh medium feeding at 6 and 24 h of cultivation is shown in Fig. 4. The highest cell count (17.2 × 10¹¹ CFU/ml) and spore count (2.6 × 10¹¹ spores/ml) were recorded at 48 h of cultivation (Table 1). The glucose concentration at 48 h of cultivation was 1.0 g/l (Fig. 4). The maximum productivities for viable cells and spores in this cultivation were 90 × 10¹¹ CFU/l/h and 13.5 × 10¹¹ spores/l/h, respectively (Table 1).
Typical time course of fed-batch cultivation of Bacillus thuringiensis MPK13 with intermittent feeding of fresh medium (feeding at exponential and stationary growth phase). Black square denotes the total viable cell count (CFU/ml), white triangle spore count, and black triangle glucose concentration
Fed-batch with optimal DOT control strategy
In fed-batch cultivation with fresh medium feeding at the lag growth phase, where the DOT was switched from 80 to 40% at 8 h of cultivation, the highest cell count (14.7 × 10¹¹ CFU/ml) and the highest spore count (6.6 × 10¹¹ spores/ml) were recorded at 48 h of cultivation (Table 2). The highest sporulation percentage (45.9%) was also recorded at 48 h of cultivation. The cell and spore productivities for this cultivation were 77 × 10¹¹ CFU/ml/h and 30 × 10¹¹ spores/ml/h, respectively.
Table 2 Comparison of cell growth, sporulation, and δ-endotoxin production by Bacillus thuringiensis MPK13 in fed-batch cultivation with the optimal DOT control strategy
In addition, in fed-batch cultivation with fresh medium feeding at the exponential growth phase, where the DOT was switched from 80 to 40% at 12 h of cultivation, the highest cell count (14.5 × 10¹¹ CFU/ml) and the highest spore count (7.1 × 10¹¹ spores/ml) were recorded at 48 h of cultivation (Table 2). The highest percentage of sporulation (49.0%) was also recorded at 48 h of cultivation. The cell and spore productivities for this cultivation were 75.5 × 10¹¹ CFU/ml/h and 37 × 10¹¹ spores/ml/h, respectively. Bt MPK13 cells in fed-batch cultivation with feeding at the exponential growth phase and the DOT switched at 12 h had a higher capability to sporulate than cells in fed-batch cultivation with feeding at the lag growth phase and the DOT switched at 8 h of cultivation.
Comparison of cultivation performance in different modes of bioreactor operation
The cultivation performance of Bt MPK13 in different modes of bioreactor operation is presented in Table 1. The lowest cell count (1.4 × 10¹² CFU/ml) and the lowest cell productivity (72 × 10⁸ CFU/l/h) were obtained in batch cultivation, though a considerably high spore productivity was achieved (25 × 10⁸ CFU/l/h). An increase in cell count of about 6% over batch cultivation was obtained in fed-batch cultivation with feeding during the lag growth phase. However, the percentage of sporulation (25%) was lower than that obtained in batch cultivation (37%). In fed-batch cultivation with feeding at the exponential growth phase, a 14% increase in cell count over batch cultivation was recorded. However, the spore count decreased by about 25% compared with that obtained in batch cultivation (Table 1). In fed-batch cultivation with feeding at the stationary growth phase, a 16% increase in cell count was recorded; however, a substantial decrease in spore count (40%) was observed compared with batch cultivation. In addition, glucose was not fully consumed in fed-batch cultivation fed during the stationary growth phase (Fig. 3c), suggesting that glucose was not required for sporulation.
Among all cultivation modes tested in this study, the highest viable cell count (1.7 × 10¹² CFU/ml) was obtained in intermittent fed-batch cultivation. Compared with batch cultivation, an increase of approximately 24% in cell count was recorded in intermittent fed-batch cultivation (Table 1). However, a substantial reduction in spore count (2.6 × 10¹¹ spores/ml) was obtained in this cultivation. Substantial enhancement of the percentage of sporulation was achieved in fed-batch cultivation when the optimal DOT control strategy was applied. The highest sporulation percentage (49%), spore productivity (37 × 10¹¹ spores/l/h), and spore count (7.1 × 10¹¹ spores/ml) were recorded in fed-batch cultivation with medium feeding at the exponential growth phase, where the DOT was switched from 80 to 40% at 12 h of cultivation (Table 2). Fed-batch cultivation of Bt without an appropriate DOT control strategy enhanced cell growth but not the percentage of sporulation.
In the cultivation of Bt MPK13, glucose was identified as the most critical nutrient supporting both viable cell growth and sporulation (Mazmira et al. 2012). Empirical feeding policies have previously been developed to achieve high cell density cultures (Khodair et al. 2008). In this experiment, excess feeding of glucose appeared to decrease sporulation and also to block the synthesis of δ-endotoxin. Intermittent feeding of glucose at the exponential and stationary growth phases, as well as continuous feeding of glucose throughout the cultivation, successfully promoted high cell growth (≥ 1.6 × 10^11 CFU/ml), but sporulation was reduced, with spore counts below 3.5 × 10^11 CFU/ml. Although glucose is crucial for sporulation, a high concentration in the culture may disturb the initiation of the sporulation process. It is well established that sporulation and germination in bacilli depend on the nutritional status of the microorganism (Rajalakshmi and Shethna 1980).
Sporulation and cry protein yields are usually low in fed-batch cultivation (López and de la Torre 2005). Liu et al. (1994) studied the effect of several feeding strategies on vegetative cell growth, spore formation, crystal protein content, carbon dioxide production, and oxygen consumption in fed-batch cultivation of Bt subspecies kurstaki, and found that spores and crystals were not formed in fed-batch cultivation. During fed-batch cultivation, a redirection of bacterial metabolism takes place during feeding.
In Bt, the establishment of a transition state during feeding in fed-batch cultivation has also been reported. The physiological changes indicated that a transition state was set up during feeding, and it appeared to have a negative effect on sporulation and cry gene expression. The reduced spore counts and decreased sporulation percentages observed in this study in fed-batch cultivation with feeding during the stationary phase, with intermittent feeding, and in continuous cultivation could be explained by this mechanism.
Glucose feeding during fed-batch or continuous cultivation also means that glucose is not entirely metabolized in time, which results in the accumulation of organic acids (Wen et al. 2007). Krebs cycle activity then decreases and the cell is unable to produce sufficient ATP, which in turn limits the energy and biosynthetic intermediates available for spore formation (Kim et al. 2003). Nonetheless, fed-batch cultivation of Bt subsp. darmstadiensis 032 with an improved pH and glucose control strategy significantly improved thuringiensin yield (Zhou et al. 2007), although cell growth and sporulation performance were not analyzed. Results from this study demonstrated that the feeding strategy during fed-batch cultivation is crucial and greatly influences the synthesis of δ-endotoxin.
High cell densities are favored in fed-batch and continuous cultivations, but the yields of spores and cry proteins are significantly reduced (Arcas et al. 1987; Liu et al. 1994). The reason why sporulation is affected during medium feeding may be that transition state regulators are overproduced during feeding in fed-batch or continuous cultivation. Catabolite repression is another possible explanation: excess carbon source in the medium not only causes catabolite repression but also represses the expression of the spo0A fusion gene that affects sporulation (Yamashita et al. 1989; Lereclus et al. 2000; Sonenshein 2000). The results obtained indicate that feeding the culture with glucose as the carbon source, in order to match nutrient demand with nutrient availability and thereby obtain high cell densities, was not sufficient to achieve higher sporulation and better δ-endotoxin production.
Synthesis of δ-endotoxin
The synthesis of δ-endotoxin at 48 h of cultivation in batch cultivation and in fed-batch cultivation with feeding at the lag and exponential growth phases is shown in Fig. 5. No δ-endotoxin was detected in intermittent fed-batch cultivation, in fed-batch cultivation with feeding at the stationary growth phase, or in continuous cultivation at any dilution rate. The advantage of batch cultivation can be clearly observed from the time at which δ-endotoxin was produced (28 h of cultivation). The toxin was synthesized 20 h earlier than in fed-batch cultivation with feeding at the lag and exponential growth phases, where δ-endotoxin was only detected at 48 h of cultivation (Table 1). The lack of δ-endotoxin synthesis in the respective cultures corresponded to the lowest spore counts, the low percentage of sporulation, and the substantial reduction in spore count (≥ 40%) compared with batch cultivation. Feeding of glucose at the stationary growth phase, intermittent feeding, and continuous feeding throughout the cultivation appeared to promote high cell growth but to prevent the cells from sporulating. However, with the right DOT control strategy during fed-batch cultivation, enhanced and earlier synthesis of δ-endotoxin was observed. As shown in Fig. 6a, fed-batch cultivation with fresh medium feeding at the lag phase, with the DOT switched from 80 to 40% at 8 h of cultivation, showed early synthesis of δ-endotoxin (24 h of cultivation), while fed-batch cultivation with fresh medium feeding at the exponential growth phase, with the DOT switched from 80 to 40% at 12 h of cultivation, showed thick 130 kD δ-endotoxin bands from 28 to 48 h of cultivation (Fig. 6b).
Fig. 5 SDS-PAGE analysis of δ-endotoxin (130 kD) in Bacillus thuringiensis MPK13 cultures obtained from different modes of cultivation. Cultures collected at 48 h of cultivation were applied to a 10% polyacrylamide gel and stained with Coomassie brilliant blue. M, standard marker; 1, intermittent fed-batch cultivation with feeding at the exponential and stationary growth phases; 2, fed-batch cultivation with feeding at the stationary growth phase; 3, fed-batch cultivation with feeding at the lag growth phase; 4, fed-batch cultivation with feeding at the exponential growth phase; 5, batch cultivation
Fig. 6 SDS-PAGE analysis of δ-endotoxin (130 kD) in cultures of Bacillus thuringiensis MPK13 obtained from (a) fed-batch cultivation with feeding at the lag growth phase, with the DOT switched from 80 to 40% at 8 h, and (b) fed-batch cultivation with feeding at the exponential growth phase, with the DOT switched from 80 to 40% at 12 h. The cultures were applied to a 10% polyacrylamide gel and stained with Coomassie brilliant blue. M, standard marker
The absence of δ-endotoxin in intermittent fed-batch cultivation has also been reported previously: although intermittent fed-batch cultivation enhanced the growth of Bt cells, sporulation and δ-endotoxin synthesis were significantly reduced (Vu et al. 2010). Sasaki et al. (1998) reported that a high cell concentration (16.1 g/l) could be obtained in fed-batch cultivation using sodium acetate–yeast extract (AYE) as a feeding medium, fed twice during the cultivation; nevertheless, a low percentage of sporulation was observed after 55 h of cultivation. In this study, the optimal DOT control strategy was successfully applied in fed-batch cultivation to enhance δ-endotoxin synthesis. Bodizs et al. (2007) reported the importance of cascade DOT regulation in enhancing an industrial pilot-scale fed-batch fungal fermentation. In this study, the change of DOT from a high level (80% saturation) to a low level (60% saturation) promoted the sporulation rate and triggered δ-endotoxin synthesis without significantly affecting cell growth.
Toxicity against Metisa plana
The toxicity of δ-endotoxin obtained from Bt MPK13 cultures was tested against M. plana. The highest corrected mortality at 7 DAT (days after treatment), 80%, and at 14 DAT, 100%, was recorded for the culture obtained from fed-batch cultivation with feeding during the exponential growth phase, where the DOT was switched from 80 to 40% at 12 h of cultivation (Table 3). The culture from fed-batch cultivation with feeding during the lag growth phase, where the DOT was switched from 80 to 40% at 8 h of cultivation, exhibited the second highest corrected mortality (75%) at 7 DAT. For the culture obtained from batch cultivation, the corrected mortality at 7 DAT was 56%. For cultures obtained from fed-batch cultivation without a DOT control strategy, with feeding at either the lag or the exponential growth phase, the corrected mortality at 7 DAT did not exceed 67%. It is important to note that all cultures containing δ-endotoxin exhibited 100% mortality towards M. plana at 14 DAT (Table 3).
Table 3 Corrected mortality of Bacillus thuringiensis MPK13 δ-endotoxin against Metisa plana at 7 and 14 DAT
All cultures that contained δ-endotoxin recorded a high corrected mortality (≥ 55%) towards the bagworm M. plana at 7 DAT. The 100% mortality of the bagworm at 14 DAT after exposure to the δ-endotoxin further confirms the high efficacy of Bt MPK13 against this lepidopteran pest.
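The corrected mortality values reported here are presumably computed with Abbott's correction (Abbott, cited in the reference list), which adjusts observed treatment mortality for mortality in the untreated control. A minimal sketch, with hypothetical input values:

```python
def abbott_corrected_mortality(treated_pct: float, control_pct: float) -> float:
    """Abbott's correction: observed mortality in the treated group adjusted
    for the mortality that also occurs in the untreated control group."""
    return (treated_pct - control_pct) / (100.0 - control_pct) * 100.0

# Hypothetical example: 82% mortality among treated larvae, 10% in controls.
print(round(abbott_corrected_mortality(82.0, 10.0), 1))  # -> 80.0
```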
The results of this study demonstrated that fed-batch cultivation has the potential to increase Bt cell growth beyond that of batch cultivation. However, fed-batch cultivation with feeding during the stationary growth phase, and with intermittent feeding, did not support high spore production, as the system was supplied with the highest concentration of glucose during the cultivation. The synthesis of δ-endotoxin, with a molecular weight of 130 kD, was detected in batch cultivation and in constant fed-batch cultivations with feeding at the lag or exponential growth phase. The performance of fed-batch cultivation with feeding during the lag or exponential growth phase can be enhanced significantly by applying the optimal DOT control strategy.
The datasets used and analyzed during the study are available from the corresponding author on reasonable request.
CFU:
Colony-forming unit
DAT:
Day after treatment
DOT:
Dissolved oxygen tension
EC:
Emulsified concentrate
RPM:
Rotation per minute
Abbott WS (1987) A method of computing the effectiveness of an insecticide. J Am Mosquito Contr. 3(2):302–303
Adams TT, Eiteman MA, Adang MJ (1999) Bacillus thuringiensis subsp. kurstaki spore production in batch culture using broiler litter extracts as complex media. Bioresource Technol 67:82–87
Agaisse H, Lereclus D (1995) How does Bacillus thuringiensis produce so much insecticidal crystal protein. J Bacteriol 177(21):6027–6032
Arcas J, Yantorno O, Ertola R (1987) Effect of high concentration of nutrients on Bacillus thuringiensis cultures. Biotechnol Lett 9:105–110
Aronson AI, Yechiel S (2001) Why Bacillus thuringiensis toxins are so effective: unique features of their mode of action. FEMS Microbiol Lett 195:1–8
Avionone-Rossa C, Mignone C (1993) δ-endotoxin activity and spore production in batch and fed-batch cultures of Bacillus thuringiensis. Biotech Lett 15(3):295–300
Basri MW, Hassan AH, Zulkefli M (1988) Bagworms (Lepidoptera: Psychidae) oil palm in Malaysia. PORIM Occasional Paper 23:37
Basri MW, Siti Ramlah AA, Norman K (1994) Status report on the use of Bacillus thuringiensis in the control of some oil palm pests. Elaeis 6(2):82–101
Bodizs L, Titica M, Faria N, Srinivasan B, Dochain D, Bonvin D (2007) Oxygen control for an industrial pilot-scale fed-batch filamentous fungal fermentation. J Process Control 17:595–606
Brian JW, Norman K (2019) Bagworm (Lepidoptera: Psychidae) infestation in the centennial of the oil palm industry a review of causes and control. J Oil Palm Res 31(3):364–380 https://doi.org/10.21894/jopr.2019.0032
Içygen Y, Içygen B, Őzcengiz G (2002) Regulation of crystal protein biosynthesis by Bacillus thuringiensis: I. Effects of mineral elements and pH. Res Microbiol 153:599–604
Kang BC, Lee SY, Chang HN (1993) Production of Bacillus thuringiensis spores in total cell retention culture and two-stage continuous culture using an internal ceramic filter system. Biotech Bioengineering 42:1107–1112
Khodair TQ, Abdelhafez AAM, Sakr HM, Ibrahim MMM (2008) Improvement of Bacillus thuringiensis bioinsecticides production by fed-batch culture on low-cost, effective medium. Res J Agric Biol Sci 4(6):923–935
Kim HJ, Kim SS, Ratnayake-Lecamwasam M, Tachikawa K, Sonenshein AL, Strauch M (2003) Complex regulation of the Bacillus subtilis aconitase gene. J Bacteriol 185:1672–1680
Krause M, Ukkonen K, Haataja T, Ruottinen M, Glumoff T, Neubauer A, Neubauer P, Vasala A (2010) A novel fed-batch based cultivation method provides high cell-density and improves yield of soluble recombinant proteins in shaken cultures. Microb Cell Fact 9(11):1475–2859
Laemmli UK (1970) Cleavage of structural proteins during the assembly of the head of bacteriophage T4. Nature. 227(5259):680–685
Lereclus D, Argaisse H, Grandvalet C, Gominent M (2000) Regulation of toxin and virulence gene transcription in Bacillus thuringiensis. Int J Med Microbiol 29:295–299
Liu WF, Bajpal R, Binary V (1994) High-density cultivation of spore formers. Ann NY. Acad Sci 721:310–325
López LVE, de la Torre M (2005) Redirection of metabolism during nutrient feeding in fed-batch cultures of Bacillus thuringiensis. Appl Microbiol Biotechnol 67:254–260
Mazmira MMM, Ramlah SAA, Rosfarizan M, Ling TC, Ariff AB (2012) Effect of saccharides on growth, sporulation rate and δ-endotoxin synthesis of Bacillus thuringiensis. Afr J Biotechnol 11(40):9654–9663
Mazmira MMM, Ramlah SAA, Rosfarizan M, Ling TC, Ariff AB (2013) Relationship between total carbon, total nitrogen and carbon to nitrogen ratio on growth, sporulation rate and delta-endotoxin synthesis of Bacillus thuringiensis. Minerva Biotecnologica 25(4):219–225
Mazmira MMM, Siti Ramlah AA, Najib MA, Norman K, Kushairi AD, Basri MW (2010) Integrated pest management (IPM) of bagworms in Southern Perak via aerial spraying of Bacillus thuringiensis. Oil Palm Bull 63:24–33
Noorhazwani K, Siti Ramlah AA, Mazmira MMM, Najib MA, Che Manan CAH, Norman K (2017) Controlling Metisa plana Walker (Lepidoptera: Psychidae) outbreak using Bacillus thuringiensis at an oil palm plantation in Slim River, Perak, Malaysia. J Oil Palm Res 29(1):47–54
Norman K, Mazmira MM (2019) Industry-wide efforts in circumventing the scourge of bagworm infestation in Malaysia - what have gone wrong and what should be done? Planter 95(1118):321–333
Rajalakshmi S, Shethna YI (1980) Spore and crystal formation in Bacillus thuringiensis during growth in cystine and cysteine. J Bioscience 2(4):321–328
Ramlah SAA, Norman K, Basri MW, Najib MA, Mazmira MMM, Kushairi AD (2007) Sistem pengurusan perosak bersepadu bagi kawalan ulat bungkus di ladang sawit. MPOB publication 28.
Rech R, Ayub MAZ (2007) Simplified feeding strategies for fed-batch cultivation of Kluyveromyces marxianus in cheese whey. Process Biochemistry 42:873–877
Rowe GE, Margaritis A (1987) Bioprocess developments in the production of bioinsecticides by Bacillus thuringiensis. CRC Crit Rev Biotechnol. 6:87–127
Sarrafzadeh MH, Belloy L, Esteban G, Navarro JM, Ghonmidh C (2005) Dielectric monitoring of growth and sporulation of Bacillus thuringiensis. Biotech Lett 27:511–517
Sasaki K, Jiaviriyaboonya S, Rogers PL (1998) Enhancement of sporulation and crystal toxin production by corn steep liquor feeding intermittent fed-batch culture of Bacillus sphaericus 2362. Biotech Lett 20(2):165–168
Sonenshein AL (2000) Control of sporulation initiation in Bacillus subtilis. Curr Opin Microbiol. 3:561–566
Stanbury PF, Whitaker PF, Hall SJ (2003) Principles of fermentation technology, 2nd edn. Butterworth, Heinemann
Tey CC, Cheong YL (2013) Challenges in integrated pest management (IPM). Proc. of 10th NATSEM 2013 - confronting management challenges in the oil palm industry. Incorporated Society of Planters, Kuala Lumpur, pp 117–127
Thompson PJ, Stevenson KE (1984) Mesophilic spore-forming aerobes. In: Speck M (ed) Compendium of methods for the microbiological examination of foods. American Public Health Association, Washington, pp 211–220
Vu KD, Tyagi RD, Valero JR, Surampalli RY (2010) Batch and fed-batch fermentation of Bacillus thuringiensis using starch industry wastewater as fermentation substrate. Bioprocess Biosyst Eng 33(6):691–700
Warren B, David BL, Nazim C (2018) Bioreactor operating strategies for improved polyhydroxyalkanoate (PHA) productivity. Polymers 10:1197 https://doi.org/10.3390/polym10111197
Wen ZJ, Fei CY, Hong XZ, Niu YZ, Wen CS (2007) Production of thuringiensis by fed-batch culture of Bacillus thuringiensis subsp. darmstadiensis 032 with an improved pH-control glucose feeding strategy. Process Biochem 42:52–56
Yamashita S, Kawamura F, Yoshikawa H, Takahashi H, Kobayashi Y, Saito H (1989) Dissection of the expression signals of the SpoOA gene of Bacillus subtilis: glucose represses sporulation specific expression. J Gen Microbiol 135(5):1335–1345
Yury VM, Anton AN, Kirill SA (2019) Repertoire of the Bacillus thuringiensis virulence factors unrelated to major classes of protein toxins and its role in the specificity of host-pathogen interactions. Toxins 11:347 https://doi.org/10.3390/toxins11060347
Zhou JW, Cheng YF, Xu ZH, Yu ZN, Chen SW (2007) Production of thuringiensis by fed-batch culture of Bacillus thuringiensis subsp. darmstadiensis 032 with an improved pH control feeding strategy. Process Biochem 42:52–56
The authors would like to thank the Director General of Malaysian Palm Oil Board for the permission to publish this article. We greatly appreciate all the staff from Microbial Technology and Engineering Center (MICROTEC), especially to Mr. Zamri Daud and Mr. Aminshah Abd Aziz for their valuable assistance. We would also like to thank Dr. Norman Kamarudin for his comments and support for this study.
The study was supported financially by MPOB.
Applied Entomology and Microbiology Unit, Biological Research Division, Malaysian Palm Oil Board, 43000, Kajang, Selangor, Malaysia
Mohamed Mazmira Mohd Masri
Department of Bioprocess Technology, Faculty of Biotechnology and Biomolecular Sciences, Universiti Putra Malaysia, UPM, 43400, Serdang, Selangor, Malaysia
Arbakariya Bin Ariff
Search for Mohamed Mazmira Mohd Masri in:
Search for Arbakariya Bin Ariff in:
MMMM and ABA designed the experiment. MMMM conducted the experiment and drafted the manuscript. ABA helped in data analysis and added inputs in the drafted manuscript. Both authors read and approved the final manuscript.
Correspondence to Mohamed Mazmira Mohd Masri.
Masri, M.M.M., Ariff, A.B. Effect of different fermentation strategies on Bacillus thuringiensis cultivation and its toxicity towards the bagworm, Metisa plana Walker (Lepidoptera: Psychidae). Egypt J Biol Pest Control 30, 2 (2020). https://doi.org/10.1186/s41938-020-0204-y
DOI: https://doi.org/10.1186/s41938-020-0204-y
Bacillus thuringiensis
Fed-batch
Sporulation rate
Metisa plana
What can we learn from China's health insurance reform to improve the horizontal equity of healthcare financing?
Fan Yang1,2,
Mingsheng Chen1,2 &
Lei Si3,4
Universal health coverage is a challenge to horizontal equity in healthcare financing. Since 1998, China has extended its healthcare insurance schemes, and individuals with equal incomes but different attributes such as social status, profession, geographic access to health care, and health conditions, are covered by the same health insurance scheme. This study aims to examine horizontal inequity in the Chinese healthcare financing system in 2002 and 2007 using data from two national household health surveys.
Multi-stage stratified random sampling was used to select 3,946 households with 13,619 individuals in 2002, and 3,958 households with 12,973 individuals in 2007. A decomposition method was used to measure the horizontal inequity and reranking in healthcare finance.
Over the period 2002–2007, the absolute value of horizontal inequity in total healthcare payments decreased from 997.83 percentage points to 199.87 percentage points in urban areas, and increased from 22.28 percentage points to 48.80 percentage points in rural areas. The horizontal inequity in social health insurance remained almost the same in urban areas, at around 27 percentage points, but decreased from 110.90 percentage points to 7.80 percentage points in rural areas. Horizontal inequity in out-of-pocket payments decreased from 178.43 percentage points to 80.96 percentage points in urban areas, and increased from 26.06 percentage points to 41.40 percentage points in rural areas.
The horizontal inequity of healthcare finance in China over the period 2002–2007 was reduced by general taxation and social insurance, but strongly affected by out-of-pocket payments. Increasing the benefits from social health insurance would help to reduce horizontal inequity.
Since 1978, the planned economic model has gradually been replaced by a market-oriented model in China, and the influence of the market model has now reached every corner of Chinese society. In urban areas, the traditional health insurance plans, the Free Medical Service and the Labor Medical Service, collapsed under the financial pressure of paying medical bills. In rural areas, the Cooperative Medical Scheme (CMS), which used to provide health cover for rural residents, rapidly collapsed as a result of rural economic reform and the implementation of the household contract responsibility system at the beginning of the 1980s [1].
China began to redevelop its health insurance schemes at the end of 1998 (Table 1). The Urban Workers Basic Medical Insurance (UWBMI) covers public sector workers and retirees in urban areas (for example, from government departments and state-owned and collectively-owned enterprises). Both employers and employees pay contributions to UWBMI, of about 6%–8% and 0%–2% of the employee's salary, respectively. UWBMI was gradually extended to cover non-public sector workers, including migrant workers and workers in private enterprises, private non-enterprise units, social organizations, and foreign-invested enterprises. UWBMI coverage increased from 30.4% of urban residents in 2002 to 44.2% in 2007 [1, 2]. In 2003, the New Cooperative Medical Scheme (NCMS) was launched as an initiative to rebuild health insurance and overhaul the healthcare system in rural areas, following the gradual dissolution of CMS from the late 1980s. Since its formation, China's authorities have provided additional public spending for NCMS, which has achieved a high level of rural coverage, with the percentage of rural residents insured increasing from 9.5% in 2002 to 89.7% in 2007 [1, 2] (see Table 1).
Table 1 Changes in China's social health insurance schemes between 2002 and 2007
The healthcare financing system has therefore been extensively reformed and the population coverage of health insurance schemes has considerably improved. There is, however, a potential threat to the horizontal equity of healthcare finance. Financing equity in healthcare depends on both vertical and horizontal equity [3, 4]. Vertical equity implies that people with greater economic ability ought to pay more, and horizontal equity implies that people with equal economic ability should pay the same. Vertical equity and its progressivity have been studied by many researchers [5,6,7,8], but very few have examined horizontal equity in healthcare financing [4, 9,10,11]. Horizontal inequity is increasingly recognized as an important component of the healthcare financing system, and plays a key role in adjusting the economic rank order of the general population when a relatively high healthcare payment occurs. China's health insurance schemes have expanded health coverage to individuals with different socioeconomic status. Within each socioeconomic group, individuals may also live in either urban or rural areas, and have different health conditions, social status, and access to healthcare. Much uncertainty still exists about the relation between horizontal inequity and the expansion of health insurance schemes. This study therefore aimed to examine the horizontal equity of healthcare finance in 2002 and 2007 in one province of China, to shed new light on how well different forms of healthcare financing performed after the reform of China's health insurance system.
The unit of analysis was the household. Two rounds of household survey were conducted in China's Gansu province, in 2003 and 2008. These surveys recorded basic household information and the healthcare use of household members in 2002 and 2007. Gansu is located in the northwest of China and is an impoverished province with a population of more than 26 million [12]. The survey randomly collected data from 13 counties or county-level cities using multi-stage stratified random sampling. Eight communities or administrative villages were sampled in each city or county. About 30 households from each community or administrative village were then randomly sampled, giving a total of 3,946 households (1,974 urban and 1,972 rural) containing 13,619 individuals (5,880 urban and 7,739 rural) in 2003, and 3,958 households (1,979 urban and 1,979 rural) containing 12,973 individuals (5,581 urban and 7,392 rural) in 2008 (see Tables 2 and 3).
Table 2 Descriptive statistics and socioeconomic characteristics for the urban sample
Table 3 Descriptive statistics and socioeconomic characteristics for the rural sample
The survey was administered through household interviews. All household members aged 15 years or more were interviewed by trained data collectors in each sampled household; incapacitated people and children under 15 years old were interviewed through their guardians. The face-to-face interviews used a structured questionnaire containing questions on households' demographic and socioeconomic characteristics, including household expenditure, number of family members, urban–rural classification, and the gender, age, educational attainment, and employment type of household members. Monthly household expenditure on housing, food, water, transport, electricity, clothing, communications, education, fuel, entertainment, travel, healthcare, and other items was recorded for the previous 12-month period. Per capita household expenditure adjusted by adult equivalence (AE) was used as the measure of living standard in our study [13]. Household expenditure was obtained from the household head or the most suitable household member. Healthcare expenditure was obtained from interviewees' medical records.
China's healthcare system is financed through general taxation, social health insurance schemes, commercial health insurance schemes, and out-of-pocket (OOP) payments. Healthcare payments were computed from two sources: the household survey, and tax information and copayments for social health insurance, which were collected from the local statistical yearbook. General taxation is an important financing source for healthcare in China, and comes from a range of sources including excise on eating, drinking, and accommodation; cigarettes, alcohol, gas, electricity, and entertainment; and other consumption taxes. Tax tariffs were collected from the China Price Statistical Yearbook [14], and general taxation was approximated by applying these tariffs to the corresponding expenditure data collected in the survey. There are no taxes specifically earmarked for health in China, so we assumed that the proportion of general taxation going to the health sector could be calculated on a pro-rata basis. In 2002, tax-funded expenditure was 79.97% of government expenditure, and government expenditure on health was 4.12% of general government expenditure; the proportion of household tax payments going into the health sector was therefore assumed to be 3.29%. In 2007, this proportion was estimated at 4.75% [16]. The UWBMI financing contribution was measured by applying the contribution rates to the earnings of insured workers; the contribution rates for UWBMI were collected from the Gansu Statistical Yearbook [12]. Flat-rate contributions were directly recorded during household interviews for those covered by CMS and NCMS. Private health insurance payments were obtained directly from the household interview. The inquiry into OOP payments covered healthcare expenditure on prescriptions and outpatient care paid by individuals during the 2 weeks before the household interview, and inpatient care paid by individuals during the previous 12 months.
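The pro-rata assumption can be made concrete as the product of the tax-funded share of government expenditure and the health share of government expenditure, which reproduces the 3.29% figure quoted above for 2002. A minimal sketch (the helper name is ours, not the authors'):

```python
def health_share_of_tax(tax_funded_share: float, health_share_of_govt: float) -> float:
    """Pro-rata share of household tax payments attributed to the health sector."""
    return tax_funded_share * health_share_of_govt

# 2002: tax-funded expenditure = 79.97% of government expenditure;
# government health spending = 4.12% of general government expenditure.
print(f"{health_share_of_tax(0.7997, 0.0412):.2%}")  # ~3.29%
```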
The unit of analysis for healthcare finance is the household, with expenditures and healthcare payments aggregated to the household level. Household expenditure is used as the measure of ability to pay, adjusted for household size and composition to obtain an adult-equivalent estimate. The number of adult-equivalent household members is defined as
$$\mathrm{AE} = (\mathrm{A} + 0.5\mathrm{K})^{0.75}$$
where A is the number of adults (> 14 years) in the household and K the number of children (0–14 years) [17].
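A minimal sketch of this adjustment, with hypothetical household figures (the function names are ours):

```python
def adult_equivalents(n_adults: int, n_children: int) -> float:
    """Adult-equivalent household size: AE = (A + 0.5K)**0.75,
    where A = members aged over 14 and K = members aged 0-14."""
    return (n_adults + 0.5 * n_children) ** 0.75

def expenditure_per_ae(household_expenditure: float, n_adults: int, n_children: int) -> float:
    """Household expenditure per adult equivalent, the living-standard measure used here."""
    return household_expenditure / adult_equivalents(n_adults, n_children)

# Hypothetical household: 2 adults, 2 children, annual expenditure of 24,000.
print(round(expenditure_per_ae(24_000, 2, 2), 1))  # ~10528.6
```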
Contributions toward the finance of health care may redistribute disposable income of households. The types of redistribution include vertical redistribution and horizontal redistribution. The former occurs when healthcare payments are disproportionately related to ability to pay. The latter occurs when persons with equal ability to pay contribute unequally to healthcare payments. Together with reranking, vertical and horizontal redistribution are generally defined as the redistributive effect (RE) [17]. Vertical equity implies that people having greater ability to pay ought to pay more and horizontal equity implies that people with equal ability to pay should pay the same. Reranking occurs when people change rank order before and after healthcare payments.
In 1994, Aronson, Johnson and Lambert provided a decomposition method to measure the RE of income tax [3]. Later, Wagstaff and van Doorslaer applied this Aronson–Johnson–Lambert (AJL) decomposition method to decompose the change in income inequality caused by healthcare financing into a vertical, horizontal and reranking effect [4]. The extent of vertical equity, horizontal inequity and reranking calculations are usually expressed as percentages of the total RE.
The RE of healthcare finance can be calculated as the difference in the Gini coefficient caused by the healthcare payment:
$$RE\equiv G^X-G^{X-P}$$
where \({G}^{X}\) and \({G}^{X-P}\) are the pre-payment and post-payment Gini coefficients, \(X\) denotes pre-payment income, or more generally some measure of ability to pay [18], and \(P\) denotes the healthcare payment. The AJL decomposition method shows that this difference can be expressed as:
$$RE=V-H-R$$
The first term, V, measures the inequality reduction that would have been obtained if there had been no differential tax treatment. The second term, H, measures the extent of classical horizontal inequity. The third term, R, measures the extent of reranking in the move from the pre-payment distribution to the post-payment distribution, by comparing the post-payment Gini coefficient with the post-payment concentration index. If there is no reranking, R is zero.
Horizontal inequity H is measured by the weighted sum of the group-specific post-payment Gini coefficients, \({G}_{j}^{X-P}\), where the weight for group j, \({a}_{j}\), is the product of the group's population share and its post-payment income share.
$$H=\sum\limits_ja_jG_j^{X-P}$$
R captures the extent of reranking of households that occurs in the move from pre-payment to post-payment income distributions. It is measured as the difference between the post-payment Gini coefficient \({G}^{X-P}\) (which ranks households by post-payment income) and the post-payment concentration index \({C}^{X-P}\) (which ranks households by their pre-payment income):
$$R={G}^{X-P}-{C}^{X-P}$$
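A minimal numerical sketch of this decomposition is given below. It assumes unweighted household data and that equal-ability groups have already been formed (in practice, households with near-equal pre-payment income are banded together); the estimation details used in the paper may differ, so this is an illustration of the identity RE = V − H − R rather than a reproduction of the authors' code.

```python
import numpy as np

def _conc(x, rank_by):
    """Concentration index of x with households ranked by `rank_by`;
    when rank_by is x itself this reduces to the Gini coefficient."""
    x = np.asarray(x, dtype=float)
    order = np.argsort(np.asarray(rank_by, dtype=float), kind="mergesort")
    x = x[order]
    n = len(x)
    frac_rank = (np.arange(1, n + 1) - 0.5) / n
    mu = x.mean()
    if mu == 0:
        return 0.0
    return 2.0 * np.mean(x * (frac_rank - frac_rank.mean())) / mu

def gini(x):
    return _conc(x, x)

def ajl_decomposition(prepay, payment, group):
    """Aronson-Johnson-Lambert decomposition RE = V - H - R."""
    prepay = np.asarray(prepay, dtype=float)
    postpay = prepay - np.asarray(payment, dtype=float)

    RE = gini(prepay) - gini(postpay)
    # Reranking: post-payment Gini minus the post-payment concentration
    # index computed with households kept in pre-payment rank order.
    R = gini(postpay) - _conc(postpay, rank_by=prepay)

    # Horizontal inequity: weighted sum of within-group post-payment Ginis;
    # weight = group population share x group post-payment income share.
    group = np.asarray(group)
    H = 0.0
    for g in np.unique(group):
        m = group == g
        H += m.mean() * (postpay[m].sum() / postpay.sum()) * gini(postpay[m])

    V = RE + H + R  # vertical effect recovered from the identity
    return {"RE": RE, "V": V, "H": H, "R": R}

# Hypothetical illustration: six households in three equal-income bands.
prepay = [100, 100, 200, 200, 400, 400]
payment = [10, 20, 20, 30, 30, 60]
print(ajl_decomposition(prepay, payment, group=[0, 0, 1, 1, 2, 2]))
```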
Tables 4, 5, 6 and 7 present the results of horizontal inequity and reranking of healthcare financing sources, and the distribution of healthcare financing sources across equivalent income deciles.
Table 4 Horizontal inequity and reranking of the Chinese urban health care financing system in 2002
Table 5 Horizontal inequity and reranking of Chinese rural health care financing system in 2002
Table 6 Horizontal inequity and reranking of Chinese urban health care financing system in 2007
Urban areas in 2002
Table 4 shows that 15.15% of household expenditure went on payments for healthcare. RE was negative (− 0.000454) for the overall healthcare financing system, suggesting that the redistribution favored wealthier households (pro-rich). The system would have been 2364.02% less redistributive without differential treatment, with 997.83 percentage points resulting from horizontal inequity and 1366.18 percentage points from reranking. General tax showed a slightly pro-rich structure because its RE value was negative (− 0.000008); it would have been 248.01% less redistributive without differential treatment, which was wholly due to horizontal inequity. RE for social health insurance was positive (0.000901), showing a pro-poor redistribution; it would have been 30.57% more redistributive without differential treatment, the majority (27.36 percentage points) being the result of horizontal inequity. Commercial health insurance had a pro-poor effect because its RE value was positive (0.000054); the redistribution would have been 1925.36% greater without differential treatment, with 936.09 percentage points from horizontal inequity and 989.25 percentage points from reranking. RE for OOP payments (− 0.002204) was negative, suggesting a pro-rich redistribution; it would have been 433.95% less redistributive without differential treatment, with 178.43 percentage points from horizontal inequity and 255.51 percentage points from reranking.
Rural areas in 2002
As shown in Table 5, healthcare payments made up 16.65% of household expenditure. The healthcare financing system showed a pro-rich redistribution, with a negative value for RE (− 0.009877). Horizontal inequity accounted for 22.28 percentage points, and reranking for 44.85 percentage points; that is, the healthcare financing system did not treat households with equal household expenditure equally, and households were also reranked after healthcare payments. The system would have been 67.13% less redistributive without differential treatment. General tax showed a slightly regressive structure because its RE value was negative (− 0.000087); it would have been 255.21% less redistributive without differential treatment, solely as a result of horizontal inequity. Social health insurance had a negative RE (− 0.000080), implying that it was pro-rich; it would have been 110.90% less redistributive without differential treatment, again solely as a result of horizontal inequity. Commercial insurance also had a pro-rich effect, with a negative RE (− 0.000412); it would have been 105.58% less redistributive without differential treatment, with 27.62 percentage points from horizontal inequity and 77.96 percentage points from reranking. RE for OOP payments (− 0.008965) was negative, suggesting a pro-rich redistribution; it would have been 70.22% less redistributive without differential treatment, with 26.06 percentage points from horizontal inequity and 44.16 percentage points from reranking.
Table 6 shows that healthcare payments made up 20.10% of urban household expenditure in 2007. The value of RE for total healthcare payments was negative (− 0.001925), showing a pro-rich redistribution. It would have been 831.83% less redistributive without differential treatment, with 199.87 percentage points from horizontal inequity and 631.96 percentage points from reranking. General tax was slightly pro-rich; it would have been 25.83% less redistributive without differential treatment, solely as a result of horizontal inequity. Social health insurance had a pro-poor redistribution, and would have been 37.80% more redistributive without differential treatment, with 26.94 percentage points from horizontal inequity and 10.85 percentage points from reranking. Commercial health insurance, however, had a pro-rich redistribution and would have been 115.82% less redistributive without differential treatment; horizontal inequity accounted for 34.56 percentage points of the RE, and reranking for 81.26 percentage points. A much higher degree of differential treatment occurred in OOP payments, which would have been 344.38% less redistributive without differential treatment, with 80.96 percentage points from horizontal inequity and 263.42 percentage points from reranking.
Table 7 shows that 16.68% of rural household expenditure in 2007 went on healthcare payments. The value of RE for total healthcare payments was negative (− 0.011796), showing that the healthcare financing system was again pro-rich. It would have been 149.80% less redistributive without differential treatment, with 48.80 percentage points from horizontal inequity and 101.00 percentage points from reranking. General tax had a slightly pro-rich redistribution; it would have been 63.36% less redistributive without differential treatment, solely as a result of horizontal inequity. Social health insurance was pro-rich, and would have been 11.91% less redistributive without differential treatment, with 7.80 percentage points from horizontal inequity and 4.11 percentage points from reranking. Commercial health insurance had a pro-poor redistribution that would have been 127.61% greater without differential treatment, with 44.79 percentage points from horizontal inequity and 82.82 percentage points from reranking. OOP payments had a pro-rich redistribution that would have been 123.24% less redistributive without differential treatment, with 41.40 percentage points from horizontal inequity and 81.84 percentage points from reranking.
Horizontal inequity of general taxation decreased in both urban and rural areas from 2002 to 2007, indicating that it has gradually become a comprehensive tool of income redistribution in the general population. In the 1990s, the tax-levying system was not well-developed and tax avoidance was quite common in China [19]. This resulted in different financing burdens among individuals with the same income. The extent of horizontal inequity in general taxation was therefore comparatively high in 2002. In the 2000s, with improved information technology, the Chinese government enhanced tax supervision by introducing a number of tax avoidance countermeasures, such as identity certification, tax withholding and remitting, and non-cash transactions on block trading [20]. Consequently, the financing burden was equalized among people with the same income levels. The extent of horizontal inequity of general taxation was therefore significantly reduced by 2007.
Horizontal inequity was largely unchanged for social health insurance in urban areas over the period 2002–2007. Over this period, the social health insurance provider was UWBMI in urban areas. UWBMI is managed and operated by local governments, which organize universal health insurance for urban workers. The premium is jointly funded by employers and employees, and the funding amount depends on the individual's age. Generally, employers contribute 6–8% of employees' salaries for workers under 45 years old and 8–10% for those aged 45 or over. The employees themselves contribute 2% of their salaries [21]. Insured employees therefore pay a fixed proportion of their salary as premium, but this proportion varies slightly by age and region. More types of workers were gradually covered by UWBMI over the period, including migrant workers and those in private enterprises, social organizations, and foreign-invested enterprises, but the premium-setting policy was the same for everyone. This resulted in similar financing contributions from people with the same income levels. Horizontal inequity of UWBMI came from the different ages and regions involved, and was therefore both stable and at an acceptable level.
Horizontal inequity dramatically decreased for social health insurance in rural areas during the period 2002–2007. In 2002, the social health insurance provider was the remaining CMS, which was replaced by NCMS from 2003. Both had flat-rate contribution schemes, so the financing contribution was the same in absolute terms for all insured individuals. The extent of horizontal inequity of CMS was quite high in 2002 because coverage was less than 10% (Table 1), and the horizontal inequity stemmed from the discrepancy between the covered and the uncovered. By 2007, NCMS coverage was nearly 90%, and horizontal inequity therefore decreased significantly over the period. Commercial health insurance did not play an important role in the healthcare financing system because China's authorities decided to achieve universal health coverage through social health insurance [22, 23]. Commercial insurance therefore accounted for only approximately 3% of total healthcare payments over the last decade [24]. Insured individuals purchased different types of insurance from different providers, and the horizontal inequity was comparatively high.
During the period 2002–2007, horizontal inequity in OOP payments dramatically decreased in urban areas but slightly increased in rural areas. OOP payments are post-paid, and the change in their horizontal equity may be explained by the pre-paid payments, such as general taxation and social and commercial health insurance. In urban areas, horizontal inequity in OOP payments was reduced by the tax avoidance countermeasures, UWBMI's premium-setting policy, and the decreasing horizontal inequity in commercial health insurance. In rural areas, horizontal inequity in general taxation and social health insurance both decreased, but horizontal inequity in OOP payments was affected by the increase in horizontal inequity of commercial health insurance, which resulted in a slight increase in horizontal inequity of OOP payments. In both urban and rural areas, the extent of reranking of OOP payments increased over the period. This suggests that the rank order of individuals who paid for medical care through OOP payments fell significantly, and some even dropped below the poverty line.
Policymakers in China took the expansion of health insurance schemes seriously: UWBMI and NCMS were either established or extended during this period. This was expected to decrease the heavy dependence on OOP payments and reduce their adverse impact on household income. The expansion of health insurance schemes was designed not only to improve access to basic medical care, but also to provide adequate and effective financial protection. However, social health insurance focused on ensuring wide population coverage rather than depth of risk pooling.
Health insurance was administered and implemented at county level. The county government's top priority was fund security, and a deficit in fund pooling was not encouraged. This resulted in a very strict compensation policy for insured patients. For example, only services from contracted hospitals and pharmacies were eligible for reimbursement. Reimbursement depended on the provincial health insurance list, but many types of medicines and medical services, especially the more expensive items, were not covered. Patients had to pay for these medicines and medical care. Even the costs of the medical services in the list were not fully reimbursed. It was found that the level of NCMS deductibles was low, but so were the co-payment rate and ceiling [25]. The costs of catastrophic illness requiring hospitalization were often not reimbursed because of underfunding [26, 27]. As a result, many urban and rural residents still faced high economic risk of diseases and dramatic changes in household economic rank were unavoidable following high healthcare payments. OOP payments as a fraction of income (g) were far larger than all other healthcare payments. This indicated that the impact of horizontal inequity in OOP payments was much larger than in other healthcare payments. The horizontal inequity of the total healthcare payments was largely dominated by OOP payments.
Between 2002 and 2007, therefore, horizontal inequity and reranking decreased in urban areas and increased in rural areas. In cities, horizontal inequity of social health insurance stayed broadly the same, but there was a significant decrease in horizontal inequity of commercial health insurance, indicating that more and more urban residents chose social health insurance. The tax-levying policy also decreased the horizontal inequity in general taxation. Together, these changes produced the decreasing level of horizontal inequity in total health finance in urban areas. In rural counties, the horizontal inequity of general taxation and social health insurance also decreased significantly, but OOP payments were the driving force behind an overall increase in horizontal inequity of total health finance. This result indicates that, although NCMS coverage reached nearly 90% between 2002 and 2007, horizontal inequity of total healthcare finance was not reduced, and suggests that population coverage is just one dimension of the expansion of health insurance coverage; improving cost coverage and service coverage are also key elements in the reform of health insurance schemes. Designing a rational financing mechanism for individuals within and between income groups can, however, help to reduce horizontal inequity.
Some limitations of our study must be acknowledged. A limitation was that the data were collected from a single province in China, and the results obtained did not entirely represent the characteristics of national healthcare financing. Another limitation was that, as with other cross-sectional studies, we cannot conclude that the observed changes in horizontal inequity of healthcare financing had been caused by health insurance reform. Other uncontrollable factors would affect financing equity, such as regional economic development, health literacy, and quality of health technology.
Social health insurance schemes may be best funded through pro rata contributions, rather than flat rate contributions. General taxation and social insurance reduced the horizontal inequity of healthcare finance in China, but it was still strongly affected by OOP payments. Increasing the benefits package of social health insurance would be helpful to reduce the horizontal inequity of healthcare finance still further.
The datasets used in the current study are not publicly available due to the confidential policy but are available from the corresponding author on reasonable request.
AE:
Adult equivalence
AJL:
Aronson–Johnson–Lambert
CMS:
Cooperative Medical Scheme
NCMS:
New Cooperative Medical Scheme
NHSS:
National Health Services Survey
OOP:
Out-of-pocket payment
RE:
Redistributive effect
UWBMI:
Urban Workers Basic Medical Insurance
China's Ministry of Health. An analysis of the third national health services survey. Beijing: Chinese Union Medical University Press; 2004.
China's Ministry of Health. An analysis of the fourth national health services survey. Beijing: Chinese Union Medical University Press; 2009.
Aronson JR, Johnson P, Lambert PJ. Redistributive effect and unequal income tax treatment. Econ J. 1994;104:262–70.
Wagstaff A, van Doorslaer E. Progressivity, horizontal equity and reranking in health care finance: a decomposition analysis for The Netherlands. J Health Econ. 1997;16:499–516.
Chen M, Chen W, Zhao Y. New evidence on financing equity in China's health care reform–a case study on Gansu province, China. BMC Health Serv Res. 2012;12:466.
Elwell-Sutton TM, Jiang CQ, Zhang WS, Cheng KK, Lam TH, Leung GM, Schooling CM. Inequality and inequity in access to health care and treatment for chronic conditions in China: the Guangzhou Biobank Cohort Study. Health Policy Plan. 2013;28:467–79.
Chen M, Zhao Y, Si L. Who pays for health care in China? The case of Heilongjiang province. PLoS One. 2014;9:e108867.
Yang W. China's new cooperative medical scheme and equity in access to health care: evidence from a longitudinal household survey. Int J Equity Health. 2013;12:20.
Gerdtham UG, Sundberg G. Redistributive effects of Swedish health care finance. Int J Health Plann Manage. 1998;13:289–306.
Bilger M. Progressivity, horizontal inequality and reranking caused by health system financing: a decomposition analysis for Switzerland. J Health Econ. 2008;27:1582–93.
Cavagnero E, Bilger M. Equity during an economic crisis: financing of the Argentine health system. J Health Econ. 2010;29:479–88.
Gansu Provincial Bureau of Statistics. Gansu Statistical Yearbook 2010. Beijing: China Statistics Press; 2010.
Wilde PE. The analysis of household surveys: A microeconometric approach to development policy. Am J Agr Econ. 2000;82:780–2.
National Bureau of Statistics of China. China Price Statistical Yearbook 2008. Beijing: China Statistics Press; 2008.
National Health Development Research Center. China National Health Accounts Report 2003. Beijing: Ministry of Health; 2003.
O'Donnell O, van Doorslaer E, Wagstaff A, Lindelow M. Analyzing Health Equity Using Household Survey Data: A Guide to Techniques and their Implementation. Washington: World Bank; 2007.
Wagstaff A, van Doorslaer E. Catastrophe and impoverishment in paying for health care: with applications to Vietnam 1993–1998. Health Econ. 2003;12:921–34.
Kim H-K. The Politics of Fiscal Standardization in China: Fiscal Contract vs. Tax Assignment. Asian Perspective. 2004;28:171–204.
Sun Z, Chang CP, Hao Y. Fiscal decentralization and China's provincial economic growth: a panel data analysis for China's tax sharing system (vol 51, pg 2267, 2017). Qual Quant. 2017;51:2291–2291.
Huang F, Gan L. The Impacts of China's Urban Employee Basic Medical Insurance on Healthcare Expenditures and Health Outcomes. Health Econ. 2017;26:149–63.
Wang HQ, Liu ZH, Zhang YZ, Luo ZJ. Integration of current identity-based district-varied health insurance schemes in China: implications and challenges. Front Med. 2012;6:79–84.
Li Y, Wu Q, Xu L, Legge D, Hao Y, Gao L, Ning N, Wan G. Factors affecting catastrophic health expenditure and impoverishment from medical expenses in China: policy implications of universal health insurance. Bull World Health Organ. 2012;90:664–71.
Brown PH, Theoharides C. Health-seeking behavior and hospital choice in China's New Cooperative Medical System. Health Econ. 2009;18(Suppl 2):S47-64.
Jing Z, Chu J, Imam Syeda Z, Zhang X, Xu Q, Sun L, Zhou C. Catastrophic Health Expenditure among Type 2 Diabetes Mellitus Patients: a Province-wide Study in Shandong, China. J Diabetes Investig. 2019;10:283–9.
Sylvia S, Xue H, Zhou C, Shi Y, Yi H, Zhou H, Rozelle S, Pai M, Das J. Tuberculosis detection and the challenges of integrated care in rural China: a cross-sectional standardized patient study. PLoS Med. 2017;14:e1002405.
This study was supported by the Public Health Policy and Management Innovation Research Team, which is an Excellent Innovation Team of Philosophy and Social Sciences in Jiangsu Universities granted by the Jiangsu Education Department.
This study is funded by the National Natural Science Foundation of China (grant number: 71874086, 72174093) and the China Medical Board (grant number: 19–346). LS is supported by an NHMRC Early Career Fellowship (grant number: GNT1139826).
School of Health Policy & Management, Nanjing Medical University, No. 101, Longmian Avenue, Nanjing, 211166, China
Fan Yang & Mingsheng Chen
Center for Global Health, Nanjing Medical University, Nanjing, China
School of Health Sciences, Western Sydney University, Campbelltown, Australia
Lei Si
The George Institute for Global Health, University of New South Wales, Kensington, Australia
Fan Yang
Mingsheng Chen
LS and MC led and designed the study, contributed to the data analysis, reviewed the manuscript, and helped writing the final draft manuscript. FY led the data collection, analysis, interpretation, and wrote the first draft of the manuscript. All authors reviewed the content of the final version of the manuscript. The author(s) read and approved the final manuscript.
Correspondence to Mingsheng Chen.
This study was approved by the Academic Research Ethics Committee of Nanjing Medical University. All procedures were in accordance with the ethical standards of the Helsinki Declaration. Participants provided informed consent prior to data collection.
The authors declare they have no competing interests.
Yang, F., Chen, M. & Si, L. What can we learn from China's health insurance reform to improve the horizontal equity of healthcare financing?. Int J Equity Health 21, 170 (2022). https://doi.org/10.1186/s12939-022-01793-3
Horizontal inequity
Healthcare payment
Problem E
Ada, Bertrand and Charles often argue over which TV shows to watch, and to avoid some of their fights they have finally decided to buy a video tape recorder. This fabulous new device can record $k$ different TV shows simultaneously, and whenever a show recorded in one of the machine's $k$ slots ends, the machine is immediately ready to record another show in the same slot.
The three friends wonder how many TV shows they can record during one day. They provide you with the TV guide for today's shows, and tell you the number of shows the machine can record simultaneously. How many shows can they record, using their recording machine? Count only shows that are recorded in their entirety.
The first line of input contains two integers $n, k$ ($1 \leq k < n \leq 100\ 000$). Then follow $n$ lines, each containing two integers $x_i, y_i$, meaning that show $i$ starts at time $x_i$ and finishes by time $y_i$. This means that two shows $i$ and $j$, where $y_i = x_j$, can be recorded, without conflict, in the same recording slot. You may assume that $0 \leq x_i < y_i \leq 1\ 000\ 000\ 000$.
The output should contain exactly one line with a single integer: the maximum number of full shows from the TV guide that can be recorded with the tape recorder.
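One standard approach to this task (not part of the original problem statement) is the greedy strategy for interval scheduling generalized to $k$ slots: process shows in order of finishing time and assign each show to the slot that becomes free latest but no later than the show's start. A sketch in Python; for the largest inputs a balanced structure such as `sortedcontainers.SortedList` would replace the plain list to keep insertions logarithmic.

```python
import bisect
import sys

def max_recordable(shows, k):
    """Greedy: sort by end time; keep the times at which each of the k slots
    becomes free (sorted), and give each show to the tightest-fitting slot."""
    shows.sort(key=lambda s: s[1])
    free = [0] * k          # slot free times, kept in sorted order
    recorded = 0
    for start, end in shows:
        i = bisect.bisect_right(free, start) - 1  # rightmost slot free <= start
        if i >= 0:
            free.pop(i)                # list operations are O(k); fine as a sketch
            bisect.insort(free, end)
            recorded += 1
    return recorded

def main():
    data = sys.stdin.read().split()
    n, k = int(data[0]), int(data[1])
    shows = [(int(data[2 + 2 * i]), int(data[3 + 2 * i])) for i in range(n)]
    print(max_recordable(shows, k))

if __name__ == "__main__":
    main()
```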
Problem ID: entertainmentbox
Authors: Markus S. Dregi and Pål G. Drange
March 2018, 38(3): 1553-1565. doi: 10.3934/dcds.2018064
On the universality of the incompressible Euler equation on compact manifolds
UCLA Department of Mathematics, Los Angeles, CA 90095-1555, USA
Received July 2017 Published December 2017
The incompressible Euler equations on a compact Riemannian manifold $(M,g)$ take the form
$$\partial_t u + \nabla_u u = -\mathrm{grad}_g p, \qquad \mathrm{div}_g u = 0.$$
We show that any quadratic ODE $\partial_t y = B(y,y)$, where $B \colon \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}^n$ is a symmetric bilinear map, can be linearly embedded into the incompressible Euler equations for some manifold $M$ if and only if $B$ obeys the cancellation condition $\langle B(y,y), y \rangle = 0$ for some positive definite inner product $\langle \cdot , \cdot \rangle$ on $\mathbb{R}^n$. This allows one to construct explicit solutions to the Euler equations with various dynamical features, such as quasiperiodic solutions, or solutions that transition from one steady state to another, and provides evidence for the "Turing universality" of such Euler flows.
Keywords: Euler equation, quadratic ODE, universality, embedding, Riemannian manifolds.
Mathematics Subject Classification: 35Q35, 37N10, 76B99.
Citation: Terence Tao. On the universality of the incompressible Euler equation on compact manifolds. Discrete & Continuous Dynamical Systems - A, 2018, 38 (3) : 1553-1565. doi: 10.3934/dcds.2018064
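To make the cancellation condition concrete (this illustration is not taken from the paper): the Euler rigid-body equations $\dot y = y \times (I^{-1} y)$ are a quadratic ODE whose right-hand side is orthogonal to $y$, so $|y|^2$ is conserved, which is exactly the condition $\langle B(y,y), y \rangle = 0$ with the Euclidean inner product. The sketch below, with an inertia tensor chosen arbitrarily for illustration, integrates the system and checks the conserved quantity numerically.

```python
import numpy as np

INERTIA = np.array([1.0, 2.0, 3.0])  # hypothetical principal moments of inertia

def quadratic_field(y):
    """Quadratic vector field Q(y) = B(y, y) = y x (I^{-1} y) of the rigid-body
    equations; Q(y) is orthogonal to y, so <B(y, y), y> = 0."""
    return np.cross(y, y / INERTIA)

def rk4_step(f, y, h):
    k1 = f(y)
    k2 = f(y + 0.5 * h * k1)
    k3 = f(y + 0.5 * h * k2)
    k4 = f(y + h * k3)
    return y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

y = np.array([1.0, 0.1, 0.1])
h, steps = 1e-3, 20_000
norms = []
for _ in range(steps):
    y = rk4_step(quadratic_field, y, h)
    norms.append(float(np.dot(y, y)))

# |y|^2 stays (numerically) constant, reflecting the cancellation condition.
print(max(norms) - min(norms))
```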
Why do many people say that virtual particles do not conserve energy?
I've seen this claim made all over the Internet. It's on Wikipedia. It's in John Baez's FAQ on virtual particles, it's in many popular books. I've even seen it mentioned offhand in academic papers. So I assume there must be some truth to it. And yet, whenever I have looked at textbooks describing how the math of Feynman diagrams works, I just can't see how this could be true. Am I missing something? I've spent years searching the internet for an explanation of in what sense energy conservation is violated and yet, I've never seen anything other than vague statements like "because they exist for a short period of time, they can borrow energy from the vacuum".
The rules of Feynman diagrams, as I am familiar with them, guarantee that energy and momentum are conserved at every vertex in the diagram. As I understand it, this is not just true for external vertices, but for all internal vertices as well, no matter how many loops deep inside you are. It's true, you integrate the loops over all possible energies and momenta independently, but there is always a delta function in momentum space that forces the sum of the energies of the virtual particles in the loops to add up to exactly the total energy of the incoming or outgoing particles. So for example, in a photon propagator at 1-loop, you have an electron and a positron in the loop, and both can have any energy, but the sum of their energies must add up to the energy of the photon. Right?? Or am I missing something?
I have a couple guesses as to what people might mean when they say they think energy is not conserved by virtual particles...
My first guess is that they are ignoring the actual energy of the particle and instead calculating what effective energy it would have if you looked at the mass and momentum of the particle, and then imposed the classical equations of motion on it. But this is not the energy it has, right? Because the particle is off-shell! Its mass is irrelevant, because there is no mass-conservation rule, only an energy-conservation rule.
My second guess is that perhaps they are talking only about vacuum energy diagrams, where you add together loops of virtual particles which have no incoming or outgoing particles at all. Here there is no delta function that makes the total energy of the virtual particles match the total energy of any incoming or outgoing particles. But then what do they mean by energy conservation if not "total energy in intermediate states matches total energy in incoming and outgoing states"?
My third guess is that maybe they're talking about configuration-space Feynman diagrams instead of momentum-space diagrams. Because now the states we're talking about are not energy eigenstates, you are effectively adding together a sum of diagrams each with a different total energy. But first, the expected value of energy is conserved at all times, as is guaranteed by quantum mechanics. It's only because you're asking about the energy of part of the superposition instead of the whole thing that you get a partial answer (one that's not summed up yet). And second... isn't the whole idea of a particle (whether real or virtual) a plane wave (or wave packet) that's an energy and momentum eigenstate? So in what sense is this a sensible way to think about the question at all?
Because I've seen this claim repeated so many times, I am very curious if there is something real behind it, and I'm sure there must be. But somehow, I have never seen an explanation of where this idea comes from.
quantum-field-theory energy-conservation virtual-particles
reductionista
$\begingroup$ People say that because they somehow arrived at a conclusion that virtual particles are something else than lines in a Feynman diagram. In particular, note that QFT does not assign any particle state to a "virtual particle". It's just a line in a perturbative diagram, not a state. I share your irritation about this and have genuinely no idea why it is so widespread to talk about virtual particles as if they were something more and mysterious. $\endgroup$ – ACuriousMind♦ Dec 2 '15 at 15:55
$\begingroup$ @ACuriousMind perhaps that should be an answer $\endgroup$ – David Z♦ Dec 2 '15 at 16:22
$\begingroup$ @DavidZ: Well...the question is "why do people (not random people, but physicists) say this?" and I'd answer "I have no idea, I think it's wrong in every conceivable way.". Is that an answer? $\endgroup$ – ACuriousMind♦ Dec 2 '15 at 16:26
$\begingroup$ see also this physics.stackexchange.com/q/205674 $\endgroup$ – anna v Dec 2 '15 at 16:42
$\begingroup$ See Matt Strassler's article where he says "a virtual particle is not a particle at all". Also see Do virtual particles actually physically exist?. The answer is no. They're field quanta. It's like you divide a field into abstract chunks and say each is a virtual particle. The electron and proton "exchange field" when they form a hydrogen atom, they don't throw photons at each other. $\endgroup$ – John Duffield Dec 2 '15 at 21:02
The short answer to your question is that the statements that "virtual particles need not conserve energy" and "intermediate components of Feynman diagrams need not be on the mass shell" are equivalent statements, but from two different historical perspectives.
The concept of a virtual particle was introduced into physics in the mid-1920s while the formalism of quantum mechanics was still being developed. The famous historical paper by Bohr, Kramers, and Slater is the best source (although Slater had addressed the idea in an earlier work). The general ideas of the paper proved to be incorrect but the paper helped stimulate Heisenberg's theory (and his uncertainty principle). Nevertheless the concept of a virtual particle persisted. It is now used primarily in elementary descriptions of quantum field theory as a crutch for avoiding the more technical aspects of Feynman diagrams.
I was aware of this because I took courses from Slater in graduate school, but I must admit that I had difficulty finding information about the history of the concept of virtual particles without including Slater, Bohr, and Kramers as keywords, so I understand your frustration.
Lewis Miller
$\begingroup$ very nice! This confirms what I wrote below: We can't really see this anymore, because the formalism has changed - only the statement is still hanging in the air... $\endgroup$ – Martin Dec 2 '15 at 18:00
I'm neither an expert on QFT, nor do I have a very deep knowledge of how the ideas developed - so this is at best a partial answer.
I always thought that your first guess is what they actually meant: A virtual particle is an "off-shell"-particle, which means that it does not obey the usual energy-momentum equation. Now people tend to interpret this as the virtual particles having a different kind of mass (and energy is conserved), but you could also say that the particles have the usual mass and then adjust energy and/or momentum to make the equations right - or you look at the equations of motion, etc.
I believe that this is very popular, because it is very tempting to think of virtual particles as "particles". This gives you a nice (albeit wrong) way to convey quantum field theory to the layman. You say (and I've read similar accounts and know some experimentalists who think about this in this fashion, because they never needed or wanted to know the real way): "See, you have these things called elementary particles and they have a mass and so forth. They also obey the equations of special relativity and you can write down equations of motion. Now let's have an experiment where we collide two electrons. You can visualise what happens in these diagrams: The particles get close and then they exchange a photon, which is called a "virtual photon". In reality, it could also happen that this photon creates an electron-anti-electron-pair which annihilates itself. Therefore, you have all those other diagrams - but in principle, all that happens is this photon exchange." Now the trouble is that you talk about the virtual particles as if they were real particles. When you started out with usual equations of motion, you are now in a conundrum. The old way out is by using energy-time uncertainty relations, the new way out is by using off-shell equations and the correct way out is by remembering that you aren't talking about physical quantities and you are doing perturbation theory.
However, it might be that there is another side to the story. I found this quote by John Baez from here:
[...]There's an old lousy form of perturbation theory in which virtual particles violate conservation of energy-momentum - that may be what you're thinking about.
But this only survives in popularizations of physics, not what quantum field theorists usually do these days. At least since Feynman came along, most of us use a form of perturbation theory in which virtual particles obey conservation of energy-momentum. Instead, what virtual particles get to do that real particles don't is "lie off-shell". This means they don't need to satisfy
E^2 - p^2 = m^2
where m is the mass of the particle in question (in units where c = 1).
In any event, regardless of which form of perturbation theory you use, in actual reality it appears that energy-momentum is conserved even over short durations and short distances. (Here I'm neglecting issues related to general relativity, which aren't so important here.)
This would imply that the true origin of why people talk about virtual particles violating conservation of energy is something that dates back to before the invention of Feynman diagrams. This would explain why we only find vague allusions to the concept and no maths that supposedly tell us that this is the case: The reason is that this is not the way QFT is taught today and we don't really read the historical debates.
In a way, this would be similar to popular science telling us that the Robertson-Schrödinger uncertainty relation is about how measurement disturbs a state and how it is not possible to measure momentum and position simultaneously. This is not what that equation says and it is not reflected by today's mathematical expression, but it is how Heisenberg thought about the matter, when he formulated the first instance of this relation. You still hear it, because it gets iterated over and over again by anybody (and this is almost everybody) who doesn't have the time to properly think about this but only refers to how they learned it.
Martin
$\begingroup$ That's interesting, thanks for digging up that quote from John Baez. I wonder why he still leaves this statement up on the website he hosts: "In perturbation theory, systems can go through intermediate "virtual states" that normally have energies different from that of the initial and final states. This is because of another uncertainty principle, which relates time and energy." math.ucr.edu/home/baez/physics/Quantum/virtual_particles.html I wonder if the perturbation theory referred to in the FAQ is this "old" perturbation theory, or if it applies to modern perturbation theory. $\endgroup$ – reductionista Dec 2 '15 at 19:24
$\begingroup$ I'd say it's the "old" perturbation theory he is referring, too. Why he still hosts it - I don't know, maybe precisely because the picture he describes is still iterated in many elementary introductions and popular science books and his main point was making clear that there is no problem with conservation of energy (which he states). Lewis Miller in his answer seems to have dug up where the whole concept was originally derived - maybe having a look at the paper he cites will make things clearer. $\endgroup$ – Martin Dec 2 '15 at 23:20
What are virtual particles? They appear in Feynman diagrams, representing a propagator function in the integral. For example, integrating the first-order electron-electron scattering diagram gives the probability distribution of the scattered electrons.
The internal line is a propagator with the mass of the exchanged particle in the denominator, and that is why the line is identified with a particle. Since it is within an integral, it is off mass shell, i.e. $E^2 - p^2$ differs from the squared mass of the particle and varies over the integration. One can choose which rule to keep, the energy or the momentum one, for the $dx\,dy\,dz\,dt$ increment under the integral, if one considers the line as a particle. If one conserves the momentum and calls it a particle with the mass of its name, then energy is not conserved, since the particle is off mass shell. I believe that is where the mix-up starts: in considering it a particle. Thus the statement "energy is not conserved" is isomorphic to "it is off mass shell".
In a final analysis as other answers have said it would be best not to call it a particle but accept it as a mathematical function carrying the quantum numbers of the named particle.
anna v
$\begingroup$ Ok... I think this may all just be a slight semantic difference then. I would have said that virtual particles are particles (where my definition of a particle would be "a quantized excitation of a quantum field"), just not particles whose mass is close to the pole in the propagator for the corresponding quantum field. But it seems that you (and others) have a different definition of a particle, which is what I would call a "real particle"... one whose mass is on-shell. $\endgroup$ – reductionista Dec 7 '15 at 21:24
$\begingroup$ Actually I still don't understand this statement though: "If one conserves the momentum and calls it a particle with the mass of its name, then energy is not conserved, since the mass is off mass shell." The only way energy would not be conserved is if the mass were on shell. You say if the mass is off shell, then energy is not conserved. But I think the right statement should be that either the mass is off shell or the energy is not conserved. It can't be both at the same time, right?! $\endgroup$ – reductionista Dec 7 '15 at 21:40
$\begingroup$ @JeffLJones there are two variables and one mathematical equality to a mass. If the mass is fixed and one variable chosen for the integral, the value of the other variable is given by the relationship and it cannot be conserved, i.e. in mathematical equality with the rest of the diagram, since the mass has been considered fixed for that interval. Thus, since in my choice energy has to be conserved at the vertices, the mass has to be off shell. $\endgroup$ – anna v Dec 8 '15 at 4:41
$\begingroup$ @JeffLJones my definition of a particle is the one in the standard model table. $\endgroup$ – anna v Dec 8 '15 at 4:42
They do conserve energy-momentum, absolutely, at each instant, anywhere, anywhen. However, they don't respect the usual relation that defines the energy: $$\tag{1} E_{\text{real}}(p) = \sqrt{p^2 \, c^2 + m_0^2 \, c^4}. $$ Instead of this, they obey some "off-shell" relations. For example, they may have this energy-momentum relation instead: $$\tag{2} E_{\text{virtual}}(p) = \sqrt{p^2 \, c^2 + a_1 \, p^4 + m_0^2 \, c^4} + a_2 \, p^2 + a_3 \, p^4, $$ or any other fantasy!
Energy-momentum conservation is always strictly respected. It's just the energy-momentum relation which may be weird.
Now, the amount of "violation" that they do can be defined as this : $$\tag{3} \Delta E = |\, E_{\text{virtual}} - E_{\text{real}} |, $$ and you could write (note the "reversed" inequality) : $$\tag{4} \Delta E \, \Delta t < \frac{\hbar}{2}, $$ where $\Delta t$ is the duration of the violation.
Cham
It's just a semantic issue of how you define the word "energy." If you define it to mean "the zero component of the four-momentum," or $m c^2\, dt/d\tau$, or "the Noether current generating the time translational symmetry," or "the spatial integral of the $T_{00}$ component of the stress-energy tensor," then it is conserved by virtual particles. If you define it to mean $\sqrt{m^2 c^4 + p^2 c^2}$, where ${\bf p}$ is the momentum three-vector, then it isn't. These definitions coincide in the classical case but not in the quantum case. The former definitions are more theoretically natural, but the latter is sometimes (though not always) easier to measure experimentally.
tparker
$\begingroup$ Sorry, can you elaborate a bit? I really don't understand what you're saying. $\endgroup$ – knzhou Jul 9 '17 at 20:22
$\begingroup$ So in the latter definition, being off-shell is simply equivalent to violating energy conservation? $\endgroup$ – Rococo Jul 9 '17 at 20:24
$\begingroup$ @Rococo Exactly. $\endgroup$ – tparker Jul 10 '17 at 0:41
$\begingroup$ @knzhou Suppose you have two (real) particles with four-momenta $(E_1, {\bf p}_1)$ and $(E_2, {\bf p}_2)$ respectively, which merge into a virtual particle with mass $m$. The virtual particle will have momentum ${\bf p}_1 + {\bf p}_2$, and so by the second definition of "energy" it will have energy $\sqrt{m^2 c^4 + ({\bf p}_1 + {\bf p}_2)^2 c^2}$. In general this does not equal $E_1 + E_2$, so energy (under the second definition) is not conserved. $\endgroup$ – tparker Jul 10 '17 at 1:17
$\begingroup$ @JeffLJones No, back before Feynman they defined $p^0$ to be $\sqrt{m^2 + p^2}$, so that four-momenta were always on-shell, but $p^0$ was not conserved at interaction vertices. I'm not sure if anyone was still using these conventions when the first QFT textbook was written. $\endgroup$ – tparker Jul 11 '17 at 16:21
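A small numerical illustration of the point made in the preceding comment (my own addition, not part of the thread): for two real particles merging into a single internal line, the conserved quantity $E_1 + E_2$ generally differs from $\sqrt{m^2 + |\mathbf{p}_1 + \mathbf{p}_2|^2}$ evaluated with the line's nominal mass, and the gap is exactly the amount by which the line is off shell (units with $c = 1$; the momenta below are illustrative).

```python
import numpy as np

m1, m2 = 0.511e-3, 0.511e-3          # GeV: two electrons (illustrative values)
p1 = np.array([0.0, 0.0,  1.0])      # GeV
p2 = np.array([0.0, 0.0, -1.0])      # GeV (head-on collision)

E1 = np.sqrt(m1**2 + p1 @ p1)
E2 = np.sqrt(m2**2 + p2 @ p2)

# Definition 1: energy = time component of four-momentum (conserved at the vertex)
E_conserved = E1 + E2

# Definition 2: energy = sqrt(m^2 + |p|^2) with the virtual line's nominal mass
m_virtual = 0.0                       # e.g. a photon line
p_total = p1 + p2
E_onshell = np.sqrt(m_virtual**2 + p_total @ p_total)

print("E1 + E2               =", E_conserved)   # ~2 GeV, conserved at the vertex
print("sqrt(m^2 + |p_tot|^2) =", E_onshell)     # 0 GeV: a real photon could not do this
print("invariant mass^2 of the line:", E_conserved**2 - p_total @ p_total)  # far from 0
```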
Verification of the geomagnetic field models using historical satellite measurements obtained in 1964 and 1970
A. A. Soloviev (ORCID: orcid.org/0000-0002-6476-9471) and D. V. Peregoudov
In 2019, the World Data Center for Solar-Terrestrial Physics in Moscow digitized the archive of observations of the Earth's magnetic field carried out by the Soviet satellites Kosmos-49 (1964) and Kosmos-321 (1970). As a result, the scientific community for the first time obtained access to a unique digital data set, which was registered at the very beginning of the scientific space era. This article sets out three objectives. First, the quality of the obtained measurements is assessed by their comparison with the IGRF model. Second, we assess the quality of the models, which at that time were derived from the data of these two satellites. Third, we propose a new, improved model of the geomagnetic field secular variation based on the scalar measurements of the Kosmos-49 and Kosmos-321 satellites.
The history of low Earth orbit (LEO) satellite missions designed to measure the geomagnetic field began with the launch of the Sputnik-3 mission on May 15, 1958, in accordance with the scientific program of the International Geophysical Year (IGY). With a payload of 968 kg, this satellite was equipped with instruments for more than ten different experiments. They included a unique magnetically oriented saturable-core magnetometer (Dolginov et al. 1961), which made it possible for the first time to carry out orbital measurements of the total intensity of the geomagnetic field. In 1959, thanks to the efforts of colleagues from the United States, the second satellite, Vanguard-3, carrying a proton magnetometer on board for measuring the total field, was launched (Cain 1971). The first spaceborne vector measurements of the geomagnetic field appeared only 20 years later, in the MAGSAT mission. As a result, valuable data for a 6-month period between 1979 and 1980 became widely available, which made it possible to construct reliable models of the geomagnetic field (Mandea 2006).
At the same time, scalar observations obtained before 1979 are also of apparent scientific interest, though such recordings were almost unavailable to a wide community until recently. Through the efforts of Krasnoperov et al. (2020), unique magnetometer data from the satellites Kosmos-49 (operated from October 24 to November 3, 1964) and Kosmos-321 (operated from January 20 to March 13, 1970) were digitized and for the first time published electronically online. Kosmos-49 carried proton precession magnetometers and Kosmos-321 was equipped with a quantum magnetometer; both were used for absolute measurements. During the same period of 1965–1971, the satellites of the POGO series (Polar Orbiting Geophysical Observatories) were also operational. In particular, OGO-2, OGO-4, and OGO-6 satellites provided scalar magnetic field measurements (Jackson and Vette 1975) and OGO data were used to derive main field models, including the International Geomagnetic Reference Field (IGRF) for 1965 (Cain et al. 1967). The magnetic data collected by these satellites are available from DTU web-site (https://www.space.dtu.dk/english/Research/Scientific_data_and_models/Magnetic_Satellites). Such historical data provide wide opportunities for retrospective analysis of the geomagnetic field in terms of both internal and external sources.
This paper is devoted to the quality assessment of the historical satellite measurements from Kosmos-49 and Kosmos-321 satellites by comparison with the IGRF model for 1964–1970. In addition, we present a comparison of the direct satellite observations with several models from IZMIRAN (Institute of Terrestrial Magnetism, Ionosphere and Radio Wave Propagation named after N.V. Pushkov of the Russian Academy of Sciences) of those years, which were based on the same observations. Finally, we present new core geomagnetic field models for 1964 and 1970 based on Kosmos-49 and Kosmos-321 measurements and demonstrate their advantages over existing analogues.
Comparison of direct satellite measurements with IGRF
As noted above, the first satellite measurements of the geomagnetic field were scalar, that is, they measured not the field vector (projections onto three orthogonal axes), but only its modulus (total field or intensity). Two proton magnetometers PM-4 were mounted on board the Kosmos-49 satellite, with their sensors oriented at right angles to each other. The instruments were switched on in turn by a high-accuracy time program device at intervals of 32.76 s. The time marks made it possible to tie the onboard readings of each instrument to absolute time. The magnetometer sensors were mounted on a boom at a distance of 3.3 m from the satellite centre. The small magnetic influence of the satellite at this distance was compensated by a system of permanent magnets mounted at the bottom of the boom, creating a uniform compensating field at the sensor mounting locations. The accuracy of compensation of the magnetic and electromagnetic influence of the satellite was verified by the absence of modulation effects in the onboard magnetograms when the satellite was rotated around the vertical and horizontal axes by means of a special nonmagnetic device. It was also checked by means of an external stationary magnetometer while the satellite was moved translationally relative to it. The accuracy of compensation was about 2 nT. The magnetometer accuracy when measuring an unknown field was 2 to 3 nT. To avoid errors in measuring the nuclear precession frequency, special measures were taken to minimize the satellite angular velocities during its separation from the rocket (Dolginov et al. 1970).
The Kosmos-321 satellite was equipped with the quantum cesium magnetometer QCM-1, a self-generating magnetometer based on optical pumping of cesium vapor. The correspondence of the QCM-1 readings to absolute values over the range of measured fields was checked by comparison with proton magnetometer readings and was within 2 nT. A special thermal control system ensured the normal functioning of the spectral lamp and absorption chambers outside the satellite's thermal container. The magnetometer sensors were placed in a container mounted away from the satellite body on a boom 3.6 m long. Nevertheless, the experiment revealed a deviation effect caused mainly by thermal currents in the elements of the boom and container fasteners. Because of the rotation of the satellite, this deviation manifested itself as modulations with the period of the satellite's rotation. These effects were excluded during data processing. The program for the primary computer processing of the experimental data provided the following: conversion of the measured values into magnetic field units, determination of the satellite coordinates at the time of measurement, determination of the theoretical field at the measurement points, and determination of the difference between the measured and calculated field values (Dolginov 1978). The effect of thermal currents was reproduced on fasteners similar to those of Kosmos-321. This interference manifested itself as a gap in the measured values and coincided with the times of switching between the sensors. For this reason, the catalog (Dolginov et al. 1976) contains data for only a limited number of orbits, a total of 5000 measurements, for which the interference had an approximately sinusoidal form and was eliminated during data processing.
Mathematical models enable analytical calculation of geomagnetic field components and modulus at any point in space outside the geomagnetic field sources, where the geomagnetic field is a potential field. There are some limitations, however, as models derived from scalar measurements only are subject to the so-called "Backus effect" (Backus 1970). This effect is essentially a strongly erroneous recovery of the internal geomagnetic vector field due to non-uniqueness of the inversion based on total intensity data only. However, properly constructed models make it possible to compare the modelled total field with the directly measured value. Being the reference model of the core field, IGRF is adopted by the international scientific community in geomagnetic studies. It represents a set of coefficients used for the expansion of the geomagnetic potential in spherical harmonics. Each set of coefficients describes a field averaged over a 5-year interval, starting at epoch 1900.0. The model is updated every 5 years and the latest, 13th generation IGRF for the period 1900–2020 is available today (Alken et al. 2021a). The obtained coefficient sets guarantee smooth variability of the geomagnetic field over the entire period assuming their linear change between the neighboring epochs. To assess the quality of satellite measurements carried out more than half a century ago, it seems natural to compare them with the predicted values according to the IGRF model, calculated for the corresponding epochs.
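To make the interpolation convention concrete, here is a minimal sketch (not from the paper) of how Gauss coefficients for an intermediate date are obtained by linear interpolation between two neighbouring 5-year IGRF epochs. The coefficient values and the dictionary layout are placeholders, not actual IGRF entries.

```python
def interpolate_gauss_coefficients(coeffs_lo, coeffs_hi, epoch_lo, epoch_hi, epoch):
    """Linear interpolation of Gauss coefficients g_nm, h_nm between two
    neighbouring IGRF epochs (e.g. 1960.0 and 1965.0)."""
    t = (epoch - epoch_lo) / (epoch_hi - epoch_lo)
    return {key: (1.0 - t) * coeffs_lo[key] + t * coeffs_hi[key] for key in coeffs_lo}

# Placeholder coefficients keyed by (n, m, 'g'|'h'), in nT -- not real IGRF values.
c1960 = {(1, 0, 'g'): -30500.0, (1, 1, 'g'): -2200.0, (1, 1, 'h'): 5800.0}
c1965 = {(1, 0, 'g'): -30400.0, (1, 1, 'g'): -2100.0, (1, 1, 'h'): 5750.0}

# Epoch 1964.826 (October 29, 1964), the middle of the Kosmos-49 flight.
c_kosmos49 = interpolate_gauss_coefficients(c1960, c1965, 1960.0, 1965.0, 1964.826)
print(c_kosmos49)
```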
The IGRF models include spherical harmonic coefficients up to degree and order 10 for 1900–1995 and 13 for 2000–2020, and they describe only the sources of internal origin. Therefore, for an accurate comparison of the modelled and measured values, the signal of external origin should be removed as much as possible from the raw geomagnetic data. In this regard, we select for comparison only measurements that were obtained on the night side of the Earth, from 22:00 to 05:00 local time, and during geomagnetically quiet periods defined by the planetary geomagnetic activity indices Kp ≤ 2 and |Dst| < 20 nT. A total of 3766 measurements are selected based on these criteria out of the 17,449 values recorded by the Kosmos-49 satellite. Removal of high-latitude measurements affected by the auroral electrojets is not applicable, since the maximum latitude of the Kosmos-49 flight was ± 50°. The positions of the Kosmos-49 satellite when measuring the selected values are shown in Fig. 1a.
Spatial distribution of the selected measurements by the Kosmos-49 (a) and Kosmos-321 (b) satellites. The color scale denotes the flight altitude in kilometers
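A minimal sketch of the selection criteria just described (the record layout and field names are assumptions; the Kp and Dst index series must be supplied separately):

```python
def is_quiet_nightside(record, kp, dst):
    """Selection used for the Kosmos-49 comparison: local night (22:00-05:00)
    and geomagnetically quiet conditions (Kp <= 2, |Dst| < 20 nT).
    `record` is assumed to carry UTC decimal hours and geographic longitude;
    for Kosmos-321 an additional |dipole latitude| < 60 deg filter is applied."""
    local_time = (record["ut_hours"] + record["lon_deg"] / 15.0) % 24.0
    nightside = local_time >= 22.0 or local_time < 5.0
    quiet = kp <= 2 and abs(dst) < 20.0
    return nightside and quiet

# Example usage with a hypothetical measurement record:
rec = {"ut_hours": 1.5, "lon_deg": 37.6}
print(is_quiet_nightside(rec, kp=1, dst=-8.0))   # True
```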
The field according to the IGRF model is calculated for October 29, 1964 (epoch 1964.826), the middle of the flight of the Kosmos-49 satellite. Figure 2a shows a histogram of the distribution of the differences between the total intensities derived from the IGRF model and those directly measured by the Kosmos-49 satellite. Difference bins of 10 nT are plotted horizontally, and the probability density calculated from the number of measurements in each bin is plotted vertically. This histogram can be approximated by a Gaussian distribution function with a mean of 2.78 nT and a standard deviation (SD) of 32.83 nT (Table 1). Evidently erroneous measurements (80 values) producing residuals of more than 150 nT (see Fig. 2a) are excluded. Their possible causes include instrumental failures (e.g., Soloviev et al. 2018), poor processing of raw data and catalogue publishing errors.
Histograms of the distribution for the differences in the total intensities derived from the IGRF model and directly measured by the Kosmos-49 (a) and Kosmos-321 (b) satellites. Red curve shows the modelled Gaussian distribution function that best fits the empirical distribution
Table 1 Means and standard deviations of the Kosmos-49 and Kosmos-321 residuals for different models at different epochs.
To analyze the Kosmos-321 data, we select 710 measurements out of 4910 and calculate the IGRF-based field for February 22, 1970 (epoch 1970.144). Here, in addition to the selection criteria used for the Kosmos-49 measurements, we discard data obtained at geomagnetic dipole latitudes higher than 60° in both hemispheres. This avoids strong systematic distortion of the core field signal by external fields of magnetospheric and ionospheric origin. The positions of the Kosmos-321 satellite when measuring the selected values are shown in Fig. 1b. Figure 2b shows a histogram of the distribution of the differences between the selected Kosmos-321 measurements and the modelled total field values. This histogram is modelled by a Gaussian distribution function with a mean of 13.64 nT and a SD of 26.65 nT (see Table 1).
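The residual statistics quoted in Table 1 can be reproduced along the following lines; this is a sketch with synthetic residuals standing in for the actual Kosmos differences:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic residuals standing in for F_IGRF - F_measured, in nT.
residuals = rng.normal(loc=3.0, scale=30.0, size=3766)

# Discard evident outliers (|residual| > 150 nT), as done for Kosmos-49.
kept = residuals[np.abs(residuals) <= 150.0]

# The maximum-likelihood Gaussian fit to the histogram of the kept residuals
# is simply their sample mean and standard deviation.
mean, sd = kept.mean(), kept.std(ddof=1)
print(f"mean = {mean:.2f} nT, SD = {sd:.2f} nT, n = {kept.size}")
```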
Figure 3 shows the spatial distribution of the residuals as a function of latitude. Due to the small orbit inclination of the Kosmos-49 satellite (49°), most of the time it was located on the night side of the Earth in the northern hemisphere, as evidenced in Fig. 3a. The selected measurements from the Kosmos-321 satellite with an orbital inclination of 71° are more evenly distributed between geomagnetic dipole latitudes 60° S and 60° N (Fig. 3b). At the same time, one may observe an expected increase in residuals when the satellite approaches the poles, which is due to intense electromagnetic processes typical for the polar ionosphere. In addition, the spatial distribution of the Kosmos-321 residuals (Fig. 3b) is slightly skewed toward positive values in the northern hemisphere, which is likely the reason for the biased mean of the Kosmos-321 residuals (see Table 1).
Distribution of residuals along the latitude for the selected measurements from the Kosmos-49 (a) and Kosmos-321 (b) satellites. Horizontal axis shows the geographic latitude, vertical axis shows the difference between the total intensities derived from the IGRF model and measured directly by the satellite. Color scale denotes the number of measurements
To figure out how well the IGRF model approximates the Kosmos-49 and Kosmos-321 data, we calculate the same statistics for the OGO-6 (active years 1969–1971), Magsat (1979–1980), Oersted (1999–2008), and Swarm-A (since 2013) residuals. Recall that Kosmos-49 was operational for 11 days in autumn 1964, which fell on the solar activity minimum between the 19th and 20th cycles, and Kosmos-321 was operational for 53 days in winter–spring 1970, which fell on the solar activity maximum of the 20th cycle. To be consistent with the time intervals, seasons and solar cycle phases (Adhikari et al. 2019) corresponding to the selected Kosmos measurements, we consider the following data from the listed satellites:
OGO-6: 11.01.1970–22.01.1970, central epoch 17.01.1970 (solar maximum);
Magsat: 19.02.1980–08.04.1980, central epoch 15.03.1980 (solar maximum);
Oersted: 19.10.2005–09.12.2005, central epoch 14.11.2005 (almost solar minimum); and
Swarm-A: 18.11.2020–29.11.2020, central epoch 24.11.2020 (solar minimum).
For the statistics calculation, we apply the same data selection criteria as for Kosmos-321 and derive the IGRF-based field at the specified central epochs. The results are as follows:
OGO-6: mean = − 3.6 nT, SD = 18.1 nT;
Magsat: mean = − 0.3 nT, SD = 9.7 nT;
Oersted: mean = − 1.0 nT, SD = 13.9 nT; and
Swarm-A: mean = − 6.4 nT, SD = 11.5 nT.
The statistics do not vary dramatically between all the considered satellites (see figures above and Table 1), although the mean and SD for Kosmos-321 are slightly larger than for all the other satellites, and the SD for Kosmos-49 is larger than for all the more recent satellites. Still, these results indicate that the IGRF model quite satisfactorily approximates geomagnetic field observations made by the Kosmos-49 and Kosmos-321 satellites.
Verification of historic models of the geomagnetic field
The initial analysis of the Kosmos-49 and Kosmos-321 measurements resulted in analytical models of the geomagnetic field, which are listed in the corresponding catalogs (Dolginov et al. 1967, 1976). The Gauss coefficients of these two models at the 1964.0 (hereinafter "M-1") and 1970.0 (hereinafter "M-2") epochs are presented in Appendix 1. Below we provide their comparison with the IGRF model, which, as shown in the previous section, approximates direct geomagnetic field observations quite well. An obvious approach for comparison is estimating the difference between the individual components of the geomagnetic field vectors calculated using the two models. For a correct comparison, hereinafter the IGRF expansion is truncated to the maximum order and degree of the considered models.
Figure 4 shows a map of differences in the vertical component (Z) of the core geomagnetic field according to the M-1 model and the IGRF model for the corresponding date. Similarly, Fig. 5 shows the result of comparing the M-2 model with the IGRF model. Such a comparison method is used when evaluating candidate models for IGRF (Alken et al. 2021b). The largest deviations are observed in the equatorial area and exceed 1000 nT in absolute value, which is an unacceptable error for studying the geomagnetic field and its secular variation, the latter amounting to several tens of nT per year. The large spatial differences between the two models (in both Figs. 4 and 5), which are caused by errors in the sectorial terms of the spherical harmonic expansions, are a well-known manifestation of the Backus effect (Stern and Bredekamp 1975) not accounted for in models M-1 and M-2. Despite the large discrepancies in the Z component, the total intensities from the Kosmos-49 satellite are approximated by the M-1 and IGRF models with an accuracy of about 30 nT (for the M-1 model the SD is 25.56 nT with a mean of 0.47 nT). The discrepancies in the total field between the Kosmos-321 measurements and the M-2 predictions are characterized by a SD of 74.20 nT with a mean of 10.10 nT (see Table 1).
Comparison of the model based on the Kosmos-49 data (M-1) with the IGRF model as a map of the Z component differences calculated at the nodes of the regular geographic grid at a spherical surface with the mean Earth radius, 6371.2 km
Comparison of the model based on Kosmos-321 data (M-2) with the IGRF model as a map of Z component differences calculated at the nodes of the regular geographic grid at a spherical surface with the mean Earth radius, 6371.2 km
The availability of geomagnetic field models at several neighboring epochs makes it possible to estimate the secular variation (SV) of the field. As before, here we compare the SV of the Z component according to different models. Figure 6 shows a comparison of the SV obtained as the difference between the M-2 and M-1 models divided by the corresponding time interval (about 6 years) with the SV derived from the IGRF model for the period 1965–1970. Because of the Backus effect, the SV obtained by differencing models M-2 and M-1 is expected to have very large errors at low latitudes, ranging from − 300 to 300 nT/year. These errors of periodic structure arise from large differences in the SV coefficients, starting with degree n = 3, between the IGRF and M-1/M-2 models. Large discrepancies are also observed in the Arctic region and at the coast of Antarctica around Greenwich. For the rest of the Earth's surface the SV from the M-1 and M-2 models is fairly close to the values according to the IGRF model.
Comparison of the 1964–1970 SV of the Z component based on the M-1 and M-2 models with the 1965–1970 SV derived from the IGRF model. The map shows the differences of the SV (Z) calculated at the nodes of the regular geographic grid at a spherical surface with the mean Earth radius, 6371.2 km
A new geomagnetic field model based on satellite data
The availability of only scalar data obviously complicates the procedure for reconstructing the full magnetic field vector. The concomitant Backus effect mentioned above can be eliminated by adding knowledge of the accurate location of the magnetic dip equator (Khokhlov et al. 1997). In practice, it is minimized in one of three ways:
Adding relevant vector data especially collected in the equatorial belt (e.g., Cain et al. 1967);
Adding observations of the position of the equatorial electrojet (Holme et al. 2005); and
Taking into account the position of the dip equator derived from the reference model (Ultré-Guérard et al. 1998).
To reduce the Backus effect, we follow the third way, as the IGRF model is available for 1964 and 1970. Using this model, we determine the dip equator position at each degree of longitude for the middle of each satellite's operation period. To construct a magnetic field model, we adopt the approach of Ultré-Guérard et al. (1998) and minimize the function
$$S(g,h)=\sum_{i=1}^{M}{w({\mathbf{r}}_{i})\left({B}_{i}-B({\mathbf{r}}_{i};g,h)\right)}^{2}+{w}_{\text{eq}}\sum_{j=1}^{N}{Z({\mathbf{r}}_{j};g,h)}^{2}.$$
Here M is the number of measurements, \({B}_{i}\) are the values of the measured total field, \({\mathbf{r}}_{i}\) are the points at which the measurements were made, \(B({\mathbf{r}}_{i};g,h)\) is the total field value according to the model with a set of Gauss coefficients \({g}_{nm},{h}_{nm}\), \(w({\mathbf{r}}_{i})\) is a weighting factor aimed to balance the different density distribution of the measurements over the Earth's surface, N is the number of points \({\mathbf{r}}_{j}\) defining the dip equator, \(Z({\mathbf{r}}_{j};g,h)\) is the Z component value according to the model at those points and \({w}_{\text{eq}}\) is the weight applied to the dip equator constraint. For a LEO satellite with a maximum trajectory latitude \({\theta }_{\mathrm{max}}\), the weighting factor \(w({\mathbf{r}}_{i})\) chosen as \(\sqrt{{\mathrm{sin}}^{2}{\theta }_{\mathrm{max}}-{\mathrm{sin}}^{2}\theta }\) takes into account the fact that most of the measurements are concentrated near the latitudes \({\pm \, \theta }_{\mathrm{max}}\). For the sparse measurements, the weighting factor can be omitted. As for \({w}_{\text{eq}}\), the result of minimization is practically independent of this factor within a wide range (two orders of magnitude). In our calculations, we take \({w}_{\text{eq}}\) equal to 100.
The actual minimization is carried out by the built-in function lsqnonlin() of the Matlab package, which solves nonlinear least-squares (nonlinear data-fitting) problems. The minimization method is iterative. Each step of minimizing the function \(S(x)=\frac{1}{2}{\sum }_{i}{f}_{i}^{2}\left(x\right)\) is performed in the two-dimensional subspace of the parameter space spanned by the gradient of the function \({g}_{k}={\sum }_{i}{f}_{i}\frac{\partial {f}_{i}}{\partial {x}_{k}}\) at the given point and the Newtonian step n, which is determined from the system of equations:
$$\sum_{p}\sum_{i}\frac{\partial {f}_{i}}{\partial {x}_{k}}\frac{\partial {f}_{i}}{\partial {x}_{p}}{n}_{p}=-{g}_{k}.$$
The function in the two-dimensional subspace is approximated by a quadratic form. If the calculated step leads to a decrease in the function value, the next iteration follows; if not, the admissible step is reduced and the procedure is repeated. The calculation stops when the admissible step becomes small enough. As an initial approximation, the values of the coefficients \({g}_{nm},{h}_{nm}\) are chosen according to the IGRF model at the beginning of 1965 and 1970.
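The authors use Matlab's lsqnonlin; the following is a rough Python analogue of the weighted objective with the dip-equator constraint, using scipy.optimize.least_squares. The forward operators total_field() and z_component() (spherical harmonic synthesis from the Gauss coefficients) are assumed to exist and are not shown.

```python
import numpy as np
from scipy.optimize import least_squares

def build_residual_fn(B_meas, r_meas, w_meas, r_eq, w_eq, total_field, z_component):
    """Residual vector whose sum of squares is S(g,h): the weighted total-field
    misfit plus the dip-equator constraint Z(r_j) ~ 0."""
    def residuals(gh):
        res_field = np.sqrt(w_meas) * (B_meas - total_field(r_meas, gh))
        res_equator = np.sqrt(w_eq) * z_component(r_eq, gh)
        return np.concatenate([res_field, res_equator])
    return residuals

def fit_model(gh_igrf_initial, **kwargs):
    """Minimize S(g,h) starting from the IGRF coefficients, as in the paper."""
    fun = build_residual_fn(**kwargs)
    result = least_squares(fun, gh_igrf_initial, method="trf")
    return result.x

# Latitude-dependent weight balancing the orbital sampling density,
# w(r) = sqrt(sin^2(theta_max) - sin^2(theta)), with theta the latitude.
def sampling_weight(theta_deg, theta_max_deg):
    s_max, s = np.sin(np.radians(theta_max_deg)), np.sin(np.radians(theta_deg))
    return np.sqrt(np.maximum(s_max**2 - s**2, 0.0))
```

In this sketch the equator weight w_eq would simply be passed in as the scalar 100 used in the paper; as noted above, the result is largely insensitive to its exact value.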
To calculate the coefficients based on the Kosmos-49 (hereinafter referred to as the "N-1" model) and Kosmos-321 (hereinafter, the "N-2" model) data, we apply selection criteria weaker than those used for comparing measurements with the IGRF model (see "Comparison of direct satellite measurements with IGRF" section). This is due to the extremely small set of raw measurements, especially those obtained by the Kosmos-321 satellite; if they are filtered according to the criteria specified in "Comparison of direct satellite measurements with IGRF" section, we get vast regions of Africa, Europe and Canada that are not covered by data at all (see Fig. 1b).
Thus, when selecting data from the Kosmos-49 satellite, the time filter is removed (both dayside and nightside data are used) and only the criteria Kp ≤ 2 and |Dst| < 20 nT are applied. We also discard the evidently erroneous measurements, visible in the histogram in Fig. 2a as bars on the far left and right, with residuals of more than 150 nT. As a result, 12,511 out of 17,499 measurements are used for building the N-1 model. The mean and SD of the differences between the N-1 model and the actual observations are 0.23 nT and 27.57 nT, respectively (see Table 1). Maps with the positions of the Kosmos-49 satellite for the measurements selected for the N-1 model are shown in Fig. 7a.
Spatial distribution of the measurements by the Kosmos-49 (a) and Kosmos-321 (b) satellites selected for the N-1 and N-2 models, respectively. The color scale denotes the flight altitude in kilometers
As for the Kosmos-321 satellite, we also consider both dayside and nightside data (see the reasoning above). We apply the geomagnetic dipole latitude filter |glat| < 60° and the weakened criteria Kp ≤ 3 and |Dst| < 50 nT to select the data for the model calculation. As a result, 2510 out of 4910 measurements remain; their spatial distribution is shown in Fig. 7b. The mean and SD of the differences between the N-2 model and the actual observations are − 1.07 nT and 11.34 nT, respectively (see Table 1). The coefficients of the resulting two models are listed in Tables 2 and 3.
Table 2 Gauss coefficients of the new geomagnetic field model based on the measurements from the Kosmos-49 satellite (N-1); the coefficients are listed in three columns
Table 3 Gauss coefficients of the new geomagnetic field model based on the measurements from the Kosmos-321 satellite (N-2); the coefficients are listed in three columns
The comparison of the N-1 and N-2 models with the IGRF model (Z component) for 1964 and 1970, respectively, is shown in Fig. 8. An expected degradation in the data approximation by the N-2 model is observed in the southern part of the African continent and geomagnetic pole regions (Fig. 8b), where original data are missing (see Fig. 7b). Nevertheless, as compared to M-1 and M-2 models, N-1 and N-2 deviations from the IGRF predictions are reduced significantly.
Comparison of the new models based on the Kosmos-49 (N-1, a) and Kosmos-321 (N-2, b) measurements with the IGRF model for 1964 and 1970, respectively. Each map shows differences in the Z component of the geomagnetic field at the nodes of the regular geographic grid at a spherical surface with the mean Earth radius, 6371.2 km
An improvement in the SV prediction for the 1964–1970 period can be traced by comparing the discrepancy between the M-1/M-2 and IGRF models (Fig. 6) with the discrepancy between the N-1/N-2 and IGRF models. Figure 9 shows a map for the latter case, limited to ± 60° in latitude due to the same limitation on the initial data selection for the N-2 model. The SV according to the new models becomes more realistic in the eastern part of the Pacific Ocean, the African continent and the vast region of Southeast Asia (see Figs. 6 and 9). The variance in SV predictions between the N-1/N-2 and IGRF models is reduced to [− 40, 40] nT/year, except for South Africa, where it reaches 60 nT/year due to the lack of Kosmos-321 data (see Figs. 7b and 8b). We examine the SV predicted by the different models in more detail in the next section.
Comparison of the SV (Z) derived from the N-1 and N-2 models over 1964–1970 with the SV (Z) according to the IGRF model over 1965–1970. The map shows the SV (Z) differences calculated at the nodes of the regular geographic grid between 60°N and 60°S at a spherical surface with the mean Earth radius, 6371.2 km
To quantify the discrepancies between the IGRF, M-1/M-2 and N-1/N-2 models, we compare their power spectra in terms of the mean square field averaged over a spherical surface for each harmonic degree (e.g., Lowes 1966, 1974). Figure 10 displays a spectrum for each considered model as well as the spectrum of the difference between each pair of models: IGRF and M-1, IGRF and N-1, IGRF and M-2, IGRF and N-2 at the Earth's mean radius (6371.2 km). It follows that the total square error in the N-1 and N-2 field models is minimal for both 1964 and 1970.
Power spectra of models IGRF, M-1,2 and N-1,2 and the differences between IGRF and M-1,2, IGRF and N-1,2 at epochs 1964.826 (October 29, 1964) (a) and 1970.144 (February 22, 1970) (b), respectively, at Earth mean radius (6371.2 km)
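The Lowes–Mauersberger spectrum shown in Fig. 10 can be computed directly from the Gauss coefficients; a minimal sketch follows (the dictionary layout keyed by (n, m, 'g'|'h') is an assumed convention, not the paper's format):

```python
import numpy as np

def lowes_spectrum(coeffs, n_max, radius_km=6371.2, reference_km=6371.2):
    """Mean square field per harmonic degree n at radius r:
    R_n = (n + 1) * (a / r)^(2n + 4) * sum_m (g_nm^2 + h_nm^2), in nT^2."""
    ratio = reference_km / radius_km
    spectrum = np.zeros(n_max + 1)
    for (n, m, kind), value in coeffs.items():
        if n <= n_max:
            spectrum[n] += value**2
    for n in range(1, n_max + 1):
        spectrum[n] *= (n + 1) * ratio**(2 * n + 4)
    return spectrum[1:]   # degrees 1..n_max

# The spectrum of the difference of two models uses the coefficient differences.
def difference_spectrum(coeffs_a, coeffs_b, n_max):
    diff = {k: coeffs_a.get(k, 0.0) - coeffs_b.get(k, 0.0)
            for k in set(coeffs_a) | set(coeffs_b)}
    return lowes_spectrum(diff, n_max)
```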
Comparison of the model predictions with observatory data
For additional validation of the constructed models, it is useful to employ high-precision ground-based observations of the geomagnetic field provided by magnetic observatories. Though their geographic coverage is uneven (Kozyreva et al. 2019), the main value of the observatory data lies in the continuity and long duration of full magnetic vector observations at a fixed point in space. These data are widely used for both external (Petrov and Krasnoperov 2020) and internal field studies. In particular, they make it possible to study the SV of the geomagnetic field and to use these data as calibrating values for satellite observations and derived models. Today, the quality standards for such data and observatories are specified by the INTERMAGNET global network (Love and Chulliat 2013). A complete data archive from magnetic observatories over the entire history of observations is available online at the World Data Center for Geomagnetism in Edinburgh (http://www.wdc.bgs.ac.uk/).
To validate the IGRF, M-1, M-2, N-1 and N-2 models, we select data from nine mid-latitude observatories from both hemispheres for the period 1955–1980, which overlaps the considered years 1964, 1965 and 1970. The data from near-equatorial and high-latitude observatories are not considered due to the higher impact of external magnetic fields caused by equatorial and polar electrojets. A map with the location of the selected observatories is shown in Fig. 11.
Map of locations of the considered observatories along with their IAGA codes in the latitudinal range from 30° to 50° in both hemispheres
First, we linearly interpolate data gaps in the original hourly values available at the web portal of the World Data Center. Then, we derive monthly means from the hourly values using a running average over a 30-day window. Despite the different completeness and quality of the data at different observatories for the period under consideration, the data presented are quite representative for evaluating model predictions. We compare them with the modelled values of the Z component at the observatory locations for 1964/1965 and 1970. Figure 12 shows how they fit the continuous series of the Z component measured at the observatories.
Z component of the geomagnetic field measured continuously at nine magnetic observatories over 1955–1980 and predicted by different models for 1964/1965 and 1970. IAGA codes of the observatories are indicated in the upper right of each plot. The blue line shows the observatory data (monthly means), red circles show IGRF values for the beginning of 1965 and 1970, green circles show the M-1 and M-2 values and blue circles show the N-1 and N-2 values for 1964 and 1970. Data from CLF, FRD, VIC and GNA observatories are corrected for crustal biases according to (Verbanac et al. 2015)
For some of the considered observatories (CLF, FRD, VIC, GNA) the data are corrected for local crustal anomalies (also known as observatory biases), which were quantified using recent high-quality satellite data (Verbanac et al. 2015). Unfortunately, the biases are unknown for the rest of the considered observatories (MMB, TKT, PAF, HER, PIL). Table 4 contains SV values for the Z component derived from the IGRF, M-1/M-2 and N-1/N-2 models by simple subtraction of the 1964/1965 value from the 1970 value divided by the number of years in between, and their absolute deviations from the SV based on observatory data (ΔSV). The observatory-based SV is calculated as the difference between the 1970 and 1964 yearly means divided by the number of years in between.
Table 4 Comparison of SV (Z) predictions based on IGRF, M-1,2 and N-1,2 models with observatory data. All values are given in nT/year
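A sketch of the observatory-side processing described above; the hourly Z series is assumed to be a pandas Series with a DatetimeIndex, and the gap filling, 30-day running mean and the simple SV estimates are shown:

```python
import pandas as pd

def process_observatory_z(hourly_z):
    """Gap-filled 30-day running means of Z, as plotted in Fig. 12."""
    z = hourly_z.interpolate(method="linear")                 # fill data gaps
    return z.rolling(window=30 * 24, center=True).mean()      # ~30-day running mean

def observatory_sv(hourly_z, year_a=1964, year_b=1970):
    """SV of Z: difference of yearly means divided by the number of years."""
    z = hourly_z.interpolate(method="linear")
    yearly = z.groupby(z.index.year).mean()
    return (yearly.loc[year_b] - yearly.loc[year_a]) / (year_b - year_a)

def model_sv(z_model_a, z_model_b, epoch_a=1964.0, epoch_b=1970.0):
    """SV predicted at one location by two epoch models, in nT/year."""
    return (z_model_b - z_model_a) / (epoch_b - epoch_a)
```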
For most of the observatories the actual SV is fitted very well by the new N-1/N-2 models, despite large differences in quantity and spatial distribution between the selected Kosmos-49 and Kosmos-321 data (see Fig. 7). Moreover, in some cases (MMB, CLF, GNA and PAF) the SV is traced even better by the new models than by the IGRF model, according to the ΔSV values in Table 4. The worst data approximation by the new models is observed around 1970 at the HER observatory (South Africa), which is due to the highest discrepancies between the N-2 model and IGRF in this region (see Figs. 8b and 9).
In a number of cases, there is a high consistency between all three models, as, for example, for the FRD (USA) or HER (South Africa) observatories in 1964. However, the majority of plots presented in Fig. 12 contain unacceptable outliers produced by M-1 and M-2 model predictions, which makes them inapplicable for studying the SV over the considered period.
The efforts undertaken by Krasnoperov et al. (2020) have made it possible for the first time to access and process digital arrays of the historical satellite observations of the total field in 1964 and 1970. Up to now, these data, along with the OGO series observations, represent the earliest spaceborne measurements of the geomagnetic field available online.
The high consistency between the direct geomagnetic field measurements by the Kosmos-49 and Kosmos-321 satellites and the modelled total intensity according to the IGRF model indicates the high quality of these historical data. This argues for their applicability for scientific research and makes them unique and valuable material for retrospective study of the geomagnetic field dynamics.
Comparison of the historical models of the core geomagnetic field based on the Kosmos-49 (model M-1) and Kosmos-321 (model M-2) data with another historical model for the year 1960, based solely on ground and marine observations (including those carried out aboard the famous non-magnetic schooner Zarya) (Adam et al. 1963, 1964), suggests that the transition to satellite measurements significantly improved geomagnetic field modelling in regions where ground-based data were insufficient, namely over the oceans and the Antarctic zone. However, the models M-1 and M-2 are clearly affected by the Backus effect, which makes them inapplicable for studying the core magnetic field dynamics.
The main value of analytical models expanding the core geomagnetic field into spherical harmonics lies not so much in calculating the instantaneous characteristics of the field as in estimating its variability over time, i.e., the SV. We present new models based on the data from the Kosmos-49 (model N-1) and Kosmos-321 (model N-2) satellites that take the Backus effect into account. As a result, these models fit the actual 1964–1970 SV derived from ground-based observations much better than the M-1 and M-2 models based on the same satellite data. The results predicted by N-1 and N-2 are satisfactory despite the large differences in quantity and spatial distribution between the 1964 and 1970 data sampling points used for their construction.
For some of the ground-based observation sites, the proposed N-1 and N-2 models approximate the SV even better than the IGRF model. This suggests that the Kosmos-49 and Kosmos-321 data were not fully incorporated into the IGRF model for the period 1964–1970. However, for the regions where raw satellite data are not available, the new model predictions remain poor.
Kosmos-49/321 satellite data are available at World Data Center for Solar-Terrestrial Physics in Moscow (https://usd.wdcb.ru/indexen.html) and PANGAEA data repository (https://doi.org/10.1594/PANGAEA.907927). OGO-6, Magsat, Oersted, and Swarm satellite data are available at https://www.space.dtu.dk/english/Research/Scientific_data_and_models/Magnetic_Satellites. The observatory data are available at the World Data Center for Geomagnetism in Edinburgh (http://www.wdc.bgs.ac.uk/), INTERMAGNET (http://intermagnet.org) and Interregional Geomagnetic Data Center (http://geomag.gcras.ru).
The developed software is not available online.
Adam NV, Benkova NP, Orlov VP, Osipov NK, Tyurmina LO (1963) Spherical analysis of the main geomagnetic field and secular variations. Geomag Aeron 3:271–285
Adam NV, Osipov NK, Tyurmina LO, Shlyakhtina AP (1964) Spherical harmonic analysis of world magnetic charts for the 1960 epoch. Geomagn Aero 4:878–879
Adhikari B, Dahal S, Kumar MR, Nirakar S, Nidhi CD, Ballav SS, Sarala A, Chapagain NP (2019) Analysis of solar, interplanetary, and geomagnetic parameters during solar cycles 22, 23, and 24. Russ J Earth Sci 19:1003. https://doi.org/10.2205/2018ES000645
Alken P, Thébault E, Beggan CD et al (2021a) International geomagnetic reference field: the thirteenth generation. Earth Planets Space 73:49. https://doi.org/10.1186/s40623-020-01288-x
Alken P, Thébault E, Beggan CD et al (2021b) Evaluation of candidate models for the 13th generation International Geomagnetic Reference Field. Earth Planets Space 73:48. https://doi.org/10.1186/s40623-020-01281-4
Backus GE (1970) Non-uniqueness of the external geomagnetic field determined by surface intensity measurements. J Geophys Res 75:6337–6341
Cain JC (1971) Geomagnetic models from satellite surveys. Rev Geophys Space Phys 9(2):259–273
Cain JC, Hendricks SJ, Langel RA, Hudson WV (1967) A proposed model for the international geomagnetic reference field-1965. J Geomagn Geoelectr 19(4):335–355
Dolginov SH (1978) Investigation of the Earth's magnetic field. In: Blagonravov AA (ed) Successes of the Soviet Union in space exploration 1967–1977. p 760 (in Russian)
Dolginov ShSh, Zhuzgov LN, Seliutin VA (1961) Magnetometric equipment of the third soviet artificial earth satellite. Am Rocket Soc J. https://doi.org/10.2514/8.5776
Dolginov ShSh, Nalivayko VI, Tyurmin AV, Chinchevoy MM, Brodskaya RE, Zlotin GN, Kiknadze IN, Tyurmina LO (1967) Catalogue of measured and computed values of the geomagnetic field intensity along the orbit of the Kosmos-49 satellite, Academy of Sciences USSR IZMIRAN, Moscow, USSR (in Russian)
Dolginov ShSh, Kozlov AN, Chinchevoi MM (1970) Magnetometers for space measurements. Revue De Physique Appliquee 5(1):178–182. https://doi.org/10.1051/rphysap:0197000501017800
Dolginov ShSh, Kozlov AN, Kolesova VI, Kosacheva VP, Nalivaiko VI, Strunnikova LI, Tyurmin AI, Tyurmina LO, Fastovsky UV, Cherevko TN, Aleksashin EP, Velchinskaya AS, Gavrilova EA, Pokras VI, Sinitsyn VI, Yagovkin AP (1976) Catalogue of measured and computed values of the geomagnetic field intensity along the orbit of the Kosmos-321 satellite. Nauka Publishers, Moscow (in Russian)
Holme R, James MA, Lühr H (2005) Magnetic field modelling from scalar-only data: resolving the Backus effect with the equatorial electrojet. Earth Planet Space 57:1203–1209. https://doi.org/10.1186/BF03351905
Jackson JE, Vette JI (1975) OGO program summary, NASA SP-7601
Khokhlov A, Hulot G, Le Mouel J-L (1997) On the Backus effect—I. Geophys J Int 130(3):701–703. https://doi.org/10.1111/j.1365-246X.1997.tb01864.x
Kozyreva OV, Pilipenko VA, Soloviev AA, Engebretson MJ (2019) Virtual magnetograms—a tool for the study of geomagnetic response to the solar wind/IMF driving. Russ J Earth Sci 19:ES2005. https://doi.org/10.2205/2019ES000654
Krasnoperov R, Peregoudov D, Lukianova R, Soloviev A, Dzeboev B (2020) Early Soviet satellite magnetic field measurements in the years 1964 and 1970. Earth Syst Sci Data 12:555–561
Love JJ, Chulliat A (2013) An international network of magnetic observatories. Eos Trans AGU 94(42):373–374. https://doi.org/10.1002/2013EO420001
Lowes FJ (1966) Mean-square values on sphere of spherical harmonic vector fields. J Geophys Res 71(8):2179. https://doi.org/10.1029/JZ071i008p02179
Lowes FJ (1974) Spatial power spectrum of the main geomagnetic field, and extrapolation to the core. Geophys J Int 36(3):717–730. https://doi.org/10.1111/j.1365-246X.1974.tb00622.x
Mandea M (2006) Magnetic satellite missions: where have we been and where are we going? CR Geosci 338(14–15):1002–1011. https://doi.org/10.1016/j.crte.2006.05.011
Petrov VG, Krasnoperov RI (2020) The aspects of K-index calculation at Russian Geomagnetic Observatories. Russ J Earth Sci 20:ES6008. https://doi.org/10.2205/2020ES000724
Soloviev A, Bogoutdinov Sh, Agayan S, Redmon R, Loto'aniu TM, Singer HJ (2018) Automated recognition of jumps in GOES satellite magnetic data. Russ J Earth Sci 18:ES4003. https://doi.org/10.2205/2018ES000626
Stern DP, Bredekamp JH (1975) Error enhancement in geomagnetic models derived from scalar data. J Geophys Res 80:1776–1782
Ultré-Guérard P, Hamoudi M, Hulot G (1998) Reducing the Backus effect given some knowledge of the dip-equator. Geophys Res Lett 25(16):3201–3204. https://doi.org/10.1029/98GL02211
Verbanac G, Mandea M, Bandic M, Subasic S (2015) Magnetic observatories: biases over CHAMP satellite mission. Solid Earth 6(2):775–781
The results presented in this paper use data collected at the INTERMAGNET magnetic observatories (http://intermagnet.org). We express our gratitude to the national institutes that support them, to the INTERMAGNET community for promoting high standards of magnetic observatory practice, and to the ISC World Data System (https://www.worlddatasystem.org/) and the Interregional Geomagnetic Data Center (http://geomag.gcras.ru) for making the data available online. The facilities of the GC RAS Common Use Center "Analytical Center of Geomagnetic Data" (http://ckp.gcras.ru) were used for conducting the research. The authors wish to thank two anonymous reviewers for their valuable comments, which helped to improve the presentation of the material. In addition, the authors are grateful to Dr. Roman Krasnoperov from GC RAS for providing rare information on the Kosmos technical specifications.
The research was carried out with the financial support of the RFBR, MOST (China) and DST (India) as a part of the scientific project No. 19-55-80021 (AS) and in the framework of budgetary funding of the Geophysical Center RAS, adopted by the Ministry of Science and Higher Education of the Russian Federation (DP).
Geophysical Center of the Russian Academy of Sciences, Moscow, Russian Federation
A. A. Soloviev & D. V. Peregoudov
Schmidt Institute of Physics of the Earth of the Russian Academy of Sciences, Moscow, Russian Federation
A. A. Soloviev
National Research University "Moscow Power Engineering Institute" (Technical University), Moscow, Russian Federation
D. V. Peregoudov
AS set the problem, elaborated the research scheme and wrote the text. DP did programming and generated figures. Both authors read and approved the final manuscript.
Correspondence to A. A. Soloviev.
Appendix 1. Gauss coefficients of the models M-1 and M-2
See Tables 5 and 6.
Table 5 Gauss coefficients of the geomagnetic field model at 1964.0 epoch (M-1) based on the measurements from the Kosmos-49 satellite (Dolginov et al. 1967) (the coefficients are listed in three columns)
Table 6 Gauss coefficients of the geomagnetic field model at 1970.0 epoch (M-2) based on the measurements from the Kosmos-321 satellite (Dolginov et al. 1976) (the coefficients are listed in three columns)
Soloviev, A.A., Peregoudov, D.V. Verification of the geomagnetic field models using historical satellite measurements obtained in 1964 and 1970. Earth Planets Space 74, 187 (2022). https://doi.org/10.1186/s40623-022-01749-5
Geomagnetism
Satellite measurements
Magnetic observatories
Core field models
Secular variation
|
CommonCrawl
|
Number (mult/div)
Multiplication and repeated addition (1,2,3,5,10)
Interpret products (1,2,3,5,10)
Arrays as products (1,2,3,5,10)
Multiplication and division using groups (1,2,3,5,10)
Multiplication and division using arrays (1,2,3,5,10)
Find unknowns in multiplication and division problems (1,2,3)
Find unknowns in multiplication and division problems (5,10)
Find unknowns in multiplication and division problems (1,2,3,5,10)
Multiplication and division tables (2,3,5,10)
Multiplication and division (turn arounds and fact families) (0,1,2,3,5,10)
Number sentences and word problems (1,2,3,5,10)
Extending multiplication and division calculations using patterns (1,2,3,5,10)
Problem solving with multiplication and division (1,2,3,5,10)
How do we know if the calculator is correct? (Investigation)
Multiplication as repeated addition (10x10)
Interpreting products (10x10)
Arrays as products (10x10)
Multiplication and division using groups (10x10)
Multiplication and division using arrays (10x10)
Find unknowns in multiplication and division problems (0,1,2,4,8)
Find unknowns in multiplication and division problems (3,6,7,9)
Find unknowns in multiplication and division problems (mixed)
Multiplication and division tables (10x10)
Multiplication and division (turn arounds and fact families) (10x10)
Find quotients (10x10)
Number sentences and word problems (10x10)
Multiplication and division by 10
Extending multiplication and division calculations using patterns (10x10)
Problem solving with multiplication and division (10x10)
Properties of multiplication (10x10)
Multiplication and division by 10 and 100
Distributive property for multiplication (10x10)
Use the distributive property (10x10)
Use rounding to estimate solutions
Multiply a two digit number by a small single digit number using an area model
Multiply a two digit number by a small single digit number
Multiply a single digit number by a two digit number using an area model
Multiply a single digit number by a two digit number
Multiply a single digit number by a three digit number using an area model
Multiply a single digit number by a three digit number
Multiply a single digit number by a four digit number using an area model
Multiply a single digit number by a four digit number
Multiply 2 two digit numbers using an area model
Multiply 2 two digit numbers
Multiply a two digit number by a 3 digit number
Multiply 3 numbers together
Divide a 2 digit number by a 1 digit number using area or array model
Divide a 2 digit number by a 1 digit number
Divide a 3 digit number by a 1 digit number using short division algorithm
Divide a 3 digit number by a 1 digit number resulting in a remainder
Divide a 3 digit number by a 1 digit number using short division algorithm, with remainders
Multiply various single and double digit numbers
Extend multiplicative strategies to larger numbers
Divide various numbers by single digits
Solve division problems presented within contexts
Solve multiplication and division problems involving objects or words
Multiply various single, 2 and 3 digit numbers
Divide various 4 digit numbers by 2 digit numbers
When we multiply a one-digit number by a three-digit number, there's a great way of using the area of rectangles to help us.
Remember, to find the area of a rectangle, we use the formula:
$\text{Area of a rectangle }=\text{length }\times\text{width }$
Now take a look at the video to see how you can break your multiplication problem up into smaller steps, using rectangles.
We can break up large rectangles into smaller rectangles and find the areas to solve large multiplication questions.
One way to break up three-digit numbers is to use the hundreds, tens and unit values.
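As an aside (not part of the original lesson), the same place-value split can be written as a tiny program; the helper name below is made up for illustration.

```python
def area_model(three_digit, single_digit):
    """Split a three-digit number into hundreds, tens and units,
    multiply each part, and add up the partial areas."""
    hundreds = (three_digit // 100) * 100
    tens = ((three_digit // 10) % 10) * 10
    units = three_digit % 10
    parts = [hundreds * single_digit, tens * single_digit, units * single_digit]
    return parts, sum(parts)

parts, total = area_model(344, 3)
print(parts, total)  # [900, 120, 12] 1032, i.e. 344 x 3 = 1032
```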
We want to find $344\times3$ using the area model.
Find the area of the first rectangle.
Find the area of the second rectangle.
Find the area of the third rectangle.
What is the total area of all three rectangles?
So what is $344\times3$?
We want to use the area model to find $225\times4$.
Fill in the areas of each rectangle.
[Area-model grid for $225\times4$: columns $200$, $20$ and $5$, row $4$, with each cell's area left blank to fill in; a second grid in the exercise uses the parts $900$ and $7$.]
What is the total area of both rectangles?
|
CommonCrawl
|
Solved (Free): If the mean number of cigarettes smoked by pregnant women is 16 and the standard deviation is 8
By Dr. Raju Chaudhari
Apr 12, 2021 normal distribution, probability distribution, Statistics
If the mean number of cigarettes smoked by pregnant women is 16 and the standard deviation is 8, find the probability that in a random sample of 100 pregnant women the mean number of cigarettes smoked will be more than 18.
Also find the probability that in a random sample of 100 pregnant women the mean number of cigarettes smoked will be less than 18.
Let $X$ denote the number of cigarettes smoked by pregnant women.
Let $E(X)=\mu$, $V(X)=\sigma^2$, where $\mu = 16$, $\sigma = 8$.
A sample of $n = 100$ pregnant women is selected at random; then, using the central limit theorem for large $n$, $\overline{X} \sim N(\mu, \sigma^2/n)$.
So $Z=\frac{\overline{X}-\mu}{\sigma/\sqrt{n}} \sim N(0,1)$.
The probability that in a random sample of 100 pregnant women the mean number of cigarettes smoked will be more than 18 is
$$ \begin{aligned} P(\overline{X} > 18) & =1-P(\overline{X} < 18)\\ &= 1-P\bigg(\frac{\overline{X} -\mu}{\sigma/\sqrt{n}}< \frac{18 -16}{8/\sqrt{100}}\bigg)\\ &= 1-P(Z < 2.5)\\ &= 1-0.9938\\ &=0.0062 \end{aligned} $$
Thus the probability that in a random sample of 100 pregnant women the mean number of cigarettes smoked will be more than 18 is $0.0062$.
The probability that in a random sample of 100 pregnant women the mean number of cigarettes smoked will be less than 18 is
$$ \begin{aligned} P(\overline{X} < 18) &= P\bigg(\frac{\overline{X} -\mu}{\sigma/\sqrt{n}} < \frac{18 -16}{8/\sqrt{100}}\bigg)\\ &= P(Z < 2.5)\\ &= 0.9938 \end{aligned} $$
Thus the probability that in a random sample of 100 pregnant women the mean number of cigarettes smoked will be less than 18 is $0.9938$.
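As a quick numerical check of both answers (not part of the original solution; it assumes SciPy is installed), the same probabilities follow from the standard normal CDF:

```python
from math import sqrt
from scipy.stats import norm

mu, sigma, n = 16, 8, 100
se = sigma / sqrt(n)        # standard error of the sample mean = 0.8
z = (18 - mu) / se          # z = 2.5

p_more = 1 - norm.cdf(z)    # P(sample mean > 18) ~ 0.0062
p_less = norm.cdf(z)        # P(sample mean < 18) ~ 0.9938
print(round(p_more, 4), round(p_less, 4))
```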
|
CommonCrawl
|
Step Functions
We will soon look at Riemann-Stieltjes integrals where the integrator $\alpha$ is a step function; first, however, we need to formally define what exactly a step function is.
Definition: A Step Function $\alpha$ on the interval $[a, b]$ is a piecewise constant function with finitely many pieces, i.e., there exists a partition $P = \{a = x_0, x_1, ..., x_n = b \} \in \mathscr{P}[a, b]$ such that $\alpha (x)$ is constant for all $x \in (x_{k-1}, x_k)$ for each $k \in \{1, 2, ..., n \}$. The Jump at $x_k$ for $k \in \{1, 2, ..., n-1 \}$ is defined to be $\alpha(x_k^+) - \alpha(x_k^-)$. For $k = 0$ the jump at $x_0$ is defined to be $\alpha(x_0^+) - \alpha(x_0)$, and for $k = n$ the jump at $x_n$ is defined to be $\alpha (x_n) - \alpha(x_n^-)$.
For example, consider the function $\alpha$ defined on the interval $[0, 3]$ by:
\begin{align} \quad \alpha (x) = \left\{\begin{matrix} 1 & \mathrm{if} \: 0 \leq x \leq 1 \\ 2 & \mathrm{if} \: 1 < x \leq \frac{3}{2}\\ 4 & \mathrm{if} \: \frac{3}{2} < x \leq 2\\ -1 & \mathrm{if} \: 2 < x \leq 3 \end{matrix}\right. \end{align}
Then $\alpha$ is indeed a step function because $\alpha (x)$ is constant on the intervals $(0, 1)$, $\left ( 1, \frac{3}{2} \right )$, $\left ( \frac{3}{2}, 2 \right )$ and $(2, 3)$ corresponding to the partition $P = \left \{ 0, 1, \frac{3}{2}, 2, 3 \right \} \in \mathscr{P}[0, 3]$; its graph consists of four horizontal segments at heights $1$, $2$, $4$ and $-1$.
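A quick, informal way to check the piecewise-constant behaviour is to evaluate $\alpha$ numerically; the following sketch is my own addition and is not part of the source page.

```python
def alpha(x):
    """The example step function on [0, 3]."""
    if 0 <= x <= 1:
        return 1
    elif x <= 1.5:
        return 2
    elif x <= 2:
        return 4
    elif x <= 3:
        return -1
    raise ValueError("x is outside [0, 3]")

# The jumps at the interior partition points 1, 3/2 and 2 are 1, 2 and -5.
for x in (0.5, 1.0, 1.25, 1.75, 2.5):
    print(x, alpha(x))
```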
Notice that the points of discontinuity of a step function are the "joining" points of these subintervals. In the example above, the locations of possible discontinuities are $x_0 = 0$, $x_1 = 1$, $x_2 = \frac{3}{2}$, $x_3 = 2$, and $x_4 = 3$.
It is important to note that, given an arbitrary partition $P = \{ a = x_0, x_1, ..., x_n = b \} \in \mathscr{P} [a, b]$, if $\alpha$ is a step function that is constant on each open subinterval $(x_{k-1}, x_k)$ for each $k \in \{1, 2, ..., n \}$, then by the definition of a step function $\alpha$ need not be left or right continuous at the points $x_0, x_1, x_2, ..., x_n$.
|
CommonCrawl
|
Closure ordinals for inductive types with function spaces
Functors built from finite products and sums have closure ordinal $\omega$, detailed nicely in this manuscript by Francois Metayer. That is, we can reach the inductive type $nat := \mu X. 1 + X$ by iterating the functor $1 + X$, which reaches its fixed point after $\omega$ iterations.
But once we allow constant exponentiation, such as in $\mu X. 1 + X + (nat \rightarrow X)$, then $\omega$ isn't enough.
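For concreteness (my own sketch, not from the original post), this type can be written as a Lean-style inductive definition, where the function-space constructor is exactly what pushes the closure ordinal past $\omega$:

```lean
-- Sketch of μ X. 1 + X + (nat → X) as a Lean 4 inductive type
inductive O : Type where
  | base : O               -- the "1" summand
  | step : O → O           -- the "X" summand
  | lim  : (Nat → O) → O   -- the "nat → X" summand: countable branching
```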
I'm looking for results that include exponentiation. What kind of ordinals are sufficient?
Especially appreciated would be a reference that presents a proof that such functors are $\alpha$-continuous for some ordinal $\alpha$ like in the above manuscript.
lo.logic type-theory ct.category-theory
Andrew Cave
The answer to your question depends on several things, the most important of which is the size of your function spaces. I'll explain. Define $$O_0 = nat $$ $$O_{n+1} = \mu X.\ 1+X+(O_n\rightarrow X)$$ As you noted in your answer, each $O_n$ can be considered internally to be the $n$-th regular cardinal of your system. In set theory, this datatype can be represented by an actual ordinal and is appropriately huge.
However, such constructions may be added to some version of type theory, and the question becomes: what ordinal is needed to give a set-theoretic interpretation to this construct? Now if we restrict ourselves to constructive semantics, a natural idea is to try to interpret each type by the set of "realizers" of this type, which is a subset of the set of $\lambda$-terms, or equivalently, the natural numbers $\mathbb{N}$.
In this case, it is easy to show that the ordinal is countable for any $O_n$, but this ordinal grows very quickly. How quickly? Again, this depends on the amount of freedom you have when trying to build functions. The machinery for building such ordinals is described in the theory of Large Countable Ordinals, about which Wikipedia has, surprisingly, a lot to say. In general it is easy to show that the ordinals in question are smaller than the Church-Kleene ordinal, unless you allow non-constructive means of building functions (say $Beaver(n)$, which computes the busy beaver number for machines with $n$ states).
This isn't saying much though, except that in a constructive theory, you only require constructive ordinals to build interpretations. There is a bit more to say though. First, there is a very nice presentation by Thierry Coquand that details that in the absence of an eliminator for all other types but $nat$, you can build $O_1$ in exactly $\epsilon_0$ steps.
In general there seems to be a correspondence between the logical strength of a type theory, and the size of the largest ordinal that it can represent in this manner. This correspondence is the subject matter of Ordinal Analysis, which has been studied at great length since the late sixties, and is still under study today (with some amazing open questions). Warning though: the subject matter is as technical as it is fascinating.
cody
I think I've found an answer that works in categories sufficiently like Set. It's theorem 3.1.12 in Initial algebras and terminal coalgebras: a survey by Adamek, Milius, and Moss.
The answer is that no one ordinal is sufficient for all such functors. They get arbitrarily large.
More precisely, for $F(X) = C_0\times(A_0 \rightarrow X) + C_1\times(A_1 \rightarrow X)\;+\;...\;+\; C_n\times(A_n \rightarrow X)$, the answer is the first regular ordinal larger than all the $A_i$. We say $\alpha$ is regular if for all $\beta < \alpha$, all $\beta$-indexed chains of ordinals < $\alpha$ have a supremum < $\alpha$. Roughly, $\alpha$ is not reachable from a smaller chain of smaller ordinals.
The key result is that for $\alpha$ a regular ordinal, the well-founded $\alpha$-branching trees have transfinite depth < $\alpha$.
Informally, I understand it as any $f : A_k \rightarrow F^\alpha(0)$ (i.e. $f : A_k \rightarrow \bigcup_{i<\alpha} F^i(0)$) "fits into" $A_k \rightarrow F^j(0)$ where $j := \mathrm{sup}_{(a:A_k)} \text{``the i such that }f(a) \text{ fits into } F^i(0)"$. That $j < \alpha$ holds is precisely because $\alpha$ is regular and $|A_k| < \alpha$.
So $(A_k \rightarrow \bigcup_{i<\alpha}F^i(0)) \subseteq \bigcup_{j<\alpha} (A_k \rightarrow F^j(0))$ for each $k$.
So extending this across the $+$s and $\times$s, we have: $F(F^\alpha(0)) \subseteq \bigcup_{j<\alpha}F(F^j(0)) = \bigcup_{j<\alpha}F^j(0) = F^\alpha(0)$, and so it's reached the fixed point at $\alpha$.
It's not quite clear to me how to generalize this argument beyond Set though. How do we take $A_k$-indexed colimits?
|
CommonCrawl
|
March 2020, 28(1): 291-309. doi: 10.3934/era.2020017
Normalized solutions for Choquard equations with general nonlinearities
Shuai Yuan , Sitong Chen , and Xianhua Tang
School of Mathematics and Statistics, Central South University, Changsha, Hunan 410083, China
* Corresponding author: Sitong Chen
Received November 2019 Revised January 2020
Fund Project: This work was partially supported by the National Natural Science Foundation of China (No: 1197011711)
In this paper, we prove the existence of positive solutions with prescribed $ L^{2} $-norm to the following Choquard equation:

$ \begin{equation*} -\Delta u-\lambda u = (I_{\alpha}*F(u))f(u), \ \ \ \ x\in \mathbb{R}^3, \end{equation*} $

where $ \lambda\in \mathbb{R} $, $ \alpha\in (0,3) $ and $ I_{\alpha}: \mathbb{R}^3\rightarrow \mathbb{R} $ is the Riesz potential. Under weaker conditions, by using a minimax procedure and some new analytical techniques, we show that for any $ c>0 $, the above equation possesses at least a couple of weak solutions $ (\bar{u}_c, \bar{ \lambda}_c)\in \mathcal{S}_{c}\times \mathbb{R}^- $ with $ \|\bar{u}_c\|_{2}^{2} = c $.
Keywords: Choquard equations, normalized solution, variational method, minimax method, weak solutions.
Mathematics Subject Classification: Primary: 58F15, 58F17; Secondary: 53C35.
Citation: Shuai Yuan, Sitong Chen, Xianhua Tang. Normalized solutions for Choquard equations with general nonlinearities. Electronic Research Archive, 2020, 28 (1) : 291-309. doi: 10.3934/era.2020017
|
CommonCrawl
|
The search for neutron-antineutron oscillations at the Sudbury Neutrino Observatory (1705.00696)
SNO Collaboration: B. Aharmim, S. N. Ahmed, A. E. Anthony, N. Barros, E. W. Beier, A. Bellerive, B. Beltran, M. Bergevin, S. D. Biller, K. Boudjemline, M. G. Boulay, B. Cai, Y. D. Chan, D. Chauhan, M. Chen, B. T. Cleveland, G. A. Cox, X. Dai, H. Deng, J. A. Detwiler, P. J. Doe, G. Doucas, P.-L. Drouin, F. A. Duncan, M. Dunford, E. D. Earle, S. R. Elliott, H. C. Evans, G. T. Ewan, J. Farine, H. Fergani, F. Fleurot, R. J. Ford, J. A. Formaggio, N. Gagnon, J. TM. Goon, K. Graham, E. Guillian, S. Habib, R. L. Hahn, A. L. Hallin, E. D. Hallman, P. J. Harvey, R. Hazama, W. J. Heintzelman, J. Heise, R. L. Helmer, A. Hime, C. Howard, M. Huang, P. Jagam, B. Jamieson, N. A. Jelley, M. Jerkins, K. J. Keeter, J. R. Klein, L. L. Kormos, M. Kos, A. Kruger, C. Kraus, C. B. Krauss, T. Kutter, C. C. M. Kyba, R. Lange, J. Law, I. T. Lawson, K. T. Lesko, J. R. Leslie, I. Levine, J. C. Loach, R. MacLellan, S. Majerus, H. B. Mak, J. Maneira, R. D. Martin, N. McCauley, A. B. McDonald, S. R. McGee, M. L. Miller, B. Monreal, J. Monroe, B. G. Nickel, A. J. Noble, H. M. O'Keeffe, N. S. Oblath, C. E. Okada, R. W. Ollerhead, G. D. OrebiGann, S. M. Oser, R. A. Ott, S. J. M. Peeters, A. W. P. Poon, G. Prior, S. D. Reitzner, K. Rielage, B. C. Robertson, R. G. H. Robertson, M. H. Schwendener, J. A. Secrest, S. R. Seibert, O. Simard, J. J. Simpson, D. Sinclair, P. Skensved, T. J. Sonley, L. C. Stonehill, G. Tesic, N. Tolich, T. Tsui, R. Van Berg, B. A. VanDevender, C. J. Virtue, B. L. Wall, D. Waller, H. Wan Chan Tseung, D. L. Wark, J. Wendland, N. West, J. F. Wilkerson, J. R. Wilson, A. Wright, M. Yeh, F. Zhang, K. Zuber
May 1, 2017 hep-ex, nucl-ex, physics.ins-det
Tests of $B-L$ symmetry breaking models are important probes in the search for new physics. One proposed model with $\Delta(B-L)=2$ involves the oscillation of a neutron to an antineutron. In this paper a new limit on this process is derived from the data acquired during all three operational phases of the Sudbury Neutrino Observatory experiment. The search concentrated on oscillations occurring within the deuteron, and 23 events were observed against a background expectation of 30.5 events. These translate to a lower limit on the nuclear lifetime of $1.48\times 10^{31}$ years at 90% confidence level (CL) when no restriction is placed on the signal likelihood space (unbounded). Alternatively, a lower limit on the nuclear lifetime of $1.18\times 10^{31}$ years at 90% CL is obtained when the signal is forced into a positive likelihood space (bounded). Values for the free oscillation time derived from various models are also provided in this article. This is the first search for neutron-antineutron oscillation with the deuteron as a target.
A Search for Astrophysical Burst Signals at the Sudbury Neutrino Observatory (1309.0910)
B. Aharmim, S. N. Ahmed, A. E. Anthony, N. Barros, E. W. Beier, A. Bellerive, B. Beltran, M. Bergevin, S. D. Biller, K. Boudjemline, M. G. Boulay, B. Cai, Y. D. Chan, D. Chauhan, M. Chen, B. T. Cleveland, G. A. Cox, X. Dai, H. Deng, J. A. Detwiler, M. DiMarco, M. D. Diamond, P. J. Doe, G. Doucas, P.-L. Drouin, F. A. Duncan, M. Dunford, E. D. Earle, S. R. Elliott, H. C. Evans, G. T. Ewan, J. Farine, H. Fergani, F. Fleurot, R. J. Ford, J. A. Formaggio, N. Gagnon, J. TM. Goon, K. Graham, E. Guillian, S. Habib, R. L. Hahn, A. L. Hallin, E. D. Hallman, P. J. Harvey, R. Hazama, W. J. Heintzelman, J. Heise, R. L. Helmer, A. Hime, C. Howard, M. Huang, P. Jagam, B. Jamieson, N. A. Jelley, M. Jerkins, K. J. Keeter, J. R. Klein, L. L. Kormos, M. Kos, C. Kraus, C. B. Krauss, A. Krueger, T. Kutter, C. C. M. Kyba, R. Lange, J. Law, I. T. Lawson, K. T. Lesko, J. R. Leslie, I. Levine, J. C. Loach, R. MacLellan, S. Majerus, H. B. Mak, J. Maneira, R. Martin, N. McCauley, A. B. McDonald, S. R. McGee, M. L. Miller, B. Monreal, J. Monroe, B. G. Nickel, A. J. Noble, H. M. O'Keeffe, N. S. Oblath, R. W. Ollerhead, G. D. Orebi Gann, S. M. Oser, R. A. Ott, S. J. M. Peeters, A. W. P. Poon, G. Prior, S. D. Reitzner, K. Rielage, B. C. Robertson, R. G. H. Robertson, M. H. Schwendener, J. A. Secrest, S. R. Seibert, O. Simard, J. J. Simpson, D. Sinclair, P. Skensved, T. J. Sonley, L. C. Stonehill, G. Tesic, N. Tolich, T. Tsui, R. Van Berg, B. A. VanDevender, C. J. Virtue, B. L. Wall, D. Waller, H. Wan Chan Tseung, D. L. Wark, P. J. S. Watson, J. Wendland, N. West, J. F. Wilkerson, J. R. Wilson, J. M. Wouters, A. Wright, M. Yeh, F. Zhang, K. Zuber
Sept. 4, 2013 nucl-ex, astro-ph.SR
The Sudbury Neutrino Observatory (SNO) has confirmed the standard solar model and neutrino oscillations through the observation of neutrinos from the solar core. In this paper we present a search for neutrinos associated with sources other than the solar core, such as gamma-ray bursters and solar flares. We present a new method for looking for temporal coincidences between neutrino events and astrophysical bursts of widely varying intensity. No correlations were found between neutrinos detected in SNO and such astrophysical sources.
Measurement of scintillation efficiency for nuclear recoils in liquid argon (1004.0373)
D. Gastler, E. Kearns, A. Hime, L. C. Stonehill, S. Seibert, J. Klein, W. H. Lippincott, D. N. McKinsey, J. A. Nikkel
May 8, 2012 nucl-ex, physics.ins-det, astro-ph.IM
The scintillation light yield of liquid argon from nuclear recoils relative to electronic recoils has been measured as a function of recoil energy from 10 keVr up to 250 keVr. The scintillation efficiency, defined as the ratio of the nuclear recoil scintillation response to the electronic recoil response, is 0.25 \pm 0.01 + 0.01(correlated) above 20 keVr.
Combined Analysis of all Three Phases of Solar Neutrino Data from the Sudbury Neutrino Observatory (1109.0763)
SNO Collaboration: B. Aharmim, S. N. Ahmed, A. E. Anthony, N. Barros, E. W. Beier, A. Bellerive, B. Beltran, M. Bergevin, S. D. Biller, K. Boudjemline, M. G. Boulay, B. Cai, Y. D. Chan, D. Chauhan, M. Chen, B. T. Cleveland, G. A. Cox, X. Dai, H. Deng, J. A. Detwiler, M. DiMarco, P. J. Doe, G. Doucas, P.-L. Drouin, F. A. Duncan, M. Dunford, E. D. Earle, S. R. Elliott, H. C. Evans, G. T. Ewan, J. Farine, H. Fergani, F. Fleurot, R. J. Ford, J. A. Formaggio, N. Gagnon, J. TM. Goon, K. Graham, E. Guillian, S. Habib, R. L. Hahn, A. L. Hallin, E. D. Hallman, P. J. Harvey, R. Hazama, W. J. Heintzelman, J. Heise, R. L. Helmer, A. Hime, C. Howard, M. Huang, P. Jagam, B. Jamieson, N. A. Jelley, M. Jerkins, K. J. Keeter, J. R. Klein, L. L. Kormos, M. Kos, C. Kraus, C. B. Krauss, A Kruger, T. Kutter, C. C. M. Kyba, R. Lange, J. Law, I. T. Lawson, K. T. Lesko, J. R. Leslie, J. C. Loach, R. MacLellan, S. Majerus, H. B. Mak, J. Maneira, R. Martin, N. McCauley, A. B. McDonald, S. R. McGee, M. L. Miller, B. Monreal, J. Monroe, B. G. Nickel, A. J. Noble, H. M. O'Keeffe, N. S. Oblath, R. W. Ollerhead, G. D. Orebi Gann, S. M. Oser, R. A. Ott, S. J. M. Peeters, A. W. P. Poon, G. Prior, S. D. Reitzner, K. Rielage, B. C. Robertson, R. G. H. Robertson, R. C. Rosten, M. H. Schwendener, J. A. Secrest, S. R. Seibert, O. Simard, J. J. Simpson, P. Skensved, T. J. Sonley, L. C. Stonehill, G. Tešić, N. Tolich, T. Tsui, R. Van Berg, B. A. VanDevender, C. J. Virtue, H. Wan Chan Tseung, D. L. Wark, P. J. S. Watson, J. Wendland, N. West, J. F. Wilkerson, J. R. Wilson, J. M. Wouters, A. Wright, M. Yeh, F. Zhang, K. Zuber
Sept. 4, 2011 hep-ph, hep-ex, nucl-ex, astro-ph.SR
We report results from a combined analysis of solar neutrino data from all phases of the Sudbury Neutrino Observatory. By exploiting particle identification information obtained from the proportional counters installed during the third phase, this analysis improved background rejection in that phase of the experiment. The combined analysis resulted in a total flux of active neutrino flavors from 8B decays in the Sun of (5.25 \pm 0.16(stat.)+0.11-0.13(syst.))\times10^6 cm^{-2}s^{-1}. A two-flavor neutrino oscillation analysis yielded \Deltam^2_{21} = (5.6^{+1.9}_{-1.4})\times10^{-5} eV^2 and tan^2{\theta}_{12}= 0.427^{+0.033}_{-0.029}. A three-flavor neutrino oscillation analysis combining this result with results of all other solar neutrino experiments and the KamLAND experiment yielded \Deltam^2_{21} = (7.41^{+0.21}_{-0.19})\times10^{-5} eV^2, tan^2{\theta}_{12} = 0.446^{+0.030}_{-0.029}, and sin^2{\theta}_{13} = (2.5^{+1.8}_{-1.5})\times10^{-2}. This implied an upper bound of sin^2{\theta}_{13} < 0.053 at the 95% confidence level (C.L.).
A Monte Carlo simulation of the Sudbury Neutrino Observatory proportional counters (1104.2573)
B. Beltran, H. Bichsel, B. Cai, H. Deng, J. A. Formaggio, S. Habib, A. L. Hallin, A. Hime, M. Huang, C. Kraus, H. R. Leslie, J. C. Loach, R. Martin, S. McGee, M. L. Miller, B. Monreal, J. Monroe, N. S. Oblath, S. J. M. Peeters, A. W. P. Poon, G. Prior, K. Rielage, R. G. H. Robertson, M. W. E. Smith, L. C. Stonehill, N. Tolich, T. Van Wechel, H. Wan Chan Tseung, J. Wendland, J. F. Wilkerson, A. Wright
April 13, 2011 hep-ex, nucl-ex, physics.ins-det
The third phase of the Sudbury Neutrino Observatory (SNO) experiment added an array of 3He proportional counters to the detector. The purpose of this Neutral Current Detection (NCD) array was to observe neutrons resulting from neutral-current solar neutrino-deuteron interactions. We have developed a detailed simulation of the current pulses from the NCD array proportional counters, from the primary neutron capture on 3He through the NCD array signal-processing electronics. This NCD array Monte Carlo simulation was used to model the alpha-decay background in SNO's third-phase 8B solar-neutrino measurement.
Scintillation time dependence and pulse shape discrimination in liquid argon (0801.1531)
W. H. Lippincott, K. J. Coakley, D. Gastler, A. Hime, E. Kearns, D. N. McKinsey, J. A. Nikkel, L. C. Stonehill
Sept. 23, 2008 nucl-ex
Using a single-phase liquid argon detector with a signal yield of 4.85 photoelectrons per keV of electronic-equivalent recoil energy (keVee), we measure the scintillation time dependence of both electronic and nuclear recoils in liquid argon down to 5 keVee. We develop two methods of pulse shape discrimination to distinguish between electronic and nuclear recoils. Using one of these methods, we measure a background and statistics-limited level of electronic recoil contamination to be $7.6\times10^{-7}$ between 60 and 128 keV of nuclear recoil energy (keVr) for a nuclear recoil acceptance of 50% with no nuclear recoil-like events above 72 keVr. Finally, we develop a maximum likelihood method of pulse shape discrimination using the measured scintillation time dependence and predict the sensitivity to WIMP-nucleon scattering in three configurations of a liquid argon dark matter detector.
A Model of Nuclear Recoil Scintillation Efficiency in Noble Liquids (0712.2470)
D.-M. Mei, Z.-B. Yin, L. C. Stonehill, A. Hime
March 11, 2008 astro-ph, nucl-ex
Scintillation efficiency of low-energy nuclear recoils in noble liquids plays a crucial role in interpreting results from some direct searches for Weakly Interacting Massive Particle (WIMP) dark matter. However, the cause of a reduced scintillation efficiency relative to electronic recoils in noble liquids remains unclear at the moment. We attribute such a reduction of scintillation efficiency to two major mechanisms: 1) energy loss and 2) scintillation quenching. The former is commonly described by Lindhard's theory and the latter by Birk's saturation law. We propose to combine these two to explain the observed reduction of scintillation yield for nuclear recoils in noble liquids. Birk's constants $kB$ for argon, neon and xenon determined from existing data are used to predict noble liquid scintillator's response to low-energy nuclear recoils and low-energy electrons. We find that energy loss due to nuclear stopping power that contributes little to ionization and excitation is the dominant reduction mechanism in scintillation efficiency for nuclear recoils, but that significant additional quenching results from the nonlinear response of scintillation to the ionization density.
An array of low-background $^3$He proportional counters for the Sudbury Neutrino Observatory (0705.3665)
J. F. Amsbaugh, J. M. Anaya, J. Banar, T. J. Bowles, M. C. Browne, T. V. Bullard, T. H. Burritt, G. A. Cox-Mobrand, X. Dai, H. Deng, M. Di Marco, P. J. Doe, M. R. Dragowsky, C. A. Duba, F. A. Duncan, E. D. Earle, S. R. Elliott, E.-I. Esch, H. Fergani, J. A. Formaggio, M. M. Fowler, J. E. Franklin, P. Geissbühler, J. V. Germani, A. Goldschmidt, E. Guillian, A. L. Hallin, G. Harper, P. J. Harvey, R. Hazama, K. M. Heeger, J. Heise, A. Hime, M. A. Howe, M. Huang, L. L. Kormos, C. Kraus, C. B. Krauss, J. Law, I. T. Lawson, K. T. Lesko, J. C. Loach, S. Majerus, J. Manor, S. McGee, K. K. S. Miknaitis, G. G. Miller, B. Morissette, A. Myers, N. S. Oblath, H. M. O'Keeffe, R. W. Ollerhead, S. J. M. Peeters, A. W. P. Poon, G. Prior, S. D. Reitzner, K. Rielage, R. G. H. Robertson, P. Skensved, A. R. Smith, M. W. E. Smith, T. D. Steiger, L. C. Stonehill, P. M. Thornewell, N. Tolich, B. A. VanDevender, T. D. Van Wechel, B. L. Wall, H. Wan Chan Tseung, J. Wendland, N. West, J. B. Wilhelmy, J. F. Wilkerson, J. M. Wouters
May 23, 2007 nucl-ex
An array of Neutral-Current Detectors (NCDs) has been built in order to make a unique measurement of the total active flux of solar neutrinos in the Sudbury Neutrino Observatory (SNO). Data in the third phase of the SNO experiment were collected between November 2004 and November 2006, after the NCD array was added to improve the neutral-current sensitivity of the SNO detector. This array consisted of 36 strings of proportional counters filled with a mixture of $^3$He and CF$_4$ gas capable of detecting the neutrons liberated by the neutrino-deuteron neutral current reaction in the D$_2$O, and four strings filled with a mixture of $^4$He and CF$_4$ gas for background measurements. The proportional counter diameter is 5 cm. The total deployed array length was 398 m. The SNO NCD array is the lowest-radioactivity large array of proportional counters ever produced. This article describes the design, construction, deployment, and characterization of the NCD array, discusses the electronics and data acquisition system, and considers event signatures and backgrounds.
Solar Neutrinos from CNO Electron Capture (hep-ph/0309266)
L. C. Stonehill, J. A. Formaggio, R. G. H. Robertson
Nov. 18, 2003 hep-ph, nucl-th
The neutrino flux from the sun is predicted to have a CNO-cycle contribution as well as the known pp-chain component. Previously, only the fluxes from beta+ decays of 13N, 15O, and 17F have been calculated in detail. Another neutrino component that has not been widely considered is electron capture on these nuclei. We calculate the number of interactions in several solar neutrino detectors due to neutrinos from electron capture on 13N, 15O, and 17F, within the context of the Standard Solar Model. We also discuss possible non-standard models where the CNO flux is increased.
|
CommonCrawl
|
Welcome to ShortScience.org!
ShortScience.org is a platform for post-publication discussion aiming to improve accessibility and reproducibility of research ideas.
The website has 1435 public summaries, mostly in machine learning, written by the community and organized by paper, conference, and year.
Reading summaries of papers is useful to obtain the perspective and insight of another reader, why they liked or disliked it, and their attempt to demystify complicated sections.
Also, writing summaries is a good exercise to understand the content of a paper because you are forced to challenge your assumptions when explaining it.
Finally, you can keep up to date with the flood of research by reading the latest summaries on our Twitter and Facebook pages.
Popular (Today)
papers.nips.cc
Spatial Transformer Networks
Jaderberg, Max and Simonyan, Karen and Zisserman, Andrew and Kavukcuoglu, Koray
Neural Information Processing Systems Conference - 2015 via Local Bibsonomy
[link] Summary by NIPS Conference Reviews 3 years ago
This paper presents a novel layer that can be used in convolutional neural networks. A spatial transformer layer computes re-sampling points of the signal based on another neural network. The suggested transformations include scaling, cropping, rotations and non-rigid deformations, whose parameters are trained end-to-end with the rest of the model. The resulting re-sampling grid is then used to create a new representation of the underlying signal through bilinear or nearest-neighbor interpolation. This has interesting implications: the network can learn to co-locate objects in a set of images that all contain the same object, the transformation parameters localize the attention area explicitly, and fine data resolution is restricted to areas important for the task. Furthermore, the model improves over the previous state-of-the-art on a number of tasks.
The layer has one mini neural network that regresses the parameters of a parametric transformation (e.g., affine); then there is a module that applies the transformation to a regular grid, and a third that more or less "reads off" the values at the transformed positions and maps them to a regular grid, hence deforming the image or previous layer. Gradients for back-propagation are derived for a few cases. The results are mostly of the classic deep learning variety, including MNIST and SVHN, but there is also the fine-grained birds dataset. The networks with spatial transformers seem to lead to improved results in all cases.
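As a rough sketch of the sampling machinery (assuming PyTorch; this is not the authors' implementation), the predicted affine parameters are turned into a sampling grid and applied with bilinear interpolation:

```python
import torch
import torch.nn.functional as F

# x: a batch of feature maps; theta: 2x3 affine parameters that would normally
# be regressed by a small localisation network (fixed to the identity here).
x = torch.randn(1, 3, 32, 32)
theta = torch.tensor([[[1.0, 0.0, 0.0],
                       [0.0, 1.0, 0.0]]])

grid = F.affine_grid(theta, size=(1, 3, 32, 32), align_corners=False)
warped = F.grid_sample(x, grid, mode="bilinear", align_corners=False)
print(warped.shape)  # torch.Size([1, 3, 32, 32])
```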
Approximating CNNs with Bag-of-local-Features models works surprisingly well on ImageNet
Brendel, Wieland and Bethge, Matthias
arXiv e-Print archive - 2019 via Local Bibsonomy
[link] Summary by David Stutz 6 months ago
Brendel and Bethge show empirically that state-of-the-art deep neural networks on ImageNet rely to a large extent on local features, without any notion of interaction between them. To this end, they propose a bag-of-local-features model by applying a ResNet-like architecture to small patches of ImageNet images. The predictions of these local features are then averaged and a linear classifier is trained on top. Due to the locality, this model allows one to inspect which areas in an image contribute to the model's decision, as shown in Figure 1. Furthermore, these local features are sufficient for good performance on ImageNet. Finally, they show, on scrambled ImageNet images, that regular deep neural networks also rely heavily on local features, without any notion of spatial interaction between them.
https://i.imgur.com/8NO1w0d.png
Figure 1: Illustration of the heat maps obtained using BagNets, the bag-of-local-features model proposed in the paper. Here, different sizes for the local patches are used.
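To make the "bag of local features" idea concrete, here is a rough sketch of the averaging step (my own construction, assuming PyTorch; the patch logits would come from running a small ResNet on each crop):

```python
import torch

# Per-patch class logits: (batch, num_patches, num_classes),
# e.g. one row per 33x33 crop of the input image.
patch_logits = torch.randn(2, 196, 1000)

# BagNet-style decision: average the patch evidence, then classify.
image_logits = patch_logits.mean(dim=1)   # (batch, num_classes)
prediction = image_logits.argmax(dim=1)
print(prediction.shape)  # torch.Size([2])
```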
Also find this summary at [davidstutz.de](https://davidstutz.de/category/reading/).
Ease-of-Teaching and Language Structure from Emergent Communication
Li, Fushan and Bowling, Michael
[link] Summary by CodyWild 2 months ago
An interesting category of machine learning papers - to which this paper belongs - is papers that use learning systems as a way to explore the incentive structures of problems whose equilibrium properties are difficult to reason about intuitively. In this paper, the authors are trying to better understand how different dynamics of a cooperative communication game between agents, where the speaking agent is trying to describe an object such that the listening agent picks the one the speaker is being shown, influence the communication protocol (or, to slightly anthropomorphize, the language) that the agents end up using.
In particular, the authors experiment with what happens when the listening agent is frequently replaced during training with an untrained listener who has no prior experience with the speaking agent. The idea of this experiment is that if the speaker is in a scenario where listeners need to frequently "re-learn" the mapping between communication symbols and objects, this will provide an incentive for that mapping to be easier to learn quickly.
https://i.imgur.com/8csqWsY.png
The metric of ease of learning that the paper focuses on is "topographic similarity", which is a measure of how compositional the communication protocol is. The objects they're working with have two properties, and the agents use a pair of discrete symbols (two letters) to communicate about them. A perfectly compositional language would use one of the symbols to represent each of the properties. To mathematically measure this property, the authors calculate the (cosine) similarity between the two objects' property vectors and the (edit) distance between the two objects' descriptions under the emergent language, and calculate the correlation between these quantities. In this experimental setup, if a language is perfectly compositional, the correlation will be perfect, because every time a property is the same, the same symbol will be used, so two objects that share that property will always share that symbol in their linguistic representation.
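A rough sketch of that correlation (my own rendering, with made-up helper names and toy data), pairing cosine similarity of property vectors with edit distance of messages:

```python
import itertools
import numpy as np
from scipy.spatial.distance import cosine
from scipy.stats import spearmanr

def edit_distance(a, b):
    """Levenshtein distance between two message strings."""
    d = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    d[:, 0] = np.arange(len(a) + 1)
    d[0, :] = np.arange(len(b) + 1)
    for i, j in itertools.product(range(1, len(a) + 1), range(1, len(b) + 1)):
        d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1,
                      d[i - 1, j - 1] + (a[i - 1] != b[j - 1]))
    return d[len(a), len(b)]

def topographic_similarity(objects, messages):
    """Rank correlation between pairwise object similarity and message distance.
    A strongly compositional protocol gives similar objects similar (low-distance)
    messages, so the relationship is strong (sign conventions vary by paper)."""
    sims, dists = [], []
    for (o1, m1), (o2, m2) in itertools.combinations(list(zip(objects, messages)), 2):
        sims.append(1.0 - cosine(o1, o2))     # cosine similarity of property vectors
        dists.append(edit_distance(m1, m2))   # edit distance of the messages
    corr, _ = spearmanr(sims, dists)
    return corr

objects = [np.array([1, 0, 0, 1]), np.array([1, 0, 1, 0]), np.array([0, 1, 0, 1])]
messages = ["aa", "ab", "ba"]
print(topographic_similarity(objects, messages))
```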
https://i.imgur.com/t5VxEoX.png
The premise and the experimental setup of this paper are interesting, but I found the experimental results difficult to gain intuition and confidence from. The authors do show that, in a regime where listeners are reset, topographic similarity rises from a beginning-of-training value of .54 to an end-of-training value of .59, whereas in the baseline, no-reset regime, the value drops to .51. So there definitely is some amount of support for their claim that listener resets lead to higher compositionality. But given that their central quantity is just a correlation between similarities, it's hard to gain intuition for whether the difference is meaningful. It doesn't naively seem particularly dramatic, and it's hard to tell otherwise without more references for how topographic similarity would change under a wider range of different training scenarios.
Learning to Predict Without Looking Ahead: World Models Without Forward Prediction
Freeman, C. Daniel and Metz, Luke and Ha, David
Reinforcement Learning is often broadly separated into two categories of approaches: model-free and model-based. In the former category, networks simply take observations as input and produce predicted best actions (or predicted values of available actions) as output. In order to perform well, the model obviously needs to gain an understanding of how its actions influence the world, but it doesn't explicitly make predictions about what the state of the world will be after an action is taken. In model-based approaches, the agent explicitly builds a dynamics model, that is, a model that takes in (past state, action) and predicts the next state. In theory, learning such a model can lead to both interpretability (because you can "see" what the model thinks the world is like) and robustness to different reward functions (because you're learning about the world in a way not explicitly tied up with the reward).
This paper proposes an interesting melding of these two paradigms, where an agent learns a model of the world as part of end-to-end policy learning. This works through something the authors call "observational dropout": the internal model predicts the next state of the world given the prior one and the action, and then, with some probability, the state of the world that both the policy and the next iteration of the dynamics model see is replaced with the model's prediction. This incentivizes the network to learn an effective dynamics model, because the farther the model's predictions are from the true state of the world, the worse the learned policy performs on the iterations where the only observation it can see is the predicted one. So, this architecture is model-free in the sense that the gradient used to train the system comes from applying policy gradients to the reward, but model-based in the sense that it does have an internal world representation.
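A rough sketch of what observational dropout could look like inside a rollout loop (assuming a gym-style `env` and callable `policy` / `world_model`; this illustrates the mechanism described above and is not the authors' implementation):

```python
import numpy as np

def rollout_with_observational_dropout(env, policy, world_model, p_real=0.1, horizon=200):
    """Run one episode in which the policy (and the next world-model step) only
    sees the true observation with probability p_real; otherwise it is fed the
    world model's own prediction of the state."""
    obs = env.reset()
    believed_obs = obs            # what the policy actually gets to see
    total_reward = 0.0
    for _ in range(horizon):
        action = policy(believed_obs)
        next_obs, reward, done, _ = env.step(action)
        predicted_obs = world_model(believed_obs, action)   # internal dynamics prediction
        # Observational dropout: usually feed the prediction back in,
        # only occasionally reveal the real state of the world.
        believed_obs = next_obs if np.random.rand() < p_real else predicted_obs
        total_reward += reward
        if done:
            break
    return total_reward
```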
https://i.imgur.com/H0TNfTh.png
The authors find that, on a simple task (Swing-Up Cartpole), very low probabilities of seeing the true world (and thus very high probabilities of the policy only seeing the dynamics model's output) lead to world models good enough that a policy trained only on trajectories sampled from that model can perform relatively well. This suggests that at higher probabilities of seeing the true world, there was less value in the dynamics model being accurate, and consequently less training signal for it. (Of course, policies that could often only see the predicted world performed worse during their original training compared to policies that could see the real world more frequently.)
On a more complex task of CarRacing, the authors looked at how well a policy trained using the representations of the world model as input could perform, to examine whether it was learning useful things about the world.
https://i.imgur.com/v9etll0.png
They found an interesting trade-off, where at high probabilities (like before) the dynamics model had little incentive to be good, but at low probabilities it didn't have enough contact with the real dynamics of the world to learn a sensible policy.
Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation
Girshick, Ross B. and Donahue, Jeff and Darrell, Trevor and Malik, Jitendra
Conference on Computer Vision and Pattern Recognition - 2014 via Local Bibsonomy
[link] Summary by nandini 2 years ago
# Object detection system overview.
https://i.imgur.com/vd2YUy3.png
1. takes an input image,
2. extracts around 2000 bottom-up region proposals,
3. computes features for each proposal using a large convolutional neural network (CNN), and then
4. classifies each region using class-specific linear SVMs.
* R-CNN achieves a mean average precision (mAP) of 53.7% on PASCAL VOC 2010.
* On the 200-class ILSVRC2013 detection dataset, R-CNN's mAP is 31.4%, a large improvement over OverFeat , which had the previous best result at 24.3%.
## Two challenges faced in object detection
1. the localization problem
2. labeling the data
1 The localization problem:
* One approach frames localization as a regression problem; prior work reports a mAP of 30.5% on VOC 2007, compared to the 58.5% achieved by R-CNN.
* An alternative is to build a sliding-window detector, but with five convolutional layers the units have very large receptive fields (195 x 195 pixels) and strides (32 x 32 pixels) in the input image, which makes precise localization within the sliding-window paradigm difficult.
2 labeling the data:
* The conventional solution to this problem is to use unsupervised pre-training, followed by supervised fine-tuning.
* supervised pre-training on a large auxiliary dataset (ILSVRC), followed by domain specific fine-tuning on a small dataset (PASCAL),
* fine-tuning for detection improves mAP performance by 8 percentage points.
* Stochastic gradient descent via backpropagation was shown to be effective for training convolutional neural networks (CNNs).
## Object detection with R-CNN
This system consists of three modules
* The first generates category-independent region proposals. These proposals define the set of candidate detections available to our detector.
* The second module is a large convolutional neural network that extracts a fixed-length feature vector from each region.
* The third module is a set of class specific linear SVMs.
Module design
1 Region proposals
* Region proposals can be generated in many ways; one example from prior work detects mitotic cells by applying a CNN to regularly spaced square crops.
* R-CNN uses the selective search method in fast mode (design criteria: capture all scales, diversification, fast to compute).
* Computing region proposals and features takes 13 s/image on a GPU or 53 s/image on a CPU.
2 Feature extraction.
* extract a 4096-dimensional feature vector from each region proposal using the Caffe implementation of the CNN
* Features are computed by forward propagating a mean-subtracted 227x227 RGB image through five convolutional layers and two fully connected layers.
* warp all pixels in a tight bounding box around it to the required size
* The feature matrix is typically 2000x4096
3 Test time detection
* At test time, run selective search on the test image to extract around 2000 region proposals (we use selective search's "fast mode" in all experiments).
* warp each proposal and forward propagate it through the CNN in order to compute features. Then, for each class, we score each extracted feature vector using the SVM trained for that class.
* Given all scored regions in an image, we apply a greedy non-maximum suppression (for each class independently) that rejects a region if it has an intersection-over union (IoU) overlap with a higher scoring selected region larger than a learned threshold.
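The greedy, per-class non-maximum suppression step can be sketched as follows (a generic NMS implementation consistent with the description above, not the paper's own code; the default threshold here is illustrative, since the text says the threshold is learned):

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, each given as [x1, y1, x2, y2]."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.maximum(0, x2 - x1) * np.maximum(0, y2 - y1)
    area = (box[2] - box[0]) * (box[3] - box[1])
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area + areas - inter)

def greedy_nms(boxes, scores, iou_threshold=0.3):
    """Keep the highest-scoring box, drop boxes overlapping it by more than the
    threshold, and repeat on the remainder (applied per class independently)."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        best = order[0]
        keep.append(best)
        if order.size == 1:
            break
        rest = order[1:]
        order = rest[iou(boxes[best], boxes[rest]) <= iou_threshold]
    return keep
```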
## Training
1 Supervised pre-training:
* pre-trained the CNN on a large auxiliary dataset (ILSVRC2012 classification) using image-level annotations only (bounding box labels are not available for this data)
2 Domain-specific fine-tuning.
* SGD training of the CNN parameters is continued using only warped region proposals, with a learning rate of 0.001.
3 Object category classifiers.
* Regions are labeled using an intersection-over-union (IoU) overlap threshold, with the threshold set to 0.3.
* Once features are extracted and training labels are applied, we optimize one linear SVM per class.
* adopt the standard hard negative mining method to fit large training data in memory.
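With the roughly 2000 x 4096 feature matrix and region labels in hand, the per-class classifiers are independent binary linear SVMs; a minimal scikit-learn sketch (illustrative only, with the hard-negative-mining loop omitted and an arbitrary value of C):

```python
import numpy as np
from sklearn.svm import LinearSVC

def train_per_class_svms(features, labels, num_classes):
    """features: (N, 4096) CNN features for region proposals.
    labels: (N,) class index of each region, or -1 for background."""
    svms = []
    for c in range(num_classes):
        y = (labels == c).astype(int)   # one-vs-rest: this class vs everything else
        clf = LinearSVC(C=1e-3)         # hyperparameter value is illustrative
        clf.fit(features, y)
        svms.append(clf)
    return svms

# Test-time scoring: one score per class for every region.
# scores = np.stack([clf.decision_function(features) for clf in svms], axis=1)
```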
### Results on PASCAL VOC 2010-12
1 VOC 2010
* compared against four strong baselines including SegDPM, DPM, UVA, Regionlets.
* R-CNN achieves a large improvement in mAP, from 35.1% to 53.7%, while also being much faster
https://i.imgur.com/0dGX9b7.png
2 ILSVRC2013 detection.
* ran R-CNN on the 200-class ILSVRC2013 detection dataset
* R-CNN achieves a mAP of 31.4%
https://i.imgur.com/GFbULx3.png
#### Performance layer-by-layer, without fine-tuning
1 pool5 layer
* which is the max pooled output of the network's fifth and final convolutional layer.
* The pool5 feature map is 6 x 6 x 256 = 9216-dimensional
* each pool5 unit has a receptive field of 195x195 pixels in the original 227x227 pixel input
2 Layer fc6
* fully connected to pool5
* it multiplies a 4096x9216 weight matrix by the pool5 feature map (reshaped as a 9216-dimensional vector) and then adds a vector of biases
3 Layer fc7
* implemented by multiplying the features computed by fc6 by a 4096 x 4096 weight matrix, and similarly adding a vector of biases and applying half-wave rectification
#### Performance layer-by-layer, with fine-tuning
* CNN's parameters fine-tuned on PASCAL.
* fine-tuning increases mAP by 8.0 % points to 54.2%
### Network architectures
* 16-layer deep network, consisting of 13 layers of 3 x 3 convolution kernels, with five max pooling layers interspersed, and topped with three fully-connected layers. We refer to this network as "O-Net" for OxfordNet and the baseline as "T-Net" for TorontoNet.
* R-CNN with O-Net substantially outperforms R-CNN with T-Net, increasing mAP from 58.5% to 66.0%
* The drawback is compute time: forward passes through O-Net are considerably slower than through T-Net
1 The ILSVRC2013 detection dataset
* dataset is split into three sets: train (395,918), val (20,121), and test (40,152)
#### CNN features for segmentation.
* full R-CNN: the first strategy (full) ignores the region's shape and computes CNN features directly on the warped window. A drawback is that two regions might have very similar bounding boxes while having very little overlap.
* fg R-CNN: the second strategy (fg) computes CNN features only on a region's foreground mask. We replace the background with the mean input so that background regions are zero after mean subtraction.
* full+fg R-CNN: The third strategy (full+fg) simply concatenates the full and fg features
https://i.imgur.com/n1bhmKo.png
Adversarial Examples Are Not Bugs, They Are Features
Ilyas, Andrew and Santurkar, Shibani and Tsipras, Dimitris and Engstrom, Logan and Tran, Brandon and Madry, Aleksander
- 2019 via Local Bibsonomy
Keywords: adversarial
Ilyas et al. present a follow-up work to their paper on the trade-off between accuracy and robustness. Specifically, given a feature $f(x)$ computed from input $x$, the feature is considered predictive if
$\mathbb{E}_{(x,y) \sim \mathcal{D}}[y f(x)] \geq \rho$;
similarly, a predictive feature is robust if
$\mathbb{E}_{(x,y) \sim \mathcal{D}}\left[\inf_{\delta \in \Delta(x)} yf(x + \delta)\right] \geq \gamma$.
This means, a feature is considered robust if the worst-case correlation with the label exceeds some threshold $\gamma$; here the worst-case is considered within a pre-defined set of allowed perturbations $\Delta(x)$ relative to the input $x$. Obviously, there also exist predictive features, which are however not robust according to the above definition. In the paper, Ilyas et al. present two simple algorithms for obtaining adapted datasets which contain only robust or only non-robust features. The main idea of these algorithms is that an adversarially trained model only utilizes robust features, while a standard model utilizes both robust and non-robust features. Based on these datasets, they show that non-robust, predictive features are sufficient to obtain high accuracy; similarly training a normal model on a robust dataset also leads to reasonable accuracy but also increases robustness. Experiments were done on Cifar10. These observations are supported by a theoretical toy dataset consisting of two overlapping Gaussians; I refer to the paper for details.
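A toy numerical illustration of the two definitions (entirely my own sketch: a 1-D feature, labels in $\{-1,+1\}$, and an $\ell_\infty$ ball of radius $\epsilon$ as the allowed perturbation set $\Delta(x)$):

```python
import numpy as np

rng = np.random.default_rng(0)
n, eps = 10000, 0.5

# Toy data: y in {-1, +1}, x = 2y + noise, feature f(x) = 0.5 x.
y = rng.choice([-1.0, 1.0], size=n)
x = 2.0 * y + rng.normal(size=n)

def f(v):
    return 0.5 * v

# Predictiveness: empirical E[y f(x)]
rho = np.mean(y * f(x))

# Robustness: empirical E[ inf_{|d| <= eps} y f(x + d) ]; for a monotone 1-D
# feature, the worst case is the endpoint that moves f(x) against the label.
worst = np.where(y > 0, f(x - eps), f(x + eps))
gamma = np.mean(y * worst)

print(f"predictive correlation  E[y f(x)]         = {rho:.3f}")   # ~1.0
print(f"robust correlation      E[inf y f(x + d)] = {gamma:.3f}") # ~0.75
```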
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro and Sameer Singh and Carlos Guestrin
arXiv e-Print archive - 2016 via Local arXiv
Keywords: cs.LG, cs.AI, stat.ML
First published: 2016/02/16 (3 years ago)
Abstract: Despite widespread adoption, machine learning models remain mostly black boxes. Understanding the reasons behind predictions is, however, quite important in assessing trust, which is fundamental if one plans to take action based on a prediction, or when choosing whether to deploy a new model. Such understanding also provides insights into the model, which can be used to transform an untrustworthy model or prediction into a trustworthy one. In this work, we propose LIME, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally around the prediction. We also propose a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem. We demonstrate the flexibility of these methods by explaining different models for text (e.g. random forests) and image classification (e.g. neural networks). We show the utility of explanations via novel experiments, both simulated and with human subjects, on various scenarios that require trust: deciding if one should trust a prediction, choosing between models, improving an untrustworthy classifier, and identifying why a classifier should not be trusted.
[link] Summary by Martin Thoma 3 years ago
This paper describes how to find local interpretable model-agnostic explanations (LIME) why a black-box model $m_B$ came to a classification decision for one sample $x$. The key idea is to evaluate many more samples around $x$ (local) and fit an interpretable model $m_I$ to it. The way of sampling and the kind of interpretable model depends on the problem domain.
For computer vision / image classification, the image $x$ is divided into superpixels. Individual superpixels are made black, and the new image $x'$ is evaluated: $p' = m_B(x')$. This is repeated many times with different subsets of superpixels switched off.
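A rough sketch of that image variant (assuming scikit-image superpixels and a black-box `predict_proba`; the locality-weighting kernel and sparsity constraints of the actual LIME method are omitted here):

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.linear_model import Ridge

def lime_image_explanation(image, predict_proba, class_idx, n_samples=500, n_segments=50):
    """Fit an interpretable linear model over superpixel on/off masks."""
    segments = slic(image, n_segments=n_segments)   # superpixel label per pixel
    ids = np.unique(segments)
    masks = np.random.randint(0, 2, size=(n_samples, len(ids)))  # random on/off patterns
    preds = []
    for m in masks:
        perturbed = image.copy()
        for seg_id, keep in zip(ids, m):
            if not keep:
                perturbed[segments == seg_id] = 0    # "turn off" (blacken) this superpixel
        preds.append(predict_proba(perturbed)[class_idx])
    # Interpretable model: linear regression from on/off pattern to black-box output.
    lin = Ridge(alpha=1.0).fit(masks, np.array(preds))
    return segments, lin.coef_   # one coefficient per superpixel = its local importance
```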
The paper is also explained in [this YouTube video](https://www.youtube.com/watch?v=KP7-JtFMLo4) by Marco Tulio Ribeiro.
A very similar idea is already in the [Zeiler & Fergus paper](http://www.shortscience.org/paper?bibtexKey=journals/corr/ZeilerF13#martinthoma).
## Follow-up Paper
* June 2016: [Model-Agnostic Interpretability of Machine Learning](https://arxiv.org/abs/1606.05386)
* November 2016:
* [Nothing Else Matters: Model-Agnostic Explanations By Identifying Prediction Invariance](https://arxiv.org/abs/1611.05817)
* [An unexpected unity among methods for interpreting model predictions](https://arxiv.org/abs/1611.07478)
Dual Learning for Machine Translation
Yingce Xia and Di He and Tao Qin and Liwei Wang and Nenghai Yu and Tie-Yan Liu and Wei-Ying Ma
Keywords: cs.CL
Abstract: While neural machine translation (NMT) is making good progress in the past two years, tens of millions of bilingual sentence pairs are needed for its training. However, human labeling is very costly. To tackle this training data bottleneck, we develop a dual-learning mechanism, which can enable an NMT system to automatically learn from unlabeled data through a dual-learning game. This mechanism is inspired by the following observation: any machine translation task has a dual task, e.g., English-to-French translation (primal) versus French-to-English translation (dual); the primal and dual tasks can form a closed loop, and generate informative feedback signals to train the translation models, even if without the involvement of a human labeler. In the dual-learning mechanism, we use one agent to represent the model for the primal task and the other agent to represent the model for the dual task, then ask them to teach each other through a reinforcement learning process. Based on the feedback signals generated during this process (e.g., the language-model likelihood of the output of a model, and the reconstruction error of the original sentence after the primal and dual translations), we can iteratively update the two models until convergence (e.g., using the policy gradient methods). We call the corresponding approach to neural machine translation \emph{dual-NMT}. Experiments show that dual-NMT works very well on English$\leftrightarrow$French translation; especially, by learning from monolingual data (with 10% bilingual data for warm start), it achieves a comparable accuracy to NMT trained from the full bilingual data for the French-to-English translation task.
[link] Summary by tqri 3 years ago
In this article, the authors provide a framework for training two translation models using large, accessible monolingual corpora.
In traditional methods, machine translation models require large parallel corpora to train a good-quality model, and such corpora are expensive to acquire; meanwhile, massive amounts of monolingual data are not fully utilized. Monolingual corpora are typically used for pretraining the NMT decoder RNN and for augmenting the initial parallel corpus with self-generated translations.
The authors embed the machine translation task in a reinforcement learning framework, in which two agents act as native speakers of the two languages who know little of each other's language and learn to translate by trying to communicate with each other.
**The two speakers**, `A` and `B`, obviously know their own languages well; this is easily simulated by two well-trained language models for `A` and `B`. Speaker `A` tries to tell a sentence $x$ to `B` by translating it into $y$ in `B`'s language. Since they don't know each other's language, `B` is uncertain about what `A` truly means by saying $y$. However, `B` is capable of evaluating how sensible $y$ is from his own understanding. Next, `B` informs `A` of this sensibility score and tries to recover what `A` truly meant in `A`'s language, i.e. $x'$. Similarly, `A` can evaluate how sensible $x'$ is from his own understanding.
In general, the original idea that `A` tried to convey is passed through a noisy channel to `B`, and then back to `A` through another noisy channel. The former noisy channel is the `A-B` translation model and the latter the `B-A` translation model in this framework.
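Schematically, one dual-learning step on a monolingual sentence might look like the following (the model interfaces here are assumed purely for illustration; the reward combines the language-model score of the translation with the reconstruction likelihood of the original sentence, as described above):

```python
def dual_learning_step(x, model_ab, model_ba, lm_b, alpha=0.5, n_samples=2):
    """One dual-learning update on a monolingual sentence x from language A.

    model_ab / model_ba are translation models and lm_b is a language model of B.
    The .sample / .log_prob / .reinforce_update / .supervised_update interfaces
    are assumed here for illustration only.
    """
    for _ in range(n_samples):
        y = model_ab.sample(x)                  # primal step: translate A -> B
        r_lm = lm_b.log_prob(y)                 # is y a fluent sentence of B?
        r_rec = model_ba.log_prob(x, given=y)   # can the dual model recover x from y?
        reward = alpha * r_lm + (1.0 - alpha) * r_rec
        # Policy-gradient update of the primal model with this reward, and a
        # likelihood update of the dual model on the reconstruction pair (y, x).
        model_ab.reinforce_update(src=x, sample=y, reward=reward)
        model_ba.supervised_update(src=y, tgt=x)
```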
Think about how the first American learnt Chinese in history and I think it is intuitively similar to the principle in this work.
Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks
Shiyu Liang and Yixuan Li and R. Srikant
Keywords: cs.LG, stat.ML
Abstract: We consider the problem of detecting out-of-distribution images in neural networks. We propose ODIN, a simple and effective method that does not require any change to a pre-trained neural network. Our method is based on the observation that using temperature scaling and adding small perturbations to the input can separate the softmax score distributions between in- and out-of-distribution images, allowing for more effective detection. We show in a series of experiments that ODIN is compatible with diverse network architectures and datasets. It consistently outperforms the baseline approach by a large margin, establishing a new state-of-the-art performance on this task. For example, ODIN reduces the false positive rate from the baseline 34.7% to 4.3% on the DenseNet (applied to CIFAR-10) when the true positive rate is 95%.
Liang et al. propose a perturbation-based approach for detecting out-of-distribution examples using a network's confidence predictions. In particular, the approach is based on the observation that neural networks make more confident predictions on images from the original data distribution (in-distribution examples) than on examples taken from a different distribution, i.e., a different dataset (out-of-distribution examples). This effect can further be amplified by using a temperature-scaled softmax, i.e.,
$ S_i(x, T) = \frac{\exp(f_i(x)/T)}{\sum_{j = 1}^N \exp(f_j(x)/T)}$
where $f_i(x)$ are the predicted logits and $T$ a temperature parameter. Based on these softmax scores, perturbations $\tilde{x}$ are computed using
$\tilde{x} = x - \epsilon \text{sign}(-\nabla_x \log S_{\hat{y}}(x;T))$
where $\hat{y}$ is the predicted label of $x$. This is similar to "one-step" adversarial examples; however, instead of minimizing the confidence in the true label, the confidence in the predicted label is maximized. Applied to in-distribution and out-of-distribution examples, this is meant to amplify the difference in confidence, as illustrated in Figure 1. Afterwards, in- and out-of-distribution examples can be distinguished by simple thresholding on the predicted confidence, as shown in various experiments, e.g., on Cifar10 and Cifar100.
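In code, the two ingredients (temperature scaling plus a perturbation that increases confidence in the predicted label) might look roughly like this PyTorch sketch (the default $T$ and $\epsilon$ values are just illustrative tuning parameters):

```python
import torch
import torch.nn.functional as F

def odin_score(model, x, temperature=1000.0, epsilon=0.0014):
    """Return the max softmax score after temperature scaling and input perturbation;
    in-distribution inputs should receive higher scores than out-of-distribution ones."""
    x = x.clone().requires_grad_(True)
    logits = model(x) / temperature
    pred = logits.argmax(dim=1)
    # Increase confidence in the predicted label: step against the gradient of the NLL.
    loss = F.cross_entropy(logits, pred)
    grad = torch.autograd.grad(loss, x)[0]
    x_perturbed = x - epsilon * grad.sign()
    with torch.no_grad():
        scores = F.softmax(model(x_perturbed) / temperature, dim=1)
    return scores.max(dim=1).values   # threshold this to separate in/out-of-distribution
```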
https://i.imgur.com/OjDVZ0B.png
Figure 1: Illustration of the proposed perturbation to amplify the difference in confidence between in- and out-distribution examples.
AI Safety Gridworlds
Jan Leike and Miljan Martic and Victoria Krakovna and Pedro A. Ortega and Tom Everitt and Andrew Lefrancq and Laurent Orseau and Shane Legg
Keywords: cs.LG, cs.AI
Abstract: We present a suite of reinforcement learning environments illustrating various safety properties of intelligent agents. These problems include safe interruptibility, avoiding side effects, absent supervisor, reward gaming, safe exploration, as well as robustness to self-modification, distributional shift, and adversaries. To measure compliance with the intended safe behavior, we equip each environment with a performance function that is hidden from the agent. This allows us to categorize AI safety problems into robustness and specification problems, depending on whether the performance function corresponds to the observed reward function. We evaluate A2C and Rainbow, two recent deep reinforcement learning agents, on our environments and show that they are not able to solve them satisfactorily.
[link] Summary by dniku 6 months ago
The paper proposes a standardized benchmark for a number of safety-related problems, and provides an implementation that can be used by other researchers. The problems fall in two categories: specification and robustness. Specification refers to cases where it is difficult to specify a reward function that encodes our intentions. Robustness means that agent's actions should be robust when facing various complexities of a real-world environment. Here is a list of problems:
1. Specification:
    1. Safe interruptibility: agents should neither seek nor avoid interruption.
    2. Avoiding side effects: agents should minimize effects unrelated to their main objective.
    3. Absent supervisor: agents should not behave differently depending on the presence of a supervisor.
    4. Reward gaming: agents should not try to exploit errors in the reward function.
2. Robustness:
    1. Self-modification: agents should behave well when the environment allows self-modification.
    2. Robustness to distributional shift: agents should behave robustly when the test environment differs from the training one.
    3. Robustness to adversaries: agents should detect and adapt to adversarial intentions in the environment.
    4. Safe exploration: agents should behave safely during learning as well.
It is worth noting that problems 1.2, 1.4, 2.2, and 2.4 have been described back in "Concrete Problems in AI Safety".
It is suggested that each of these problems be tackled in a "gridworld" environment — a 2D environment where the agent lives on a grid, and the only actions it has available are up/down/left/right movements. The benchmark consists of 10 environments, each corresponding to one of 8 problems mentioned above. Each of the environments is an extremely simple instance of the problem, but nevertheless they are of interest as current SotA algorithms usually don't solve the posed task.
Specifically, the authors trained A2C and Rainbow with DQN update on each of the environments and showed that both algorithms fail on all of specification problems, except for Rainbow on 1.1. This is expected, as neither of those algorithms are designed for cases where reward function is misspecified. Both algorithms failed on 2.2--2.4, except for A2C on 2.3. On 2.1, the authors swapped A2C for Rainbow with Sarsa update and showed that Rainbow DQN failed while Rainbow Sarsa performed well.
Overall, this is a good groundwork paper with only a few questionable design decisions, such as the design of actual reward in 1.2. It is unlikely to have impact similar to MNIST or ImageNet, but it should stimulate safety-related research.
|
CommonCrawl
|
Rate of convergence of a stochastic particle method for the Kolmogorov equation with variable coefficients
by Pierre Bernard, Denis Talay and Luciano Tubaro
In a recent paper, E. G. Puckett proposed a stochastic particle method for the nonlinear diffusion-reaction PDE in $[0,T] \times \mathbb{R}$ (the so-called "KPP" (Kolmogorov-Petrovskii-Piskunov) equation): \[ \begin{cases} \dfrac{\partial u}{\partial t} = Au = \Delta u + f(u), \\ u(0, \cdot) = u_0(\cdot), \end{cases} \] where $1 - u_0$ is the cumulative function, supposed to be smooth enough, of a probability distribution, and $f$ is a function describing the reaction. His justification of the method and his analysis of the error were based on a splitting of the operator $A$. He proved that, if $h$ is the time discretization step and $N$ the number of particles used in the algorithm, one can obtain an upper bound of the norm of the random error on $u(T,x)$ in $L^1(\Omega \times \mathbb{R})$ of order $1/N^{1/4}$, provided $h = \mathcal{O}(1/N^{1/4})$, but conjectured, from numerical experiments, that it should be of order $\mathcal{O}(h) + \mathcal{O}(1/\sqrt{N})$, without any relation between $h$ and $N$. We prove that conjecture. We also construct a similar stochastic particle method for more general nonlinear diffusion-reaction-convection PDEs \[ \begin{cases} \dfrac{\partial u}{\partial t} = Lu + f(u), \\ u(0,\cdot) = u_0(\cdot), \end{cases} \] where $L$ is a strongly elliptic second-order operator with smooth coefficients, and prove that the preceding rate of convergence still holds when the coefficients of $L$ are constant, and in the other case is $\mathcal{O}(\sqrt{h}) + \mathcal{O}(1/\sqrt{N})$. The construction of the method and the analysis of the error are based on a stochastic representation formula of the exact solution $u$.
A. Bensoussan and J.-L. Lions, Applications des inéquations variationnelles en contrôle stochastique, Méthodes Mathématiques de l'Informatique, No. 6, Dunod, Paris, 1978 (French). MR 0513618
Piermarco Cannarsa and Vincenzo Vespri, Generation of analytic semigroups by elliptic operators with unbounded coefficients, SIAM J. Math. Anal. 18 (1987), no. 3, 857–872. MR 883572, DOI 10.1137/0518063
Brigitte Chauvin and Alain Rouault, KPP equation and supercritical branching Brownian motion in the subcritical speed area. Application to spatial trees, Probab. Theory Related Fields 80 (1988), no. 2, 299–314. MR 968823, DOI 10.1007/BF00356108
B. Chauvin and A. Rouault, A stochastic simulation for solving scalar reaction-diffusion equations, Adv. in Appl. Probab. 22 (1990), no. 1, 88–100. MR 1039378, DOI 10.2307/1427598
B. Chauvin and A. Rouault, Supercritical branching Brownian motion and K-P-P equation in the critical speed-area, Math. Nachr. 149 (1990), 41–59. MR 1124793, DOI 10.1002/mana.19901490104
Avner Friedman, Stochastic differential equations and applications. Vol. 1, Probability and Mathematical Statistics, Vol. 28, Academic Press [Harcourt Brace Jovanovich, Publishers], New York-London, 1975. MR 0494490
Takeyuki Hida, Brownian motion, Applications of Mathematics, vol. 11, Springer-Verlag, New York-Berlin, 1980. Translated from the Japanese by the author and T. P. Speed. MR 562914
H. Kunita, Stochastic differential equations and stochastic flows of diffeomorphisms, École d'été de probabilités de Saint-Flour, XII—1982, Lecture Notes in Math., vol. 1097, Springer, Berlin, 1984, pp. 143–303. MR 876080, DOI 10.1007/BFb0099433
G. N. Mil′šteĭn, Approximate integration of stochastic differential equations, Teor. Verojatnost. i Primenen. 19 (1974), 583–588 (Russian, with English summary). MR 0356225
A. Pazy, Semigroups of linear operators and applications to partial differential equations, Applied Mathematical Sciences, vol. 44, Springer-Verlag, New York, 1983. MR 710486, DOI 10.1007/978-1-4612-5561-1
G. Da Prato and E. Sinestrari, Differential operators with nondense domain, Ann. Scuola Norm. Sup. Pisa Cl. Sci. (4) 14 (1987), no. 2, 285–344 (1988). MR 939631
Elbridge Gerry Puckett, Convergence of a random particle method to solutions of the Kolmogorov equation $u_t=\nu u_{xx}+u(1-u)$, Math. Comp. 52 (1989), no. 186, 615–645. MR 964006, DOI 10.1090/S0025-5718-1989-0964006-X
Franz Rothe, Global solutions of reaction-diffusion systems, Lecture Notes in Mathematics, vol. 1072, Springer-Verlag, Berlin, 1984. MR 755878, DOI 10.1007/BFb0099278
Arthur S. Sherman and Charles S. Peskin, A Monte Carlo method for scalar reaction diffusion equations, SIAM J. Sci. Statist. Comput. 7 (1986), no. 4, 1360–1372. MR 857799, DOI 10.1137/0907090
H. Bruce Stewart, Generation of analytic semigroups by strongly elliptic operators, Trans. Amer. Math. Soc. 199 (1974), 141–162. MR 358067, DOI 10.1090/S0002-9947-1974-0358067-4
D. Talay, Simulation and numerical analysis of stochastic differential systems: a review, Rapport de Recherche INRIA, vol. 1313, 1990 (and to appear in Effective Stochastic Analysis (P. Kree and W. Wedig, eds.), Springer-Verlag).
MSC: Primary 65M12; Secondary 35K57, 60J15, 60J60
|
CommonCrawl
|
Search results for: D. De Jesus Damiao
Search for a heavy pseudoscalar boson decaying to a Z and a Higgs boson at $\sqrt{s}=13\,\text{TeV}$
A. M. Sirunyan, A. Tumasyan, W. Adam, F. Ambrogi, more
The European Physical Journal C > 2019 > 79 > 7 > 1-27
A search is presented for a heavy pseudoscalar boson $$\text {A}$$ A decaying to a Z boson and a Higgs boson with mass of 125$$\,\text {GeV}$$ GeV . In the final state considered, the Higgs boson decays to a bottom quark and antiquark, and the Z boson decays either into a pair of electrons, muons, or neutrinos. The analysis is performed using a data sample corresponding to an integrated luminosity...
Measurement of the top quark mass in the all-jets final state at $$\sqrt{s}=13\,\text {TeV} $$ s=13TeV and combination with the lepton+jets channel
A top quark mass measurement is performed using $$35.9{\,\text {fb}^{-1}} $$ 35.9fb-1 of LHC proton–proton collision data collected with the CMS detector at $$\sqrt{s}=13\,\text {TeV} $$ s=13TeV . The measurement uses the $${\mathrm {t}\overline{\mathrm {t}}}$$ tt¯ all-jets final state. A kinematic fit is performed to reconstruct the decay of the $${\mathrm {t}\overline{\mathrm {t}}}$$ tt¯ system...
Search for dark matter produced in association with a Higgs boson decaying to a pair of bottom quarks in proton–proton collisions at $$\sqrt{s}=13\,\text {Te}\text {V} $$ s=13Te
A search for dark matter produced in association with a Higgs boson decaying to a pair of bottom quarks is performed in proton–proton collisions at a center-of-mass energy of 13$$\,\text {Te}\text {V}$$ Te collected with the CMS detector at the LHC. The analyzed data sample corresponds to an integrated luminosity of 35.9$$\,\text {fb}^{-1}$$ fb-1 . The signal is characterized by a large missing transverse...
Measurement of exclusive $\Upsilon$ photoproduction from protons in pPb collisions at $\sqrt{s_{\mathrm{NN}}} = 5.02\,\text{TeV}$
The exclusive photoproduction of $$\mathrm {\Upsilon }\mathrm {(nS)} $$ Υ(nS) meson states from protons, $$\gamma \mathrm {p} \rightarrow \mathrm {\Upsilon }\mathrm {(nS)} \,\mathrm {p}$$ γp→Υ(nS)p (with $$\mathrm {n}=1,2,3$$ n=1,2,3 ), is studied in ultraperipheral $$\mathrm {p}$$ p Pb collisions at a centre-of-mass energy per nucleon pair of $$\sqrt{\smash [b]{s_{_{\mathrm {NN}}}}} = 5.02\,\text...
Measurement of associated production of a $$\mathrm {W}$$ W boson and a charm quark in proton–proton collisions at $$\sqrt{s} = 13\,\text {Te}\text {V} $$ s=13Te
Measurements are presented of associated production of a $$\mathrm {W}$$ W boson and a charm quark ($$\mathrm {W}+\mathrm {c}$$ W+c ) in proton–proton collisions at a center-of-mass energy of 13$$\,\text {Te}\text {V}$$ Te . The data correspond to an integrated luminosity of 35.7$$\,\text {fb}^{-1}$$ fb-1 collected by the CMS experiment at the CERN LHC. The $$\mathrm {W}$$ W bosons are identified...
Search for a heavy resonance decaying to a top quark and a vector-like top quark in the lepton + jets final state in pp collisions at $$\sqrt{s} = 13\,\text {TeV} $$ s=13TeV
A search is presented for a heavy spin-1 resonance $$\mathrm {Z}'$$ Z′ decaying to a top quark and a vector-like top quark partner $$\text {T} $$ T in the lepton + jets final state. The search is performed using a data set of $$\mathrm {p}$$ p $$\mathrm {p}$$ p collisions at a centre-of-mass energy of 13$$\,\text {TeV}$$ TeV corresponding to an integrated luminosity of $$35.9{\,\text {fb}^{-1}}...
Study of the underlying event in top quark pair production in $$\mathrm {p}\mathrm {p}$$ pp collisions at 13$$~\text {Te}\text {V}$$ Te
Measurements of normalized differential cross sections as functions of the multiplicity and kinematic variables of charged-particle tracks from the underlying event in top quark and antiquark pair production are presented. The measurements are performed in proton-proton collisions at a center-of-mass energy of 13$$~\text {Te}\text {V}$$ Te , and are based on data collected by the CMS experiment at...
Search for rare decays of $$\mathrm {Z}$$ Z and Higgs bosons to $${\mathrm {J}/\psi } $$ J/ψ and a photon in proton-proton collisions at $$\sqrt{s}$$ s = 13$$\,\text {TeV}$$ TeV
A search is presented for decays of $$\mathrm {Z}$$ Z and Higgs bosons to a $${\mathrm {J}/\psi } $$ J/ψ meson and a photon, with the subsequent decay of the $${\mathrm {J}/\psi } $$ J/ψ to $$\mathrm {\mu ^+}\mathrm {\mu ^-} $$ μ+μ- . The analysis uses data from proton-proton collisions with an integrated luminosity of 35.9$$\,\text {fb}^{-1}$$ fb-1 at $$\sqrt{s}=13\,\text {TeV} $$ s=13TeV collected...
Search for single production of vector-like quarks decaying to a top quark and a $$\mathrm {W} $$ W boson in proton–proton collisions at $$\sqrt{s} = 13 \,\text {TeV} $$ s=13TeV
A search is presented for the single production of vector-like quarks in proton–proton collisions at $$\sqrt{s}=13\,\text {TeV} $$ s=13TeV . The data, corresponding to an integrated luminosity of 35.9$$\,\text {fb}^{-1}$$ fb-1 , were recorded with the CMS experiment at the LHC. The analysis focuses on the vector-like quark decay into a top quark and a $$\mathrm {W} $$ W boson, with one muon or electron...
Measurement of differential cross sections for inclusive isolated-photon and photon+jet production in proton-proton collisions at $$\sqrt{s} = 13\,\text {TeV} $$ s=13TeV
Measurements of inclusive isolated-photon and photon+jet production in proton–proton collisions at $$\sqrt{s} = 13\,\text {TeV} $$ s=13TeV are presented. The analysis uses data collected by the CMS experiment in 2015, corresponding to an integrated luminosity of 2.26$$\,\text {fb}^{-1}$$ fb-1 . The cross section for inclusive isolated photon production is measured as a function of the photon transverse...
Measurement of differential cross sections for $${\text {Z}}$$ Z boson production in association with jets in proton-proton collisions at $$\sqrt{s} = 13\,\text {TeV} $$ s=13TeV
The European Physical Journal C > 2018 > 78 > 11 > 1-41
The production of a $${\text {Z}}$$ Z boson, decaying to two charged leptons, in association with jets in proton-proton collisions at a centre-of-mass energy of 13$$\,\text {TeV}$$ TeV is measured. Data recorded with the CMS detector at the LHC are used that correspond to an integrated luminosity of 2.19$$\,\text {fb}^\text {-1}$$ fb-1 . The cross section is measured as a function of the jet multiplicity...
Studies of $\mathrm{B}^{*}_{\mathrm{s}2}(5840)^0$ and $\mathrm{B}_{\mathrm{s}1}(5830)^0$ mesons including the observation of the $\mathrm{B}^{*}_{\mathrm{s}2}(5840)^0 \rightarrow \mathrm{B}^0 \mathrm{K}^0_{\mathrm{S}}$ decay in proton-proton collisions at $\sqrt{s}=8\,\text{TeV}$
Measurements of $${\mathrm {B}} ^{*}_{{\mathrm {s}}2}(5840)^0 $$ Bs2∗(5840)0 and $${\mathrm {B}} _{{\mathrm {s}}1}(5830)^0 $$ Bs1(5830)0 mesons are performed using a data sample of proton-proton collisions corresponding to an integrated luminosity of , collected with the CMS detector at the LHC at a centre-of-mass energy of $$8\,\text {TeV} $$ 8TeV . The analysis studies P-wave $${\mathrm {B}} ^0_{\mathrm...
Measurement of the top quark mass with lepton+jets final states using $$\mathrm {p}$$ p $$\mathrm {p}$$ p collisions at $$\sqrt{s}=13\,\text {TeV} $$ s=13TeV
The mass of the top quark is measured using a sample of $${{\text {t}}\overline{{\text {t}}}}$$ tt¯ events collected by the CMS detector using proton-proton collisions at $$\sqrt{s}=13$$ s=13 $$\,\text {TeV}$$ TeV at the CERN LHC. Events are selected with one isolated muon or electron and at least four jets from data corresponding to an integrated luminosity of 35.9$$\,\text {fb}^{-1}$$ fb-1 ....
Search for new physics in dijet angular distributions using proton–proton collisions at $$\sqrt{s}=13\hbox {TeV}$$ s=13TeV and constraints on dark matter and other models
A search is presented for physics beyond the standard model, based on measurements of dijet angular distributions in proton–proton collisions at $$\sqrt{s}=13\hbox {TeV}$$ s=13TeV . The data collected with the CMS detector at the LHC correspond to an integrated luminosity of 35.9$$\,\text {fb}^{-1}$$ fb-1 . The observed distributions, corrected to particle level, are found to be in agreement with...
Search for third-generation scalar leptoquarks decaying to a top quark and a $$\tau $$ τ lepton at $$\sqrt{s}=13\,\text {Te}\text {V} $$ s=13Te
A search for pair production of heavy scalar leptoquarks (LQs), each decaying into a top quark and a $$\tau $$ τ lepton, is presented. The search considers final states with an electron or a muon, one or two $$\tau $$ τ leptons that decayed to hadrons, and additional jets. The data were collected in 2016 in proton–proton collisions at $$\sqrt{s}=13\,\text {Te}\text {V} $$ s=13Te with the CMS detector...
Measurement of the $$\mathrm {Z}/\gamma ^{*} \rightarrow \tau \tau $$ Z/γ∗→ττ cross section in pp collisions at $$\sqrt{s} = 13 \hbox { TeV}$$ s=13TeV and validation of $$\tau $$ τ lepton analysis techniques
A measurement is presented of the $$\mathrm {Z}/\gamma ^{*} \rightarrow \tau \tau $$ Z/γ∗→ττ cross section in $$\text {pp}$$ pp collisions at $$\sqrt{s} = 13\hbox { TeV}$$ s=13TeV , using data recorded by the CMS experiment at the LHC, corresponding to an integrated luminosity of $$2.3\hbox { fb}^{-1}$$ 2.3fb-1 . The product of the inclusive cross section and branching fraction is measured to be...
Measurement of charged particle spectra in minimum-bias events from proton–proton collisions at $$\sqrt{s}=13\,\text {TeV} $$ s=13TeV
Pseudorapidity, transverse momentum, and multiplicity distributions are measured in the pseudorapidity range $$|\eta | < 2.4$$ |η|<2.4 for charged particles with transverse momenta satisfying $$p_{\mathrm {T}} > 0.5\,\text {GeV} $$ pT>0.5GeV in proton–proton collisions at a center-of-mass energy of $$\sqrt{s} = 13\,\text {TeV} $$ s=13TeV . Measurements are presented in three different...
Search for beyond the standard model Higgs bosons decaying into a $\mathrm{b}\overline{\mathrm{b}}$ pair in pp collisions at $\sqrt{s}=13$ TeV
The CMS collaboration, A. M. Sirunyan, A. Tumasyan, W. Adam, more
Journal of High Energy Physics > 2018 > 2018 > 8 > 1-43
Abstract A search for Higgs bosons that decay into a bottom quark-antiquark pair and are accompanied by at least one additional bottom quark is performed with the CMS detector. The data analyzed were recorded in proton-proton collisions at a centre-of-mass energy of s=13 $$ \sqrt{s}=13 $$ TeV at the LHC, corresponding to an integrated luminosity of 35.7 fb−1. The final state considered in this analysis...
Electroweak production of two jets in association with a Z boson in proton–proton collisions at $$\sqrt{s}= $$ s= 13$$\,\text {TeV}$$ TeV
A measurement of the electroweak (EW) production of two jets in association with a $$\mathrm {Z} $$ Z boson in proton-proton collisions at $$\sqrt{s}=13\,\text {TeV} $$ s=13TeV is presented, based on data recorded in 2016 by the CMS experiment at the LHC corresponding to an integrated luminosity of 35.9$$\,\text {fb}^{\text {--}1}$$ fb--1 . The measurement is performed in the $$\ell \ell \mathrm...
Measurement of associated Z + charm production in proton–proton collisions at $$\sqrt{s} = 8$$ s=8 $$\,\text {TeV}$$ TeV
A study of the associated production of a $$\mathrm{Z} $$ Z boson and a charm quark jet ($$\mathrm{Z} + \mathrm{c} $$ Z+c ), and a comparison to production with a $$\mathrm{b} $$ b quark jet ($$\mathrm{Z} + \mathrm{b} $$ Z+b ), in $$\mathrm {p}\mathrm {p}$$ pp collisions at a centre-of-mass energy of 8$$\,\text {TeV}$$ TeV are presented. The analysis uses a data sample corresponding to an integrated...
|
CommonCrawl
|
Mechanics around a rail tank wagon
Some time ago I came across a problem which might be of interest to physics.se, I think. It sounds like a homework problem, but I think it is not trivial (I am still thinking about it):
Consider a rail tank wagon filled with liquid, say water.
Suppose that at some moment $t=0$, a nozzle is opened at left side of the tank at the bottom. The water jet from the nozzle is directed vertically down. Question:
What is the final velocity of the rail tank wagon after emptying?
Simplifications and assumptions:
Rail tracks lie horizontally, there is no rolling (air) friction, the speed of the water jet from the nozzle is subject to the Torricelli's law, the horizontal cross-section of the tank is a constant, the water surface inside the tank remains horizontal.
Data given:
$M$ (mass of the wagon without water)
$m$ (initial mass of the water)
$S$ (horizontal cross-section of the tank)
$S\gg s$ (cross sectional area of the nozzle)
$\rho$ (density of the water)
$l$ (horizontal distance from the nozzle to the centre of the mass of the wagon with water)
$g$ (gravitational acceleration)
My thinking at the moment is whether dimensional methods can shed light on a way to the solution. One thing is obvious: If $l=0$ then the wagon will not move at all.
classical-mechanics fluid-dynamics
Martin Gales
$\begingroup$ @Pavel: what kind of argument is that? There is also no reason for it to not start moving. Except if you provide such reason and you didn't provide any reason (in particular, you haven't used any of the assumptions of the problem in your answer). This problem is certainly very non-trivial and it reminds me of Feynman's problem of the sprinkler (to which he provided at least two opposite answers at different times and ended up doing an experiment to make sure). $\endgroup$ – Marek Dec 6 '10 at 21:03
$\begingroup$ @Martin: thanks for this surprisingly difficult problem ! $\endgroup$ – Frédéric Grosshans Dec 7 '10 at 9:17
$\begingroup$ This has to be one of the best questions on this site :-) $\endgroup$ – Sklivvz Dec 8 '10 at 21:35
$\begingroup$ I found that after about 10 or 20 comments, discussing this problem was frustrating. My mindset was, "I spent three hours of hard focus working on this problem. If everyone else would stop yapping for a while and do the same, they would see that I am right." Looking back, I realize this attitude was pretty arrogant, and served only to upset me and probably piss off some of my correspondents. So I would like to apologize in general for any curt or rude comments I made here and retire from further conversation. Thank you, Martin, for the interesting problem. $\endgroup$ – Mark Eichenlaub Dec 10 '10 at 22:42
$\begingroup$ @Mark: I think all of us have that (arrogant) attitude from time to time (of course, I am especially talking about myself; not trying to offend anyone) and it's natural for a physicist to think that he understands everything perfectly :-) By the way, regarding the comments, I think you'll agree that the format of discussion under answers is really unwieldy. In case you haven't noticed we have a (working) chat room now; so if you are still interested in discussion, come visit: chat.stackexchange.com/rooms/71/physics (of course, everyone else is welcome too) $\endgroup$ – Marek Dec 11 '10 at 14:42
Interesting problem. I think my approach and answer is very close to other posted solutions. I also added a possible scenario. The basic summary is it is the change in the average momentum of the water in the wagon that causes the wagon to move. Requiring the water to distribute it self evenly in the wagon causes this relation:
average momentum of water in the wagon = $l\times$ mass flow out of wagon
In cases where the wagon has been expelling and will forever expel water at a constant rate, the wagon stands still. Imagine it being refilled from above its center of mass. You can actually do this same problem with an empty cart being filled from above instead of emptying below, with $l$ being the horizontal distance from the wagon's center of mass to the point at which the water falls.
The wagon does move if there is some fluctuation in the mass flow out of the wagon either by abrupt starts/stops or by running out of water.
$t_{c}\to$ time when wagon runs dry
$l\to$ distance from center of mass of wagon to nozzle, positive $l$ implies nozzle is on the right side of the wagon
$x(t)\to$ center of mass of wagon
$x_{cm}(t)\to$ center of mass of everything
$h(t)\to$ height of water in the container
$m(t)\to$total mass of the wagon including any water it holds
$m_{w}\to$ mass of initial water
$m_{c}\to$ mass of the wagon; the c is for the critical point of $m(t)$ when all the water is gone.
Originally c was for container but it makes sense $m(t_{c})=m_c$
$x(0)=0$
$\dot{x}(0)=0$
I'm going to side step the issue of initial conditions for now. I'm going to treat the system as if the nozzle was always open and water has always been running. Only concerned with how a container with a constant cross section, S, would drain.
Torricelli's Law : Mass Flow =$-\dot{m}(t)$ : Mass of System
$$v(t)=\sqrt{2 g h(t)}$$ $$-\dot{m}(t)=\rho s v(t)$$ $$m(t)=\rho S h(t) + m_{c}$$ Combine to eliminate $m(t)$ and $v(t)$ $$\frac{\dot{h}}{\sqrt{h(t)}}=-\frac{s}{S}\sqrt{2 g}$$
The answer to the differential equation: $$h(t)=h(0){\left(1-t\sqrt{\frac{g {s}^{2}}{2 {S}^{2} h(0)}}\right)}^{2}$$ $$h(t)=h(0){\left(1-\frac{t}{t_{c}}\right)}^{2}$$ where $t_{c}=\sqrt{\frac{2 {S}^{2} h(0)}{g {s}^{2}}}$ and $h(t>t_c)=0$
from there we get $m(t)$: $$m(t)=\rho S h(0) {(1-\frac{t}{t_{c}})}^{2} + m_{c}$$ $$m(t)=m_{w} {(1-\frac{t}{t_{c}})}^{2} + m_{c}$$ and for $m(t>t_{c})$ is simply $m_{c}$, the mass of the wagon
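As a quick sanity check of this closed form, one can integrate $\dot{h} = -\frac{s}{S}\sqrt{2 g h}$ numerically and compare (my own sketch; the numerical values of $S$, $s$ and $h(0)$ are arbitrary):

```python
import numpy as np

g, S, s, h0 = 9.81, 30.0, 1e-2, 3.0          # arbitrary illustrative values
t_c = np.sqrt(2 * S**2 * h0 / (g * s**2))    # time at which the tank runs dry

def h_closed(t):
    return h0 * np.maximum(1 - t / t_c, 0.0) ** 2

# Forward-Euler integration of dh/dt = -(s/S) * sqrt(2 g h)
dt, t, h = 0.1, 0.0, h0
while h > 1e-9 and t < 1.2 * t_c:
    h = max(h - dt * (s / S) * np.sqrt(2 * g * h), 0.0)
    t += dt

print("numerical h at t:", h, " closed form:", h_closed(t))  # both ~0 near t = t_c
```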
In order to find the center of mass we will account for all of it. At $t=0$, $x_{cm}(0)=x(0)$=0 since all the mass is in the wagon and we assumed equally distributed.
The Wagon and its contents $$m(t)x(t)$$
Water that has left the wagon
If water leaves the wagon at $t=\tau$, then it will have speed $\dot{x}(\tau)$. Therefore its location is $f(t,\tau)$: $$f(t,\tau) = l+x(\tau)+\dot{x}(\tau)(t-\tau)$$ Then we just integrate to get their contributions. We get their infinitesimal masses from our mass flow: $$\int_0^t f(t,\tau) [-\dot{m}(\tau)]d\tau$$
Combine $$m(0)x_{cm}(t)=m(t)x(t)-\int_0^t f(t,\tau)\dot{m}(\tau)d\tau$$
Differentiating gives us: $$m(0)\dot{x_{cm}}(t)=\dot{m}(t)x(t)+m(t)\dot{x}(t)-f(t,t)\dot{m}(t)-\int_0^t \frac{df(t,\tau)}{dt}\dot{m}(\tau)d\tau$$
Simplifying: $$f(t,t)=x(t)+ l$$ $$\frac{df(t,\tau)}{dt}=\dot{x}(\tau)$$
Integration by parts: $$\int_0^t\dot{m}(\tau)\dot{x}(\tau)d\tau=m(t)\dot{x}(t)-\int_0^tm(\tau)\ddot{x}(\tau)d\tau$$
Replace: $$m(0)\dot{x_{cm}}(t)=\dot{m}(t)x(t)+m(t)\dot{x}(t)-\dot{m}(t)(x(t)+ l)-m(t)\dot{x}(t)+\int_0^tm(\tau)\ddot{x}(\tau)d\tau$$
Explanation - In order these terms stand for:
mass disappearing from the wagon at the center of mass
momentum of wagon and its contents
mass appearing outside of wagon at the nozzle
last two terms account for momentum of water outside of the wagon
Combining the first and third terms gives us the average momentum the water in the wagon must have to maintain its even distribution horizontally in the container. They are not evidence for instantaneous disappearance from the center and reappearance at the nozzle.
Result: $$m(0)\dot{x_{cm}}(t)=-\dot{m}(t) l+\int_0^tm(\tau)\ddot{x}(\tau)d\tau$$
where: $$m(t)=m_{w} {(1-\frac{t}{t_{c}})}^{2} + m_{c}$$
Wagon w/ Brakes
In this scenario, the wagon has been losing water before $t=0$. However the force of the brakes keeps $\dot{x}(t)=0$. At $t=0$ the brakes are released and it is allowed to move. This avoids any instantaneous jump in velocity by the wagon. It also allows $x_{cm}$ to be a non-zero constant after $t=0$.
Setting $t=0$: $$m(0)\dot{x_{cm}}(0)=-\dot{m}(0) l+\int_0^0m(\tau)\ddot{x}(\tau)d\tau$$ $$m(0)\dot{x_{cm}}(0)=-\dot{m}(0) l$$ $$\dot{x_{cm}}(0)=-\frac{\dot{m}(0)}{m(0)} l$$ $$\dot{x_{cm}}(0)=\frac{2 l m_w}{t_c m(0)}$$
For $t>0$ there is no force from the brakes: $$\ddot{x_{cm}}(t\ge0)=0$$ $$\dot{x_{cm}}(t\ge0)=\frac{2 l m_w}{t_c m(0)}$$
In other words, in this situation at $t=0$ the momentum of the whole system matches that of the water inside the wagon. The only question now is, as time evolves, how that momentum is transferred to the wagon and to the water leaving the moving wagon.
Differentiate the system's momentum: $$m(0)\ddot{x_{cm}}(t)=-\ddot{m}(t) l+\frac{d}{d t}\int_0^tm(\tau)\ddot{x}(\tau)d\tau$$ $$0=-\ddot{m}(t)l+m(t)\ddot{x}(t)$$ $$\ddot{x}(t)=\frac{\ddot{m}(t)l}{m(t)}$$
Physical Considerations
Therefore we have a simple system as long as $\ddot{m}(t)$ is continuous. The physical explanation is that if we abruptly closed the nozzle the water in the wagon does not come to an immediate stop relative to the wagon. It sloshes around and after a certain relaxation time redistributes its momentum to the system as a whole. Similarly with the quick turn on, the water in the container can't just gain an average momentum to match $-\dot{m}(t)l$. Again there must be some relaxation time for the water to hit that equilibrium where it can evenly distribute itself in the wagon. It is not that these situations are impossible but that my equations would not take into account these relaxation times.
My situation just avoids that. The water in the wagon has already hit some equilibrium before $t=0$. Also having the water move under its own weight provides a slow turn off.
Velocity of Wagon
Combining the results from previous sections: $$\ddot{x}(t)=\frac{2\frac{m_w}{{t_c}^2}l}{m_{w} {(1-\frac{t}{t_{c}})}^{2} + m_{c}}$$ $$\ddot{x}(t)=\frac{2 l m_w}{{t_c}^2 m_c}{\left[\frac{m_w}{m_c}{(1-\frac{t}{t_c})}^{2}+1\right]}^{-1}$$
$$\int\frac{du}{1+u^2}=\arctan(u)$$ $$u=\sqrt{\frac{m_w}{m_c}}(1-\frac{t}{t_c})$$ $$\dot{x}(t)=-\frac{2 l}{t_c}\sqrt{\frac{m_w}{m_c}}\int\frac{du}{1+u^2}$$
$$\dot{x}(t)=\frac{2 l}{t_c}\sqrt{\frac{m_w}{m_c}}\left[\arctan\sqrt{\frac{m_w}{m_c}}-\arctan\left(\sqrt{\frac{m_w}{m_c}}\left(1-\frac{t}{t_c}\right)\right)\right]$$
Extremely Heavy Wagon: $\sqrt{\frac{m_w}{m_c}}\ll1$ $$\arctan(x)\to x-\frac{1}{3}x^3$$ $$\dot{x}(t_c)=\frac{2 l m_w}{t_c m_c}$$ $$\dot{x_{cm}}(t\ge0)=\frac{2 l m_w}{t_c m(0)}$$
This makes physical sense. The wagon's final momentum is just about equal to our initial momentum. The higher order terms would account for the momentum that the dispensed water has.
Regular Wagon: $\sqrt{\frac{m_w}{m_c}}\gg1$ $$\arctan(x)\to \frac{\pi}{2}$$ $$\dot{x}(t_c)=\frac{\pi l}{t_c}\sqrt{\frac{m_w}{m_c}}$$ $$\dot{x_{cm}}(t\ge0)=\frac{2 l m_w}{t_c m(0)}$$ $$p_{cm}(t\ge0)=\frac{2}{\pi}\sqrt{\frac{m_w}{m_c}}p(t)$$
This case has the wagon ending up with a significantly smaller portion of the system's momentum.
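Plugging representative numbers into these final expressions gives a feel for the magnitudes involved (a sketch using made-up wagon dimensions; only the formulas derived above are used):

```python
import numpy as np

g   = 9.81          # m/s^2
l   = 5.0           # m, nozzle offset from the centre of mass (assumed)
S   = 30.0          # m^2, tank cross-section (assumed)
s   = 1e-2          # m^2, nozzle cross-section (assumed)
h0  = 3.0           # m, initial water height (assumed)
rho = 1e3           # kg/m^3
m_w = rho * S * h0  # initial water mass
m_c = 1e4           # kg, empty wagon mass (assumed)

t_c = (S / s) * np.sqrt(2 * h0 / g)   # emptying time from the Torricelli solution
r = np.sqrt(m_w / m_c)

v_wagon = (2 * l / t_c) * r * np.arctan(r)          # \dot{x}(t_c) from the result above
v_cm = 2 * l * m_w / (t_c * (m_w + m_c))            # \dot{x}_{cm}(t >= 0), brakes scenario

print(f"t_c = {t_c:.0f} s, v_wagon = {v_wagon*1000:.1f} mm/s, v_cm = {v_cm*1000:.1f} mm/s")
```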
$\begingroup$ +1 this is pretty much the same thing I did (at least mathematically) and I think it's the clearest solution yet posted. $\endgroup$ – David Z♦ Dec 13 '10 at 9:56
$\begingroup$ Really nice work! Now if you will add the case of m'(0)=0 (mass flow at t=0 is zero) then +1 is guaranteed:) $\endgroup$ – Martin Gales Dec 13 '10 at 11:33
$\begingroup$ @kalle43 My specific case avoids that issue. The energy was there starting before $t=0$ the water in the wagon has an average horizontal momentum. I assumed that the water level in the tank remains horizontal at all times as the problem stated. Giving the water in the wagon some momentum was the only way to satisfy this requirement $\endgroup$ – David Dec 13 '10 at 12:37
$\begingroup$ @kalle: I still think that energy is not conserved. This is not a closed (I repeat: closed) system and gravity is adding energy until the tank is empty. $\endgroup$ – Martin Gales Dec 13 '10 at 12:44
$\begingroup$ @kalle43 & @Martin If I get time I'll try to work out m'(0)=0 case. As kalle43's suggestion points out the crux might be lowering the initial mass flow as the source of energy for the average momentum of the water in the wagon. Very interesting, thanks Kalle43. $\endgroup$ – David Dec 13 '10 at 12:52
OK, this is my second attempt to solve this problem. I think I have a solution this time, thanks to the discussion of others in this thread. The solution is $v_{\text{final}}=\sqrt{2gh(0)}\frac{ls}{h(0)S}(1-\frac{\pi}{2})$ if $m\gg M$. This corresponds to a few millimetres per second towards the left for a wagon full of water.
Here is how I've derived it :
In order not to neglect non-negligible contributions, I will pose the problem for a cart of quite arbitrary shape, before restricting it to our cart.
$S(z)$ : section of the cart at altitude $z$
$h(t)$ : height of water at time $t$
$l(z)$ : abscissa of the centre of mass (CoM) of the slice of water at altitude $z$
$M$ : mass of the empty cart
$m = \int_0^{h(0)} dz\, S(z)\, \rho$ : initial mass of water
$\mu(t)$ : remaining mass of water at time $t$
$f(t)=-d\mu/dt > 0$ is the mass flow of water
$v_v(z,t) < 0$ : vertical speed of the water slice at altitude $z$
$v_h(z,t)$ : horizontal speed of its CoM.
In the case of the cart, we will have :
$S(z)$ is constant above the nozzle. Let $\delta+\epsilon$ be the nozzle height. We then have $S(z)=S$ for $z>\delta+\epsilon$. For numerical applications, we'll suppose a $3\times3\times10$ m³ cart, with $S=30$ m².
The last part of the nozzle is a pipe of height $\delta\ll h(0)$. In this pipe $S(z<\delta)= s\ll S$. If the output has a 10 cm side, $s=10^{-2}$ m².
$h(0) = 3$ m
Above the nozzle, the CoM of the water is fixed at $l(z>\delta+\epsilon)=0$, while in the lower part, $l(z<\delta)=-l$, where $l=5$ m.
I'll assume $M=10^4$ kg, but I've no idea whether it's realistic.
$\rho = 10^3$ kg·m⁻³
$m=\rho S h(0) =$ 9·10⁴ kg
$g=10$ m·s⁻²
Vertical movement of water
In the following, we will assume that the horizontal acceleration $a$ of the cart stays $a\ll g$ during the movement. A nonzero acceleration would induce correction terms proportional to $\frac{a^2}{g^2}$, and we will check later that this hypothesis is consistent. This assumption allows us to neglect any motion of the cart when looking at the movement of water in the cart referential, and then compute $f(t)$, $h(t)$ and $\mu(t)$. We will then use the result of this computation to find the horizontal movement of the cart.
The incompressibility of water allows us to write
$$ f(t)=-\rho S(z) v_v(z,t) = -\rho S(h(t)) \frac{dh}{dt} = -\rho s\, v_v(0,t) \quad(*)$$
Bernoulli, at altitude $h$ and $0$ gives us
\begin{gather} \left(\frac{dh}{dt}\right)^2 + 2gh = v_v(0,t)^2 \\ 2gh=\left(\frac{dh}{dt}\right)^2 \left(\frac{S(h)^2}{S(0)^2} -1\right) \end{gather}
In our case, except in the nozzle, $\frac{S(h)^2}{S(0)^2}=\frac{S^2}{s^2}\simeq 10^7$. We will therefore neglect the $-1$ in the following.
This equation has the following solution : $$ h(t)=h(0)(1-t/t_m)^2 \text{ for } t\in[0, t_m]$$
and $h(t>t_m)=0$, with $t_m=\frac{S}{s} \sqrt{2h(0)/g}$. Here $t_m=3\cdot 10^3 \sqrt{6/10} \sim 2000$ s.
We have then $\mu(t)=m (1-t/t_m)^2$ and $f(t)=f(0)(1-t/t_m)$ with $f(0)=\rho s \sqrt{2gh(0)}\sim 10^{-2+3}\sqrt{60}\sim 80$ kg·s⁻¹.
Conservation of the horizontal momentum
Now comes the interesting part of the problem, the horizontal movement.
Momenta will be computed in the cart frame ($P^{CR}$) and in the rail frame ($P^{RR}$). If you look at the water inside the cart, its momentum will be
$$P^{CR}_{\text{water}}=\rho\int_{0}^{h(t)}dz S(z) v_h(z,t)$$
with $v_h(z,t)= dl/dz v_v(z,t)$. From that and the expression $(*)$, we have
$$P^{CR}_{\text{water}}=- f(t) \int_{0}^{h(t)}dz\, \frac{dl}{dz}= f(t) (l(0)-l(h(t))).$$
Going back to the more physical rail frame, we then have
$$P^{RR}_{\text{water}}=\mu(t)v(t) + f(t) (l(0)-l(h(t)))$$
We also have, for the cart,
$$ P^{RR}_{\text{cart}}=M v(t)$$
As stated in other answers (but not my previous one :-( ), one should not forget the momentum of the water which has left the cart at earlier times:
$$P^{RR}_{\text{leaked water}}=\int_0^t d\tau f(\tau) v(\tau)$$
Summing these terms and using momentum conservation, we have:
$$ 0=P^{RR}_{\text{total}}=(M+\mu(t))v(t) + f(t) (l(0)-l(h(t))) + \int_0^t d\tau f(\tau) v(\tau) $$
For example, when the cart is empty, $f(t)=0$, $\mu(t)=0$ and the above equation becomes: $$ 0=P^{RR}_{\text{total}}=Mv_{\text{final}} + \int_0^t d\tau f(\tau) v(\tau) $$ The cart can have a final nonzero speed if its momentum is compensated by the net momentum of the water having left the cart.
Differentiating the momentum conservation relatively to $t$, we obtain,
$$ 0=(M+\mu(t))\frac{dv}{dt} - f(t) v(t) + \frac{df}{dt}(l(0)-l(h(t))) - f(t) \frac{dh}{dt} \frac{dl}{dz} + f(t) v(t)$$
This equation can be simplified into
$$ \frac{dv}{dt}=\frac{1}{M+\mu(t)}\left[\frac{df}{dt}[l(h(t))-l(0)] - \frac{dl}{dz}\frac{f(t)^2}{\rho S(h(t))}\right] $$
Knowing $f(t)$ as per the previous section allows us to integrate this equation, at least numerically, for any cart. In the following, we solve the equation for our cart geometry, distinguishing three steps.
Step 1: opening the nozzle
When the nozzle is quickly opened at $t=0$, the cart is full and $\mu=m$ is constant. The equation we have to solve is then $$\frac{dv}{dt}=\frac{1}{M+m}\frac{df}{dt}l-0 $$ from which we easily deduce $$\Delta v = \frac{l\Delta f}{M+m}=\frac{lf(0)}{M+m}.$$ With the numerical values above, this corresponds to a speed of 4 mm·s⁻¹. This movement of the cart compensates the internal acceleration of the water inside the cart towards the nozzle.
As we will see later, this abrupt speed change is the biggest acceleration undergone by the cart. If the nozzle is opened over one second, which is still quick enough to keep the $\mu=m$ approximation valid, the horizontal acceleration $a$ is still small: $\frac{a}{g}=4\cdot10^{-4}$.
Step 2: Emptying the cart above the nozzle
Above the nozzle, we have a constant $l(h)=0$ and the differential equation is $$\frac{dv}{dt}=\frac{l}{M+\mu(t)}\frac{df}{dt}$$.
If the cart is emptied with a constant $f(t)$, it neither accelerates nor slows down until the flow $f(t)$ is cut. At that moment the back-action is the same but in the reverse direction, acting on a lower mass ($M$ instead of $M+m$). We therefore end up with a net speed towards the left, of value $lf(1/(M+m)-1/M)$.
In the more general case where $f$ slowly decreases to 0, $df/dt <0$, implying a slow-down and eventually a reversal of the speed, since the total mass $M+\mu(t)$ decreases.
If we plug into the above equation the values we have for $f(t)$ and $\mu(t)$, we have
$$\frac{dv}{dt}=-\frac{lf(0)}{t_m(M+m(1-t/t_m)^2)}=-g\frac{ls^2m}{h(0)S^2M}\frac1{1+\frac mM(1-t/t_m)^2}$$ which can be analytically integrated using $\int dt/(1+t^2)= \arctan t$. We then have $$v(t)-v(0)=-\frac{ls}{h(0)S}\sqrt{2gh(0)}\left[\arctan\sqrt{\frac mM} - \arctan\left(\frac{t_m-t}{t_m}\sqrt{\frac mM}\right)\right]$$.
We then have $$v(t_m)=v(0)-\frac{ls}{h(0)S}\sqrt{2gh(0)}\arctan\sqrt{\frac mM}$$ In the limit $m\gg M$, where the mass of water is larger than the cart mass, $\arctan\sqrt{m/M}\simeq\pi/2$ and $v(0)=\sqrt{2gh(0)}\frac{ls}{h(0)S}$, so that: $$v(t_m)=\sqrt{2gh(0)}\frac{ls}{h(0)S}(1-\frac{\pi}{2})$$
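To get a feel for the magnitudes, here is a minimal sketch (my own variable names) that simply evaluates the closed-form expressions quoted above with the numbers assumed for this cart:

```python
import math

g, h0 = 10.0, 3.0          # gravity and initial water height
S, s = 30.0, 1e-2          # tank and nozzle cross-sections, m^2
l = 5.0                    # horizontal offset of the nozzle, m

v0 = math.sqrt(2 * g * h0) * l * s / (h0 * S)   # v(0) in the m >> M limit
v_final = v0 * (1 - math.pi / 2)                # v(t_m) from the expression above

print(f"v(0) ~ {1e3 * v0:.1f} mm/s, v(t_m) ~ {1e3 * v_final:.1f} mm/s")
# v(t_m) comes out negative, i.e. a few millimetres per second towards the left
```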
Step 3: Showing that the nozzle has no influence, so long as it is small
The problem with the nozzle is the zone where $\frac{dl}{dz}$ is not small. Let's say that this zone has height $\epsilon$, above a vertical pipe of height $\delta$, with $\epsilon\ll\delta\ll h(0)$. My intuition is that the problem is not so dangerous, since the $\propto l/\epsilon$ derivative will only be relevant for a time proportional to $\epsilon$, and the small amount of water involved should keep the corrective term small. But I have nothing more rigorous yet :-(
Frédéric Grosshans
$\begingroup$ v(0) should be 0, right? And v(t)=const? These two violate momentum conservation. What is t_m? $\endgroup$ – TROLLHUNTER Dec 10 '10 at 16:02
$\begingroup$ @kalle43: what problem are you referring to? There are several of them. If you are referring to your last objection, about moving only in one direction, it is not the case. If f(t)=f is constant, all the interesting stuff happens when f is "switched" on and off. When it's switched on, the cart gets a kick towards the right and moves at the speed $+lf/(M+m)$. The speed stays constant afterwards until $f$ is switched off, and the cart gets a kick towards the left $\Delta v=-lf/M$, bigger than the first one since it is lighter, so its final speed is $-lf\frac{m}{M(M+m)}$, towards the left. $\endgroup$ – Frédéric Grosshans Dec 10 '10 at 18:22
$\begingroup$ @kalle43 : yes, but $\mu(t)$ can be linear only when there is still some water. When $\mu(t)=0$, $f(t)$ changes abruptly to 0, and (in your notation) $m''\neq0$ for a short time. This $m''$ peak then changes the speed. This occurs at time $t=m/f$ I have then three speeds : $v(t<0)=0$, $v(t\in[0,m/f])=+lf/(M+m)>0$ and $v(t>m/f)=-lfm/M(M+m)<0$. $\endgroup$ – Frédéric Grosshans Dec 10 '10 at 18:49
$\begingroup$ Ok, so at 0<t<m/f, it is moving in one direction with constant speed v. Its momentum is constant p. So it must have dropped off water with momentum -p, but the nozzle points straight down, so how can the water dropped off have gained momentum to the left, if the cart has so far only moved towards the right? Or do you mean that the water inside the cart has momentum -p ? $\endgroup$ – TROLLHUNTER Dec 10 '10 at 19:01
$\begingroup$ @kalle43 : exactly. There is a constant flow of water inside the cart which has a momentum $-p$. That is the key point. The main tank is centred, but you can imagine the nozzle as a pipe going from the bottom-centre of the tank to the point $-l$, and then turning down. In the horizontal section of the pipe, you have a mass $sl\rho$ of water, with speed $-f/s\rho$. The total amount of momentum is then $-fl$. $\endgroup$ – Frédéric Grosshans Dec 10 '10 at 19:19
Qualitative Answer
I think the cart exhibits an extremely surprising behavior. The cart begins by sitting still on the track. The hole is to the left of the center. When the nozzle is opened, water in the cart begins a net flow to the left. The cart, conserving momentum, picks up a velocity to the right. In a steady state, the flow of water would be constant and the cart would move at constant velocity. However, as the flow rate begins to decrease, the velocity of the cart decreases. Eventually, the cart comes to a standstill, then actually reverses directions, moving to the left before the last water falls out. When the last of the water is gone, the cart is coasting to the left. The center of mass of the system never moves, because as the center of mass of the cart moves, the center of mass of the water moves oppositely. Momentum is also conserved, because as the cart picks up momentum, the water picks up opposite momentum. If the water also slides after hitting the track, by the end of the process the water will have a net motion somewhat to the right to compensate the motion to the left of the cart.
Quantitative Answer
Let the cart move at a speed $v$ to the right, and the water move at an average speed $w$ to the right. In general, $v \neq w$ because the water's center of mass is moving relative to the cart. The hole is at $l$. If the hole is on the left then $l$ is negative.
The velocity of the water relative to the cart is $w-v$. This velocity comes from the fact that the water, if it were to continue as it is now, would all move from the center of the cart to the hole, a distance $l$, in a time $m/f$, with $f$ the mass flow rate. Thus the kinematic relation
$$w-v = \frac{lf}{m}$$
Next, we want to conserve momentum. This gives
$$\frac{d}{dt}(Mv + mw) = 0$$
Taking this derivative, we have to keep in mind that $M$ and $m$ are changing because water is flowing out of the cart. $m$ is decreasing at the rate $f$, and $M$ is increasing at the rate $f$ when we think of $M$ as the total mass moving at speed $v$ rather than the mass of the cart.
$$M\dot{v} + m\dot{w} + f(v-w) = 0$$
Physically, the first two terms represent the force on the cart and the force on the water in the cart. The last term represents the force on the water entering the nozzle. Water entering the nozzle goes from $w$ to $v$, thus experiencing acceleration. We have an earlier expression for $v-w$, so plug it in.
$$M\dot{v}+m\dot{w} = \frac{lf^2}{m}$$
I would like to solve for $\dot{v}$. To do this, take the time derivative of the kinematic equation for $w-v$
$$\dot{w} - \dot{v} = \frac{l\dot{f}}{m} + \frac{lf^2}{m^2}$$
These last two equations simplify to
$$\dot{v} = \frac{-l\dot{f}}{M+m}$$
When the flow rate is constant, there is no acceleration. This is plausible because we can imagine watching in a center-of-mass frame where the cart moves to the right and the water moves to the left. The water entering the nozzle feels an acceleration, but the water in the cart is also accelerating, and in the opposite direction. (The water in the cart is accelerating because there is less and less of it, so on average it must move faster to deliver the correct flow rate from the center of the cart to the nozzle.)
Right when we release the nozzle, the flow rate very quickly jumps up, and so the cart quickly picks up speed, too. $m$ is essentially constant over the course of this acceleration, so the cart jumps up to a speed
$$v = -\frac{lf}{M+m}$$
If $m$ were to remain constant, we would find that this relation continues to hold, so that when the water stops flowing, the cart also stops. However, $m$ is not constant; it decreases. When the flow slows to a stop, the acceleration of the cart is now larger because $m$ is smaller. Hence, by the time all the water has left the cart, it is actually moving to the left. This is surprising but necessary - the water is mostly moving to the right because the cart initially moved to the right. The cart must wind up moving left when all is said and done to compensate.
If we suppose the flow rate is constant the entire time, except abruptly beginning and ending (an assumption not in the original problem, which is qualitatively similar but more work to calculate), the final velocity of the cart is
$$v_f = \frac{lfm}{M(M+m)}$$
The water is all flowing at the speed the cart originally jumped to,
$$w_f = -\frac{lf}{M+m}$$
so we see that momentum is conserved.
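To make the bookkeeping concrete, here is a small numerical check of these constant-flow-rate formulas (the sample masses, flow rate and hole position below are arbitrary illustration values, not taken from the problem):

```python
M, m = 1.0e3, 9.0e3        # cart mass and initial water mass, kg
l, f = -5.0, 80.0          # hole position (negative = left of centre), m; mass flow, kg/s

v_f = l * f * m / (M * (M + m))   # final cart velocity
w_f = -l * f / (M + m)            # final average velocity of the expelled water

# total horizontal momentum of cart + expelled water should vanish
print(v_f, w_f, M * v_f + m * w_f)
```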
Mark Eichenlaub
$\begingroup$ @Martin That's a good point, but I don't think it invalidates the analysis. It depends on what "quickly" means - quickly compared to what. In the sense I used quickly, what's important is that the mass of water draining during the acceleration is small compared to the total mass of water. The cart could accelerate for a time that is short compared to the drainage time $m/f$. For effects like wave motion, what's important is that the cart's acceleration time is slow compared to, say, the period of the sloshing mode. If the drainage time is very long compared to the sloshing period, $\endgroup$ – Mark Eichenlaub Dec 7 '10 at 13:43
$\begingroup$ ... then the acceleration could be "quick" in the original sense I meant it, but "slow" in terms of wave effects. $\endgroup$ – Mark Eichenlaub Dec 7 '10 at 13:43
$\begingroup$ @Martin On second thought, the time of acceleration doesn't seem like the most important factor - perhaps the magnitude of the acceleration compared to $g$ is more important. I think the same basic idea that it is in principle possible to avoid waves should hold, though, if we can control the speed at which we ramp up the flow. If there are wave effects cropping up, though, it would seem that is an issue with the original statement that the water is flat, which simply proves to be unphysical. $\endgroup$ – Mark Eichenlaub Dec 7 '10 at 14:12
$\begingroup$ @kalle Thanks for posting, and I understand that it's subtle, but the momentum conservation equation I wrote does consider water leaving the system. At any given instant, the momentum of the cart and water still in the cart is $Mv+mw$. (I know that this ignores water that has left the cart. Bear with me a moment, please.) Suppose an amount of time $dt$ passes. Then the momentum changes because $v$ changes, $w$ changes, and because water leaves the cart. The momentum change due to a change in $v$ is $M\textrm{d}v$. The momentum change due to a change in $w$ is $m\textrm{d}w$. (cont.) $\endgroup$ – Mark Eichenlaub Dec 8 '10 at 22:26
$\begingroup$ @Mark : In order to understand your answer, I have developed a more complete model, which quantitatively finds your answer for a constant $f$ :-) $\endgroup$ – Frédéric Grosshans Dec 10 '10 at 16:07
Here is my attempt. I took a somewhat different path than kalle43, and I think this one is a little easier.
Let $x(t)$ be the coordinate of the nozzle at time $t$. Consider an infinitesimal mass of water $dm$ departing the nozzle at time $\tau$ : $$dm=-m'(\tau)d\tau$$ Here $m'(t)$ denotes the time derivative of the mass of water inside the tank.
Let $x(\tau)$ be the horizontal coordinate of $dm$ at time $\tau$. Then at time $t>\tau$ the horizontal coordinate of $dm$ will be: $$x(\tau)+(t-\tau)x'(\tau)$$ Here $x'(t)$ denotes the time derivative of the coordinate of the nozzle at time $t$ or simply velocity of the wagon.
Now the sum $x_i\,dm_i$ (static moment of mass) over all infinitesimal particles emitted from the nozzle within the time period $(0...t)$ is expressed by the integral:
$$-\int_0^t [x(\tau)+(t-\tau)x'(\tau)]m'(\tau)d\tau$$ The following step is to get the static moment of mass of the wagon with the water inside it. This is simply: $$[l+x(t)][M+m(t)]$$ Now the static moment of mass of the whole system (the wagon with water + emitted water) is expressed as the sum of the last two expressions:
$$-\int_0^t [x(\tau)+(t-\tau)x'(\tau)]m'(\tau)d\tau+[l+x(t)][M+m(t)]=pt+c$$ $p=const$ and $c=const$
Now you ask what $pt+c$ means. This becomes clear when we differentiate the last equation with respect to $t$: $$-\int_0^t x'(\tau)m'(\tau)d\tau- x(t)m'(t)+x'(t)[M+m(t)]+m'(t)[l+x(t)]=p$$ $p=const$
This result represents the horizontal momentum of the whole system (the wagon with water + emitted water). This must be conserved. So $c$ is simply an integration constant.
Now the most important part follows:
Consider the initial moment $t=0$. At this moment let the coordinate of the nozzle be zero:$(x(0)=0)$ as well as the initial velocity of the wagon:$(x'(0)=0)$. Then the momentum equation gives:
$$lm'(0)=p=const$$
What can we conclude from this result? First, before the opening of the nozzle the momentum of the whole system (wagon + water inside it) is definitely zero. But after opening, at $t=0$, the momentum remains zero only if $m'(0)=0$. Otherwise it suddenly becomes different from zero, and this latter case is what is realized in the given problem. The momentum of the whole system (the wagon with water + emitted water) becomes different from zero and the wagon starts to move in one direction.
But if $m'(0)=0$ then Mark Eichenlaub's scenario will start, I think.
Now let's differentiate the momentum equation with respect to $t$ to get the equation of motion of the wagon: $$[M+m(t)]x''(t)=-lm''(t)$$ Actually, I was shocked that the equation turned out to be so simple.
I set aside Torricelli's law and added an example which quantitatively confirms Mark Eichenlaub's qualitative answer. This also shows that the law of conservation of energy is irrelevant in this problem; only the mass change of the wagon matters.
I picked a function $m(t)$ such that $m'(0)=0$. So there is no need to worry about any instantaneous jump at $t=0$, and the horizontal momentum remains zero. $$m(t)=\frac{m}{2}\left(1+\cos\frac{\pi{t}}{T}\right);\quad 0\leq t\leq T$$ and the equation of motion:
$$[M+m(t)]x''(t)=-lm''(t)$$ The solution of the equation:
$$\dot{x}(t)=\frac{l\pi^2}{T}\left(\frac{t}{T}-\frac{2}{\pi}\frac{\eta+1}{\sqrt{{2\eta+1}}}\arctan\frac{\tan\frac{\pi{t}}{2T}}{\sqrt{2\eta+1}}\right)$$ where $\eta=\frac{m}{2M}$
This solution closely follows the behavior Mark described. The final velocity is directed to the left $(v_f<0)$ and is given by the expression: $$v_f=\frac{l\pi^2}{T}\left(1-\frac{\eta+1}{\sqrt{{2\eta+1}}}\right)$$
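One way to cross-check this closed form is to integrate the equation of motion $[M+m(t)]x''(t)=-lm''(t)$ numerically with the chosen $m(t)$. A minimal sketch follows; the sample values of $M$, $m$, $l$ and $T$ are my own, not taken from the answer:

```python
import math

M, m, l, T = 1.0e4, 9.0e4, 5.0, 2000.0   # sample values: cart mass, water mass, offset, drain time
eta = m / (2 * M)

def mass(t):        # the chosen m(t)
    return 0.5 * m * (1 + math.cos(math.pi * t / T))

def mass_dd(t):     # its second time derivative m''(t)
    return -0.5 * m * (math.pi / T) ** 2 * math.cos(math.pi * t / T)

# v(T) = integral of -l m''(t) / (M + m(t)) dt, done with a simple midpoint rule
N = 200_000
dt = T / N
v = sum(-l * mass_dd((i + 0.5) * dt) / (M + mass((i + 0.5) * dt)) * dt for i in range(N))

v_closed = (l * math.pi ** 2 / T) * (1 - (eta + 1) / math.sqrt(2 * eta + 1))
print(v, v_closed)   # the two values should agree (and be negative, i.e. towards the left)
```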
$\begingroup$ Cool. Your differential equation is actually the same as mine for $\dot{v}$. The stuff about $m'(0)$ being different from zero is a good point. Basically, if $m'(0) \neq 0$, the cart experiences infinite acceleration until it reaches the recoil speed $-lm'(0)/(M+m)$. This is actually the same issue you raised in a comment to my post - if the cart does accelerate very quickly as the flow turns on, we might expect the assumption that the surface of the water stays flat to fail. Anyway, it's nice to see someone take a different tack and confirm each other's work. $\endgroup$ – Mark Eichenlaub Dec 9 '10 at 11:54
$\begingroup$ @kalle43 : the momentum is conserved, because some water has been left with some momentum to the left. $\endgroup$ – Frédéric Grosshans Dec 10 '10 at 16:19
$\begingroup$ That is impossible unless the wagon has been moving to the left at some earlier point, but he predicts it to start at 0 and coast to the right forever. $\endgroup$ – TROLLHUNTER Dec 10 '10 at 16:22
$\begingroup$ From my own work I got the same EOM as you did for the tank, so I think you're right in that respect. But when you say near the end that the total momentum suddenly becomes different from zero, that'd be a blatant violation of the law of conservation of momentum. The total momentum should remain zero. (When I get a chance I'll try to verify that directly using the solution to the differential equation) $\endgroup$ – David Z♦ Dec 12 '10 at 8:42
$\begingroup$ @David: Maybe I did not express myself correctly. My first language is not English. I repeat from my answer: it follows from the horizontal momentum equation of the system that at t=0 : l*m'(0)=p=const. So the momentum of the system depends strongly on the initial condition m'(0). If m'(0) is not zero then it implies a discontinuity appearing. At this point you cannot argue about the validity of the law of conservation of momentum, I think. $\endgroup$ – Martin Gales Dec 13 '10 at 10:30
This answer presents an analogy that I hope will clarify how it is possible that 1) the wagon moves 2) the wagon winds up with a net velocity at the end of the problem. This isn't a direct answer - it's intended as supporting conceptual material (so I've marked it community wiki).
Throughout this answer, all velocities and all momenta are calculated solely in the reference frame of the rail.
Imagine that the tank does not have water in it. Instead it has a gun that shoots clay lumps. The gun is mounted at the middle. It can shoot any size clay lump at any speed.
There is a hole in the wagon floor. For convenience, the hole is all the way at the left side of the wagon. If the gun shoots a lump of clay to the left, the gun, which is rigidly attached to the rest of the wagon, will recoil some. The lump will fly towards the left side of the wagon and collide with the left wall completely inelastically. Then it will fall down through the hole in the floor and exit the wagon with exactly the same horizontal speed (if any) as the wagon.
First experiment
The tank starts out stationary with a lump of mass $m$ in the gun. It shoots the lump at speed $v$. The lump is moving to the left; $v$ is negative. The momentum of the lump is $mv$. Let the recoil speed of the wagon be $w_0$. By conservation of momentum, $mv + Mw_0 = 0$. Therefore, the cart recoils, moving at speed
$$w_0 = -v\,m/M$$,
which is to the right.
Next, the lump collides with the left wall. At this point the lump and wagon must move at some new, mutual speed after the collision. Call that $w_f$. Conservation of momentum implies $w_f = 0$ and the wagon has come to a dead stop. The lump falls through the hole straight down and the wagon sits still for the remainder of eternity. It is displaced from its original position.
Second Experiment
The tank starts out with two lumps of clay in the gun, each of mass $m/2$. The gun shoots one lump at speed $v$ as before. Conservation of momentum gives $\frac{m}{2}v + (M+\frac{m}{2})w_0 = 0$, or
$$w_0 = -v\frac{m}{2(M+m/2)}$$
Next, we wait until the moment when that lump hits the left wall. At precisely that moment, we fire the next lump, also at speed $v$. We make the acceleration profiles of the two lumps exactly equal in magnitude and opposite in sign. This way, the forces on the two lumps must be equal. Those forces come from the rigid body of the gun and wagon combined. Hence, the gun/wagon feels no net force and no acceleration during this process.
The first lump is now comoving with the wagon at speed $w_0$. It falls through the hole moving at that speed.
Next, the second lump collides with the wagon. The second lump and the wagon come to some mutual velocity $w_f$. Conservation of momentum gives $mw_0/2 + (M + m/2)w_f = 0$, or
$$w_f = -w_0 \frac{m}{2(M+m/2)}$$
or substituting in for $w_0$
$$w_f = v \left(\frac{m}{2(M+m/2)}\right)^2$$
The second lump falls out of the wagon and moves at speed $w_f$, and the wagon coasts at speed $w_f$ from then on. $w_f$ is proportional to $v$ and has the same sign. The wagon is moving to the left at the end of the process.
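For what it is worth, here is the same two-lump bookkeeping with toy numbers (the masses and launch speed below are mine, chosen only for illustration):

```python
M, m, v = 10.0, 2.0, -3.0            # wagon mass, total clay mass, launch speed (negative = left)

w0 = -v * (m / 2) / (M + m / 2)      # wagon speed after the first shot (positive = right)
wf = v * ((m / 2) / (M + m / 2))**2  # final mutual speed of wagon + second lump (negative = left)

# momentum check: lump 1 leaves at w0, lump 2 and the wagon end at wf
print(w0, wf, (m / 2) * w0 + (M + m / 2) * wf)   # last value should be 0
```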
Mark Eichenlaub
$\begingroup$ Your pure qualitative answer is much more clear than this one (at least for me). What is x? $\endgroup$ – Martin Gales Dec 15 '10 at 9:03
$\begingroup$ @Martin Oops - $x$ is just a typo. I'll fix it, thanks. That's okay if this answer isn't what you're looking for. I just wanted to present an explicit, easy-to-understand example of why the cart can move and even have net motion at the end of the process. $\endgroup$ – Mark Eichenlaub Dec 15 '10 at 9:17
$\begingroup$ You do not even need to shoot the second lump, just release it while the first one is in flight. Still, it is misleading; note that neither of the solutions proposed has the speed changing sign. $\endgroup$ – arivero Jan 19 '11 at 19:38
My answer below is wrong: it doesn't take into account the momentum of water leaving the cart once it has started moving.
Basically, by conservation of the horizontal momentum in the absence of any horizontal force, the speed of the wagon at the end will be 0. However, the position of the centre of mass of the (Wagon+Water) system should also be conserved, so the wagon will move slowly to the right during the process, which can probably be linked to a pressure difference inside the tank. But it will stop by the time the Wagon is empty.
The real question is therefore not the final speed, but the final displacement. Let x be the current position of the Wagon's centre of mass. When a mass -dµ of water goes through the nozzle, its centre of mass is displaced by l to the left, and the centre of mass of the wagon is displaced by -l·dµ/(µ+M) to the right, where µ is the remaining mass of water inside the wagon.
Integrating this gives $$\Delta x=-l\int_m^0\frac{\mathrm d\mu}{\mu+M}=l\ln\frac{m+M}M $$.
Of course, if the wagon moves initially at a (non-relativistic!) speed, the previous analysis stays true in the moving reference frame. The speed will not change, but the wagon will have a Δx advance compared to a wagon with the same initial speed but a closed nozzle.
Edited to correct a sign error.
$\begingroup$ @Skilvvz: The movement of the water inside the tank induces a pressure gradient inside the tank, which is translated into a movement of the wagon. It is the movement of the water inside the tank which moves the wagon, not the water leaving the wagon. By the way, if you moved all the water inside the tank with a ballast system, you could move the wagon without letting the water escape it. The water that has flowed out is just a red herring: it is not what moves the wagon. It is only used to move the water inside the wagon. $\endgroup$ – Frédéric Grosshans Dec 6 '10 at 15:24
$\begingroup$ @Skilvvz: 2 possibilities (I think 1 is the right one): 1. The speed of the water is not 0 and plays a role in the pressure; 2. The horizontality condition might be only approximately true. $\endgroup$ – Frédéric Grosshans Dec 6 '10 at 16:38
$\begingroup$ @Frédéric: oh, I just realized that you can't use CoM analysis this easily. The water that comes out of the wagon when it is moving has non-zero velocity (with respect to the frame where the wagon was initially stationary) and will always have it non-zero. Unless you want to bring the 2nd law into the game (which actually makes the water stop), but then the CoM principle shouldn't be valid anymore. $\endgroup$ – Marek Dec 6 '10 at 17:45
$\begingroup$ @dmckee: right, I am sorry about that, but this problem is genuinely hard. If one were to consider the full description then one obviously has to take continuum mechanics into account. It's totally unclear to me how to reduce that infinite number of degrees of freedom + thermodynamics into some simple system. I am certain though that it can be done, so if someone with better intuition comes along that will be great. $\endgroup$ – Marek Dec 6 '10 at 21:21
$\begingroup$ @mbq: The water level falls uniformly throughout the tank, right? So each volume element $dV$ that leaves through the nozzle at one end must ultimately have come from a layer spread across the upper surface of the liquid, so there is a bulk flow. That's the easy part. The hard part is how you reconcile that with frictionlessness and the "straight down" requirement for exhausting the liquid. My suspicion is that there are second-order effects we're neglecting. If we allow a tiny bit of friction we can slow the whole business down until the car is always static. $\endgroup$ – dmckee --- ex-moderator kitten♦ Dec 7 '10 at 0:10
With a vertical jet, Torricelli's law still holds because the displacement of the wagon is orthogonal to the acting forces, gravity plus (arguably, but orthogonal in any case) the reaction force, so no work is done on the wagon, $\Delta W = {\bf F} \cdot {\Delta \bf x}=0$, and all the energy still goes to the water jet.
Thus we can calculate $m(t)$ as usual. Forget the drawing and use a square tank; the one in the drawing was calculated by Kepler, and it complicates the problem. Let the height of the water be simply $h(t)={m(t)\over \rho S}$, ok? And $2 g h(t)$ is the square of the speed of the jet, the variation of mass follows $ m'(t)= - \rho s \sqrt { 2 g h(t)}$, and at the end we have $$ m'(t) = - \sqrt {2 g \rho s^2 \over S} \sqrt{ m(t)} $$ which solves to $m(t)= m (1 - t \sqrt {g \rho s^2 \over 2 m S})^2 $ and tells us that the tank becomes empty at $t_f=\sqrt {2 m S \over g \rho s^2 }=\sqrt {2 h S^2 \over g s^2 }$.
We can plug this into Frédéric's "wrong" solution $x(t)= l \ln {m+M \over m(t)+M}$ to get the displacement $$ x(t) = l \ln {m+M \over (1 - t/t_F)^2 m +M}$$ and the velocity $$ \dot x(t)= { 2 m l \over t_F} { (1-t/t_F) \over (1 - t/t_F)^2 m +M } = { 2 l (t_F-t) \over (t_F - t)^2 + {Mt_F^2 \over m} }$$
Note that in the limit of $M \ll m$, we get $ \dot x(t)= { 2 l \over (t_F - t) } $ and thus $ \ddot x = { 2 l \over (t_F - t)^2 }$, similar to other answers. Note that in this limit the speed at $t_F$ is infinite, but it is massless, so we can stop it anyway.
Another curious issue is that $ \dot x(0) = { 2 l \over t_F (1 + M/m)} $ is not zero. It sounds strange, but consider that the initial speed of the jet is not zero either.
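For concreteness, here is a small sketch (my own variable names) that evaluates these expressions with the numbers used earlier in this thread:

```python
import math

g, h = 10.0, 3.0           # gravity, initial water height
S, s = 30.0, 1e-2          # tank and nozzle cross-sections, m^2
rho, l, M = 1e3, 5.0, 1e4  # density, nozzle offset, cart mass
m = rho * S * h            # initial water mass

t_F = math.sqrt(2 * h * S**2 / (g * s**2))   # emptying time
x_f = l * math.log((m + M) / M)              # displacement x(t_F)
xdot0 = 2 * l / (t_F * (1 + M / m))          # initial velocity \dot x(0)

print(f"t_F ~ {t_F:.0f} s, x(t_F) ~ {x_f:.1f} m, xdot(0) ~ {1e3 * xdot0:.1f} mm/s")
```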
Before considering variants of Frédéric's solution, it is important to note that we have four blobs of mass playing some role:
the leaked water, $m-m(t)$
the leaking water, $\Delta m= - m'(t) \Delta t$
the cart mass, M
the wagon water, m(t)
In the leaking process, the leaked water is already inertial, with a horizontal momentum (in the railway direction) equal and opposite to the momentum of the other three masses, or, for small $\Delta t$, equal to $- (M + m(t)) V_{c+w}$. The questions to be settled are: 1) what is the actual direction of the force exerted by the water, and what is the horizontal velocity of the leaking water: the velocity of the cart, that of the CM of the water, or some other one? and 2) Does the acceleration of the cart change the direction of "gravity" inside the cart (remember your last bus trip) enough to be considered a major perturbation of the problem?
Point 2 is most probably a red herring, at least in the approximation where $M \ll m(t)$, because in that case we have no reason to expect the accelerations of the cart and of the [CM of the] water inside to be different. Remember that the "horizontal gravity" inside the wagon will be the difference of these accelerations.
arivero
$\begingroup$ There are more problems: this is defined for any x'(-t), t>0; you have to set x'(T)=0 at some point T for a realistic solution, thus Torricelli does not hold, only for steady flow. m(t) is not an analytic function, since it is constant for all t<t_0; that is why it's impossible to get an exact solution. But there exist very realistic m(t), under certain simplifications. $\endgroup$ – TROLLHUNTER Jan 19 '11 at 14:20
$\begingroup$ It seems that we need to go with steady flow. The main worry, really, is the validity or not of Frédéric's solution. Is its assumed reaction force orthogonal to the tank displacement, or not? $\endgroup$ – arivero Jan 19 '11 at 14:44
$\begingroup$ @arivero : There is some small force acting on the left side of the wagon because Torricelli's law is not exactly respected. Then some non-zero work can be done. You make the same error as I did in my "wrong" solution. $\endgroup$ – Frédéric Grosshans Jan 20 '11 at 17:03
$\begingroup$ @Fréderic, but then, how is it that in the "light wagon" limit the equation has the same shape as the official answer? Is it wrong too? I will redo the proof of your wrong answer tomorrow. Note that the small force you mention, if it exists, is not an external force; the only external force is gravity. $\endgroup$ – arivero Jan 20 '11 at 23:04
$\begingroup$ @arivero : about the force: it is indeed not external to the "cart+water" system. Therefore the centre of gravity of the whole system (water+cart) does not move, but you should not forget the water which has left the system with a horizontal speed (in the fixed reference frame), which is essentially what the "official solution" takes into account and what my wrong solution forgets. You can also consider only the wagon movement, to which the water is external. In this case, you have to take into account the horizontal force of the water on the wagon. $\endgroup$ – Frédéric Grosshans Jan 21 '11 at 13:32
Some years later... I am reviewing this problem mostly for my own benefit, but it could be useful if somebody still wants to discuss the answers, particularly without any braking system.
Let me start with an alternative wrong solution, aka a variation: allow the system to drop the water without horizontal velocity, for instance using a periodic obturator or a refilling system such that first a quantity $\delta m$ of water is expelled without disturbance, then the resultant bubble is liberated and some short time is allowed for the system to relax.
In this solution, obviously the cart moves away from the nozzle side, i.e to the right, to keep the original CM. The move is such that
$\delta x = l {\delta m \over M-\delta m} \approx l {\delta m \over M} \approx - l { m'(t) \over m(t)} \delta t $
where the last step is no doubt a bit tricky, given that our initial postulate is that $m(t)$ is a multiple-step function and we are approximating it with a differentiable function. The point of the approximation is that we can then solve for the speed of the wagon.
$\delta x = - l {d \ln m(t) \over dt }\delta t$
$x(t)= l \ \ln({m(0) \over m(t)})$
and then in this variation the wagon stops when $m(t)=m_c$ and there is no more water to drop, no more CM to correct:
$x_f = l \ \ln({m_w + m_c \over m_c})$
Note that I am using mass values from the accepted solution, but $l$ from the original question. To be clear: the nozzle was on the left side of the wagon, the wagon has coasted left until it stopped, and it actually stopped because a cunning device was making sure that the water was launched without horizontal speed. Note also the difference with respect to a single dropping operation where all the water $m_w$ is deployed at the zero coordinate; then $x_f = l\ m_w/m_c$.
Now let's add horizontal speed. From the instant that we allow some water to coast indefinitely on the right side, we will need to find in the solution at least one point where the wagon actually reverses its motion and starts to coast left.
To keep the CM fixed, nothing beats the equation from the chosen answer
$0=\dot{m}(t) l+\int_0^tm(\tau)\ddot{x}(\tau)d\tau$
which differentiates to
$0=\ddot{m}(t) l+ m(t)\ddot{x}(t)$
$\ddot{x}(t) = - l {\ddot{m}(t) \over m(t)}$
and the real point is that even for a constant acceleration of the mass of leaked water, the denominator makes the result more colourful. I think, comparing with the motion in the "water drop" case, that it can be interpreted as saying that the wagon needs an extra momentum $l \ m'(t)$, thus an extra force $l\ m''(t)$, which translates to an extra acceleration $l\ m''(t)/m(t)$. But this is just an interpretation, and really the equation is almost what one expects from dimensional analysis, as the original poster indeed suggested, so various interpretations could be fitted.
As for initial conditions, it makes sense to require $x'(0)=-l m'(0)/m(0)$, not only because it is the speed of the car in the approximation with the multiple-step system, but also because it is compatible with the CM condition, taking $x''(t)$ to be a Dirac delta at $t=0$.
Let's try an example where the initial speed of the flow and of the wagon are both zero. To do this, instead of a brake we can use a function $m(t)$ that reaches the Torricellian regime at $t_0$, using initially some extra water $m_{nt}$ and a controlled pumping. So in the starting phase we have
$t < t_0 : \ddot{m}(t) = -{2m_w \over t_c t_0} : \dot{m}(t) =-{2m_w \over t_c t_0} t : m(t)= m_c+m_w+m_{nt}- {m_w \over t_c t_0} t^2$
Note that the extra water is thus $m_{nt} = m_w (t_0/t_c)$
When we enter the regime of the question, we change the sign of the acceleration
$t_0 < t < t_c + t_0: \ddot{m}(t) = {2 m_w \over t_c^2} : \dot{m}(t) = - 2 {m_w \over t_c} (1-\frac{t-t_0}{t_c}) :m(t)=m_{w} {(1-\frac{t-t_0}{t_{c}})}^{2} + m_{c} $
and of course finally
$ t_c +t_0 < t : \ddot{m}(t) =0 : \dot{m}(t) =0 : m(t) = m_c $
so that the final speed of the wagon will be given by the integration
$ \dot x(t_0+t_c) = l \int_0^{t_0} {{2m_w / t_c t_0} \over {m_c+ m_w (1 + t_0/t_c) - {m_w \over t_c t_0} t^2}} - l \int_{t_0}^{t_0+t_c} {{2 m_w / t_c^2} \over {m_w {(1-\frac{t-t_0}{t_c})}^{2} + m_c}}$ $= 2l (\int_0^{t_0} {{1 / t_c t_0} \over{ \frac{m_c}{m_w}+ (1 + t_0/t_c) - {1 \over t_c t_0} t^2}} - \int_{t_0}^{t_0+t_c} {{1 / t_c^2} \over { \frac{m_c}{m_w}+ {(1-\frac{t-t_0}{t_c})}^2 }})$ $= 2l (\int_0^{t_0} {1 \over{ t_c t_0 \frac{m_c}{m_w}+ (t_c t_0 + t_0^2) - t^2}} - \int_{t_0}^{t_0+t_c} { 1 \over { t_c^2 \frac{m_c}{m_w}+ {(t_c-(t-t_0))}^2 }})$
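These last two integrals are easy to evaluate numerically. Here is a minimal sketch; the values of $m_c$, $m_w$, $t_c$, $t_0$ and $l$ below are sample choices of mine, since the answer leaves them free:

```python
m_c, m_w = 1.0e4, 9.0e4    # cart mass and water mass, kg
t_c, t_0 = 2000.0, 100.0   # Torricellian emptying time and ramp-up time, s
l = 5.0                    # nozzle offset, m
r = m_c / m_w

def midpoint(f, a, b, n=100_000):
    """Midpoint-rule quadrature of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

I1 = midpoint(lambda t: 1.0 / (t_c * t_0 * r + t_c * t_0 + t_0**2 - t**2), 0.0, t_0)
I2 = midpoint(lambda t: 1.0 / (t_c**2 * r + (t_c - (t - t_0))**2), t_0, t_0 + t_c)

v_final = 2 * l * (I1 - I2)
print(v_final)   # negative with these numbers: the wagon ends up moving opposite to its initial kick
```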
$\begingroup$ A complete solution should, I believe, substitute Torricelli's law by its spirit: no energy loss, so that all the potential energy is used either to move the wagon or to shoot the water at some speed v(t). $\endgroup$ – arivero Jul 25 '15 at 18:40
As the problem is initially described, the nozzle is located on the bottom left side of the tank with the nozzle exit facing downward. If this is the case, there will be no horizontal force to act as a thrust to start the tank in horizontal motion. Any thrust that may be developed by the water exiting the nozzle will be in the direction opposite to the jetting water, that is, in a vertically upward direction.
Now if the nozzle exit were directed to the left or right of the tank in a horizontal direction, the exiting water would surely develop thrust to move the tank along the rail tracks. The amount of thrust created will be a function of the flow rate and nozzle size. The maximum head of water will not be greater than the height of the water in the tank. The fact that inside the tank the water travels internally in the left direction will create no external force to move the tank; any force applied for horizontal motion must be external.
Gerard De Santis
$\begingroup$ Consider what would happen to the center of mass of the system if this were correct: it would spontaneously move with no force in the system, violating Newton's laws. There must be second-order effects that are easy to neglect. Some of the other answers explore how one understands those. $\endgroup$ – dmckee --- ex-moderator kitten♦ Apr 22 '16 at 0:00
Clearly the water going out of the nozzle does not contribute any horizontal momentum change. Initially the wagon is still and the water flows downward.
The only reason why the wagon could move is that there is a force acting on the right side of the nozzle as the water hits it and its direction is turned towards the floor, thus exerting a force.
But let's think about this. How can we calculate this force? The force is equal to the pressure of the water times the vertical cross section area of the nozzle.
However, the water pressure is the same on the side with the nozzle and the side without. The force on the left side of the nozzle is compensated by an equal force on the right side of the wagon.
The forces on the left are exactly canceled by the ones on the right. F3 is canceled out by -F3 acting on the left side of the tap.
If there were no left side of the tap we would have a net horizontal force (the wagon would be propelled by recoil), but having a left side keeps the wagon still.
It's clear to me that there will be, in real life, second order effects like unbalances in the density of the water which could make the wagon oscillate or move. But the question clearly states that the water remains horizontal (therefore undisturbed) and that Torricelli's law applies. This only happens when the outward flow is so slow that any inhomogeneities in density are second order effects and the water can be treated as to always have a laminar flow.
In any case the system is analogous to standing on a frictionless surface. Short of throwing something outwards, one wouldn't be able to propel oneself. Throwing something downwards wouldn't help.
To address Mark's and Marek's concern about the conservation of momentum, I can say this:
the water, internally, initially falls down and gains vertical momentum
at some point it will necessarily turn left. The momentum will not change in magnitude, but in direction: from down to left. This creates a reaction force on the bottom and on the right side.
at the final point (the nozzle): the water will turn down again, from left to down. This creates a reaction force on the top of the nozzle and on the left of the nozzle
since the water flows vertically w.r.t. wagon, it has zero horizontal momentum at the exit point
this constraint implies that the left hand force and the right hand force compensate.
instead, there will be a torque. I have not calculated this, but depending on the length of the tube, this torque could eventually make the wagon tilt (if the weight of the tube remains negligible). Normally, though, the torque will not have a movement effect, it would merely move the center of mass towards the right.
To understand this a bit better:
Imagine the same problem without the nozzle
Water flows freely left, horizontally.
The water flows out with a speed of $v(t)=\sqrt{2gh(t)}$ and a horizontal momentum that can be calculated via the parameter $s$ and $v(t)$, and a vertical momentum of zero
The wagon's horizontal momentum momentum changes by the same amount, opposite sign
The wagon recoils right
Re-imagine the original problem, with the nozzle
Water flows freely downwards, vertically
The water flows out with a speed of $v(t)=\sqrt{2gh(t)}$ and a vertical momentum that can be calculated via the parameter $s$ and $v(t)$ and a horizontal momentum of zero
The wagon's horizontal momentum changes by the same amount, which is zero
The wagon stays still
Sklivvz
$\begingroup$ I think the cart will move(cf my answer), even if the final velocity will be 0. $\endgroup$ – Frédéric Grosshans Dec 6 '10 at 17:16
$\begingroup$ @Sklivvz Your answer says that the cart does not move, but the water moves from the center of the cart (on average) to someplace to the side of that. Hence, if the cart doesn't move, the center of mass of the system does move. Since it starts out stationary and there is no external force in the horizontal direction, this is impossible. $\endgroup$ – Mark Eichenlaub Dec 7 '10 at 8:43
$\begingroup$ @Sklivvz: your argument is definitely incorrect. You say the water doesn't have horizontal momentum. Well, that's true for the water that has already left the wagon. But it isn't true for the water that is still flowing inside the wagon. You completely ignored this in your analysis. You can't just arbitrarily reduce this infinite-DoF system to one or two DoF and expect that it will be correct. $\endgroup$ – Marek Dec 7 '10 at 11:00
$\begingroup$ @Sklivvz I said "burned", but I meant "expelled". So if the fuel is expelled uniformly, then I agree there won't be a force, just like there wouldn't be one in this problem if the water leaked out uniformly from the floor. I was imagining the fuel is expelled from near the bottom - sorry that wasn't clear. As for an upward force, it is just saying that if there were a ball inside the rocket instead of fuel, and the ball accelerated down, the rocket would get lighter (experience an upward force) during the acceleration. Same for anything with mass accelerating down, including fuel. $\endgroup$ – Mark Eichenlaub Dec 7 '10 at 11:10
$\begingroup$ @Sklivvz: but the flow of the water (and associated momentum) is definitely not second-order. It's arguably the most important effect in the whole problem. $\endgroup$ – Marek Dec 7 '10 at 20:55
A quantitative answer
The three main conservation laws of fluid mechanics are: 1. conservation of mass, 2. conservation of momentum, and 3. conservation of energy.
Between the time $t$ and $t+\mathrm{d}t$ a mass of water $\mathrm{d}m(t)$ escapes through the nozzle. The mass escapes at a speed governed by Torricelli's law - obtained through 1. and 3.:
$$v(t) = \sqrt{2gh(t)}$$
The direction of the water is determined by the inclination of the nozzle $\theta$ which we may generalize to vary from $0$ radians (horizontal, pointing left) to $\frac{\pi}{2}$ (vertical, pointing down).
$$\mathbf{v}(t) = -v(t) \pmatrix{ \sin \theta \\ \cos \theta }$$
The momentum of the water flowing out is determined by
$$ \mathbf{p}(t) = m(t) \mathbf{v}(t) = -m(t)v(t) \pmatrix{ \sin \theta \\ \cos \theta }$$
$$ \mathbf{p}(t) = p(t) \pmatrix{ \sin \theta \\ \cos \theta }$$
Since the fluid is incompressible and mass is conserved, the mass flowing out corresponds to an equivalent decrease in the amount of water from the top.
$$ \mathrm{d}m(t) = \rho S \mathrm{d}h(t)$$
But also, the water will flow at a speed $v(t)$ at the nozzle, so the water that escapes is
$$ \mathrm{d}m(t) = \rho s v(t)\mathrm{d}t$$
$$S \mathrm{d}h(t) = s v(t)\mathrm{d}t$$
$$ \mathrm{d}h(t) = \frac{s}{S}v(t)\mathrm{d}t$$
Plugging in the equation for $v(t)$ and introducing $\sigma=\frac{s}{S}$
$$ \mathrm{d}h(t) = -\sigma \sqrt{2g h(t)} \mathrm{d}t$$
solving this first-order nonlinear ordinary differential equation and using $h_0 = h(t=0)$ and $v_0 = v(t=0) = \sqrt{2gh_0}$
$$h(t) = \frac{1}{2}g\sigma^2t^2 - v_0\sigma t+h_0 \approx h_0 - v_0\sigma t$$
This lets us find $v(t)$, $m(t)$ and $\mathbf{p}(t)$:
$$v(t) \approx -\sqrt{2gh_0 - 2gv_0\sigma t}$$
$$\frac{\mathrm{d}m}{\mathrm{d}t} = \rho s v(t) \approx - \rho s \sqrt{2gh_0 - 2gv_0\sigma t}$$
Which is solved by the (approximate) solution:
$$m(t) \approx C +\frac{2 \rho s \sqrt{2g} (h_0-\sigma v_0 t)^{\frac{3}{2}}}{3 \sigma v_0}$$
Note: an analytical solution exists, but it's really ugly
To calculate $C$ we must use the condition that when all the water is gone, $m(t) = 0$. To do so we can solve:
$$0=h(t)\approx h_0 - v_0\sigma t \implies t_f \approx \frac{h_0}{v_0 \sigma}$$
$$0 = m(t = t_f) = C +\frac{2 \rho s \sqrt{2g} (h_0-\sigma v_0 \frac{h_0}{v_0 \sigma})^{\frac{3}{2}}}{3 \sigma v_0}\implies C=0$$
$$ m(t) \approx \frac{2 \rho s \sqrt{2g} (h_0-\sigma v_0 t)^{\frac{3}{2}}}{3 \sigma v_0} $$
Finally, the magnitude of the linear momentum is given by:
$$ p(t) = m(t)v(t) \approx -\frac{2 \rho s \sqrt{2g} (h_0-\sigma v_0 t)^{\frac{3}{2}}}{3 \sigma v_0} \sqrt{2gh_0 - 2gv_0\sigma t}$$
Let's see the effect of the two components of $\mathbf{p}$. The horizontal component propels the wagon by reaction; the vertical component creates a torque that pushes the center of mass to the right - note that the flow pushes the center of mass to the left.
If $\theta = 0$, all the linear momentum is horizontal. There is no torque, the wagon will move by reaction and the center of mass doesn't move because the water flowing out and the wagon move in opposite directions:
$p_{wagon} = p(t)$
If $\theta = \frac{\pi}{2}$ all the linear momentum is vertical. There will be a torque but no horizontal movement, as there is no horizontal momentum. This implies that the contributions to the center of mass by the water flowing out and the torque must cancel out.
Finally, if $\theta$ has a middle value, a compositions of the two behaviours will occur.
As regards the problem, $\theta = \frac{\pi}{2}$, and therefore the wagon will not move.
$\begingroup$ @Martin, what kind of comment is that? :-( $\endgroup$ – Sklivvz Dec 11 '10 at 11:47
$\begingroup$ That's not an explanation; either you can point out an error in my line of thought or you can't. I am using the assumptions you provided, like Torricelli's law. $\endgroup$ – Sklivvz Dec 13 '10 at 10:04
$\begingroup$ There is a fundamental error on your analysis. Your starting point is not the law of conservation of horizontal momentum of the system ( the wagon with water + leaked water). I do not have anything more to add. $\endgroup$ – Martin Gales Dec 13 '10 at 12:18
$\begingroup$ I use and verify conservation of momentum at the end of the answer, starting with "Let's see the effect of the two components of p.". I've postponed it because my other answer is all about conservation of momentum (and it gets us to the exact same conclusion). $\endgroup$ – Sklivvz Dec 13 '10 at 15:37
$\begingroup$ @Sklivvz Neither of your answers conserve momentum. We talked about this extensively in chat, and there you said you thought the cart moved. What has changed? (The problem with this particular answer is that it begs the question.) $\endgroup$ – Mark Eichenlaub Dec 14 '10 at 13:52
Short version: movement inside the closed system cannot accelerate it. Zero horizontal speed at exit means zero speed at t->infinity.
More detailed version:
Let me transfer the problem to a simpler one:
We have an open wagon with me standing on one side of it holding a heavy box. Now I will start running towards the other side of the wagon. This will cause the wagon to move in the opposite direction.
At a certain point I will have to decelerate so that I stop at the other side of the wagon. This will create a force equal to that of accelerating, thus compensating any speed that developed during the acceleration.
The position of the wagon will be changed so that the center of mass will not have moved. The speed will be equal to starting speed.
Now I drop the box straight down. (I will use a bit of force to simulate the water pressure, but that is not important) Speed is zero, wagon moved box is down.
Now, let's say I have multiplied, have negligible weight, and the box is a molecule of water. The final speed will certainly be zero again. The question is what the displacement of the wagon will be. I have two answers and cannot choose either:
The centre of mass has to be kept the same (horizontally, gravitation can move it vertically down). This determines the final position of the wagon.
The final displacement is speed integrated over time. Now for each molecule that will start moving left, there will be one stopping at the nozzle. This would compensate the forces in real time keeping the speed at zero and so the displacement.
Please correct me if my analogy is wrong at some point and try to answer the question about final displacement.
Edit - more explanations
Assuming the wagon moves during the process, it's true that the water will have a momentum relative to the rail and it will travel at the same speed as the wagon. That means there will be no net force from this water coming down.
Imagine a very long tube, open on both sides, filled with water. If you put this tube vertically in a homogeneous gravitational field the water will flow (fall) out of it. If the tube moves at a constant speed the water will behave the same relative to it. The outside observer would see a tube moving to the side and a column of water moving down and to the side (at the same speed, so it would stay under the tube all the time). The same goes for the water from the nozzle: it will always have the same horizontal speed as the wagon at the point of leaving, thus having no effect whatsoever on its movement. This is true regardless of the speed of the wagon.
Having said this, the only forces affecting the whole water-wagon system are those caused by the internal movement of water. On this frictionless rail you can change the wagon's position from inside only at the cost of regrouping the stuff inside (changing the mass distribution through the system). Someone (let's say a lobster) walking on a wagon (of zero weight for simplification) on a frictionless rail cannot move relative to the rail. It is the same as if this lobster were trying to walk on frictionless ice: there would be no reactive force to move him. Looking at the lobster on the zero-weight wagon we would see a lobster walking, though not moving, and a wagon moving under him. As the only mass in this system is the lobster, the centre of mass would not move.
Returning to the water: after opening the nozzle the water starts moving to the left, and because there was no speed at t=0 there had to be some acceleration. Then the water is gradually moved towards the left end of the wagon, where it loses its horizontal speed and leaves the wagon at zero horizontal speed. While stopping, the deceleration will compensate any forces (and speed) created during the acceleration. Whether this is going on at zero or non-zero speed relative to the rail has no influence.
As we have no external force in the horizontal direction, the centre of mass has to stay unmoved (which requires the wagon to move). At the same time the zero momentum of the water-train system has to be preserved, so unless the water leaves the train with non-zero horizontal speed relative to the wagon, the wagon cannot end up with non-zero horizontal speed relative to the water expelled.
Lukas
$\begingroup$ If you are running with the box, and drop the box while running and the box falls through a hole in the floor, then the box has some net momentum. The cart will then have net momentum in the opposite direction, even after you stop running. (Your analysis is similar to most people's first thoughts, so I suggest reading through the other solutions.) $\endgroup$ – Mark Eichenlaub Dec 12 '10 at 23:29
$\begingroup$ I did read the other solutions. The problem in saying that I drop the box while running is that the water stops (horizontally) before leaving the wagon. $\endgroup$ – Lukas Dec 12 '10 at 23:48
$\begingroup$ @Mark Eichenlaub: I carefully read through your answer and I cannot agree with your qualitative analysis. With the nozzle pointing downwards (and this very important) the only force besides gravitation affecting the wagon is the reaction of the water running down (lifts left side of wagon) but this is compensated (save for a very strong flow caused by a pump) by the gravity. With no external forces affecting the wagon we only have the displacement of water within the wagon and that cannot accelerate it, let alone to the other direction. $\endgroup$ – Lukas Dec 13 '10 at 0:17
$\begingroup$ @Lukas Also consider this: You do believe the wagon moves so that the center of mass of the entire system stays put, right? Then the wagon must have nonzero speed at some time. At that time, the exiting water is falling straight down as viewed by the wagon, but is not falling straight down as viewed by the rail. $\endgroup$ – Mark Eichenlaub Dec 13 '10 at 1:03
$\begingroup$ @Mark: You are right that I did no computation so far. That is because mindless computing without having a good qualitative analysis is useless. But yes, I should probably approach this problem with more scientific methods. I have to admit that if the water is flowing down (wagon relative) from a wagon with non-zero speed (rail relative) the wagon has to go in the opposite direction so that the momentum is preserved. That is really surprising. $\endgroup$ – Lukas Dec 15 '10 at 11:57
Roles of microRNA-34a targeting SIRT1 in mesenchymal stem cells
Fengyun Zhang1,2,
Jinjin Cui1,2,
Xiaojing Liu3,
Bo Lv1,2,
Xinxin Liu1,2,
Zulong Xie1,2 &
Bo Yu1,2
Stem Cell Research & Therapy volume 6, Article number: 195 (2015)
A Correction to this article was published on 31 July 2020
Mesenchymal stem cell (MSC)-based therapies have had positive outcomes both in animal models of cardiovascular diseases and in clinical patients. However, the number and function of MSCs decline during hypoxia and serum deprivation (H/SD), reducing their ability to contribute to endogenous injury repair. MicroRNA-34a (miR-34a) was originally identified as a TP53-targeted miRNA that modulates cell functions, including apoptosis, proliferation, and senescence, via several signaling pathways, and is hence an appealing target for MSC-based therapy for myocardial infarction.
Bone marrow-derived MSCs were isolated from 60–80 g male donor rats. Expression levels of miR-34a were determined by qRT-PCR. The roles of miR-34a in regulating cell vitality, apoptosis and senescence were investigated using the cell counting kit (CCK-8) assay, flow cytometric analysis of Annexin V-FITC/PI staining and senescence-associated β-galactosidase (SA-β-gal) staining, respectively. The expression of silent information regulator 1 (SIRT1) and forkhead box class O 3a (FOXO3a) and of apoptosis- and senescence-associated proteins in MSCs were analyzed by western blotting.
The results of the current study showed that miR-34a was significantly up-regulated under H/SD conditions in MSCs, while overexpression of miR-34a was significantly associated with increased apoptosis, impaired cell vitality and aggravated senescence. Moreover, we found that the mechanism underlying the proapoptotic function of miR-34a involves activation of the SIRT1/FOXO3a pathway, mitochondrial dysfunction and finally, activation of the intrinsic apoptosis pathway. Further study showed that miR-34a can also aggravate MSC senescence, an effect which was partly abolished by the reactive oxygen species (ROS) scavenger, N-acetylcysteine (NAC).
Our study demonstrates for the first time that miR-34a plays pro-apoptotic and pro-senescence roles in MSCs by targeting SIRT1. Thus, inhibition of miR-34a might have important therapeutic implications in MSC-based therapy for myocardial infarction.
Ischemic heart disease (IHD) is the leading cause of death worldwide, and the resulting heart failure aggravates a country's health burden, particularly in developed countries [1]. Existing therapies are typically only able to slow, rather than reverse or prevent, the progression of heart failure. Furthermore, side effects remain the key issue among these effective therapeutics [2]. In the last few years, bone marrow-derived mesenchymal stem cells (MSCs) have been found to function as one of the most suitable candidate seed cells for repairing and regenerating cardiomyocytes as well as restoring heart function, and have been widely studied [3, 4]. Transplantation of MSCs leads to improved neovascularization of ischemic myocardium and inhibition of myocardial fibrosis, in addition to an increase in the secretion of prosurvival growth factors, including vascular endothelial growth factor, insulin-like growth factor, and hepatocyte growth factor [4, 5]. Despite these advantages, the poor survival rate of MSCs within the first few days after engrafting in infarcted hearts leads to only marginal functional improvement [6, 7]. The harsh microenvironment of the infarcted myocardium produces high levels of oxidative stress, which makes a great contribution to cellular senescence and causes a sharp decline in the proliferative capacity and regenerative potential of MSCs [8]. There is thus an urgent need to identify a strategy to protect the cells against the hostile microenvironment created by ischemia, hypoxia, the inflammatory response, and pro-apoptotic and pro-senescence factors in order to improve the efficacy of MSC transplantation therapy.
MicroRNAs (miRNAs) are endogenous ~22-nucleotide RNAs that have emerged as negative regulators of gene expression, acting by targeting mRNAs for cleavage or translational repression, which occurs primarily through base pairing to the 3′ untranslated regions (UTRs) of target mRNAs [9, 10]. With rapid advances in understanding of the regulation and roles of these small, noncoding RNAs in cardiac pathology, the therapeutic potential of regulation of miRNAs in cardiac disease settings is considered high [9, 11]. Among the known miRNAs, expression of miR-34a was found to be elevated in mouse hearts after myocardial infarction (MI) [12] and in cardiac tissue from patients with heart disease [13], while inhibition of the expression of miR-34a alleviated apoptosis and senescence in myocardial cells [14, 15] and other cell lines [16–18]. However, the precise role of miR-34a in MSCs has not been unraveled to date.
Silent information regulator 1 (SIRT1), one of the potential targets of miR-34a [19], is an NAD-dependent deacetylase that regulates apoptosis in response to oxidative and genotoxic stress and plays a critical role in regulating the cell cycle, senescence, and metabolism [19–21]. Initially identified as a longevity gene, SIRT1 has recently been implicated as a novel modulator of myocyte homeostasis, playing a key role in cardiomyopathy through the deacetylation of forkhead box O transcription factor 3a (FOXO3a) [20], which is also recognized as the transcription factor most closely linked to the antioxidative protective effects associated with longevity [22, 23]. A further study in endothelial progenitor cells (EPCs) showed that SIRT1 plays a pivotal protective role in regulating H2O2-induced EPC apoptosis, and that it exerts this effect by inhibiting FOXO3a via FOXO3a ubiquitination and subsequent degradation [24]. However, it remains unknown whether SIRT1 affects the biological activities of MSCs and, if so, what role FOXO3a plays in this process.
In the current study, we tested the hypothesis that overexpression of miR-34a increases cellular susceptibility to hypoxia and serum deprivation (H/SD)-induced apoptosis and aggravates cell senescence, and investigated the underlying mechanisms. The results showed that miR-34a played a crucial role in a plethora of biological processes via regulation of SIRT1/FOXO3a and the reactive oxygen species (ROS) pathway in MSCs. Inhibition of miR-34a might therefore be a promising therapeutic strategy for enhancing the biological functions of MSCs, thus demonstrating great therapeutic potential in clinical transplantation.
Male Sprague–Dawley rats weighing 60–80 g were obtained from the Laboratory Animal Science Department of the Second Affiliated Hospital of Harbin Medical University, Heilongjiang, P.R. China. All experimental animal procedures were approved by the Local Ethical Committee on Animal Care and Use of Harbin Medical University.
MSC culture
MSCs were cultured using the whole bone marrow adherent method, as described previously [25]. Briefly, total bone marrow was harvested from the femora of rats and was plated into 25 cm2 culture flasks at a concentration of 106 cells/ml in Iscove's modified Dulbecco's medium (IMDM; HyClone-Thermo Fisher Scientific, Waltham, MA, USA) supplemented with 10 % fetal bovine serum (FBS; Gibco, Grand Island, NY, USA) and 1 % penicillin/streptomycin (Beyotime Institute of Biotechnology, Nantong, China), at 37 °C with 5 % CO2. After 3 days of incubation, the medium was changed and then replaced every 3 days thereafter. Approximately 7–9 days after seeding, the cells became 70–80 % confluent. The adherent cells were released from the dishes using 0.25 % trypsin (Beyotime Institute of Biotechnology, Beijing, China) and expanded at a 1:2 or 1:3 dilution. MSCs at passage 3–5 were used in all experiments. MSCs were characterized by flow cytometric analysis for the expression of the typical markers CD90, CD29, and CD44 (all from BD Biosciences, Franklin Lakes, NJ, USA), and the absence of the hematopoietic markers CD45 (eBioscience, San Diego, CA, USA) and CD34 (Santa Cruz Biotechnology, Inc., Dallas, TX, USA), as reported previously [25].
Cell viability assay
The viability of MSCs was determined using the cell counting kit-8 (CCK-8) assay (Beyotime Institute of Biotechnology, Beijing, China) in accordance with the manufacturer's protocols. Cells were seeded into a 96-well plate (3000 cells per well), and their growth was measured following addition of 10 μl CCK-8 into the culture medium for 2 hours. The absorbance of each well was quantified at 450 nm (Tecan Infinite M200 microplate reader; LabX, Austria). All data were calculated from triplicate samples.
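For illustration, a minimal Python sketch of how relative viability is commonly derived from such CCK-8 absorbance readings is given below; the blank-subtraction step and all absorbance values are assumptions for the example, since the text does not state how the raw A450 data were processed.

# Minimal sketch of a CCK-8 viability calculation (assumed workflow, not the authors' script)
blank   = [0.105, 0.098, 0.101]   # hypothetical wells containing medium plus CCK-8 only
control = [1.210, 1.185, 1.232]   # hypothetical A450 readings of untreated MSCs (triplicate)
treated = [0.845, 0.812, 0.876]   # hypothetical A450 readings of treated MSCs (triplicate)

def mean(values):
    return sum(values) / len(values)

background = mean(blank)
viability = (mean(treated) - background) / (mean(control) - background) * 100.0
print(f"Relative viability: {viability:.1f} % of control")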
MSC H/SD treatment
Apoptosis was induced by H/SD in vitro, which was designed to mimic the in vivo conditions of ischemia in the myocardium and was carried out as reported previously [26]. Briefly, MSCs were washed and cultured with serum-free IMDM and incubated in a 5 % CO2/95 % N2 incubator (controlled atmosphere chamber; PLAS-Labs, Lansing, MI, USA) for 6 hours. MSCs incubated in a 5 % CO2/95 % O2 incubator were used as the normoxic control and cultured in complete medium.
Measurement of apoptosis
Apoptosis was determined by staining cells with Annexin V–fluorescein isothiocyanate (FITC) and counterstaining with propidium iodide (PI) using the Annexin V–FITC/PI apoptosis detection kit (BD PharMingen, San Diego, CA, USA). Briefly, 0.5 × 106 cells were washed twice with phosphate-buffered saline (PBS) and stained with 5 μl Annexin V–FITC and 5 μl PI in 1× binding buffer (BD PharMingen) for 15 minutes at room temperature in the dark. Analyses were performed using bivariate flow cytometry in a BD FACSCanto II equipped with BD FACSDiva software (Becton-Dickinson, San Jose, CA, USA).
Target gene prediction
To identify the potential targets of miR-34a that mediated its pro-apoptotic role in MSCs, bioinformatics algorithms including miRBase (University of Manchester, Manchester, UK), TargetScan (David Bartel Lab, Whitehead Institute for Biomedical Research, MA, USA), PicTar (Rajewsky lab, NY, USA and Max Delbruck Centrum, Berlin, DE), and miRanda (Computational Biology Center at MSKCC, NY, USA) were applied.
Before transfection, MSCs were replanted into six-well plates at a density of 2 × 105 cells per well and incubated overnight. For overexpression or inhibition of miR-34a, cells were transfected with different concentrations of miR-34a mimic or miR-34a inhibitor (both from Invitrogen, Carlsbad, CA, USA). For small interfering RNA (siRNA)-mediated gene knockdown, 100 nM SIRT1 siRNA (GenePharma Co., Ltd, Shanghai, China) was transfected into cells. As controls, cells were transfected with negative control (NC) mimic, NC inhibitor of miR-34a (both from Invitrogen,Carlsbad, CA, USA), or scrambled siRNA (siRNA-NT) of SIRT1 (GenePharma Co., Ltd, Shanghai, China). All miRNAs and siRNA were transfected into MSCs using a commercial transfection reagent (X-treme siRNA Transfection Reagent; Roche Applied Science, Penzberg, Germany) according to the manufacturer's protocol. Forty-eight or 72 hours after transfection, cells were harvested for further analysis.
RNA extraction and quantitative RT-PCR
For analysis of miR-34a expression, total RNA was extracted from the MSCs using TRIzol reagent (Invitrogen, Shanghai, China) and reverse-transcribed into cDNA according to the manufacturer's instructions. Quantitative RT-PCR (qRT-PCR) was performed to analyze the level of miR-34a with the miRcute miRNA First-Strand cDNA Synthesis Kit and the miRcute miRNA qPCR Detection Kit (SYBR Green; Tiangen, Beijing, China). All primers for miR-34a and U6 for the TaqMan miRNA assays were purchased from GenePharma. Relative gene expression levels were calculated by comparing the ΔCt values between control and experimental conditions for each PCR target using the following equation:
$$ \text{Relative gene expression} = 2^{-(\Delta Ct_{\text{sample}} - \Delta Ct_{\text{control}})} $$
For several other genes, total cellular RNA was isolated and reverse-transcribed using the Transcriptor First-Strand cDNA Synthesis Kit, according to the manufacturer's instructions. qRT-PCR was carried out using the FastStart universal SYBR master mix and a fluorescence quantitative PCR system [27]. The relative expression level of mRNAs was normalized to that of the internal control glyceraldehyde 3-phosphate dehydrogenase (GAPDH) using the 2^−ΔΔCt cycle threshold method. Table 1 presents all related gene sequences.
Table 1 Primers for quantitative RT-PCR and oligonucleotides
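As a worked illustration of the quantification described above, the short Python sketch below applies the 2^−(ΔCt sample − ΔCt control) formula for miR-34a (normalized to U6) and, equivalently, the 2^−ΔΔCt method; all Ct values are hypothetical and serve only to show the arithmetic.

# Hypothetical Ct values; U6 is the reference for miR-34a and GAPDH for mRNAs, as stated in the text
ct = {
    "miR34a_control": 24.1, "U6_control": 18.0,   # normoxic MSCs
    "miR34a_HSD": 22.3,     "U6_HSD": 18.1,       # H/SD-treated MSCs
}

# delta-Ct = Ct(target) - Ct(reference), computed separately for each condition
d_ct_control = ct["miR34a_control"] - ct["U6_control"]
d_ct_sample  = ct["miR34a_HSD"]     - ct["U6_HSD"]

# Relative gene expression = 2^-(dCt_sample - dCt_control), i.e. the 2^-ddCt method
fold_change = 2 ** (-(d_ct_sample - d_ct_control))
print(f"miR-34a fold change (H/SD vs. normoxia): {fold_change:.2f}")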
Measurement of mitochondrial membrane potential
Mitochondrial membrane potential (∆Ψm) was measured using the JC-1 mitochondrial membrane potential assay kit (Beyotime Institute of Biotechnology, Beijing, China). JC-1 is widely used to assess changes in ∆Ψm and the mitochondrial permeability transition. After the designated treatment, cells were incubated with JC-1 working dye for 20 minutes, washed twice with cold JC-1 staining buffer, and visualized under a fluorescence microscope (DMI4000B; Leica, Wetzlar, Germany).
ROS staining
Cells were left untreated or pretreated with NAC, miR-34a mimic, siRNA-SIRT1, and miR-34a inhibitor separately or in combination and then stimulated with the diluted fluoroprobe 2′,7′-dichlorodihydrofluorescein diacetate (DCFH-DA; Beyotime Institute of Biotechnology, Beijing, China) for 20 minutes at 37 °C with slight shaking every 5 minutes. After washing with serum-free culture medium, the cells were collected and examined by flow cytometry.
Senescence-associated β-galactosidase staining
MSC senescence was determined by in situ staining for senescence-associated β-galactosidase (SA-β-gal) using a senescence cell histochemical staining kit (Beyotime Institute of Biotechnology, Beijing, China). Briefly, MSCs after treatment were first fixed for 30 minutes at room temperature in fixation buffer. After washing with PBS, cells were incubated with β-galactosidase staining solution for 16 hours at 37 °C without CO2. The reaction was stopped by the addition of PBS. Statistical analysis was performed by counting 600 cells for each sample.
Protein extraction and western blot analysis
After designated treatment, cells were washed twice with ice-cold PBS, and the total protein concentration was analyzed using the bicinchoninic acid assay (BCA; Beyotime Institute of Biotechnology, Beijing, China) according to the manufacturer's instructions. Total cell extracts (50 μg total protein) were resolved by sodium dodecyl sulfate (SDS)–10 % polyacrylamide gel electrophoresis and transferred onto polyvinylidene difluoride (PVDF) membranes. Nonspecific binding was inhibited by incubating the membranes with 8 % skimmed milk in Tris-buffered saline (TBS) with 0.5 % Tween-20. Subsequently, membranes were incubated with antibodies against SIRT1, FOXO3a, cleaved-caspase 3 (Cl.CASP3), cleaved-polyADP-ribose polymerase 1 (Cl.PARP1), cytochrome c, P53 (all from Cell Signaling Technology, Danvers, MA, USA), p16, γ-H2A.X (both from Abcam, Cambridge, MA, USA), p21 (Santa Cruz, CA, USA), and β-actin (Zhongshan Golden Bridge Biotechnology, Beijing, China) overnight at 4 °C at an appropriate dilution (1:1000). The membranes were washed with TBS with Tween-20 (TBS-T) and then incubated with peroxidase-conjugated Affinipure goat anti-rabbit IgG (H + L) and anti-mouse IgG (H + L)-labeled secondary antibodies (Zhongshan Golden Bridge Biotechnology, Beijing, China) diluted at 1:5000 for 1 hour at 37 °C. Specific complexes were visualized on an X-ray film using Electro-Chemi-Luminescence (ECL) detection with BeyoECL Plus (Beyotime Institute of Biotechnology, Beijing, China) following the manufacturer's protocol. All data were obtained in triplicate, independent experiments.
All data were analyzed using SPSS 19.0 (SPSS Inc., Chicago, IL, USA) and were expressed as mean ± standard deviation (SD). Comparisons between two groups were performed using Student's t test, while the significance of differences between three or more experimental groups was determined by one-way analysis of variance. P <0.05 was considered statistically significant.
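To illustrate the statistical workflow, the following sketch reproduces the two types of comparison described above with SciPy in place of SPSS; the group values are hypothetical triplicates and not data from the study.

from scipy import stats

# Hypothetical triplicate measurements (e.g. percentage of apoptotic cells per group)
nc_mimic  = [8.2, 7.9, 8.5]
mir34a    = [15.1, 14.6, 15.8]
inhibitor = [5.4, 5.9, 5.1]

# Two groups: Student's t test
t_stat, p_t = stats.ttest_ind(nc_mimic, mir34a)
print(f"t test, NC mimic vs. miR-34a mimic: P = {p_t:.4f}")

# Three or more groups: one-way analysis of variance
f_stat, p_anova = stats.f_oneway(nc_mimic, mir34a, inhibitor)
print(f"one-way ANOVA: P = {p_anova:.4f}")

# P < 0.05 is considered statistically significant, as in the text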
miR-34a expression increases under H/SD, and correlates with decreased cell survival and increased apoptosis
qRT-PCR results showed that miR-34a was expressed in normal MSCs and that its expression increased significantly when the cells were exposed to H/SD for 6 hours (Fig. 1a). We then used the CCK-8 assay to evaluate the role of miR-34a in MSC survival, and found that overexpression of miR-34a reduced cell survival, while inhibition of miR-34a expression had the opposite effect (Fig. 1b, c). Apoptosis has been identified as a major mechanism reducing the survival rate of MSCs transplanted into the harsh microenvironment of infarcted myocardium [26, 28]. To further determine the role of miR-34a in MSCs under H/SD, Annexin V–FITC/PI staining was performed. The results showed that miR-34a mimic-treated MSCs were significantly more apoptotic than the NC mimic group under both normal and H/SD conditions (Fig. 1d, e). However, when miR-34a was inhibited, MSCs showed better resistance to H/SD than the NC inhibitor group (Fig. 1d, e). These findings support our hypothesis that decreased cell survival and increased apoptosis in MSCs are associated with overexpression of miR-34a.
miR-34a expression increases under H/SD, and correlates with decreased cell survival and increased apoptosis. a Rat MSCs were cultured under normal conditions or exposed to H/SD for 6 hours and were transiently transfected with miR-34a mimic, NC mimic, miR-34a inhibitor, or NC inhibitor for 48 hours, respectively. The expression of miR-34a was determined by qRT-PCR. *P <0.05. b, c Effects of miR-34a on the vitality of MSCs were examined by CCK-8 assay. *P <0.05 vs. NC mimic (b), *P <0.05 vs. NC inhibitor (c). d, e Flow cytometric analysis of apoptotic cells under normal and H/SD conditions in cultures treated with miR-34a mimic, NC mimic, miR-34a inhibitor, or NC inhibitor (MSCs were transfected for 48 hours and then exposed to H/SD for 6 hours). Each column represents mean ± SD from three independent experiments. *P <0.05. H/SD hypoxia and serum deprivation, miRNA microRNA, NC negative control, PI propidium iodide
SIRT1 is a direct target of posttranscriptional repression by miR-34a
Bioinformatics results suggested that SIRT1, identified as an apoptosis-associated gene, was the potential target of miR-34a. In addition, miRBase showed that the binding sites of miR-34a are evolutionarily conserved in both human and mouse (Fig. 2a). To test the hypothesis that miR-34a regulates SIRT1 expression in MSCs from rats, we transfected MSCs with a miR-34a mimic or miR-34a inhibitor. As a control, some cells were transfected with NC mimic or NC inhibitor. Western blot analysis demonstrated a dose-dependent decrease in SIRT1 protein expression in miR-34a mimic-transfected cells compared with the NC mimic group (Fig. 2c). Notably, the inhibition of miR-34a in MSCs was concurrent with the increased expression of SIRT1 (Fig. 2d). However, qRT-PCR showed little difference in the expression levels of SIRT1 mRNA among the treatment groups. These data suggest that SIRT1 is likely to be targeted by miR-34a posttranscriptionally.
SIRT1 is a direct target of posttranscriptional repression by miR-34a. a The predicted miR-34a target sites in the SIRT1 3′-UTR (Hsa, human; Mmu, mouse) are highlighted in red. b qRT-PCR analysis of SIRT1 mRNA expression in MSCs after transfection with miR-34a mimic, NC mimic, miR-34a inhibitor, or NC inhibitor for 48 hours, respectively. c, d Western blot analysis showed dose-dependent regulation of SIRT1 by miR-34a after transfection with miR-34a mimic, NC mimic, miR-34a inhibitor, or NC inhibitor for 72 hours, respectively. Each column represents mean ± SD from three independent experiments. *P <0.05 vs. control, △P <0.05 vs. transfection with 10 nM miR-34a mimic. miRNA microRNA, NC negative control, ORF open reading frame, SIRT1 silent information regulator 1, UTR untranslated region
miR-34a induces apoptosis by modifying SIRT1 and FOXO3a expression
After identifying SIRT1 as a direct target of miR-34a, we investigated whether knockdown of SIRT1 by siRNA (siRNA-SIRT1) induces apoptosis in MSCs. Similar to miR-34a mimic treatment, suppression of SIRT1 expression promoted apoptosis, as revealed by flow cytometric analysis of the percentage of cells that were Annexin V+/PI– (Fig. 3a, b). CASP3 is a well-studied mediator of apoptosis, because it is either partially or totally responsible for the cleavage of many key proteins, such as PARP1 [18]. In this study, increased levels of cleaved CASP3 and cleaved PARP1 were observed when SIRT1 was knocked down or miR-34a was overexpressed (Fig. 3b, d). These findings suggest that knockdown of SIRT1 and treatment with miR-34a mimic act similarly in the regulation of apoptosis.
miR-34a induces apoptosis by modifying SIRT1 and FOXO3a expression. a, b Apoptosis was analyzed by measuring Annexin V+/PI– cells using flow cytometry in cultures of MSCs treated with siRNA-SIRT1, siRNA-NT, or siRNA-SIRT1 cotransfected with miR-34a inhibitor, under normal and H/SD conditions (MSCs were transfected for 72 hours and then exposed to H/SD for 6 hours). *P <0.05 vs. normal siRNA-NT, △P < 0.05 vs. H/SD siRNA-NT. c, d MSCs were transfected with miR-34a mimic, NC mimic, siRNA-SIRT1, or siRNA-NT for 72 hours, respectively, and then CASP3 and PARP1 activity was measured using western blot. *P <0.05 vs. NC mimic, △P <0.05 vs. siRNA-NT. e, f Western blot analysis of SIRT1, FOXO3a, Bim, CASP3, and PARP1 protein expression in cultures of MSCs treated with siRNA-NT, siRNA-SIRT1, miR-34a inhibitor, or siRNA-SIRT1 cotransfected with miR-34a inhibitor, under normal and H/SD conditions (MSCs were transfected for 72 hours and then exposed to H/SD for 6 hours). β-actin was used as the internal control. Each column represents mean ± SD from three independent experiments. *P <0.05 vs. normal scramble, △P <0.05 vs. H/SD scramble. CASP3 caspase 3, FOXO3a forkhead box O transcription factor 3a, H/SD hypoxia and serum deprivation, miRNA microRNA, NC negative control, PARP1 polyADP-ribose polymerase 1, PI propidium iodide, SIRT1 silent information regulator 1, siRNA small interfering RNA, siRNA-NT scrambled siRNA
To further elucidate the relationship between miR-34a and SIRT1, MSCs were transfected with miR-34a inhibitor, siRNA-SIRT1, or both, before exposure to H/SD. The results showed that the miR-34a inhibitor reduced CASP3 activity and the expression of cleaved PARP1 (Fig. 3e, f), while siRNA-SIRT1 partly abolished the effects of the miR-34a inhibitor, as verified both by flow cytometric analysis of the percentage of Annexin V+/PI– cells (Fig. 3a, b) and by western blot analysis of cleaved CASP3 and cleaved PARP1 (Fig. 3e, f).
SIRT1 plays important roles in many pathophysiological processes by deacetylating various substrates, including FOXO3, which has been reported to promote apoptosis by regulating its downstream target, the well-known pro-apoptotic protein Bim [24]. Our work revealed that downregulation of miR-34a decreased total FOXO3a and Bim protein expression, whereas SIRT1 knockdown increased the expression of these two proteins (Fig. 3e, f). However, neither miR-34a inhibitor nor siRNA-SIRT1 altered the mRNA level of FOXO3a (data not shown), indicating that miR-34a and SIRT1 could regulate FOXO3a posttranscriptional activity under H/SD conditions. Our study revealed that the activation of SIRT1/FOXO3a and the intrinsic apoptosis pathway of CASP3–PARP1 might be involved in the pro-apoptotic function of miR-34a.
miR-34a exerts pro-apoptotic effects via activation of the mitochondrial apoptosis pathway
∆Ψm is a principal parameter of mitochondrial function used as an indicator of cell health [29]. Therefore, to understand the intrinsic apoptotic pathway activated by miR-34a, we performed JC-1 staining. In contrast to scramble-transfected cells, cells treated with the miR-34a inhibitor displayed significant changes in ΔΨm (Fig. 4a). To further ascertain the effect of miR-34a, cellular fractionation was performed and cytosolic and mitochondrial lysates were subjected to western blotting to detect cytochrome c, which is released from mitochondria and functions as a key mediator of apoptosis [30]. Western blot analysis revealed an inhibition of cytochrome c release in the miR-34a inhibitor group, while siRNA-SIRT1 reversed this effect (Fig. 4b, c). Taken together, these data support the hypothesis that miR-34a may be involved in H/SD-induced apoptosis of MSCs through activation of the mitochondrial apoptosis pathway by targeting SIRT1.
miR-34a exerts pro-apoptotic effects via activation of the mitochondrial apoptosis pathway. MSCs were transfected with siRNA-NT, siRNA-SIRT1, miR-34a inhibitor, or siRNA-SIRT1 cotransfected with miR-34a inhibitor under normal and H/SD conditions (MSCs were transfected for 72 hours and then exposed to H/SD for 6 hours). ∆Ψm (a) was then analyzed by measuring JC-1 fluorescence, and cytosolic and mitochondrial cytochrome c expression (b, c) was measured by western blot. Each column represents mean ± SD from three independent experiments. *P <0.05 vs. H/SD scramble, △P <0.05 vs. miR-34a inhibitor, ▲P <0.05 vs. siRNA-SIRT1. H/SD hypoxia and serum deprivation, miRNA microRNA, SIRT1 silent information regulator 1, siRNA small interfering RNA
Overexpression of miR-34a induces senescence in MSCs
Considering that the regenerative capacity of MSCs contributes greatly to their function, we further examined cellular senescence in miR-34a mimic-transfected MSCs. SA-β-gal activity, a characteristic feature of senescence-related growth arrest [31], was assayed. The results revealed that overexpression of miR-34a significantly increased the percentage of SA-β-gal-positive cells compared with the scramble control (Fig. 5a, b). SIRT1 inhibition has been reported to be associated with premature senescence and impaired proliferative activity in EPCs [17]. Consistently, the percentage of SA-β-gal-positive senescent cells was markedly increased following SIRT1 knockdown (Fig. 5a, b).
Overexpression of miR-34a induces senescence in MSCs. Cells were left untreated or pretreated with miR-34a mimic, siRNA-SIRT1, the ROS scavenger NAC (10 mM), and miR-34a inhibitor, separately or in combination, for 72 hours, and then cellular senescence was analyzed by SA-β-gal staining (a, b). Cellular ROS production was assessed by measuring the fluorescence intensity of DCFH-DA using flow cytometry (c, d). Cellular DNA damage and senescence-related proteins, including γ-H2A.X, p53, p21, and p16, were determined by western blot (e, f). Each column represents mean ± SD from three independent experiments. *P <0.05 vs. scramble, △P <0.05 vs. miR-34a mimic, ▲P <0.05 vs. siRNA-SIRT1. DCFH 2′,7′-dichlorodihydrofluorescein, MFI mean fluorescence intensity, miRNA microRNA, NAC N-acetylcysteine, SA-β-gal senescence-associated β-galactosidase, SIRT1 silent information regulator 1, siRNA small interfering RNA
ROS have been reported to induce oxidative stress that causes DNA and cell damage and can induce cell senescence through the p53/p21 pathway [32]. As expected, the miR-34a mimic increased ROS production, which was alleviated by addition of the ROS scavenger N-acetylcysteine (NAC) (Fig. 5c, d). γ-H2A.X, a sensitive marker for the formation of DNA damage foci, was examined by western blotting together with the senescence-related proteins p53, p21, and p16. In the miR-34a mimic and siRNA-SIRT1 group, the expression of p16, p53, and p21 was obviously increased compared with that in the control group (Fig. 5e, f). However, when ROS was removed by NAC or when miR-34a was inhibited by a miR-34a inhibitor, the expression of γ-H2A.X, p16, p53, and p21 was significantly reduced (Fig. 5e, f).
Our results show that miR-34a is significantly upregulated in MSCs under H/SD conditions, and that overexpression of miR-34a is strongly associated with increased apoptosis, lower viability, and increased senescence. SIRT1, identified as a direct and functional target of miR-34a, protects MSCs from H/SD-induced apoptosis through its downstream effector FOXO3a. Further experiments indicated that the mitochondrial permeability transition and the intrinsic apoptosis pathway of the CASP3–PARP1 axis are involved in this process. Moreover, miR-34a was also found to aggravate the senescence of MSCs in a ROS-dependent manner. This study therefore identifies miR-34a as a promising target for optimizing MSC-based therapy in MI.
Since MSCs are easily obtained and exhibit impressive paracrine ability as well as multilineage differentiation potential [33], autologous MSCs offer a great advantage when transplanted into ischemic or infarcted heart to regenerate and repopulate the injured myocardium and restore heart function. However, repair of cardiomyocytes and restoration of heart function are limited by poor survival [34], increased senescence [35], and loss of immunoprivilege in long-term preclinical studies of engrafted MSCs in the infarcted area [36]. Researchers have attempted numerous approaches to overcome these limitations, and have made some improvements in restoring cardiac function [37, 38]. Despite these successes, strategies are still needed to make transplanting MSCs into the infarcted area easier and more effective.
Recent observations have revealed that miRNAs are involved in apoptosis, proliferation, senescence, autophagy, and differentiation, among other processes, exhibiting powerful and sometimes unexpected roles in modulating cell biological functions by upregulating or downregulating these processes [18, 39]. Among the known miRNAs, miR-34a has been demonstrated to be involved in apoptosis [40] and senescence [17] and to inhibit various key regulators of cell cycle progression [41]. miR-34a belongs to the evolutionarily conserved miR-34 family and was originally identified as a TP53-targeted miRNA [40]. miR-34a is expressed in almost every tissue but is scarcely expressed in lung tissue [42]. Although frequently downregulated and functioning as an independent prognostic indicator in multiple types of cancer [18, 43], miR-34a expression levels were significantly upregulated in animal models of acute MI and in aged hearts [15] and were strongly correlated with left ventricular end-diastolic dimension 1 year after acute MI [44]. Furthermore, overexpression of miR-34a has been shown to promote the apoptosis of myocardial cells during MI [45], to aggravate senescence and impair the angiogenic ability of EPCs [17] and endothelial cells [46], and to induce senescence and inflammation in vascular smooth muscle cells [47], leading to myocardial and vascular dysfunction. Delivery of the antagomir Ant-34a or LNA-based anti-miRNAs, however, enhanced cardiac contractile recovery after acute MI, associated with reduced fibrosis, increased capillary density, and improved myocardial cell function [14, 15]. We therefore presumed that miR-34a may play a crucial role in the MI microenvironment and contribute to the poor survival rate of MSCs in the infarcted area. In the present study, we showed that miR-34a was expressed in normal MSCs and was greatly elevated during H/SD, which was designed to mimic the in vivo conditions of ischemia and hypoxia. Overexpression of miR-34a aggravated MSC apoptosis, while inhibition of miR-34a expression conferred resistance to H/SD-induced apoptosis. These data suggest that miR-34a not only regulates resident myocardial cell apoptosis but also plays an important role in the survival of engrafted MSCs in the infarcted area.
SIRT1, one of the potential targets of miR-34a, has been identified as an apoptosis inhibitor and has been found to act as a longevity gene in many studies reported previously [48]. Recently, SIRT1 was reported to inhibit the apoptosis of vascular adventitial fibroblasts (VAFs) [49]. Consistent with this report, our results showed that when SIRT1 was knocked down by siRNA-SIRT1, the apoptosis of MSCs induced by H/SD increased. However, we found a moderate increase in the expression of SIRT1 protein level in MSCs during H/SD (Fig. 3e), which was also found in EPCs exposed to H2O2 [24]. As is well known, SIRT1 is an NAD-dependent deacetylase, and the balance between NAD+/NADH is crucial for cellular survival. A recent study demonstrated that during hypoxic and ischemic insults the concentration of NAD+ increases at the very beginning, leading to activation of the SIRT1-dependent cleavage of acetyl groups [50]. This compensation effect can partly explain our result of the increased SIRT1 expression during H/SD.
In response to oxidative stress, SIRT1 forms a complex with FOXO3 and thereby enhances cellular stress resistance [51]. FOXO3a is a member of the mammalian FOXO family of forkhead transcription factors, which are critical regulators of stress responses, oncogenesis, and longevity and directly regulate genes involved in apoptosis, cell cycle progression, and stress responses [52]. FOXO activity can be regulated by phosphorylation and ubiquitination, as well as by deacetylation, to inhibit apoptosis [25, 51, 53]. SIRT1 has been shown to bind to and deacetylate FOXO3a and thereby suppress its transcriptional activity, which also exerts favorable effects on oxidative stress resistance in cardiac myocytes [51]. In the current study, we showed that in MSCs FOXO3a was downregulated in response to a miR-34a inhibitor, and that this effect could be abolished by silencing SIRT1 expression, suggesting that the restorative function of the miR-34a inhibitor in MSCs is mediated through the SIRT1–FOXO3a signaling pathway. Targeting the SIRT1–FOXO3a signaling pathway may therefore be of value in clinical heart disease.
We further examined the role of ΔΨm and the intrinsic apoptosis pathway of the CASP3–PARP1 axis in the pro-apoptotic activity of miR-34a in MSCs. Considerable evidence indicates that mitochondrial dysfunction, reflected by a change in ΔΨm, is one of the hallmarks of cell death [28]. A decrease in ΔΨm activates effector CASP3 through a series of reactions and subsequently induces apoptosis. As expected, we found that inhibition of miR-34a increased the ΔΨm of MSCs during H/SD and decreased cleaved CASP3 expression.
Like other cells, MSCs enter senescence when exposed to oxidative stress, which greatly reduces their regenerative capacity and limits their transplantation efficiency. As an inevitable by-product of mitochondrial respiration, ROS in moderate amounts are necessary for cell survival, proliferation, and longevity [54]. During hypoxia, however, an imbalance between the formation and scavenging of free radicals leads to an excess of electrons, which react with remnant molecular oxygen to generate ROS [55]. Abundant ROS can drive cells into senescence by inducing DNA damage [54]. In this study, we explored whether ROS was the main mediator of MSC senescence induced by miR-34a overexpression. The results showed that overexpression of miR-34a increased the proportion of β-galactosidase-positive cells as well as ROS production. When ROS production was reduced by NAC in MSCs, DNA damage was attenuated and the expression of p16, p53, and p21 was reduced. These results imply that ROS play an important role in MSC senescence induced by miR-34a overexpression and thus may partly explain the poor survival rate of engrafted MSCs in the infarcted area.
Anti-miRNA chemistries are developing rapidly, even ahead of miRNA mimicry [56], and miR-34a knockdown by antagomirs or LNA-based anti-miRNAs has been shown to protect against the deterioration of cardiac systolic function in mice after acute MI [15]. Moreover, no side effects were reported. Intracoronary infusion or intramyocardial delivery of MSCs modified with such emerging therapeutics to inhibit miR-34a expression might therefore be of great advantage in applications for vascular diseases.
In conclusion, our study reveals that miR-34a is greatly elevated in MSCs during H/SD, and that overexpression of miR-34a leads to robust apoptosis, while inhibition of miR-34a significantly increases the resistance of MSCs to H/SD. This apoptosis is tightly regulated by SIRT1/FOXO3a pathway activation, mitochondrial dysfunction, and, finally, activation of the intrinsic apoptosis pathway of the CASP3–PARP1 axis. Moreover, our results show that miR-34a overexpression increases cellular senescence, which may be regulated by ROS production. However, more work is needed to determine the roles of SIRT1 and FOXO3a in miR-34a-mediated functions in myocardial ischemia in vivo and to investigate whether other signaling pathways, such as NOTCH signaling, which has been reported to be regulated by miR-34a [57], are involved in the H/SD response of MSCs.
Our data demonstrate that miR-34a is involved in the process of H/SD in MSCs, while inhibition of miR-34a leads to an increase in SIRT1 and a decrease in FOXO3a protein expression, fewer apoptotic cells, and better viability. Moreover, we found that overexpression of miR-34a induced senescence of MSCs, which may partly be abolished by the ROS scavenger NAC. Inhibition of miR-34a in MSCs would thus be beneficial and could demonstrate great therapeutic potential in clinical transplantation for vascular disorders.
∆Ψm: Mitochondrial membrane potential
BCA: Bicinchoninic acid assay
CASP3: Caspase 3
CCK-8: Cell counting kit-8
DCFH-DA: 2′,7′-Dichlorodihydrofluorescein diacetate
EPC: Endothelial progenitor cell
FBS: Fetal bovine serum
FITC: Fluorescein isothiocyanate
FOXO3a: Forkhead box O transcription factor 3a
GAPDH: Glyceraldehyde 3-phosphate dehydrogenase
H/SD: Hypoxia and serum deprivation
IHD: Ischemic heart disease
IMDM: Iscove's modified Dulbecco's medium
miRNA: MicroRNA
MSC: Mesenchymal stem cell
NAC: N-acetylcysteine
NC: Negative control
PARP1: PolyADP-ribose polymerase 1
PI: Propidium iodide
PVDF: Polyvinylidene difluoride
qRT-PCR: Quantitative RT-PCR
ROS: Reactive oxygen species
SA-β-gal: Senescence-associated β-galactosidase
SD: Standard deviation
SDS: Sodium dodecyl sulfate
siRNA: Small interfering RNA
siRNA-NT: Scrambled siRNA
SIRT1: Silent information regulator 1
TBS: Tris-buffered saline
TBS-T: TBS with Tween-20
UTR: Untranslated region
VAF: Vascular adventitial fibroblast
Moran AE, Forouzanfar MH, Roth GA, Mensah GA, Ezzati M, Murray CJ, et al. Temporal trends in ischemic heart disease mortality in 21 world regions, 1980 to 2010: the Global Burden of Disease 2010 study. Circulation. 2014;129:1483–92.
McMurray JJ. Clinical practice. Systolic heart failure. N Engl J Med. 2010;362:228–38.
Quevedo HC, Hatzistergos KE, Oskouei BN, Feigenbaum GS, Rodriguez JE, Valdes D, et al. Allogeneic mesenchymal stem cells restore cardiac function in chronic ischemic cardiomyopathy via trilineage differentiating capacity. Proc Natl Acad Sci U S A. 2009;106:14022–7.
Karantalis V, DiFede DL, Gerstenblith G, Pham S, Symes J, Zambrano JP, et al. Autologous mesenchymal stem cells produce concordant improvements in regional function, tissue perfusion, and fibrotic burden when administered to patients undergoing coronary artery bypass grafting: the Prospective Randomized Study of Mesenchymal Stem Cell Therapy in Patients Undergoing Cardiac Surgery (PROMETHEUS) trial. Circ Res. 2014;114:1302–10.
Leistner DM, Fischer-Rasokat U, Honold J, Seeger FH, Schachinger V, Lehmann R, et al. Transplantation of progenitor cells and regeneration enhancement in acute myocardial infarction (TOPCARE-AMI): final 5-year results suggest long-term safety and efficacy. Clin Res Cardiol. 2011;100:925–34.
Toma C, Pittenger MF, Cahill KS, Byrne BJ, Kessler PD. Human mesenchymal stem cells differentiate to a cardiomyocyte phenotype in the adult murine heart. Circulation. 2002;105:93–8.
Pagani FD, DerSimonian H, Zawadzka A, Wetzel K, Edge AS, Jacoby DB, et al. Autologous skeletal myoblasts transplanted to ischemia-damaged myocardium in humans. Histological analysis of cell survival and differentiation. J Am Coll Cardiol. 2003;41:879–88.
Burova E, Borodkina A, Shatrova A, Nikolsky N. Sublethal oxidative stress induces the premature senescence of human mesenchymal stem cells derived from endometrium. Oxidative Med Cell Longev. 2013;2013:474931.
van Rooij E, Purcell AL, Levin AA. Developing microRNA therapeutics. Circ Res. 2012;110:496–507.
Bernardo BC, Charchar FJ, Lin RC, McMullen JR. A microRNA guide for clinicians and basic scientists: background and experimental techniques. Heart Lung Circ. 2012;21:131–42.
Small EM, Olson EN. Pervasive roles of microRNAs in cardiovascular biology. Nature. 2011;469:336–42.
Lin RC, Weeks KL, Gao XM, Williams RB, Bernardo BC, Kiriazis H, et al. PI3K(p110 alpha) protects against myocardial infarction-induced heart failure: identification of PI3K-regulated miRNA and mRNA. Arterioscler Thromb Vasc Biol. 2010;30:724–32.
Greco S, Fasanaro P, Castelvecchio S, D'Alessandra Y, Arcelli D, Di Donato M, et al. MicroRNA dysregulation in diabetic ischemic heart failure patients. Diabetes. 2012;61:1633–41.
Bernardo BC, Gao XM, Winbanks CE, Boey EJ, Tham YK, Kiriazis H, et al. Therapeutic inhibition of the miR-34 family attenuates pathological cardiac remodeling and improves heart function. Proc Natl Acad Sci U S A. 2012;109:17615–20.
Boon RA, Iekushi K, Lechner S, Seeger T, Fischer A, Heydt S, et al. MicroRNA-34a regulates cardiac ageing and function. Nature. 2013;495:107–10.
Krzeszinski JY, Wei W, Huynh H, Jin Z, Wang X, Chang TC, et al. miR-34a blocks osteoporosis and bone metastasis by inhibiting osteoclastogenesis and Tgif2. Nature. 2014;512:431–5.
Zhao T, Li J, Chen AF. MicroRNA-34a induces endothelial progenitor cell senescence and impedes its angiogenesis via suppressing silent information regulator 1. Am J Physiol Endocrinol Metab. 2010;299:E110–6.
Liu K, Huang J, Xie M, Yu Y, Zhu S, Kang R, et al. MIR34A regulates autophagy and apoptosis by targeting HMGB1 in the retinoblastoma cell. Autophagy. 2014;10:442–52.
Yamakuchi M, Ferlito M, Lowenstein CJ. miR-34a repression of SIRT1 regulates apoptosis. Proc Natl Acad Sci U S A. 2008;105:13421–6.
Lai L, Yan L, Gao S, Hu CL, Ge H, Davidow A, et al. Type 5 adenylyl cyclase increases oxidative stress by transcriptional regulation of manganese superoxide dismutase via the SIRT1/FoxO3a pathway. Circulation. 2013;127:1692–701.
Mouchiroud L, Houtkooper RH, Moullan N, Katsyuba E, Ryu D, Canto C, et al. The NAD(+)/sirtuin pathway modulates longevity through activation of mitochondrial UPR and FOXO signaling. Cell. 2013;154:430–41.
Li M, Chiu JF, Mossman BT, Fukagawa NK. Down-regulation of manganese-superoxide dismutase through phosphorylation of FOXO3a by Akt in explanted vascular smooth muscle cells from old rats. J Biol Chem. 2006;281:40429–39.
Warr MR, Binnewies M, Flach J, Reynaud D, Garg T, Malhotra R, et al. FOXO3A directs a protective autophagy program in haematopoietic stem cells. Nature. 2013;494:323–7.
Wang YQ, Cao Q, Wang F, Huang LY, Sang TT, Liu F, et al. SIRT1 protects against oxidative stress-induced endothelial progenitor cells apoptosis by inhibiting FOXO3a via FOXO3a ubiquitination and degradation. J Cell Physiol. 2015;230:2098–107.
Xia W, Zhang F, Xie C, Jiang M, Hou M. Macrophage migration inhibitory factor confers resistance to senescence through CD74-dependent AMPK-FOXO3a signaling in mesenchymal stem cells. Stem Cell Res Ther. 2015;6:82.
Zhu W, Chen J, Cong X, Hu S, Chen X. Hypoxia and serum deprivation-induced apoptosis in mesenchymal stem cells. Stem Cells. 2006;24:416–25.
Wang XQ, Shao Y, Ma CY, Chen W, Sun L, Liu W, et al. Decreased SIRT3 in aged human mesenchymal stromal/stem cells increases cellular susceptibility to oxidative stress. J Cell Mol Med. 2014;18:2298–310.
Deuse T, Peter C, Fedak PW, Doyle T, Reichenspurner H, Zimmermann WH, et al. Hepatocyte growth factor or vascular endothelial growth factor gene transfer maximizes mesenchymal stem cell-based myocardial salvage after acute myocardial infarction. Circulation. 2009;120:S247–54.
Duchen MR. Roles of mitochondria in health and disease. Diabetes. 2004;53:S96–102.
Garrido C, Galluzzi L, Brunet M, Puig PE, Didelot C, Kroemer G. Mechanisms of cytochrome c release from mitochondria. Cell Death Differ. 2006;13:1423–33.
Dimri GP, Lee X, Basile G, Acosta M, Scott G, Roskelley C, et al. A biomarker that identifies senescent human cells in culture and in aging skin in vivo. Proc Natl Acad Sci U S A. 1995;92:9363–7.
Wang H, Zhou W, Zheng Z, Zhang P, Tu B, He Q, et al. The HDAC inhibitor depsipeptide transactivates the p53/p21 pathway by inducing DNA damage. DNA Repair. 2012;11:146–56.
Frenette PS, Pinho S, Lucas D, Scheiermann C. Mesenchymal stem cell: keystone of the hematopoietic stem cell niche and a stepping-stone for regenerative medicine. Annu Rev Immunol. 2013;31:285–316.
Clifford DM, Fisher SA, Brunskill SJ, Doree C, Mathur A, Watt S, et al. Stem cell treatment for acute myocardial infarction. Cochrane Database Syst Rev. 2012;2:CD006536.
Ni NC, Li RK, Weisel RD. The promise and challenges of cardiac stem cell therapy. Semin Thorac Cardiovasc Surg. 2014;26:44–52.
Huang XP, Sun Z, Miyagi Y, McDonald Kinkaid H, Zhang L, Weisel RD, et al. Differentiation of allogeneic mesenchymal stem cells induces immunogenicity and limits their long-term benefits for myocardial repair. Circulation. 2010;122:2419–29.
Shahzad U, Li G, Zhang Y, Li RK, Rao V, Yau TM. Transmyocardial revascularization enhances bone marrow stem cell engraftment in infarcted hearts through SCF-C-kit and SDF-1-CXCR4 signaling axes. Stem Cell Rev. 2015;11:332–46.
Dhingra S, Li P, Huang XP, Guo J, Wu J, Mihic A, et al. Preserving prostaglandin E2 level prevents rejection of implanted allogeneic mesenchymal stem cells and restores postinfarction ventricular function. Circulation. 2013;128:S69–78.
Jheon AH, Li CY, Wen T, Michon F, Klein OD. Expression of microRNAs in the stem cell niche of the adult mouse incisor. PLoS One. 2011;6, e24536.
He L, He X, Lim LP, de Stanchina E, Xuan Z, Liang Y, et al. A microRNA component of the p53 tumour suppressor network. Nature. 2007;447:1130–4.
Tarasov V, Jung P, Verdoodt B, Lodygin D, Epanchintsev A, Menssen A, et al. Differential regulation of microRNAs by p53 revealed by massively parallel sequencing: miR-34a is a p53 target that induces apoptosis and G1-arrest. Cell Cycle. 2007;6:1586–93.
Bommer GT, Gerin I, Feng Y, Kaczorowski AJ, Kuick R, Love RE, et al. p53-mediated activation of miRNA34 candidate tumor-suppressor genes. Curr Biol. 2007;17:1298–307.
Asslaber D, Pinon JD, Seyfried I, Desch P, Stocher M, Tinhofer I, et al. microRNA-34a expression correlates with MDM2 SNP309 polymorphism and treatment-free survival in chronic lymphocytic leukemia. Blood. 2010;115:4191–7.
Matsumoto S, Sakata Y, Suna S, Nakatani D, Usami M, Hara M, et al. Circulating p53-responsive microRNAs are predictive indicators of heart failure after acute myocardial infarction. Circ Res. 2013;113:322–6.
Fan F, Sun A, Zhao H, Liu X, Zhang W, Jin X, et al. MicroRNA-34a promotes cardiomyocyte apoptosis post myocardial infarction through down-regulating aldehyde dehydrogenase 2. Curr Pharm Des. 2013;19:4865–73.
Ito T, Yagi S, Yamakuchi M. MicroRNA-34a regulation of endothelial senescence. Biochem Biophys Res Commun. 2010;398:735–40.
Badi I, Burba I, Ruggeri C, Zeni F, Bertolotti M, Scopece A et al. MicroRNA-34a induces vascular smooth muscle cells senescence by SIRT1 downregulation and promotes the expression of age-associated pro-inflammatory secretory factors. J Gerontol Ser A Biol Sci Med Sci. 2014. doi: 10.1093/gerona/glu180. Epub ahead of print.
Gomes AP, Price NL, Ling AJ, Moslehi JJ, Montgomery MK, Rajman L, et al. Declining NAD(+) induces a pseudohypoxic state disrupting nuclear-mitochondrial communication during aging. Cell. 2013;155:1624–38.
Wang W, Yan C, Zhang J, Lin R, Lin Q, Yang L, et al. SIRT1 inhibits TNF-alpha-induced apoptosis of vascular adventitial fibroblasts partly through the deacetylation of FoxO1. Apoptosis. 2013;18:689–701.
Blander G, Guarente L. The Sir2 family of protein deacetylases. Annu Rev Biochem. 2004;73:417–35.
Brunet A, Sweeney LB, Sturgill JF, Chua KF, Greer PL, Lin Y, et al. Stress-dependent regulation of FOXO transcription factors by the SIRT1 deacetylase. Science. 2004;303:2011–5.
Nakae J, Oki M, Cao Y. The FoxO transcription factors and metabolic regulation. FEBS Lett. 2008;582:54–67.
Wang X, Chen WR, Xing D. A pathway from JNK through decreased ERK and Akt activities for FOXO3a nuclear translocation in response to UV irradiation. J Cell Physiol. 2012;227:1168–78.
Yee C, Yang W, Hekimi S. The intrinsic apoptosis pathway mediates the pro-longevity response to mitochondrial ROS in C. elegans. Cell. 2014;157:897–909.
Acin-Perez R, Carrascoso I, Baixauli F, Roche-Molina M, Latorre-Pellicer A, Fernandez-Silva P, et al. ROS-triggered phosphorylation of complex II by Fgr kinase regulates cellular adaptation to fuel use. Cell Metab. 2014;19:1020–33.
van Rooij E, Olson EN. MicroRNA therapeutics for cardiovascular disease: opportunities and obstacles. Nat Rev Drug Discov. 2012;11:860–72.
Bu P, Chen KY, Chen JH, Wang L, Walters J, Shin YJ, et al. A microRNA miR-34a-regulated bimodal switch targets Notch in colon cancer stem cells. Cell Stem Cell. 2013;12:602–15.
This study was supported by grants from the National Natural Science Foundation of China (to BY, grant numbers 81171430 and 81330033; to BL, grant number 81400296) and the Key Laboratory of Myocardial Ischemia Mechanism and Treatment (Harbin Medical University), Ministry of Education (to XiaL, grant number KF201412).
Key Laboratories of Education Ministry for Myocardial Ischemia Mechanism, The Second Affiliated Hospital of Harbin Medical University, 148 Baojian Road, Harbin, 150086, P.R. China
Fengyun Zhang, Jinjin Cui, Bo Lv, Xinxin Liu, Zulong Xie & Bo Yu
Department of Cardiology, The Second Affiliated Hospital of Harbin Medical University, 148 Baojian Road, Harbin, 150086, P.R. China
Department of Cardiology, Mudanjiang Forestry Central Hospital, 50 XinhuaRoad, Mudanjiang, 157000, P.R. China
Xiaojing Liu
Correspondence to Bo Yu.
The authors declare that there are no competing interests.
FZ contributed to the experimental design, carried out the molecular biology experiments, performed the statistical analysis, and drafted the manuscript. JC participated in the design of the study and the sequence alignment and revised the manuscript. XiaL was responsible for MSC transfection, statistical analysis, and helped to draft the manuscript. BL carried out the immunoassays and revised the manuscript. XinL participated in isolation and culture of MSCs and helped to draft the manuscript. ZX participated in the design of the study, performed the statistical analysis, and helped to revise the manuscript. BY conceived of the study, and participated in its design and coordination and helped to draft the manuscript. All authors read and approved the final manuscript.
Zhang, F., Cui, J., Liu, X. et al. Roles of microRNA-34a targeting SIRT1 in mesenchymal stem cells. Stem Cell Res Ther 6, 195 (2015). https://doi.org/10.1186/s13287-015-0187-x
Revised: 13 August 2015
Optical properties of Li-based nonlinear crystals for high power mid-IR OPCPA pumped at 1 µm under realistic operational conditions
Mahesh Namboodiri,1 Cheng Luo,1 Gregor Indorf,2 Torsten Golz,2 Ivanka Grguraš,2 Jan H. Buss,2 Michael Schulz,2 Robert Riedel,2 Mark J. Prandolini,2,3,5 and Tim Laarmann1,4,6
1Deutsches Elektronen-Synchrotron DESY, Notkestraße 85, 22607 Hamburg, Germany
2Class 5 Photonics GmbH, Notkestraße 85, 22607 Hamburg, Germany
3Universität Hamburg, Institut für Experimentalphysik, Luruper Chaussee 149, 22761, Hamburg, Germany
4The Hamburg Centre for Ultrafast Imaging CUI, 22761 Hamburg, Germany
[email protected]
[email protected]
Tim Laarmann https://orcid.org/0000-0003-4289-8536
https://doi.org/10.1364/OME.414478
Mahesh Namboodiri, Cheng Luo, Gregor Indorf, Torsten Golz, Ivanka Grguraš, Jan H. Buss, Michael Schulz, Robert Riedel, Mark J. Prandolini, and Tim Laarmann, "Optical properties of Li-based nonlinear crystals for high power mid-IR OPCPA pumped at 1 µm under realistic operational conditions," Opt. Mater. Express 11, 231-239 (2021)
Original Manuscript: November 10, 2020
Optical properties of mid-infrared, Li-based nonlinear crystals (NLC) are estimated under realistic experimental conditions for high power lasers using the thermal imaging method. The study focuses on crystals with relatively large apertures for high energy and power applications that are transparent in a broad spectral range (6–16 µm). For this purpose, a high average power Yb:YAG laser amplifier system was used that pumps the crystals and the thermal response of the materials was recorded. An estimate of the linear and nonlinear absorption coefficients of different non-oxide crystals at the 1-µm pump wavelength along with their nonlinear refractive index is provided. To the best of our knowledge, linear and nonlinear absorption coefficients are presented for the first time, including the nonlinear refractive index of AGS, LGSe, LIS, and LISe. These optical material properties are of utmost importance for cutting-edge laser developments close to damage thresholds since they affect the resulting beam quality and conversion efficiencies of novel high power optical parametric amplifiers operating in the long-wavelength mid-infrared spectral range.
Mid-infrared (mid-IR) optical parametric chirped-pulse amplifier (OPCPA) laser systems operating at central wavelengths beyond 5 µm and delivering ultrashort pulses at high repetition rate and high power levels are of considerable interest for vibrational spectroscopy, label-free microscopy, and ultrafast dynamics studies [1]. The prospect of efficiently controlling matter, materials, and the building blocks of life with light beyond electronic excitations drives the development of both mid-IR laser sources and sophisticated laser pulse shaping capabilities in this spectral range [2]. Owing to the exceptional power scalability of 1 µm Yb pump lasers, OPAs operating between 3 and 4 µm have been developed that provide femtosecond laser pulses at MHz rate at the multi-watt level [3–8]. These lasers rely on wide-bandgap oxide crystals for parametric frequency down-conversion, which are available commercially with aperture sizes well above 1 cm. Nonlinear optical materials such as LiNbO3, KNbO3, and KTiOAsO4 (KTA) can be pumped with short-pulse (< 1.5 ps) high power lasers with negligible two-photon absorption at 1 µm and a high damage threshold [9,10]. Since oxide crystals exhibit strong multiphoton absorption bands in the spectral range above 5 µm, one typically has to build an OPA-DFG (difference frequency generation) cascade to reach the longer-wave mid-IR range. In the corresponding laser schemes, the OPA stages use oxide crystals, and the DFG stage is based on non-oxide semiconductor crystals, such as AgGaSe2 [11,12], GaSe [2,13–15], AgGaS2 [16,17], CdSiP2 [18], and ZnGeP2 [19–21]. The bandgap of these materials is rather small, on the order of 2.4 eV, which matches the two-photon energy of high power commercial Yb pump lasers. This excludes, or at least makes extremely challenging, efficient pumping of a parametric frequency down-conversion process in these crystals at 1 µm without damage [22].
The overall pump-to-mid-IR energy conversion efficiency of the OPA/DFG cascades is typically well below 0.5% at a center wavelength of ≈ 8 µm. Eliminating the difference frequency generation step in producing long-wavelength laser pulses (λ ≥ 5 µm) therefore holds the promise of significantly increasing the conversion efficiency of parametric down-conversion devices. As a prerequisite, novel non-oxide Li-based materials – LiGaS2 (LGS), LiGaSe2 (LGSe), LiInS2 (LIS), and LiInSe2 (LISe) – are becoming available that are transparent in a broad spectral range across the mid-IR (6–16 µm) and exhibit a sufficiently high damage threshold. Combined with high-average-power 1-µm Yb pump laser technology, this opens the door to extending ultrashort high power laser pulses for research, development, and industrial applications towards the longer-wave mid-IR. Impressive results have been achieved recently with LGS crystals, which exhibit a transparency range of 0.32–11.6 µm and a large bandgap of ≈ 4 eV. Ultrashort pulses at the nanojoule level with a pulse duration close to the Fourier-transform limit at central wavelengths ranging from 7–11 µm have been demonstrated in OPA [23–25], DFG [13], and intra-pulse DFG laser systems [7,26,27]. On the one hand, further up-scaling of the pulse energy and average power of LGS-based mid-IR lasers in DFG and OPA schemes is constrained by the currently available crystal aperture size (≈ 7 × 7 mm2) at larger lengths (≈ 5–7 mm). On the other hand, it has been suggested that in OPCPA systems, up-scaling of the pulse energy beyond 100 µJ and to watt-level average power should in principle be straightforward by using longer crystals (several mm) and a longer pump pulse width (≈ 10 ps) [1]. Of course, at some point the damage threshold of LGS, with peak intensities on the order of 50 GW/cm2, comes into play [23]. But even before that, when pumping at 1 µm with ultrashort pulses at MHz repetition rate, it is crucial to control the resulting thermal loads. Insufficient heat conductivity may result in temperature gradients across the crystal. Spatially inhomogeneous refractive index changes can occur, which lead to varying phase-matching conditions across the crystal, limit the attainable average power, spectral bandwidth, and beam quality, and compromise stable, reliable long-term performance. Thus, in order to design robust mid-IR laser architectures, detailed information on the optical properties of the converter materials under realistic high power and energy conditions is essential [28]. However, investigations of material properties related to ultrafast laser-induced damage in nonlinear crystals at repetition rates above a few kHz are rare [29].
An easy-to-implement thermal imaging method gives a good upper-limit estimate of the linear and nonlinear absorption coefficients, including the nonlinear refractive index, "under operational conditions" [30]. This means that the material parameters are measured using the same laser pump parameters as would be used for high energy and power OPCPA pumping. In this case, a high power laser irradiates the entire surface area of a large crystal (aperture 9 × 9 mm2). These measurements therefore provide more realistic absorption and nonlinear coefficients, averaged over large crystals including local volume defects, impurities, surface effects, and surface defects. Thus, the thermal imaging method provides realistic upper limits compared to methods that measure locally inside small "perfect" crystals [30]. In comparison, the well-established photothermal common-path interferometry (PCI) technique [31] yields only the linear absorption, measured very accurately at a single point within the crystal volume using a low power continuous-wave laser; the multiphoton absorption coefficient is therefore not accessible with this method. Additionally, the well-known z-scan method, introduced in 1990 by van Stryland and co-workers [32], accurately provides the nonlinear refractive index n2 within a small volume of the crystal.
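To make the idea behind the thermal imaging approach concrete, the sketch below balances the absorbed laser power against convective and radiative losses from a free-standing crystal in thermal equilibrium; the heat-transfer coefficient, emissivity, temperature rise, and pump power are placeholder values, and the actual analysis of Ref. [30] may use a different model.

# Steady-state energy balance for a free-standing crystal (illustrative only; assumed values)
sigma = 5.670e-8                               # Stefan-Boltzmann constant, W m^-2 K^-4
h     = 10.0                                   # assumed natural-convection coefficient, W m^-2 K^-1
eps   = 0.9                                    # assumed surface emissivity in the 7.5-14 um band
A     = 2 * (9e-3 * 9e-3) + 4 * (9e-3 * 2e-3)  # surface area of a 9 x 9 x 2 mm^3 crystal, m^2
T_amb = 295.0                                  # ambient temperature, K
dT    = 8.0                                    # hypothetical measured temperature rise, K

P_conv = h * A * dT
P_rad  = eps * sigma * A * ((T_amb + dT) ** 4 - T_amb ** 4)
P_abs  = P_conv + P_rad                        # power the crystal must absorb to sustain dT

P_in = 150.0                                   # hypothetical pump power inside the crystal, W
print(f"Absorbed power ~ {P_abs * 1e3:.1f} mW, i.e. ~ {100 * P_abs / P_in:.3f} % of the pump")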
We selected several of the non-oxide nonlinear crystals (NLCs) for the present study and derived their linear and two-photon absorption coefficients along with the nonlinear refractive index at 1.03 µm (cf. Table 1), using a commercial Yb:YAG Innoslab amplifier from AMPHOS. The laser system provides τ = 0.92 ps (FWHM, fitted with a sech² function) pulses at λ0 = 1030 nm (spectral bandwidth ≈ 1.1 nm) with M² < 1.3 and a repetition rate tunable from 200 kHz up to 1 MHz. Similar average power levels can be obtained at both repetition rates, with a maximum output power of P = 200 ± 2 W.
Table 1. Optical properties of mid-IR non-oxide, Li-based nonlinear crystals (constants derived at 1030 nm)

Nonlinear crystal | Bandgap [36] [eV] | deff [37] [pm/V] | α [cm−1] | β [cm/GW] | n2 [cm²/W]
AgGaS2 (AGS)   | 2.70 | 15.9 | < 0.005 | -            | < 140.6 × 10−15
LiGaS2 (LGS)   | 4.15 | 5.60 | < 0.002 | < 3.2 × 10−4 | < 6.4 × 10−15
LiGaSe2 (LGSe) | 3.34 | 9.27 | < 0.002 | < 2.0 × 10−3 | < 27.4 × 10−15
LiInS2 (LIS)   | 3.57 | 6.90 | < 0.001 | < 2.6 × 10−4 | < 9.4 × 10−15
LiInSe2 (LISe) | 2.86 | 9.48 | < 0.01  | < 1.6 × 10−3 | < 16.2 × 10−15
2. Experimental set-up and samples
The experimental setup is depicted in Fig. 1. Each NLC was placed on two ceramic stands that contacted only its bottom corners and was fixed from the top using a nylon-tip screw, such that all surfaces were exposed to ambient air flow and thermal conduction to the holder was kept to a minimum. The "free-standing" NLCs were irradiated by the 1 µm laser beam under conditions resembling a real application. The experiments were conducted at 1 MHz and 200 kHz repetition rates, and the average power at each repetition rate was adjusted using a half-wave plate and a polarizer. The laser beam size was similar in both cases (1/e² radius ω = 2.1 mm) and was chosen to fill the complete crystal aperture (9 × 9 mm²) in order to obtain a homogeneous heat distribution across the crystal. The experiments were performed on LGS (type II-XY plane, θ = 90°; φ = 37.5°), LGSe (type II-XY plane, θ = 90°; φ = 33.5°), LIS (type II-XY plane, θ = 90°; φ = 31.5°), LISe (type II-XY plane, θ = 90°; φ = 33.0°) and AGS (type II-XY plane, θ = 90°; φ = 39.5°). The 2-mm-thick crystals were purchased from the vendor Ascut Ltd. & Co. KG [28]. The crystals were uncoated on both sides, and the cut angles were selected for a 10 µm idler wavelength when pumped at 1.03 µm. All measurements were performed with the pump laser in "e" polarization. The reflected and transmitted pump laser powers were measured to estimate the power inside the crystal and the Fresnel reflections at both front and back surfaces, which reduce the total power by about 13.4%. The beam profile of the transmitted beam was monitored (Basler Aca 1300gm) by reflecting a small part of it with a wedge. The thermal images of the crystals under investigation were recorded in thermal equilibrium using an IR camera (FLUKE Ti25, spectral range 7.5–14 µm). A typical example of the heat distribution of a pumped crystal is presented in Fig. 1 (bottom, left).
Fig. 1. Thermal imaging setup: all nonlinear crystals (NLCs) were of size 9 × 9 × 2 mm³ and mounted "free standing" (see text). Power meters were used to estimate the power inside the crystal and to confirm the expected Fresnel reflections at both front and back surfaces. The thermal camera was used to measure the temperature across the surfaces of the crystal; an example image is shown (bottom, left). A Basler camera was used to record beam profiles that are modified by the nonlinear response of the NLCs.
3.1 Linear and nonlinear absorption coefficients
The major limiting factors in increasing the average power levels of OPCPAs are thermal effects due to absorption of pump, signal and idler pulses in the NLCs. In the case of mid-IR OPCPAs (6–16 µm), the relatively small conversion efficiency implies negligible absorption of the idler, and the absorption at the signal wavelengths is expected to be similar to that at the pump wavelength. Therefore, under these conditions, we expect the pump pulse to dominate the thermal equilibrium conditions of the crystals [30,33].
Under thermal equilibrium, a thermal model is solved analytically to obtain the linear and nonlinear absorption coefficients. The model uses known laser and material parameters, with the absorption coefficients α (linear) and β (multi-photon) as free parameters. The heat balance is derived by considering the energy transfer upon laser irradiation due to black-body radiation and convection under thermal equilibrium [30,34]. For high-repetition-rate Yb pump lasers this treatment is justified because the thermal relaxation time is much longer than the temporal gap between consecutive pulses. In the thermal model it is assumed that the absorbed laser power Pabs, which can be expressed as [35]
(1)$${P_{\textrm{abs}}} = \; {C_1}\alpha I + {C_2}\beta {I^2},$$
is re-emitted. Thus, total heat H exchanged is given by
(2)$$H = {H_{\textrm{black-body}}} + {H_{\textrm{convection}}} = \sigma \epsilon A({{T_\textrm{C}}^4 - {T_\textrm{R}}^4}) + hA({{T_\textrm{C}} - {T_\textrm{R}}})$$
which finally results in
(3)$$\sigma \epsilon A({{T_\textrm{C}}^4 - {T_\textrm{R}}^4} )+ \; hA({{T_\textrm{C}} - {T_\textrm{R}}} )= \; {C_1}\alpha I + {C_2}\beta {I^2}$$
The heat radiated as black-body radiation is determined by the Stefan–Boltzmann constant ($\sigma $ = 5.669 × 10−8 W m−2 K−4) [34], the surface emissivity $\epsilon$, the surface area A of the crystal, the crystal temperature TC and the room temperature TR. Because the spectral range of the thermal camera lies within the transmission window of the mid-IR NLCs, the surface emissivities of the crystals were calibrated against a thermal reference; for LGS, ${\epsilon _{LGS}}$ = 0.6 is used (and for the other crystals ${\epsilon _{AGS}}$ = 0.37, ${\epsilon _{LGSe}}$ = 0.42, ${\epsilon _{LIS}}$ = 0.6 and ${\epsilon _{LISe}}$ = 0.44). For the heat convection coefficient, a common literature value of $h$ = 10 W m−2 K−1 is used [34].
The coefficients C1 and C2 are determined by the laser pulse parameters (repetition rate f, radial beam waists ${\omega _\textrm{x}}$, ${\omega _\textrm{y}}$ perpendicular to the propagation direction, and temporal pulse width $\tau $) and by the crystal length L. Integration of the squared hyperbolic secant (sech²) function describing the pulse shape gives
(4)$${C_1} = fL({1.206{\omega_\textrm{x}}{\omega_\textrm{y}}\tau } )\; \textrm{and} \; {C_2} = fL({0.426{\omega_\textrm{x}}{\omega_\textrm{y}}\tau } ).$$
The variation of the average temperature across the crystal surface with increasing pump laser intensity is shown in Fig. 2 for the case of the LGS crystal. The temperature is determined at the center of the crystal aperture in the thermal image. The experiments were performed at 1 MHz and 200 kHz repetition rates. The average power inside the crystal is estimated by measuring the power reflected from the crystal surfaces and accounting for the Fresnel losses. The pulse duration and the spatial beam profiles were very similar in both experimental campaigns. The 1 MHz data, taken at lower pulse energies, show a larger contribution from linear absorption, whereas at 200 kHz, and therefore at higher pulse energies, the nonlinear absorption dominates (see also Fig. 3). This general trend is observed for all non-oxide NLCs under investigation except LGSe, for which strong multi-photon absorption is already observed in the low-pulse-energy data at 1 MHz. By calculating the total heat as a function of pump laser intensity from the recorded thermal images and fitting the data according to Eq. (3), rough estimates of the linear and nonlinear absorption coefficients are derived for the different NLCs. The coefficient $\alpha $ is derived from a linear fit to the 1 MHz data and is then kept fixed in the nonlinear fit to the 200 kHz data that yields $\beta $; this independent determination of $\alpha $ improves the fitting accuracy for the estimation of $\beta $. As an example, the data evaluation for the LGS crystal is shown in Fig. 3.
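To make the two-step fitting procedure explicit, the following Python sketch (not the authors' code) applies Eqs. (2)–(4) to hypothetical temperature-vs-intensity data: the total heat is computed from the measured surface temperature, α is taken from a linear fit to the 1 MHz data, and β from a one-parameter fit to the 200 kHz data with α held fixed. All numerical values, including the data arrays and the assumed surface area, are placeholders for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

# Constants and example parameters (LGS values from the text; area and data are assumptions)
sigma = 5.669e-8                      # Stefan-Boltzmann constant [W m^-2 K^-4]
eps, h = 0.6, 10.0                    # LGS emissivity; convection coefficient [W m^-2 K^-1]
A = 2*(9e-3*9e-3) + 4*(9e-3*2e-3)     # total surface area of a 9 x 9 x 2 mm^3 crystal [m^2]
L, tau = 0.2, 0.92e-12                # crystal length [cm], pulse duration (FWHM) [s]
wx = wy = 0.21                        # 1/e^2 beam radii [cm]
T_R = 295.0                           # room temperature [K]

def total_heat(T_C):
    """Re-emitted heat: black-body radiation plus convection, Eq. (2)."""
    return sigma*eps*A*(T_C**4 - T_R**4) + h*A*(T_C - T_R)

def coeffs(f_rep):
    """Pulse-shape factors C1, C2 of Eq. (4); intensities are used in W/cm^2."""
    return f_rep*L*1.206*wx*wy*tau, f_rep*L*0.426*wx*wy*tau

# Hypothetical measurements: peak intensity inside the crystal [W/cm^2] vs. temperature [K]
I_1MHz = np.array([0.5, 1.0, 1.5, 2.0, 2.5]) * 1e9
T_1MHz = np.array([296.0, 297.1, 298.2, 299.3, 300.4])
I_200k = np.array([2.0, 4.0, 6.0, 8.0, 10.0]) * 1e9
T_200k = np.array([296.4, 297.9, 299.8, 302.3, 305.3])

# Step 1: linear absorption coefficient from the 1 MHz data (slope of H vs. C1*I)
C1_1M, _ = coeffs(1e6)
alpha = np.polyfit(C1_1M * I_1MHz, total_heat(T_1MHz), 1)[0]          # [1/cm]

# Step 2: two-photon absorption from the 200 kHz data with alpha fixed, Eq. (3)
C1_200k, C2_200k = coeffs(2e5)
def model(I, beta):
    return C1_200k*alpha*I + C2_200k*beta*I**2
(beta_fit,), _ = curve_fit(model, I_200k, total_heat(T_200k), p0=[1e-13])
print(f"alpha ~ {alpha:.2e} 1/cm, beta ~ {beta_fit*1e9:.2e} cm/GW")
```

With real thermal-image data in place of the placeholder arrays, the same two-step procedure yields the upper limits listed in Table 1.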
Fig. 2. Example temperature evolution of the LGS crystal, obtained from thermal images recorded at increasing irradiation power levels inside the crystal at 1 MHz (a) and 200 kHz (b) repetition rates. The black line represents the average temperature across the crystal as a function of laser intensity.
Fig. 3. Linear fit to the 1 MHz data (a) and nonlinear fit to the 200 kHz data (b) of total heat vs. intensity for the LGS crystal, used to obtain the linear and nonlinear absorption coefficients from the analytical model.
The estimated values of the linear and nonlinear absorption coefficients for the various crystals are summarized in Table 1; values for the bandgap and deff were taken from [36] and [37], respectively. The fitting error for all graphs was less than 0.2%. However, because all systematic errors tend to increase the values estimated with these methods [30], all experimentally derived values are given as upper limits (see also the discussion in Section 1).
Furthermore, for all non-oxide NLCs we observed the generation of second-harmonic-like 'green' light along with increased lensing effects at high intensities; these effects were, however, too small to affect the analysis of this work. The AGS crystal showed a large Kerr lensing effect already at an average power of 45 W, in both the 1 MHz and 200 kHz data sets, corresponding to peak intensities of ≈ 0.8 GW/cm² and ≈ 4.2 GW/cm², respectively. Therefore, the experiments on AGS were not continued to higher power levels in order to avoid damaging the crystal. A similar observation was made for LISe. For the AGS crystal, only a linear increase in temperature was observed at both repetition rates, to within experimental error; thus, only the linear absorption coefficient was obtained for AGS.
3.2 Nonlinear refractive index
We propose a simple and robust method to obtain an upper estimate of the second-order nonlinear refractive index n2 of nonlinear crystals, based on beam profile measurements after the beam traverses the crystal under realistic high-power user conditions and beam parameters. Note that cascaded second-order effects and Kerr lensing can both contribute to an effective n2 [38,39], which is an important parameter for OPCPA design. In order to benchmark the measurements, we chose uncoated LGS as a sample material, because its effective n2 value upon high-power pumping at 1 µm has been published recently: 3.5 × 10−15 cm²/W [23] and 4.1 × 10−15 cm²/W [29]. The latter study utilized the well-known z-scan method introduced in 1990 by van Stryland and co-workers [32].
The Basler camera images were taken after the beam had passed the "free-standing" NLC and was imaged onto the camera with a wedge (W) and a lens (L2, 400 mm focal length) (see Fig. 1). The results for the LGS crystal are shown in Fig. 4. At 1 MHz, and thus at relatively low intensity, the beam profiles remain similar with and without the NLC, so the effects of thermal lensing can also be neglected. At 200 kHz, however, the beam profiles show a reduction in beam diameter with increasing pump intensity, resulting from Kerr lensing (Fig. 4). The beam profile measurements were implemented in order to monitor these processes and to identify whether any irreversible damage occurred during the measurement.
Fig. 4. Basler camera images of the beam profile demonstrating the nonlinear response with increasing pump intensity for the LGS crystal at both 1 MHz and 200 kHz repetition rates. The beam profiles from top to bottom correspond to applied laser pump powers of 100%, 80%, 40%, 20% and 10% of the total power (the corresponding estimated power inside the crystal is given in square brackets).
Due to the optical Kerr effect, a Kerr lens is formed with focal length fK given by
(5)$${f_\textrm{K}} = \; {\omega ^2}/({4{n_2}IL} ).$$
Here, ω is the 1/e² beam radius (determined by the knife-edge method), I is the peak intensity and L denotes the length of the crystal. Thus, with increasing pump intensity the nonlinear crystal forms an effective lens, reducing the original size of the pump profile as measured without an NLC or at very low intensities. Assuming Gaussian beam propagation and known laser, crystal and imaging optics, we derive fK by comparing the images at various intensities with a simple simulation of Gaussian beam propagation using idealized lenses. For the LGS crystal we obtain an upper estimate of n2 < 6.4 × 10−15 cm²/W, in very good agreement with the published experimental results. The derived nonlinear refractive index n2 values of all other crystals under investigation are summarized in Table 1.
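The comparison with a Gaussian-beam simulation can be sketched with a few lines of ABCD-matrix propagation. The snippet below is a simplified forward model, not the authors' code: the crystal is reduced to a single thin Kerr lens according to Eq. (5), the Table 1 upper limit for LGS is used as the n2 value, and the distance to the camera is a placeholder (the real path includes the wedge and the 400 mm imaging lens L2). In the actual analysis, n2 is varied in such a model until the simulated spot-size change matches the measured beam profiles.

```python
import numpy as np

lam, w0, L_c = 1.03e-6, 2.1e-3, 2e-3       # wavelength, 1/e^2 radius at the crystal, crystal length [m]

def abcd(q, *elements):
    """Propagate the complex Gaussian beam parameter q through a chain of ABCD matrices."""
    for (A, B), (C, D) in elements:
        q = (A*q + B) / (C*q + D)
    return q

def free(d):  return ((1.0, d), (0.0, 1.0))
def lens(f):  return ((1.0, 0.0), (-1.0/f, 1.0))

def w_of(q):
    """1/e^2 radius from the q parameter."""
    return np.sqrt(-lam / (np.pi * np.imag(1.0 / q)))

def radius_at_camera(n2, I_peak, D=1.0):
    """Spot size a distance D after the crystal, modelled as a single thin Kerr lens (Eq. 5).
    D is a placeholder; the real beam path (wedge plus imaging lens) adds further elements."""
    q = 1j * np.pi * w0**2 / lam                       # collimated input beam at the crystal
    if n2 * I_peak > 0:
        q = abcd(q, lens(w0**2 / (4.0 * n2 * I_peak * L_c)))
    return w_of(abcd(q, free(D)))

n2_LGS = 6.4e-15 * 1e-4                                # Table 1 upper limit, cm^2/W -> m^2/W
for I_GW in (0.0, 1.0, 2.0, 4.0):                      # peak intensities [GW/cm^2]
    w_cam = radius_at_camera(n2_LGS, I_GW * 1e13)      # 1 GW/cm^2 = 1e13 W/m^2
    print(f"I = {I_GW:3.1f} GW/cm^2 -> beam radius at the camera: {w_cam*1e3:.3f} mm")
```

Scanning candidate n2 values in such a model and selecting the one that reproduces the observed spot-size reduction mimics the procedure used to arrive at the upper limits in Table 1.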
4. Summary and conclusion
Only a few suitable nonlinear crystals are available for the mid-IR (6–16 µm), a spectral region of great importance for vibrational molecular spectroscopy and atmospheric sensing. In this paper we therefore measured critical material properties of a range of NLCs pumped at 1 µm, which can be used as input for realistic simulations of high-average-power laser performance.
Although the well-known AGS crystal has a high deff, it has a lower bandgap and a much higher nonlinear refractive index (n2) than the other lithium-based crystals when pumped at 1 µm (Table 1), making it unsuitable for high-energy and high-power applications. For low-power applications, LIS and LISe might be promising because of their relatively high deff. For high-energy and high-power applications, however, LGS combines a sufficiently large deff with a comparatively high optical damage threshold owing to its large bandgap and broad transparency range. Additionally, LGS has the lowest multi-photon absorption coefficient and nonlinear refractive index (see Table 1). This makes the material a very promising candidate among the non-oxide nonlinear crystals for high-power ultrashort mid-IR OPCPA applications pumped directly at 1 µm.
The presented data provide crucial design parameters for the development of high-average-power mid-IR OPCPA systems based on industrial Yb pump laser technology under realistic user conditions. Finally, we note that an interesting alternative avenue is to use Ho- or Tm-doped high-power lasers to pump suitable nonlinear crystals directly at 2 µm. The challenge there, however, is to further boost the pulse energy and average power of these 2-µm pump lasers in order to make full use of the correspondingly high effective nonlinearity of the available nonlinear crystals.
Cluster of Excellence "CUI: Advanced Imaging of Matter" of the Deutsche Forschungsgemeinschaft (DFG) (EXC 2056, project ID 390715994); European Regional Development Fund; Hamburgische Investitions- und Förderbank (IFB); Free and Hanseatic City of Hamburg ('Supernova DFG').
We thank Class 5 Photonics, Dr. Slawomir Skruszewicz and Dr. Andreas Przystawik for fruitful discussions.
1. S. Qu, H. Liang, K. Liu, X. Zou, W. Li, Q. J. Wang, and Y. Zhang, "9 µm few-cycle optical parametric chirped-pulse amplifier based on LiGaS2," Opt. Lett. 44(10), 2422 (2019). [CrossRef]
2. M. A. Jakob, M. Namboodiri, M. J. Prandolini, and T. Laarmann, "Generation and characterization of tailored MIR waveforms for steering molecular dynamics," Opt. Express 27(19), 26979 (2019). [CrossRef]
3. T. Südmeyer, E. Innerhofer, F. Brunner, R. Paschotta, T. Usami, H. Ito, S. Kurimura, K. Kitamura, D. C. Hanna, and U. Keller, "High-power femtosecond fiber-feedback optical parametric oscillator based on periodically poled stoichiometric LiTaO3," Opt. Lett. 29(10), 1111 (2004). [CrossRef]
4. F. Mörz, T. Steinle, A. Steinmann, and H. Giessen, "Multi-Watt femtosecond optical parametric master oscillator power amplifier at 43 MHz," Opt. Express 23(18), 23960 (2015). [CrossRef]
5. T. Steinle, F. Mörz, A. Steinmann, and H. Giessen, "Ultra-stable high average power femtosecond laser system tunable from 1.33 to 20 µm," Opt. Lett. 41(21), 4863 (2016). [CrossRef]
6. F. Adler, K. C. Cossel, M. J. Thorpe, I. Hartl, M. E. Fermann, and J. Ye, "Phase-stabilized, 1.5 W frequency comb at 2.8–4.8 µm," Opt. Lett. 34(9), 1330 (2009). [CrossRef]
7. I. Pupeza, D. Sanchez, J. Zhang, N. Lilienfein, M. Seidel, N. Karpowicz, T. Paasch-Colberg, I. Znakovskaya, M. Pescher, W. Schweinberger, V. Pervak, E. Fill, O. Pronin, Z. Wei, F. Krausz, A. Apolonski, and J. Biegert, "High-power sub-two-cycle mid-infrared pulses at 100 MHz repetition rate," Nat. Photonics 9(11), 721–724 (2015). [CrossRef]
8. A. Schliesser, N. Picqué, and T. W. Hänsch, "Mid-infrared frequency combs," Nat. Photonics 6(7), 440–449 (2012). [CrossRef]
9. F. Bach, M. Mero, M.-H. Chou, and V. Petrov, "Laser induced damage studies of LiNbO3 using 1030-nm, ultrashort pulses at 10-1000 kHz," Opt. Mater. Express 7(1), 240 (2017). [CrossRef]
10. F. Bach, M. Mero, V. Pasiskevicius, A. Zukauskas, and V. Petrov, "High repetition rate, femtosecond and picosecond laser induced damage thresholds of Rb:KTiOPO4 at 1.03 µm," Opt. Mater. Express 7(3), 744 (2017). [CrossRef]
11. O. Novák, P. R. Krogen, T. Kroh, T. Mocek, F. X. Kärtner, and K.-H. Hong, "Femtosecond 8.5 µm source based on intrapulse difference-frequency generation of 2.1 µm pulses," Opt. Lett. 43(6), 1335 (2018). [CrossRef]
12. M. Beutler, I. Rimke, E. Büttner, P. Farinello, A. Agnesi, V. Badikov, D. Badikov, and V. Petrov, "Difference-frequency generation of ultrashort pulses in the mid-IR using Yb-fiber pump systems and AgGaSe2," Opt. Express 23(3), 2730 (2015). [CrossRef]
13. M. Knorr, J. Raab, M. Tauer, P. Merkl, D. Peller, E. Wittmann, E. Riedle, C. Lange, and R. Huber, "Phase-locked multi-terahertz electric fields exceeding 13 MV/cm at a 190 kHz repetition rate," Opt. Lett. 42(21), 4367 (2017). [CrossRef]
14. C. Gaida, M. Gebhardt, T. Heuermann, F. Stutzki, C. Jauregui, J. Antonio-Lopez, A. Schülzgen, R. Amezcua-Correa, A. Tünnermann, I. Pupeza, and J. Limpert, "Watt-scale super-octave mid-infrared intrapulse difference frequency generation," Light: Sci. Appl. 7(1), 94 (2018). [CrossRef]
15. J. Zhang, K. F. Mak, N. Nagl, M. Seidel, D. Bauer, D. Sutter, V. Pervak, F. Krausz, and O. Pronin, "Multi-mW, few-cycle mid-infrared continuum spanning from 500 to 2250 cm−1," Light: Sci. Appl. 7(2), 17180 (2018). [CrossRef]
16. G. M. Archipovaite, P. Malevich, E. Cormier, T. Lihao, A. Baltuska, and T. Balciunas, "Efficient few-cycle mid-IR pulse generation in the 5-11 µm window driven by an Yb amplifier," in Advanced Solid State Lasers (Optical Society of America, 2017) paper AM4A.4.
17. A. Lanin, A. Voronin, E. Stepanov, A. Fedotov, and A. Zheltikov, "Multioctave, 3–18 µm sub-two-cycle supercontinua from self-compressing, self-focusing soliton transients in a solid," Opt. Lett. 40(6), 974 (2015). [CrossRef]
18. H. Liang, P. Krogen, Z. Wang, H. Park, T. Kroh, K. Zawilski, P. Schunemann, J. Moses, L. F. DiMauro, F. X. Kärtner, and K.-H. Hong, "High-energy mid-infrared sub-cycle pulse synthesis from a parametric amplifier," Nat. Commun. 8(1), 141 (2017). [CrossRef]
19. L. von Grafenstein, M. Bock, D. Ueberschaer, K. Zawilski, P. Schunemann, U. Griebner, and T. Elsaesser, "5 µm few-cycle pulses with multi-gigawatt peak power at a 1 kHz repetition rate," Opt. Lett. 42(19), 3796 (2017). [CrossRef]
20. D. Sanchez, M. Hemmer, M. Baudisch, S. Cousin, K. Zawilski, P. Schunemann, O. Chalus, C. Simon-Boisson, and J. Biegert, "7 µm, ultrafast, sub-millijoule-level mid-infrared optical parametric chirped pulse amplifier pumped at 2 µm," Optica 3(2), 147 (2016). [CrossRef]
21. T. Kanai, P. Malevich, S. S. Kangaparambil, K. Ishida, M. Mizui, K. Yamanouchi, H. Hoogland, R. Holzwarth, A. Pugzlys, and A. Baltuska, "Parametric amplification of 100 fs mid-infrared pulses in ZnGeP2 driven by a Ho:YAG chirped-pulse amplifier," Opt. Lett. 42(4), 683 (2017). [CrossRef]
22. V. Petrov, "Parametric down-conversion devices: The coverage of the mid-infrared spectral range by solid-state laser sources," Opt. Mater. 34(3), 536–554 (2012). [CrossRef]
23. M. Seidel, X. Xiao, S. A. Hussain, G. Arisholm, A. Hartung, K. T. Zawilski, P. G. Schunemann, F. Habel, M. Trubetskov, V. Pervak, O. Pronin, and F. Krausz, "Multi-watt, multi-octave, mid-infrared femtosecond source," Sci. Adv. 4(4), eaaq1526 (2018). [CrossRef]
24. S. B. Penwell, L. Whaley-Mayda, and A. Tokmakoff, "Single-stage MHz mid-IR OPA using LiGaS2 and a fiber laser pump source," Opt. Lett. 43(6), 1363 (2018). [CrossRef]
25. B.-H. Chen, E. Wittmann, Y. Morimoto, P. Baum, and E. Riedle, "Octave-spanning single-cycle middle-infrared generation through optical parametric amplification in LiGaS2," Opt. Express 27(15), 21306 (2019). [CrossRef]
26. B.-H. Chen, T. Nagy, and P. Baum, "Efficient middle-infrared generation in LiGaS2 by simultaneous spectral broadening and difference-frequency generation," Opt. Lett. 43(8), 1742 (2018). [CrossRef]
27. K. Kaneshima, N. Ishii, K. Takeuchi, and J. Itatani, "Generation of carrier-envelope phase-stable mid-infrared pulses via dual-wavelength optical parametric amplification," Opt. Express 24(8), 8660 (2016). [CrossRef]
28. L. Isaenko, A. Yelisseyev, S. Lobanov, A. Titov, V. Petrov, J.-J. Zondy, P. Krinitsin, A. Merkulov, V. Vedenyapin, and J. Smirnova, "Growth and properties of LiGaX2 (X = S, Se, Te) single crystals for nonlinear optical applications in the mid-IR," Cryst. Res. Technol. 38(3–5), 379–387 (2003). [CrossRef]
29. M. Mero, L. Wang, W. Chen, N. Ye, G. Zhang, V. Petrov, and Z. Heiner, "Laser-induced damage of nonlinear crystals in ultrafast, high-repetition-rate, mid-infrared optical parametric amplifiers pumped at 1 µm," Proc. SPIE 11063, 1106307 (2019). [CrossRef]
30. R. Riedel, J. Rothhardt, K. Beil, B. Gronloh, A. Klenke, H. Höppner, M. Schulz, U. Teubner, C. Kränkel, J. Limpert, A. Tünnermann, M. J. Prandolini, and F. Tavella, "Thermal properties of borate crystals for high power optical parametric chirped-pulse amplification," Opt. Express 22(15), 17607 (2014). [CrossRef]
31. A. Alexandrovski, M. Fejer, A. Markosyan, and R. Route, "Photothermal common-path interferometry (PCI): new developments," Proc. SPIE 7193, 71930D (2009). [CrossRef]
32. M. Sheik-Bahae, A. A. Said, T.-H. Wei, D. J. Hagan, and E. W. van Stryland, "Sensitive measurement of optical nonlinearities using a single beam," IEEE J. Quantum Electron. 26(4), 760–769 (1990). [CrossRef]
33. M. K. R. Windeler, K. Mecseki, A. Miahnahri, J. S. Robinson, J. M. Fraser, A. R. Fry, and F. Tavella, "100 W high-repetition-rate near-infrared optical parametric chirped pulse amplifier," Opt. Lett. 44(17), 4287 (2019). [CrossRef]
34. M. Sabaeian, F. S. Jalil-Abadi, M. M. Rezaee, A. Motazedian, and M. Shahzadeh, "Temperature distribution in a Gaussian end-pumped nonlinear KTP crystal: the temperature dependence of thermal conductivity and radiation boundary condition," Braz. J. Phys. 45(1), 1–9 (2015). [CrossRef]
35. S. Seidel and G. Mann, "Numerical modeling of thermal effects in nonlinear crystals for high average power second harmonic generation," Proc. SPIE 2989, 204 (1997). [CrossRef]
36. L. I. Isaenko and A. P. Yelisseyev, "Recent studies of nonlinear chalcogenide crystals for the mid-IR," Semicond. Sci. Technol. 31(12), 123001 (2016). [CrossRef]
37. A. V. Smith, "SNLO nonlinear optics code (free version)" from http://www.as-photonics.com/snlo.
38. R. DeSalvo, D. J. Hagan, M. Sheik-Bahae, G. Stegeman, and E. W. Van Stryland, "Self-focusing and self-defocusing by cascaded second-order effects in KTP," Opt. Lett. 17(1), 28 (1992). [CrossRef]
39. M. Falconieri, "Thermo-optical effects in z-scan measurements using high-repetition-rate lasers," J. Opt. A: Pure Appl. Opt. 1(6), 662–667 (1999). [CrossRef]
X-ray verification of sol-gel resist shrinkage in substrate-conformal imprint lithography for a replicated blazed reflection grating
Jake A. McCoy,1,* Marc A. Verschuuren,2 Drew M. Miles,1 and Randall L. McEntaffer1
1Department of Astronomy & Astrophysics, The Pennsylvania State University, 525 Davey Laboratory, University Park, PA 16802, USA
2Philips SCIL Nanoimprint Solutions, De Lismortel 31, 5612 AR, Eindhoven, The Netherlands
*Corresponding author: [email protected]
Jake A. McCoy, Marc A. Verschuuren, Drew M. Miles, and Randall L. McEntaffer, "X-ray verification of sol-gel resist shrinkage in substrate-conformal imprint lithography for a replicated blazed reflection grating," OSA Continuum 3(11), 3141-3156 (2020). https://doi.org/10.1364/OSAC.402405
Table of Contents Category: Holography, Gratings, and Diffraction
OSA Optics and Photonics Topics: Astronomical spectroscopy; Diffraction efficiency; Nanoimprint lithography; Scanning electron microscopy; X-ray spectroscopy
Original Manuscript: July 10, 2020
Revised Manuscript: October 16, 2020
Manuscript Accepted: October 19, 2020
Surface-relief gratings fabricated through nanoimprint lithography (NIL) are prone to topographic distortion induced by resist shrinkage. Characterizing the impact of this effect on blazed diffraction efficiency is particularly important for applications in astrophysical spectroscopy at soft x-ray wavelengths (λ ≈ 0.5 − 5 nm) that call for the mass-production of large-area grating replicas with sub-micron, sawtooth surface-relief profiles. A variant of NIL that lends itself well for this task is substrate-conformal imprint lithography (SCIL), which uses a flexible, composite stamp formed from a rigid master template to imprint nanoscale features in an inorganic resist that cures thermodynamically through a silica sol-gel process. While SCIL enables the production of several hundred imprints before stamp degradation and avoids many of the detriments associated with large-area imprinting in NIL, the sol-gel resist suffers shrinkage dependent on the post-imprint cure temperature. Through atomic force microscopy and diffraction-efficiency testing at beamline 6.3.2 of the Advanced Light Source, the impact of this effect on blaze response is constrained for a ∼160-nm-period grating replica cured at 90°C. Results demonstrate a ∼2° reduction in blaze angle relative to the master grating, which was fabricated by anisotropic wet etching in 〈311〉-oriented silicon to yield a facet angle close to 30°.
© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction
Instrument development for astrophysical spectroscopy at soft x-ray wavelengths ($\lambda \approx 0.5 - 5$ nm) represents an active area of research that utilizes blazed gratings with sub-micron periodicities, which are often replicated from a master grating template featuring a custom groove layout [1–3]. Starting with a master grating fabricated by anisotropic wet etching in mono-crystalline silicon and surface-treated for anti-stiction, a sawtooth surface-relief mold that enables high diffraction efficiency in the soft x-ray can be patterned in ultraviolet (UV)-curable, organic resist via UV-nanoimprint lithography (UV-NIL) [2,4,5]. This has been demonstrated by Miles, et al. [2] through beamline diffraction-efficiency testing of a gold-coated, UV-NIL replica with a periodicity of $\sim$160 nm, which was imprinted from a stamp wet-etched in $\langle 311 \rangle$-oriented silicon to yield a nominal blaze angle of 29.5$^{\circ }$ over a 72 cm$^2$ variable-line-space groove layout. These results show that crystallographic etching coupled with UV-NIL processing is capable of producing large-area, blazed gratings that perform with high diffraction efficiency in an extreme off-plane mount. As illustrated in Fig. 1, the incoming radiation in this geometry is nearly parallel to the groove direction so that propagating orders are confined to the surface of a cone as described by
(1)$$\sin \left( \alpha \right) + \sin \left( \beta \right) = \frac{n \lambda}{d \sin \left( \gamma \right)} \; \; \textrm{for} \; \; n = 0, \pm 1, \pm 2, \pm 3 \cdots$$
where $d$ is the groove spacing, $\gamma \lessapprox 2^{\circ }$ is the half-opening angle of the cone, $\alpha$ is the azimuthal incidence angle and $\beta$ is the azimuthal diffracted angle of the $n^{\textrm {th}}$ diffracted order [7].
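As a concrete illustration of Eq. (1), the short Python sketch below lists the propagating orders and their positions along the detector arc for parameters representative of this work; the specific values of α, γ, λ and the mean groove spacing are assumptions chosen only for the example.

```python
import numpy as np

d, L = 159.1e-9, 0.235                  # mean groove spacing [m], throw to the detector [m]
gamma = np.radians(1.7)                 # half-opening angle of the cone (assumed)
alpha = np.radians(29.5)                # azimuthal incidence angle, near-Littrow (assumed)
lam = 2.0e-9                            # test wavelength [m], roughly 0.6 keV

for n in range(-3, 6):
    sin_beta = n * lam / (d * np.sin(gamma)) - np.sin(alpha)   # rearranged Eq. (1)
    if abs(sin_beta) <= 1.0:                                   # order propagates on the cone
        beta = np.degrees(np.arcsin(sin_beta))
        x = n * lam * L / d                                    # offset along the dispersion direction
        print(f"n = {n:+d}: beta = {beta:6.2f} deg, x = {x*1e3:5.2f} mm")
```

Orders whose right-hand side of Eq. (1) would require |sin β| > 1 are evanescent and do not appear on the detector arc.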
Fig. 1. Geometry for a reflection grating producing a conical diffraction pattern [2,6]. In an extreme off-plane mount, the incoming radiation is nearly parallel to the groove direction with $\gamma \lessapprox 2^{\circ }$ while $\alpha$ is free to match the blaze angle, $\delta$, in a Littrow configuration with $\alpha = \beta = \delta$. At a distance $L$ away from the point of incidence on the grating, the arc radius is $r = L \sin \left ( \gamma \right )$ and diffracted orders are each separated by a distance $\lambda L / d$ along the dispersion direction, where $d$ is the groove spacing.
While UV-NIL has been proven to be a suitable technology for replicating surface-relief molds for x-ray reflection gratings [1,2], there are aspects of this process that lead to practical difficulties for realizing a state-of-the-art grating spectrometer with mass-produced reflection gratings. First, the rigidity of a thick silicon stamp requires a relatively high applied pressure for imprints of substantial area to achieve conformal contact between the stamp and the resist-coated blank substrate so that air pockets that give rise to unpatterned areas can be avoided [8]. High-pressure imprinting conditions can also lead to imperfections that arise from particulate contaminants, and potentially, damage to the stamp surface. Additionally, the pattern fidelity of a rigid stamp is gradually degraded as it makes repeated imprints such that in the case of the UV-NIL process described by Miles, et al. [2], a single stamp typically can produce tens of quality grating replicas [8]. As a result, the implementation of UV-NIL becomes impractical for future astronomical instruments such as The Rockets for Extended-source X-ray Spectroscopy [9] and The Off-plane Grating Rocket Experiment [10] that each require hundreds of replicated gratings and additionally, the X-ray Grating Spectrometer for the Lynx X-ray Observatory mission concept, which calls for the production of thousands of replicated gratings [3].
An alternative NIL technique for the mass production of x-ray reflection gratings is substrate-conformal imprint lithography (SCIL) [11–13]. Unlike standard NIL that uses a rigid stamp for direct imprinting, SCIL centers on the use of a low-cost, flexible stamp molded from a rigid master template. With stamp features carried in a modified form of polydimethylsiloxane (PDMS) that has an increased Young's modulus relative to that of standard PDMS, SCIL offers a way for nanoscale patterns to be imprinted in resist over large areas using a stamp that conforms locally to particulate contaminants and globally to any slight bow of the replica substrate, while avoiding damage to the master template by eliminating the need for an applied high pressure. Additionally, wave-like sequential imprinting, which is made possible by specialized pneumatic tooling coupled with the flexibility of the stamp, serves to eliminate large trapped air pockets [11,13]. Packaged equipment that automates spin-coating and this pneumatic-based SCIL wafer-scale imprint method for high-volume replication has been developed by Philips SCIL Nanoimprint Solutions [14]. This production platform, known as AutoSCIL, was first applied to x-ray reflection grating technology for the grating spectrometer on board the Water Recovery X-ray Rocket [12,15], which utilized 26 nickel-coated replicas of a 110 cm$^2$ master grating fabricated through crystallographic etching in a manner similar to the processing described by Miles, et al. [2].
Although SCIL stamps are compatible with many UV-curable, organic resists similar to those used for UV-NIL [8,16], high-volume production that relies on long stamp lifetime is best suited for use with a brand of inorganic resist that cures through a thermodynamically-driven, silica sol-gel process [11]. Synthesized by Philips SCIL Nanoimprint Solutions and known commercially as NanoGlass, this resist is stored as a $-20^{\circ }$C sol containing silicon precursors tetramethylorthosilicate (TMOS) and methyltrimethoxysilane (MTMS) suspended in a mixture of water and alcohols [13]. When a SCIL stamp is applied to a wafer freshly spin-coated with a film of resist, its features are filled through capillary action while the precursors react to form a gel, and ultimately a solid silica-like network, along with alcohols and water left as reaction products. This sol-gel process carries out over the course of 15 minutes at room temperature (or, a few minutes at $\sim$50$^{\circ }$C) while reaction products and trapped air diffuse into the stamp, leaving solidified resist molded to the inverse of the stamp topography after stamp separation. The imprinted resist initially has $\sim$70% the density of fused silica due to the presence of nanoscale pores and methyl groups bound to silicon that arise from the organically-modified MTMS precursor. However, the material can be densified for stability through a 15-minute bake at a temperature $T \gtrapprox 50^{\circ }$C to induce further cross-linking in the silica network, where $T \gtrapprox 450^{\circ }$C breaks the silicon-carbon bonds while inducing a moderate level of shrinkage and $T \gtrapprox 850^{\circ }$C gives rise to the density of maximally cross-linked fused silica [13].
Using the AutoSCIL production platform, a single stamp is capable of producing $\gtrapprox$700 imprints in sol-gel resist at a rate of 60, 150-mm-diameter wafers per hour, without pattern degradation [11–13]. While this makes SCIL an attractive method for mass producing x-ray reflection gratings, the thermally-induced densification of the silica sol-gel network causes resist shrinkage similar in effect to the UV-curing of organic resists in UV-NIL [8,17,18]. It has been previously reported that a $T \approx 200^{\circ }$C treatment of sol-gel resist leads to $\sim$15% volumetric shrinkage in imprinted laminar gratings while temperatures in excess of $1000^{\circ }$C result in a maximal, $\sim$30% shrinkage [13]. Based on these results, it is hypothesized that a low-temperature treatment should lead to $\sim$10% volumetric shrinkage in the resist, which is comparable to typical levels of resist shrinkage in UV-NIL [8]. To probe the impact that resist shrinkage in SCIL has on blaze angle in an x-ray reflection grating, this paper presents beamline diffraction-efficiency measurements of a gold-coated imprint that was cured at a temperature of $T \approx 90^{\circ }$C and compares them to theoretical models for diffraction efficiency that characterize the expected centroids for peak orders, as well as measurements of the corresponding silicon master grating in a similar configuration. These results corroborate atomic force microscopy (AFM) measurements of the tested gratings that, together, serve as experimental evidence for resist shrinkage affecting the blaze response of an x-ray reflection grating through a reduction in facet angle.
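Before turning to the measurements, it is useful to estimate how strongly such shrinkage could tilt the facets. The sketch below assumes, purely for illustration, that the lost volume is accommodated entirely by a reduction of the groove height while the period stays pinned to the substrate; this simple geometric model is our assumption and is not the analysis presented in Section 4.

```python
import numpy as np

def shrunken_blaze_angle(delta_deg, vol_shrinkage):
    """Blaze angle after resist shrinkage, assuming all shrinkage occurs along the
    substrate normal (groove height scales by 1 - vol_shrinkage, period unchanged)."""
    scale = 1.0 - vol_shrinkage
    return np.degrees(np.arctan(scale * np.tan(np.radians(delta_deg))))

delta0 = 29.5                                   # nominal facet angle of the silicon master [deg]
for s in (0.05, 0.10, 0.15):
    d_new = shrunken_blaze_angle(delta0, s)
    print(f"{s:.0%} volumetric shrinkage -> {d_new:.1f} deg (reduction {delta0 - d_new:.1f} deg)")
```

For the hypothesized ~10% shrinkage, this crude estimate gives a facet-angle reduction of roughly 2.5°, the same order as the ~2° reduction quoted in the abstract for the 90°C-cured replica.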
This paper is organized as follows: section 2 describes the fabrication of the gratings used for this study while section 3 presents their diffraction-efficiency measurements, which were gathered at beamline 6.3.2 of the Advanced Light Source (ALS) synchrotron facility at Lawrence Berkeley National Laboratory (LBNL) [19–21]. Section 4 then analyses these results and compares them to AFM measurements in order to demonstrate a non-negligible blaze angle reduction that is expected to occur in the replica based on an approximate model for resist shrinkage. Conclusions and a summary of this work are then provided in section 5. The SCIL processing described in this paper was performed by Philips SCIL Nanoimprint Solutions using a master grating template fabricated at the Nanofabrication Laboratory of the Pennsylvania State University (PSU) Materials Research Institute [22]. All field-emission scanning electron microscopy (FESEM) was carried out with a Zeiss Leo 1530 system at the PSU Nanofabrication Laboratory while all AFM was carried out using a Bruker Icon instrument equipped with a SCANASYST-AIR tip and PeakForce Tapping$^{\textrm {TM}}$ mode at the PSU Materials Characterization Laboratory.
2. Grating Fabrication by SCIL
The master grating template chosen for this study was originally used as a direct stamp for the UV-NIL processing described by Miles, et al. [2]. This 75 mm by 96 mm (72 cm$^2$) grating was fabricated through a multi-step process centering on anisotropic wet etching in a $\langle 311 \rangle$-oriented, 500-$\mu$m-thick, 150-mm-diameter silicon wafer using potassium hydroxide (KOH). As described by Miles, et al. [2], the groove layout was patterned as a variable-line-space profile using electron-beam lithography with the groove spacing, $d$, ranging nominally from 158.25 nm to 160 nm along the groove direction, which is aligned with the $\langle 110 \rangle$ direction in the $\{ 311 \}$ plane of the wafer surface. This layout was then transferred by reactive ion etch into a thin film of stoichiometric silicon nitride (Si$_3$N$_4$) formed by low-pressure chemical vapor deposition before the native silicon dioxide (SiO$_2$) on the exposed surface of the silicon wafer was removed with a buffered oxide etch. Next, a timed, room-temperature KOH etch was carried out to generate an asymmetric, sawtooth-like structure defined by exposed $\{ 111 \}$ planes that form an angle $\theta \equiv \arccos \left ( 1/3 \right ) \approx 70.5^{\circ }$ at the bottom of each groove, as well as $\sim$30-nm-wide flat-tops that exist beneath the Si$_3$N$_4$ hard mask. Due to the $\langle 311 \rangle$ surface orientation of the silicon wafer, the exposed $\{ 111 \}$ planes define nominal facet angles of $\delta = 29.5^{\circ }$ and $180^{\circ } - \theta - \delta \approx 80^{\circ }$. A cross-section image of the grating following the removal of Si$_3$N$_4$ using hydrofluoric acid is shown under FESEM in Fig. 2.
Fig. 2. Cross-section FESEM image of the silicon master used for SCIL stamp construction, which was originally used as a direct stamp for UV-NIL [2].
Prior to constructing the composite stamp used for imprint production, the silicon master was cleaned in a heated bath of Nano-Strip$^{\textrm {TM}}$ (VWR Int.), which consists primarily of sulfuric acid, and then by oxygen plasma before being surface treated for anti-stiction with a self-assembled monolayer of 1,1,2,2H-perfluorodecyltrichlorosilane (FDTS) [23] achieved through a 50$^{\circ }$C molecular vapor deposition (MVD) process. As described by Verschuuren, et al. [11] and illustrated schematically in Fig. 3(a), a standard SCIL stamp consists primarily of two components that are supported by a flexible sheet of glass with a thickness of about 200 $\mu$m: a $\sim$50-$\mu$m-thick layer of modified PDMS that carries the inverse topography of the silicon master, and an underlying, $\gtrapprox$0.5-mm-thick layer of standard, soft PDMS that attaches to the glass sheet by application of an adhesion promoter. A rubber gasket can then be glued to the outer perimeter of the square glass sheet for use with the pneumatic-based SCIL wafer-scale imprint method to produce imprints with topographies that resemble that of the silicon master. However, in an effort to produce imprints that emulate the UV-NIL replica described by Miles, et al. [2], which was fabricated using the silicon master as a direct stamp, this process was modified to realize a stamp with an inverted topography, as in Fig. 3(b), so as to allow the production of imprints with sharp apexes and flat portions at the bottom of each groove [12].
Fig. 3. Schematic for SCIL composite stamps of two varieties: a) an initial stamp featuring an inverted topography molded directly from the silicon master shown in Fig. 2 and b) a secondary stamp featuring a topography similar to the silicon master, which was molded using the first stamp as a master template. In either case, grating grooves are carried in a layer of X-PDMS tens of microns thick that sits on a 200-mm-diameter, flexible glass sheet buffered by a $\gtrapprox$0.5-mm-thick layer of soft PDMS. A rubber gasket can be attached for use with the pneumatic-based SCIL wafer-scale imprint method. This illustration neglects slight rounding that can occur in sharp corners under the influence of surface tension in X-PDMS.
The variety of modified PDMS used for this study was X-PDMS version 3, (Philips SCIL Nanoimprint Solutions), which was dispensed over the surface of the MVD-treated silicon master and then solidified through two rounds of spin-coating and baking steps using primary and accompanying components of the material. First, after the silicon master was cleaned again using deionized water and IPA, $\sim$3 g of the primary component was dispensed over the wafer through a short, 2 krpm spin-coat process using a low spin acceleration, leaving a layer tens of microns thick. This was followed immediately by a 50$^{\circ }$C hotplate bake for 3 minutes and a room-temperature cool-down of 10 minutes to leave the material in a tacky state. Next, $\sim$3 g of the accompanying component was spin-coated over this layer in a similar way before the wafer was baked by 70$^{\circ }$C hotplate for 10 minutes to form an intermediate layer also tens of microns thick. The doubly-coated silicon master was then oven-baked at 75$^{\circ }$C for 20 hours to form a $\sim$50-$\mu$m-thick layer of cured X-PDMS with a Young's modulus on the order of several tens of megapascals. In principle, this level of stiffness is sufficient for the stamp to carry grating grooves with $d \lessapprox 160$ nm without pattern distortion or feature collapse [11,13].
Using the SCIL Stamp Making Tool (SMT) built by Philips SCIL Nanoimprint Solutions, the initial, non-inverted stamp was formed by curing soft, Sylgard 184 PDMS (Dow, Inc.) between the X-PDMS layer and a 200-$\mu$m-thick sheet of D 263 glass (Schott AG), cut into a 200-mm-diameter circle. Consisting primarily of two, opposite-facing vacuum chucks heated to 50$^{\circ }$C with surfaces flat to $\lessapprox$10 $\mu$m peak-to-valley, this tool was used to spread $\sim$12 g of degassed PDMS evenly over the surface of the X-PDMS-coated silicon master. With the silicon master secured to the bottom chuck and the glass sheet secured to the top chuck, the two components were carefully brought into contact to spread the PDMS to a uniform thickness of $\gtrapprox 0.5$ mm using micrometer spindles, while ensuring that the two surfaces were parallel to within 20 $\mu$m. These materials were baked in this configuration at 50$^{\circ }$C until the PDMS was cured so that the stamp could be carefully separated from the silicon master. Using this initial stamp as a master template, the secondary, inverted stamp was constructed on a square sheet of glass through steps identical to those outlined above. This processing was enabled by the initial stamp being constructed on a round sheet of glass, which allowed it to be spin-coated with X-PDMS and subsequently cured using the same processing steps outlined above for the silicon master.
Several blazed grating molds were imprinted by hand into $\sim$100-nm-thick films of NanoGlass T1100 sol-gel resist spin-coated on 1-mm-thick, 150-mm-diameter silicon wafers using the inverted SCIL stamp just described. Although the pneumatic-based SCIL wafer-scale imprint method is best equipped for minimizing pattern distortion over 150-mm-diameter wafers, imprinting by hand is sufficient for producing a small number of grating molds suitable for the diffraction-efficiency testing described in section 3, which depends primarily on the groove facet shape over a local area defined by the projected size of the monochromatic beam at the ALS. With imprinting taking place at room temperature, 15 minutes of stamp-resist contact was allotted for the sol-gel reaction to proceed in each imprint. Each wafer was baked on a hotplate at 90$^{\circ }$C for 15 minutes following stamp separation to densify the imprinted material to a small degree, thereby inducing resist shrinkage. An FESEM cross-section of a replica produced in this way is shown in Fig. 4, where grating grooves are seen imprinted over a residual layer of resist a few tens of nanometers thick.
Fig. 4. Cross-section FESEM image of a grating imprint with a groove spacing of $d \lessapprox 160$ nm in $\sim$100-nm-thick sol-gel resist coated on a silicon wafer.
3. Beamline Experiments
Previous test campaigns have demonstrated that reflection gratings operated in an extreme off-plane mount can be measured for soft x-ray diffraction efficiency using a beamline facility suitable for short-wavelength reflectometry [2,6,24,25]. The experiments described here took place at beamline 6.3.2 of the ALS, which provides a highly-coherent beam of radiation tunable over extreme UV and soft x-ray wavelengths that strikes a stage-mounted optic [19–21]. At a distance $L \approx 235$ mm away from the point of incidence on the grating, a photodiode detector attached to staging can be used to measure the intensity of propagating orders, which are spaced along the dispersion direction by a distance $\lambda L / d$ as illustrated in Fig. 1. Absolute diffraction efficiency in the $n^{\textrm {th}}$ propagating order is measured through $\mathcal {E}_n \equiv \mathcal {I}_n / \mathcal {I}_{\textrm {inc}}$, where $\mathcal {I}_n$ and $\mathcal {I}_{\textrm {inc}}$ are noise-subtracted intensity measurements of the $n^{\textrm {th}}$ diffracted beam and the incident beam, respectively, which can be gathered for each order using a vertical, 0.5-mm-wide slit to mask the detector [2,6]. Although this beam is s-polarized to a high degree, x-ray reflection gratings have been demonstrated experimentally to have a polarization-insensitive efficiency response for extreme off-plane geometries [25].
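As a concrete illustration of these relations, the short sketch below computes where propagating orders land along the dispersion direction and forms the absolute-efficiency ratio from a pair of intensity readings. The numerical values (throw, wavelength, photodiode currents) are illustrative placeholders, not beamline measurements.

import numpy as np

# Sketch of the order-spacing and absolute-efficiency relations quoted above.
# All numbers here are placeholders, not measured data.
L_throw = 235.0          # mm, grating-to-detector throw
d = 159.125e-6           # mm, groove spacing (159.125 nm)
wavelength = 2.0e-6      # mm, i.e. a 2 nm (~620 eV) photon

# propagating orders are spaced by lambda*L/d along the dispersion direction
spacing = wavelength * L_throw / d
for n in range(-1, 4):
    print(f"order n={n:+d} sits {n*spacing:.3f} mm from 0th order")

# absolute efficiency of order n from noise-subtracted intensities
I_n, I_inc = 3.2e-7, 7.5e-7   # photodiode readings (arbitrary units)
print(f"E_n = {I_n / I_inc:.2%}")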
With the SCIL imprint described in section 2 emulating the UV-NIL replica tested by Miles, et al. [2], diffraction-efficiency testing was carried out in a nearly identical geometry where the half-cone opening angle is $\gamma \approx 1.7^{\circ }$ while the azimuthal incidence angle, $\alpha$, is close to the nominal blaze angle of $\delta = 29.5^{\circ }$ in a near-Littrow configuration. The silicon master was tested without a reflective overcoat whereas the inverted SCIL replica was coated with a thin layer of gold to avoid modification of the sol-gel resist by the incident beam, and moreover, to provide a surface with tabulated data for index of refraction and high reflectivity at a $1.7^{\circ }$ grazing-incidence angle. This layer was sputter-coated on the replica in an identical fashion to Miles, et al. [2]: 5 nm of chromium was deposited for adhesion followed immediately by 15 nm of gold, without breaking vacuum. Because this thickness is several times larger than the $1/\mathrm {e}$ penetration depth in gold at grazing-incidence angles, it is justified to treat this top film as a thick slab in this context [6,26].
Following the test procedure outlined by Miles, et al. [2], near-Littrow configurations with $\gamma \approx 1.7^{\circ }$ for both the silicon master and the coated SCIL replica were established at the beamline using principal-axis rotations and in-situ analysis of the diffracted arc. The system throw, $L$, was experimentally determined separately for each installed grating by comparing the known detector length to the apparent angular size of the detector as measured by a goniometric scan of the beam at the location of $0^{\textrm {th}}$ order. The arc radius, $r$, was then determined by measuring the locations of propagating orders over a few photon energies and then fitting the data to a half-circle so that $\gamma$ could be inferred from $\sin \left ( \gamma \right ) = r / L$ [2,6]. Using the $x$-distance between the direct beam and the center of the fitted arc, $\Delta x_{\textrm {dir}}$, $\alpha$ was measured using $\sin \left ( \alpha \right ) = \Delta x_{\textrm {dir}} / r$ before similar calculations described by McCoy, et al. [6] were carried out to cross-check measured principal-axis angles with $\gamma$ and $\alpha$. These measured parameters are listed in Table 1 for both the silicon master and the coated SCIL replica. By the scalar equation for blaze wavelength
(2)$$\lambda_b = \frac{d \sin \left( \gamma \right)}{n} \left[\sin \left( \alpha \right) + \sin \left( 2 \delta - \alpha \right) \right] \approx \frac{2 d \gamma \sin \left( \delta \right)}{n} \left( 1 - \frac{|\delta - \alpha|^2}{2} \right),$$
where radiation is preferentially diffracted to an angle $\beta = 2 \delta - \alpha$ in Eq. (1), $\mathcal {E}_n$ for propagating orders with $n=2$ and $n=3$ are expected to maximize in the spectral range 440 eV to 900 eV for a grating with $d \lessapprox 160$ nm in a near-Littrow configuration with $\gamma \approx 1.7^{\circ }$. The approximate expression for $\lambda _b$, which is valid for small values of $\gamma$ and $|\delta - \alpha |$, suggests that the locations of peak orders are most sensitive to $\delta$ and $\gamma$ in an extreme off-plane mount rather than $\alpha$ provided that $|\delta - \alpha | \ll 1$ radian, which describes a near-Littrow configuration. With both gratings loosely satisfying this condition for $\alpha$, the grating geometries listed in Table 1 were employed for testing.
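The arc-fitting relations and Eq. (2) can be evaluated numerically as below, using the Table 1 values for the silicon master; the conversion from blaze wavelength to photon energy uses E[eV] = 1239.84/λ[nm]. This is only a consistency sketch of the geometry quoted above, not part of the beamline analysis software.

import numpy as np

# Arc geometry and blaze wavelength for the silicon master (values from Table 1).
L_throw, r, dx_dir = 234.7, 6.98, 2.80        # mm
gamma = np.arcsin(r / L_throw)                # half-cone opening angle
alpha = np.arcsin(dx_dir / r)                 # azimuthal incidence angle
print(np.degrees(gamma), np.degrees(alpha))   # ~1.70 deg, ~23.7 deg

d = 159.125                                   # nm
delta = np.radians(29.5)
for n in (2, 3):
    lam_b = d * np.sin(gamma) * (np.sin(alpha) + np.sin(2*delta - alpha)) / n   # Eq. (2)
    print(n, round(lam_b, 3), round(1239.84 / lam_b))   # blaze wavelength (nm), photon energy (eV)

For n = 2 and n = 3 this places the blaze peaks near 535 eV and 800 eV, consistent with the 440 eV to 900 eV range stated above.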
Table 1. Measured diffracted arc parameters for the silicon master and the coated SCIL replica in their respective test configurations.

measured parameter | silicon master | coated SCIL replica
$L$ | 234.7 ± 3.0 mm | 235.6 ± 3.0 mm
$r$ | 6.98 ± 0.08 mm | 7.20 ± 0.14 mm
$\Delta x_{\textrm {dir}}$ | 2.80 ± 0.03 mm | 3.68 ± 0.07 mm
$\gamma$ | 1.71 ± 0.03° | 1.75 ± 0.04°
$\alpha$ | 23.7 ± 0.7° | 30.7 ± 0.9°
Experimental data for $\mathcal {E}_n$ were gathered as a function of photon energy over the range 440 eV to 900 eV in the test configurations summarized in Table 1. Following Miles, et al. [2], $\mathcal {I}_n$ for each photon energy was measured using the masked photodiode by scanning the diffracted arc horizontally, in 50 $\mu$m steps, and then determining the maximum of each diffracted order; $\mathcal {I}_{\textrm {inc}}$ for each photon energy was measured in an analogous way, with the grating moved out of the path of the beam. Through $\mathcal {I}_n / \mathcal {I}_{\textrm {inc}}$, $\mathcal {E}_n$ was measured every 20 eV between 440 eV and 900 eV for bright propagating orders that are characteristic of each grating's blaze response. These results for both the silicon master and the SCIL replica are plotted in Fig. 5 and compared to Fresnel reflectivity for silicon with 3 nm of native SiO$_2$ and a thick slab of gold, respectively. In an identical fashion to McCoy, et al. [6], Fresnel reflectivity was treated using standard-density index of refraction data from the LBNL Center for X-ray Optics on-line database [27] with a grazing-incidence angle $\zeta$ determined from $\sin \left ( \zeta \right ) = \sin \left ( \gamma \right ) \cos \left ( \delta - \alpha \right )$, using measured values for $\gamma$, $\alpha$ and $\delta$ (or $\delta '$). Peak-order absolute efficiency ranges from 40-45% for both gratings or, equivalently, 65-70% measured relative to the reflectivity in each case, which is comparable to the results reported by Miles, et al. [2] for the corresponding UV-NIL replica.
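The division of measured efficiency by Fresnel reflectivity at the facet incidence angle can be sketched as follows. The reflectivity itself would come from the CXRO optical-constant data [27], so it is left as an input here; the geometry uses the silicon-master values from Table 1, and the efficiency and reflectivity numbers are placeholders chosen only to illustrate the ratio.

import numpy as np

# Facet grazing-incidence angle and efficiency measured relative to reflectivity.
gamma, alpha, delta = np.radians(1.71), np.radians(23.7), np.radians(29.5)
zeta = np.arcsin(np.sin(gamma) * np.cos(delta - alpha))
print(f"facet grazing angle ~ {np.degrees(zeta):.2f} deg")

E_abs = 0.43        # placeholder peak-order absolute efficiency
R_fresnel = 0.62    # placeholder; in practice taken from CXRO data at angle zeta
print(f"relative efficiency ~ {E_abs / R_fresnel:.0%}")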
Fig. 5. Measured diffraction-efficiency data for the silicon master (left) and the gold-coated SCIL replica (right) in geometrical configurations described by the parameters listed in Table 1 compared to Fresnel reflectivity at the facet incidence angle in each case.
4. Analysis and Discussion
The soft x-ray diffraction-efficiency measurements presented in section 3 demonstrate that both the silicon master and the SCIL replica exhibit a significant blaze response in a near-Littrow, grazing-incidence configuration. Using these data, the following analysis seeks to constrain the impact of resist shrinkage on blaze angle in the SCIL replica by comparing measured, single-order efficiency curves to those predicted by theoretical models for diffraction efficiency. These models were produced with the aid of the software package PCGrate-SX version 6.1, which solves the Helmholtz equation through the integral method for a custom grating boundary and incidence angles input by the user [28,29]. Based on the findings of Marlowe, et al. [25], which verify that x-ray reflection gratings are polarization-insensitive for extreme off-plane geometries, the incident radiation is treated as a plane wave with transverse-electric polarization relative to the groove direction; the direction of the wave vector, as illustrated in Fig. 1, is defined by the angles $\gamma$ and $\alpha$ listed in Table 1. The choice of grating boundary for the silicon master and the SCIL replica follows from the considerations presented in subsections 4.1 and 4.2, respectively, along with AFM measurements of the tested gratings. In each case, the grating boundary is taken to be perfectly conducting in PCGrate-SX while the overall response is modulated by Fresnel reflectivity to yield a predicted result for absolute diffraction efficiency. Considering that the $\lessapprox$0.5-mm cross-sectional diameter of the beam projects to tens of millimeters at grazing incidence, and that the point of incidence is the central grooved region of each grating, the groove spacing in each case is taken to be $d=159.125$ nm, which is the nominal average of the variable-line-space profile described in section 2.
4.1 Silicon Master
As a point of reference for examining resist shrinkage in the SCIL replica, the diffraction-efficiency results for the silicon master from the left panel of Fig. 5 are compared to various PCGrate-SX models that are based on the wet-etched grating topography described in section 2. Illustrated in Fig. 6 and shown under FESEM in Fig. 2, the cross-sectional shape of the grating profile resembles a series of acute trapezoids with flat tops of width $w$ that each protrude a distance $\Delta h$ of a few nanometers so that the groove depth, $h$, is given approximately by
(3)$$h \approx \frac{d - w}{\cot \left( \delta \right) - \cot \left( \theta + \delta \right)} + \Delta h$$
with $\theta \approx 70.5^{\circ }$ defined by the intersection of exposed $\{ 111 \}$ planes and $\delta$ as the active blaze angle. Although the depth of these sharp grating grooves could not be verified by AFM due to the moderate aspect ratio of the scanning-probe tip, it is estimated that this quantity falls in the range $h \approx 65-70$ nm based on the expected value of $\delta = 29.5^\circ$ for a $\langle 311 \rangle$-oriented silicon surface. Under AFM, facet surface roughness, $\sigma$, measures $\lessapprox 0.4$ nm RMS while the average of 30 blaze angle measurements over a 0.5 $\mu$m by 1 $\mu$m area yields $\delta = 30.0 \pm 0.8^{\circ }$, where the uncertainty is one standard deviation. Although these AFM data were gathered with vertical measurements calibrated to a 180-nm standard at the PSU Materials Characterization Laboratory, this blaze angle measurement is limited in its accuracy due to a relatively poor lateral resolution on the order of a few nanometers. The measurement is, however, consistent with the nominal value of $\delta = 29.5^\circ$ and is considered a reasonable estimation for the blaze angle of the silicon master.
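Plugging the adopted numbers into Eq. (3) reproduces the quoted depth; the short evaluation below uses $w = 35$ nm and $\Delta h = 3$ nm (the values used for the model in Fig. 7) purely as a consistency check.

import numpy as np

# Numerical check of Eq. (3) for the silicon master groove depth.
d, w, dh = 159.125, 35.0, 3.0            # nm
delta, theta = np.radians(29.5), np.radians(70.5)
h = (d - w) / (1/np.tan(delta) - 1/np.tan(theta + delta)) + dh
print(f"h ~ {h:.1f} nm")                 # ~67 nm, within the quoted 65-70 nm range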
Fig. 6. Schematic illustration of the silicon master cross-section with $\delta = 29.5^{\circ }$ as the blaze angle and $\theta \approx 70.5^{\circ }$ defined by the crystal structure of silicon. At a groove spacing of $d \lessapprox 160$ nm, the flat-top regions have widths $w \gtrapprox 30$ nm as a result of the etch undercut while the groove depth is $h \approx 65-70$ nm by Eq. (3). Indicated by the circle, the indented portion of the etched topography cannot be described with a functional form for diffraction-efficiency analysis.
From the above considerations, the grating boundary used for PCGrate-SX modeling was defined using the trapezoid-like groove shape shown in the inset of Fig. 7, with nominal sawtooth angles of $\delta = 29.5^{\circ }$ and $80^{\circ }$, a flat-top width of $w = 35$ nm, a nub-protrusion height of $\Delta h = 3$ nm and a groove depth of $h \approx 67$ nm that follows from Eq. (3). In both panels of Fig. 7, the model that utilizes the nominal values $\gamma = 1.71^{\circ }$ and $\alpha = 23.7^{\circ }$ is plotted using dotted lines for each diffracted order shown, with uncertainties listed in Table 1 represented as shaded swaths. These results show that the constrained geometry leads to the production of models that roughly match the experimental data. Mismatches between the models and the data may be in part due to the detailed shape of nubs atop of each groove, which cannot be described with a functional form as illustrated in Fig. 6. Although this limits the accuracy of the PCGrate-SX models utilized, the model uncertainty swaths indicate that $\gamma$ serves to shift the centroids of peak orders (i.e. the photon energy equivalent to $\lambda _b$) while $\alpha$ has a small impact as expected from Eq. (2).
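One way to generate such a boundary for an integral-method solver is to tabulate (x, z) points over a single period; the sketch below builds a simplified realization of the trapezoid-like profile from the quoted parameters, with the nub taken as a simple raised flat top. The point density and the input format expected by PCGrate-SX are not reproduced here, so this only illustrates the geometry.

import numpy as np

# One period of the trapezoid-like groove boundary used for modeling:
# shallow facet at delta, flat top of width w raised by dh, steep side at ~80 deg.
d, w, dh = 159.125, 35.0, 3.0                        # nm
delta, steep = np.radians(29.5), np.radians(80.0)    # sawtooth angles
h0 = (d - w) / (1/np.tan(delta) + 1/np.tan(steep))   # depth without the nub
run_up, run_down = h0 / np.tan(delta), h0 / np.tan(steep)

x = np.array([0.0, run_up, run_up, run_up + w, run_up + w, d])
z = np.array([0.0, h0, h0 + dh, h0 + dh, h0, 0.0])
for xi, zi in zip(x, z):
    print(f"{xi:8.2f}  {zi:7.2f}")        # run_up + w + run_down ~ d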
Fig. 7. Measured diffraction-efficiency data for the silicon master from the left panel of Fig. 5 compared to PCGrate-SX models that assume a groove profile similar to the wet-etched topography described in section 2, with sawtooth angles $\delta =29.5^{\circ }$ and $180^{\circ } - \theta - \delta \approx 80^{\circ }$, a flat-top width of $w = 35$ nm, a nub-protrusion height of $\Delta h = 3$ nm and an overall groove depth of $h \approx 67$ nm by Eq. (3). In the left and right panels, respectively, $\gamma$ and $\alpha$ are allowed to vary at levels of $\pm 0.03^{\circ }$ and $\pm 0.7^{\circ }$, which are represented by shaded uncertainty swaths.
With the centroids of the efficiency curves shown in Fig. 7 depending directly on the blaze angle by Eq. (2), a series of models with $28^{\circ } \leq \delta \leq 31^{\circ }$ in steps of $1^{\circ }$ are compared to $n=2$ and $n=3$ absolute-efficiency data in Fig. 8. In each of these models, $w = 35$ nm and $\Delta h = 3$ nm are fixed while the sawtooth angles vary as $\delta$ and $180^{\circ } - \theta - \delta$ with the overall groove depth, $h$, following from Eq. (3). The modeled efficiency in each case, which assumes a perfectly smooth grating boundary due to the small RMS facet roughness measured by AFM, was normalized to match the peak efficiency of the measured data so that the peak-centroid positions could be compared. Dotted lines represent the nominal model with $\gamma = 1.71^{\circ }$ and $\alpha = 23.7^{\circ }$ while the shaded swaths show the $\pm 0.03^{\circ }$ uncertainty in $\gamma$. These results support the expectation that the blaze angle of the silicon master is in the neighborhood of the nominal value of $\delta = 29.5^{\circ }$ as well as the AFM-measured value of $\delta = 30.0 \pm 0.8^{\circ }$.
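The peak-normalization and centroid comparison can be sketched as below; the two curves are stand-ins for a measured single-order efficiency curve and a PCGrate-SX model, since the actual arrays are not reproduced here.

import numpy as np

# Normalize a modeled efficiency curve to the measured peak and compare centroids.
E = np.linspace(440, 900, 24)                                # photon energy grid, eV
data  = 0.42 * np.exp(-0.5 * ((E - 640) / 90)**2)            # stand-in for measured E_n
model = 0.55 * np.exp(-0.5 * ((E - 655) / 95)**2)            # stand-in for a PCGrate-SX curve

model_scaled = model * data.max() / model.max()              # peak-normalized model

def centroid(y):
    return float(np.sum(E * y) / np.sum(y))                  # intensity-weighted centroid

print(f"data centroid {centroid(data):.0f} eV, model centroid {centroid(model_scaled):.0f} eV")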
Fig. 8. Measured diffraction-efficiency data in orders $n=2$ and $n=3$ for the silicon master compared to PCGrate-SX models with $28^{\circ } \leq \delta \leq 31^{\circ }$ that are normalized to match the data in terms of peak efficiency while the shaded swaths represent the $\pm 0.03^{\circ }$ uncertainty in $\gamma$. These results indicate that $\delta$ for the silicon master is close to the nominal value of $\delta = 29.5^{\circ }$.
4.2 SCIL Replica
In a similar manner to Fig. 8 for the silicon master, the experimental data from the right panel of Fig. 5 are compared to several PCGrate-SX models with varying blaze angle, $\delta '$, in order to evaluate resist shrinkage in the SCIL replica. Such a grating imprint in sol-gel resist produced using the methodology described in section 2 is shown under AFM in the top panel of Fig. 9, while an identical grating following the sputter deposition described in section 3 is shown in the bottom panel. The average blaze angle from 30 measurements over these 0.5 $\mu$m by 1 $\mu$m areas measures $\delta ' = 27.9 \pm 0.7^{\circ }$ for the bare imprint and $\delta ' = 28.4 \pm 0.8^{\circ }$ following the coating. These measurements, which are consistent with one another to one standard deviation, give $\delta ' / \delta = 0.93 \pm 0.03$ and $\delta ' / \delta = 0.95 \pm 0.04$ as a reduction in blaze angle relative to $\delta = 30.0 \pm 0.8^{\circ }$ measured for the silicon master. The statistical consistency between these two measurements suggests that coating effects had a minimal impact on the blaze angle and that $\delta ' / \delta$ constrained from diffraction-efficiency testing results is expected to be indicative of resist shrinkage alone.
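The quoted ratios follow from standard error propagation for the ratio of two independent measurements, as in the short sketch below using the AFM values stated above.

import numpy as np

# Blaze-angle ratio delta'/delta with propagated one-standard-deviation uncertainty.
def ratio(a, da, b, db):
    r = a / b
    return r, r * np.hypot(da / a, db / b)

print(ratio(27.9, 0.7, 30.0, 0.8))   # bare imprint:   ~0.93 +/- 0.03
print(ratio(28.4, 0.8, 30.0, 0.8))   # coated imprint: ~0.95 +/- 0.04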
Fig. 9. AFM images of a grating imprint with a groove spacing of $d \lessapprox 160$ nm in sol-gel resist, as in Fig. 4. The bare imprint (top) has facet roughness and average blaze angle measuring $\sigma \approx 0.6$ nm RMS and $\delta ' = 27.9 \pm 0.7^{\circ }$, respectively. The sputter-coated imprint (bottom) yields $\sigma \approx 0.8$ nm RMS while the average blaze angle is statistically consistent with $\delta ' = 28.4 \pm 0.8^{\circ }$.
Unlike the silicon master profile illustrated in Fig. 6, the inverted topography of the SCIL replica features a relatively sharp apex and a flat-bottom portion of width $w$, which is largely shadowed in a near-Littrow configuration. With PCGrate-SX simulations showing that only the active blaze angle significantly affects the results in terms of peak-order centroids in such a geometry, the groove profile for diffraction-efficiency modeling is treated as an ideal sawtooth with a sharp, $90^{\circ }$ apex angle and no flat-bottom portion, which yields a groove depth of $h \approx 66$ nm. As in Fig. 8 for the silicon master, these models assume perfectly smooth surfaces and are normalized to the data in terms of peak efficiency in order to compare peak centroids. The outcome is presented in Fig. 10 where the diffraction-efficiency data for the SCIL replica in orders $n=2$ and $n=3$ are each plotted against five PCGrate-SX models with $26^{\circ } \leq \delta ' \leq 30^{\circ }$ in steps of $1^{\circ }$, all with $\alpha = 30.7^{\circ }$ and $\gamma = 1.75 \pm 0.04^{\circ }$ from Table 1, with the latter represented by uncertainty swaths. It is apparent from Fig. 10 that the data are most consistent with the $\delta ' = 28^{\circ }$ model, as expected from AFM measurements. In order to interpret this result in the context of SCIL processing, $\delta ' \approx 28^{\circ }$ is compared to an approximate model for resist shrinkage that is considered in the following discussion.
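For an ideal sawtooth with a $90^{\circ }$ apex and no flat-bottom portion, right-triangle geometry gives a groove depth $h = d \sin \delta ' \cos \delta '$; this relation is not stated explicitly above, but it reproduces the quoted depth of $h \approx 66$ nm for $\delta '$ near $28^{\circ }$, as checked below.

import numpy as np

# Depth of an ideal 90-degree-apex sawtooth: the blaze facet at delta' and the
# opposing facet at 90 - delta' span one period d, giving h = d*sin(delta')*cos(delta').
d = 159.125                                  # nm
for dp in (26, 28, 30):
    h = d * np.sin(np.radians(dp)) * np.cos(np.radians(dp))
    print(dp, round(h, 1))                   # ~62.7, ~66.0, ~68.9 nm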
Fig. 10. Measured diffraction-efficiency data in orders $n=2$ and $n=3$ for the coated SCIL replica compared to PCGrate-SX models that assume an ideal sawtooth with blaze angles ranging between $26^{\circ } \leq \delta ' \leq 30^{\circ }$, which have been normalized to match the data. These results show that the measured data most closely match a grating with $\delta ' = 28^{\circ }$.
To formulate a simple model for resist shrinkage, it is first assumed that shrinkage effects in the SCIL stamp can be neglected, which is expected due to the high intrinsic cross-link density of X-PDMS [13]. The profile of the imprinted blazed grating, without resist shrinkage, is considered to be composed of a series of groove facets with spacing $d \lessapprox 160$ nm that resembles the inverse of the silicon master described in section 2. These facets are separated from one another by the distance $w \gtrapprox 30$ nm defined in Fig. 6 so that the base of each groove facet has a width $b \approx d - w \lessapprox 130$ nm, which is assumed to be a small enough size scale for material relaxation in sol-gel resist. As illustrated in Fig. 11(a), the shallow side of the facet is assigned the nominal value of $\delta = 29.5^{\circ }$ while the effect of the protruding nubs on the silicon master is ignored for simplicity so that the groove depth with $\Delta h = 0$ is $h \lessapprox 67$ nm by Eq. (3). Simulations of resist shrinkage in UV-NIL based on continuum mechanics of elastic media indicate that on average, a volume element $V$ shrinks to $V' = V \left ( 1 - \chi \right )$ with $\chi$ as the fractional loss in volume [17,18]. In this regard, the residual layer of resist that exists beneath the groove facets is expected to experience reduction in thickness alone. Stress-induced substrate deformation from this laterally-constrained shrinkage is considered to be negligible owing to the 1-mm thickness of the silicon wafer used for the grating replica.
Fig. 11. Approximate model for resist shrinkage with $y=0$ representing a fixed boundary defined by the residual layer. Left: a) The original facet shape has a blaze angle $\delta = 29.5^{\circ }$, an apex angle $\theta \approx 70.5^{\circ }$ and an area $A$. b) A shrunken facet is generated by dividing the original facet shape into 1000 layers along the $y$-direction and then requiring that the area of each is reduced to $A'= 0.9 A$ with $\chi = 0.1$ while the ratio between lateral and vertical shrinkage varies with $y$ according to Eq. (4) for $\ell _e / h = 0.05$. Right: Reduced blaze angle predicted by model relative to the initial blaze angle, $\delta ' / \delta$, as a function of $\ell _e / h$ for various values of $\chi$. The marked star indicates $\chi = 0.1$ and $\ell _e / h = 0.05$ used for the illustrated model.
The residual layer effectively serves as a fixed boundary for the shrinking groove facets, which retain their original groove spacing, $d$, throughout the process of resist shrinkage [18]. As such, shrinkage in each of these groove facets is assumed to manifest as a reduction in cross-sectional area due to the inability of the material network to relax over large groove lengths. Without knowledge of the elastic properties of sol-gel resist or the details of its thermodynamical shrinkage mechanism, the simple resist-shrinkage model presented here stems from the assumption that throughout each imprinted groove facet, the reduction in cross-sectional area from $A = b h / 2$ to $A' = A \left ( 1 - \chi \right )$ is uniform in magnitude while the ratio of lateral shrinkage to vertical shrinkage, $S$, varies spatially according to
(4)$$S = 1 - \mathrm{e}^{- y / \ell_e} \quad \textrm{for} \ \ 0 \leq y \leq h$$
with $\ell _e$ as an arbitrary $1 / \mathrm {e}$ length scale for $S$ approaching unity as $y$ increases toward $h$. By introducing $s_x = 1 - S f$ and $s_y = 1 - f$ as functions of position that describe shrinkage in the $x$ and $y$ directions shown in Fig. 11(a) and then requiring $s_x s_y = 1 - \chi$, it is found that
(5)$$f = \frac{1 + S - \sqrt{(1+S)^2 - 4 S \chi}}{2 S} \quad \textrm{for} \ \ 0 \leq S \leq 1$$
parameterizes $s_x$ and $s_y$. These expressions are incorporated into the resist-shrinkage model by first considering the original groove facet shape shown in Fig. 11(a) to be composed of 1000 rectangular layers, each with an identical, thin, vertical thickness. A shrunken facet profile is produced by requiring the area of each of these layers to be reduced according to $s_x$ and $s_y$ for specified values of $\chi$ and $\ell _e$.
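A minimal numerical sketch of Eqs. (4) and (5) is given below; it evaluates the layer-wise shrink factors for $\chi = 0.1$ and $\ell _e = 0.05 h$ and confirms the limiting behavior (vertical-only shrinkage at the fixed boundary, nearly isotropic shrinkage well above $\ell _e$). How the shrunken layers are restacked into the profile of Fig. 11(b), and how $\delta '$ is then read off, involve choices not fully specified in the text, so those steps are omitted here.

import numpy as np

# Layer-wise shrink factors from Eqs. (4)-(5) for chi = 0.1 and ell_e = 0.05*h.
chi, h = 0.10, 67.0                      # fractional volume loss; facet height in nm
ell_e = 0.05 * h
y = (np.arange(1000) + 0.5) * h / 1000   # mid-heights of the 1000 layers

S = 1.0 - np.exp(-y / ell_e)                                      # Eq. (4)
f = (1 + S - np.sqrt((1 + S)**2 - 4 * S * chi)) / (2 * S)         # Eq. (5)
s_x, s_y = 1 - S * f, 1 - f              # lateral and vertical shrink factors

assert np.allclose(s_x * s_y, 1 - chi)   # each layer loses the same fractional area
print(s_x[0], s_y[0])                    # ~1.00, ~0.90: vertical-only shrink at the base
print(s_x[-1], s_y[-1])                  # ~0.95, ~0.95: nearly isotropic near the apex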
Figure 11(b) shows a shrunken facet profile predicted for $\chi = 0.1$ and $\ell _e = 0.05 h$ where the blaze angle is reduced to $\delta ' \approx 0.93 \delta$ while the groove depth shrinks to $h' \approx 0.91 h$ as the apex angle widens with $\theta ' \approx 1.05 \theta$. Because the facet features curvature near its base and flattens to a linear slope as $y$ becomes larger than $\ell _e$, $\delta '$ is measured from the upper half of the facet, where $S \lessapprox 1$ for relatively small values of $\ell _e / h$. The quantity $\delta ' / \delta$ determined in this way is plotted as a function of $\ell _e / h$ for various values of $\chi$ in the right panel of Fig. 11, where the marked star indicates $\chi = 0.1$ and $\ell _e / h = 0.05$ for the illustrated model. Despite $\ell _e / h$ remaining poorly constrained without measurements for $h'/h$ and $\theta ' / \theta$, the comparison between the resist-shrinkage model just presented and $\delta ' / \delta \approx 0.93$ determined from diffraction-efficiency analysis along with AFM measurements supports the hypothesis stated in section 1 that the level of volumetric shrinkage for a 90$^{\circ }$C-treated sol-gel imprint is approximately 10%. Although this analysis does not tightly constrain $\delta '$, it does demonstrate that the SCIL replica functions as a blazed grating with a facet angle reduced by $\sim$2$^{\circ }$ relative to the silicon master, which has been shown to exhibit a blaze angle of $\delta \approx 30^{\circ }$, giving a value for $\delta ' / \delta$ that is consistent with a typical shrunken facet with $\chi \approx 0.1$.
5. Summary and Conclusions
This paper describes a SCIL process for patterning blazed grating surface-relief molds in NanoGlass T1100, a thermally-curable, silica sol-gel resist, and characterizes the impact of resist shrinkage induced by a $90^{\circ }$C post-imprint treatment through diffraction-efficiency testing in the soft x-ray regime, supported by AFM measurements of the blaze angle. An imprinted grating that features the inverse topography of the wet-etched silicon master template was sputter-coated with gold, using chromium as an adhesion layer, before being tested for diffraction efficiency in an extreme off-plane mount at beamline 6.3.2 of the ALS. By testing the silicon master in a similar configuration and comparing the results of both gratings to theoretical models for diffraction efficiency, it was found that the response of the coated SCIL replica is consistent with a reduced blaze angle of $\delta ' \approx 28^{\circ }$ whereas the silicon master yields diffraction-efficiency results characteristic of a nominal $\langle 311 \rangle$ blaze angle with $\delta \approx 30^{\circ }$. According to an approximate model formulated for resist shrinkage, this outcome supports the hypothesis that the replicated grating experienced volumetric shrinkage in the sol-gel resist on the level of 10%. The result serves as experimental evidence for sol-gel resist shrinkage impacting the performance of an x-ray reflection grating in terms of its ability to maximize diffraction efficiency for a specific diffracted angle. Monitoring this effect is particularly relevant for instrument development in astrophysical x-ray spectroscopy that relies on the production of large numbers of identical gratings, where resist shrinkage should be compensated for in the master grating to ensure that the replicas perform as expected [3,9,10]. Although the AutoSCIL production platform provides an avenue for high-volume production of grating imprints, sputter-coating is limited in its throughput, and moreover, the impact of ion bombardment on the sol-gel network has not been investigated. This motivates the pursuit of alternative deposition processes that are both capable of high throughput and compatible with sol-gel resist.
National Aeronautics and Space Administration (80NSSC17K0183, NNX16AP92H); U.S. Department of Energy (DE-AC02-05CH11231).
This research was supported by NASA Space Technology Research Fellowships and used resources of the Nanofabrication Laboratory and the Materials Characterization Laboratory at the Penn State Materials Research Institute, Philips SCIL Nanoimprint Solutions and beamline 6.3.2 of the Advanced Light Source, which is a DOE Office of Science User Facility under contract no. DE-AC02-05CH11231.
1. R. McEntaffer, C. DeRoo, T. Schultz, B. Gantner, J. Tutt, A. Holland, S. O'Dell, J. Gaskin, J. Kolodziejczak, W. W. Zhang, K.-W. Chan, M. Biskach, R. McClelland, D. Iazikov, X. Wang, and L. Koecher, "First results from a next-generation off-plane X-ray diffraction grating," Exp Astron 36(1-2), 389–405 (2013). [CrossRef]
2. D. M. Miles, J. A. McCoy, R. L. McEntaffer, C. M. Eichfeld, G. Lavallee, M. Labella, W. Drawl, B. Liu, C. T. DeRoo, and T. Steiner, "Fabrication and diffraction efficiency of a large-format, replicated x-ray reflection grating," The Astrophysical Journal 869(2), 95 (2018). [CrossRef]
3. R. McEntaffer, "Reflection grating concept for the Lynx X-Ray Grating Spectrograph," J. Astron. Telesc. Instrum. Syst. 5(02), 1 (2019). [CrossRef]
4. J. Haisma, M. Verheijen, K. van den Heuvel, and J. van den Berg, "Mold-assisted nanolithography: A process for reliable pattern replication," J. Vac. Sci. Technol., B: Microelectron. Process. Phenom. 14(6), 4124–4128 (1996). [CrossRef]
5. C.-H. Chang, "Fabrication of sawtooth diffraction gratings using nanoimprint lithography," J. Vac. Sci. Technol., B: Microelectron. Process. Phenom. 21(6), 2755 (2003). [CrossRef]
6. J. A. McCoy, R. L. McEntaffer, and D. M. Miles, "Extreme ultraviolet and soft x-ray diffraction efficiency of a blazed reflection grating fabricated by thermally activated selective topography equilibration," The Astrophysical Journal 891(2), 114 (2020). [CrossRef]
7. M. Neviere, D. Maystre, and W. R. Hunter, "On the use of classical and conical diffraction mountings for xuv gratings," J. Opt. Soc. Am. 68(8), 1106–1113 (1978). [CrossRef]
8. H. Schift and A. Kristensen, Nanoimprint Lithography – Patterning of Resists Using Molding (Springer Berlin Heidelberg, Berlin, Heidelberg, 2010), pp. 271–312.
9. D. M. Miles, R. M. McEntaffer, J. H. Tutt, T. Anderson, M. Weiss, L. Baker, J. Weston, B. O'Meara, R. C. McCurdy, B Myers, and F. Grisé, "An introduction to the Rockets for Extended-source X-ray Spectroscopy," (2019), p. 111180B.
10. J. H. Tutt, R. L. McEntaffer, B. Donovan, T. B. Schultz, M. P. Biskach, K.-W. Chan, J. D. Kearney, J. R. Mazzarella, R. S. McClelland, R. E. Riveros, T. T. Saha, M. Hlinka, W. W. Zhang, M. R. Soman, A. D. Holland, M. R. Lewis, K. Holland, and N. J. Murray, "The Off-plane Grating Rocket Experiment (OGRE) system overview," (2018), p. 106996H.
11. M. A. Verschuuren, M. Megens, Y. Ni, H. van Sprang, and A. Polman, "Large area nanoimprint by substrate conformal imprint lithography (SCIL)," Adv. Opt. Technol. 6(3-4), 243–264 (2017). [CrossRef]
12. M. A. Verschuuren, J. McCoy, R. P. Huber, R. van Brakel, M. Paans, and R. Voorkamp, "AutoSCIL 200mm tooling in production, x-ray optics, and cell growth templates," in Novel Patterning Technologies 2018, vol. 10584, E. M. Panning, ed., International Society for Optics and Photonics (SPIE, 2018), pp. 185–197.
13. M. A. Verschuuren, M. W. Knight, M. Megens, and A. Polman, "Nanoscale spatial limitations of large-area substrate conformal imprint lithography," Nanotechnology 30(34), 345301 (2019). [CrossRef]
14. https://www.scil-nano.com.
15. D. M. Miles, S. V. Hull, T. B. Schultz, J. H. Tutt, M. Wages, B. D. Donovan, R. L. McEntaffer, A. D. Falcone, T. B. Anderson, E. Bray, D. N. Burrows, T. Chattopadhyay, C. M. Eichfeld, N. Empson, F. Grisé, C. R. Hillman, J. A. McCoy, M. McQuaide, B. J. Myers, T. Steiner, M. A. Verschuuren, D. Yastishock, and N. Zhang, "Water Recovery X-Ray Rocket grating spectrometer," J. Astron. Telesc. Instrum. Syst. 5(04), 1–11 (2019). [CrossRef]
16. R. Ji, M. Hornung, M. A. Verschuuren, R. van de Laar, J. van Eekelen, U. Plachetka, M. Moeller, and C. Moormann, "UV enhanced substrate conformal imprint lithography (UV-SCIL) technique for photonic crystals patterning in LED manufacturing," Microelectron. Eng. 87(5-8), 963–967 (2010). [CrossRef]
17. M. Shibata, A. Horiba, Y. Nagaoka, H. Kawata, M. Yasuda, and Y. Hirai, "Process-simulation system for UV-nanoimprint lithography," J. Vac. Sci. Technol., B: Nanotechnol. Microelectron.: Mater., Process., Meas., Phenom. 28(6), C6M108–C6M113 (2010). [CrossRef]
18. A. Horiba, M. Yasuda, H. Kawata, M. Okada, S. Matsui, and Y. Hirai, "Impact of resist shrinkage and its correction in nanoimprint lithography," Jpn. J. Appl. Phys. 51(6S), 06FJ06 (2012). [CrossRef]
19. http://cxro.lbl.gov/als632/.
20. E. M. Gullikson, S. Mrowka, and B. B. Kaufmann, "Recent developments in EUV reflectometry at the Advanced Light Source," (2001).
21. J. H. Underwood, E. M. Gullikson, M. Koike, P. J. Batson, P. E. Denham, K. D. Franck, R. E. Tackaberry, and W. F. Steele, "Calibration and standards beamline 6.3.2 at the advanced light source," Rev. Sci. Instrum. 67(9), 3372 (1996). [CrossRef]
22. https://www.mri.psu.edu/.
23. Y. Zhuang, O. Hansen, T. Knieling, C. Wang, P. Rombach, W. Lang, W. Benecke, M. Kehlenbeck, and J. Koblitz, "Vapor Phase Self-assembled Monolayers for Anti-stiction Applications in MEMS," J. Microelectromech. Syst. 16(6), 1451–1460 (2007). [CrossRef]
24. J. H. Tutt, R. L. McEntaffer, H. Marlowe, D. M. Miles, T. J. Peterson, C. T. DeRoo, F. Scholze, and C. Laubis, "Diffraction Efficiency Testing of Sinusoidal and Blazed Off-Plane Reflection Gratings," J. Astron. Instrum. 05(03), 1650009 (2016). [CrossRef]
25. H. Marlowe, R. L. McEntaffer, J. H. Tutt, C. T. DeRoo, D. M. Miles, L. I. Goray, V. Soltwisch, F. Scholze, A. F. Herrero, and C. Laubis, "Modeling and empirical characterization of the polarization response of off-plane reflection gratings," Appl. Opt. 55(21), 5548 (2016). [CrossRef]
26. D. Attwood and A. Sakdinawat, X-Rays and Extreme Ultraviolet Radiation: Principles and Applications, 2nd ed. (Cambridge University Press, 2017).
27. http://henke.lbl.gov/optical_constants/.
28. https://www.pcgrate.com/loadpurc/download.
29. L. I. Goray and G. Schmidt, "Solving conical diffraction grating problems with integral equations," J. Opt. Soc. Am. A 27(3), 585–597 (2010). [CrossRef]
|
CommonCrawl
|
-20x^2-50x+200=30x-10
A simple, best-practice solution for the equation -20x^2-50x+200=30x-10. Check how easy it is, and learn it for the future. Our solution is simple and easy to understand, so don't hesitate to use it as a solution for your homework.
If it's not what you are looking for, type your own equation into the equation solver and let us solve it.
Equation:
Solution for -20x^2-50x+200=30x-10 equation:
We move all terms to the left:
-20x^2-50x+200-(30x-10)=0
We get rid of parentheses
-20x^2-50x-30x+10+200=0
We add all the numbers together, and all the variables
-20x^2-80x+210=0
a = -20; b = -80; c = +210;
Δ = b²-4ac
Δ = (-80)²-4·(-20)·210
Δ = 23200
The delta value is higher than zero, so the equation has two solutions
We use following formulas to calculate our solutions:
$x_{1}=\frac{-b-\sqrt{\Delta}}{2a}$
$x_{2}=\frac{-b+\sqrt{\Delta}}{2a}$
The end solution:
$\sqrt{\Delta}=\sqrt{23200}=\sqrt{400*58}=\sqrt{400}*\sqrt{58}=20\sqrt{58}$
$x_{1}=\frac{-b-\sqrt{\Delta}}{2a}=\frac{-(-80)-20\sqrt{58}}{2*-20}=\frac{80-20\sqrt{58}}{-40}=-2+\frac{\sqrt{58}}{2}$
$x_{2}=\frac{-b+\sqrt{\Delta}}{2a}=\frac{-(-80)+20\sqrt{58}}{2*-20}=\frac{80+20\sqrt{58}}{-40}=-2-\frac{\sqrt{58}}{2}$
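As a quick numerical cross-check of the two roots (a sketch added here, not part of the original solution):

# Numerical cross-check of the two solutions of -20x^2 - 80x + 210 = 0.
import math
a, b, c = -20, -80, 210
delta = b**2 - 4*a*c                       # 23200
x1 = (-b - math.sqrt(delta)) / (2*a)       # ~1.808
x2 = (-b + math.sqrt(delta)) / (2*a)       # ~-5.808
print(x1, x2, a*x1**2 + b*x1 + c, a*x2**2 + b*x2 + c)   # residuals ~0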
See similar equations:
| -2/3x+9=7/8+1/2x | | 4+(2*8)=x | | (6+14)*2=x | | 5+x=19-4+5 | | v+5=15 | | (6x-10)=5x-14 | | 3(x+6)-7=-18 | | 13=y+10 | | 4+x=14-1+8 | | 3x^2+5x^2-4x-4=0 | | 11/6? =n+97? | | x^2+20x+100=1 | | x/5+4=27/5 | | -9y+46=-5(y-2) | | 7x+4x-3+2=16 | | (9y+10)-(3y-3=) | | 22=(-5)+3x-6 | | 2c/3?4c/5=7 | | -5x+3(x+8=36 | | 6(x+1)=10x-42 | | -5(-3+2n)=7(3-n) | | 15p^2-27p-6=0 | | 3x-(-x-9)=2(x+5) | | 5x+7=117 | | (5/6)x-5=5 | | (2/3)x+(3/4)=51 | | 7(x+7)=6x+9 | | 3v-24=-6(v-5) | | 5(-1y+4)=35 | | 3x²+10=4x | | 5(x+2=50 | | 5(x—2)=50 | | -1(-3y+-2)=-7 | | 1(x+1)=7 | | 1(x+1))=7 | | 1(x-(-1))=7 | | 8n-14n-3n^2+2=0 | | 8=w.2+2 | | -3(x-(-1))=-3 | | 10-8x=-15+11x | | 12y-8+4y+y-2=0 | | 5x+3(.5x)=3,000 | | -3(-3y+5)=-60 | | 25=-0,003x2+0,0638x+40 | | 12y-8y+4y+y-2=0 | | -2(z+4)=-37 | | 15x+5(.5x)=6067 | | 6x+36=-3(x+9) | | -6(u-2)=-8u+14 | | 2w+9=9(w+8) | | 30-5z=8(9z+7) | | C/4+76=4b | | 4x+9x-8x=6+4 | | 8x-90=-2 | | 6+1/2x=x(2) | | 3/2a+1/4=5/8 | | v^2+15v+59=0 | | 5y×2y=-24 | | 3-x/5=10 | | -4(w-9)=3w+8 | | -2=8w | | -9y+20=-2(y-3) | | 5(x+2)=6x+3x-14 | | -3(y+4)=5y-28 | | 10k(k+2)=k-6 |
|
CommonCrawl
|
Stereoselective carbon-carbon bond forming reactions using chiral phosphorus(V) compounds and the derived anions
Kim, Jung Ho
9236502.pdf (13MB) (no description provided) PDF
Title: Stereoselective carbon-carbon bond forming reactions using chiral phosphorus(V) compounds and the derived anions
Author(s): Kim, Jung Ho
Doctoral Committee Chair(s): Denmark, Scott E.
Department / Program: Chemistry
Discipline: Chemistry
Subject(s): Chemistry, Organic
Abstract: C$_2$-Symmetric diamines were synthesized in order to examine carbon-carbon bond forming reactions using chiral auxiliary-based phosphorus reagents. The general utility of these diamines as chiral auxiliaries in the diastereoselective alkylation of P-alkyl anions was examined. A systematic study of the alkylation of the P-alkyl anions was accomplished varying N-alkyl and P-alkyl substituents. High diastereoselectivity was achieved with N-neopentyl substrates (up to 92:8 diastereoselectivity).
The P-allyl anions with varying phosphorus substituents have been investigated. The diastereoselectivity and the regioselectivity of Michael reactions of chiral cis-oxazaphosphorinanes with cyclic enones were very high. The reaction with chiral trans-oxazaphosphorinanes was not selective. The conjugate addition reaction of a variety of P-allyldiazaphosphorinanes with cyclic enones, varying the substituent of the P-allyl unit, was highly regio- and diastereoselective.
The nucleophilic addition to the $\alpha,\beta$-unsaturated phosphorus(V) compounds proved to be highly nucleophile-dependent. Nucleophiles within a certain range of pK$_{\rm a}$ values (25-32) have been shown to react with the $\alpha,\beta$-unsaturated phosphorus(V) compounds. The diastereoselectivity of the reaction with sulfone-stabilized anions or the amide enolates was low in either the internal or relative sense due to the flexible conformation of the P-propenyl side chain.
The general reactivity of the P-acyl enolate was extraordinarily low toward usual electrophiles, except for silylating agents (TMSCl, TESCl), which produced (E)-silyl enol ethers exclusively. Asymmetric aldol reactions of the enolates derived from P-acylphosphorus heterocycles were not highly successful (up to 36% e.e.), mostly due to their low reactivity and the nature of the thermodynamically controlled reaction.
Rights Information: Copyright 1992 Kim, Jung Ho
Identifier in Online Catalog: AAI9236502
OCLC Identifier: (UMI)AAI9236502
Dissertations and Theses - Chemistry
|
CommonCrawl
|
Asian-Australasian Journal of Animal Sciences (아세아태평양축산학회지)
Asian Australasian Association of Animal Production Societies (아세아태평양축산학회)
Agriculture, Fishery and Food > Agricultural Engineering/Facilities
Asian-Australasian Journal of Animal Sciences (AJAS) aims to publish original and cutting-edge research results and reviews on animal-related aspects of life sciences. Emphasis will be given to studies involving farm animals such as cattle, buffaloes, sheep, goats, pigs, horses and poultry, but studies with other animal species can be considered for publication if the topics are related to fundamental aspects of farm animals. Also studies to improve human health using animal models can be publishable. AJAS will encompass all areas of animal production and fundamental aspects of animal sciences: breeding and genetics, reproduction and physiology, nutrition, meat and milk science, biotechnology, behavior, welfare, health, and livestock farming system. AJAS is sub-divided into 10 sections.
- Animal Breeding and Genetics: Quantitative and molecular genetics, genomics, genetic evaluation, evolution of domestic animals, and bioinformatics
- Animal Reproduction and Physiology: Physiology of reproduction, development, growth, lactation and exercise, and gamete biology
- Ruminant Nutrition and Forage Utilization: Rumen microbiology and function, ruminant nutrition, physiology and metabolism, and forage utilization
- Swine Nutrition and Feed Technology: Swine nutrition and physiology, evaluation of feeds and feed additives and feed processing technology
- Poultry and Laboratory Animal Nutrition: Nutrition and physiology of poultry and other non-ruminant animals
- Animal Products: Milk and meat science, muscle biology, product composition, food safety, food security and functional foods
http://submit.ajas.info KSCI KCI SCOPUS SCIE
Volume 32 Issue 8_spc
Efficiency of Different Selection Indices for Desired Gain in Reproduction and Production Traits in Hariana Cattle
Kaushik, Ravinder;Khanna, A.S. 789
https://doi.org/10.5713/ajas.2003.789 PDF KSCI
An investigation was conducted on 729 Hariana cows maintained at Government Livestock Farm, Hisar, from 1973 to 1999, with the objective of comparing the efficiency of various selection indices for attaining desired genetic gains in the index traits. The traits included were age at first calving (AFC), service period (SP), calving interval (CI), days to first service (DFS), number of services per conception (NSPC), lactation milk yield (LY), peak yield (PY) and dry period (DP). Except for LY, PY and AFC, the heritabilities of all other traits were low. Desirable associations among reproductive traits support the expectation that any one of these traits incorporated in simultaneous selection would cause a correlated response in the other traits. Production traits (LY and PY) were positively correlated, while DP had a low negative genetic correlation with LY and a high genetic correlation with PY. Thus, DP can be taken as an additional criterion in a selection index for better overall improvement. Almost all production traits except DP had low negative correlations with AFC, SP, DFS and CI, meaning that reduction in reproduction traits up to a certain level may increase production performance. The correlation of NSPC with LY and PY, however, was moderately positive. Among the four-trait indices, I23, incorporating PY, AFC, SP and NSPC, and among the three-trait indices, I1, incorporating LY, AFC and SP, were the best, as these required the least number of generations (4.87 and 1.35, respectively) to attain the desired goals. Next in order of preference were the indices combining PY or LY with DP and SP (I20 and I16), of which the index with PY may be preferred over that with LY, as it produced a considerably higher correlated response in LY as well as a reduction in NSPC.
Heterosis and Percent Improvement in Survivability, Reproduction and Production Performance of Various Genetic Groups of Temperate x Zebu Crosses in Tropics
Singh, Kuldeep;Khanna, A.S.;Sangwan, M.L. 794
A study was conducted on 2102 records of 808 crossbred cows of various genetic groups maintained under the 'All India Coordinated Research Project on Cattle' at C C S Haryana Agricultural University, Hisar, over a 25-year period (1968-1993), with the objective of assessing and comparing the percent improvement and heterotic effect for different performance traits in the various genetic groups produced under this programme. Survivability declined sharply and significantly from 1/2 to $3/4^th$ bred and further from $3/4^th$ to inter-se bred. This may be due to periodic and management differences in addition to the higher level of exotic inheritance and decreased heterotic effect over the filial generations. Jersey and Holstein Friesian crosses among 1/2 breds, and their 50% inheritance among $3/4^th$ and inter-se breds, had the highest improvement and heterosis in reproduction and production traits, respectively. Among inter-se bred genetic groups, BFH (I) had no recombination loss in SP and CI, while FJH (I), JFH (I) and FBH (I) had no recombination loss in AFC, LY, LL and PE. The crossbreeding of zebu cows with exotic breeds brings about spectacular improvement in comparison to the performance of the zebu breed, while conventional selection over several generations would lead to only modest improvement. In addition to the additive effect, there was sufficient heterosis in Jersey crosses for reproduction and in Holstein Friesian crosses for production performance. Three-breed crosses with exotic inheritance between 50 and 75 percent, incorporating genes (25 to 50%) from both of these breeds, are the best combination for stabilization.
A Timetable of the Early Development Stage of Silkies Embryo
Li, B.C.;Chen, G.H.;Qin, J.;Wang, K.H.;Xiao, X.J.;Xie, K.Z.;Wu, X.S. 800
Early embryos were obtained at different times after the preceding egg had been laid, and the aim of the present study was to observe the developmental course of the early chicken embryo. Embryo development was divided into two periods according to the morphology of the blastodisc. The cleavage period, from 5.5 h (0 h uterine age) to 15.5 h (10-10.5 h uterine age) after the preceding egg had been laid, ended with the formation of a blastodisc of 6-7 cell layers. The later blastocyst period extended from 17.5 h (12-12.5 h uterine age) after the preceding egg had been laid until formation of the area pellucida. The first division took place at 5 h (0 h uterine age), the morula formed at 11.5 h (6-6.5 h uterine age), and the blastocyst at 15.5 h (10-10.5 h uterine age) after the preceding egg had been laid.
Response to ACTH Challenge in Female Dairy Calves in Relation to Their Milk Yield
Szucs, E.;Febel, H.;Janbaz, J.;Huszenicza, Gy.;Mezes, M.;Tran, A.T.;Abraham, Cs.;Gaspardy, A.;Gyorkos, I.;Seenger, J.;Nasser, J.A. 806
Attempts have been made to establish relationships between the response to ACTH challenge in female calves, growth, and first lactation performance. A total of 19 Holstein calves weighing about 100 kg were given 0.50 IU of ACTH/kg $BW^{0.75}$ i.v. (EXACTHIN inj., Richter G., Budapest) at 60 days of age. Serial blood samples were taken at 0, 0.5, 1, 2, 3, 4 and 5 hours and analyzed for cortisol, glucose, insulin and FFA levels. From the challenge series, the area under the curve from the time of administration over the following 5 h was calculated. A negative and mostly loose relationship between the response to ACTH challenge for cortisol, insulin or FFA and average daily weight gain (ADWG) during growth was established (p>0.05), with a positive one for glucose. Bivariate coefficients of correlation varied within the range from -0.35 to 0.15. Estimates reveal a negative correlation between the length of first lactation and cortisol or insulin (r=-0.80, p<0.001 and r=-0.45, p<0.10, resp.). A close association between cortisol or insulin and actual first lactation milk yield was found (r=-0.48, p<0.10; r=-0.64, p<0.01, resp.). A close relationship between the response to ACTH challenge and milk protein yield was present only for insulin (r=-0.59, p<0.05).
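Area-under-the-curve summaries of this kind are typically computed from the serial samples by the trapezoidal rule; a minimal sketch with made-up cortisol values (not the study's data) is:

```python
import numpy as np

# Sampling times (h) and an illustrative cortisol response; placeholder values only.
t = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0, 5.0])
cortisol = np.array([15.0, 85.0, 110.0, 90.0, 60.0, 35.0, 20.0])

# Area under the curve over the 5 h following administration (trapezoidal rule).
auc = np.trapz(cortisol, t)
print(f"AUC 0-5 h: {auc:.1f}")
```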
Effect of Glucosinolates of Taramira (Eruca Sativa) Oilcake on Nutrient Utilization and Growth of Crossbred Calves
Das, Srinibas;Tyagi, Amrish Kumar;Singhal, K.K. 813
Taramira (Eruca sativa) cake, an unconventional oil cake, replaced 25 and 50 per cent of the crude protein of mustard cake in the ration of crossbred calves in an experiment of 90 days' duration. The total glucosinolate content of the three concentrate mixtures was almost similar (18.19, 17.95 and $17.95{\mu}mol/g$ dry matter); however, glucouracin was the major glucosinolate of the experimental diets. Similar dry matter intake, nutrient digestibility (except those of the fibre fractions) and nitrogen balance, as well as similar serum $T_3$ and $T_4$ levels and growth rate, in all the groups indicated that taramira cake can replace 50 per cent of the crude protein of mustard cake in the diet of crossbred calves.
Effect of Different Source of Energy on Urea Molasses Mineral Block Intake, Nutrient Utilization, Rumen Fermentation Pattern and Blood Profile in Murrah Buffaloes (Bubalus bubalis)
Hosamani, S.V.;Mehra, U.R.;Dass, R.S. 818
In order to investigate the effect of different sources of energy on intake and nutrient utilization from a urea molasses mineral block (UMMB), rumen fermentation pattern and blood biochemical constituents, 18 intact and 9 rumen-fistulated male Murrah buffaloes, aged about 3 years and with an average weight of 310.8 kg, were randomly allocated into three groups of 9 animals each, so that each group had 6 intact and 3 rumen-fistulated buffaloes. All animals were fed individually for 90 days. All buffaloes were offered wheat straw as the basal roughage and the urea molasses mineral block for free-choice licking. Three different energy sources, viz. barley grain (group I), maize grain (group II) and jowar green (group III), were offered to meet their nutrient requirements as per Kearl (1982). At the end of the feeding trial, a metabolism trial of 7 days' duration was carried out on the intact animals to determine the digestibility of nutrients. Rumen fermentation studies were carried out on the rumen-fistulated animals. After the metabolism trial, blood was collected from the intact animals to estimate the nitrogen constituents in the blood serum of animals fed the different sources of energy. Results revealed no significant difference in the intake of UMMB among the three groups. Similarly, the intake of DM (kg), DCP (g) and TDN (kg) per day was statistically similar in the three groups. The apparent digestibility of dry matter (DM), organic matter (OM), ether extract (EE) and nitrogen-free extract (NFE) was significantly (p<0.05) higher in group II than in group III, whereas the digestibility of DM, OM and NFE was similar in groups I and II. The digestibility of crude fiber (CF) and all the fiber fractions, i.e. NDF, ADF, cellulose and hemicellulose, was alike in the 3 groups. Nitrogen balance (g/d) was significantly (p<0.05) higher in group III than in groups I and II, which were statistically alike; although N intake (g/d) was similar in the 3 groups, N balance (g/d) was significantly (p<0.05) lower in group III than in the other 2 groups. Significantly (p<0.05) higher concentrations of total volatile fatty acids (TVFA), total nitrogen (TN) and its fractions were observed in groups I and II than in group III. There was no effect of the different sources of energy on rumen pH, rumen volume or digesta flow rate in the 3 groups. Similarly, the blood serum biochemical parameters (NH3-N, urea-N and total protein) were statistically identical in the 3 groups.
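The digestibility coefficients and nitrogen balance reported above follow the standard metabolism-trial arithmetic; a minimal sketch with illustrative intakes and outputs (not the trial's data) is:

```python
def apparent_digestibility(intake_g, faecal_g):
    """Apparent digestibility coefficient (%) of a nutrient."""
    return 100.0 * (intake_g - faecal_g) / intake_g

def nitrogen_balance(n_intake_g, faecal_n_g, urinary_n_g):
    """Nitrogen balance (g/d): intake minus faecal and urinary losses."""
    return n_intake_g - faecal_n_g - urinary_n_g

# Illustrative daily figures for one buffalo; placeholders, not the trial's data.
print(apparent_digestibility(intake_g=6500.0, faecal_g=2900.0))                 # DM digestibility, %
print(nitrogen_balance(n_intake_g=95.0, faecal_n_g=38.0, urinary_n_g=32.0))     # g N/d
```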
The Intake and Palatability of Four Different Types of Napier Grass (Pennisetum purpureum) Silage Fed to Sheep
Manyawu, G.J.;Sibanda, S.;Chakoma, I.C.;Mutisi, C.;Ndiweni, P. 823
Four different types of silage from new cultivars of Napier grass (Pennisetum purpureum), cv. NG 1 and NG 2, were fed to eight wethers in order to evaluate their preference and intake by sheep. The silages were prepared from direct-cut NG 1 herbage; pre-wilted NG 1 herbage; NG 1 herbage with maize meal (5% inclusion); and NG 2 herbage with maize meal (5% inclusion). All silages were palatable to sheep. Maize-treated silage had high fermentation quality, characterized by high Fleig scores and low pH, volatile fatty acid (VFA) and ammoniacal nitrogen contents. The pH, Fleig score, in vitro digestible organic matter (IVDOMD) and ammoniacal-N contents for maize-treated cv. NG 1 silage were 3.7, 78, $540\;g\;kg^{-1}$ dry matter (DM) and $0.18\;g\;kg^{-1}$ DM, whereas in maize-treated cv. NG 2 they were 3.6, 59, $458\;g\;kg^{-1}$ DM and $0.18\;g\;kg^{-1}$ DM, respectively. The superior quality of the maize-treated silages made them more preferable to sheep. Among the maize-fortified silages, palatability and intake were significantly (p<0.001) greater with cv. NG 1. Although direct-cut silage had better fermentation quality than wilted silage, wilted silage was significantly (p<0.001) more preferable to sheep. However, there were no significant differences (p>0.05) in the levels of preference and intake between wilted silage and maize-treated cv. NG 2 silage, even though the latter tended to be more palatable. There were indications that the higher pH (4.6 vs 3.5) and IVDOMD content (476 vs $457\;g\;kg^{-1}$ DM) of wilted silage contributed to its higher intake compared with direct-cut silage. It was generally concluded that pre-wilting and treatment of Napier grass with maize meal at ensiling enhance intake and palatability.
Variation in Nutritive Value of Commercial Broiler Diets
Ru, Y.J.;Hughes, R.J.;Choct, M.;Kruk, J.A. 830
The classical energy balance method was used to measure the apparent metabolisable energy (AME) of four batches of broiler starter and finisher diets produced by two commercial feed companies. The results showed little variation in protein content between batches, but NDF content varied from 13.3% to 15.5% between batches of diet. The batch variation in chemical composition differed between feed manufacturers. While there was no difference in AME and feed conversion ratio (FCR) between batches of starter diets produced by company A, FCR and AME ranged from 1.76-1.94 (p<0.001) and 11.38-11.90 MJ/kg air dry (p<0.05), respectively, for diets produced by company B. Similar results were found in a second experiment. There was no difference in AME, dry matter digestibility (DMD) or FCR between batches of the finisher diet produced by company B, but a large variation occurred for the finisher diets from company A (p<0.01), where the ranges of FCR, AME and DMD were 1.95-2.30, 10.5-12.3 MJ/kg air dry and 58-68%, respectively. FCR was correlated with AME. AME was negatively related to the fibre content of the diet, but positively related to DMD. Preliminary results based on 24 samples showed that near infrared spectroscopy (NIR) has the potential to predict FCR, intake, AME and DMD of commercial broiler diets, with $R^2$ values of 0.93, 0.89, 0.95 and 0.98, respectively. The standard error of cross-validation was below 0.2 for AME and only 0.06 for FCR.
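NIR calibration statistics such as the R² and standard error of cross-validation quoted above are usually obtained by regressing the reference values on the spectra with a latent-variable model and leave-one-out cross-validation. A minimal sketch, assuming partial least squares as the calibration model and synthetic data in place of real spectra, is:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)

# Synthetic stand-ins: 24 samples x 100 spectral points, and reference AME values (MJ/kg).
X = rng.normal(size=(24, 100))
ame = 11.0 + 0.2 * X[:, :5].sum(axis=1) + rng.normal(scale=0.1, size=24)

# PLS calibration with leave-one-out cross-validation.
pls = PLSRegression(n_components=5)
ame_cv = cross_val_predict(pls, X, ame, cv=LeaveOneOut()).ravel()

secv = np.sqrt(np.mean((ame - ame_cv) ** 2))       # standard error of cross-validation
r2 = np.corrcoef(ame, ame_cv)[0, 1] ** 2           # cross-validated R^2
print(f"SECV = {secv:.3f} MJ/kg, R^2 = {r2:.2f}")
```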
Effect of Variety on Proportion of Botanical Fractions and Nutritive Value of Different Napiergrass (Pennisetum purpureum) and Relationship between Botanical Fractions and Nutritive Value
Islam, M.R.;Saha, C.K.;Sarker, N.R.;Jalil, M.A.;Hasanuzzaman, M. 837
Five varieties of napiergrass (Pennisetum purpureum) were fractionated botanically into leaf blade, leaf sheath, stem and head. The chemical composition of each whole napiergrass and of its botanical fractions was determined. Correlations, and linear and multiple regressions, between botanical fractions and the nutritive value of the napiergrass varieties were also estimated. All botanical fractions differed with variety. Napier Pusha contained the highest proportion of leaf blade and internode, but the lowest proportion of leaf sheath. Napier Hybrid contained the lowest proportion of leaf blade, but the highest proportion of node. Consequently, Napier Pusha contained the highest (p<0.01) crude protein (CP, 9.0%), while Napier Hybrid had the lowest CP (7.0%). The chemical composition of the whole plant differed significantly (p<0.01; except NFE, p>0.05) with variety. Not only the whole plant: the chemical composition of most botanical fractions also differed (p<0.05 to 0.01) with variety. The relationship between leaf blade and leaf sheath proportions was negative (r=-0.43). Leaf sheath was also negatively correlated with CP, but positively correlated with ash, of the whole napier or its botanical fractions. Leaf blade, on the other hand, increases CP but decreases the ash content of the whole plant or its fractions. These results therefore suggest that napiergrass varieties differ widely in botanical fractions and nutritive value, which may have important implications for the intake and productivity of animals. Furthermore, napiergrass varieties should be selected on leaf blade alone for a better response.
The Effect of Pre-wilting and Incorporation of Maize Meal on the Fermentation of Bana Grass Silage
Manyawu, G.J.;Sibanda, S.;Mutisi, C.;Chakoma, I.C.;Ndiweni, P.N.B. 843
An experiment was conducted to investigate the effects of pre-wilting Bana grass (Pennisetum purpureum x P. americanum) herbage under sunny conditions for 0, 6, 18, 24, 32 and 48 h and ensiling it with maize meal. Four levels of maize meal (viz. 0, 5, 10 and 15% on a fresh weight (Fw) basis) were tested. The experiment had a split-plot design. Wilting increased the concentration of water soluble carbohydrates (WSC) significantly (p<0.001) on a Fw basis, although there were no significant changes on a DM basis. Unwilted grass contained $36.1g{\cdot}WSC{\cdot}kg^{-1}{\cdot}Fw$ ($127.6g{\cdot}kg^{-1}{\cdot}DM$) and this increased to $64.1g{\cdot}WSC{\cdot}kg^{-1}{\cdot}Fw$ ($116.7g{\cdot}kg^{-1}{\cdot}DM$) after 48 h of pre-wilting. Wilting also increased the DM content of the herbage significantly (p<0.001), from 250 to $620g{\cdot}kg^{-1}$ between 0 and 48 h. The concentration of fermentation end-products decreased (except butyric acid) and pH increased as the period of wilting increased, indicating that fermentation was restricted. In particular, lactic acid content declined from 50.8 to $26.2g{\cdot}kg^{-1}{\cdot}DM$ (p<0.01), and the residual WSC content of the silage increased from 2.7 with fresh herbage to $18.1g{\cdot}kg^{-1}{\cdot}DM$ with 48 h of wilting (p<0.001). Rapid wilting for 24 h, to a DM of $450g{\cdot}kg^{-1}$, was optimum, since important increases in pH, residual WSC and DMD occurred at this level of wilting. Acetic acid, butyric acid and ammoniacal-N contents were lowest with 24 h of wilting. There were no significant interactions between length of wilting and the incorporation of maize meal. Wilting had a greater influence on fermentation than the incorporation of maize meal. Addition of maize meal facilitated fermentation by increasing forage DM content and reducing effluent production. In addition, the maize meal increased DMD. It was concluded that maize meal should generally be incorporated at a level of 5% on a fresh weight basis.
Effect of Ionophore Enriched Cold Processed Mineral Block Supplemented with Urea Molasses on Rumen Fermentation and Microbial Growth in Crossbred Cattle
De, Debasis;Singh, G.P. 852
An experiment was conducted to study the effect of an ionophore-enriched cold-processed mineral block supplemented with urea molasses on microbial growth and rumen fermentation. Twelve adult male crossbred cattle were divided into four groups on a body weight basis. Animals were given wheat straw as the basal diet. The animals of groups I and II were supplemented with a concentrate mixture, and the animals of groups III and IV were supplemented with a cold-processed urea molasses mineral block (UMMB). Thirty mg of monensin/day/animal was supplemented to the animals of group II, and 35 ppm monensin was incorporated in the UMMB supplemented to the animals of group IV. Dry matter (DM) intake did not differ significantly among groups. Mean rumen pH was higher in UMMB-fed animals. Total volatile fatty acid (TVFA) concentration (mmole/L strained rumen liquor, SRL) in group III (113.19) was significantly (p<0.05) higher than in groups I (105.83) and II (108.74) but similar to group IV (109.34). TVFA production (mole/day) was similar in all the groups. The molar proportion of acetate was significantly (p<0.01) higher in group I (59.56) than in groups II (51.73) and IV (55.91) but similar to group III (57.12). The molar proportion of propionate was significantly (p<0.01) higher in the monensin-treated groups, i.e. groups II (38.38) and IV (36.26), than in groups I (27.78) and III (33.06). Butyrate molar percent was significantly (p<0.01) higher in group I (12.65) than in groups II (10.19), III (9.83) and IV (7.84). The reduction in acetate and butyrate due to UMMB and monensin resulted in a lower A:P ratio. Average bacterial pool and bacterial production rate did not differ significantly among groups. Total N concentration (mg/100 ml SRL) was significantly (p<0.01) higher in groups I (55.30) and III (57.70) than in groups II (47.97) and IV (47.59). Ammonia-N concentration (mg/100 ml SRL) in group III (34.99) was significantly (p<0.01) higher than in group I (25.76), which was in turn significantly (p<0.01) higher than in groups II (20.79) and IV (19.83), indicating slower release of ammonia due to monensin in the diet. Total bacterial, cellulolytic, proteolytic bacterial and fungal counts at 4 h post feeding did not differ significantly (p>0.05) among treatment groups. However, the methanogenic bacterial count was significantly (p<0.01) higher in group I (11.80) than in group II (8.43), which was significantly (p<0.01) higher than in groups III (4.70) and IV (2.90). The average protozoal population was affected by both treatments. Thus, feeding of UMMB and monensin in the diet shifted the rumen fermentation pattern towards propionate production, slower release of ammonia and a reduction in methanogenic bacteria in the rumen.
Effects of Dietary Cellulose Levels on Growth, Nitrogen Utilization, Retention Time of Diets in Digestive Tract and Caecal Microflora of Chickens
Cao, B.H.;Zhang, X.P.;Guo, Y.M.;Karasawa, Y.;Kumao, T. 863
This study was conducted to examine the effects of dietary cellulose level on growth, nitrogen utilization, the retention time of the diet in the digestive tract, and the caecal microflora of 2-month-old Single Comb White Leghorn male chickens fed for 7 days one of 3 purified diets containing 0%, 3.5% or 10% cellulose with equal amounts of other nutrients. Body weight gain and nitrogen utilization were significantly higher (p<0.05), while total microflora counts in the caecal contents and retention time of the diet in the digestive tract were significantly lower (p<0.05), in the group fed 3.5% dietary cellulose compared with the group fed 10% dietary cellulose. Body weight gain, nitrogen utilization and retention time of the diet in the digestive tract decreased significantly, while the total microflora count in the caecal contents increased significantly, in the group fed 10% dietary cellulose compared with the group fed 0% dietary cellulose (p<0.05). Chickens fed 10% dietary cellulose had significantly increased counts of uric acid-degrading bacteria such as Peptococcaceae and Eubacterium, including Peptostreptococcus (p<0.05). The results suggest that cellulose in purified diets is an effective ingredient and that its effects on growth, nitrogen utilization, caecal microflora counts and diet retention time in the digestive tract depend on the inclusion rate: effects were positive at an inclusion rate of 3.5% and negative above 3.5% of the diet, irrespective of the age of the chickens.
Effect of Green Tea By-product on Performance and Body Composition in Broiler Chicks
Yang, C.J.;Yang, I.Y.;Oh, D.H.;Bae, I.H.;Cho, S.G.;Kong, I.G.;Uuganbayar, D.;Nou, I.S.;Choi, K.S. 867
This experiment was conducted to determine the optimum level of green tea by-product (GTB) in diets without antibiotics and to evaluate its effect on broiler performance. A total of 140 Ross broilers were kept in battery cages for a period of 6 weeks. The dietary treatments used in this experiment were an antibiotic-free group (basal diet as control), an antibiotic-added group (basal+0.05% chlortetracycline), GTB 0.5% (basal+GTB 0.5%), GTB 1% (basal+GTB 1%) and GTB 2% (basal+GTB 2%). The antibiotic-added group showed significantly higher body weight gain than the other treatments (p<0.05). However, no significant differences were observed in feed intake or feed efficiency among treatments (p>0.05). The addition of green tea by-product to diets tended to decrease blood LDL cholesterol content compared with the control group, although there were no significant differences among treatments (p>0.05). Addition of green tea by-product increased docosahexaenoic acid (DHA) in blood plasma and tended to decrease the cholesterol content of chicken meat, but a significant difference was not observed (p>0.05). TBA values in chicken meat decreased in the groups fed diets with green tea by-product and antibiotics compared with the control group (p<0.05). The crude protein content of chicken meat was decreased slightly in the treatments with green tea by-product and antibiotic supplementation. Abdominal fat was increased in chickens fed diets with green tea by-product compared with the control (p<0.05).
Chemical Composition and Nutritional Evaluation of Variously Treated Defatted Rice Polishing for Broiler Feeding
Khalique, A.;Lone, K.P.;Pasha, T.N.;Khan, A.D. 873
The study was conducted to improve the nutritive value of defatted rice polishing (DRP). DRP was treated with various concentrations of HCl, NaOH, $H_2O_2$ and Kemzyme-HF®, and the effect on its chemical composition and nutritive value in broiler chicks was observed. The treatment levels of 0.4 N HCl, 0.2 N NaOH and 6% $H_2O_2$ were selected from the many concentrations of HCl, NaOH and $H_2O_2$ tried earlier on DRP; the selection was made on the basis of the release of nutrients from DRP. Kemzyme-HF® was used at a rate of 0.1% of DRP. The selected concentrations of HCl, NaOH and $H_2O_2$ were then used to treat the DRP used in the biological experiments. Two hundred and forty day-old Hubbard male broiler chicks (38-40 g) were randomly divided into 48 experimental units of five chicks each. Each chemically treated DRP was incorporated into broiler diets at the 10, 20 or 30% level, replacing yellow corn in the control feed, and thus sixteen experimental feeds were prepared. These feeds were randomly assigned to the 48 experimental units such that there were three replicates of chicks on each diet. The results of the study suggest that DRP can be effectively used in broiler diets at the 20% level. The best weight gain and feed conversion ratio were observed with the diet containing 20% DRP treated with 6% $H_2O_2$. The diets containing 30% treated DRP were uneconomical, as excess oil was required to compensate for the energy needs of the birds.
The Use of High-oil Corn in Young Broiler Chicken Diets
Kim, I.B.;Allee, G.L. 880
The objective of this study was to measure the performance of young broiler chickens fed three varieties of high-oil corn (HOC 1, 2, and 3) compared with eight varieties of normal corn (NC). HOC varieties contained about 80% more oil than NC (average crude fat 6.71% vs 3.72%) and about 29% more protein (average CP 9.54% vs 7.38%). Each experimental diet was formulated with the same amount (55.205%) of each corn hybrid. Experiment 1 had six dietary treatments (HOC1 and five NC varieties, 360 chickens) and Experiment 2 had five treatments (HOC2, HOC3, and three NC varieties, 250 chickens). In Exp. 1, the treatment containing HOC1 gave better feed efficiency (F/G) (p<0.05) than the NC varieties except NC5. As expected, there was no significant difference in average daily feed intake (p>0.05) among dietary treatments. The HOC1 dietary treatment gave an improvement of 4.3% in F/G, which came from the 6% higher gross energy (GE) value of HOC1. In Exp. 2, the dietary treatments containing HOC hybrids gave 4.4% better F/G than the NC dietary treatments, which came from a 5% increase in GE value. HOC varieties had superior nutrient content to NC for poultry, as HOC contained higher concentrations of energy, protein, lysine, and methionine, thus improving growth and F/G.
Effect of Plant Proteolytic Enzyme on the Physico-chemical Properties and Lipid Profile of Meat from Culled, Desi and Broiler Chicken
Sinku, R.P.;Prasad, R.L.;Pal, A.K.;Jadhao, S.B. 884
Proteolytic enzymes are used for meat tenderization, an important process with regard to consumer preference. The proteolytic enzyme IVRIN was isolated from the plant Cucumis pubescens W., and its effect on the physico-chemical properties and lipid profile of thigh and breast muscle from culled, desi and broiler birds was studied. Fifty grams of meat were treated with IVRIN containing 32.5 mg enzyme protein at $60^{\circ}C$ for 20 min. The pH of IVRIN-treated meat decreased significantly (p<0.01), and the effect was more pronounced in breast than in thigh muscle. The water holding capacity (WHC) increased significantly (p<0.01) in broiler compared with desi and culled birds, and in breast compared with thigh muscle. IVRIN failed to produce any impact on muscle fiber diameter (MFD). The MFD of desi birds was significantly higher (p<0.01) than that of broiler and culled birds. The total lipid concentration in thigh and breast muscle of desi birds was lower (p<0.01) than in broiler and culled birds, the latter two being similar in this respect. The cholesterol content was lower (p<0.01) in breast than in thigh muscle, in broiler than in desi and culled birds, and in IVRIN-treated than in untreated meat samples. The phospholipid concentration was unaffected by IVRIN. Broiler and culled birds exhibited higher phospholipid content than desi birds.
Immunomodulatory and Therapeutic Potential of Enrofloxacin in Bovine Sub Clinical Mastitis
Mukherjee, Reena;Dash, P.K. 889
The immunomodulatory and therapeutic potential of enrofloxacin was studied in bovine subclinical mastitis (SCM). Therapeutic efficacy was judged by the somatic cell count and total bacterial count of the milk, whereas the immunomodulatory potential of the drug was assessed by measuring myeloperoxidase (MPO) and acid phosphatase (ACP) enzyme levels in the milk leukocytes. Forty-five cows were divided into three equal groups: Gr I, consisting of 15 cows, served as the healthy control, whereas the 30 SCM cows (Gr II and Gr III) were selected on the basis of a positive California Mastitis Test (CMT) reaction. Gr II cows received 150 mg of enrofloxacin once a day for three days, and Gr III received 5 ml of sterile PBS (pH 7.4) for 7 days; both treatments were given by the intramammary route. Observations were made up to 30 days post-treatment (PT). The CMT of healthy milk was negative (0), whereas it ranged between a 1-point and a 2-point score in SCM. The somatic cell count (SCC) and total bacterial count (TBC) decreased significantly (p<0.05) on day 3 PT in the enrofloxacin-treated Gr II cows, whereas such changes were insignificant in the PBS-treated group. Traces of MPO and ACP enzyme were found in healthy milk. The mean ACP level increased by 70% on day 3 PT in Gr II and by only 18.7% in Gr III cows. The mean MPO level increased by 32% in Gr II and 18% in Gr III cows on day 3 PT. Concomitant use of enrofloxacin in SCM at a suboptimal dose was found to reduce the bacterial load by increasing the bactericidal enzyme levels in the milk polymorphonuclear cells (PMNs), which indicates its immunomodulatory potential in mastitis.
Optimization of the Growth Rate of Probiotics in Fermented Milk Using Genetic Algorithms and Sequential Quadratic Programming Techniques
Chen, Ming-Ju;Chen, Kun-Nan;Lin, Chin-Wen 894
Prebiotics (peptides, N-acetylglucosamine, fructo-oligosaccharides, isomalto-oligosaccharides and galacto-oligosaccharides) were added to skim milk in order to improve the growth rate of the Lactobacillus acidophilus, Lactobacillus casei, Bifidobacterium longum and Bifidobacterium bifidum it contained. The purpose of this research was to study the potential synergy between probiotics and prebiotics in milk, and to apply modern optimization techniques, via response-surface modeling, to obtain the optimal design and maximum growth rate of the probiotics. To carry out response surface modeling, regression was performed on the experimental results to build mathematical models. The models were then formulated as the objective function of an optimization problem, which was solved using a genetic algorithm (GA) and a sequential quadratic programming (SQP) approach to obtain the maximum growth rate of the probiotics. The results showed that the quadratic models gave the most accurate response surface fit. Both SQP and GA were able to identify the optimal combination of prebiotics to stimulate the growth of probiotics in milk. Comparing the two methods, SQP appeared to be more efficient than GA at this task.
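The workflow described (fit a quadratic response surface to the growth-rate data, then maximize it subject to bounds on the prebiotic levels) can be sketched as follows. The two-factor design, data and bounds are illustrative assumptions, and scipy's SLSQP routine stands in for the paper's SQP solver; an evolutionary optimizer such as scipy's differential_evolution could play a role analogous to the GA.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Illustrative design: two prebiotic levels (g/L) and measured specific growth rates (1/h).
X = rng.uniform(0.0, 2.0, size=(20, 2))
y = (0.30 + 0.20 * X[:, 0] + 0.15 * X[:, 1]
     - 0.08 * X[:, 0] ** 2 - 0.05 * X[:, 1] ** 2
     + 0.03 * X[:, 0] * X[:, 1] + rng.normal(scale=0.01, size=20))

def design(x1, x2):
    """Quadratic response-surface terms: 1, x1, x2, x1^2, x2^2, x1*x2."""
    return np.column_stack([np.ones_like(x1), x1, x2, x1 ** 2, x2 ** 2, x1 * x2])

# Fit the quadratic surface by least squares.
coef, *_ = np.linalg.lstsq(design(X[:, 0], X[:, 1]), y, rcond=None)

def neg_growth(x):
    """Negative of the fitted surface, so that minimizing it maximizes growth."""
    val = design(np.array([x[0]]), np.array([x[1]])) @ coef
    return -val.item()

# Maximize predicted growth over the allowed prebiotic ranges with SLSQP.
res = minimize(neg_growth, x0=np.array([1.0, 1.0]), method="SLSQP",
               bounds=[(0.0, 2.0), (0.0, 2.0)])
print("optimal prebiotic levels:", res.x, "predicted maximum growth rate:", -res.fun)
```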
Energy Utilization of Growing Chicks in Various Nutritional Conditions
Sugahara, Kunio 903
For the last two decades, the energy utilization of growing chicks has been studied increasingly. This paper focuses on energy utilization as estimated by metabolizable energy (ME) values and the efficiency with which ME is used for growth of chicks under various nutritional environments. The degree of saturation of dietary fats determines their nitrogen-corrected apparent metabolizable energy (AMEn). The effect of dietary fat source on heat production depends on the kind of unsaturated fatty acids as well as the degree of saturation. Medium-chain triglycerides show lower AME and net energy than long-chain triglycerides. Phytase as a feed additive increases the AME value of the diet along with improving phosphorus utilization. Ostriches have a higher ability than fowls to metabolize the energy of fiber-rich foodstuffs; this higher ability seems to be associated with fermentation of fiber in the hindgut. The proportions of macronutrients in the diet influence not only the gain of body protein and energy but also oxidative phosphorylation in the chicken liver. Deficiency of essential amino acids reduces ME/GE (energy metabolizability) little, if at all. Growing chicks respond to a deficiency of a single essential amino acid with a reduction in energy retained as protein and an increase in energy retained as fat. Thus, energy retention is proportional to ME intake despite the deficiency, and the efficiency of ME utilization is not affected by amino acid deficiency. The effect of oral administration of clenbuterol, a beta-adrenergic agonist, on the utilization of ME varies with the dose of the agent. Although the heat production related to eating behavior has been estimated at less than 5% of ME, tube-feeding diets decreases the heat increment by about 30%.
Transgenesis and Germ Cell Engineering in Domestic Animals
Lee, C.K.;Piedrahita, J.A. 910
Transgenesis is a very powerful tool, not only for understanding the basics of life science but also for improving the efficiency of animal production. Since the first transgenic mouse was born in 1980, this technique has developed rapidly and been applied widely in laboratory as well as domestic animals. Although pronuclear injection is the most widely used method, and nuclear transfer using somatic cells broadens the options for making transgenic domestic animals, the demand for precise manipulation of the genome leads to the use of gene targeting. To make this technique possible, a pluripotent embryonic cell line such as an embryonic stem (ES) cell line is required to carry the genetic mutation into further generations. However, ES cells, well established in mice, are not available in domestic animals, even though many attempts have been made to establish such cell lines. An alternative source of pluripotent cells is embryonic germ (EG) cells derived from primordial germ cells (PGCs). To make gene targeting feasible in this cell line, a better culture system would help to minimize the unnecessary loss of cells in vitro. This review outlines general methods for producing transgenic domestic animals and focuses on germ cell engineering and methods to improve the establishment of pluripotent embryonic cell lines in domestic animals.
Effect of De-hulling on Ileal Amino Acids Digestibility of Soybean Meals Fed to Growing Pigs
Kang, Y.F.;Li, D.F.;Xing, J.J.;Mckinnon, P.J.;Sun, D.Y. 928
A study was carried out to determine the effect of de-hulling on the apparent and true ileal amino acid digestibility of soybean meals for growing pigs. Twenty barrows (Duroc ${\times}$ Large White ${\times}$ Longer White) were fitted with a simple T-cannula at the distal ileum. The digestibility of 20 experimental diets was determined: nine were de-hulled soybean meal diets, nine were regular soybean meal diets, and two were low-protein casein diets used to estimate endogenous amino acid losses for the true digestibility calculation. A $5{\times}5$ Latin square design was adopted in this trial. The results showed that de-hulling increased the apparent ileal digestibility of isoleucine, threonine, aspartic acid, tyrosine, and indispensable and dispensable amino acids (p<0.05) in soybean meals. Furthermore, de-hulling also increased the apparent digestibility of arginine, leucine, lysine, phenylalanine, alanine, glutamic acid, serine and gross amino acids (p<0.01). However, no significant differences were found for histidine, methionine, tryptophan, cystine or glycine (p>0.05). Similar responses were found for true ileal digestibility. In three pairs of de-hulled and non-de-hulled soybean meals from the same respective sources, de-hulling increased the apparent digestibility of lysine, methionine, threonine and cystine by 1.42%, 2.06%, 2.18% and 1.40%, respectively; the true digestibility of lysine, methionine, threonine and cystine was increased by 1.65%, 1.94%, 2.30% and 1.82%, respectively. A prediction equation for true ileal amino acid digestibility (including lysine and arginine) was established by multivariate linear regression; the independent variables included the relevant amino acid content, organic matter, crude protein, ether extract and nitrogen-free extract. The $R^2$ values for lysine and arginine were 0.596 and 0.531, respectively. Based on crude protein content, a prediction equation for lysine and arginine content in soybean meal was also established by simple linear regression; the $R^2$ values for lysine and arginine were both 0.636.
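Prediction equations of this kind are ordinary least-squares fits of digestibility (or amino acid content) on proximate composition; a minimal sketch with synthetic data (the predictors mirror those named above, but the numbers are not the study's) is:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic soybean-meal samples; columns mirror the predictors named above
# (lysine, organic matter, crude protein, ether extract, nitrogen-free extract).
n = 18
X = np.column_stack([
    rng.normal(2.9, 0.15, n),   # lysine, % of DM
    rng.normal(93.0, 0.8, n),   # organic matter, %
    rng.normal(46.0, 2.0, n),   # crude protein, %
    rng.normal(1.5, 0.3, n),    # ether extract, %
    rng.normal(32.0, 2.0, n),   # nitrogen-free extract, %
])
true_dig = 80.0 + 1.5 * X[:, 0] + 0.10 * X[:, 2] + rng.normal(0.0, 0.4, n)

# Multivariate linear regression: true ileal digestibility ~ composition.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, true_dig, rcond=None)
pred = A @ coef
r2 = 1.0 - np.sum((true_dig - pred) ** 2) / np.sum((true_dig - true_dig.mean()) ** 2)
print("coefficients:", np.round(coef, 3))
print("R^2 =", round(r2, 3))
```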
|
CommonCrawl
|
Zinov'ev, Victor Alexandrovich
Statistics Math-Net.Ru
Total publications: 69
Scientific articles: 66
This page: 2214
Abstract pages: 22627
Full texts: 7125
References: 1267
Doctor of physico-mathematical sciences
http://www.mathnet.ru/eng/person17301
List of publications on Google Scholar
List of publications on ZentralBlatt
https://mathscinet.ams.org/mathscinet/MRAuthorID/193276
Publications in Math-Net.Ru
1. J. Borges, J. Rifà, V. A. Zinoviev, "On completely regular codes", Probl. Peredachi Inf., 55:1 (2019), 3–50 ; Problems Inform. Transmission, 55:1 (2019), 1–45
2. L. A. Bassalygo, V. A. Zinoviev, V. S. Lebedev, "On $m$-near-resolvable block designs and $q$-ary constant-weight codes", Probl. Peredachi Inf., 54:3 (2018), 54–61 ; Problems Inform. Transmission, 54:3 (2018), 245–252
3. L. A. Bassalygo, V. A. Zinoviev, "Remark on balanced incomplete block designs, near-resolvable block designs, and $q$-ary constant-weight codes", Probl. Peredachi Inf., 53:1 (2017), 56–59 ; Problems Inform. Transmission, 53:1 (2017), 51–54
4. V. A. Zinoviev, D. V. Zinoviev, "Generalized Preparata codes and $2$-resolvable Steiner quadruple systems", Probl. Peredachi Inf., 52:2 (2016), 15–36 ; Problems Inform. Transmission, 52:2 (2016), 114–133
5. L. A. Bassalygo, V. A. Zinoviev, "One class of permutation polynomials over finite fields of even characteristic", Mosc. Math. J., 15:4 (2015), 703–713
6. L. A. Bassalygo, V. A. Zinoviev, "Optimal almost equisymbol codes based on difference matrices", Probl. Peredachi Inf., 50:4 (2014), 15–21 ; Problems Inform. Transmission, 50:4 (2014), 313–319
7. V. A. Zinoviev, D. V. Zinoviev, "Non-full-rank Steiner quadruple systems $S(v,4,3)$", Probl. Peredachi Inf., 50:3 (2014), 76–86 ; Problems Inform. Transmission, 50:3 (2014), 270–279
8. V. A. Zinoviev, D. V. Zinoviev, "Structure of Steiner triple systems $S(2^m-1,3,2)$ of rank $2^m-m+2$ over $\mathbb F_2$", Probl. Peredachi Inf., 49:3 (2013), 40–56 ; Problems Inform. Transmission, 49:3 (2013), 232–248
9. V. A. Zinoviev, D. V. Zinoviev, "Steiner triple systems $S(2^m-1,3,2)$ of rank $2^m-m+1$ over $\mathbb F_2$", Probl. Peredachi Inf., 48:2 (2012), 21–47 ; Problems Inform. Transmission, 48:2 (2012), 102–126
10. V. A. Zinoviev, D. V. Zinoviev, "Steiner systems $S(v,k,k-1)$: components and rank", Probl. Peredachi Inf., 47:2 (2011), 52–71 ; Problems Inform. Transmission, 47:2 (2011), 130–148
11. L. A. Bassalygo, V. A. Zinov'ev, "Exact Values of the Sums of Multiplicative Characters of Polynomials over Finite Fields", Mat. Zametki, 88:3 (2010), 340–349 ; Math. Notes, 88:3 (2010), 308–316
12. V. A. Zinoviev, D. V. Zinoviev, "Binary perfect and extended perfect codes of lengths 15 and 16 with ranks 13 and 14", Probl. Peredachi Inf., 46:1 (2010), 20–24 ; Problems Inform. Transmission, 46:1 (2010), 17–21
13. V. A. Zinoviev, D. V. Zinoviev, "On one transformation of Steiner quadruple systems $S(v,4,3)$", Probl. Peredachi Inf., 45:4 (2009), 26–42 ; Problems Inform. Transmission, 45:4 (2009), 317–332
14. V. A. Zinoviev, T. Ericson, "Fourier-invariant pairs of partitions of finite abelian groups and association schemes", Probl. Peredachi Inf., 45:3 (2009), 33–44 ; Problems Inform. Transmission, 45:3 (2009), 221–231
15. L. A. Bassalygo, V. A. Zinov'ev, "Polynomials of Special Form over a Finite Field with an Exact Value of the Trigonometric Sum", Mat. Zametki, 82:1 (2007), 3–10 ; Math. Notes, 82:1 (2007), 3–9
16. G. T. Bogdanova, V. A. Zinov'ev, T. J. Todorov, "On the Construction of $q$-ary Equidistant Codes", Probl. Peredachi Inf., 43:4 (2007), 13–36 ; Problems Inform. Transmission, 43:4 (2007), 280–302
17. V. A. Zinov'ev, J. Rifà, "On New Completely Regular $q$-ary Codes", Probl. Peredachi Inf., 43:2 (2007), 34–51 ; Problems Inform. Transmission, 43:2 (2007), 97–112
18. V. A. Zinov'ev, D. V. Zinov'ev, "On Resolvability of Steiner Systems $S(v=2^m,4,3)$ of Rank $r\le v-m+1$ over $\mathbb F_2$", Probl. Peredachi Inf., 43:1 (2007), 39–55 ; Problems Inform. Transmission, 43:1 (2007), 33–47
19. V. A. Zinov'ev, D. V. Zinov'ev, "Classification of Steiner Quadruple Systems of Order 16 and of Rank 14", Probl. Peredachi Inf., 42:3 (2006), 59–72 ; Problems Inform. Transmission, 42:3 (2006), 217–229
20. L. A. Bassalygo, S. M. Dodunekov, V. A. Zinov'ev, T. Helleseth, "The Gray–Rankin Bound for Nonbinary Codes", Probl. Peredachi Inf., 42:3 (2006), 37–44 ; Problems Inform. Transmission, 42:3 (2006), 197–203
21. V. A. Zinov'ev, D. V. Zinov'ev, "Binary Extended Perfect Codes of Length 16 and Rank 14", Probl. Peredachi Inf., 42:2 (2006), 63–80 ; Problems Inform. Transmission, 42:2 (2006), 123–138
22. V. A. Zinov'ev, D. V. Zinov'ev, "Vasil'ev Codes of Length $n=2^m$ and Doubling of Steiner Systems $S(n,4,3)$ of a Given Rank", Probl. Peredachi Inf., 42:1 (2006), 13–33 ; Problems Inform. Transmission, 42:1 (2006), 10–29
23. L. A. Bassalygo, V. A. Zinov'ev, "On Polynomials of Special Form over a Finite Field of Odd Characteristic Attaining the Weil Bound", Mat. Zametki, 78:1 (2005), 16–25 ; Math. Notes, 78:1 (2005), 14–22
24. V. A. Zinov'ev, T. Helleseth, P. Charpin, "On Cosets of Weight 4 of Binary BCH Codes with Minimum Distance 8 and Exponential Sums", Probl. Peredachi Inf., 41:4 (2005), 36–56 ; Problems Inform. Transmission, 41:4 (2005), 331–348
25. V. A. Zinov'ev, D. V. Zinov'ev, "Classification of Steiner Quadruple Systems of Order 16 and of Rank at Most 13", Probl. Peredachi Inf., 40:4 (2004), 48–67 ; Problems Inform. Transmission, 40:4 (2004), 337–355
26. V. A. Zinov'ev, T. Helleseth, "On Weight Distributions of Cosets of Goethals-Like Codes", Probl. Peredachi Inf., 40:2 (2004), 19–36 ; Problems Inform. Transmission, 40:2 (2004), 118–134
27. V. A. Zinov'ev, D. V. Zinov'ev, "Binary Perfect Codes of Length 15 by Generalized Concatenated Construction", Probl. Peredachi Inf., 40:1 (2004), 27–39 ; Problems Inform. Transmission, 40:1 (2004), 25–36
28. L. A. Bassalygo, V. A. Zinov'ev, "On Polynomials over a Finite Field of Even Characteristic with Maximum Absolute Value of the Trigonometric Sum", Mat. Zametki, 72:2 (2002), 171–177 ; Math. Notes, 72:2 (2002), 152–157
29. V. A. Zinov'ev, D. V. Zinov'ev, "Binary Extended Perfect Codes of Length 16 by the Generalized Concatenated Construction", Probl. Peredachi Inf., 38:4 (2002), 56–84 ; Problems Inform. Transmission, 38:4 (2002), 296–322
30. V. A. Zinov'ev, A. S. Lobstein, "On Generalized Concatenated Constructions of Perfect Binary Nonlinear Codes", Probl. Peredachi Inf., 36:4 (2000), 59–73 ; Problems Inform. Transmission, 36:4 (2000), 336–348
31. S. M. Dodunekov, V. A. Zinov'ev, J. Nilsson, "On Algebraic Decoding of Some Maximal Quaternary Codes and the Binary Golay Code", Probl. Peredachi Inf., 35:4 (1999), 59–67 ; Problems Inform. Transmission, 35:4 (1999), 338–345
32. V. A. Zinov'ev, T. Ericson, "New Lower Bounds for Contact Numbers in Small Dimensions", Probl. Peredachi Inf., 35:4 (1999), 3–11 ; Problems Inform. Transmission, 35:4 (1999), 287–294
33. P. Charpin, A. Tietäväinen, V. A. Zinov'ev, "On Binary Cyclic Codes with Minimum Distance $d=3$", Probl. Peredachi Inf., 33:4 (1997), 3–14 ; Problems Inform. Transmission, 33:4 (1997), 287–296
34. L. A. Bassalygo, V. A. Zinov'ev, "Polynomials of special form over a finite field with maximum modulus of the trigonometric sum", Uspekhi Mat. Nauk, 52:2(314) (1997), 31–44 ; Russian Math. Surveys, 52:2 (1997), 271–284
35. V. A. Zinov'ev, T. Ericson, "On Fourier-Invariant Partitions of Finite Abelian Groups and the MacWilliams Identity for Group Codes", Probl. Peredachi Inf., 32:1 (1996), 137–143 ; Problems Inform. Transmission, 32:1 (1996), 117–122
36. V. A. Zinov'ev, G. L. Katsman, "Universal Code Families", Probl. Peredachi Inf., 29:2 (1993), 3–8 ; Problems Inform. Transmission, 29:2 (1993), 95–100
37. V. A. Zinov'ev, T. Ericson, "New Packings on a Finite-Dimensional Euclidean Sphere", Probl. Peredachi Inf., 28:2 (1992), 47–53 ; Problems Inform. Transmission, 28:2 (1992), 141–146
38. S. M. Dodunekov, V. A. Zinov'ev, T. Ericson, "Concatenation Method for Construction of Spherical Codes in $n$-Dimensional Euclidean Space", Probl. Peredachi Inf., 27:4 (1991), 34–38 ; Problems Inform. Transmission, 27:4 (1991), 303–307
39. V. A. Zinov'ev, S. N. Litsyn, S. L. Portnoi, "Concatenated Codes in Euclidean Space", Probl. Peredachi Inf., 25:3 (1989), 62–75 ; Problems Inform. Transmission, 25:3 (1989), 219–228
40. V. A. Zinov'ev, S. N. Litsyn, "Lower bounds for complete rational trigonometric sums", Uspekhi Mat. Nauk, 43:1(259) (1988), 199–200 ; Russian Math. Surveys, 43:1 (1988), 259–260
41. V. A. Zinov'ev, S. N. Litsyn, "On the General Code Shortening Construction", Probl. Peredachi Inf., 23:2 (1987), 28–34 ; Problems Inform. Transmission, 23:2 (1987), 111–116
42. V. A. Zinov'ev, T. Ericson, "On Concatenated Constant-Weight Codes Beyond the Varshamov–Gilbert Bound", Probl. Peredachi Inf., 23:1 (1987), 110–111 ; Problems Inform. Transmission, 23:1 (1987), 110–111
43. V. A. Zinov'ev, S. N. Litsyn, "On the Dual Distance of BCH Codes", Probl. Peredachi Inf., 22:4 (1986), 29–34 ; Problems Inform. Transmission, 22:4 (1986), 272–277
44. V. A. Zinov'ev, S. N. Litsyn, "On Codes Beyond the Gilbert Bound", Probl. Peredachi Inf., 21:1 (1985), 109–111
45. V. A. Zinov'ev, "On a Generalization of the Johnson Bound for Constant-Weight Codes", Probl. Peredachi Inf., 20:3 (1984), 105–108
46. V. A. Zinov'ev, S. N. Litsyn, "On Shortening of Codes", Probl. Peredachi Inf., 20:1 (1984), 3–11 ; Problems Inform. Transmission, 20:1 (1984), 1–7
47. L. A. Bassalygo, V. A. Zinov'ev, "Some simple consequences of coding theory for combinatorial problems of packings and coverings", Mat. Zametki, 34:2 (1983), 291–295 ; Math. Notes, 34:2 (1983), 629–631
48. G. V. Zaitsev, V. A. Zinov'ev, N. V. Semakov, "Minimum-Check-Density Codes for Correcting Bytes of Errors, Erasures, or Defects", Probl. Peredachi Inf., 19:3 (1983), 29–37 ; Problems Inform. Transmission, 19:3 (1983), 197–204
49. V. A. Zinov'ev, S. N. Litsyn, "On Methods of Code Lengthening", Probl. Peredachi Inf., 18:4 (1982), 29–42 ; Problems Inform. Transmission, 18:4 (1982), 244–254
50. V. A. Zinov'ev, "Generalized Concatenated Codes for Channels with Error Bursts and Independent Errors", Probl. Peredachi Inf., 17:4 (1981), 53–62 ; Problems Inform. Transmission, 17:4 (1981), 254–260
51. V. A. Zinov'ev, V. V. Zyablov, "Codes with Unequal Protection of Information Symbols", Probl. Peredachi Inf., 15:3 (1979), 50–60 ; Problems Inform. Transmission, 15:3 (1979), 197–205
52. L. A. Bassalygo, V. A. Zinov'ev, V. V. Zyablov, M. S. Pinsker, G. Sh. Poltyrev, "Bounds for Codes with Unequal Protection of Two Sets of Messag", Probl. Peredachi Inf., 15:3 (1979), 40–49 ; Problems Inform. Transmission, 15:3 (1979), 190–197
53. V. A. Zinov'ev, V. V. Zyablov, "Correction of Error Bursts and Independent Errors using Generalized Concatenated Codes", Probl. Peredachi Inf., 15:2 (1979), 58–70 ; Problems Inform. Transmission, 15:2 (1979), 125–134
54. I. I. Dumer, V. A. Zinov'ev, "Some New Maximal Codes over $GF(4)$", Probl. Peredachi Inf., 14:3 (1978), 24–34 ; Problems Inform. Transmission, 14:3 (1978), 174–181
55. V. A. Zinov'ev, V. V. Zyablov, "Decoding of Nonlinear Generalized Concatenated Codes", Probl. Peredachi Inf., 14:2 (1978), 46–52 ; Problems Inform. Transmission, 14:2 (1978), 110–114
56. L. A. Bassalygo, V. A. Zinov'ev, "Remark on Uniformly Packed Codes", Probl. Peredachi Inf., 13:3 (1977), 22–25 ; Problems Inform. Transmission, 13:3 (1977), 178–180
57. V. A. Zinov'ev, "Algebraic theory of block codes detecting independent", Itogi Nauki i Tekhniki. Ser. Teor. Veroyatn. Mat. Stat. Teor. Kibern., 13 (1976), 189–234 ; J. Soviet Math., 7:2 (1977), 243–271
58. V. A. Zinov'ev, "Generalized Cascade Codes", Probl. Peredachi Inf., 12:1 (1976), 5–15 ; Problems Inform. Transmission, 12:1 (1976), 2–9
59. L. A. Bassalygo, V. A. Zinov'ev, V. K. Leont'ev, N. I. Fel'dman, "Nonexistence of Perfect Codes for Some Composite Alphabets", Probl. Peredachi Inf., 11:3 (1975), 3–13 ; Problems Inform. Transmission, 11:3 (1975), 181–189
60. L. A. Bassalygo, G. V. Zaitsev, V. A. Zinov'ev, "Uniformly Packed Codes", Probl. Peredachi Inf., 10:1 (1974), 9–14 ; Problems Inform. Transmission, 10:1 (1974), 6–9
61. V. A. Zinov'ev, V. K. Leont'ev, "On Perfect Codes", Probl. Peredachi Inf., 8:1 (1972), 26–35 ; Problems Inform. Transmission, 8:1 (1972), 17–24
62. N. V. Semakov, V. A. Zinov'ev, G. V. Zaitsev, "Uniformly Packed Codes", Probl. Peredachi Inf., 7:1 (1971), 38–50 ; Problems Inform. Transmission, 7:1 (1971), 30–39
63. N. V. Semakov, V. A. Zinov'ev, "Balanced Codes and Tactical Configurations", Probl. Peredachi Inf., 5:3 (1969), 28–36 ; Problems Inform. Transmission, 5:3 (1969), 22–28
64. N. V. Semakov, V. A. Zinov'ev, G. V. Zaitsev, "A Class of Maximum Equidistant Codes", Probl. Peredachi Inf., 5:2 (1969), 84–87 ; Problems Inform. Transmission, 5:2 (1969), 65–68
65. N. V. Semakov, V. A. Zinov'ev, "Complete and Quasi-complete Balanced Codes", Probl. Peredachi Inf., 5:2 (1969), 14–18 ; Problems Inform. Transmission, 5:2 (1969), 11–13
66. N. V. Semakov, V. A. Zinov'ev, "Equidistant $q$-ary Codes with Maximal Distance and Resolvable Balanced Incomplete Block Designs", Probl. Peredachi Inf., 4:2 (1968), 3–10 ; Problems Inform. Transmission, 4:2 (1968), 1–7
67. J. Borges, J. Rifà, V. A. Zinoviev, "Erratum to: "On completely regular codes" [Problemy Peredachi Informatsii 55, no. 1, 3–50 (2019)]", Probl. Peredachi Inf., 55:3 (2019), 109 ; Problems Inform. Transmission, 55:3 (2019), 298
68. V. A. Zinoviev, D. V. Zinoviev, "Remark on "Steiner triple systems $S(2^m-1,3,2)$ of rank $2^m-m+1$ over $\mathbb F_2$" published in Probl. Peredachi Inf., 2012, no. 2", Probl. Peredachi Inf., 49:2 (2013), 107–111 ; Problems Inform. Transmission, 49:2 (2013), 192–195
69. A. M. Barg, L. A. Bassalygo, V. M. Blinovskii, M. V. Burnashev, N. D. Vvedenskaya, G. K. Golubev, I. I. Dumer, K. Sh. Zigangirov, V. V. Zyablov, V. A. Zinov'ev, G. A. Kabatiansky, V. D. Kolesnik, N. A. Kuznetsov, V. A. Malyshev, R. A. Minlos, D. Yu. Nogin, I. A. Ovseevich, V. V. Prelov, Yu. L. Sagalovich, V. M. Tikhomirov, R. Z. Khas'minskii, B. S. Tsybakov, "Mark Semenovich Pinsker. In Memoriam", Probl. Peredachi Inf., 40:1 (2004), 3–5 ; Problems Inform. Transmission, 40:1 (2004), 1–4
Presentations in Math-Net.Ru
1. On a family of spherical codes with large distance
V. A. Zinov'ev
Seminar on Coding Theory
2. On perfect codes in the Lee metric
3. Binary Perfect Codes with the Specified Rank
Institute for Information Transmission Problems of the Russian Academy of Sciences (Kharkevich Institute), Moscow
|
CommonCrawl
|
Upper semicontinuity of random attractors for the stochastic non-autonomous suspension bridge equation with memory
Ling Xu 1,2,, , Jianhua Huang 1, and Qiaozhen Ma 2,
College of Science, National University of Defense Technology, Changsha 410073, China
College of Mathematics and Statistics, Northwest Normal University, Lanzhou, Gansu 730070, China
* Corresponding author: Ling Xu
Received October 2018 Revised December 2018 Published June 2019
Fund Project: The authors are supported by the NSF of China (11361053,11771449), the NSF of Gansu Province (17JR5RA069), the University Project of Gansu Province(2017B-90) and the Project of Northwest Normal University(NWNU-LKQN-16-16; NWNU-LKQN-18-14)
This paper is devoted to the well-posedness and long-time behavior of a stochastic suspension bridge equation with memory effect. The existence of a random attractor for the stochastic suspension bridge equation with memory is established. Moreover, the upper semicontinuity of the random attractors is also established as the coefficient of the random term approaches zero.
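For orientation, the upper-semicontinuity property referred to here is conventionally stated as follows (a standard formulation written with generic notation, not the paper's exact setting): writing $\mathcal{A}_{\varepsilon}(\omega)$ for the random attractor of the system with noise intensity $\varepsilon$, $\mathcal{A}_{0}$ for the attractor of the limit deterministic problem, and $\mathrm{dist}_{X}$ for the Hausdorff semidistance in the phase space $X$, $$\lim_{\varepsilon\to 0^{+}}\mathrm{dist}_{X}\bigl(\mathcal{A}_{\varepsilon}(\omega),\mathcal{A}_{0}\bigr)=0 \quad \text{for } \mathbb{P}\text{-a.e. } \omega\in\Omega.$$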
Keywords: Suspension bridge equation, random dynamical system, random attractors, memory, upper semicontinuity.
Mathematics Subject Classification: Primary: 60H15, 35Q35; Secondary: 35B40.
Citation: Ling Xu, Jianhua Huang, Qiaozhen Ma. Upper semicontinuity of random attractors for the stochastic non-autonomous suspension bridge equation with memory. Discrete & Continuous Dynamical Systems - B, 2019, 24 (11) : 5959-5979. doi: 10.3934/dcdsb.2019115
|
CommonCrawl
|