Prevalence of Diarrheagenic Escherichia coli (DEC) and Salmonella spp. with zoonotic potential in urban rats in Salvador, Brazil
C. Pimentel Sobrinho, J. Lima Godoi, F. Neves Souza, C. Graco Zeppelini, V. Espirito Santo, D. Carvalho Santiago, R. Sady Alves, H. Khalil, T. Carvalho Pereira, M. Hanzen Pinna, M. Begon, S. Machado Cordeiro, J. Neves Reis, F. Costa
Journal: Epidemiology & Infection / Accepted manuscript
Published online by Cambridge University Press: 20 November 2020, pp. 1-9
Greenhouse gas balance and carbon footprint of pasture-based beef cattle production systems in the tropical region (Atlantic Forest biome)
P. P. A. Oliveira, A. Berndt, A. F. Pedroso, T. C. Alves, J. R. M. Pezzopane, L. S. Sakamoto, F. L. Henrique, P. H. M. Rodrigues
The production of beef cattle in the Atlantic Forest biome mostly takes place in pastoral production systems. There are millions of hectares covered with pastures in this biome, including degraded pasture (DP), and only a small area of the original Atlantic Forest has been preserved in the tropics, implying that actions must be taken by the livestock sector to improve sustainability. Intensification makes it possible to produce the same amount of beef, or more, in a smaller area; however, the environmental impacts must be assessed. Regarding climate change, C dynamics are essential to define which beef cattle systems are sustainable. The objectives of this study were to investigate the C balance (t CO2e./ha per year), the intensity of C emission (kg CO2e./kg BW or carcass) and the C footprint (t CO2e./ha per year) of pasture-based beef cattle production systems, inside the farm gate and considering the inputs. The results were used to calculate the number of trees to be planted in beef cattle production systems to mitigate greenhouse gas (GHG) emissions. The GHG emission and C balance, over 2 years, were calculated based on the global warming potential (GWP) of AR4 and the GWP of AR5. Forty-eight steers were allotted to four grazing systems: DP, irrigated high stocking rate pasture (IHS), rainfed high stocking rate pasture (RHS) and rainfed medium stocking rate pasture (RMS). The rainfed systems (RHS and RMS) presented the lowest C footprints (−1.22 and 0.45 t CO2e./ha per year, respectively), with C credits to RMS when using the GWP of AR4. The IHS system showed less favorable results for C footprint (−15.71 t CO2e./ha per year), but results were better when emissions were expressed in relation to the annual BW gain (−10.21 kg CO2e./kg BW) because of its higher yield. Although the DP system had an intermediate result for C footprint (−6.23 t CO2e./ha per year), the result was the worst (−30.21 kg CO2e./kg BW) when the index was expressed in relation to the annual BW gain, because in addition to GHG emissions from the animals in the system there were also losses in the annual rate of C sequestration. Notably, the intensification in pasture management had a land-saving effect (3.63 ha for IHS, 1.90 for RHS and 1.19 for RMS), contributing to the preservation of the tropical forest.
The role of coherent structures and inhomogeneity in near-field interscale turbulent energy transfers
F. Alves Portela, G. Papadakis, J. C. Vassilicos
Journal: Journal of Fluid Mechanics / Volume 896 / 10 August 2020
Published online by Cambridge University Press: 01 June 2020, A16
Print publication: 10 August 2020
We use direct numerical simulation data to study interscale and interspace energy exchanges in the near field of a turbulent wake of a square prism in terms of a Kármán–Howarth–Monin–Hill (KHMH) equation written for a triple decomposition of the velocity field which takes into account the presence of quasi-periodic vortex shedding coherent structures. Concentrating attention on the plane of the mean flow and on the geometric centreline, we calculate orientation averages of every term in the KHMH equation. The near field considered here ranges between two and eight times the width $d$ of the square prism and is very inhomogeneous and out of equilibrium so that non-stationarity and inhomogeneity contributions to the KHMH balance are dominant. The mean flow produces kinetic energy which feeds the vortex shedding coherent structures. In turn, these coherent structures transfer their energy to the stochastic turbulent fluctuations over all length scales $r$ from the Taylor length $\lambda$ to $d$ and dominate spatial turbulent transport of small-scale two-point stochastic turbulent fluctuations. The orientation-averaged nonlinear interscale transfer rate $\Pi^{a}$ which was found to be approximately independent of $r$ by Alves Portela et al. (J. Fluid Mech., vol. 825, 2017, pp. 315–352) in the range $\lambda\leqslant r\leqslant 0.3d$ at a distance $x_{1}=2d$ from the square prism requires an interscale transfer contribution of coherent structures for this approximate constancy. However, the near constancy of $\Pi^{a}$ in the range $\lambda\leqslant r\leqslant d$ at $x_{1}=8d$ which was also found by Alves Portela et al. (2017) is mostly attributable to stochastic fluctuations. Even so, the proximity of $-\Pi^{a}$ to the turbulence dissipation rate $\varepsilon$ in the range $\lambda\leqslant r\leqslant d$ at $x_{1}=8d$ does require interscale transfer contributions of the coherent structures. Spatial inhomogeneity also makes a direct and distinct contribution to $\Pi^{a}$, and the constancy of $-\Pi^{a}/\varepsilon$ close to $1$ would not have been possible without it either in this near-field flow. Finally, the pressure-velocity term is also an important contributor to the KHMH balance in this near field, particularly at scales $r$ larger than approximately $0.4d$, and appears to correlate with the purely stochastic nonlinear interscale transfer rate when the orientation average is lifted.
P01-08 - Mania, Mania with Delirium and Delirious Mania
B. Barahona-Corrêa, J. Fernandes, J. Alves da Silva, B. Neto, J. Almeida
Since Bell's original description, delirious mania (DM) has been repeatedly rediscovered and renamed, resulting in much confusion as to its meaning. Definitions range from mania with self-limited temporal-spatial disorientation to a fatal, delirious catatonic syndrome with euphoric mood, high fever and autonomic instability. Moreover, it remains unclear whether DM is a specific clinical entity or an unspecific, unpredictable complication of mania, and whether it is a useful diagnostic category.
To identify the frequency and clinical features of DM and mania with delirium.
We reviewed all admissions to our acute inpatient unit with mania, hypomania or mixed affective state in 2006 and 2007. Cases with delirious features and cases with a working diagnosis of DM were reviewed in detail. The three groups (no delirium, delirious features and DM) were compared for general demographic and clinical variables, as well as features specifically associated with DM (e.g., catatonia, nakedness, inappropriate toileting, unexplained fever, etc.).
We found 100 patients with mania, hypomania or mixed affective state. Fourteen had medically unexplained delirium, 4 of them with a final diagnosis of DM. DM cases (but not non-DM mania cases with delirious features) had extremely long durations of stay, acute onset, hyperthermia, catatonia, autonomic instability, anarchic sleep, shouting/coprolalia and delirium persisting for over a week, and were more likely to receive ECT. Moreover, in three of them DM occurred in most manic/mixed affective episodes.
DM is a rare occurrence in bipolar disorder. It has typical clinical features and may be recurrent.
FC12.04 May P300 help differentiate the syndromatic patterns in schizophrenia?
J. Maltez, N. Alves, J.P. Foreid, T. Pimentel, M. Abreu
Journal: European Psychiatry / Volume 15 / Issue S2 / October 2000
Published online by Cambridge University Press: 16 April 2020, p. 306s
Preliminary Data from Famidem Survey: Can we assume who is at Risk Regarding Informal Caregiving in Dementia?
M. Gonçalves Pereira, J. Alves da Silva, I. Carmo, A.L. Papoila, A.M. Cardoso, C. Conceição, M. Gomes, M.M.A. Neves, A. Neves, L. Santos, R. Mateos
In meridional European countries such as Portugal, informal caregivers are almost always close relatives, either key-relatives (those more involved) or not. There are few systematic comparisons between the experience of key-relatives/primary caregivers (PC) and other/secondary caregivers (SC) in psychogeriatrics. We present some preliminary data from the FAMIDEM (Families of People with Dementia) survey.
Non-randomised cross-sectional study comparing two related samples of caregivers (PC versus SC) of 41 patients with DSM-IV dementia from outpatient practices in Lisbon (Portugal). Caregivers' assessments included: Zarit Burden Interview, Caregiver Activity Survey (CAS), Positive Aspects of Caregiving, GHQ-12, Social Network Questionnaire and Dementia Knowledge Questionnaire.
Patients' mean age was 78.7 years (SD 7.9). Twenty-four (58.5%) were women and 58.5% had Alzheimer disease. PC were older than SC (p=0.000) and tended to live with the patient (p=0.000). They reported less emotional support (p=0.021) but higher objective burden (CAS; p=0.002). Regarding all other outcome variables, significant differences between groups were not found. Within the global sample, comparing spouses (n=23) and adult children/other relatives (n=59) yielded interestingly different preliminary results, e.g. higher GHQ-12 levels (p=0.010).
The experience of caregiving is possibly different for PC and SC, but further research is warranted in order to define who really is at risk. Being a spouse may be much more determinant, although most spouses are PC as well. For the moment, it seems prudent not to exclude SC from risk assessments. The final FAMIDEM results, even lacking generalizability, will probably provide interesting clues.
Autoantibodies in Bipolar and Cluster B Personality Disorders
J. Traça Almeida, B. Barahona-Correa, A. Santos, J. Alves da Silva, P. Filipe, M. Talina, M. Xavier
Prevalence of depression and other common psychiatric disorders in autoimmune diseases has been extensively documented. The association between subclinical autoimmunity and behavioural or psychiatric syndromes remains less studied. The best known example is raised titres of autoantibodies with high affinity for the basal ganglia in some obsessive compulsive spectrum syndromes (e.g. Paediatric Autoimmune Neuropsychiatric Disorders Associated with Streptococcal infections). The possible role of autoimmunity in impulse control disorders remains understudied.
We proposed to study the relation between autoimmunity, affective bipolarity and impulsive psychopathology.
The sample comprised 14 bipolar and 10 cluster B personality disorder inpatients. Titres for rheumatoid factor (RA), antithyroglobulin (ATG), antiperoxidase (APO), antinuclear (ANA), anti-neutrophil cytoplasmic (ANCA) and antistreptolysin (ASO) antibodies were measured in all subjects. Psychiatric assessment: non-structured psychiatric interview, MINI International Neuropsychiatric Interview and Millon Clinical Multiaxial Inventory-II.
21.4% of bipolar patients had a positive ATG titre vs 11.1% in the cluster B personality group. 28.6% of bipolar patients had a positive APO titre vs 22.2% in the cluster B personality group. 16.7% of bipolar patients had a positive ASO titre vs 30.0% in the cluster B personality group. None of these differences reached significance.
ASO titre correlated significantly with antisocial (rho=0.435, p=0.043) and self-destructive (rho=0.461, p=0.031) ratings and almost significantly with borderline (rho=0.420, p=0.052) ratings.
The results obtained partly agree with the existing studies. As far as we know, a possible correlation between ASO titres and impulsive behaviour has not been previously described. The results obtained call for further investigation of the subject.
Is Feigned Psychosis a Pathway to Schizophrenia?
J. Traça Almeida, J. Alves da Silva, M. Xavier, R. Gusmão
Factitious disorders (FD) are characterized by the intentional production of physical, psychological or mixed symptoms that mimic various clinical syndromes, with no apparent advantage for the individual concerned other than allowing him to assume the sick role. A large body of work has accumulated on FD, but the majority of published data deal with the physical variant of the disease, with comparatively few reports on psychiatric FD. Although there are many different presentations of psychiatric FD, the factitious psychosis subset justifies particular attention. Factitious psychosis may be prodromal of a genuine chronic psychosis, usually in the context of a personality disorder. Published data show Munchausen psychosis, a severe subset of FD psychosis, with a prevalence of 0.25% of all inpatient admissions, and global FD psychosis attaining 4.1% of all diagnosed psychoses, generally with a poor prognosis.
The scantiness of studies on the subject of psychiatric FD, and factitious psychosis in particular, despite its significant prevalence, coupled with the fact that its recognition calls for a radically different approach compared with the physical variant, stresses the need for case reporting.
We present four clinical cases with discussion of the underlying pathology and outcome, and a systematic review of the literature of FD psychosis case reports. This is followed by further discussion addressing the recognition of factitious psychosis, its etiological contributing factors, management, effects on staff and diagnostic criteria.
EPA-0485 - Evaluating the Somatic Impairments in the Elderly: Preliminary Results of the 10/66-Dementia Research Group Prevalence Study in Portugal
M. Xavier, AM. Cardoso, C. Raminhos, J. Alves da Silva, A. Verdelho, A. Fernandes, C. Ferri, M. Prince, M. Gonçalves-Pereira
Somatic comorbidities are common among elderly patients with mental health problems, namely dementia and depression. Quite often, somatic problems are associated with substantial impairment in daily routines, as well as with a worse outcome of the neuropsychiatric condition.
To investigate the level of impairment due to comorbid somatic problems in the elderly, as part of the implementation of the 10/66-Dementia Research Group Population-based Research Protocol in Portuguese settings.
A cross-sectional survey was implemented of all residents aged 65 or more in a semi-rural area in Southern Portugal. Evaluation included a cognitive module and the Geriatric Mental State-AGECAT (GDS). Training of the field researchers was conducted under the supervision of the 10/66-DRG coordinators (CF, MP).
703 elderly participants were evaluated. Interference with daily activities was present in every area assessed, with moderate to severe impact in the following areas: arthritis or rheumatism (36.9%), eyesight problems (19.8%), hypertension (10.5%) and gastro-intestinal conditions (10.4%). 48.9% of the participants had at least one contact with a primary care health centre in the last three months, and 22.5% had at least one contact with a doctor in a general hospital.
Results showed a relevant degree of impairment due to somatic conditions, and a high use of services, namely at the primary care level. The significant prevalence of comorbid somatic conditions should be taken into account in the organization of services directed to older patients with mental health problems, which has been considered a priority in the Portuguese Mental Health Plan 2007–2016.
1223 – Executive Functions, Visuoconstructive Ability And Memory In Institutionalized Elderly
S. Moitinho, M. Marques, H. Espírito Santo, V. Vigário, R. Almeida, J. Matreno, V. Alves, T. Nascimento, M. Costa, M. Tomaz, L. Caldas, L. Ferreira, S. Simões, S. Guadalupe, L. Lemos, F. Daniel
Executive functions (EF) are associated with the frontal lobes, and cognitive decline (CD) is associated with worse results on EF tests.
Objectives/aims
Analyze whether the Frontal Assessment Battery/FAB, which assesses EF, discriminates elders with CD (vs. without CD; Montreal Cognitive Assessment/MoCA), and whether the results obtained with the Rey–Osterrieth Complex Figure Test/ROCF (copy quality, immediate and delayed memory) are associated with the presence/absence of CD. Moreover, we wanted to assess whether copy quality and the 3-minute memory test are associated with FAB results, since these two tests are supposedly associated with EF and with the frontal lobes assessed by the FAB, in contrast to the 20-minute memory test (supposedly related to the temporal area).
556 institutionalized elders (age: M ± SD = 80.2 ± 5.23; range = 60-100) voluntarily completed a sociodemographic questionnaire, the ROCF, the MoCA and the FAB.
FAB and all ROCF tests were associated with the absence/presence of CD. Regarding variables stratified by age and education, FAB was associated with immediate memory but not with copy quality or delayed memory. Without stratification of the ROCF and FAB, correlations confirmed the previous associations and also showed an association between FAB and copy quality.
Results follow the literature regarding the association between immediate memory and EF (associated with the frontal lobes), in contrast to long-term memory, which is associated with the temporal area and was not associated with the FAB. Results concerning copy quality (ROCF) are not consensual.
EPA-0607 – Patterns of Service use in the Elderly: Preliminary Results of the 10/66-dementia Research Group Prevalence Study in Portugal.
M. Xavier, C. Raminhos, AM. Cardoso, J. Alves da Silva, AM. Verdelho, A. Fernandes, C. Ferri, M. Prince, M. Gonçalves-Pereira
Above 60 years of age, prevalence rates of neuropsychiatric disorders double with every 5.1 years of age (from 0.7% at 60-65 years to 23.6% for those aged 85 or older). As the aged population is increasing dramatically in Portugal, a country under a serious financial crisis, it is important to understand whether health services are being used appropriately.
To characterize the use of health services among the elderly, as part of the implementation of the 10/66-Dementia Research Group Population-based Research Protocol in Portugal.
A cross-sectional survey was implemented of all residents aged 65 or more in a semi-rural area in Southern Portugal. Core evaluation included a cognitive module and the Geriatric Mental State-AGECAT (GDS). A structured questionnaire assessed the use of services, including health care providers (public, private), inpatient episodes, medication and costs.
703 participants were evaluated. Almost half of the participants (48.9%) were in contact with public primary care facilities, but only 22.5% had contact with a hospital service. In both settings, nurses and other non-doctor professionals were rarely involved (6.4%) as principal care providers. 11.8% had at least one contact with a private doctor. Inpatient episodes in the last 3 months were very infrequent (3%). The National Health Service covered most costs.
Previous research strongly suggests that health services are not provided equitably to people with mental disorders, namely the elderly. Reliable and cross-culturally comparable information about patterns of care may guide the implementation of adequate management in this area in Portugal.
EPA-0450 – Caregiving in Dementia and in Old Age Depression: Preliminary Results from the 10/66 drg Prevalence Study in Portugal
M. Gonçalves-Pereira, C. Raminhos, A. Cardoso, J. Alves da Silva, M. Caldas de Almeida, C. Ferri, M. Prince, M. Xavier
The burden of neuropsychiatric disorders in the elderly is high, considering patients, their families, and close or extended networks. In Portugal, the 10/66-Dementia Research Group population-based research programmes have been running since 2011, with the community prevalence study. The protocol allows for valid diagnoses of dementia and depression, using comprehensive assessments which include the Geriatric Mental State-AGECAT.
Objectives and aims:
We aimed to analyse informal caregiving arrangements and the psychological experience of caregiving in a subsample drawn from the ongoing 10/66 studies.
We report on 580 residents aged 65+ years of a defined catchment area in Portugal (Mora). Assessments included questionnaires on demographic and caregiving issues, the Self-Report Questionnaire (SRQ) on psychological distress and the Zarit Burden Interview (ZBI) on the caregiving experience.
In this subsample, 94 participants were in need of informal caregiving (dementia accounted for 28 cases, depression for 31, and other chronic physical/psychiatric conditions for the remainder). Most primary caregivers were family relatives (mostly wives and daughters) and were living with the patient. A large number were elderly people themselves (mean age 64.1 ± 16.3 years). Median scores were 3 on the SRQ (range 0-16) and 8 on the ZBI (range 0-66). Those who were caring for participants with more severe disabilities scored significantly higher on both measures.
These preliminary results of the 10/66 epidemiological community studies support previous suggestions that caregiver strain is also high in subgroups of community samples. Most overburdened families (and individual caregivers) lacked appropriate, tailored interventions. Final results will be available soon.
1525 – Ekbom Syndrome With Folie à Deux: a Case Report
J. Alves, V.L. de-Melo-Neto
Delusional parasitosis (Ekbom Syndrome) was first described by Thibierge in 1894 as acarophobia. Nowadays this syndrome is not considered an independent diagnostic category and is defined in DSM-IV-TR as a delusional disorder, somatic subtype. Folie à Deux is characterized by the "transmission" of delusional thoughts from a "primary patient", the inductor, to a "secondary patient", the induced one. The association between delusional parasitosis and Folie à Deux is an uncommon syndrome that was described by Skott in 1978.
To describe a case of Ekbom Syndrome with Folie à Deux between mother and daughter, in which the mother is the "primary patient", despite having borderline intelligence, and the daughter is the induced patient, without any cognitive deficits.
To highlight that some cases of Folie à Deux can occur in a relationship of strictly affective dominance.
Case report.
MRC, 46 years old and illiterate, was referred to the outpatient psychiatric clinic of the University Hospital of the Federal University of Alagoas, Brazil, by a dermatologist of the same hospital because of hyperchromic, scaly, pruritic skin lesions on the legs and back, with no findings at skin biopsy, which the patient attributed to "bugs" under her skin and which her daughter also believed existed, even when her mother showed her desquamated skin as being the bug.
In Folie à Deux with Ekbom Syndrome, the dominance between the primary patient and the induced one can be of an affective nature only, rather than a cognitive nature.
1230 – Selective Attention And Cognitive Decline In Institutionalized Elderly
R. Almeida, M. Marques, H. Espírito Santo, S. Moitinho, V. Vigário, I. Pena, J. Matreno, F. Rodrigues, E. Antunes, D. Simões, A. Costa, A.R. Correia, A.S. Pimentel, V. Alves, T. Nascimento, M. Costa, M. Tomaz, L. Caldas, L. Ferreira, S. Simões, S. Guadalupe, L. Lemos, F. Daniel
When cognitive decline (CD) is present, attention is one of the impaired mental functions. CD is also associated with anxious/depressive symptoms and with some demographic variables, particularly, age.
Investigate the associations between selective attention (Stroop Test: Stroop_Word, Stroop_Color, Difference between Stroop_Word and Stroop_Color, Stroop Ratio_Word, Stroop Ratio_Color and Difference between Stroop Ratio_Word and Stroop Ratio_ Color) and CD (Montreal Cognitive Assessment/MoCA) in institutionalized elders; explore the predictive value of Stroop variables for CD, controlling anxious/depressive symptoms and sociodemographic variables.
140 institutionalized elders (mean age, M = 78.4, SD = 7.48, range = 60-97) voluntarily answered sociodemographic questions, the MoCA, the Geriatric Anxiety Inventory/GAI, the Geriatric Depression Scale/GDS and the Stroop test.
73 elders (52.1%) had CD. Dichotomized MoCA was associated with Stroop_Word, Stroop_Color, Stroop Ratio_Word, Stroop Ratio_Color, GDS and the sociodemographic variable schooling × profession. Age and education were not tested, since MoCA was stratified according to those variables. GDS, Stroop Ratio_Word and Stroop Ratio_Color were shown to predict CD.
There was an association between Stroop_Word, Stroop_Color, Stroop Ratio_Word and Stroop Ratio_Color and CD, confirming that selective attention is reduced when the elderly show CD. GDS and CD were also associated. However, there was no association between dichotomized MoCA and the differences between the correct answers (Stroop_Word and Stroop_Color) and the Ratios (Stroop Ratio_Word and Stroop Ratio_Color). Selective attention and depressive symptoms predicted CD. It would be important to intervene through cognitive rehabilitation with the elders to improve their attention.
P-579 - Neuroleptic Malignant Syndrome
J. Teixeira, A.M. Baptista, A. Moutinho, R. Alves, P. Casquinha
Neuroleptic malignant syndrome (NMS) is a rare but potentially life-threatening idiosyncratic complication of neuroleptic drugs. The Levenson criteria help guide the diagnosis of NMS, and the major manifestations of the syndrome are muscular rigidity, fever, autonomic dysfunction and altered consciousness. NMS mortality is approximately 10 to 20%.
The authors present and discuss the case of a patient with mental retardation who developed neuroleptic malignant syndrome after receiving haloperidol and zuclopenthixol for agitation.
Supportive therapy, including rehydration, electrolyte restoration, paracetamol, dantrolene and biperiden, was given to the patient.
Supportive therapy, dantrolene and biperiden yielded clinical benefits for the neuroleptic malignant syndrome. However, the patient developed acute hepatic failure, probably secondary to dantrolene, requiring admission to an intensive gastroenterological care unit, where he stayed for approximately one month.
Although neuroleptic malignant syndrome and acute liver failure due to dantrolene are rare emergencies, the patient presented in this case developed these two idiosyncratic, rare and potentially fatal reactions due to haloperidol, zuclopenthixol and dantrolene administration. This report represents a successful clinical outcome made possible by an early diagnosis and prompt treatment interventions.
EPA-0420 – Unmet Needs in Portuguese Elderly People: Data from Services Research and the 10/66 Prevalence Surveys on Dementia and Depression
M. Gonçalves-Pereira, F. Barreiros, A. Cardoso, A. Verdelho, J. Alves da Silva, C. Raminhos, A. Fernandes, M. Xavier
The healthcare needs of the elderly are seldom assessed in practice. Research in clinical populations with neuropsychiatric disorders generally reveals high levels of unmet needs. Although there are Portuguese studies in needs assessment, explorations of community or social services' scenarios have been scarce.
By gathering data from health and social services research, and from an epidemiological survey in the same region, we aimed to better characterize the unmet needs of the Portuguese elderly.
We report on studies with older people in Seixal, near Lisbon: 1) the Camberwell Assessment of Need for the Elderly was used for auditing a non-profit organization with day-centre and home support services (n=95), and in a survey of family carers of dementia outpatients (n=116); 2) the 10/66 DRG community prevalence study (n=670) used comprehensive assessments to provide psychiatric diagnoses, data on health and psychosocial needs, and the use of services.
In the social service audit, unmet needs were mainly related to food, company, physical health and daytime activities. Domiciliary care users had more unmet needs than day centre users (p<0.001). Informal caregivers of dementia patients reported information and psychological distress needs. Finally, partial results of the 10/66 DRG study highlighted a high prevalence of depression (20.4%; 95%CI 17.4-23.7) and substantial health services' utilization needs.
Systematic assessments of needs for care generally reveal high proportions of health and psychosocial problems lacking adequate interventions, in clinical and community populations. This may provide a more consistent basis for health services planning.
First-episode psychosis intervention – description of our early intervention model
B. Melo, C. Alves Pereira, R. Cajão, J. Ribeiro Silva, S. Pereira, E. Monteiro
Published online by Cambridge University Press: 23 March 2020, p. s823
Research about the benefits of early diagnosis and treatment of first-episode psychosis has increased significantly in recent decades. Several early intervention programs for psychotic disease have been implemented worldwide in order to improve the prognosis of these patients.
To present a brief description of the first-episode psychosis intervention team of Tondela-Viseu Hospital Centre, Portugal, and its model. We aim to further characterize our population and describe its evolution since 2008.
We aim to clarify the benefits of an early intervention in psychosis.
We conducted a retrospective cohort study of patients followed by our team from November 2008 to September 2016. Demographic and medical data (such as diagnosis, duration of untreated psychosis, treatments and their clinical effectiveness, relapse rate and hospital admissions) were collected from patients' clinical records. The intervention model protocol of this team was also described and analyzed.
This multidisciplinary team consists of three psychiatrists, one child psychiatrist, one psychologist and five reference therapists (from the areas of nursing, social service and occupational therapy). It includes patients diagnosed with first-episode psychosis, aged 16 to 42 years old, followed for five years. Since its foundation, the team has followed 123 patients, mostly male. The most prevalent diagnoses are schizophrenia and schizophreniform psychosis. The team is currently following 51 patients.
This team's intervention has progressively assumed greater importance in the prognosis of patients with first-episode psychosis, by reducing the duration of untreated psychosis and the relapse rate and by promoting social reintegration.
Disclosure of interest
The authors have not supplied their declaration of competing interest.
Increased vitamin supplement to sows, piglets and finishers and the effect in productivity
R. K. S. Santos, A. K. Novais, D. S. Borges, J. B. Alves, J. G. N. Dario, G. Frederico, C. R. Pierozan, J. P. Batista, M. Pereira, C. A. Silva
Journal: animal / Volume 14 / Issue 1 / January 2020
Published online by Cambridge University Press: 16 August 2019, pp. 86-94
Print publication: January 2020
With still limited information on vitamin requirements, and considering that many commercial practices adopt dietary vitamin levels above the values suggested by nutritional tables, this study aimed to assess the effect of administering vitamin supplementation to sows in gestation and lactation and to their litters on the reproductive performance and body condition of the sows and on the performance and immune profile of the litters until slaughter. The trial was split into two phases. The first phase used 104 sows, assigned to randomized blocks according to parity and submitted, until 21 days of lactation, to two treatments: control–standard (standard levels of vitamins) and test–elevated (elevated levels of vitamins). Each sow and its respective farrow were considered an experimental unit. The sows underwent evaluations of body condition score, backfat thickness and reproductive performance. The second phase used 60 barrows and 60 gilts from 21 days of age, with a mean initial weight of 5.33 ± 1.5 kg, until slaughter at 164 days of age. The piglets were assigned to randomized blocks according to the weight and sex of the animals in a 2 × 2 factorial model, with 10 replicates per treatment, where a pen with three animals represented the experimental unit. Following the same treatments as the first phase, the piglets were evaluated for daily weight gain, daily feed intake, feed conversion, mortality rate and humoral immune response. Vitamin supplementation had no positive effects on the reproductive parameters or body composition of the sows. However, it positively impacted the performance of the litters in the early nursery stage, but did not lead to superior effects on the immune responses to vaccination against circovirus or mycoplasma.
Mineralization of Sialoliths Investigated by Ex Vivo and In Vivo X-ray Computed Tomography
Pedro Nolasco, Paulo V. Coelho, Carla Coelho, David F. Angelo, J. R. Dias, Nuno M. Alves, António Maurício, Manuel F.C. Pereira, António P. Alves de Matos, Raul C. Martins, Patrícia A. Carvalho
Journal: Microscopy and Microanalysis / Volume 25 / Issue 1 / February 2019
Published online by Cambridge University Press: 04 February 2019, pp. 151-163
The fraction of organic matter present affects the fragmentation behavior of sialoliths; thus, pretherapeutic information on the degree of mineralization is relevant for a correct selection of lithotripsy procedures. This work proposes a methodology for in vivo characterization of salivary calculi in the pretherapeutic context. Sialoliths were characterized in detail by X-ray computed microtomography (μCT) in combination with atomic emission spectroscopy, Fourier transform infrared spectroscopy, X-ray diffraction, scanning electron microscopy, and transmission electron microscopy. Correlative analysis of the same specimens was performed by in vivo and ex vivo helical computed tomography (HCT) and ex vivo μCT. The mineral matter in the sialoliths consisted essentially of apatite (89 vol%) and whitlockite (11 vol%) with an average density of 1.8 g/cm3. In hydrated conditions, the mineral mass prevailed with 53 ± 13 wt%, whereas the organic matter, with a density of 1.2 g/cm3, occupied 65 ± 10% of the sialoliths' volume. A quantitative relation between sialolith mineral density and X-ray attenuation is proposed for both HCT and μCT.
THREE NEW SPECIES OF BARBACENIA (VELLOZIACEAE) FROM TOCANTINS, BRAZIL
R. J. V. Alves, A. R. Guimarães, R. Sadala, M. Lira, N. G. da Silva
Journal: Edinburgh Journal of Botany / Volume 76 / Issue 2 / July 2019
Published online by Cambridge University Press: 12 December 2018, pp. 181-195
Print publication: July 2019
Three new species of the Neotropical genus Barbacenia (Velloziaceae, Pandanales) from Tocantins, Brazil, are described and illustrated, based on morphology and leaf anatomy. The known species richness of the genus is mapped within the countries of South America and the states of Brazil.
Units and Measurement (Summary)
[ "article:topic", "authorname:openstax", "license:ccby", "showtoc:no", "transcluded:yes", "source-phys-4305" ]
Physics 201 - Fall 2019
Book: Physics (Boundless)
1: The Basics of Physics
1.5: Units and Measurement Redux
Key Terms
accuracy the degree to which a measured value agrees with an accepted reference value for that measurement
base quantity physical quantity chosen by convention and practical considerations such that all other physical quantities can be expressed as algebraic combinations of them
base unit standard for expressing the measurement of a base quantity within a particular system of units; defined by a particular procedure used to measure the corresponding base quantity
conversion factor a ratio that expresses how many of one unit are equal to another unit
derived quantity physical quantity defined using algebraic combinations of base quantities
derived units units that can be calculated using algebraic combinations of the fundamental units
dimension expression of the dependence of a physical quantity on the base quantities as a product of powers of symbols representing the base quantities; in general, the dimension of a quantity has the form \(L^{a} M^{b} T^{c} I^{d} \Theta^{e} N^{f} J^{g}\) for some powers a, b, c, d, e, f, and g
dimensionally consistent equation in which every term has the same dimensions and the arguments of any mathematical functions appearing in the equation are dimensionless
dimensionless quantity with a dimension of \(L^{0} M^{0} T^{0} I^{0} \Theta^{0} N^{0} J^{0}\) = 1; also called a quantity of dimension 1 or a pure number
discrepancy the difference between the measured value and a given standard or expected value
English units system of measurement used in the United States; includes units of measure such as feet, gallons, and pounds
estimation using prior experience and sound physical reasoning to arrive at a rough idea of a quantity's value; sometimes called an "order-of-magnitude approximation," a "guesstimate," a "back-of-the-envelope calculation", or a "Fermi calculation"
kilogram SI unit for mass, abbreviated kg
law description, using concise language or a mathematical formula, of a generalized pattern in nature supported by scientific evidence and repeated experiments
meter SI unit for length, abbreviated m
method of adding percents the percent uncertainty in a quantity calculated by multiplication or division is the sum of the percent uncertainties in the items used to make the calculation
metric system system in which values can be calculated in factors of 10
model representation of something often too difficult (or impossible) to display directly
order of magnitude the size of a quantity as it relates to a power of 10
percent uncertainty the ratio of the uncertainty of a measurement to the measured value, expressed as a percentage
physical quantity characteristic or property of an object that can be measured or calculated from other measurements
physics science concerned with describing the interactions of energy, matter, space, and time; especially interested in what fundamental mechanisms underlie every phenomenon
precision the degree to which repeated measurements agree with each other
second the SI unit for time, abbreviated s
SI units the international system of units that scientists in most countries have agreed to use; includes units such as meters, liters, and grams
significant figures used to express the precision of a measuring tool used to measure a value
theory testable explanation for patterns in nature supported by scientific evidence and verified multiple times by various groups of researchers
uncertainty a quantitative measure of how much measured values deviate from one another
units standards used for expressing and comparing measurements
Key Equations

Percent uncertainty $$\text{Percent uncertainty} = \frac{\delta A}{A} \times 100\,\%$$
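As a worked illustration of the percent-uncertainty equation and of the "method of adding percents" defined above, here is a small Python sketch. It is not part of the original summary: the measured values (a 5.0 ± 0.2 kg mass and a 2.50 m × 1.20 m rectangle) are invented for the example.

    # Percent uncertainty: (delta_A / A) x 100%
    def percent_uncertainty(value, uncertainty):
        return uncertainty / value * 100.0

    # Hypothetical measurement: a 5.0 kg mass known to +/- 0.2 kg.
    print(percent_uncertainty(5.0, 0.2))           # 4.0 (percent)

    # Method of adding percents: for a product (or quotient), the percent
    # uncertainties of the factors add. Example: area = length x width.
    length, d_length = 2.50, 0.05                  # m
    width, d_width = 1.20, 0.02                    # m
    area = length * width
    pct_area = percent_uncertainty(length, d_length) + percent_uncertainty(width, d_width)
    print(area, pct_area)                          # ~3.0 m^2 with ~3.7 % uncertainty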
Physics is about trying to find the simple laws that describe all natural phenomena.
Physics operates on a vast range of scales of length, mass, and time. Scientists use the concept of the order of magnitude of a number to track which phenomena occur on which scales. They also use orders of magnitude to compare the various scales.
Scientists attempt to describe the world by formulating models, theories, and laws.
Systems of units are built up from a small number of base units, which are defined by accurate and precise measurements of conventionally chosen base quantities. Other units are then derived as algebraic combinations of the base units.
Two commonly used systems of units are English units and SI units. All scientists and most of the other people in the world use SI, whereas nonscientists in the United States still tend to use English units.
The SI base units of length, mass, and time are the meter (m), kilogram (kg), and second (s), respectively.
SI units are a metric system of units, meaning values can be calculated by factors of 10. Metric prefixes may be used with metric units to scale the base units to sizes appropriate for almost any application.
To convert a quantity from one unit to another, multiply by conversion factors in such a way that you cancel the units you want to get rid of and introduce the units you want to end up with.
Be careful with areas and volumes. Units obey the rules of algebra so, for example, if a unit is squared we need two factors to cancel it.
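The two points above can be made concrete with a minimal Python sketch; the quantities converted (65 mi/h and 2.0 ft^2) and the rounded conversion factors are illustrative choices, not values taken from the original summary.

    # Convert by multiplying with conversion factors so the unwanted units cancel.
    MI_TO_M = 1609.34      # meters per mile (approximate)
    H_TO_S = 3600.0        # seconds per hour
    FT_TO_M = 0.3048       # meters per foot

    speed_mi_per_h = 65.0
    speed_m_per_s = speed_mi_per_h * MI_TO_M / H_TO_S
    print(round(speed_m_per_s, 1))    # ~29.1 m/s

    # Areas need the length conversion factor twice, because the unit is squared.
    area_ft2 = 2.0
    area_m2 = area_ft2 * FT_TO_M ** 2
    print(round(area_m2, 3))          # ~0.186 m^2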
The dimension of a physical quantity is just an expression of the base quantities from which it is derived.
All equations expressing physical laws or principles must be dimensionally consistent. This fact can be used as an aid in remembering physical laws, as a way to check whether claimed relationships between physical quantities are possible, and even to derive new physical laws.
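A minimal sketch of a dimensional-consistency check is given below, tracking only the powers of (L, M, T). It is an illustration of the idea rather than a full units library, and the chosen quantities (velocity, force, energy) are standard examples, not something taken from this summary.

    # Represent a dimension as a tuple of exponents (L, M, T).
    def dim_mul(a, b):
        return tuple(x + y for x, y in zip(a, b))

    def dim_pow(a, n):
        return tuple(x * n for x in a)

    LENGTH, MASS, TIME = (1, 0, 0), (0, 1, 0), (0, 0, 1)
    VELOCITY = dim_mul(LENGTH, dim_pow(TIME, -1))   # L T^-1
    ACCEL = dim_mul(LENGTH, dim_pow(TIME, -2))      # L T^-2
    FORCE = dim_mul(MASS, ACCEL)                    # M L T^-2
    ENERGY = dim_mul(FORCE, LENGTH)                 # M L^2 T^-2

    # Check that (1/2) m v^2 has the dimension of energy (the 1/2 is dimensionless).
    assert dim_mul(MASS, dim_pow(VELOCITY, 2)) == ENERGY
    print("(1/2) m v^2 is dimensionally consistent with energy")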
An estimate is a rough educated guess at the value of a physical quantity based on prior experience and sound physical reasoning. Some strategies that may help when making an estimate are as follows:
Get big lengths from smaller lengths.
Get areas and volumes from lengths.
Get masses from volumes and densities.
If all else fails, bound it. One "sig. fig." is fine.
Ask yourself: Does this make any sense?
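As an illustration of these strategies, here is a back-of-the-envelope Fermi estimate in Python; every input is a deliberately rough, invented one-significant-figure number, so only the order of magnitude of the answer is meaningful.

    import math

    # Roughly how many heartbeats in a human lifetime?
    beats_per_minute = 70
    minutes_per_year = 60 * 24 * 365        # ~5e5
    years_per_lifetime = 80

    beats = beats_per_minute * minutes_per_year * years_per_lifetime
    order = round(math.log10(beats))
    print(f"~10^{order} heartbeats")        # ~10^9

    # Sanity check ("does this make any sense?"):
    # ~1 beat/s x ~3e7 s/yr x ~80 yr -> a few times 10^9. Consistent.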
Accuracy of a measured value refers to how close a measurement is to an accepted reference value. The discrepancy in a measurement is the amount by which the measurement result differs from this value.
Precision of measured values refers to how close the agreement is between repeated measurements. The uncertainty of a measurement is a quantification of this.
The precision of a measuring tool is related to the size of its measurement increments. The smaller the measurement increment, the more precise the tool.
Significant figures express the precision of a measuring tool.
When multiplying or dividing measured values, the final answer can contain only as many significant figures as the least-precise value.
When adding or subtracting measured values, the final answer cannot contain more decimal places than the least-precise value.
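A short sketch applying these two rounding rules to hypothetical measurements follows; Python's general and fixed-point formatting are used here only as a convenient stand-in for manual significant-figure bookkeeping.

    # Multiplication/division: keep as many significant figures as the least-precise factor.
    length = 2.51                  # 3 significant figures
    width = 1.2                    # 2 significant figures
    area = length * width          # raw value: 3.012
    print(f"{area:.2g}")           # report 2 sig figs -> 3.0

    # Addition/subtraction: keep no more decimal places than the least-precise term.
    m1 = 13.7                      # 1 decimal place
    m2 = 2.54                      # 2 decimal places
    total = m1 + m2                # raw value: 16.24
    print(f"{total:.1f}")          # report 1 decimal place -> 16.2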
The three stages of the process for solving physics problems used in this textmap are as follows:
Strategy: Determine which physical principles are involved and develop a strategy for using them to solve the problem.
Solution: Do the math necessary to obtain a numerical solution complete with units.
Significance: Check the solution to make sure it makes sense (correct units, reasonable magnitude and sign) and assess its significance.
TF NEUP 2011
Technical Work Scope Identifier: FC-3
Submit proposal at www.neup.gov/, which forwards you to https://inlportal.inl.gov/portal/server.pt?open=512&objID=600&mode=2
Title: An evaluation of a fission chamber's temporal response using gas electron multiplication
This project seeks to improve the response of a fission chamber by transferring a technology known as Gas Electron Multiplication, developed in the late 1990s for high energy physics detectors. A Gas Electron Multiplier functions as a preamplifier when immersed in the detector's gas. The response time of a fission chamber can be improved by reducing the size of the detector's components, because of the increased free charge produced by the preamplifier. This technology has demonstrated an improvement in an ionization chamber's response time by at least a factor of 3. The faster detector signal will extend the detectable neutron flux beyond the current signal pile-up region. The increased signal amplification may also be used to improve the detector's ability to measure the location of the incident particle at the entrance to the detector. We propose to measure the improved performance of a fission chamber equipped with Gas Electron Multipliers and evaluate its use as a neutron tomography imaging device.
Importance and relevance of proposed work
We propose to investigate a neutron instrumentation upgrade by transferring an innovation developed for gaseous detectors in the late 1990s. The NEUP's 2012 request for pre-applications calls for Program Supporting and Mission Supporting R&D in Fuel Cycle Research and Development, Reactor Concepts Research, Development and Demonstration, and Nuclear Energy Advanced Modeling and Simulation. We believe the development of the above instrumentation is aligned with NEUP's Program Supporting and Mission Supporting R&D, in particular the Fuel Cycle initiative. As described above, our proposal to equip fission chambers with this novel technology could enhance their neutron detection by decreasing the rise time of the device's signal by a factor of 3. As an added benefit of this investigation, the proposal also addresses the April 2010 Roadmap report to Congress that identified "obsolete analog instrumentation and control technologies" as one of the major challenges facing the current nuclear power plant fleet. An improvement of the neutron detection technology will not only assist the Fuel Cycle initiative by providing a detector that can essentially perform neutron tomography, but it may also impact the ability to measure high neutron fluxes near the reactor core.
From the RFP
This research topic will also pursue advanced measurement techniques that could complement the ongoing measurement program. In particular, fission multiplicity and fission neutron spectrum measurements as a function of incident neutron energy have been identified as important data in recent sensitivity analyses. Key university research needs for this activity include: "New and improved detector systems and sensor materials that can be used to increase the accuracy, reliability, and efficiency of nuclear materials quantification and tracking from the perspective of the operator or state-level regulator. Such systems could include new neutron coincidence/anti-coincidence counting, spectroscopic analysis, chemical, calorimetric, or other non-nuclear methods, as well as any other novel methods with potential MC&A benefits;"
Mission Supporting R&D is considered creative, innovative, and transformative (blue-sky), but must also support the NE mission. Mission-supporting activities that could produce breakthroughs in nuclear technology are also invited to this solicitation. This includes research in the fields or disciplines of nuclear science and engineering that are relevant to NE's mission though may not fully align with the specific initiatives and programs identified in this solicitation. This includes, but is not limited to, Nuclear Engineering, Nuclear Physics, Health Physics, Radiochemistry, Nuclear Materials Science, or Nuclear Chemistry. Examples of topics of interest are new reactor designs and technologies, advanced nuclear fuels and resource utilization, instrumentation and control/human factors, radiochemistry, fundamental nuclear science, and quantification of proliferation risk and creative solutions for the management of used nuclear fuel. Program supporting research requested by this solicitation is detailed as discrete workscopes in Appendix A. The information is organized by program area with each specified workscope providing the basis for a stand-alone R&D pre-application submittal.
Fuel Cycle Research and Development New and improved detector systems and sensor materials that can be used to increase the accuracy, reliability, and efficiency of nuclear materials quantification and tracking from the perspective of the operator or state-level regulator. Such systems could include new neutron coincidence/anti-coincidence counting, spectroscopic analysis, chemical, calorimetric, or other non-nuclear methods, as well as any other novel methods with potential MC&A benefits;
Logical pathway to work accomplishments
The pathway to develop a fission chamber equipped with Gas Electron Multipliers will consist of three broad steps. The first step will focus on the construction of a fission chamber with gas electron multiplication pre-amplifiers. The PI of this proposal has produced several ionization chambers equipped with gas electron multipliers constructed from copper-clad Kapton foils. A comparison of the output signal from an ionization chamber with the gas electron multiplier pre-amplifier is shown in the figure below, along with a typical signal from a drift chamber (Geiger–Müller tube) detector. The duration of the output pulse is at least a factor of two longer for an ionization chamber which does not use gas electron multiplication. The output pulse of the gas electron multiplier equipped chamber is also more Gaussian-like. This performance occurs because the ionization region within the detector can be reduced in size using the preamplifiers. The ionized particles in the chamber have a shorter drift distance, resulting in a faster output signal response. The objective in this proposal will be to add a fissionable material to an ionization chamber, making it neutron sensitive.
The next logical step will be to test the proposed instrument's response to neutrons. The figure below illustrates a neutral particle time-of-flight measurement using the High Rep Rate Linac facility that is managed by the Idaho Accelerator Center and located in Idaho State University's physics department. A tungsten radiator was used in conjunction with a 15 MeV electron beam to create a photon source from bremsstrahlung radiation. The emitted photons entered the experimental cell, which held either a water or deuterated water (HDO) target. A NaI detector was positioned 2 meters away from the target at an azimuthal angle of 90 degrees to detect neutral particles. The hatched peaks in the histogram represent the NaI detector measurement when using a water target. The un-hatched histogram shows a clear neutron event enhancement when using the HDO target. We observed an integrated neutron rate of 50 Hz in the 2 cm x 2 cm NaI detector. We propose using this device to test the temporal performance of a gas electron multiplier equipped fission chamber.
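To make the time-of-flight idea concrete, here is a hedged sketch (not part of the proposal) that converts a neutron flight time over the 2 m path described above into kinetic energy. The 100 ns flight time is an invented illustrative number, and the non-relativistic formula is an approximation that is adequate at these speeds.

    # Neutron kinetic energy from time of flight (non-relativistic approximation).
    NEUTRON_MASS_MEV = 939.565      # neutron rest mass in MeV/c^2
    C = 299_792_458.0               # speed of light in m/s

    def neutron_energy_mev(flight_path_m, time_of_flight_s):
        v = flight_path_m / time_of_flight_s
        beta = v / C
        return 0.5 * NEUTRON_MASS_MEV * beta ** 2   # (1/2) m v^2, in MeV

    # Hypothetical 100 ns flight time over the 2 m target-to-detector distance:
    print(round(neutron_energy_mev(2.0, 100e-9), 3))   # ~2.09 MeV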
Deliverables and outcomes
The project will construct a fission chamber equipped with gas electron multiplier pre-amplifiers. The temporal response of the fission chamber to fast neutrons will be quantified.
This project spans a three-year period. Year one will be devoted to machining the components of a fission chamber equipped with gas electron multipliers. Assembly of the detector with a fission target will be completed in year two. In the final year, the device will be tested using neutrons generated when bremsstrahlung photons, produced by electrons accelerated by a 16 MeV Linac, impinge on a target of deuterated water (HDO).
We request a total of $290,000 over a period of three years to defray the costs of this Program Supporting development of an improved fission chamber. The main expense will be for a graduate student ($50k) and some faculty mentor time ($20k) per year. Equipment construction and beam time expenses are also requested in the amount of $30k per year.
Benefits of Collaboration
The Idaho State University Department of Physics Strategic Plan identifies the use of experimental nuclear physics techniques as its focus area for addressing problems in both fundamental and applied science. The major efforts of the department include fundamental nuclear and particle physics, nuclear reactor fuel cycle physics, nuclear non-proliferation and homeland security, accelerator applications, radiation effects in materials and devices, and biology. One of the key ingredients of the department's success has been the completion of the Idaho Accelerator Center (IAC) on April 30, 1999. A substantial amount of lab space (4000 sq. ft.) within the department has become available due to a combination of the IAC and a remodeling of the physics building. The Physics department has recently added a 400 sq. ft., class 10,000 clean room that is currently being used to build large drift chambers that are about 6 feet high and contain more than 4500 wires.
The PI has created a Laboratory for Detector Science at Idaho State University which houses the infrastructure for detector development projects. The 1200 sq. ft. laboratory is equipped with flow hoods, a darkroom, and a laminar flow hood used to provide a clean room environment sufficient to construct small prototype detectors. A CODA-based data acquisition system with ADC, TDC, and scaler VME modules has been installed to record detector performance measurements. The PIs also established a student machine shop, containing a mill, a lathe, a drill press, a table saw, and a band saw, which occupies its own space for the physics department to share. These facilities have a history of being used to construct detectors, measure detector prototype performance, and design electronic circuits.
The Idaho Accelerator Center (IAC) is located less than a mile away from campus and can provide a computer-controlled machining facility for detector construction, an electronics shop for installation of instrumentation, and beam time for detector performance studies. The IAC houses ten operating accelerators as well as a machine and electronics shop with a permanent staff of 8 Ph.D.s and 6 engineers. Among its many accelerator systems, the Center houses a 16 MeV Linac capable of delivering 20 ns to 2 $\mu$s electron pulses with an instantaneous current of 80 mA up to an energy of 25 MeV at pulse rates up to 1 kHz. Additional accelerator facilities at Idaho State University include another 25 MeV S-band linac, a 12 MeV Pelletron, and a 9.5 MeV, 10 kA pulsed-power machine. A full description of the facility is available at the web site (www.iac.isu.edu).
Neutron fluxes in reactors
according to http://www.bnl.gov/bnlweb/history/HFBR_main.asp
the 40 MW High Flux Beam Reactor (HFBR) at Brookhaven produced a neutron flux of [math]1.5 \times 10^{15} \frac{n/cm^2}{s}[/math] for experiments. The neutron flux was a maximum outside the core because the neutrons were directed tangentially to the core instead of radially.
1e11 to 1e12 neutrons per cm^2 per second may be more typical
Let's assume this flux is an upper limit for a detector to measure neutron fluxes in a reactor core. The pulse width of a regular GEM detector is [math]50 \times 10^{-9}[/math] sec. Because of the high gain, a signal may be observed over a surface area of 3 cm^2 (10 cm by [math]300 \times 10^{-3}[/math] cm). A GEM detector with this active area would only be able to count neutron fluxes of [math]1 \times 10^{7} \frac{n/cm^2}{s}[/math] if the detector efficiency were 100%. A detector efficiency of 10^{-5} would be able to see rates of 10^{11}.
The pulse width of a standard ionization chamber is on the order of 300 nsec, so a standard GEM detector would only be able to sustain a factor of 6 higher rate than a typical ionization/fission chamber.
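The count-rate arithmetic in the two paragraphs above can be reproduced with a short sketch. The inputs simply restate the assumptions already given (50 ns and 300 ns pulse widths, a 3 cm^2 active element, efficiencies of 1 and 10^-5), and the printed values are order-of-magnitude estimates, not measured numbers.

    # Maximum countable flux ~ 1 / (pulse_width x active_area x efficiency).
    def max_flux(pulse_width_s, area_cm2, efficiency):
        return 1.0 / (pulse_width_s * area_cm2 * efficiency)

    gem_pulse = 50e-9     # s, GEM-equipped chamber
    ion_pulse = 300e-9    # s, standard ionization chamber
    area = 3.0            # cm^2 readout element (10 cm x 0.3 cm strip)

    print(f"GEM, eff = 1:     {max_flux(gem_pulse, area, 1.0):.1e} n/cm^2/s")   # ~7e6
    print(f"GEM, eff = 1e-5:  {max_flux(gem_pulse, area, 1e-5):.1e} n/cm^2/s")  # ~7e11
    print(f"Rate gain over ionization chamber: {ion_pulse / gem_pulse:.0f}x")   # 6x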
1.) Neutron sensitive ionization chamber (no position readout)
File:NEUP Pre-app RFP.pdf
"Fission chambers for CANDU SDS neutronic trip applications", V. Mohindrs, M. Vartolomei, and A. McDonald, 28th Annual Canadian Nuclear Society (CNS) conference, June 3-6, 2007New Brunswick, Canada Media:Virender_CANDU2007.pdf
GE builds nuclear reactor instrumentation: http://www.ge-mcs.com/en/nuclear-reactor-instrumentation/
Forest_Proposals
There are $n$ different $3$-element subsets $A_1,A_2,…,A_n$ of the set $\{1,2,…,n\}$, with $|A_i \cap A_j| \not= 1$ for all $i \not= j$.
Determine all possible values of positive integer $n$, such that there are $n$ different $3$-element subsets $A_1,A_2,...,A_n$ of the set $\{1,2,...,n\}$, with $|A_i \cap A_j| \not= 1$ for all $i \not= j$.
Source: China Western Olympiad 2010
Attempt:
It is quite clear that for $n=4k$ such a system exists. For $n=4$, we have $A_1 =\{1,2,3\}$, $A_2 =\{1,2,4\}$, $A_3 =\{2,3,4\}$, $A_4 =\{1,3,4\}$. It is not hard to see that the induction step $n\to n+4$ works (see the sketch below). Now I would like to prove that there is no such system if $4\nmid n$.
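A quick computational check of the $n=4$ base case and of the $n\to n+4$ block construction (a small Python sketch; the `valid` helper simply encodes the defining condition):

```python
from itertools import combinations

def valid(family):
    # the defining condition: no two distinct sets intersect in exactly one element
    return all(len(A & B) != 1 for A, B in combinations(family, 2))

F4 = [{1, 2, 3}, {1, 2, 4}, {2, 3, 4}, {1, 3, 4}]
print(valid(F4))            # True: the n = 4 system works

def extend(family, n):
    # n -> n + 4: append a shifted copy of the 4-element block on {n+1,...,n+4}
    return family + [{x + n for x in A} for A in F4]

F8 = extend(F4, 4)          # 8 three-element subsets of {1,...,8}
print(len(F8), valid(F8))   # 8 True
```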
I thought about a linear algebra approach. View the given sets as indicator vectors in $\mathbb{F}_2^n$. Then, since $A_i\cdot A_i =1$ and $A_i\cdot A_j = 0$ for each $i\ne j$, these vectors are linearly independent: $$ b_1A_1+b_2A_2+...+b_nA_n = 0\;\;\;\; /\cdot A_i$$ $$ b_1\cdot 0+b_2\cdot 0+...+b_i\cdot 1+...+b_n\cdot 0 =0\implies b_i=0$$ But now, I'm not sure what to do...
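The independence claim can also be verified mechanically; here is a small sketch that row-reduces the indicator vectors over $\mathbb{F}_2$ (the `rank_gf2` helper is ours, written for the $n=4$ example above):

```python
def rank_gf2(rows):
    """Rank over GF(2) of a list of 0/1 row vectors, via Gaussian elimination."""
    rows = [r[:] for r in rows]
    rank, n = 0, len(rows[0])
    for col in range(n):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

F4 = [{1, 2, 3}, {1, 2, 4}, {2, 3, 4}, {1, 3, 4}]
indicators = [[1 if j in A else 0 for j in range(1, 5)] for A in F4]
print(rank_gf2(indicators))  # 4: the vectors are linearly independent over F_2
```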
linear-algebra combinatorics contest-math algebraic-combinatorics
Aqua
Suppose that there are $n$ such sets $A_1,A_2,\ldots,A_n$, represented by indicator vectors $\mathbf{a}_1,\mathbf{a}_2,\ldots,\mathbf{a}_n\in\mathbb{F}_2^n$. Equip $\mathbb{F}_2^n$ with the usual inner product $\langle\_,\_\rangle$.
We already know that the vectors $\mathbf{a}_1,\mathbf{a}_2,\ldots,\mathbf{a}_n$ are linearly independent. Therefore, they span $\mathbb{F}_2^n$. Thus, the vector $\boldsymbol{1}:=(1,1,\ldots,1)$ can be written as $$\mathbf{a}_{j_1}+\mathbf{a}_{j_2}+\ldots+\mathbf{a}_{j_k}$$ for some $j_1,j_2,\ldots,j_k\in\{1,2,\ldots,n\}=:[n]$ with $j_1<j_2<\ldots<j_k$. If $k<n$, then there exists $r\in[n]$ such that $r\neq j_\mu$ for all $\mu=1,2,\ldots,k$. That is, $$1=\langle \mathbf{a}_r,\boldsymbol{1}\rangle =\sum_{\mu=1}^k\,\langle \mathbf{a}_{j_\mu},\mathbf{a}_r\rangle=0\,,$$ which is a contradiction. Therefore, $k=n$, whence $$\boldsymbol{1}=\sum_{j=1}^n\,\mathbf{a}_j\,.\tag{*}$$ Consequently, each element of $[n]$ belongs to an odd number of the sets $A_1,A_2,\ldots,A_n$, and hence to at least one of them.
Furthermore, it is not difficult to show that every element of $[n]$ must belong in at least two of the $A_i$'s. (If there exists an element of $[n]$ belonging in exactly one $A_j$, then you can show that there are at most $n-2$ possible $A_i$'s.) Let $d_j$ be the number of sets $A_i$ such that $j\in A_i$. Then $$\sum_{j=1}^n\,d_j=3n\,.\tag{#}$$
Note that $d_j\geq 2$ for all $j\in[n]$.
From (*), each $d_j$ is odd, and together with $d_j\geq 2$ this gives $d_j\geq 3$ for every $j\in[n]$. However, (#) implies that $d_j=3$ for all $j\in[n]$; i.e., every element of $[n]$ must be in exactly three of the $A_i$'s. Write $\mathbf{e}_1,\mathbf{e}_2,\ldots,\mathbf{e}_n$ for the standard basis vectors of $\mathbb{F}^n_2$. We see that $$\mathbf{e}_j=\mathbf{a}_p+\mathbf{a}_q+\mathbf{a}_r$$ where $j$ is in $A_p$, $A_q$, and $A_r$. This shows that $$A_p=\{j,x,y\}\,,\,\,A_q=\{j,y,z\}\,,\text{ and }A_r=\{j,z,x\}$$ for some $x,y,z\in[n]$. Since $x$ already belongs to $A_p$ and $A_r$, it must belong to another $A_s$. Clearly, $A_s$ must be equal to $\{x,y,z\}$. From here, we conclude that the four elements $j,x,y,z$ belong to exactly four of the $A_i$'s, which are $\{j,x,y\},\{j,y,z\},\{j,z,x\},\{x,y,z\}$. The rest is easy.
Batominovski
$\begingroup$ I read it all. All I have to check now why is $d_i\geq 2$ for each $i$. $\endgroup$ – Aqua Jul 22 '18 at 19:55
$\begingroup$ If there is an element of $[n]$ contained in exactly in one of the $A_i$'s, say $1\in\{1,2,3\}$, then split $A_2,A_3,\ldots,A_n$ into two groups---those that contain $\{2,3\}$ and those that are disjoint from $\{2,3\}$. Now, if there are $k$ of those that contain $\{2,3\}$, then there are at most $n-k-3$ of those that are disjoint from $\{2,3\}$. Thus, you can end up with at most $1+k+(n-k-3)=n-2$ sets. $\endgroup$ – Batominovski Jul 22 '18 at 19:58
$\begingroup$ How you got $n-k-3$? $\endgroup$ – Aqua Jul 22 '18 at 20:07
$\begingroup$ The sets of the first kind must be of the form $\{2,3,t_1\},\{2,3,t_2\},\ldots,\{2,3,t_k\}$, and the sets of the second kind must be disjoint from $\{1,2,3,t_1,t_2,\ldots,t_k\}$. Therefore, the sets of the second kind are subsets of $[n]\setminus \{1,2,3,t_1,t_2,\ldots,t_k\}$, which has $n-k-3$ elements. $\endgroup$ – Batominovski Jul 22 '18 at 20:12
$\begingroup$ I still don't understand. Why does this mean that we have at most n-k-3 subsets? $\endgroup$ – Aqua Jul 22 '18 at 20:28
Let $n$ be a positive integer. If $A_1,A_2,\ldots,A_m$ are $3$-subsets of $[n]$ such that $\left|A_i\cap A_j\right|\neq 1$ for $i\neq j$, then the largest possible value of $m$ is $$m_\max=\left\{ \begin{array}{ll} n&\text{if }n\equiv0\pmod{4}\,,\\ n-1&\text{if }n\equiv1\pmod{4}\,,\\ n-2&\text{else}\,. \end{array} \right.$$
Remark: Below is a sketch of my proof of the claim above. Be warned that a complete proof is quite long, whence I am providing a sketch with various gaps to be filled in. I hope that somebody will come up with a nicer proof.
Proof. The first two cases follow from my first answer. I shall now deal with the last case, where $m_\max=n-2$.
Suppose to the contrary that there are $A_1,A_2,\ldots,A_{n-1}$ satisfying the intersection condition. Then, proceed as before. The indicator vectors $\mathbf{a}_1,\mathbf{a}_2,\ldots,\mathbf{a}_{n-1}\in\mathbb{F}_2^n$ are linearly independent. Thus, there exists $\mathbf{b}\in\mathbb{F}_2^n$ such that $\mathbf{a}_1,\mathbf{a}_2,\ldots,\mathbf{a}_{n-1},\mathbf{b}$ form a basis of $\mathbb{F}_2^n$. We can assume that $\langle \mathbf{a}_i,\mathbf{b}\rangle=0$ for all $i=1,2,\ldots,n-1$ (otherwise, replace $\mathbf{b}$ by $\mathbf{b}-\sum_{i=1}^{n-1}\,\langle \mathbf{a}_i,\mathbf{b}\rangle \,\mathbf{a}_i$). Observe that $\langle \mathbf{b},\mathbf{b}\rangle=1$.
Note that $$\boldsymbol{1}=\sum_{i=1}^{n-1}\,\mathbf{a}_i+\mathbf{b}\,.$$ Let $B$ be the subset of $[n]$ with the indicator vector $\mathbf{b}$. Let $X$ denote the set of $i$ such that $A_i$ is disjoint from $B$, and $Y$ the set of $i$ such that $A_i\cap B$ has two elements. Observe that $X$ and $Y$ form a partition of $\{1,2,\ldots,n-1\}$; moreover, $$\mathcal{X}:=\bigcup_{i\in X}\,A_i\text{ and }\mathcal{Y}:=\bigcup_{i\in Y}\,A_i$$ are disjoint subsets of $[n]$.
If $X\neq \emptyset$, then we can use induction to finish the proof, noting that $A_i\subseteq [n]\setminus (B\cup\mathcal{Y})$ for all $i\in X$. From now on, assume that $X=\emptyset$.
Consider a simple graph $G$ on the vertex set $B$ where two vertices $i,j\in B$ ($i\neq j$) are connected by an edge iff $i$ and $j$ belongs in some $A_p$ simultaneously. If $C$ is a connected component of $G$ and $k\in [n]\setminus B$, then we say that $k$ is adjacent to $C$ if there exists $A_p$ such that $A_p\cap B$ is an edge of $C$ and $k\in A_p$, in which case, we also say that $A_p$ is incident to $C$. It is important to note that, if $C_1$ and $C_2$ are two distinct connected components of $G$, and $k_1,k_2\in [n]\setminus B$ are adjacent to $C_1$ and $C_2$, respectively, then $k_1\neq k_2$.
Let $C$ be a connected component of $G$ with at least two vertices. There are three possible scenarios:
$C$ is a type-1 connected component, namely, $C$ is an isolated edge (i.e., it has only two vertices and one edge);
$C$ is a type-2 connected component, namely, $C$ is a triangle (i.e., $C$ consists of $3$ vertices and $3$ edges);
$C$ is a type-3 connected component, namely, $C$ is a star graph (i.e., there exists a vertex $v$ of $C$ such that every edge of $C$ takes the form $\{v,w\}$, where $w$ is any vertex of $C$ distinct from $v$).
It can be readily seen that, if $C$ is a connected component of type 2 or type 3 of $G$, then $C$ is adjacent to exactly one element of $[n]\setminus B$. If $G$ has a connected component $C$ of type 2, then the removal of the vertices in $C$ along with the element $j\in[n]\setminus B$ which is adjacent to $C$ reduces the number of elements of $[n]$ by $4$, whilst ridding us of only three sets $A_i$. Then, we finish the proof for this case by induction. Suppose from now on that $G$ has no connected components of type 2.
Now, assume that $G$ has a connected component $C$ of type 3, which has $s$ vertices. Let $j\in[n]\setminus B$ be adjacent to $C$. Then, the removal of the vertices of $C$ along with $j$ from $[n]$ reduces the number of elements of $[n]$ by $s+1$, whilst ridding us of only $s-1$ sets $A_i$. Therefore, the claim holds trivially.
Finally, assume that $G$ has only connected components of type 1 and possibly some isolated vertices. Then, it follows immediately that there are at most $n-2t$ sets $A_i$, where $t$ is the number of connected components of type 1. This shows that $t=0$. Thus, $G$ has only isolated vertices, but this is a contradiction as well (as $X=\emptyset$ is assumed).
Article | Open | Published: 02 November 2018
Topological LC-circuits based on microstrips and observation of electromagnetic modes with orbital angular momentum
Yuan Li1, Yong Sun1, Weiwei Zhu1, Zhiwei Guo1, Jun Jiang1, Toshikaze Kariyado2, Hong Chen1 & Xiao Hu2
Nature Communications, volume 9, Article number: 4598 (2018)
New structures with richer electromagnetic properties are in high demand for developing novel microwave and optic devices aimed at realizing fast light-based information transfer and information processing. Here we show theoretically that a topological photonic state exists in a hexagonal LC circuit with short-range textures in the inductance, which is induced by a band inversion between p- and d-like electromagnetic modes carrying orbital angular momentum, and realize this state experimentally in planar microstrip arrays. Measuring both amplitude and phase of the out-of-plane electric field accurately using microwave near-field techniques, we demonstrate directly that topological interfacial electromagnetic waves launched by a linearly polarized dipole source propagate in opposite directions according to the sign of the orbital angular momentum. The open planar structure adopted in the present approach leaves much room for including other elements useful for advanced information processing, such as electric/mechanical resonators, superconducting Josephson junctions and SQUIDs.
To harness at will the propagation of electromagnetic (EM) waves constitutes the primary goal of photonics, the modern science and technology of light, expected to enable novel applications ranging from imaging and sensing well below the EM wavelength to advanced information processing and transformation. So far, systems with spatially varying permittivity and/or permeability, or arrays of resonators were explored, and EM properties unavailable in conventional uniform media have been achieved, such as negative refractive index, superlensing, cloaking and slow light1,2,3,4,5,6, etc.
Inspired by the flourishing topological physics emerging in condensed matter7,8,9,10,11,12,13, robust EM propagation at the edge of photonic topological insulators immune to back-scattering from sharp corners and imperfections came into focus in the past decade. This is achieved by one-way edge EM modes in systems with broken time-reversal symmetry (TRS)14,15,16,17,18,19,20,21,22,23, and by pairs of counterpropagating edge EM modes carrying opposite pseudospins in systems respecting TRS24,25,26,27,28,29,30,31,32,33,34,35 (for a recent review, see ref. 36). Topological photonic systems with TRS which avoid the need of application of external magnetic fields—albeit at the price of sacrificing partially absolute robustness—attract increasing interest since they are more compatible with semiconductor-based electronic and optical devices. In a two-dimensional (2D) topological photonic crystal with C6v symmetry proposed recently27, p- and d-like EM modes of opposite parities with respect to spatial inversion are tuned to generate a frequency band gap, and the sign of the orbital angular momentum (OAM) plays the role of an emergent pseudospin degree of freedom (for OAM and spin angular momentum, a related physical quantity, of EM modes in various circumstances, see previous works37,38,39,40). While EM modes with pseudospin up and down are degenerate in bulk bands due to TRS, thus hard to manipulate, they are separated into two opposite directions in the topological interface EM propagation, which can be exploited for realizing novel EM functionality. However, up to now only field strengths along the interface between photonic crystals distinct in topology and transmission rates at output ports have been measured. Details of pseudospin states of the topological EM propagation remain unclear in topological photonic systems explored so far, which hampers their advanced applications (the valley degree of freedom and related OAM have been revealed in a photonic graphene by selectively exciting the two sublattices in terms of interfering probe beams41).
In this work, we present the direct experimental observation on pseudospin states of unidirectional interface modes in topological photonic metamaterials. Based on the insight obtained by analyzing a lumped element circuit model with honeycomb-type structure, we propose that topological EM propagation can be achieved experimentally in a planar microstrip array, a typical transmission line in the microwave frequency band42 constructed as a sandwich structure of bottom metallic substrate, middle dielectric film and top patterned metallic strips, which is common in various electronic devices. When the metallic strips form a perfect honeycomb pattern, linear frequency-momentum dispersions appear in the normal frequency EM modes, very similar to the Dirac cones in the electronic energy-momentum dispersions seen in graphene. Introducing a C6v-symmetric texture with alternating wide and narrow metallic strips opens a frequency band gap. In addition, a band inversion between p- and d-like EM modes arises when the inter-hexagon strips are wider than the intra-hexagon ones, yielding a topological EM state mimicking the quantum spin Hall effect (QSHE) in electronic systems. Taking advantage of the planar and open structure of this metamaterial, we measure distributions of both amplitude and phase of the out-of-plane electric field along the interface between two topologically distinct microstrip regimes using near-field techniques. EM waves are launched from a linearly polarized source located close to the interface. We resolve the weights of p- and d-like EM components in the interface modes and clarify their dependence on the source frequency swept across the bulk frequency band gap. We further map out the circulating local Poynting vectors and reveal explicitly the pseudospin states locked to the propagating directions. The simple structure of the present topological microstrip device displaying local OAM in its EM modes enables easy fabrication and on-chip integration, which is advantageous for harnessing EM transport inside the metamaterial, and potentially the system can be exploited for building novel microwave antennas which emit EM waves carrying OAM. Furthermore, the open 2D structure adopted in the present approach leaves much room for including other elements, such as electric/mechanical resonators, superconducting Josephson tunneling junctions, and SQUIDs, which are useful for advanced information processing.
Topological phase transition in lumped element circuit
As a simplified model of our system (see Fig. 1a, b), we begin with a lumped element circuit shown schematically in Fig. 1c. On-node capacitors with a uniform capacitance C establish shunts to a common ground plane. Link inductors with inductance L0 connect the nearest neighbor nodes within the honeycomb structure (drawn in red in Fig. 1c) and inductors with inductance L1 (shown in blue in Fig. 1c) connect to the next hexagonal cell. Topological LC circuits were proposed and realized previously34,35, in which cross-wirings with permutations were adopted to generate the nontrivial topology. In contrast, in the present approach, the nontrivial topology emerges purely from the symmetry of 2D honeycomb structure27,43.
Design principle of microstrip-based topological LC-circuit. a Schematics of the honeycomb microstrip structure with enlarged views of the topologically nontrivial (upper) and trivial (lower) unit cells shown in the right panels. When excited by a linearly polarized source located in the interface with a frequency within the bulk frequency band gap, electromagnetic (EM) waves propagate rightward/leftward (red/green arrow) along the interface carrying up/down pseudospin, which is represented by the phase winding of the out-of-plane electric field Ez accommodated in the hexagonal unit cells as indicated in the insets. b Photo of the experimental setup with a field probe placed right above the microstrip array, which is used to measure the distribution of the amplitude and phase of the out-of-plane electric field Ez, thereby resolving the pseudospin states and pseudospin-dominated unidirectional interface EM propagation. A lumped capacitor of C = 5.6 pF is loaded on the nodes. In the lower half of the system, the metallic strips of inter/intra hexagonal unit cell have widths of 1 and 2.6 mm, whereas in the upper half they are of 3.2 and 1.5 mm, respectively, and at the interface the width of metallic strips is taken as 2.6 mm. The length of all metallic strip segments is 10.9 mm and both lower and upper halves are composed of 14 × 8 hexagons. The whole microstrip system is fabricated on a F4B dielectric film with thickness of 1.6 mm and relative permittivity 2.2. c Schematic of the lumped element circuit of the hexagonal unit cells shown in the right panels of (a)
The voltage on a given node i with respect to the common ground is described by (see Supplementary Note 1 for details)
$$\frac{\mathrm{d}^2 V_i}{\mathrm{d}t^2} = -\frac{1}{C}\sum_{j = 1}^{3} \frac{1}{L_{ij}}\left( V_i - V_j \right)$$
Taking the hexagonal unit cell shown in Fig. 1c, the normal frequency modes are governed by the following secular equation:
$$\left( 2 + \tau - \frac{\omega^2}{\omega_0^2} \right) \mathbf{V}_0 = Q\mathbf{V}_0$$
$$Q = \begin{pmatrix} 0 & Q_k \\ Q_k^\dagger & 0 \end{pmatrix},\quad Q_k = \begin{pmatrix} \tau XY^{\ast} & 1 & 1 \\ 1 & 1 & \tau X^{\ast} \\ 1 & \tau Y & 1 \end{pmatrix}$$
with \(\mathbf{V} = \mathbf{V}_0\,\mathrm{exp}\left( i\mathbf{k} \cdot \mathbf{r} - i\omega t \right) \equiv [V_1\,V_2\,V_3\,V_4\,V_5\,V_6]^t\,\mathrm{exp}\left( i\mathbf{k} \cdot \mathbf{r} - i\omega t \right)\) for the voltages at the six nodes (see Supplementary Figure 1 for the numbering of nodes), where \(X = \mathrm{exp}(i\mathbf{k} \cdot \mathbf{a}_1)\), \(Y = \mathrm{exp}(i\mathbf{k} \cdot \mathbf{a}_2)\), \(\omega_0^2 = 1/L_0C\), τ = L0/L1, and the asterisk means complex conjugation.
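The dispersion implied by Equations (2) and (3) can be explored with a short numerical sketch. The circuit values below are those quoted for the topological case of Fig. 2c, while the unit-length hexagonal lattice vectors and the choice of k-point are illustrative assumptions of ours, not values taken from the paper:

```python
import numpy as np

# Circuit parameters of the topological case (Fig. 2c): tau = L0/L1 > 1
L0, L1, C = 5.09e-9, 3.13e-9, 7.27e-12
tau = L0 / L1
omega0 = 1.0 / np.sqrt(L0 * C)

# Hexagonal lattice vectors with unit lattice constant (an assumption for illustration)
a1 = np.array([1.0, 0.0])
a2 = np.array([0.5, np.sqrt(3.0) / 2.0])

def mode_frequencies(k):
    """Six normal-mode frequencies (Hz) at wave vector k from Eqs. (2)-(3)."""
    X, Y = np.exp(1j * k @ a1), np.exp(1j * k @ a2)
    Qk = np.array([[tau * X * np.conj(Y), 1.0, 1.0],
                   [1.0, 1.0, tau * np.conj(X)],
                   [1.0, tau * Y, 1.0]])
    Q = np.block([[np.zeros((3, 3)), Qk], [Qk.conj().T, np.zeros((3, 3))]])
    q = np.linalg.eigvalsh(Q)                 # Q is Hermitian, eigenvalues are real
    omega2 = omega0**2 * (2.0 + tau - q)      # from Eq. (2)
    return np.sort(np.sqrt(np.maximum(omega2, 0.0))) / (2.0 * np.pi)

# Frequencies at the Gamma point, in GHz; a gap opens near 1.5 GHz for tau != 1
print(mode_frequencies(np.array([0.0, 0.0])) / 1e9)
```

Sweeping k along the Γ–M–K path with this helper reproduces the qualitative behavior described in the text: a gapped spectrum for τ ≠ 1 and Dirac-like touching when τ = 1.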
Figure 2 displays the frequency band structures for three typical values of τ. As shown in Fig. 2a for τ < 1, there is a global frequency band gap around 1.515 GHz, and the EM modes exhibit double degeneracy both below and above the frequency band gap. When τ = 1, the frequency band gap is closed at the Γ point, center of the Brillouin zone (BZ), and two sets of linear dispersions, known as photonic Dirac cones, appear with four-fold degeneracy at the Γ point as displayed in Fig. 2b (accidental Dirac cones were achieved before in square lattices and were used to manipulate EM transport with zero refractive index44). For τ > 1, a global frequency gap reopens as shown in Fig. 2c. While the frequency band structures in Fig. 2a, c look similar to each other, the EM modes in the two cases are different which can be characterized by the eigenvalue of the two-fold rotation operator C245,46, or equivalently the parity with respect to the 2D spatial inversion, at the high-symmetry points of BZ (see Supplementary Figure 2 for details). For τ < 1 (Fig. 2a), the parities of the eigen EM modes below the frequency band gap are given by "+ − −" at both Γ and M points. In contrast, for τ > 1 (Fig. 2c), the parities are given by "+ + +" at the Γ point while "− + −" at the M point. With the parities of EM modes different at the Γ and M points, the case τ > 1 features a nontrivial topology. Therefore, the lumped element circuit exhibits a topological phase transition with the pristine honeycomb structure as the transition point.
Band structures and topological phase transition in LC circuit. Frequency band structures calculated based on Equations (2) and (3) for a τ < 1 with L0 = 3.60 nH and L1 = 6.35 nH, b τ = 1 with L0 = L1 = 4.22 nH, and c τ > 1 with L0 = 5.09 nH and L1 = 3.13 nH. The on-node capacitance is taken as C = 7.27 pF for all three cases. The distributed inductances and lumped capacitance are taken as tuning parameters, which reproduce the frequency band gap and the gap-center frequency of the experimental setup. The values of the distributed inductances are close to those evaluated from the experimental structures of microstrip arrays, and the on-node capacitance is slightly larger than the lumped one due to the distributed capacitances coming from the microstrip lines. The signs "+" and "−" inside the black dots denote the parities of the eigen EM modes with respect to the two-dimensional spatial inversion at the high-symmetry Γ point and M point of the Brillouin zone. d–g Phase distributions of the out-of-plane electric field Ez for the four eigenmodes at the Γ point close to the frequency band gap at 1.515 GHz in (a, c)
The details of the phase windings at the Γ point as displayed in Fig. 2d–g reveal that the EM eigenmodes can be designated as p± and d± orbitals. A \(k \cdot p\) Hamiltonian can be formulated around the Γ point based on these four orbitals27, where the 4 × 4 matrix is block diagonalized into two 2 × 2 matrices, associated with the EM modes with positive and negative OAM, respectively (see Supplementary Note 2 for details). This \(k \cdot p\) Hamiltonian takes the same form as the Bernevig–Hughes–Zhang model of QSHE proposed for HgTe quantum wells12, where the two 2 × 2 blocks are associated with the electronic spin-up and -down states. Parallelizing these two Hamiltonians, it is clear that the sign of OAM of the eigen EM mode in the present topological microstrip arrays plays the same role as the spin in spin–orbit coupled electronic systems, indicating that the sign of the OAM behaves as an emergent pseudospin degree of freedom27,28,29,30,31,47,48,49. In the case of Fig. 2c, band inversion between p and d orbitals takes place, which is thus topologically distinct from the case of Fig. 2a, in agreement with the conclusion derived from the analysis based on the eigenvalue of C2.
While for simplicity and transparency, we describe the topological phase transition by a lumped element circuit with a two-valued inductance and uniform capacitance, the phenomenon is generic for circuits with textured capacitances and/or inductances. The system can also be reformulated in terms of propagating electric and magnetic fields with dielectric permittivity and magnetic permeability as relevant parameters. Therefore, the physics revealed here also applies to a broad class of planar networks of waveguides50 including coaxial cables and striplines.
Observing pseudospin and p–d orbital hybridization
We then implement experimentally the topological photonic state revealed above by designing the planar microstrip arrays as shown in Fig. 1b. Because the distributed capacitances of the metallic strips with respect to the ground plate estimated following the standard procedure42 are smaller than the lumped ones by one order of magnitude, the distributed capacitances can be incorporated to good approximation into the on-node capacitance, resulting in the lumped element circuit discussed above. The widths of the metallic strips in the trivial and topological designs are chosen in such a way that the two bulk systems give a common frequency band gap, taking into account the common lumped capacitance. In order to reveal explicitly the topological EM properties, we put these two microstrip arrays side by side as displayed in Fig. 1a, b. As shown in Fig. 3a, obtained by numerical calculations based on a supercell for the lumped element circuit, two frequency dispersions appear in the common bulk frequency gap due to the inclusion of the interface between the two half-spaces of distinct topology. It is interesting to note that these interface modes are characterized mainly by two degrees of freedom, namely pseudospin and parity as resolved experimentally below.
Resolving pseudospin and p–d orbital hybridization. a Calculated frequency band structure for the whole system in Fig. 1b with an interface between the topologically trivial and nontrivial regimes. A supercell is adopted including 8 hexagonal unit cells on both sides of the interface where the parameters for Fig. 2a, c are taken, respectively. The system is considered infinite along the direction of the interface. The right panel is a zoomed-in view of the frequency band diagram around the bulk band gap at 1.515 GHz. The red/green arrow indicates the dispersion of the rightward/leftward-propagating interface mode. b, c Distributions of the out-of-plane electric field Ez obtained by the full-wave simulations (b) and experimental measurements (c) using a linearly polarized source located at the interface. The source frequency is set at f = 1.47 GHz for the full-wave simulations and f = 1.44 GHz for experimental measurements. d Phase distribution of the out-of-plane electric field Ez under the same condition as (b) which is mirror symmetric with respect to the mirror line perpendicular to the interface indicated by the two dark-blue arrows. The source is located on the mirror line. The two insets show zoomed-in views of the phase distributions in the two typical hexagonal unit cells close to the interface, with the left/right one accommodating clockwise/counterclockwise phase winding. e–g, h–j Full-wave simulated and experimentally measured phase distributions in the right highlighted hexagon in (d) with up pseudospin at three frequencies indicated in the right panel in (a). k–m, n–p Same as those in (e–g) and (h–j), respectively except for the left highlighted hexagon in (d). q Frequency dependence of weights of p and d orbitals obtained by the full-wave simulations and experimental measurements for the right hexagonal unit cell in (d). The p and d orbitals take the same weight at the frequency where the two interface frequency dispersions cross each other in (a), with an apparent difference of 0.03 GHz between the simulated and experimental results. The error bars indicate statistical uncertainty (standard deviation) during three measurements
In order to detect these topological interface EM modes experimentally, we launch an EM wave from a linearly polarized dipole source located in the interface with a frequency within the common bulk frequency gap (see inset of Fig. 1b). It is noticed that injecting an EM wave into the system without disturbing the bulk frequency band is a feature inherent to the bosonic property of photons which is not available for electrons. As displayed in Fig. 3b, c for the distributions of the out-of-plane electric field Ez obtained by the full-wave simulations and experimental measurements (see Methods for details), respectively, the EM wave propagates only along the interface. Figure 3d shows the phase distribution of the out-of-plane electric field Ez obtained by the full-wave simulations, which exhibits clockwise/counterclockwise phase winding in the half of the sample to the left/right of the source, as is clearly revealed by the mirror symmetry with respect to a mirror line perpendicular to the interface and passing through the source (indicated by the two dark-blue arrows in Fig. 3d). The two insets show the zoomed-in views of the phase distributions in the two typical hexagonal unit cells close to the interface, with the left/right one accommodating the clockwise/counterclockwise phase winding, which specifies the down/up pseudospin state of EM modes. This demonstrates a clear pseudospin-momentum locking in the topological interface EM propagation, mimicking the helical edge states in QSHE.
Now we investigate the variation of phase distribution in the topological interface EM modes when the source frequency is swept across the bulk frequency band gap. The interface EM modes intersecting the frequency bands below and above the band gap are composed from both p and d orbitals, which can be resolved by analyzing the phase winding noticing that for a p/d orbital the phase winds 2π/4π over a hexagonal unit cell (see Supplementary Figure 3 and 4, and Supplementary Note 3 and 4 for details). As displayed in Fig. 3e–p obtained by the full-wave simulations and experimental measurements, at a frequency close to the lower band edge (Fig. 3e, h, k, n) the interface EM modes consist mainly of p orbitals, and at a frequency close to the upper band edge (Fig. 3g, j, m, p) the interface EM modes are predominately d orbitals, whereas p and d orbitals contribute equally at the center frequency of the band gap (Fig. 3f, i, l, o). Figure 3q displays the full frequency dependence of the weights of p+ and d+ orbitals evaluated in terms of the Fourier analysis on the phase distribution in the right zoomed-in hexagon in Fig. 3d (same results are obtained for the left hexagon and p− and d− orbitals as assured by the mirror symmetry), with a systematic frequency shift of 0.03 GHz between the experimental results and the ones obtained by the full-wave simulations due to the tolerance of the material and structural parameters in the fabrication. Because the p and d orbitals correspond to the dipolar and quadrupolar EM modes, respectively, using a linearly polarized dipole source we can generate and guide EM waves with the desired sign of OAM by choosing the propagation direction, and desired relative weight of dipolar and quadrupolar EM modes by choosing the working frequency within the topological band gap. These properties may be exploited to design topology-based microwave antennas and receivers.
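As a rough illustration of such an azimuthal Fourier analysis, the sketch below assumes the complex Ez has been sampled at the six nodes of a hexagonal unit cell, ordered counterclockwise; the sampling convention and normalization are our assumptions rather than the authors' processing pipeline:

```python
import numpy as np

def oam_weights(Ez_nodes):
    """Relative weights of the l = +1 (p+) and l = +2 (d+) angular harmonics
    from the complex out-of-plane field at the six hexagon nodes."""
    Ez = np.asarray(Ez_nodes, dtype=complex)        # nodes ordered counterclockwise
    theta = 2.0 * np.pi * np.arange(6) / 6.0        # azimuthal angles of the nodes
    c = {l: np.mean(Ez * np.exp(-1j * l * theta)) for l in (1, 2)}
    w = {l: abs(c[l]) ** 2 for l in c}
    total = sum(w.values())
    return {l: w[l] / total for l in w}             # weights normalized to 1

# Example: a synthetic field that is 70% p+ and 30% d+ by construction
theta = 2.0 * np.pi * np.arange(6) / 6.0
Ez = np.sqrt(0.7) * np.exp(1j * theta) + np.sqrt(0.3) * np.exp(2j * theta)
print(oam_weights(Ez))   # ~{1: 0.7, 2: 0.3}
```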
The OAM accommodated in the hexagonal unit cell of the present microstrip array is intimately related to the local Poynting vector through the Faraday relation (see Supplementary Note 5 for details). For a harmonic mode with frequency ω the local Poynting vector is given by
$${\mathbf{S}} = {\mathrm{Re}}[{\mathbf{E}} \times {\mathbf{H}}^{\ast}]/2 = \frac{\left| E_z \right|^2}{2\mu_0\omega}\left( \frac{\partial \varphi}{\partial x}{\mathbf{x}} + \frac{\partial \varphi}{\partial y}{\mathbf{y}} \right)$$
where \({\mathbf{E}} = E_z{\mathbf{z}} = \left| {E_z} \right|e^{i\varphi }{\mathbf{z}}\) (with x, y, and z being the unit vectors in the three spatial directions) and H are the out-of-plane electric field and the in-plane magnetic field, respectively. For EM modes with fixed OAM such as \(p_ \pm\) and \(d_ \pm\) defined in the hexagonal unit cell, the local Poynting vectors circulate around the edges of the hexagon. It is obvious that \(p_ -\) and \(d_ -\) orbitals accommodate the Poynting vectors circulating clockwise whereas \(p_ +\) and \(d_ +\) orbitals accommodate those circulating counterclockwise, which correspond to the two pseudospin polarizations in the present system. One can evaluate explicitly the amount of angular momentum carried by the local Poynting vector given in Equation (4)
$${\mathbf{L}} = {\mathbf{r}} \times {\mathbf{S}}/c^2 = \frac{\left| E_z \right|^2}{2\mu \omega c^2}\left( x\frac{\partial \varphi}{\partial y} - y\frac{\partial \varphi}{\partial x} \right){\mathbf{z}}.$$
It can be shown (see Supplementary Note 5 for details) that for the EM mode \(E_z = \left| {E_z} \right|{\mathrm{exp}}\left( {i\varphi } \right) = \left| {E_z} \right|{\mathrm{exp}}\left( {il\theta } \right)\) where θ is the azimuthal angle and l = ±1 (for p±) or l = ±2 (for d±) one has \({\mathbf{L}} = \frac{{\left| {E_z} \right|^2}}{{2\mu \omega c^2}}l{\mathbf{z}}\). Therefore, one photon of energy ℏω carries a quantized OAM lℏ along the normal of microstrip plane.
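Equation (4) can be evaluated directly from a measured complex Ez map: since \(|E_z|^2\,\partial_x\varphi = \mathrm{Im}(E_z^{\ast}\,\partial_x E_z)\), no phase unwrapping is needed. The sketch below is our own post-processing assumption (regular grid, numerical derivatives), not the authors' code:

```python
import numpy as np

MU0 = 4.0e-7 * np.pi   # vacuum permeability, H/m

def in_plane_poynting(Ez, dx, dy, freq):
    """Time-averaged in-plane Poynting vector (Sx, Sy) from the complex
    out-of-plane field Ez sampled on a regular (ny, nx) grid, per Eq. (4)."""
    omega = 2.0 * np.pi * freq
    dEz_dy, dEz_dx = np.gradient(Ez, dy, dx)        # numerical derivatives along y, x
    Sx = np.imag(np.conj(Ez) * dEz_dx) / (2.0 * MU0 * omega)
    Sy = np.imag(np.conj(Ez) * dEz_dy) / (2.0 * MU0 * omega)
    return Sx, Sy

# Example: a synthetic p+ vortex Ez ~ (x + i y) exp(-r^2) circulates counterclockwise
x = y = np.linspace(-1e-2, 1e-2, 101)               # 2 cm x 2 cm patch, 0.2 mm step
X, Y = np.meshgrid(x, y)
Ez = (X + 1j * Y) * np.exp(-(X**2 + Y**2) / (5e-3) ** 2)
Sx, Sy = in_plane_poynting(Ez, x[1] - x[0], y[1] - y[0], freq=1.47e9)
```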
The local Poynting vectors measured experimentally at 1.44 GHz are displayed in Fig. 4a for the two typical hexagonal unit cells close to the interface, and those obtained by the full-wave simulations at 1.47 GHz are displayed in Fig. 4b over a wide region along the interface (shown schematically in Fig. 4c). Good agreement is achieved between experiments and simulations. On the left-/right-hand side of the source, the local Poynting vectors circulate clockwise/counterclockwise in the hexagonal unit cells in both topological and trivial regimes. In hexagons above the interface (i.e., in the topological regime), the density of the local Poynting vectors is larger at the bottom edge (closer to the interface) than that on the top edge, which generates net energy flows in two directions along the interface. In hexagons below the interface (i.e., in the trivial regime), although the local Poynting vectors circulate in the same ways as those in the topological regime, the density of the local Poynting vectors is smaller at the top edge (closer to the interface) than that on the bottom edge, opposite to that in the topological regime, which therefore contributes the same net energy flows. This yields the winding phases, or equivalently the circulating local Poynting vectors, in hexagonal unit cells and the unidirectional energy flow along the interface. The distribution of the local Poynting vectors in Fig. 4 indicates explicitly that OAM of EM mode governs the topological interface EM propagation in the present system.
Distributions of local Poynting vectors in the interface EM modes. a Distribution of the local Poynting vectors obtained by experimental measurements of the amplitude and phase of the out-of-plane electric field Ez (see Equation (4)) in the two hexagonal unit cells bounded by the dashed lines in (b) at 1.44 GHz. The size and direction of the arrows denote the amplitude and direction of the local Poynting vectors. b Distribution of the local Poynting vectors obtained by the full-wave simulations at 1.47 GHz in the region bounded by the dashed line in (c). c Schematic of the distribution of the local Poynting vector S over hexagonal unit cells in the topological interface modes stimulated by a linearly polarized source located in one of the unit cells (the red dots)
As seen above, the planar and open structure of the present system permits us to observe directly the pseudospin states, pseudospin-momentum locking, and furthermore the p–d orbital hybridization in the interface EM modes, which constitute the essence of the topological state preserving TRS. So far, the relevance of pseudospin in topological interface propagation has been inferred based on comparisons between experimental observations and theoretical analyses, and there are few experimental studies on the relative weight of orbitals with opposite parities in topological interface propagations.
EM waves with OAM attract considerable current interest. It becomes clear that optical fields with OAM are ideal for many important applications such as communications, particle manipulation, and high-resolution imaging51. Even at microwave frequency, tunable OAM provides a new degree of freedom, which can be used for controlling on-chip propagation and can be exploited to develop novel high information density radar and wireless communication protocols52,53. The OAM explored in the present work is defined in unit cells and is oriented perpendicular to the propagating direction of the topological interface EM modes, different from OAM carried by light vortices in continuous media that is parallel to the propagating direction. To figure out a way to emit efficiently EM modes carrying OAM supported by the microstrip structure with C6v symmetry into free space is one of the most intriguing future problems. The topological EM properties achieved in the planar circuit can not only be exploited for microwave photonics54 and plasmonics55, but can also be extended up to the infrared frequency regime56. In the photonic wire laser working in the terahertz band57,58, a patterned, double-sided metal waveguide is used for confining and directing the emission from a quantum-cascade laser, with the metal–semiconductor–metal structure essentially being the same as the microstrip array investigated in the present study. In terms of a honeycomb-patterned network with typically micrometer strip-widths one can achieve a topological quantum-cascade laser, where the emission and propagation of terahertz EM waves are governed by OAM. It is also worth noticing that the 2D structure of the present scheme makes the topological microwave-guiding compatible with various lithographically fabricated planar devices. Extension of the lumped element circuit discussed in the present work to a network including resonators of quantum features, such as quantum bits (qubit) based on SQUID structures59, is of special interest.
Preparation of the sample and the full-wave simulations
To prepare the perimeter of the whole sample, we load lumped resistors between the metallic strips and the common ground plane (i.e., bottom metallic substrate), which corresponds to a perfect matching boundary condition. The values of lumped resistors are selected according to the characteristic impedances Z0 of microstrip lines, 115, 74, 97, and 66 Ω for microstrip lines with widths of 1.0, 2.6, 1.5, and 3.2 mm, respectively42. In order to numerically simulate the system, we perform three-dimensional full-wave finite-element simulations using Computer Simulation Technology Microwave Studio software based on a finite integration method in the time domain. The dielectric loss tangent (tan δ) of the substrate and the conductivity of the metallic microstrip lines are set to be 0.0079 and 5.8 × 107 S/m, respectively. The internal resistance in each lumped capacitor is taken as 1 Ω. An open boundary condition is applied for the whole sample including the terminal resistors. In order to demonstrate fully the unidirectional interface EM transport governed by pseudospin, we also prepare interfaces with sharp turns and stimulate the system by a source with signals overlapping exclusively with one of the two pseudospins of the interface modes (see Supplementary Figure 5 and Supplementary Note 6 for details).
Experimental setup
Signals generated from a vector network analyzer (Agilent PNA Network Analyzer N5222A) are transported into a port located in the sample, which works as the source for the system (see Supplementary Figure 6 and Supplementary Note 7 for details). A small homemade rod antenna of 2 mm length is employed to measure the out-of-plane electric field Ez at a constant height of 2 mm from the microstrip lines. We make sure by the full-wave simulations that the field distribution thus measured is almost the same as that at the very surface of microstrip lines. The antenna is mounted to a 2D translational stage to scan the field distribution over the whole system with a step of 2 mm. A finer step of 1 mm is taken in order to measure accurately the field distribution in several typical hexagonal unit cells, which reveals the pseudospin structure. The measured data are then sent to the vector network analyzer. By analyzing the recorded field values, we obtain the distributions of both amplitude and phase of the out-of-plane electric field Ez, which are used for analysis of detailed phase windings, weights of p and d orbitals, and local Poynting vectors in the topological interface states.
Code availability
All the computer codes that support the findings of this study are available from the corresponding authors upon reasonable request.
The data that support the findings of this study are available from the corresponding authors upon reasonable request.
Shelby, R. A., Smith, D. R. & Schultz, S. Experimental verification of a negative index of refraction. Science 292, 77–79 (2001).
Pendry, J. B. Negative refraction makes a perfect lens. Phys. Rev. Lett. 85, 3966–3969 (2000).
Fang, N., Lee, H., Sun, C. & Zhang, X. Sub-diffraction-limited optical imaging with a silver superlens. Science 308, 534–537 (2005).
Pendry, J. B., Schurig, D. & Smith, D. R. Controlling electromagnetic fields. Science 312, 1780–1782 (2006).
Cai, W. S., Chettiar, U. K., Kildishev, A. V. & Shalaev, V. M. Optical cloaking with metamaterials. Nat. Photonics 1, 224–227 (2007).
Alekseyev, L. V. & Narimanov, E. Slow light and 3D imaging with non-magnetic negative index systems. Opt. Express 14, 11184–11193 (2006).
Berry, M. V. Quantal phase-factors accompanying adiabatic changes. Proc. R. Soc. Lond. A 392, 45–57 (1984).
Klitzing, K. V., Dorda, G. & Pepper, M. New method for high-accuracy determination of the fine-structure constant based on quantized hall resistance. Phys. Rev. Lett. 45, 494–497 (1980).
Thouless, D. J., Kohmoto, M., Nightingale, M. P. & Dennijs, M. Quantized Hall conductance in a two-dimensional periodic potential. Phys. Rev. Lett. 49, 405–408 (1982).
Haldane, F. D. M. Model for a quantum Hall-effect without Landau-levels—condensed-matter realization of the parity anomaly. Phys. Rev. Lett. 61, 2015–2018 (1988).
Hasan, M. Z. & Kane, C. L. Colloquium: topological insulators. Rev. Mod. Phys. 82, 3045–3067 (2010).
Qi, X. L. & Zhang, S. C. Topological insulators and superconductors. Rev. Mod. Phys. 83, 1057–1110 (2011).
Weng, H. M., Yu, R., Hu, X., Dai, X. & Fang, Z. Quantum anomalous Hall effect and related topological electronic states. Adv. Phys. 64, 227–282 (2015).
Haldane, F. D. M. & Raghu, S. Possible realization of directional optical waveguides in photonic crystals with broken time-reversal symmetry. Phys. Rev. Lett. 100, 013904 (2008).
Wang, Z., Chong, Y. D., Joannopoulos, J. D. & Soljačić, M. Reflection-free one-way edge modes in a gyromagnetic photonic crystal. Phys. Rev. Lett. 100, 013905 (2008).
Wang, Z., Chong, Y. D., Joannopoulos, J. D. & Soljačić, M. Observation of unidirectional backscattering-immune topological electromagnetic states. Nature 461, 772–775 (2009).
Lu, L., Joannopoulos, J. D. & Soljačić, M. Topological photonics. Nat. Photonics 8, 821–829 (2014).
Rechtsman, M. C. et al. Photonic Floquet topological insulators. Nature 496, 196–200 (2013).
He, C. et al. Photonic topological insulator with broken time-reversal symmetry. Proc. Natl. Acad. Sci. U.S.A. 113, 4924–4928 (2016).
Poo, Y., Wu, R. X., Lin, Z. F., Yang, Y. & Chan, C. T. Experimental realization of self-guiding unidirectional electromagnetic edge states. Phys. Rev. Lett. 106, 093903 (2011).
Fang, K. J., Yu, Z. F. & Fan, S. H. Realizing effective magnetic field for photons by controlling the phase of dynamic modulation. Nat. Photonics 6, 782–787 (2012).
Lu, L. et al. Symmetry-protected topological photonic crystal in three dimensions. Nat. Phys. 12, 337–340 (2016).
Bahari, B. et al. Nonreciprocal lasing in topological cavities of arbitrary geometries. Science 358, 636–640 (2017).
Hafezi, M., Demler, E. A., Lukin, M. D. & Taylor, J. M. Robust optical delay lines with topological protection. Nat. Phys. 7, 907–912 (2011).
Hafezi, M., Mittal, S., Fan, J., Migdall, A. & Taylor, J. M. Imaging topological edge states in silicon photonics. Nat. Photonics 7, 1001–1005 (2013).
Khanikaev, A. B. et al. Photonic topological insulators. Nat. Mater. 12, 233–239 (2013).
Wu, L. H. & Hu, X. Scheme for achieving a topological photonic crystal by using dielectric material. Phys. Rev. Lett. 114, 223901 (2015).
Barik, S., Miyake, H., DeGottardi, W., Waks, E. & Hafezi, M. Two-dimensionally confined topological edge states in photonic crystals. New J. Phys. 18, 113013 (2016).
Barik, S. et al. Topological quantum optics interface. Science 359, 666–668 (2018).
Yves, S. et al. Crystalline metamaterials for topological properties at subwavelength scales. Nat. Commun. 8, 16023 (2017).
Yang, Y. T. et al. Visualization of a unidirectional electromagnetic waveguide using topological photonic crystals made of dielectric materials. Phys. Rev. Lett. 120, 217401 (2018).
Cheng, X. J. et al. Robust reconfigurable electromagnetic pathways within a photonic topological insulator. Nat. Mater. 15, 542–548 (2016).
Chen, W. J. et al. Experimental realization of photonic topological insulator in a uniaxial metacrystal waveguide. Nat. Commun. 5, 5782 (2014).
Albert, V. V., Glazman, L. I. & Jiang, L. Topological properties of linear circuit lattices. Phys. Rev. Lett. 114, 173902 (2015).
Ningyuan, J., Owens, C., Sommer, A., Schuster, D. & Simon, J. Time- and site-resolved dynamics in a topological circuit. Phys. Rev. X 5, 021031 (2015).
Khanikaev, A. B. & Shvets, G. Two-dimensional topological photonics. Nat. Photonics 11, 763–773 (2017).
Bliokh, K. Y., Rodríguez-Fortuño, F. J., Nori, F. & Zayats, A. V. Spin–orbit interactions of light. Nat. Photonics 9, 796–808 (2015).
Kerber, R. M., Fitzgerald, J. M., Reiter, D. E., Oh, S. S. & Hess, O. Reading the orbital angular momentum of light using plasmonic nanoantennas. ACS Photonics 4, 891–896 (2017).
Bliokh, K. Y., Smirnova, D. & Nori, F. Quantum spin Hall effect of light. Science 348, 1448–1451 (2015).
Bliokh, K. Y., Bekshaev, A. Y. & Nori, F. Optical momentum, spin, and angular momentum in dispersive media. Phys. Rev. Lett. 119, 073901 (2017).
Song, D. H. et al. Unveiling pseudospin and angular momentum in photonic graphene. Nat. Commun. 6, 6272 (2015).
Hong, J. S. & Lancaster, M. J. Microstrip Filters for RF/Microwave Application (John Wiley & Sons, 2001).
Fu, L. Topological crystalline insulators. Phys. Rev. Lett. 106, 106802 (2011).
Huang, X. Q., Lai, Y., Hang, Z. H., Zheng, H. H. & Chan, C. T. Dirac cones induced by accidental degeneracy in photonic crystals and zero-refractive-index materials. Nat. Mater. 10, 582–586 (2011).
Benalcazar, W. A., Teo, J. C. Y. & Hughes, T. L. Classification of two-dimensional topological crystalline superconductors and Majorana bound states at disclinations. Phys. Rev. B 89, 224503 (2014).
Noh, J. et al. Topological protection of photonic mid-gap cavity modes. Nat. Photonics 12, 408–415 (2018).
Wu, L. H. & Hu, X. Topological properties of electrons in honeycomb lattice with detuned hopping energy. Sci. Rep. 6, 24347 (2016).
He, C. et al. Acoustic topological insulator and robust one-way sound transport. Nat. Phys. 12, 1124–1129 (2016).
Brendel, C., Peano, V., Painter, O. & Marquardt, F. Snowflake phononic topological insulator at the nanoscale. Phys. Rev. B 97, 020102 (2018).
Yariv, A. & Yeh, P. Optical Waves in Crystals: Propagation and Control of Laser Radiation (John Wiley & Sons, 1984).
Qiu, C. W. & Yang, Y. J. Vortex generation reaches a new plateau. Science 357, 645 (2017).
Thidé, B. et al. Utilization of photon orbital angular momentum in the low-frequency radio domain. Phys. Rev. Lett. 99, 087701 (2007).
Tamburini, F. et al. Encoding many channels on the same frequency through radio vorticity: first experimental test. New J. Phys. 14, 033001 (2012).
Capmany, J. & Novak, D. Microwave photonics combines two worlds. Nat. Photonics 1, 319–330 (2007).
Ozbay, E. Plasmonics: merging photonics and electronics at nanoscale dimensions. Science 311, 189–193 (2006).
Schnell, M. et al. Nanofocusing of mid-infrared energy with tapered transmission lines. Nat. Photonics 5, 283–287 (2011).
Khalatpour, A., Reno, J. L., Kherani, N. P. & Hu, Q. Unidirectional photonic wire laser. Nat. Photonics 11, 555–559 (2017).
Williams, B. S., Kumar, S., Callebaut, H., Hu, Q. & Reno, J. L. Terahertz quantum-cascade laser at λ ≈ 100 μm using metal waveguide for mode confinement. Appl. Phys. Lett. 83, 2124–2126 (2003).
Devoret, M. H., Wallraff, A. & Martinis, J. M. Superconducting qubits: a short review. Preprint at https://arxiv.org/abs/cond-mat/0411174 (2004).
H. Chen and Y. Sun are supported by the National Key Research Program of China (No. 2016YFA0301101), the National Natural Science Foundation of China (Grant Nos. 11234010, 61621001, and 11674247), the Shanghai Science and Technology Committee (Nos. 18JC1410900 and 18ZR1442900), and the Fundamental Research Funds for the Central Universities. X. Hu is supported by Grants-in-Aid for Scientific Research No.17H02913, Japan Society of Promotion of Science.
MOE Key Laboratory of Advanced Micro-Structured Materials, School of Physics Science and Engineering, Tongji University, Shanghai, 200092, China: Yuan Li, Yong Sun, Weiwei Zhu, Zhiwei Guo, Jun Jiang & Hong Chen
International Center for Materials Nanoarchitectonics (WPI-MANA), National Institute for Materials Science, Tsukuba, 305-0044, Japan: Toshikaze Kariyado & Xiao Hu
Y.L. prepared the sample and conducted experimental measurements and the full-wave simulations. Y.S., W.Z., Z.G., and J.J. helped with experiments. T.K. joined discussions on the model and theoretical analyses. H.C. and X.H. conceived the idea, supervised the project, and wrote the manuscript. All authors fully contribute to the research.
Correspondence to Hong Chen or Xiao Hu.
The authors declare no competing interests.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Latin American Economic Review
Implicit redistribution within Argentina's social security system: a micro-simulation exercise
Pedro E. Moncarz
Latin American Economic Review, volume 24, Article number: 2 (2015)
The intra-generational redistribution in the Argentinean pension program is assessed on a lifetime basis. Using household surveys, the lifetime flows of labor income, contributions and retirement benefits are simulated. Then, the expected present values of pre- and post-social security labor income are computed. The results show that the pay-as-you-go defined-benefit system appears to be regressive, especially for women in the private sector. The results are robust to the use of alternative discount rates and different definitions of pre- and post-social security wealth. When income from informal jobs is taken into account, the system becomes slightly progressive. A weak enforcement of the law makes the system less regressive. Finally, in a counterfactual scenario in which there is no informal labor, the system becomes almost neutral, even showing a small level of progressivity.
One of the reasons for the existence of compulsory contributory pension schemes is to ensure that income earners save out part of their incomes to cover for their expenditure needs when they retire from labor markets. The implicit assumption behind the need for a compulsory regime is that otherwise people would not save enough if the decision is left only on a voluntary basis. However, social security (SS) programs are also used as tools to redistribute income from the better-off to the worst-off. In the light of this second objective, it is usually the case that pension formulas include, to a less or more extent, some redistributive components (e.g. minimum pensions). Moreover, even retirement regimes based on the principle of actuarial fairness, like individual accounts defined contribution (IA-DC) programs, may also incorporate non-actuarial redistributive ingredients.
Also, SS programs redistribute income through more subtle mechanisms. High mortality rates affect mostly low-income workers when unified mortality tables are used (Garrett 1995; Duggan et al. 1995; Beach and Davis 1998). Government transfers to finance SS tend to favor the population covered by the programs which, as Rofman et al. (2008) point out, in developing countries tend to be the better-off. Low densities of contribution mean that some workers are left ineligible for benefits who, as shown in Forteza et al. (2009) and Berstein et al. (2006), are mostly low-income earners.
The aim of achieving a more progressive redistribution of income through the design of pension regimes might collide with the objective of having systems that are self-funded. This trade-off has been present throughout the history of almost all pension programs around the world, and Latin America has been no exception. Examples of this have been the regime changes in Argentina (2004–2008), Chile (2008), Brazil (1988, 2003), Uruguay (2008) and Bolivia (2008), among others.
As explained by Bertranou et al. (2011), the reform in Chile had its origin in the evidence that the system of individual accounts as designed in 1981 would mean that most pensioners would obtain very low benefits when compared with their incomes before retirement, with the most affected groups being low-income earners, seasonal workers, the self-employed, and above all women. To avoid this scenario, the reform of 2008 introduced a solidarity pension financed by the government; it also implemented different measures to increase the system's coverage, especially among the most vulnerable groups mentioned above; and, third, it sought to improve the rate of return of the individual accounts through more competition among the fund administrators and the reduction of administrative fees. In the case of Brazil, the reform of 1988 introduced the concept of universal coverage, non-discrimination against the rural population, and the protection of the real value of benefits. With these aims, a non-contributory pension was introduced for people with disabilities or older than 65 years of age who were in a situation of extreme poverty, while the reform of 2003 aimed at harmonizing the many pension regimes in order to minimize the inequalities between the general regime and the special ones. In Uruguay, in 2008, the government introduced a series of changes to the pension system aimed at allowing older people with insufficient contributions to gain access to a retirement benefit. In this regard, the minimum number of years of contributions was reduced from 35 to 30, while keeping the retirement age at 60 years, and for unemployed workers with 28 years of contributions and aged 58 an unemployment subsidy was implemented for a maximum period of 2 years. Also, a non-contributory old-age pension was implemented for those in a more vulnerable social situation. Finally, in the case of Bolivia, in 2008 a universal non-contributory pension was implemented for all citizens 60 years or older, which is financed with revenues that are not related to contributions made by workers and/or employers. However, a challenge that remains to be dealt with is the huge imbalance between the contributory and non-contributory pillars of the system.
Moving to the case of Argentina, which is the focus of this study, during the first half of the 2000s there was a series of changes aimed at increasing the proportion of the population with a pension benefit through several moratoriums allowing people who had not completed the required years of contributions to access a pension benefit. However, in 2008, at the peak of the world economic crisis, a structural change took place: the mixed system that had been in place since 1994 was abolished and replaced by a publicly administered pay-as-you-go defined-benefit (PAYG-DB) scheme.Footnote 3 The official reason put forward by the government was the intention of protecting the value of the funds saved in the individual accounts, which had lost a great deal of value because of the world economic crisis.Footnote 4 However, another reason, perhaps even more important, was the need of the Federal Government, which was unable to access world capital markets, to gain control over an accumulated fund of around 29 billion USD and, not least important, an annual flow of nearly 4 billion USD in contributions previously diverted into the system of individual accounts.
As can be appreciated from the examples just mentioned, in most cases the objectives behind the reforms were to increase the share of the population covered by the system and to increase the replacement ratio of benefits. In what follows, we focus on the Argentine case, and more particularly on the redistribution stemming from the fact that low-income workers tend to have systematically shorter contribution histories.Footnote 5 With this aim, we assess the implicit redistribution of the Argentine pension program on a lifetime basis. Using household surveys, we simulate lifetime declared labor income and flows of contributions and benefits, and compute the expected present values of income and net flows. Standard distribution indexes are used to assess the distribution and redistribution implicit in the system.
The main finding is that the current PAYG-DB system in Argentina appears to be slightly regressive, especially in the case of women working in the private sector. These results are robust to the use of alternative discount rates and definitions of pre- and post-social security wealth (SSW). If income from informal jobs is also accounted for, the system becomes slightly progressive. A similar result emerges under a weak enforcement of the system rules.
Our main finding, that a solidarity system such as the PAYG-DB scheme in place in Argentina is regressive, appears to be at odds with the preconception that, from a distributional point of view, this type of arrangement is more progressive than a system of individual accounts, which is almost by definition actuarially fair. A possible explanation for the result that the PAYG-DB system does not improve income distribution is that, if soft eligibility conditions were allowed, the system would be financially unbalanced, since contributions from current workers would not be sufficient to honor benefit payments; the system therefore requires strict eligibility conditions, especially in terms of the number of years of contributions, as well as low replacement ratios. In an environment where informal employment is widespread, at least for some population groups, this means that many people are left behind when they reach retirement age. This problem, which is not present in a framework of individual accounts, mostly affects low-income earners, who are less likely to comply with the requirements for access to a retirement benefit.
The paper is organized as follows. Section 2 presents the conceptual framework. A brief description of the old-age pension program is presented in Sect. 3. Section 4 describes the data, while Sect. 5 presents the methodology. The main results are discussed in Sect. 6, while Sect. 7 summarizes the main findings.
This section summarizes part of the proposal of the project "Assessing Implicit Redistribution within Social Insurance Systems" developed with the support of the World Bank, which included five case studies: Argentina and Mexico (Moncarz 2011), Brazil (Zylberstajn 2011), Chile (Fajnzylber 2011) and Uruguay (Forteza and Mussio 2012). A summary of these five country studies can be found in Forteza (2014).
Micro-simulations of lifetime labor income and SS contributions and benefits are used to assess SS redistribution. The focus in this paper is on intra-generational redistribution: one cohort, current pension rules. Although they are no less important, we do not analyse inter-generational transfers (e.g. between current workers and current beneficiaries), nor transfers between those reached by the system, either because they have contributed during their working life and/or have access to a benefit, and those that are never covered by it.
The individual is considered as the unit of analysis, but redistribution in the SS system may look very different at the family level. Gustman and Steinmeier (2001) find that, when analysed at the individual level, US social security looks very redistributive, favoring low-income workers, but it looks much less so at the family level. Unfortunately, the lack of appropriate data prevents us from following this route. An element that may reduce the difference between outcomes at the family and individual levels is that, unlike the US system, the Argentinean system only allows for a "survivor benefit", while in the US there is also a "spouse benefit" which is paid even before the main beneficiary dies. Gustman and Steinmeier (2001) find that the "spouse benefit" is quantitatively important, and implies a transfer within the family rather than between families.
Ideally, the assessment of the redistributive impact of social security programs should be based on the comparison of income distribution with and without social security. This is not the same as comparing pre- and post-social security income (i.e. income minus contributions plus benefits), because social security is likely to induce changes in work hours, savings, wages and interest rates. Capturing these responses requires behavioral models of lifetime decisions. One possible drawback of such models is the assumption of full rationality, something that has been subject to much controversy, especially regarding long-run decisions like those involved in social security; after all, the most frequently invoked rationale for pension programs is individuals' myopia (Diamond 2005, chap. 4). Much of fiscal incidence analysis, in turn, relies on non-behavioral assumptions (Sutherland 2001; Immervoll et al. 2006): it is usually performed under the assumption that pre-tax income is not affected by the tax system. The approach proposed here is closer to the literature pioneered by Gruber and Wise (1999, 2004), who designed and computed a series of indicators of SS incentives to retire assuming no explicit behavioral responses.
Optimization models have the obvious advantage of incorporating behavioral responses, so that not only the direct effects of policies are considered but also the indirect effects that operate through behavioral changes. However, in order to keep things manageable, these theoretically ambitious models necessarily make highly stylized assumptions regarding not only individual preferences and constraints, but also the social security programs themselves. Given the goals of the proposed research, this is a serious drawback. Non-behavioral micro-simulations are based on exogenously given work histories and are geared to providing insights into the social security transfers that emerge from those histories. Thanks to their relative simplicity, non-behavioral models allow for a much more detailed specification of policy rules and work histories than inter-temporal optimization models. An additional advantage of micro-simulations is that the effects are straightforward, so no black-box issues arise. At the very least, this approach can be expected to capture the first-order impact of social security on income distribution. Micro-simulation modeling can thus be seen as a first step in a more ambitious research program that incorporates behavioral responses at a more advanced stage.
The Argentinean pension and unemployment programs
With small variations, Diamond (2006), Valdés-Prieto (2006), Lindbeck and Persson (2003) and Lindbeck (2006) classify social security systems according to three dimensions: the degree of funding, the distribution of risks, and the degree of actuarial fairness.
PAYG programs are totally unfunded and so they lie at one extreme of the degree of funding dimension. In these programs, benefits are entirely financed by the current flow of contributions and there are no funds to back pension rights. At the other extreme lie programs in which accrued pension rights are fully backed by previous contributions. Individual savings accounts are the most common form of fully funded pension schemes. In this case, pension rights are linked to accumulated financial assets in the individual account.
In the second dimension, pension programs are usually classified as DB or DC. In a DC program, contributions are fixed and benefits are residually determined, adjusted to ensure financial sustainability. In a DB program, benefits are fixed (or, more commonly, the relation between earnings and pensions is set by a formula) and contributions are adjusted endogenously.
The third dimension refers to the link between individual contributions and benefits. A program is actuarially fair if the expected present values of benefits and contributions are equal. It is said to be "non-actuarial" if there is no link between contributions and benefits.
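As an illustration, using the notation introduced later for the present-value computations (survival probability \( p\left( a \right) \), benefits \( B\left( a \right) \), contributions \( C\left( a \right) \), discount rate \( \rho \)), actuarial fairness can be written as
$$ \sum\limits_{a} {p\left( a \right)C\left( a \right)} \left( {1 + \rho } \right)^{ - a} = \sum\limits_{a} {p\left( a \right)B\left( a \right)} \left( {1 + \rho } \right)^{ - a} $$
that is, the expected present value of contributions equals the expected present value of benefits.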
Most PAYG pension programs are DB and non-actuarial, while individual savings accounts are in principle fully funded, DC and actuarially fair. But other combinations are also possible. Non-financial defined-contribution pension programs, also known as notional accounts, are totally unfunded (i.e. PAYG), and yet they are DC and also exhibit high degrees of actuarial fairness. Many DB programs have reserves that back pension rights, particularly when programs are relatively young.
PAYG-DB programs usually have some built-in redistributive components, like minimum and maximum pensions, so they are often considered better equipped in principle to perform redistribution than more actuarial DC programs (Palmer 2006). Pure individual savings accounts are actuarially fair and hence, by construction, do not perform redistribution. In this light, if pension programs are expected to alleviate poverty and reduce inequality in old age (Barr 2001), PAYG-DB programs have an advantage over individual savings accounts. However, in the real world it is not always clear whether PAYG-DB programs are effective in alleviating poverty or reducing income inequality in old age. Also, many savings account programs are complemented with redistributive non-actuarial components, like minimum pension guarantees and matching contributions. Therefore, whether a program contributes to reducing inequality is an empirical issue.
In the US there has been an active debate over how progressive social security is in practice: Gustman and Steinmeier (2001) focus on redistribution at the individual versus the family level, while Garrett (1995), Duggan et al. (1995) and Beach and Davis (1998) focus on differential mortality rates. In developing countries, at least two additional factors may reduce the ability of social security to alleviate poverty and reduce income inequality in old age. First, social security coverage is mostly limited to the better-off (Rofman et al. 2008). Also, governments often subsidize social security and, given that coverage is very low among low-income individuals, these subsidies may be regressive. Second, low-income individuals tend to have short work histories (Forteza et al. 2009), which in most DB programs imply reduced or even no pension benefits at all (Forteza and Ourens 2012).
A brief history of Argentina's social security
In Argentina, the first pension funds appeared in the early 20th century (in 1904 for employees of the public administration and in 1905 for railway workers). Between 1916 and 1930 the system extended to other activities, covering most public employees; the financial, banking and insurance sector; journalists; the printing industry; merchant seamen and aviation workers. Despite the expansion of pension funds, overall coverage remained quite limited, and there was also a high degree of heterogeneity among sectors in terms of retirement age, contribution amounts and benefits. One common feature, however, was individual capitalization.
Between 1944 and 1954 the system was extended further, covering almost all formal workers, although there was still marked heterogeneity across sectors. This last feature changed in 1954, when the system moved from one of individual accounts to one of the PAYG type; a progressive element was also introduced, with low- and medium-wage workers receiving higher replacement ratios at retirement. Another feature was that the system, due to its relative youth, enjoyed a financial surplus, but this would change quickly.
In 1958, benefit mobility was introduced, with a guarantee of 82 % of the taxable wage that the beneficiary received before retirement. This meant a certain homogenization of benefits among the different sectors and the abolishment of the progressive component introduced with the 1954 reform. Perhaps the most important development was that, with the maturation of the system, and due to the high proportion of informal workers and high levels of evasion in the payment of contributions, the system began to experience deficits. These deficits led to a new reform in 1969, which involved the merger of the various pension funds and the introduction of centralized management; this last change in fact meant cross-sector transfers from programs in surplus to those in deficit. The reform also introduced more stringent conditions to access a benefit, increasing the minimum age and the number of years of contributions. In addition, the benefit became a function of the worker's earnings history, calculated as an average of the best 3 years of salaries during the last 10 years of work, which implied a replacement ratio between 70 and 82 %. Benefit mobility was maintained. However, all these changes brought only temporary relief from the financial imbalances.
In 1980 a new reform contributed significantly to increasing the system's deficit, with the elimination of employer contributions and their replacement with resources from the collection of the value added tax. The growth of informal employment, evasion, and the greater maturity of the system pushed the primary deficit to 60 % of total expenditures. By the eighties the system was close to collapse, which made the reintroduction of employer contributions necessary.
Another reform, one that meant a paradigm change, took place in October 1993 with the reintroduction of the system of individual accounts, which would coexist with the public system. Under this mixed system, the retirement benefit of the PAYG-DB pillar consisted of three parts: a flat payment, a benefit based on contributions made before the reform, and another based on contributions made after the reform. The latter two components were calculated based on the years of contributions and the average wage received in the last years of work before retirement. For those who chose to migrate to the system of individual accounts, or new workers who chose it, the total benefit also consisted of three parts: the same flat payment paid in the public pillar, a payment based on the contributions made to the public system, and another payment funded with the balance accumulated in the individual account. The first two components were the responsibility of the public sector. After a minimum period in one system, workers had the option to switch between systems. Even under the individual account system there were some redistributive components, through the payment of a flat benefit as well as the existence of a minimum pension. Benefits paid by both systems that were the responsibility of the public sector were financed with contributions from employers and from workers belonging to the public pillar.
In late 2008, at the peak of the global financial crisis, and under the official justification that the balances in the individual accounts were rapidly losing much of their value, a final reform took place and the individual account system was abolished. However, the general belief was that the measure was heavily influenced by the funding needs of the Federal Government. At the moment of the abolishment, the funds accumulated in the individual account system amounted to about 29 billion USD, no less than half of which consisted of public bonds issued by the Government itself. These funds were used to constitute the Fondo de Garantía de SustentabilidadFootnote 6 administered by the Administración Nacional de Seguridad SocialFootnote 7 (ANSES). Perhaps the most important issue, however, was that the Government, through ANSES, took control over approximately 4 billion USD per year in contributions made by employees who had previously chosen the individual account system.Footnote 8
The current system
At present, several retirement systems still coexist. On the one hand, there is the national system, which covers private sector workers and federal public employees, as well as public sector workers in some provinces. At the sub-national level, several provinces have their own systems covering provincial and municipal public employees; roughly half of these systems were merged with the national system during the second half of the 1990s. In addition, the professional councils that regulate professional activities (engineers, lawyers, etc.) have their own systems, organized at the provincial level. Moreover, both at the national and sub-national levels there is a wide number of special regimes covering specific activities, for instance the judiciary and university researchers. Finally, an additional component that has gained importance in recent years is the widespread granting of non-contributory pensions.Footnote 9 However, due to data availability, the analysis here concentrates only on the general regime under the administration of ANSES, which is the one with the largest coverage.Footnote 10
More specifically, the current system is regulated by Law 24241. The conditions salaried workers must meet to be entitled to a retirement benefit are the followingFootnote 11:
A minimum of 30 years of contributions
To be 65 years old for men and 60 years old for women. Women, if they choose to, can continue working until they are 65 years old.
People who do not meet the minimum years of contributions can compensate each missing year of contributions with two additional years of work, counted after reaching the minimum retirement age.
People who do not meet the previous conditions can access an old-age pension if:
They are at least 70 years old.
They have a minimum of 10 years of contributions.
They have 5 years of contributions in the 8-year period prior to retirement.
The health and social security system is funded by contributions made by workers and employers. Workers contribute 11 % of their gross salary, and employers 16 %, to the retirement pillar of the social security system. In June 2011, the maximum gross salary used to calculate both contributions was A$ 16213.72 (US$ 3925.85), while the minimum wage was A$ 498.89 (US$ 120.79). Workers also contribute 6 % for health insurance and 1 % if they choose to affiliate to a trade union. Employers contribute 8 % for health insurance.Footnote 12 Two other pillars of the system, which we do not include in the analysis below, are unemployment insuranceFootnote 13 and labor accident coverage.
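As a simple illustration of the contribution arithmetic just described, the following sketch computes the monthly retirement-pillar contributions for a given gross salary, using the rates and the June 2011 cap quoted above; the function name is illustrative and not part of the original analysis.

```python
# Illustrative sketch of the retirement-pillar contributions described above
# (rates and June 2011 cap, in Argentine pesos); names are placeholders.
EMPLOYEE_RATE = 0.11
EMPLOYER_RATE = 0.16
MAX_TAXABLE = 16213.72   # maximum gross salary used to compute contributions

def retirement_contributions(gross_salary):
    base = min(gross_salary, MAX_TAXABLE)
    return EMPLOYEE_RATE * base, EMPLOYER_RATE * base
```

For instance, a gross salary of A$ 5000 would give employee and employer contributions of A$ 550 and A$ 800, respectively.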
With respect to the benefits, the monthly payment is divided into two parts:
A flat benefit known as the universal basic pension (PBU). In June 2011, the PBU was A$ 667.92 (US$ 161.72). If the person retires under the old-age pension scheme, the PBU is 70 % of the full amount.
A compensatory payment (PC) equal to 1.5 %, for each year of contributions or fraction above 6 months (up to a maximum of 35 years), of the average real gross salary (including the worker's contributions to the social security system but excluding the employer's contributions) during the last 10 years before retirement. To calculate the average gross salary, periods in which the person was not working are excluded. Although the legal norm refers to the 10 years prior to retirement, it is customary to consider the last 120 positive remunerations before retirement. In June 2011, the maximum amount a person was entitled to receive under the PC was A$ 10507.90 (US$ 2544.28).
In June 2011, the system guaranteed a minimum pension of A$ 1434.29 (US$ 347.28). The sketch below illustrates how these eligibility rules and benefit components fit together.
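A minimal sketch of how the eligibility conditions and benefit components described above could be combined, using the June 2011 amounts. The function names, the simplified treatment of the two-for-one compensation rule and the uniform application of the minimum pension are our own illustrative assumptions, not the paper's exact implementation.

```python
# Hedged sketch of the retirement rules described above (June 2011 values, in
# Argentine pesos). Simplifications: the compensation rule is approximated by
# age alone, and the minimum pension is applied to all benefits.
PBU = 667.92             # universal basic pension
PC_RATE = 0.015          # 1.5 % per year of contributions
PC_MAX_YEARS = 35
PC_CAP = 10507.90
MIN_PENSION = 1434.29

def eligible_ordinary(age, years_contrib, female):
    min_age = 60 if female else 65
    if years_contrib >= 30:
        return age >= min_age
    # each missing year of contributions offset by two extra years of work
    missing = 30 - years_contrib
    return age >= min_age + 2 * missing

def eligible_old_age(age, years_contrib, years_in_last_8):
    return age >= 70 and years_contrib >= 10 and years_in_last_8 >= 5

def monthly_benefit(avg_gross_wage_last10, years_contrib, old_age=False):
    pbu = 0.7 * PBU if old_age else PBU
    pc = min(PC_RATE * min(years_contrib, PC_MAX_YEARS) * avg_gross_wage_last10,
             PC_CAP)
    return max(pbu + pc, MIN_PENSION)
```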
The data source is the Encuesta Permanente de Hogares (EPH) for the period 1995–2003.Footnote 14 The EPH is a household survey that, in its previous design, was carried out twice a year, usually in April/May and October. Each household, and all its members, was surveyed four consecutive times, after which it was dropped from the survey. In each wave, a quarter of the households were replaced.
The sample we work with includes only individuals who were observed all four times and who on at least one occasion declared themselves to be employed or unemployed. In this way we can better approximate the individual effects that are crucial for our simulations.
The variable that identifies contribution status to social security is available only for salaried employees. Thus, the sample does not include people who declared an employment status other than salaried employee, either when employed or in their previous job when unemployed, in any of the four occasions on which they were surveyed.
Because of potential differences in system coverage across types of workers, the public and private sectors are considered separately, as are men and women. Because of the low number of observations per individual, we cannot model, with a minimum degree of confidence, transitions between the private and public sectors, so we consider only individuals who did not change sectors when employed. We only include individuals aged between 18 and 69 years old in all four interviews. In Tables 1, 2, 3 and 4, we present some descriptive statistics. The main feature is the high incidence of the non-contributing and non-working statuses, especially in the private sector, mostly for women.
Table 1 Sample sizes
Table 2 Distribution of samples depending on having contributed at least in one out of the four possible occasions
Table 3 Sample working status (%)
Table 4 Sample contributing status (%)
Estimation of contribution status
As is clear from the sample description, there is an important percentage of cases in which the individual is working but not contributing. This behavior is more evident for those working in the private sector, especially for women. Because of this characteristic of our sample, and under the assumption that the individuals who contribute are not a random draw from the working population, we use the Heckman selection model to control for the bias that would emerge if contribution status were estimated without accounting for the probability that an individual has a job but does not contribute to social security. In particular, we estimate the following model:
$$ L_{it} = \beta^{L} X_{it} + \varepsilon_{it}^{L} $$
$$ C_{it} = \beta^{C} Y_{it} + \varepsilon_{it}^{C} $$
where \( L_{it} \) is a dummy variable equal to 1 if individual i is working and zero otherwise; \( C_{it} \) is a dummy variable equal to 1 if, conditional on working (\( L_{it} = 1 \)), individual i contributes and zero otherwise; \( X_{it} \) is a set of variables that explain the probability of individual i working; \( Y_{it} \) is a set of variables that explain the probability of individual i contributing; and t stands for a semester.
Under the assumptions of the Heckman selection model, \( \varepsilon_{it}^{L} \) and \( \varepsilon_{it}^{C} \) are correlated with each other, so that estimating Eq. (2) without taking (1) into consideration would render a biased estimate of the vector \( \beta^{C} \).
Our aim with Eqs. (1) and (2) is to project the probability of an individual working and, conditional on working, the probability of contributing to social security. Both of these probabilities surely depend on individuals' unobserved characteristics. To control for these unobserved characteristics in our simulations, we assume that the error terms in Eqs. (1) and (2) are composed of two parts:
$$ \varepsilon_{it}^{L} = \eta_{i}^{L} + u_{it}^{L} $$
$$ \varepsilon_{it}^{C} = \eta_{i}^{C} + u_{it}^{C} . $$
Equations (1) and (2) are estimated using the Heckman selection estimator, so the individual effects \( \eta_{i}^{L} \) and \( \eta_{i}^{C} \) are recovered as follows:
$$ \hat{\eta }_{i}^{L} = \frac{{\sum\nolimits_{t = 1}^{{T_{i} }} {\left( {L_{it} - \hat{\beta }^{L} X_{it} } \right)} }}{{T_{i} }} $$
$$ \hat{\eta }_{i}^{C} = \frac{{\sum\nolimits_{t = 1}^{{T_{i} }} {\left( {C_{it} - \hat{\beta }^{C} Y_{it} - \hat{\lambda} {{\text{IMR}}_{it}}} \right)} }}{{T_{i} }} $$
where \( {\text{IMR}}_{it} \) is the inverse Mills ratio, defined as \( {\text{IMR}}_{it} = \frac{{\phi \left( {\hat{\beta }^{L} X_{it} } \right)}}{{\varPhi \left( {\hat{\beta }^{L} X_{it} } \right)}} \), with ϕ and Φ standing for the normal pdf and cdf, respectively; \( \hat{\lambda }\) is the selectivity effect. Since our aim is also to recover the individual effects \( \hat{\eta }_{i}^{L} \) and \( \hat{\eta }_{i}^{C} \), the contribution equation (Eq. 2) is assumed to be linear in its arguments, while for the working status equation (Eq. 1), which is estimated with a probit specification, we work with its linear projection.
Equations (1) and (2) allow us to model, in an admittedly ad hoc way, transitions between informal and formal jobs. Unfortunately, the short time frame over which each individual is observed does not allow us to use a proper transition model. Finally, another element to keep in mind is that, since we are working with a non-behavioral model, we do not account for the role that social security might play in the choice between a formal and an informal job.
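The following sketch shows one way the two-step procedure behind Eqs. (1) and (2), and the recovery of the individual effects, could be implemented. The data frame layout, the column names ("L", "C", "person_id") and the use of statsmodels are our own illustrative assumptions, not the authors' code.

```python
# A two-step Heckman-type estimation in the spirit of the text: a probit for
# the working status, an inverse Mills ratio, a linear contribution equation
# for workers, and individual effects recovered as average residuals.
import pandas as pd
import statsmodels.api as sm
from scipy.stats import norm

def heckman_two_step(df, x_cols, y_cols, id_col="person_id"):
    X = sm.add_constant(df[x_cols])
    probit = sm.Probit(df["L"], X).fit(disp=False)     # working equation (Eq. 1)
    xb = X.dot(probit.params)                          # linear index
    df = df.assign(imr=norm.pdf(xb) / norm.cdf(xb))    # inverse Mills ratio

    workers = df[df["L"] == 1]
    Y = sm.add_constant(workers[y_cols + ["imr"]])
    ols = sm.OLS(workers["C"], Y).fit()                # contribution equation (Eq. 2)

    # individual effects: average residuals per person (linear projections)
    eta_L = (df["L"] - xb).groupby(df[id_col]).mean()
    eta_C = ols.resid.groupby(workers[id_col]).mean()
    return probit, ols, eta_L, eta_C
```

In this sketch the coefficient on the inverse Mills ratio plays the role of the selectivity effect \( \hat{\lambda} \).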
In-sample simulations
The probability of individual i working at moment t is calculated as:
$$ \hat{P}_{it}^{L} = \hat{\beta }^{L} X_{it} + \hat{\eta }_{i}^{L} . $$
Then, the simulated working status is defined as \( \hat{L}_{it} = 1\,\,{\text{if}}\,\,\hat{P}_{it}^{L} > {\text{draw}}_{it}^{L} \) and 0 otherwise, where \( {\text{draw}}_{it}^{L} \) is a realization from a uniform (0, 1) distribution for each period t.
The probability that individual i, with individual effect \( \hat{\eta }_{i}^{C} \), contributes at time t, conditional on working, is calculated as follows:
$$ \hat{P}_{it}^{C} = \hat{\beta}^{C} Y_{it} + \hat{\lambda}{\text{IMR}}_{it} + \hat{\eta }_{i}^{C} . $$
Then, conditional on \( \hat{L}_{it} = 1 \), the contribution status for individual i in time t is defined as \( \hat{C}_{it} = 1\,\,{\text{if}}\,\,\hat{P}_{it}^{C} > draw_{it}^{C} \,;\, \) and 0 otherwise, where \( {\text{draw}}_{it}^{C} \) is a realization from a uniform (0, 1) distribution for each period t.
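A minimal sketch of how the simulated working and contribution statuses could be drawn for one person-period from the quantities defined above; inputs and names are illustrative, not the authors' code.

```python
# Compare the predicted indexes (plus individual effects) with uniform draws,
# in the spirit of the rules above. Inputs are scalars for one person-period
# and would come from the estimates of Eqs. (1) and (2).
import numpy as np

rng = np.random.default_rng(123)

def simulate_status(xb_L, eta_L, yb_C, lam, imr, eta_C):
    works = (xb_L + eta_L) > rng.uniform()                     # working status
    if not works:
        return 0, 0
    contributes = (yb_C + lam * imr + eta_C) > rng.uniform()   # contribution status
    return 1, int(contributes)
```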
Out-of-sample simulations
Since in this case the individual effects \( \eta_{i}^{L} \) and \( \eta_{i}^{C} \) are not directly observed, they are generated as follows:
$$ \tilde{\eta }_{i}^{L} = \hat{\sigma }_{{\eta^{L} }} \tilde{z}_{i}^{L} $$
$$ \tilde{\eta }_{i}^{C} = \hat{\sigma }_{{\eta^{C} }} \tilde{z}_{i}^{C} $$
where \( \hat{\sigma }_{{\eta^{L} }} \) and \( \hat{\sigma }_{{\eta^{C} }} \) are the standard deviations of the individual effects \( \hat{\eta }_{i}^{L} \) and \( \hat{\eta }_{i}^{C} \), respectively, and \( \tilde{z}_{i}^{L} \) and \( \tilde{z}_{i}^{C} \) are both pseudo-random draws from a Standard Normal distribution. The probability of individual i working is then calculated as:
$$ \tilde{P}_{it}^{L} = \hat{\beta }^{L} X_{it} + \tilde{\eta }_{i}^{L} . $$
Then, the simulated working status is defined as \( \tilde{L}_{it} = 1\,\,{\text{if}}\,\,\tilde{P}_{it}^{L} > {\text{draw}}_{it}^{L} \) and 0 otherwise, where \( {\text{draw}}_{it}^{L} \) is a realization from a uniform (0, 1) distribution for each period t.
Then, the probability of contributing is calculated as:
$$ \tilde{P}_{it}^{C} = \hat{\beta }^{C} Y_{it} + \hat{\lambda}{\text{IMR}}_{it} + \tilde{\eta }_{i}^{C} $$
where t now stands for a month.
Then, conditional on \( \tilde{L}_{it} = 1 \), the contribution status for individual i in month t is defined as: \( \tilde{C}_{it} = 1\,\,{\text{if}}\,\,\tilde{P}_{it}^{C} > {\text{draw}}_{it}^{C} ; \) and 0 otherwise, where \( {\text{draw}}_{it}^{C} \) is a realization from a uniform (0, 1) distribution for each period t.
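For the out-of-sample populations, the individual effects are drawn rather than recovered; a short sketch under the same illustrative conventions as before:

```python
# Scaled standard normal draws for the "newborn" individual effects, using the
# standard deviations of the in-sample effects recovered earlier.
import numpy as np

def draw_individual_effects(eta_L_hat, eta_C_hat, n, rng):
    return (eta_L_hat.std() * rng.standard_normal(n),
            eta_C_hat.std() * rng.standard_normal(n))
```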
Projection of labor income
We estimate a maximum likelihood switching modelFootnote 15 that describes the behavior of an agent with two regression equations and a criterion function, \( I_{it} \), which determines the regime agent i faces at time t.
$$ \begin{aligned} I_{it} = 0 \, \quad {\text{if}}\; \quad \gamma Z_{it} + u_{it} \le 0 \\ I_{it} = 1 \, \quad {\text{if}}\; \quad \gamma Z_{it} + u_{it} > 0 \\ \end{aligned} $$
$$ {\text{Regime 0 }}\left( {L_{it} = 1, \, C_{it} = 0} \right) :\;\ln \,w_{it} = \beta^{0} X_{it}^{{}} + e_{it}^{0} \quad {\text{if}}\;I_{it} = 0 $$
$$ {\text{Regime 1 }}\left( {L_{it} = 1, \, C_{it} = 1} \right) :\;\ln \,w_{it} = \beta^{1} X_{it}^{{}} + e_{it}^{1} \quad {\text{if}}\;I_{it} = 1. $$
Given that our main goal is to project income, we are particularly interested in exploring the impact on wages of time-invariant and deterministic covariates, like age and education. In the equations above, \( w_{it} \) is the real wageFootnote 16 received by person i at time t (a semester), which enters the equations in logs; \( X_{it} \) is a set of regressors including personal characteristics (age and education) and the unemployment rate. Since we expect \( w_{it} \) to be stationary, we do not introduce any deterministic time trend in the equation. \( Z_{it} \) includes the same variables as \( X_{it} \) plus a dummy variable, which works as the exclusion restriction, equal to one if individual i is 65 years or older for men, and 60 years or older for women. The error terms \( \left( {u_{it} ; \, e_{it}^{0} ;{\text{ and }}e_{it}^{1} } \right) \) are assumed to have a trivariate normal distribution.
As with the working and contributing equations, to improve the fit of our simulations we assume that wages are also a function of individual unobserved characteristics, which are time invariant and constant across regimes. As explained before, each individual in the sample is observed at most four times, in each of which he/she can be in regime 0 (working but not contributing) or regime 1 (working and contributing). Thus, once Eqs. (13) and (14) are estimated, the individual effect \( \nu_{i} \) is recovered as follows:
$$ \hat{\nu }_{i} = \frac{{\sum\nolimits_{t = 1}^{{T_{i} }} {\left( {w_{it} - E\left( {\hat{w}_{it} |I_{it} = 0,X_{it} } \right)} \right)} + \sum\nolimits_{t = 1}^{{T_{i} }} {\left( {w_{it} - E\left( {\hat{w}_{it} |I_{it} = 1,X_{it} } \right)} \right)} }}{{T_{i} }}. $$
Conditional expectations in (13) and (14) are, respectively:
$$ E\left( {\hat{w}_{it} |I_{it} = 0,X_{it} } \right) = X_{it} \hat{\beta }^{0} - \hat{\sigma }_{0} \hat{\rho }_{0} \frac{{\phi \left( {\hat{\gamma }Z_{it} } \right)}}{{1 - \varPhi \left( {\hat{\gamma }Z_{it} } \right)}} $$
$$ E\left( {\hat{w}_{it} |I_{it} = 1,X_{it} } \right) = X_{it} \hat{\beta }^{1} + \hat{\sigma }_{1} \hat{\rho }_{1} \frac{{\phi \left( {\hat{\gamma }Z_{it} } \right)}}{{\varPhi \left( {\hat{\gamma }Z_{it} } \right)}} $$
where \( \hat{\sigma }_{0} \) and \( \hat{\sigma }_{1} \) are the estimated standard deviations of the errors \( e_{it}^{0} \) and \( e_{it}^{1} \) respectively; while \( \hat{\rho }_{0} \) and \( \hat{\rho }_{1} \) are the estimated correlation coefficients between u it and \( e_{it}^{0} \) and \( e_{it}^{1} \), respectively. ϕ(…) and Φ(…) are the normal pdf and cdf, respectively.
Predictions according to Eqs. (13) and (14) can only be computed for the individuals in the sample, i.e. individuals for whom we can compute the individual effects. But the model is used to predict the labor income flow of "newborn" individuals. In this case, we simulate the individual effectsFootnote 17:
$$ \tilde{\nu }_{i}^{{}} = \hat{\sigma }_{\nu } \tilde{z}_{i}^{{}} $$
where \( \hat{\sigma }_{\nu } \) is the standard deviation of the individual effect \( \hat{v}_{i}^{{}} \). \( \tilde{z}_{i}^{{}} \) is a pseudo-random draw from a Standard Normal distribution. Thus, the labor income stream of the newborn individuals is computed as follows:
$$ \ln \tilde{w}_{it} = X_{it} \hat{\beta }^{0} + \tilde{\nu }_{i} - \hat{\sigma }_{0} \hat{\rho }_{0} \frac{{\phi \left( {\hat{\gamma }Z_{it} } \right)}}{{1 - \varPhi \left( {\hat{\gamma }Z_{it} } \right)}}\quad {\text{if}}\;\tilde{L}_{it} = 1\;{\text{and}}\;\tilde{C}_{it} = 0 $$
$$ \ln \tilde{w}_{it} = X_{it} \hat{\beta }^{1} + \tilde{\nu }_{i} + \hat{\sigma }_{1} \hat{\rho }_{1} \frac{{\phi \left( {\hat{\gamma }Z_{it} } \right)}}{{\varPhi \left( {\hat{\gamma }Z_{it} } \right)}}\quad {\text{if}}\;\tilde{L}_{it} = 1\;{\text{and}}\;\tilde{C}_{it} = 1. $$
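A sketch of the simulated wage for a newborn individual in a given period, combining the regime-specific predictions of Eqs. (13) and (14) with a simulated individual effect and the corresponding Mills-ratio correction. Parameter names stand in for the estimates reported in Table 7 and are purely illustrative.

```python
# Simulated log wage under the two regimes described above; regime 1 is a
# formal (contributing) job, regime 0 an informal one.
import numpy as np
from scipy.stats import norm

def simulate_log_wage(x, z, contributes, nu_i,
                      beta0, beta1, gamma, sigma0, rho0, sigma1, rho1):
    gz = float(np.dot(z, gamma))
    if contributes:   # regime 1
        return float(np.dot(x, beta1)) + nu_i \
               + sigma1 * rho1 * norm.pdf(gz) / norm.cdf(gz)
    # regime 0
    return float(np.dot(x, beta0)) + nu_i \
           - sigma0 * rho0 * norm.pdf(gz) / (1.0 - norm.cdf(gz))
```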
Computation of SS contributions and benefits
Based on the simulated work and income histories, we compute social contributions and benefits according to the existing laws as described in Sect. 3. We assume that individuals leave no survivors and suffer no sickness or disability. We also assume that all individuals claim their retirement benefits as soon as they are eligible to do so.
Computation of pre- and post-social-security lifetime labor income, and distribution indexes
The expected pre-SS lifetime labor income is the present value of the expected simulated labor income:
$$ \bar{W}\left( r \right) = \sum\limits_{a = 0}^{a = r - 1} {p\left( a \right)W\left( a \right)} \left( {1 + \rho } \right)^{ - a} $$
where \( r \) is the age at retirement; \( p\left( a \right) \) is the probability of the worker surviving to age \( a \); \( W\left( a \right) \) is the total labor cost (including employee and employer contributions) at age \( a \); and \( \rho \) is the discount rate (we use a 3 % rate).
We compute the lifetime SSW as an indicator of SS transfers. SSW is the present value of expected net transfers from SS. It can be obtained as the sum of the discounted expected flows of old-age pensions \( \left( {\text{PB}} \right) \) net of contributions \( \left( {\text{SSC}} \right) \).
$$ {\text{SSW}} = {\text{PB}} - {\text{SSC}} $$
$$ {\text{PB}} = \sum\limits_{a = r}^{{a = \hbox{max} \,\,{\text{age}}}} {p\left( a \right)B\left( {a,r} \right)} \left( {1 + \rho } \right)^{ - a} $$
$$ {\text{SSC}} = \sum\limits_{a = 0}^{a = r - 1} {p\left( a \right)C\left( a \right)} \left( {1 + \rho } \right)^{ - a} $$
where max age is the maximum potential age; \( B\left( {a,r} \right) \) is the amount of retirement benefits at age a conditional on retirement at age r; and \( C\left( a \right) \) is the amount of contributions (both by the employee and the employer) to the SS at age a, excluding health insurance contributions.
Finally, the expected post-SS lifetime labor income is defined as \( \bar{W}\left( r \right) + {\text{SSW}}. \)
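The present-value computations above can be summarized in a few lines. The sketch below assumes age-indexed arrays of survival probabilities, labor costs, contributions and benefits, with the 3 % discount rate used in the benchmark; the array layout is an illustrative assumption.

```python
# Expected pre-SS lifetime labor income, SSC, PB and SSW for one simulated
# individual retiring at (model) age r; arrays are indexed by age 0..max_age.
import numpy as np

def lifetime_values(p, W, C, B, r, max_age=100, rho=0.03):
    p, W, C, B = (np.asarray(v, dtype=float) for v in (p, W, C, B))
    ages = np.arange(max_age + 1)
    disc = (1.0 + rho) ** (-ages)
    work = ages < r
    retired = ages >= r
    pre_ss = np.sum(p[work] * W[work] * disc[work])       # lifetime labor income
    ssc = np.sum(p[work] * C[work] * disc[work])          # discounted contributions
    pb = np.sum(p[retired] * B[retired] * disc[retired])  # discounted benefits
    ssw = pb - ssc
    return pre_ss, ssw, pre_ss + ssw                      # last item: post-SS income
```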
As pointed out in Sect. 3, even when it is possible a priori to characterize the distributional effects of different SS arrangements, in the end this is mostly an empirical matter. In our case, to assess the redistributive impact of social security we use some descriptive statistics of pre-SS lifetime labor income, SSW, and the ratio of SSW to pre-SS labor income. We also calculate two additional indexes: the Gini coefficient (for pre- and post-SS lifetime labor incomes) and the Reynolds–Smolensky-type index of net redistributive effect (Lambert 1993, p 256). This index assesses the redistributive impact of a program by computing the area between the Lorenz curve of pre-program income and the concentration curve of post-program income. A positive (negative) value indicates that the program reduces (increases) inequality.Footnote 18
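The two summary indexes can be computed with standard formulas. The sketch below uses a common textbook form of the Gini and concentration coefficients and defines the Reynolds–Smolensky-type index as the pre-SS Gini minus the concentration coefficient of post-SS income ranked by pre-SS income, so that positive values indicate reduced inequality; this is our illustrative rendering, not the authors' code.

```python
# Gini coefficient, concentration coefficient and a Reynolds-Smolensky-type
# index of net redistributive effect (positive = inequality reducing).
import numpy as np

def gini(x):
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    ranks = np.arange(1, n + 1)
    return 2.0 * np.sum(ranks * x) / (n * x.sum()) - (n + 1.0) / n

def concentration(y, rank_by):
    y = np.asarray(y, dtype=float)[np.argsort(rank_by)]   # rank by pre-SS income
    n = y.size
    ranks = np.arange(1, n + 1)
    return 2.0 * np.sum(ranks * y) / (n * y.sum()) - (n + 1.0) / n

def reynolds_smolensky(pre_ss, post_ss):
    return gini(pre_ss) - concentration(post_ss, rank_by=pre_ss)
```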
For each population group we work with a simulated population of 10,000 individuals, starting at age 18. Each individual potentially works until age 69 (inclusive) if he/she does not retire earlier. The maximum age an individual can reach is 100. In Eqs. (1, 2) and (13, 14), two dummies are included to control for the level of education (see Tables 5, 7 for a definition of these variables). These dummies are assigned following the proportions in the samples used to estimate Eqs. (1, 2). Even though some education levels are completed at an age older than 18, we assume that the corresponding proportion of the population has that level of education from the beginning of the simulated period. In the selection equation we also include a dummy variable equal to one if the individual is male/female and 65/60 years or older.Footnote 19
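For concreteness, a minimal sketch of how one such simulated population could be set up; the education shares are placeholders, not the sample proportions reported in the tables.

```python
# Illustrative setup of one simulated population: 10,000 individuals entering
# at age 18, working at most until age 69 and living at most to age 100, with
# education dummies assigned by (hypothetical) sample shares.
import numpy as np

N, AGE_ENTRY, AGE_LAST_WORK, AGE_MAX = 10_000, 18, 69, 100
EDU_SHARES = {"low": 0.5, "mid": 0.3, "high": 0.2}   # placeholder proportions

rng = np.random.default_rng(7)
education = rng.choice(list(EDU_SHARES), size=N, p=list(EDU_SHARES.values()))
```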
Table 5 Results of Eqs. (1) and (2)
Table 6 In sample simulations: right predictions
Table 7 Results of Eqs. (13) and (14)
Table 5 reports the results for the working and contribution status equations. In results not reported here, we found that for women in the private sector the IMR was not statistically significant, and that the selection model generated simulated contribution densities that were too low compared with the observed ones. Thus, for women in the private sector we estimate Eqs. (1) and (2) without assuming that the two error terms are correlated.
For most of the variables we obtain the expected signs. In the case of the age effect, the interpretation is more difficult since this variable enters the regression through a cubic polynomial; a better picture is given by Fig. 1, which shows the observed and in-sample simulated densities. The goodness of fit is quite high when measured by the proportion of correct predictions for the in-sample simulations (see Table 6). In Fig. 2 we compare observed contribution densities with out-of-sample simulations; here again the goodness of fit appears to be quite high.
Fig. 1 Observed and in-sample simulated contribution densities by age. a Share of overall sample. b Share of sample with a working status. Source: own calculations
Fig. 2 Observed and out-of-sample simulated contribution densities by age. a Share of overall sample. b Share of sample with a working status. Note: the unemployment rates used for the simulated densities are 15.3 for men and 17.4 for women; these are the average rates for the period covered by the samples used to estimate Eqs. (1) and (2). Source: own calculations
With regard to the income equation, the results are reported in Table 7. As expected, the education dummies are positive, increasing in the level of education, and always statistically significant. The age coefficients are also mostly significant.
Tables 8, 9 and 10 show some statistics of the simulated populations regarding contribution histories and access to a retirement benefit. We assume that each individual retires as soon as he/she meets the required conditions. Thus, it comes as no surprise that the average retirement age is close to the minimum required age, especially in the case of men (see Table 8). Table 9 shows that the proportion of the simulated populations that access a retirement benefit, excluding those that never contributed, is higher for public sector workers. Also, a higher proportion of men than women access a benefit, independently of the sector they work in, but this difference is much more important in the private sector. This is not surprising, since for women in the private sector our sample shows only 27.7 % of cases with a declared contribution status (47.3 % when the reference group is those who declare a working status), while for men the percentages are 58.9 % (71.7 %). Finally, in Table 10 we report the average years of contributions of the simulated populations. The average length of contributions is longer in the public than in the private sector (considering all individuals, regardless of whether they access a retirement benefit). This outcome surely reflects the higher labor stability of public sector workers relative to private sector ones. Because men need to contribute at least until they are 65 years old, while for women the minimum age is 60, men contribute more years than women. When we restrict the analysis to individuals who access a pension benefit, the years of contributions are in all cases above the minimum requirement.
Table 8 Average retirement age of simulated populations
Table 9 Proportion of simulated populations that access to a retirement benefit
Table 10 Average number of years of contribution of simulated populations
With regard to the redistributive effects of the social security system, we first present the results for a benchmark case based on the following assumptions, which are standard in the literature:
A discount rate of 3 %.
Pre- and post-SS lifetime labor incomes are calculated considering only labor income subject to contributions (\( \tilde{L}_{it} = 1 \) and \( \tilde{C}_{it} = 1 \)).
After discussing the results of the baseline scenario we run different sensitivity analyses:
Two alternative discount rates are considered: 1 and 2 %.
Pre- and post-SS lifetime labor incomes are calculated excluding employer and employee contributions.
Pre- and post-SS lifetime labor incomes are calculated not only including labor income subject to contributions, but also labor income from informal jobs over which no contributions are made (\( \tilde{L}_{it} = 1 \) and \( \tilde{C}_{it} = 0 \)).
A scenario that considers a weak enforcement of the eligibility conditions.
A counterfactual exercise in which we assume that all labor is formal, so every time an individual works we assume he/she contributes.
In Table 11, we present some descriptive statistics of the simulated populations for pre-SS lifetime labor income, SSW, and the ratio of SSW to pre-SS lifetime labor income. Average expected pre-SS lifetime labor income ranges from 111.2 thousand for women in the private sector to 263.4 thousand for men in the public sector. For men, the difference between the public and private sectors is much less important than for women: 20 % for men against 70 % for women. Men, on average, have a higher pre-SS lifetime labor income than women, especially in the private sector, where the average value is 97 % higher than that of women, while in the public sector the difference is 39 %. This important difference against women in the private sector reflects their much lower probability of working.
Table 11 Pre-social security lifetime labor income and social security wealth (in thousands of June 2011 US dollars)
The simulated populations show a large degree of income dispersion, as measured by the ratio between the average incomes of the 99th and 1st percentiles. These differences are more important in the private sector, and larger for women than for men. As expected, the distributions are skewed to the right, with the median pre-SS lifetime labor income consistently lower than the mean.
It comes as no surprise that the average SSW is never positive, since an important part of contributions, those made by the employer, has no effect on the amount of the pension benefit, while the PBU, which is not related to contributions, is in most cases the smallest part of the total retirement benefit.
Average SSW ranges from −40.8 thousand (men in the public sector) to −16.6 thousand (women in the private sector). SSW is considerably more negative for men than for women, with a 2.1-to-1 ratio in the public sector and 2.2-to-1 in the private sector. The differences between the public and private sectors are less important, both for men (10 %) and for women (20 %). Measured by the difference between percentiles 1 and 99 within each category, SSW shows higher dispersion among men than among women. On average, the ratio of SSW to pre-SS lifetime labor income ranges from −19 % among women in the private sector to −12.1 % among women in the public sector. Ranked by this ratio, there is an important dispersion: for percentile 1 the ratio is about −21.5 %, while for percentile 99 it ranges between −14 and −7.1 %.
The results just summarized show that social security redistributes wealth in the case of Argentina. We now look at the direction of this redistribution. Figure 3 shows the relationship between pre-SS lifetime labor income and SSW. The negative slope suggests that the redistribution is progressive: the greater the pre-SS labor income, the lower the SSW. However, there is a noticeable degree of dispersion, which reflects some redistribution that does not reduce inequality. Similar results were found for Brazil (Zylberstajn 2011) and Uruguay (Forteza and Mussio 2012).Footnote 20 Liebman (2001) points to the same issue for the United States. Also, there appear to be different sub-groups within each of the four population groups.
Fig. 3 Social security wealth and lifetime labor income. Source: own calculations
Table 12 reports the Gini coefficients for pre- and post-SS lifetime labor incomes. The results show that the system is regressive for men in the private sector and women in the public sector (in both cases the Gini increases by 1.5 %, approximately 0.6 ppt.), while, not surprisingly, the regressiveness is considerably larger for women in the private sector (the Gini increases by 2.9 %, 1.7 ppt.). For men in the public sector the system is slightly progressive (the Gini falls by 0.2 %, 0.05 ppt.).
Table 12 Gini coefficients of lifetime labor income before and after social security
The same pattern emerges when looking at the Reynolds–Smolensky-type index (see Table 13). The index is negative for the first three groups, especially for women in the private sector, while it is positive for men in the public sector.
Table 13 Reynolds–Smolensky index of effective progression
The failure of the current Argentinean PAYG-DB social security program to reduce intra-generational inequality represents a puzzle. The vesting period condition might help explain it. As Forteza et al. (2009) show, large segments of the population have a low probability of having contributed 30 or more years by the time they reach retirement age, and this probability is particularly low among low-income individuals. Forteza and Ourens (2012) show that the implicit rate of return on contributions paid to these programs is very low when individuals have short contribution histories. Hence, low-income individuals might be getting a bad deal from social security because they have short contribution histories. Figure 4 shows the kernel densities of the average labor cost per year of contribution, distinguishing between people who contributed to the system but do not get a retirement benefit and those who do. It is very clear from the simulated data that low-wage earners have a much lower chance of fulfilling the conditions the system requires to obtain a pension at retirement age.Footnote 21
Fig. 4 Density distributions of average labor cost per year of contribution (includes employee and employer contributions). Source: own calculations
The use of a 3 % discount rate is standard practice in the literature. In Table 14 we also report the redistributive effects of using two alternative discount rates: 1 and 2 %. A lower discount rate gives more weight to income received during the retirement years relative to income received in the earlier years of working life. Given that a high proportion of individuals, who are also those with lower incomes during their working life, do not fulfill the conditions for a retirement benefit, it is not surprising that the system becomes more regressive under lower discount rates for the three groups for which it is already regressive at a 3 % rate (men and women in the private sector, and women in the public sector), while progressiveness increases for the group for which the system is already progressive at 3 % (men in the public sector).
Table 14 Redistributive effects under different discount rates
Alternative measures of pre- and post-SS lifetime labor incomes
The percentage of total labor cost represented by contributions made by workers and employers is substantial: 27 % of the taxable wage, and 30 % of the net wage received by employees. This is probably an important reason for our result that SSW is negative in all cases. Including employee and employer contributions in the formulas for pre- and post-SS lifetime labor incomes implicitly assumes that the burden of contributions falls on workers. This strategy is probably the most common in the literature; however, the distribution of the burden is clearly an empirical issue (Saez et al. 2012).
To look into this issue we run the simulations using alternative definitions of pre- and post-SS lifetime labor incomes, which exclude employer and employee contributions. In this way, pension benefits are treated like any other social program and contributions are ignored in the calculation. We consider no behavioral responses, in the sense that we assume that wages are not affected regardless of how pre- and post-SS lifetime labor incomes are defined.
The results of excluding contributions from pre- and post-SS lifetime labor incomes are reported in Table 15. As can be seen, the results are not homogeneous across the four population groups: SS becomes more regressive for women in the private sector and more progressive for men in the public sector, while for men in the private sector and women in the public sector there is a reduction in the regressivity of SS.
Table 15 Redistributive effects under alternative definitions of pre- and post-SS lifetime labor incomes
The role of informal labor income
An important feature of the labor market in Argentina is the high incidence of informal jobs, for which no contributions are made. Including in pre- and post-SS lifetime labor incomes the earnings from jobs for which there were no contributions (informal income) leads to an important reduction in the Gini coefficients of pre- and post-SS lifetime labor incomes. This result derives from the fact that those for whom informal income is an important part of pre-SS lifetime labor earnings are mostly low-wage earners, so the inclusion of this type of income increases the share of the low end of the income distribution.
The inclusion of informal labor income also changes the redistributive nature of the system significantly. As Table 16 shows, SS is still progressive for men in the public sector, and it now also becomes progressive for men and women in the private sector, while the negative effect observed for women in the public sector is now just one-third of the value obtained when only formal jobs were taken into account. The same results emerge when using the Reynolds–Smolensky-type index: SS is progressive for all groups except women in the public sector. Not surprisingly, the main improvement from including informal income takes place in the private sector, especially for women, where the incidence of informal labor is most important.
Table 16 Redistributive effects including informal jobs income
An interesting result from the inclusion of informal income arises when we use the alternative definitions of pre- and post-SS lifetime labor incomes, which exclude employee and employer contributions. Now, with the sole exception of men in the public sector, the SS system is always regressive and, moreover, its regressivity is stronger than when informal income is excluded. This result can be explained by the fact that, when pre- and post-SS lifetime labor incomes exclude employee and employer contributions, social security works like an untied transfer program improving the position of those who benefit from it,Footnote 22 who, as we saw earlier (see Fig. 4), are the ones with higher wages, since low-wage workers are less likely to fulfill the conditions to access a retirement benefit.
A weak enforcement of the law
A de facto progressive component, perhaps one of the most important, is the weak enforcement of the law, in particular with regard to whether a person fulfills the minimum requirements to access a retirement benefit. To account for the de facto application of the law, we run an alternative scenario with weak enforcement of the conditions to access a benefit: we assume that everyone who has worked but does not have access to a retirement benefit upon reaching 70 years of age is granted the PBU. As reported in Table 17, and not surprisingly, a scenario with weak enforcement of the law substantially reduces the regressiveness of the system. The improvement is larger in the private than in the public sector, and for women than for men. These results are mainly driven by the lower probability that people in the private sector, and particularly women, have of fulfilling the conditions to access a retirement benefit. Once again, and unsurprisingly, the clearest case is that of women in the private sector, who, as shown before, have a much lower probability of obtaining a retirement benefit if the law is strictly enforced.
Table 17 Redistributive effects under a weak law enforcement scenario
Table 18 Scenario with no informal jobs
A counterfactual with no informal jobs
Finally, we run a scenario under the assumption that there are no informal jobs, so that every time an individual is working we assume he/she contributes to SS. In this case, we use the results of the labor status equation (Eq. 1) to calculate the working histories. Then we estimate a new single-equation random effects model to generate the income histories.Footnote 23 Working this way has the drawback that, for those individuals in the sample who hold an informal job, we use their observed wage instead of the wage they would have received had they held a formal job. This biases downward the individual effect for these individuals, and hence also their simulated income histories.
As Table 18A shows, there is an important increase in the share of the population that would have access to a retirement benefit (see Table 9 for a comparison with the baseline scenario). With regard to the distributive impact of SS, the system is now almost neutral, showing slight progressiveness (see Table 18B) for all groups except women in the private sector. However, even in this last case the Gini coefficient increases by only 0.4 %, just a seventh of the increase obtained in the baseline scenario. This result makes very clear the importance of reducing the incidence of informal labor.
Finally, an issue that cannot be ignored, as it bears on all the results presented above, is the effect of estimating our equations on a data set that predates the year for which the simulations are run and, perhaps even more important, a period when a different legal framework was in place.Footnote 24 Footnote 25 From a purely practical perspective, and as pointed out before, one of the reasons for not working with a more recent period is that, starting in the second half of 2003, the EPH underwent an important methodological change that prevents us from extending the period of analysis. Also, because of the timing of the household interviews, the new EPH is less suitable for the purposes of the present study.Footnote 26 A second, and by no means less important, reason is the growing suspicion about the quality and truthfulness of official statistics, which was originally limited to consumer prices and later extended to statistics on poverty, employment, and finally also to GDP figures.
The implicit assumption in using our data to estimate the working and contribution statuses is that changes in the legal framework governing the retirement benefits of the social security system had no effect on labor market and contribution behavior, or at least not one important enough to change the results substantially. To get an idea of how strong this assumption of no behavioral response is, in an exercise not reported here for reasons of space,Footnote 27 we simulated the working and contribution densities for the period 2009–2011,Footnote 28 when the new retirement regime was in place, using the estimates reported in Table 5. For the four population groups we were able to replicate the age patterns of working and contribution densities observed during 2009–2011. Additionally, for men in both the private and public sectors, the magnitudes of the densities are quite close. For women, instead, the simulated contribution densities, especially in the public sector, are lower than the observed ones. Thus, considering that one of the main reasons for our results is that an important proportion of individuals, especially women, fail to comply with the requirement on the minimum number of years of contributions, the regressiveness of social security reported previously should be interpreted as a worst-case scenario. Having said that, and taking into account the robustness of the results to the different sensitivity analyses, especially the one that allows for a weak enforcement of the law, and even more so the counterfactual with no informal labor, the main message, namely that in its current state the retirement benefit pillar of social security in Argentina is not working as an effective and efficient tool to pursue a more progressive intra-generational distribution of income, remains the most likely conclusion.
Argentina's social security system, based on a PAYG-DB scheme, appears to be regressive, especially for women working in the private sector. This result is robust to using alternative discount rates, and to different definitions of pre- and post-SS lifetime labor incomes.
The main finding that the system appears to be regressive constitutes, a priori, a puzzle, which might find an explanation in the lower probability that low-income earners have of accessing a retirement benefit, as reported in Forteza et al. (2009). This effect is much more important in the case of the private sector, especially for women.
As pointed out in Sect. 1, our results are at odds with the idea that PAYG-DB systems are mostly progressive, while systems based on individual accounts are mostly neutral. One possible explanation for the results we obtain here is that, for the system to be financially sustainable, there is a need for strict eligibility conditions as well as low replacement rates. Thus, the fact that low-wage earners, especially women in the private sector, have a high probability of working in the informal sector plays a crucial role, since they cannot fulfill the conditions for accessing a retirement benefit, losing all their contributions.
To grasp the role of eligibility conditions, we run some alternative simulations. First, we find that the system becomes slightly progressive when inequality measures are calculated on the basis of incomes that also include those derived from jobs for which people do not make contributions.Footnote 29 This result is explained by the fact that, according to our simulations, it is low-earning individuals, who show lower probabilities of being entitled to a retirement benefit, who derive most of their labor income from jobs for which they do not make contributions. This also means that low-earning workers have low incentives to look for jobs in the formal sector, with the negative externalities that this kind of behavior brings during the working life, such as lack of health service coverage. Second, when we assume a weak enforcement of the social security law, the PAYG-DB system becomes less regressive. These changes are larger for women than for men, and in the private than in the public sector. Both cases could be explained, once again, by the fact that women and those working in the private sector have a lower probability of fulfilling the conditions to have access to a retirement benefit. Finally, assuming the removal of the informal labor market, the system becomes almost neutral, even showing a small level of progressivity.
Results from Forteza and Mussio (2012), Fajnzylber (2011) and Zylberstajn (2011) show that, from an intra-generational perspective, SS in Uruguay, Chile and Brazil induces a more progressive redistribution of lifetime income. Moncarz (2011) reports that for Mexico the pension system is almost neutral from a distributional point of view.
This pension is financed with 30 % of the revenues from a direct tax on hydrocarbons, and also with the dividends from privatized public enterprises.
The abolishment of the individual account part of the pension system took place in a record time of just over a month.
A significant reason behind the losses suffered by the funds accumulated in the individual accounts during 2008 was the sharp reduction in the value of the public bonds issued by the Federal Government, which constituted by far the most important component of the investment portfolio. This overexposure to government bonds was often the result of a government imposition on fund administrators.
The impact of different mortality rates and different coverage on implicit redistribution is not assessed.
Sustainability Guarantee Fund.
National Social Security Administration.
With the abolishment of the individual account system, the ANSES has become one of the most important sources of financing to the public sector, only behind the Central Bank.
Lustig and Pessino (2012) show that non-contributory pensions as a share of GDP rose by 2.2 percentage points between 2003 and 2009, while Argentina's total social spending as a share of GDP increased by 7.6 percentage points. The authors show that the increase in the weight of non-contributory pensions entailed a redistribution of income to the poor, and from formal sector pensioners with above-minimum pensions to the beneficiaries of the pension moratorium launched in 2004. At the time of writing this paper, a new moratorium had just been launched, expected to reach about half a million new pensioners.
This regime represents, approximately, between 75 and 80 % of all beneficiaries, including survivor benefits.
We exclude from the analysis people working under any regime other than salaried work, such as the self-employed.
Employees' contributions to health insurance are 3 % for their own coverage and another 3 % to finance health insurance for those already retired. Employers' contributions are also divided, but in this case, 6 % is for the employee health insurance, while the remaining 2 % is for those already retired.
Unemployment insurance only covers private sector workers. However, the degree of coverage, both in terms of the number of beneficiaries and in monetary terms, is very low.
From the second half of 2003 the EPH was subject to an important methodological change that prevents us from extending the period of analysis; also, because of the timing of when households are surveyed, the new EPH is less suitable for the purposes of the present study. Additionally, a second but not less important reason for not using the new EPH, especially for the most recent years, is the growing suspicion about the quality and truthfulness of the official statistics produced by the Instituto Nacional de Estadísticas y Censos. Starting with consumer price statistics in 2007, official statistics have since been subject to increasing scrutiny, with growing accusations of tampering with the data. In 2009, in the middle of the world economic crisis, suspicions also fell on poverty figures, and in the most recent years, especially since 2012, also on employment rates and GDP figures.
We use Stata command movestay (Lokshin and Sajaia 2004).
Wages are deflated using the Wage Index of Manufactures.
The implicit assumption here is that the distribution of the individual effects does not vary with age or cohort.
The Gini coefficients and the Reynolds–Smolensky index were estimated using DASP (Araar and Duclos 2009).
Even though the household survey has a wide range of additional variables, both at the individual and the household levels, we are restricted to using deterministic variables that can be predicted over the life of each individual.
For Chile, Fajnzylber (2011) finds that the introduction in 2008 of a non-contributory component into the otherwise actuarially fair individual account scheme meant a progressive redistribution.
A very parsimonious linear probability model such as \( R_{i} = \delta_{1} \tilde{\eta }_{i}^{C} + \delta_{2} \ln \left( {\bar{w}_{i} } \right) + u_{i} \), where \( R_{i} = 1 \) if the person gets a retirement benefit, and zero otherwise, \( \tilde{\eta }_{i}^{C} \) is the simulated individual fixed effect obtained from Eq. (1), and \( \bar{w}_{i} \) is the average wage (including employer and employee contributions) per year of contribution, explains a large proportion of the probability of getting a pension. A 1 % increase in the average wage increases the probability of getting a pension by between 0.07 and 0.30 % depending on the type of worker and the sector; if we exclude women in the private sector, the effect ranges between 0.20 and 0.30 %.
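A minimal, purely illustrative sketch of how a linear probability model of this form could be estimated. The data below are simulated placeholders rather than the paper's microdata, and the variable names are hypothetical; the point is only to show the mechanics (no constant, robust standard errors):

```python
import numpy as np
import statsmodels.api as sm

# Simulated placeholder data (NOT the paper's microdata)
rng = np.random.default_rng(0)
eta_hat = rng.normal(size=500)                               # simulated individual fixed effects
avg_wage = np.exp(rng.normal(loc=2.0, scale=0.5, size=500))  # average wage per contribution year
R = (0.2 * eta_hat + 0.3 * np.log(avg_wage)
     + rng.normal(size=500) > 0.8).astype(float)             # 1 if a retirement benefit is received

X = np.column_stack([eta_hat, np.log(avg_wage)])             # no constant, as in the footnote
lpm = sm.OLS(R, X).fit(cov_type="HC1")                       # robust SEs: LPM errors are heteroskedastic
print(lpm.params)                                            # estimates of delta_1 and delta_2
```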
Let us remember that we assume no behavioral response, so wages are not affected regardless of how SS is financed.
The results for these estimates are available upon request.
I thank the two anonymous referees for bringing this point to my attention.
As described in Sect. 3.1, between 1994 and 2003 a PAYG-DB system coexisted with an IA-DC system.
In the old EPH, each individual in a household was surveyed four consecutive times. Under the new EPH, each individual is also surveyed four times, but instead of these being consecutive with an equal lapse of time between surveys, each person is included in the sample during two consecutive quarters, then dropped for the next two quarters, and finally included again for two additional quarters. The different lapses of time between surveys, 3 months between the first and the second and between the third and the fourth, and 6 months between the second and the third, introduce an additional difficulty in the estimations.
These are available from the author upon request.
From 2012, official employment figures are regarded as highly suspicious.
This result is reversed when pre- and post-SS lifetime incomes are defined such that we exclude contributions of employees and employers.
Araar A, Duclos J-Y (2009) DASP: distributive analysis Stata package. Université Laval, PEP, World Bank, UNDP
Barr N (2001) The welfare state as piggy bank: information, risk, uncertainty, and the role of the state. Oxford University Press, New York
Basualdo E, Arceo N, González M, Mendizabal N (2009) La evolución del Sistema previsional argentino. Documento de Trabajo No. 2. Centro de Investigación y Formación de la República Argentina
Beach W, Davis G (1998) Social security's rate of return. CDA 98-01. Heritage Foundation
Berstein S, Larraín G, Pino F (2006) Chilean pension reform: coverage facts and policy alternatives. Economía 6:227–279
Bertranou F, Cetrángolo O, Grushka C, Casanova L (2011) Encrucijadas en la seguridad social Argentina: reformas, cobertura y desafíos para el sistema de reparto. CEPAL and Oficina Internacional del Trabajo, Buenos Aires
Diamond P (2005) Taxation, incomplete markets, and social security. The MIT Press, Massachusetts
Diamond P (2006) Conceptualization of non-financial defined contribution systems. In: Holzmann R, Palmer E (eds) Pension reform. Issues and prospects for non-financial defined contribution (NDC) schemes. The World Bank, Washington
Duggan J, Gillingham R, Greenlees J (1995) Progressive returns to social security? An answer from social security records. Research Paper 9501. US Department of the Treasury
Fajnzylber E (2011) Implicit redistribution in the Chilean Social Insurance System. Working Paper, Universidad Adolfo Ibáñez, Chile
Forteza A (2014) Assessing redistribution within social insurance systems. The cases of Argentina, Brazil, Chile, Mexico and Uruguay. In: Frölich M, Kaplan D, Pagés C, Rigolini J, Robalino DA (eds) Social security, informality and labor markets. How to protect workers while creating good jobs. Oxford University Press, UK
Forteza A, Mussio I (2012) Assessing redistribution in the Uruguayan social security system. J Income Distrib 21:65–87
Forteza A, Ourens G (2012) Redistribution, insurance and incentives to work in Latin-American pension programs. J Pension Econ Financ 11:337–364
Forteza A, Apella I, Fajnzylber E, Grushka C, Rossi I, Sanroman G (2009) Work histories and pension entitlements in Argentina, Chile and Uruguay. Social Protection Discussion Papers 0926. The World Bank
Garrett D (1995) The effects of differential mortality rates on the progressivity of social security. Econ Inq 33:457–475
Gruber J, Wise D (eds) (1999) Social security and retirement around the world. The University of Chicago Press, Chicago
Gruber J, Wise D (eds) (2004) Social security programs and retirement around the world: micro-estimation. The University of Chicago Press, Chicago
Gustman A, Steinmeier T (2001) How effective is redistribution under the social security benefit formula? J Public Econ 82:1–28
Immervoll H, Levy H, Lietz C, Mantovani D, O'Donoghue C, Sutherland H, Verbist G (2006) Household incomes and redistribution in the European Union: quantifying the equalising properties of taxes and benefits. In: Papadimitriou D (ed) The distributional effects of government spending and taxation. Palgrave MacMillan, Hampshire
Lambert P (1993) The distribution and redistribution of income. A mathematical analysis. Manchester University Press, Manchester
Liebman J (2001) Redistribution in the current US social security system. NBER WP 8625. National Bureau of Economic Research
Lindbeck A (2006) Conceptualization of non-financial defined contribution systems. In: Holzmann R, Palmer E (eds) Pension reform. Issues and prospects for non-financial defined contribution (NDC) schemes. The World Bank, Washington DC
Lindbeck A, Persson M (2003) The gains from pension reform. J Econ Lit 41:74–112
Lokshin M, Sajaia Z (2004) Maximum likelihood estimation of endogenous switching regression models. Stata J 4:282–289
Lustig N, Pessino C (2012) Social spending and income redistribution in Argentina during the 2000s: the rising role of noncontributory pensions. Working Paper 449/2012. Universidad del CEMA
Moncarz PE (2011) Assessing implicit redistribution within social security systems in Argentina and Mexico. Working Paper. Universidad Nacional de Córdoba, Argentina
Palmer E (2006) What is NDC? In: Holzmann R, Palmer E (eds) Pension reform. Issues and prospects for non-financial defined contribution (NDC) schemes. The World Bank, Washington DC
Rofman R, Lucchetti L, Ourens G (2008) Pension systems in Latin America: concepts and measurements of coverage. Social Protection Discussion Papers 0616, The World Bank, Washington DC
Saez E, Matsaganis M, Tsakloglou P (2012) Earnings determination and taxes: evidence from a cohort-based payroll tax reform in Greece. Q J Econ 127:493–533
Sutherland H (2001) Euromod: an integrated European benefit-tax model. EUROMOD Working Paper No. EM9/01. University of Essex, UK
Valdés-Prieto S (2006) Conceptualization of non-financial defined contribution systems. In: Holzmann R, Palmer E (eds) Pension reform. Issues and prospects for non-financial defined contribution (NDC) schemes. The World Bank, Washington DC
Zylberstajn E (2011) Assessing implicit redistribution in the Brazilian social security system. Working Paper. University of Sao Paulo, Brazil
Facultad de Ciencias Económicas, Universidad Nacional de Córdoba, Consejo Nacional de Investigaciones Científicas y Técnicas, Av. Valparaíso s/n, Ciudad Universitaria, 5016, Córdoba, Argentina
Pedro E. Moncarz
Correspondence to Pedro E. Moncarz.
I am indebted to Alvaro Forteza for his helpful comments and guidance throughout the process that led to the completion of this research. I also want to thank María Laura García for providing invaluable information, and the participants at the IARIW-IBGE conference on "Income, Wealth and Well-being in Latin America" (San Pablo, 2013), the 32nd IARIW General Conference (Boston, 2012), the XXVII Jornadas de Economía (Montevideo, 2012), the 16th Annual LACEA Meeting (Santiago de Chile, 2011) and workshops at the Departamento de Economía (Universidad de la República, 2011) and the Instituto de Economía y Finanzas (Universidad Nacional de Córdoba, 2010). I appreciate the financial assistance provided by the International Association for Research in Income and Wealth (IARIW). As usual, I am solely responsible for all remaining errors.
Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
Moncarz, P.E. Implicit redistribution within Argentina's social security system: a micro-simulation exercise. Lat Am Econ Rev 24, 2 (2015) doi:10.1007/s40503-015-0016-8
Revised: 21 October 2014
Springs · 1 Aug 2022
Freud's Theory of Personality
The Austrian neurologist Sigmund Freud is the father of psychoanalysis and 'the talking cure'. As a pioneer of psychodynamic personality theory, Freud believes t…
Private Self-Consciousness is the tendency to think about and attend to the more covert, hidden aspects of the self—aspects that are personal in nature and not accessible to the scrutiny of others. For example, one's privately held beliefs, aspirations, feelings, and values. 💭
Private Self-Consciousness can be assessed with psychometric tools such as the Original Self-Consciousness Scale (Fenigstein, Scheier & Buss 1975) and the Revised Self-Consciousness Scale (Scheier & Carver 1985). 📝
mindspace · 31 Oct 2020
Saturn's rings are new…relative to sharks. Sharks existed on Earth long before Saturn got its rings. Just learned this over a conversation and couldn't help but wonder how many things we assume to be old but in reality, they are not that old.
"The findings indicate that Saturn's rings formed between 10 million and 100 million years ago. From our planet's perspective, that means Saturn's rings may have formed during the age of dinosaurs." - Source.
karthik · 14 Oct 2020
Most people think that inhaling helium changes the pitch or frequency of the voice. No! That is not what happens. It's the timbre of the voice that changes.
When you inhale helium, the medium inside the vocal cavities changes from a denser medium, air, to a lighter medium, helium. And we know that in air, vibrations travel at a speed of 343 m/s. For helium, it's 1007 m/s. The helium medium now increases the natural frequencies of the cavities. In other words, it changes the responsiveness of the cavities to higher frequencies. This results in the amplification of a higher range of frequencies compared to when air was the medium, causing the squeaky voice!
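As a rough back-of-the-envelope check (treating a vocal-tract cavity as a simple tube whose resonant frequencies scale in direct proportion to the speed of sound in the gas filling it), the shift is roughly a factor of three:

$$ \frac{f_{helium}}{f_{air}} = \frac{v_{helium}}{v_{air}} = \frac{1007\;m/s}{343\;m/s} \approx 2.9 $$

In practice you exhale a helium-air mixture rather than pure helium, so the real upward shift of the resonances is somewhat smaller than this.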
The key observation here is that the frequency at which the sound is produced by the vocal folds doesn't change. It's the resonant frequencies that change, forcing a change in the timbre of the voice.
What's the timbre of a human voice? Let's start with the voice! The human voice is created when the vocal folds vibrate. It's this vibration of air molecules, travelling through different cavities like the pharynx, sinuses, nose, and mouth, that's converted to speech. And here is the interesting part that makes a voice unique to a person.
Like any physical object, the cavities through which the sound travels have their own properties as well. They have their own distinct natural frequencies because of the geometries of the muscles that are unique to a person and the composition of the air.
And the sound from the vocal folds is made of not just a uniform sine wave with a fundamental frequency, but a composite of other distinct frequencies as well. So when certain frequencies of that sound wave hit the natural frequencies of the cavities, resonance happens, and those parts alone get amplified. The end result of all this is the distinct voice of a person. In other words, the lowest resonant frequency that's modulated by the rest of the frequencies is what gives that unique tone to your voice. And that's what we call the 'timbre' of the voice.
The voice you hear when you inhale helium, that's because of the timbre change too. Not the frequency shift!
The longest vertical straw you can drink from is 10.3 metres. Even if you use a vacuum pump it won't suck the liquid higher than that! Here is why!
Contrary to your intuition, when you drink from a straw you are not actually sucking up the fluid here. Just the air. So, when you do that, inside the straw, the pressure drops lower than that of the atmospheric pressure (101 kPa) outside. So, it's the outside air pressure that pushes the water into the straw.
As the liquid moves up the straw, it is fighting against the gravity that is pulling it downwards. But it still keeps rising as long as the atmospheric pressure is greater than the pressure inside the straw due to gravity (weight of the liquid column).
The more liquid enters the column, the more it weighs. And at a certain height, there'd be enough water in the straw that'd exert the same pressure as that of the atmospheric pressure. That height, at sea level on earth, for water is 10.3 m.
$$ p_{atm}= 101\;kPa $$
$$p_{straw}= \dfrac{F}{A} = \rho g h$$
$$\rho g h = 101 \times 10^3\;N/m^2$$
$$h = \dfrac{101 \times 10^3\;N/m^2}{10^3\;kg/m^3 \times 9.81\;m/s^2}$$
$$h = 10.3\; m$$
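The same calculation works for any liquid; only the density changes. For mercury, with a density of roughly $13.6 \times 10^3\;kg/m^3$, the limit drops to about 0.76 m, which is exactly why a mercury barometer at sea level stands at about 760 mm:

$$h_{mercury} = \dfrac{101 \times 10^3\;N/m^2}{13.6 \times 10^3\;kg/m^3 \times 9.81\;m/s^2} \approx 0.76\;m$$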
mindspace · 8 Oct 2020
Starry nights with clear skies are usually colder than cloudy nights. Ever noticed this? Why does it feel colder? Well, during the day, the sun heats up the Earth. At night the heat is radiated back into space. But when clouds are present, they act like an insulator and retain the radiated heat that would otherwise escape into space.
vishnu · 7 Oct 2020
Temperature is not a measure of heat. Most people think that they are the same. Take a pair of vessels with water, one small and one large, and expose them to the sun. You will find the smaller vessel becoming warm quicker than the larger one. Although an equal quantity of heat is supplied to the two vessels, due to the difference in the quantity of water, the time it takes to raise the temperature varies. This should clarify the confusion between temperature and heat.
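A quick worked example makes the distinction concrete. Using water's specific heat capacity, roughly $c \approx 4200\;J/(kg\cdot K)$, and an arbitrary 84 kJ of absorbed heat:

$$\Delta T = \frac{Q}{mc}; \qquad \Delta T_{1\;kg} = \frac{84\,000\;J}{1\;kg \times 4200\;J/(kg\cdot K)} = 20\;K, \qquad \Delta T_{10\;kg} = 2\;K$$

Same heat, very different temperature change, simply because the larger mass spreads that energy across more water.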
Playful aggression or cuteness aggression is the term for when one experiences an aggressive response to an adorable stimulus like a baby's cheeks or a fluffy unicorn toy. The response is a harmless urge to pinch and squeeze a baby's face. People would say something like "You're so cute and I am going to eat you up!" in an aggressive manner, gritting their teeth or clenching their fists, but in a harmless and tender way.
This is similar to a stimulus of intense happiness evoking an accompanied negative emotion like crying to keep the hormones balanced.
karthik · 5 Oct 2020
If you have poured tea from a mug you'll intuitively know what the Coanda effect is. To put it simply, the Coanda effect is the phenomenon where fluids like water tend to follow and stick to the contour of an object.
So what happens here? When the water flows out of the mug, the water molecules encounter the air molecules and drag them along due to viscosity. As the air molecules under the mug get dragged off, the pressure at that spot, which is relatively constrained compared to the other side, decreases (Bernoulli's principle). And as the pressure is higher at the top of the water than on the other side, the water reaches equilibrium by moving towards the low-pressure region, which is what makes it stick to the surface of the mug.
Earth's atmosphere is leaking a few grams of helium this very moment! Yep! As helium and hydrogen are the lightest elements of all, Earth's gravity has little hold on them in the hydrostatic equilibrium. Moving faster than heavier molecules at the same temperature, hydrogen and helium atoms in the thermosphere can reach velocities greater than Earth's escape velocity and shoot off into space.
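A rough comparison shows why it's the light gases that leak. Taking a thermospheric temperature of about 1000 K (an illustrative figure; the real thermosphere varies a lot), the typical thermal speed $v_{rms}=\sqrt{3kT/m}$ gives:

$$v_{rms}^{He} \approx 2.5\;km/s, \qquad v_{rms}^{N_2} \approx 0.94\;km/s, \qquad v_{escape} \approx 11.2\;km/s$$

Even the average helium atom is well below escape velocity, but the fast tail of the Maxwell-Boltzmann distribution sits far closer to 11.2 km/s for helium than for nitrogen, so a steady trickle of helium (and hydrogen) escapes while the heavier gases stay put.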
If you are in space and shine a torch in an arbitrary direction, the photons (although being massless) will impart a thrust on you that will propel you in the opposite direction. This is due to the conservation of momentum as well.
In other words, when photons are ejected out of the torch, they travel outward with a momentum*. And due to this, your momentum changes to conserve the total momentum of your initial state, pushing you in the opposite direction.
* Photons are massless but they do carry momentum, given by $p = \dfrac{h}{\lambda}$.
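To get a feel for how tiny this push is, the thrust from a light source of power $P$ is $F = P/c$. For a torch radiating about 1 W of light (an illustrative number):

$$F = \frac{P}{c} = \frac{1\;W}{3\times 10^{8}\;m/s} \approx 3.3\times 10^{-9}\;N$$

So you would indeed drift in the opposite direction, but at a few nanonewtons of thrust it would take a very long time to build up any noticeable speed.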
Learned helplessness is a psychological phenomenon where an animal, after repeated traumatic experiences, eventually stops making any attempts to act upon it.
Way back, behavioural psychologists experimented on dogs by putting them in a cage where one half of the floor can be shocked. One group of dogs had the option to jump over a small barrier to avoid the shock. Another group of dogs didn't have that choice. They had to endure the shock as long as it lasted.
Over time, after repeated trials, when the first group of dogs was put in an escapable shock cage again, they escaped by jumping the barrier — just how they were conditioned to. But the second group that had 'learned' to endure the shock refused to escape and took the shock passively, even when the escape was as easy as jumping the barrier.
Similarly, when humans are exposed to repeated negative emotions without escape, we eventually learn the 'helpless' state and continue to endure it even when we are capable of acting upon it or realising that we have control over the situation.
For the dogs, the learned helplessness was alleviated by training them to take the steps towards the escape. And in humans, just the realisation of the fact that we have control makes a huge difference. This is a widely considered area of research in product design as well.
Most of our older memories are partially distorted and have fake details in them. This is a very common occurrence, as recalling or 'remembering' something is technically a reconstruction of events you learned and encoded into your memory ages ago. Between learning that memory and recalling it now, you would have learned a lot of new and similar information. And these new encodings in your brain interfere with the old memories and affect the way you recall them.
In other words, new memories often interfere with old memories. Remembering is a very complicated process, and most of the old stuff you remember now is likely filled with made-up details. Biased, fabricated, and prone to errors.
finite population correction factor calculator
The finite population correction (fpc) factor is used to adjust a variance estimate for an estimated mean or total when a simple random sample of size n is selected without replacement from a finite population of size N. It is appropriate when more than about 5 % of the population is being sampled and the population size is known; the theory applies only to random samples drawn without replacement. The factor is

$$\text{fpc} = \sqrt{\frac{N-n}{N-1}},$$

and the standard error of the sample mean is multiplied by it (some texts use the simpler variance correction 1 − n/N, which is nearly identical when N is large). The name arises because sampling with replacement can be thought of as sampling without replacement from an infinite population; when the population is effectively infinite relative to the sample (for example, a sample of 1,000 from roughly 304 million people is far less than 5 %), no correction is needed.

A standard textbook exercise shows how the factor behaves: with a sample of n = 100, the fpc is 0.9492 for N = 1,000, about 0.983 for N = 3,000, and about 0.990 for N = 5,000. The lesson is that when N is large relative to n, the factor is close to 1 and has little effect; the extra precision it measures only matters when the sample size becomes close to the population size (as a rule of thumb, when the population is less than about 10 times the estimated sample size). A finite population is simply one whose individuals can be counted, such as the vehicles crossing a bridge each day, the births per year, or the words in a book; if valid estimates of its parameters are to be produced, the population must be defined precisely and the sampling method carefully designed and implemented.

The fpc should not be confused with Bessel's correction, the n − 1 divisor used to make the sample variance an unbiased estimator of the population variance. A related small-sample issue is the bias of the sample standard deviation: Sokal and Rohlf (1981) and Gurland and Tripathi (1971) give correction factors for samples of n < 20, where the underestimate is about 25 % for n = 2 but only about 5 % for n = 6.
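A minimal calculator for the correction factor and the adjusted standard error of the mean can be sketched in a few lines of Python (the function names are illustrative, not from any particular package):

```python
from math import sqrt

def fpc(N, n):
    """Finite population correction factor: sqrt((N - n) / (N - 1))."""
    return sqrt((N - n) / (N - 1))

def corrected_se(s, n, N):
    """Standard error of the sample mean with the FPC applied."""
    return (s / sqrt(n)) * fpc(N, n)

# The textbook exercise above: a sample of n = 100 from populations of increasing size
for N in (1_000, 3_000, 5_000):
    print(f"N = {N:>5}: fpc = {fpc(N, 100):.4f}")
# N =  1000: fpc = 0.9492
# N =  3000: fpc = 0.9834
# N =  5000: fpc = 0.9900
```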
Identifying Increasing/Decreasing or Constant sections of Function Domains
Linear Rates of Change
Average Rates of Change
Gradients of tangents
Average Vs Instantaneous Rates of Change
Equations of Tangent Lines
Practical Applications of Rates of Change
Relating gradient and original functions (investigation) - including velocity time to position time
We can find the rate of change of a non-linear function at a point by looking at the gradient of the tangent. However, this will only tell us about the rate of change at one point and it requires knowing the tangent first. We'll look at a method to deal with the first issue, and in the process we'll deal with the second issue as well.
Suppose that the relationship between the population ($n$) of a colony of bacteria and the time ($t$, in days) is described by the relationship $n=2^t$. We want to find the growth rate, which is the rate of change of the population. We can plot this relationship:
Instead of a tangent, we will use a secant. A secant is a line which touches a curve at two specific points. We can choose two points on this curve, $\left(1,2\right)$ and $\left(5,32\right)$, and draw the line connecting them, $y=7.5x-5.5$:
This secant has a gradient of $7.5$, so we say that the average rate of change from day $1$ to day $5$ is $7.5$ per day.
We can draw secants through any two points of a graph. Here we have drawn the secants connecting $\left(1,2\right)$ to $\left(3,8\right)$, $\left(2,4\right)$ to $\left(4,16\right)$, and $\left(3,8\right)$ to $\left(5,32\right)$. These secants have gradients of $3$, $6$, and $12$ respectively.
Notice that the average rate of change is variable since this is a non-linear function. This is true even when we take secants around the same point. $\left[1,5\right]$ and $\left[2,4\right]$ are both intervals around $3$, but in the first case the average rate of change is $7.5$ and in the second it is $6$.
Worked example
The volume of a lake over five weeks has been recorded below:
Week: $0$, $1$, $2$, $3$, $4$, $5$
Volume (m³): $123000$, $142000$, $135000$, $111000$, $104000$, $123000$
(a) Find the average rate of change of volume in the first week
(b) Find the average rate of change of volume over the whole five weeks
(c) Find the average rate of change of volume in the last three weeks
Think: We don't know the function mapping the week to the volume. However, the average rate of change only requires two points on the graph. So we can find the average rate of change from just the data points.
Do: For each period, the average rate of change will be the change in volume divided by the number of weeks:
(a) average rate of change of volume in the first week $=\frac{\text{change in volume}}{\text{number of weeks}}=\frac{142000-123000}{1-0}=\frac{19000}{1}=19000$ m³ per week
(b) average rate of change of volume over the whole five weeks $=\frac{\text{change in volume}}{\text{number of weeks}}=\frac{123000-123000}{5-0}=\frac{0}{5}=0$ m³ per week
(c) average rate of change of volume in the last three weeks $=\frac{\text{change in volume}}{\text{number of weeks}}=\frac{123000-135000}{5-2}=\frac{-12000}{3}=-4000$ m³ per week
Reflect: We can tell that the function is non-linear because the average rate of change is variable. Also notice that the average rate of change can be positive, negative or zero depending on the interval we choose. This is a significant limitation of average rates of change.
Secants and average rates of change
A secant is a line which intersects with a curve at two points
The average rate of change of a function over an interval is the gradient of the secant on the function between the endpoints of the interval
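If you want to check these calculations numerically, the secant gradient is a one-line function. A small sketch in Python, reproducing the bacteria example above:

```python
def average_rate_of_change(f, a, b):
    """Gradient of the secant through (a, f(a)) and (b, f(b))."""
    return (f(b) - f(a)) / (b - a)

population = lambda t: 2 ** t   # the bacteria colony n = 2^t

print(average_rate_of_change(population, 1, 5))   # 7.5 per day
print(average_rate_of_change(population, 2, 4))   # 6.0 per day
print(average_rate_of_change(population, 3, 5))   # 12.0 per day
```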
Does the graphed function have a constant or a variable rate of change?
Consider a function which takes certain values, as shown in the table below.
$x$: $3$, $6$, $8$, $13$
$y$: $-12$, $-15$, $-17$, $-22$
Find the average rate of change between $x=3$ and $x=6$.
Find the average rate of change between $x=8$ and $x=13$.
Do the set of points satisfy a linear or non-linear function?
Non-linear
Does the function that is satisfied by the ordered pairs $\left\{\left(-2,-5\right),\left(1,-20\right),\left(2,-25\right),\left(7,-50\right),\left(9,-60\right)\right\}$ have a constant or a variable rate of change?
Sketch the graphs of functions and their gradient functions and describe the relationship between these graphs
Apply calculus methods in solving problems | CommonCrawl |
Informal unemployment and education
Ann-Sofie Kolm and Birthe Larsen
This paper develops a four-sector equilibrium search and matching model with informal sector employment opportunities and educational choice. We show that underground activities reduce educational attainment if informal employment opportunities are mainly available to low-educated workers. A more zealous enforcement policy will in this case improve educational incentives, as it reduces the attractiveness of remaining a low-educated worker. However, unemployment also increases. Characterizing the optimal enforcement policies, we find that relatively more audits should be targeted towards the sector employing low-educated workers; otherwise, the stock of educated workers ends up too low.
Researchers have been puzzled by the fact that observed tax evasion in high-income countries, despite low audit rates and fairly modest fines, is substantially lower than what is predicted by theory. Andreoni et al. (1998) argue that this discrepancy is most likely explained by non-economic factors, such as morality, guilt, and shame. However, Kleven et al. (2011), who conducted a large field experiment on individual tax filers in Denmark, suggest that this discrepancy is explained by the degree of third-party reporting. As incomes for individuals are not self-reported, rather reported by a third party such as the employer, it is difficult, and thus costly, to evade taxes. These costs, both due to third-party reporting, or even morality, guilt, or shame, tend to reduce the profitability of evading taxes and limit the size of the informal sector, although the expected punishment fees are low relative to taxes.
In this paper, we argue that these types of costs may explain why highly educated workers to a lesser extent evade taxes and work informally than low-educated workers. If highly educated workers to a smaller extent work in industries which handle cash payments and to a larger extent are subject to third-party reporting, it will be more difficult, and thus more costly, for these workers to evade taxes.
This is consistent with data. Evidence indicates that manual workers, or workers with a lower level of formal education, face informal employment opportunities to a substantially higher degree than highly educated workers. Pedersen (2003), using the same questionnaire design for Germany, Great Britain, Denmark, Norway, and Sweden, confirms that skilled blue collar workers carry out more informal market activities than others. Figure 1 shows the extent of informal activities in the five countries by industry. Most informal work is carried out in the construction sector, followed by the agricultural sector, hotels, and restaurants. This pattern is also confirmed for Denmark, by Hvidtfeldt et al. (2011), and for Germany, by Haigner et al. (2011), using representative survey data.
Fraction of informal sector work by industry. Pedersen (2003)
Furthermore, performing logistic regressions for the five countries, Pedersen (2003) confirms that the likelihood of informal market activities falls with the length of education. In addition, Boeri and Garibaldi (2005) show for Italy that mainly workers at the lower end of the skill distribution engage in informal activities.
The fact that mainly low-educated workers seem to work in the informal sector suggests that the choice of educational attainment is potentially distorted. Informal employment opportunities foregone with education may simply reduce the incentives for workers to acquire education.
The aim of this paper is to investigate the equilibrium impact of underground activities on labour market outcomes and educational attainment in high-income countries, as well as to characterize the optimal enforcement policy. Although harsher punishment policies may correct for a too low stock of educated workers, total unemployment may increase with such policy. In fact, we have little guidance from research to what extent formal sector jobs replace jobs in the underground economy as those jobs disappear with stricter informal sector punishment.
For this purpose, we develop a four-sector general equilibrium model featuring matching frictions on the labour market. Unemployed workers search for jobs in both a formal and an informal sector, and workers decide whether or not to acquire higher education based on their ability levels. Education is considered to be a once and for all investment in human capital and takes place as soon as the worker enters the labour market.1
In order to isolate the mechanisms and increase the transparency of the model, we keep the differences between the formal and informal sectors at a minimum.2 The only dissimilarities between the sectors are that taxes are not paid in the latter and that productivity in the formal sector may be higher than productivity in the informal sector. Instead of paying taxes, informal sector firms have to pay a fine in case they are hit by an audit and detected as tax cheaters. In addition, firms in the informal sector are assumed to face concealment costs. In our model, we let concealment costs capture costs associated with concealing taxable income due to third-party reporting or even morality, guilt, or shame. The costs reduce the profitability of evading taxes and limit the size of the informal sector although the expected punishment fees are low relative to taxes. In line with Kleven et al. (2011), we also let these costs be higher the more income that is hidden from the tax authorities.
We find that underground activities reduce the incentives to acquire higher education if informal employment opportunities mainly are available to low-educated workers. More zealous enforcement policies will in this case improve educational incentives as these reduce the attractiveness of remaining a low-educated worker. However, if also highly educated workers to a large extent are exposed to informal employment opportunities, the incentives to acquire higher education may fall with stricter enforcement policies as underground work pays off better to workers with high productivity. Moreover, we find that actual unemployment most likely increases, although the official unemployment falls. Finally, characterizing the optimal enforcement policies, we find that relatively more audits should be targeted towards the sector employing low-educated workers; elsewise, the outcome is a too low stock of educated workers.
The present paper extends the strand of tax evasion literature which departs from the assumption of imperfectly competitive labour markets by incorporating involuntary unemployment through the inclusion of search frictions.3 See, for example, Fugazza and Jacques (2004), Boeri and Garibaldi (2005), and Kolm and Larsen (2006) who also model underground activities in high-income countries. These studies focus on labour market outcomes and rely on asymmetries between the formal and the informal sector, such as heterogeneity in morality, in order to explain the co-existence of a formal and an informal sector.
There are also numerous studies based on search theoretical frameworks investigating issues of informal employment from the point of view of low- and middle-income countries. As one can argue that the nature of the informal sector can be quite different in low- and middle-income countries compared to high-income countries, the modelling strategies usually differ in these set-ups. As pointed out by La Porta and Shleifer (2014), the informal sector in low- and middle-income countries is usually huge and contains small, unproductive, and stagnant firms. Moreover, the informal sector in this literature is usually seen as an unregulated sector.
For example, taking a Latin American perspective, see Albrecht et al. (2009), which accounts for worker heterogeneity and considers the impact of payroll taxes and severance pay on unemployment in the presence of an informal sector. The informal sector can be seen as an unregulated sector which is not affected by payroll taxes and other formal policies.4 The recent study by Meghir et al. (2015) takes a slightly different modelling approach in its focus on underground activities in Brazil, as the paper considers on-the-job search and firm heterogeneity. Workers may search for jobs both in the formal and the informal sector, and search frictions make it profitable for firms to start both types of jobs.
The paper is organized as follows. In Section 2, we provide an empirical background and motivation for the paper. In Section 3, the model is set up. Section 4 offers a comparative statics analysis of an increase in the relative punishment of informal activities. Section 5 considers optimal policy, and finally, Section 6 concludes.
Background and motivation
As individuals engaged in underground work do not wish to be identified, it is notoriously difficult to collect accurate information about these activities. For natural reasons, we therefore have limited knowledge about the empirical relationships between informal activities and other economic outcomes.
In this section, we construct a cross-sectional data set of 24 OECD countries to investigate the relationships between factors affecting underground activities and educational outcomes. All OECD countries are included in the sample provided that we have data on the size of the informal sector and information on the legal and regulatory framework for the purpose of tax compliance collected by the Global Forum on Transparency and Exchange of Information for Tax Purposes (OECD 2012).
Since the informal economy cannot directly be measured, one has to rely on indicators that capture informal sector activities in order to estimate the size of the sector. Here we use the most recent estimates derived by Schneider et al. (2010). Instead of using a method which assumes that a single factor or indicator can capture all activities in the informal sector, such as the currency demand approach or the electricity approach, they estimate the size of the informal sector using a method which includes multiple causes and indicators of the informal sector.5 Figure 2 provides a picture of how large the informal sector is in relation to GDP in the different countries.
The size of the shadow economy as a fraction of GDP for each country in 2007
If, as we argue, informal employment opportunities are foregone with higher education, we should observe a lower stock of educated workers in countries where it is more profitable to work in the underground economy. Thus, countries with less strict enforcement policies or lower concealment costs relative to the tax burden should have a smaller stock of highly educated workers.
To measure the costs of informal sector work in a country, we construct a variable based on the legal and regulatory framework on the availability of, and access to, information of importance for tax compliance. The data used is collected by the Global Forum on Transparency and Exchange of Information for Tax Purposes (OECD 2012). The Global Forum has set out a large number of standards in order to increase tax compliance, and through a process of peer reviewing, the Forum assesses the legal and administrative framework in each member country. More specifically, the peer-reviewing process provides information if the standards are "in place", "in place but there is need for improvements", and "not in place". From this information, we construct an index capturing the costs of evading taxes.6
The cost of evading taxes needs to be related to the cost of not evading taxes.7 We let these costs be captured by the tax wedge of total labour costs to the employer relative to the corresponding net take-home pay for the average single worker without children. This data from the OECD Taxing Wages database provides unique information on the income taxes paid by workers and the family benefits received in the form of cash transfers as well as the social security contributions and payroll taxes paid by their employers, for each of the OECD countries.
Figure 3 plots the percentage of the total population, 25–64 years old, holding a tertiary education in 2007 (OECD 2012) against our measure for the cost of evading taxes relative to not evading taxes. Consistent with our hypothesis, we observe a positive correlation between the measures; the less attractive it is to work in the informal sector, the more workers will choose a higher education.
Fraction of 25–64 years old with tertiary education as a function of the wedge between the informal and the formal sector for 2007
In Fig. 4, the aim is to see whether the size of the informal sector is negatively correlated with educational attainment. Indeed, we observe a negative correlation between the size of the underground economy as a fraction of GDP and the percentage of the population aged 25–64 holding a tertiary education. Thus, economies where the informal sector is more extensive also tend to be economies where a lower fraction of the population educates themselves.
Fraction of 25–64 years old with tertiary education as a function of the size of the shadow economy as a fraction of GDP in 2007
Clearly, as it is challenging to get an accurate measure of the size of the informal sector and, as we have done here, of the costs of evading taxes, this section only serves to provide correlations between the variables in focus. Identifying causal relationships between, on the one hand, tax and punishment policies and, on the other hand, educational outcomes is, given the available data, an overwhelming task. Next, we build an equilibrium model to investigate these relationships, as well as to pin down the mechanisms.
This section develops a four-sector general equilibrium model with formal and informal sector employment opportunities and educational choice. Workers differ in the ability to acquire education. Abilities, e, are uniformly distributed between 0 and 1, e∈ [0,1], and the cost of higher education, c(e), is decreasing in ability. Thus, workers with a high level of ability will find it more than worthwhile to invest in higher education, whereas workers with low ability will not. Workers not attaining higher education will from now on be referred to as manual workers. Both manual and highly educated workers allocate search effort optimally between the formal and the informal sector. Once matched with a firm, they bargain over the wage. The economy thus consists of four sectors: the formal and informal sectors for manual workers (denoted F,m and I,m) and the formal and informal sectors for highly educated workers (denoted F,h and I,h).
Manual and highly educated workers search for jobs in both a formal and an informal sector. For simplicity, we assume that only unemployed workers search for jobs. This is a simplification, i.e. we do not acknowledge that the connection to the labour market given by working in the formal or informal sector may bring about job opportunities not available while unemployed. The matching functions for the four categories of jobs are given by \({X_{l}^{j}} = \left ({v_{l}^{j}}\right)^{\frac {1}{2}} \left (\left ({\sigma _{l}^{j}}\right)^{\gamma }u_{l}\right)^{\frac {1}{2}}\), where \({X_{l}^{j}}\) is the sectorial matching rate, \({v_{l}^{j}}\) is the sectorial vacancy rate, and \(u_{l}\) is the unemployment rate, with j=F,I and l=m,h. The rates are defined as the numbers relative to the labour force of manual and highly educated workers, respectively. The exponents in the matching function are set equal to one half in order to simplify the welfare analysis where we derive the optimal tax and punishment system when we have imposed the traditional Hosios condition. In that case, we can disregard congestion externalities as the elasticity of the expected duration of a vacancy is equal to the bargaining power of workers in a symmetric Nash bargaining situation.
Workers allocate search effort optimally across the formal and the informal sector. A worker with educational level l will direct \({\sigma _{l}^{F}}\) units of search towards a formal sector job and \({\sigma _{l}^{I}}\) units of search towards an informal sector job. Thus, workers with different levels of education may differ in their allocation of search time across the formal and informal sectors. Each worker's total search intensity is, however, exogenously given and normalized to unity, i.e. \({\sigma _{l}^{F}}+{\sigma _{l}^{I}}=1,\;l={m,h}\). The parameter γ<1 captures that the effectiveness of search falls with search effort, i.e. the first unit of search in one sector is more effective than subsequent units of search. This captures that different search methods are used when searching for a job in a given market; the more time spent searching in that market, the less efficient the search methods that have to be used. The transition rates into informal and formal sector employment for a particular worker i are \(\lambda _{li}^{I}=\left (\sigma _{li}^{I}\right)^{\gamma }\left ({\theta _{l}^{I}}\right)^{\frac {1}{2}}\) and \(\lambda _{li}^{F}=\left (1-\sigma _{li}^{I}\right)^{\gamma }\left ({\theta _{l}^{F}}\right)^{\frac {1}{2}}\), where \({\theta _{l}^{I}}={v_{l}^{I}}/\left (\left ({\sigma _{l}^{I}}\right)^{\gamma }u_{l}\right)\) and \({\theta _{l}^{F}}={v_{l}^{F}}/\left (\left (1-{\sigma _{l}^{I}}\right)^{\gamma }u_{l}\right)\) are labour market tightness, l=m,h, measured in effective search units. The rates at which vacant jobs become filled are \({q_{l}^{j}}=\left ({\theta _{l}^{j}}\right)^{-\frac {1}{2}},\;j={F,I},\;l={m,h}\).
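To fix ideas, the following minimal numerical sketch evaluates the matching technology and the implied transition and vacancy-filling rates for one skill group. All parameter and rate values are hypothetical and chosen purely for illustration.

```python
# Illustrative sketch of the sectorial matching technology (hypothetical values).
gamma = 0.5              # curvature of search effectiveness, gamma < 1
u = 0.08                 # unemployment rate of the skill group
v_F, v_I = 0.05, 0.03    # formal and informal vacancy rates
sigma_I = 0.4            # share of search effort directed to the informal sector
sigma_F = 1.0 - sigma_I

# Tightness measured in effective search units
theta_F = v_F / (sigma_F**gamma * u)
theta_I = v_I / (sigma_I**gamma * u)

# Worker transition rates lambda = sigma^gamma * theta^(1/2)
lambda_F = sigma_F**gamma * theta_F**0.5
lambda_I = sigma_I**gamma * theta_I**0.5

# Vacancy-filling rates q = theta^(-1/2)
q_F, q_I = theta_F**-0.5, theta_I**-0.5

print(f"theta_F={theta_F:.3f}, theta_I={theta_I:.3f}")
print(f"lambda_F={lambda_F:.3f}, lambda_I={lambda_I:.3f}")
print(f"q_F={q_F:.3f}, q_I={q_I:.3f}")
```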
Value functions
Let \(U_{l}, {E_{l}^{F}},\) and \({E_{l}^{I}}\) denote the expected present values of unemployment and employment for manual and highly educated workers. The value functions for worker i then reads
$$ {rU}_{li}=R+\lambda_{li}^{F}\left({E_{l}^{F}}-U_{li}\right)+\lambda_{li}^{I}\left({E_{l}^{I}}-U_{li}\right)-{aU}_{li},\;l=m,h, $$
$$ {rE}_{li}^{F}=R+w_{li}^{F}+s\left(U_{l}-E_{li}^{F}\right)-{aE}_{li}^{F},\;l=m,h, $$
$$ {rE}_{li}^{I}=R+w_{li}^{I}+s\left(U_{l}-E_{li}^{I}\right)-{aE}_{li}^{I},\;l=m,h, $$
where r is the exogenous discount rate, \({w_{l}^{j}}\) is the sector wage, and s is the exogenous separation rate. R is a lump sum transfer that all individuals receive from the government, reflecting that the government has some positive revenue requirement.9 The parameter a is the rate at which workers die; it captures a constant flow of workers out of the labour market at each instant of time. Analogously, there is an equally sized flow of workers into the labour market each period, as people are born at the same rate. This keeps the population constant, normalized to unity, and enables us to study the impact of various policies on educational attainment despite the fact that education is an irreversible investment.
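For concreteness, once wages and transition rates are given, Eqs. (1)–(3) form a linear system in \(U_{l}, {E_{l}^{F}},\) and \({E_{l}^{I}}\) for each skill group. The sketch below solves that system numerically; all parameter values are hypothetical and only meant to illustrate the structure.

```python
# Sketch: worker value functions (1)-(3) as a linear system, hypothetical values.
import numpy as np

r, a, s, R = 0.05, 0.02, 0.2, 0.1
lam_F, lam_I = 0.9, 0.3          # transition rates into formal / informal jobs
w_F, w_I = 1.0, 0.8              # sector wages

# (r+a+lam_F+lam_I) U - lam_F E_F - lam_I E_I = R
# -s U + (r+a+s) E_F                          = R + w_F
# -s U + (r+a+s) E_I                          = R + w_I
A = np.array([[r + a + lam_F + lam_I, -lam_F,    -lam_I   ],
              [-s,                     r + a + s, 0.0     ],
              [-s,                     0.0,       r + a + s]])
b = np.array([R, R + w_F, R + w_I])
U, E_F, E_I = np.linalg.solve(A, b)
print(f"U={U:.2f}, E_F={E_F:.2f}, E_I={E_I:.2f}")
```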
Let \({J_{l}^{j}}\) and \({V_{l}^{j}}\;j={F,I}\) represent the expected present values of an occupied job and a vacant job in the formal and informal sectors, respectively. The arbitrage equations for formal and informal sector jobs paying the wage \(w_{li}^{j}\;j={F,I}\) and a vacant job are then
$$ {rJ}_{li}^{F}={y_{l}^{F}}-w_{li}^{F}\left(1+z\right)+s\left({V_{l}^{F}}-J_{li}^{F}\right)-{aJ}_{li}^{F},\;l=m,h, $$
$$ r{V_{l}^{F}}={q_{l}^{F}}\left({J_{l}^{F}}-{V_{l}^{F}}\right)-k{y_{l}^{F}}-a{V_{l}^{F}},\;l=m,h, $$
$$ {rJ}_{li}^{I}={y_{l}^{I}}-w_{li}^{I}\left(1+p\alpha+\kappa_{l}\right)+s\left({V_{l}^{I}}-J_{li}^{I}\right)-{aJ}_{li}^{I},\;l=m,h, $$
$$ r{V_{l}^{I}}={q_{l}^{I}}\left({J_{l}^{I}}-{V_{l}^{I}}\right)-k{y_{l}^{I}}-a{V_{l}^{I}},\;l=m,h, $$
where z is the payroll tax rate and \({y_{l}^{j}},\;j={F,I},\;l={m,h}\), is productivity. The parameter p is the auditing rate which captures the probability of being detected employing a worker in the informal sector and α is the associated firm punishment fee rate. Vacancy costs are indexed by factor k to the productivity in the sector and written \(k{y_{l}^{j}}\;,j={F,I},\;l={m,h}\).10 The concealment costs, κ l , l=m,h, capture that it is costly to hide income from the tax authorities. The costs could, for example, capture what Kleven et al. (2011) refer to as third-party reporting. When there is third-party reporting of income, such as the firm reporting the wage payments directly to the tax authorities, this has to be agreed upon also by the worker, which is costly. These concealment costs could also be other direct costs associated with concealing evasion, as well as morality costs associated with evading taxes.
If firms hiring highly educated workers have a harder time concealing their activities than firms hiring manual workers, then κ h >κ m . This is the case if, for example, third-party reporting is more common for highly educated workers, or as assumed in Kleven et al. (2011), the marginal costs of evasion increase with the amount of income evaded. Although this is likely to be the case, we do not a priori impose any restriction on the values of κ l , l=h,m.
In order to improve the transparency of the model, we disregard taxation, expected punishment, and concealment costs on the worker side. This is of no importance for the results.
The unemployed worker i allocates search between the two sectors, \(\sigma _{li}^{I}\), in order to maximize the value of unemployment, r U li . A necessary condition for an interior solution is that γ<1, which holds by assumption. The first-order condition can be written as
$$ \frac{\left(1-\sigma_{li}^{I}\right)^{1-\gamma}}{\left(\sigma_{li}^{I}\right)^{1-\gamma}}=\left(\frac{{\theta_{l}^{F}}}{{\theta_{l}^{I}}}\right)^{\frac{1}{2}}\frac{{E_{l}^{F}}-U_{li}}{{E_{l}^{I}}-U_{li}},\;l=m,h. $$
Workers allocate their search between sectors to equalize the net marginal returns to search effort across the two sectors.
Wage determination
When a worker and firm meet, they bargain over the wage, \(w_{li}^{j}\), taking economy-wide variables as given. The first-order conditions from the Nash bargaining with equal bargaining power for workers and firms can be written as
$$ {J_{l}^{F}}=\left({E_{l}^{F}}-U_{l}\right)\left(1+z\right),\;l=m,h, $$
$$ {J_{l}^{I}}=\left({E_{l}^{I}}-U_{l}\right)\left(1+p\alpha+\kappa_{l}\right),\;l=m,h, $$
where we have imposed symmetry and the free entry condition, \({V_{l}^{j}}=0,\;j={F,I},\;l={m,h}.\)
We can now derive an equation determining how search is allocated between the formal and the informal sectors in a symmetric equilibrium by substituting (9) and (10) into (8) and using \({J_{l}^{F}}=\frac {k{y_{l}^{F}}}{{q_{l}^{F}}}\) and \({J_{l}^{I}}=\frac {k{y_{l}^{I}}}{{q_{l}^{I}}}\) from (5) and (7) together with free entry. This yields
$$ \frac{\left(1-{\sigma_{l}^{I}}\right)^{1-\gamma}}{\left({\sigma_{l}^{I}}\right)^{1-\gamma}}=\left(\frac{{\theta_{l}^{F}}}{{\theta_{l}^{I}}}\right)\frac{{y_{l}^{F}}}{{y_{l}^{I}}}\psi_{l},\;l=m,h, $$
where \(\psi _{l}=\frac {1+p\alpha +\kappa _{l}}{1+z}\) is the cost wedge between the informal sector and the formal sector. When workers allocate their search between the formal and the informal sectors in equilibrium, they account for the wedge, ψ l , and for the formal relative to the informal sectorial tightness, \({\theta _{l}^{F}}/{\theta _{l}^{I}}\), as well as for relative productivity, \({y_{l}^{F}}/{y_{l}^{I}}\). It follows that relatively more search will be directed towards the formal sector if expected punishment plus concealment costs are higher than the tax payments, i.e. if ψ l >1, if formal sector tightness exceeds informal sector tightness (i.e. \({\theta _{l}^{F}}/{\theta _{l}^{I}}>1\)), and/or if productivity in the formal sector is higher than productivity in the informal sector, \({y_{l}^{F}}/{y_{l}^{I}}>1\), and vice versa when ψ l <1, \({\theta _{l}^{F}}/{\theta _{l}^{I}}<1\), and \({y_{l}^{F}}/{y_{l}^{I}}<1\). By use of Eqs. (1)–(7) and (11) in Eqs. (9) and (10), equilibrium producer wages, \({\omega _{l}^{j}}\), are given by
$$ {\omega_{l}^{F}}={w_{l}^{F}}\left(1+z\right)=\frac{1}{2}{y_{l}^{F}}\left(1+k\frac{{\theta_{l}^{F}}}{\left(1-{\sigma_{l}^{I}}\right)^{1-\gamma}}\right)\;l=m,h, $$
$$ {\omega_{l}^{I}}={w_{l}^{I}}\left(1+p\alpha+\kappa_{l}\right)=\frac{1}{2}{y_{l}^{I}}\left(1+\frac{{\theta_{l}^{I}}}{\left({\sigma_{l}^{I}}\right)^{1-\gamma}}k\right),\;l=m,h. $$
An increase in tightness, \({\theta _{l}^{j}}\), makes it easier for an unemployed worker to find a job and at the same time harder for a firm to fill a vacancy. This improves the worker's relative bargaining position, resulting in higher wage demands. An increase in search will instead increase the firm's relative bargaining position. This is the case as firms will then find it easier to match with a new worker in case of no agreement. The improved bargaining position for firms moderates wage pressure.
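A small illustrative calculation of the producer-wage equations (12)–(13) makes this comparative-static discussion concrete; the parameter values below are hypothetical.

```python
# Sketch of the producer-wage equations (12)-(13), hypothetical values.
gamma, k = 0.5, 0.3
y_F, y_I = 1.0, 0.9
theta_F, theta_I = 0.8, 0.6
sigma_I = 0.4

omega_F = 0.5 * y_F * (1.0 + k * theta_F / (1.0 - sigma_I)**(1.0 - gamma))
omega_I = 0.5 * y_I * (1.0 + k * theta_I / sigma_I**(1.0 - gamma))
print(f"omega_F={omega_F:.3f}, omega_I={omega_I:.3f}")
# Higher tightness raises a sector's producer wage, while more search directed
# towards a sector (a larger sigma term in the denominator) moderates it,
# mirroring the bargaining discussion above.
```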
Labour market tightness
Labour market tightness for the formal sector and the informal sector are determined by Eqs. (4), (5), (6), and (7) using the free entry condition and the wage Eqs. (12) and (13):
$$ k\left(r+s+a\right)\left({\theta_{l}^{F}}\right)^{\frac{1}{2}}=\frac{1}{2}\left(1-\frac{k{\theta_{l}^{F}}}{\left(1-\sigma_{li}^{I}\right)^{1-\gamma}}\right),\;l=m,h, $$
$$ k\left(r+s+a\right)\left({\theta_{l}^{I}}\right)^{\frac{1}{2}}=\frac{1}{2}\left(1-\frac{k{\theta_{l}^{I}}}{\left(\sigma_{li}^{I}\right)^{1-\gamma}}\right),\;l=m,h, $$
By use of the equilibrium search allocation equation in (11), where \(\frac {{\theta _{l}^{I}}}{\left (\sigma _{li}^{I}\right)^{1-\gamma }} = \frac {{\theta _{l}^{F}}}{\left (1-\sigma _{li}^{I}\right)^{1-\gamma }} \frac {{y_{l}^{F}}}{{y_{l}^{I}}}\psi _{l}\), in (15), it becomes clear that the wedge, ψ l , and productivity differences, \({y_{l}^{F}}/{y_{l}^{I}}\), are the crucial factors determining the size of the formal sector relative to the informal sector.11 If productivity is the same in the formal and informal sectors, \({y_{l}^{F}}/{y_{l}^{I}}=1\), and ψ l >1, so that expected punishment plus concealment costs are higher than payroll taxes, then informal sector producer wages are higher than formal sector producer wages. In this case, it is relatively more attractive for firms to enter the formal sector, which makes formal sector tightness exceed informal sector tightness. Hence, we obtain \({\theta _{l}^{F}}>{\theta _{l}^{I}}\) and \({\sigma _{l}^{I}}<\frac {1}{2},\;l={m,h}\), if \(\left ({y_{l}^{F}}/{y_{l}^{I}}\right)\psi _{l}>1\) and vice versa when \(\left ({y_{l}^{F}}/{y_{l}^{I}}\right)\psi _{l}<1\). Notice that the formal sector exceeds the informal sector, \({\theta _{l}^{F}}>{\theta _{l}^{I}}\) and \({\sigma _{l}^{I}}<\frac {1}{2},\;l={m,h}\), both if the wedge equals 1, ψ l =1, and the formal sector is more productive than the informal sector, \({y_{l}^{F}}/{y_{l}^{I}}>1\), and if the two sectors are equally productive and the wedge is larger than 1, ψ l >1.
As the formal sector exceeds the informal sector in size in most high-income countries, it is most realistic to consider the case when \(\left ({y_{l}^{F}}/{y_{l}^{I}}\right)\psi _{l}>1\). This implies considering the situation when the expected punishment rate plus concealment costs exceed the tax rate, i.e. p α+κ l >z, when both the formal and informal sectors are equally productive, which does not seem unrealistic given a broad interpretation of concealment costs. In fact, as discussed in the introduction, positive concealment costs κ l >0 such that p α+κ l >z could potentially explain the puzzle of why we observe a relatively small informal sector although we, at the same time, observe rather low audit rates and fairly modest fines, i.e. p α<z. In addition, when the productivity in the formal sector exceeds that of the informal sector, the formal sector is even more likely to exceed the informal sector in size. However, we do not a priori impose any restrictions on the size of ψ l , p α, or κ l when deriving the results in this paper. When discussing results that depend on the size of ψ l , however, we focus the discussion on what we believe is the most realistic case.
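As a sanity check on this reasoning, the block formed by Eqs. (11), (14), and (15) can be solved numerically for one skill group. The sketch below is not the authors' code; it assumes equal productivities, a wedge above one, and otherwise hypothetical parameter values.

```python
# Sketch: solve Eqs. (11), (14), (15) for theta_F, theta_I and sigma_I.
from scipy.optimize import brentq

gamma, k, r, s, a = 0.5, 0.3, 0.05, 0.2, 0.02
yF, yI, psi = 1.0, 1.0, 1.2           # equal productivity, wedge > 1

def tightness(search):
    """Root of k(r+s+a)*theta^(1/2) = (1/2)*(1 - k*theta/search^(1-gamma))."""
    f = lambda th: k*(r+s+a)*th**0.5 - 0.5*(1.0 - k*th/search**(1.0-gamma))
    return brentq(f, 1e-9, search**(1.0-gamma)/k)   # upper bound keeps the bracket valid

def excess(sig):
    thF, thI = tightness(1.0-sig), tightness(sig)
    return (1.0-sig)**(1.0-gamma)/sig**(1.0-gamma) - (thF/thI)*(yF/yI)*psi

sigma_I = brentq(excess, 1e-4, 1.0-1e-4)
theta_F, theta_I = tightness(1.0-sigma_I), tightness(sigma_I)
print(f"sigma_I={sigma_I:.3f}, theta_F={theta_F:.3f}, theta_I={theta_I:.3f}")
# With psi > 1 the sketch delivers theta_F > theta_I and sigma_I < 1/2,
# in line with the discussion in the text.
```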
Figure 5 uses Eqs. (14) and (15) to derive relative tightness as a function of search intensity and plots this relationship in a \(({\sigma _{l}^{I}},{\theta _{l}^{F}}/{\theta _{l}^{I}})\) diagram together with Eq. (11). Both curves have a negative slope, and the former curve is flatter than the latter around the equilibrium, ensuring a stable equilibrium.12 When the wedge increases, ψ l′>ψ l (or \({y_{l}^{F}}/{y_{l}^{I}}\) increases), search intensity decreases for given relative labour market tightness, \({\theta _{l}^{F}}/{\theta _{l}^{I}}\); this reduction in search intensity increases \({\theta _{l}^{F}}/{\theta _{l}^{I}}\) and thereby \({\sigma _{l}^{I}}\) until a new equilibrium is reached. In Fig. 5, we have left out subscript l to ease exposition.
Fig. 5 Tightness in the formal sector relative to tightness in the informal sector and search intensity
When workers decide whether to acquire higher education or remain as manual workers, they compare the value of unemployment as an educated worker and the associated costs of higher education to the value of unemployment as a manual worker. Workers that find it optimal to acquire higher education view this as a once and for all investment in human capital, and it takes place as soon as the worker enters the labour market. As in most studies, we assume that education is costly but it takes no time.13 The cost of higher education depends on individual ability, e i ∈[0,1], and is given by c(e i ), where c ′(e i )<0 and c ′′(e i )>0.14
The marginal worker has an ability level, \(\hat {e}\), which makes him or her just indifferent between acquiring higher education and remaining as a manual worker. We write the condition determining the ability level of the marginal worker as
$$ \left(r+a\right)U_{h}-c(\hat{e})=\left(r+a\right)U_{m} $$
By using Eqs. (1)–(3), it is clear that workers proceed to higher education if the expected income gain of education exceeds their cost of education. However, as wages are endogenous, we can use Eqs. (1) and (16) together with the first-order conditions for wages and Eqs. (5), (7), and (11) together with the free entry condition. This gives the following rewriting of condition (16):
$$ c(\hat{e})=\frac{k}{1+z}\left({y_{h}^{F}}o_{h}-{y_{m}^{F}}o_{m}\right), $$
where \(o_{l}={\theta _{l}^{F}}/\left (1-{\sigma _{l}^{I}}\right)^{1-\gamma },\;l={h,m}.\) Equation (17) gives \(\hat {e}\) as a function of the endogenous variables \({\theta _{l}^{F}}\) and \({\sigma _{l}^{I}},\;l={m,h}.\) Workers with \(e\leq \hat {e}\) choose not to acquire education, whereas workers with \(e>\hat {e}\) acquire education. Hence, \(\hat {e}\) and \(1-\hat {e}\) constitute the manual and educated labour forces, respectively. The right-hand side of Eq. (17) is the expected income gain of attaining education. This gain needs to be positive in order for, at least some, workers to proceed to higher education. The fact that productivity is higher for highly educated workers, which gives rise to an educational wage premium, provides incentives for higher education. However, higher education may potentially also be associated with losses in expected income. For example, if concealment costs are higher for highly educated workers, i.e. κ h >κ m , relatively more attractive informal employment opportunities for manual workers will be foregone in case of higher education. This reduces the incentives for education.15
Clearly, in order to study the non-trivial case where at least some workers proceed to higher education, it is necessary to assume that there is a net gain in expected income from higher education. Thus, we need to assume that productivity differences between manual and highly educated workers are sufficiently large, i.e. \({y_{h}^{F}}/{y_{m}^{F}}>o_{m}/o_{h}\). Moreover, to guarantee a non-trivial interior solution where at least some, but not all, individuals choose to acquire education, we assume that the individual with the highest ability faces a very low cost of education, more specifically c(1)=0, and that the individual with the lowest ability faces a very high cost of education, i.e. \(\lim _{e\rightarrow 0}c(e)=\infty \). See the Appendix for the proof of the existence of \(\hat {e}\in (0,1)\).
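A hedged numerical sketch of the marginal-ability condition (17) follows. The cost function c(e) = c0(1/e − 1) is an illustrative choice satisfying c(1)=0, c′(e)<0, c′′(e)>0 and c(e)→∞ as e→0; the values of o h and o m are taken as given rather than solved from the model, and all numbers are hypothetical.

```python
# Sketch of the educational margin in Eq. (17) with c(e) = c0*(1/e - 1).
c0, k, z = 0.2, 0.3, 0.4
yF_h, yF_m = 1.5, 1.0
o_h, o_m = 2.6, 2.4        # o_l = theta_F_l / (1 - sigma_I_l)^(1-gamma), taken as given

gain = k/(1.0 + z) * (yF_h*o_h - yF_m*o_m)   # expected income gain of education
e_hat = c0 / (gain + c0)                      # c0*(1/e - 1) = gain  =>  e = c0/(gain + c0)
print(f"gain={gain:.3f}, e_hat={e_hat:.3f}, educated share={1.0-e_hat:.3f}")
# Workers with ability above e_hat acquire education; anything raising the
# expected income gain lowers e_hat and expands the educated labour force.
```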
The equations determining the employment rates in the formal sector and the informal sector, \({n_{l}^{F}}\), \({n_{l}^{I}}\), and the unemployment rates, u l , l=m,h, are given by the flow equilibrium equations and the labour force identity.16 The official unemployment rate \({u_{l}^{o}}\) is given by \({u_{l}^{o}}=u_{l}+{n_{l}^{I}}\). Solving for the employment and unemployment rates yields
$$ {n_{l}^{F}}=\frac{{\lambda_{l}^{F}}}{s+a+{\lambda_{l}^{F}}+{\lambda_{l}^{I}}}, \;{n_{l}^{I}}=\frac{{\lambda_{l}^{I}}}{s+a+{\lambda_{l}^{F}}+{\lambda_{l}^{I}}},\;l=h,m, $$
$$ u_{l}=\frac{s+a}{s+a+{\lambda_{l}^{F}}+{\lambda_{l}^{I}}},\;{u_{l}^{o}} =\frac{s+a+{\lambda_{l}^{I}}}{s+a+{\lambda_{l}^{F}}+{\lambda_{l}^{I}}},\;l=h,m. $$
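To see the mechanics of Eqs. (18)–(19), consider the small hypothetical example below; the transition and separation rates are illustrative, not calibrated.

```python
# Sketch of the steady-state rates in Eqs. (18)-(19) for one skill group.
s, a = 0.2, 0.02
lam_F, lam_I = 0.9, 0.3        # transition rates into formal / informal jobs

denom = s + a + lam_F + lam_I
n_F = lam_F / denom            # formal employment rate
n_I = lam_I / denom            # informal employment rate
u = (s + a) / denom            # actual unemployment rate
u_o = u + n_I                  # official rate counts informal workers as unemployed
print(f"n_F={n_F:.3f}, n_I={n_I:.3f}, u={u:.3f}, u_official={u_o:.3f}")
```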
A comparison of the unemployment rates for manual and highly educated workers requires assumptions about the size of the concealment costs. If concealment costs are higher for educated workers, i.e. κ h >κ m , the official unemployment rate is always lower for highly educated workers than for manual workers, i.e. \({u_{h}^{o}}<{u_{m}^{o}}\). This is also what is observed in the data. If, in addition, \(\left ({y_{l}^{F}}/{y_{l}^{I}}\right)\psi _{l}>1,\;l={h,m}\), and hence the informal sector is smaller than the formal sector, the actual unemployment rate is higher for highly educated workers, u h >u m ; that is, manual workers then have a lower actual unemployment rate than highly educated workers. The following Proposition summarizes the results.
Proposition 1
The official unemployment rate is lower for highly educated workers, \({u_{h}^{o}}<{u_{m}^{o}}\), if they face higher concealment costs, κ h >κ m . The actual unemployment rate is higher (lower) for highly educated workers, u h >u m (u h <u m ), if they face higher concealment costs κ h >κ m and these concealment costs are high (low) enough to induce \(\left ({y_{l}^{F}}/{y_{l}^{I}}\right)\psi _{l}>1 \left (\left ({y_{l}^{F}}/{y_{l}^{I}}\right)\psi _{l}<1\right)\), l=h,m.
For proofs of all the Propositions, see the Appendix. The actual and the official total number of unemployed workers are given by
$$U_{\text{TOT}}=\hat{e}u_{m}+\left(1-\hat{e}\right)u_{h}, $$
$$U_{\text{TOT}}^{o}=\hat{e}{u_{m}^{o}}+\left(1-\hat{e}\right){u_{h}^{o}}. $$
Comparative statics
This section is concerned with the impact of more severe punishment of informal activities on labour market performance and educational attainment. We only consider fully financed changes in enforcement policies. Hence, changes in the audit rate and the punishment fees are always followed by adjustments in the tax rate so as to balance the government budget constraint given by \(\hat {e}{n_{m}^{F}}{w_{m}^{F}}z+\hat {e}{n_{m}^{I}}{w_{m}^{I}}p\alpha +(1-\hat {e}){n_{h}^{F}}{w_{h}^{F}}z+ (1-\hat {e}){n_{h}^{I}}{w_{h}^{I}}p\alpha =R\). Rewriting this budget constraint in terms of producer wages using \({\omega _{l}^{F}}={w_{l}^{F}}(1+z)\) and \({\omega _{l}^{I}}={w_{l}^{I}}(1+p\alpha +\kappa _{l}),\;l={m,h}\) yields
$$ \frac{z\hat{e}{n_{m}^{F}}{\omega_{m}^{F}}}{1+z}+ \frac{p\alpha\hat{e}{n_{m}^{I}}{\omega_{m}^{I}}}{1+p\alpha+\kappa_{m}}+ \frac{z\left(1-\hat{e}\right){n_{h}^{F}}{\omega_{h}^{F}}}{1+z}+ \frac{p\alpha\left(1-\hat{e}\right){n_{h}^{I}}{\omega_{h}^{I}}}{1+p\alpha+\kappa_{h}}=R $$
where R is the exogenous revenue requirement.
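The following sketch simply evaluates the left-hand side of (20) for candidate policy rates; the employment rates and producer wages are taken as given rather than solved from the model, and every number is hypothetical.

```python
# Sketch: check the balanced-budget condition (20) for candidate (z, p, alpha).
z, p, alpha = 0.4, 0.05, 1.0
kappa_m, kappa_h = 0.1, 0.6
e_hat = 0.45                                        # share of manual workers
n_mF, n_mI, n_hF, n_hI = 0.62, 0.18, 0.78, 0.05     # employment rates
om_mF, om_mI, om_hF, om_hI = 1.25, 0.95, 1.90, 1.45 # producer wages omega

revenue = (z*e_hat*n_mF*om_mF/(1+z)
           + p*alpha*e_hat*n_mI*om_mI/(1+p*alpha+kappa_m)
           + z*(1-e_hat)*n_hF*om_hF/(1+z)
           + p*alpha*(1-e_hat)*n_hI*om_hI/(1+p*alpha+kappa_h))
print(f"revenue = {revenue:.3f}")
# A fully financed change in p or alpha adjusts z until revenue again equals R.
```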
From (20), it follows that an increase in the audit rate or the punishment fee, p or α, or an increase in the tax rate, z, will, for a given tax base, always increase government revenues. The tax base may, however, fall and thereby reduce revenues. If we assume that we are located on the positively sloped side of the "Laffer curves", the analysis is straightforward. Such an assumption implies that the direct effect of taxation and punishment on government revenues always dominates the indirect effect working through a reduced tax base. An increase in the audit or punishment rate then always calls for a reduction in the tax rate in order to restore a balanced government budget. A fully financed increase in the punishment of the informal sector then induces ψ l to increase, both because p α increases and because z falls.
Although the most likely scenario is that higher punishment rates call for tax reductions in order to satisfy the government budget, the results obtained in this section for the impact of higher relative punishment of informal activities on producer wages, tightness, search, employment, and unemployment rates do not depend on this assumption. The reason is that these variables are only affected by the wedge, ψ l , and not directly by z and p α. However, as will become clear, because educational attainment can be discouraged by a direct increase in taxation, which in turn may have a compositional effect on total unemployment, the repercussions through the government budget constraint will be of importance for these variables.
To illustrate this, we discuss the potential scenario where government revenue falls as harsher punishment of the informal sector is implemented, and the government needs to increase the tax rate in order to balance the budget.17 In this case, when z and p α increase simultaneously, the adjustments in the labour market outcome variables (for example, producer wages, \({\omega _{l}^{j}}\), and employment rates, \({n_{l}^{j}}\)) will be less sizeable, as these variables are only affected by the wedge, ψ l , which is not altered as much when both z and p α increase. The tax base adjustment of importance in this case is then the number of educated workers. The stock of educated workers is affected by the reform both because the wedge is altered and directly because z enters into (17) for given wedges. With the effect working through the wedge being smaller in this case, the higher tax rate reduces the incentives to acquire higher education through the direct effect. This tax base adjustment then reduces tax revenues. However, as long as the direct impact on R dominates the negative effect on the tax base through less education, the increase in z will balance the government budget. This scenario does not alter the results in the labour market analysis of the effect of harsher punishment of the informal sector on producer wages, tightness, search, employment, and unemployment rates. The reason is, as noted, that these variables are only affected by the wedge, ψ l . The required increase in z in the case considered above only implies that ψ l increases by less than if z were reduced, and the effect on the variables will be less sizeable. In fact, even if z increases to such an extent that ψ l actually falls, the results will hold.18 Moreover, it is of no importance for the results which side of the Laffer curves we are located on. However, the repercussions through the government budget constraint will be of importance for educational attainment and thus for the composition of unemployment.
In the budget constraint in (20), potential auditing costs are left out. Including auditing costs will not affect any of the Propositions we derive in this section. However, it affects the welfare analysis as it tends to favour costless taxation and punishment fees at the expense of auditing. The implications of auditing costs are shown in the Appendix.
Although the results of fully financed punishment of informal activities in Propositions 2 and 3 hold irrespective of how the government budget restriction is affected, to stress the intuition, we present the results based on the standard case when an increase in p or α increases relative punishment ψ l .19 The effects on the allocation of search and employment across the formal and the informal sector are summarized in the following Proposition.
Proposition 2
A fully financed increase in the relative punishment of the informal sector, ψ l , will reallocate search intensity and employment towards the formal sector, i.e. \({\sigma _{l}^{I}}\) falls, \({n_{l}^{F}}\) increases, and \({n_{l}^{I}}\) falls.
More zealous enforcement makes informal work less attractive, inducing unemployed workers to reallocate their search effort towards the formal sector. In addition, when search is reallocated towards the formal sector, the wage bargaining position of firms strengthens in the formal sector and weakens in the informal sector. The lower producer wages in the formal sector stimulate formal firms to open vacancies, while informal firms are discouraged from opening new vacancies as they now face higher producer wages. As both vacancies and search effort are reallocated towards the formal sector, the formal sector employment rate increases at the expense of informal employment. These mechanisms can explain the empirical findings in Almeida and Carneiro (2012), who use data on inspections carried out in Brazil.
As became clear in Proposition 2, employment in the formal sector increases at the expense of employment in the informal sector following more severe punishment of the informal sector. While this is somewhat expected, it is a priori not clear what would happen to the unemployment rates. We have the following results:
Proposition 3
A fully financed increase in the relative punishment of the informal sector, ψ l , will always cause the official unemployment rate (\({u_{l}^{o}}\)) to fall, whereas the actual unemployment rate (u l ) increases if \(\left ({y_{l}^{F}}/{y_{l}^{I}}\right)\psi _{l}>1\) (falls if \(\left ({y_{l}^{F}}/{y_{l}^{I}}\right)\psi _{l}<1\)).
The actual unemployment rates increase with more severe punishment of informal work if \(\left ({y_{l}^{F}}/{y_{l}^{I}}\right)\psi _{l}>1\). The reason for this is that the large concealment costs discourage workers from searching, and firms from opening vacancies, in the informal sector. Increased punishment of the informal sector will encourage further reallocation of search and workers away from the informal sector, where relatively efficient search methods are used, towards the formal sector. Total search efficiency then falls, inducing unemployment to increase. The fact that search becomes less efficient when reallocated towards the formal sector also has an impact on unemployment working through wage formation and tightness. As search is reallocated towards the formal sector, wage demands are moderated in the formal sector and exaggerated in the informal sector. As the efficiency of search in the formal sector increases by less than the efficiency of search in the informal sector falls, the informal sector wage push dominates the formal sector wage moderation. Thus, the incentive to open a vacancy in the formal sector is weaker than the disincentive to open a vacancy in the informal sector; formal sector tightness will increase by less than informal sector tightness falls when \(\left ({y_{l}^{F}}/{y_{l}^{I}}\right)\psi _{l}>1\). The opposite holds if \(\left ({y_{l}^{F}}/{y_{l}^{I}}\right)\psi _{l}<1\). In this case, too much search and too many firms are allocated to the informal sector as there is a relative cost advantage of producing underground. Total search efficiency would then improve when the government tries to combat the informal sector. The official unemployment rate always falls with harsher punishment of informal activities as workers to a larger extent become formally employed; in this unemployment measure, workers in the informal sector were counted as unemployed to start with.
From (17), it is clear that more severe relative punishment of the informal sector affects the number of educated workers as such policy increases ψ l . This effect is further reinforced if the tax rate is reduced in order to assure a balanced government budget as the increase in ψ l is reinforced by a reduction in z. However, a reduced payroll tax rate will also have a direct effect on the stock of educated workers. More specifically, a reduction in the tax rate, z, for a given wedge, will increase the number of educated workers. This follows as taxation is more harmful to high income earners, and consequently, a tax reduction will improve the income relatively more for high income earners. However, before considering repercussions working through the budget constraint, let us first consider the impact of a more zealous enforcement policy on education, for a given tax rate. We have the following results:
Proposition 4
An increase in the audit rate, p, or in the punishment rate, α, which then increases ψ l , will increase (reduce) the number of educated workers if the relative productivity of education is in the range \({y_{h}^{F}}/{y_{m}^{F}}\in \left [o_{m}/o_{h},\,g(\kappa _{h},\kappa _{m})o_{m}/o_{h}\right ]\) \(\left({y_{h}^{F}}/{y_{m}^{F}}\in \left(g(\kappa _{h},\kappa _{m})o_{m}/o_{h},\infty\right)\right)\), where \(g(\kappa _{h},\kappa _{m})>1\) if \(\kappa _{h}>\kappa _{m}\) and \(\left ({y_{h}^{F}}/{y_{h}^{I}}\right)\geq \left ({y_{m}^{F}}/{y_{m}^{I}}\right).\)
Proof.
We know from above that the existence of an interior solution of \(\hat {e}\) requires that y h /y m >o m /o h . Differentiating the educational equation with respect to expected punishment reveals that the impact on education is determined by the sign of y m |d o m /d(p α)|−y h |d o h /d(p α)| which is equal to the sign of \({y_{h}^{F}}/{y_{m}^{F}}-g(\kappa _{h},\kappa _{m})\left (o_{m}/o_{h}\right)\), where the term g(κ h ,κ m ) is larger than 1 for κ h >κ m and
$$g\left(\kappa_{h},\kappa_{m}\right) = \frac{A_{h}\left(\frac{{\theta_{h}^{F}}}{{\theta_{h}^{I}}}\right)^{\frac{1}{1-\gamma}- \frac{1}{2}}\left(\frac{{y_{h}^{F}}}{{y_{h}^{I}}}\psi_{h}\right)^{\frac{1}{1-\gamma}}+\psi_{h}}{A_{m} \left(\frac{{\theta_{m}^{F}}}{{\theta_{m}^{I}}}\right)^{\frac{1}{1-\gamma}- \frac{1}{2}}\left(\frac{{y_{m}^{F}}}{{y_{m}^{I}}}\psi_{m}\right)^{\frac{1}{1-\gamma}}+\psi_{m}}>1\;for\;\kappa_{h}>\kappa_{m}\ \text{and } {\frac{{y_{h}^{F}}}{{y_{h}^{I}}}\geq\frac{{y_{m}^{F}}}{{y_{m}^{I}}},} $$
where \(A_{l}=(1+o_{l})/\left (1/\psi _{l}+\left ({y_{l}^{F}}/{y_{l}^{I}}\right)o_{l}\right)\). See the Appendix for the full proof. Q.E.D. □
The impact of a more zealous enforcement policy on educational attainment depends on how attractive underground work is to manual and educated workers. When concealment costs are higher for highly educated workers, more zealous enforcement policies tend to induce more workers to educate themselves. This follows as κ h >κ m implies that manual workers to a larger extent face informal labour market opportunities. Therefore, more zealous enforcement policies, which make it less attractive to work in the informal sector, will be more harmful to manual workers. This effect may, however, be counteracted by the fact that highly educated workers have higher productivity and therefore earn higher wages. As informal activities are also highly productive for these workers, harsher punishment is, from this perspective, more harmful to the highly educated worker. Thus, even if highly educated workers face fewer informal employment opportunities, these opportunities are more profitable. This reduces educational incentives.
Which of the two effects dominates will thus depend on how sizeable the differences in informal employment opportunities and productivity are. If underground employment opportunities in an economy are foremost available to manual workers, harsher punishment of underground activities will push more workers into education, thus increasing the stock of educated workers in the economy. However, if these employment opportunities to a large extent are also available to highly educated workers, harder punishment will harm highly educated workers more, as these opportunities are more profitable to productive workers. This leads to fewer workers educating themselves.
Note that Proposition 4 only provides sufficient conditions for when the educational stock increases, and when it falls, with harsher punishment of the informal sector, without considering the financing of the reform. Provided that we are located on the positively sloped side of the Laffer curve, we can conclude the following:
Proposition 5
If an increase in the audit rate, p, or in the punishment rate, α, increases the number of educated workers as given by Proposition 4, the financing of the reform will further reinforce the increase in the stock of educated workers if z needs to fall so as to balance the government budget.
This simply follows as taxation as a direct effect is more harmful for high income earners, and consequently, a tax reduction, in order to maintain a balanced government budget, will be more beneficial for high income earners, thus encouraging educational attainments.
From Propositions 3, 4, and 5, it follows that more severe punishment of the informal sector potentially increases the total number of unemployed workers. If the formal sector is larger than the informal sector, the unemployment rates for both manual and highly educated workers are augmented. Moreover, if informal employment opportunities are to a significantly larger extent available to manual workers, more workers will attain higher education when informal activities are punished more severely. This tends to increase total unemployment as the actual unemployment rate, including informal work, is higher for highly educated workers. Also, recall that this reallocation effect is reinforced if we are located on the positively sloped side of the Laffer curve. Thus, in this case, total unemployment increases both because the unemployment rates for all workers increase and because workers are reallocated towards the sector where the unemployment rate is highest. More generally, the following Proposition summarizes the result:
Proposition 6
A fully financed increase in the audit rate, p, or in the punishment fee, α, increases (decreases) the number of unemployed workers if the relative productivity of education is in the range \(\frac {{y_{h}^{F}}}{{y_{m}^{F}}}\in \left [\frac {o_{m}}{o_{h}},g\left (\kappa _{h},\kappa _{m}\right)\frac {o_{m}}{o_{h}}\right ]\), where g(κ h ,κ m )>1 if κ h >κ m ; the financing of the reform further reinforces the reallocation effect if z needs to fall so as to balance the government budget.
This section is concerned with welfare analysis and the optimal design of punishment policies. As shown above, increasing the punishment fees or the audit rates affect the number of educated workers as well as the allocation of search and jobs across the formal and informal sectors. This is essential when considering the impact on welfare. For simplicity, we here let \({y_{l}^{F}}={y_{l}^{I}},\;l={h,m}.\)
Moreover, as the Hosios condition holds by assumption (we have assumed that the elasticity of the expected duration of a vacancy is equal to the bargaining power of workers in a Nash bargaining situation), we can disregard congestion externalities in the labour market. In addition, we do not need to be concerned about inefficiencies in terms of too low or too high educational attainment due to the holdup problem, as the labour markets for workers with high and low education are separated. This enables us to focus on other, less well-known, distortions in this section. Clearly, however, if, for example, the Hosios condition does not hold, the tax and punishment policies could potentially be used to correct for congestion externalities.
The standard social welfare measure, analogous to the one described in, for example, Pissarides (2000) under no discounting, is used and can be written as
$$ W=\hat{e}W_{m}+\int_{\hat{e}}^{1}W_{h}de, $$
$$ W_{m}=\left(1-u_{m}\right)y_{m}-u_{m}{ky}_{m}\Theta_{m}, $$
$$ W_{h}=\left(1-u_{h}\right)y_{h}-u_{h}{ky}_{h}\Theta_{h}-c(e), $$
where \(\Theta _{l}=\left (1-{\sigma _{l}^{I}}\right)^{\gamma }{\theta _{l}^{F}}+\left ({\sigma _{l}^{I}}\right)^{\gamma }{\theta _{l}^{I}},~l={m,h}\). The welfare measure consists of aggregate production minus total vacancy costs, i.e. note that \(u_{l}\Theta _{l}k=\left ({v_{l}^{F}}+{v_{l}^{I}}\right)k,~l={m,h}\), and minus the aggregate costs of education. With the assumption of risk neutral individuals, we ignore distributional issues, and hence, wages will not feature in the welfare function. See the Appendix for the derivation of this welfare measure.
Let us first derive the socially optimal choice of tightness, search, and stock of educated workers by maximizing the welfare function in (21)–(23) with respect to \({\theta _{m}^{F}},{\theta _{m}^{I}},{\theta _{h}^{F}},{\theta _{h}^{I}},{\sigma _{m}^{I}},{\sigma _{h}^{I}}\), and \(\hat {e}\). The socially optimal solution is solved from the following seven conditions:20
$$ \left(\sigma_{l}^{I\ast}\right)^{\left(\gamma-1\right)}-\left(1-\sigma_{l}^{I\ast}\right)^{\gamma-1}=0,~\rightarrow\ \sigma_{l}^{I\ast}=\frac{1}{2},\;l=m,h, $$
$$ -sk\left(\theta_{l}^{\ast I}\right)^{\frac{1}{2}}+\frac{1}{2}\left[1-\frac{k\theta_{l}^{*I}}{\left(\frac{1}{2}\right)^{1-\gamma}}\right]=0,\;l=m,h, $$
$$ \left(y_{h}-y_{m}\right)\frac{k\theta_{l}^{\ast I}}{\left(\frac{1}{2}\right)^{1-\gamma}}-c\left(\hat{e}^{*}\right)=0. $$
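A small numerical sketch of the planner's conditions (24)–(26) follows; it reuses the illustrative cost function c(e) = c0(1/e − 1) from the sketch of Eq. (17), and all parameter values are hypothetical.

```python
# Sketch of the planner's solution: sigma* = 1/2, theta* from (25), e* from (26).
from scipy.optimize import brentq

gamma, k, s = 0.5, 0.3, 0.2
y_m, y_h, c0 = 1.0, 1.5, 0.2
half = 0.5**(1.0 - gamma)

# Eq. (25): -s*k*theta^(1/2) + (1/2)*(1 - k*theta/half) = 0
theta_star = brentq(lambda th: -s*k*th**0.5 + 0.5*(1.0 - k*th/half), 1e-9, half/k)

# Eq. (26): (y_h - y_m)*k*theta*/half = c(e*)  with c(e) = c0*(1/e - 1)
gain_star = (y_h - y_m)*k*theta_star/half
e_star = c0/(gain_star + c0)
print(f"theta*={theta_star:.3f}, socially optimal educated share={1.0-e_star:.3f}")
```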
We can now compare the socially optimal solution with the market outcome. From (11), (14), and (15), it follows that the market solution for search and tightness coincides with the socially optimal allocation if the imposed tax and punishment policy is such that ψ m =ψ h =1.21
This conclusion is intuitive, as any policy that induces a deviation of ψ l , l=m,h, from unity implies a favourable treatment of either the formal or the informal sector, which, in turn, distorts the sectorial allocation of search and tightness between the formal and informal sectors. For example, if search is to a larger extent allocated to one of the two sectors, search is used inefficiently, as less efficient search methods have to be used in that sector. Moreover, as discussed in relation to Proposition 3, a favourable treatment of either the formal or the informal sector induces too many firms to open vacancies in that sector without accounting for the externality they impose on others. In fact, unemployment is minimized when the allocation of search and tightness across the formal and informal sectors is equal, and so are vacancy costs. Thus, welfare is maximized when search and tightness are allocated equally across the formal and the informal sector.
Now let us compare the socially optimal stock of educated workers with the educational outcome induced by the market. As the market outcome in terms of the sectorial allocation of search and tightness coincides with the socially optimal one when the government lets the market face ψ m =ψ h =1, we also evaluate the private education outcome under these conditions. This yields the following market outcome for the stock of educated workers:
$$ \left(y_{h}-y_{m}\right)\frac{k{\theta_{l}^{I}}}{\left(1+z\right)\left(\frac{1}{2}\right)^{1-\gamma}}-c\left(\hat{e}\right)=0. $$
It immediately follows that a tax and punishment policy which implies that ψ m =ψ h =1 will not provide the market with incentives to generate a socially optimal stock of educated workers. Comparing (26) and (27), in fact, reveals that the market outcome induces too few workers to educate themselves if formal and informal sector jobs face uniform treatment in terms of ψ m =ψ h =1. This follows as taxes, captured by (1+z) in (27), hit highly educated workers more severely than manual workers, which reduces the incentives for education. From this, we can conclude that welfare would increase if more workers chose to educate themselves when ψ m =ψ h =1.22
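The gap can be illustrated with a self-contained sketch that compares (26) and (27) under ψ m =ψ h =1, again using the illustrative cost function c(e) = c0(1/e − 1) and hypothetical numbers (tightness is taken as given).

```python
# Sketch: planner's condition (26) vs market condition (27) under psi = 1.
c0, k, gamma, z = 0.2, 0.3, 0.5, 0.4
y_m, y_h, theta = 1.0, 1.5, 1.96        # theta taken as given here
half = 0.5**(1.0 - gamma)

gain_planner = (y_h - y_m) * k * theta / half     # right-hand side of (26)
gain_market = gain_planner / (1.0 + z)            # (27): scaled down by (1+z)
share_planner = 1.0 - c0/(gain_planner + c0)
share_market = 1.0 - c0/(gain_market + c0)
print(f"educated share: planner {share_planner:.3f} vs market {share_market:.3f}")
# The market share is smaller: the payroll tax hits the educated relatively
# harder, so a uniform wedge alone does not deliver the efficient education level.
```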
This discussion brings us to the government's explicit choice of tax and punishment policy. How should the government punish informal work in order to maximize welfare?
Optimal punishment policy
The welfare analysis above indicates that it may be optimal to punish tax-evading activities carried out by manual workers more severely than those carried out by highly educated workers. For example, if concealment costs are higher for highly educated workers, a punishment policy with ψ m =ψ h =1 is only possible if the manual workers to a larger extent than highly educated workers face punishment of informal activities. That is, p α has to be set relatively higher for manual workers if κ m <κ h in order to induce ψ m =ψ h =1.
This raises the question of whether it is possible to target the punishment fees and audit rates towards the sector employing manual vs highly educated workers. While governments potentially could, and in fact do,23 target their audits towards specific sectors, i.e. allow p m to differ from p h , this is less likely to be the case for the fee rates.
To find the socially optimal choice of audit rates for the sector employing manual workers and the sector employing highly educated workers, the welfare function in (21)–(23) is maximized by the choice of p m and p h subject to the market reactions given by (11), (14), (15), (17), and (19) and the government budget restriction in (20). This yields the following first-order conditions:
$$ \frac{dW}{{dp}_{m}}=\hat{e}\frac{{dW}_{m}}{d\psi_{m}}\frac{d\psi_{m}}{{dp}_{m}}+\frac{dW}{d\left(1-e\right)}\frac{d\left(1-e\right)}{{dp}_{m}}=0, $$
$$ \frac{dW}{{dp}_{h}}=\left(1-\hat{e}\right)\frac{{dW}_{h}}{d\psi_{h}}\frac{d\psi_{h}}{{dp}_{h}}+\frac{dW}{d\left(1-e\right)}\frac{d\left(1-e\right)}{{dp}_{h}}=0, $$
where \(\frac {{dW}_{l}}{d\psi _{l}} = \left [\sum _{j={F,I}}\frac {{dW}_{l}}{d{\theta _{l}^{j}}} \frac {d{\theta _{l}^{j}}}{d\psi _{l}}+\frac {{dW}_{l}}{d{\sigma _{l}^{I}}} \frac {d{\sigma _{l}^{I}}}{d\psi _{l}}\right ],\;l={m,h}.\) Evaluating the first-order conditions at the levels of p m and p h ensuring that ψ m =ψ h =1 turns out to be very convenient and gives
$$ \frac{dW}{{dp}_{m}}\mid{}_{\psi_{m}=1}=\frac{dW}{d\left(1-\hat{e}\right)}\frac{d\left(1-\hat{e}\right)}{{dp}_{m}}>0, $$
$$ \frac{dW}{{dp}_{h}}\mid{}_{\psi_{h}=1}=\frac{dW}{d\left(1-\hat{e}\right)}\frac{d\left(1-\hat{e}\right)}{{dp}_{h}}<0 . $$
Provided that we are located on the positively sloped side of the Laffer curves, we can conclude that
Proposition 7
Welfare is maximized when the sector employing manual workers is audited to a larger extent than the sector employing highly educated workers, i.e. p m >p h so as to get \(\psi _{h}^{\ast }<1<\psi _{m}^{\ast }\) if κ h ≥κ m .
Proof.
Evaluate the first-order conditions (28) and (29) at ψ m =ψ h =1. From the socially optimal allocation of search and tightness, ψ l =1 implies that \(\frac {{dW}_{l}}{d{\theta _{l}^{F}}} = \frac {{dW}_{l}}{d{\theta _{l}^{I}}} = \frac {{dW}_{l}}{d{\sigma _{l}^{I}}}=0,~l={m,h}\). Then \(\frac {dW}{{dp}_{m}}\mid _{\psi _{m}=1} = \frac {dW}{d\left (1-e\right)}\frac {d\left (1-e\right)}{{dp}_{m}}>0\) and \(\frac {dW}{{dp}_{h}}\mid _{\psi _{h}=1} = \frac {dW}{d\left (1-e\right)}\frac {d\left (1-e\right)}{{dp}_{h}}<0\) as \(\frac {dW}{d\left (1-e\right)}>0\) from (26) and (27) and \(\frac {d\left (1-\hat {e}\right)}{{dp}_{m}}>0,\;\frac {d\left (1-\hat {e}\right)}{{dp}_{h}}<0\) from (17). Thus, welfare improves by reallocating audits towards the manual sector. If κ h =κ m , then p m =p h at ψ m =ψ h =1, and welfare improves by setting p m >p h . If κ h >κ m , the results are reinforced as p m >p h already when ψ m =ψ h =1, and welfare improves by further increasing p m and reducing p h . Q.E.D. □
The result in Proposition 7 follows straightforwardly from the first-order conditions when evaluated at the p m and p h which induce ψ m =ψ h =1. The first term on the right-hand side of Eqs. (28) and (29) then disappears as the distortions in the allocation of search and tightness across the formal and the informal sector are fully eliminated. In this case, there are no other distortions present except that too few workers have chosen to educate themselves. Recall that this is a consequence of taxation harming high income earners relatively more. This distortion can, however, be corrected for by increasing the audits in the manual sector and reducing them in the sector for highly educated workers, which is captured by the right-hand sides of (30) and (31). As informal sector work for manual workers becomes less attractive when the government increases the number of audits, manual workers are encouraged to acquire higher education. Similarly, fewer audits in the highly educated sector further encourage workers to acquire higher education.
If concealment costs are higher in the sector employing highly educated workers, i.e. κ h >κ m , there are even further incentives for the government to focus their audits on the manual sector. This follows as high concealment costs work as a self-regulating punishment of informal sector activities. Thus, if concealment costs are higher in the sector employing highly educated workers, this sector will be in less need of audits as concealment costs will do part of the job of limiting the size of the informal sector.
Moreover it follows that
Corollary 8
The stock of educated workers is below its socially optimal value when the audit rates are chosen so as to maximize welfare.
See the Appendix. □
When deciding on the optimal audit rates, the government faces a trade-off between two distortions, and it is never optimal to fully eliminate one of them. When the stock of educated workers is at its socially optimal level, there is an inefficient allocation of search and jobs across the formal and informal sectors. Welfare then improves as the stock of educated workers is reduced below its socially optimal level, since this induces only a second-order loss in comparison to the welfare gain from a more efficient sectorial allocation.
Optimal punishment policy when concealment costs are high
In deriving the optimal audit rates in the previous section, it was implicitly assumed that audit rates could be chosen freely without restrictions. For example, according to Proposition 7, the audit rates should be chosen such that \(p_{m}^{\ast }>p_{h}^{\ast }\) so as to get \(\psi _{h}^{\ast }<1<\psi _{m}^{\ast }\). However, this is only possible if concealment costs are not too high. If, for example, κ h >z, then ψ h >1 even when p h is very small. Replacing the first-order condition in (29) with the appropriate Kuhn-Tucker conditions, \(\frac {dW}{{dp}_{h}}+\mu =0\), p h ≥0, and μ p h =0, where μ is the Lagrange multiplier for the constraint p h ≥0, then suggests that the audit rate in the sector employing highly educated workers should be set as low as possible when κ h >z. Concealment costs are simply high enough to self-regulate the size of the informal sector facing highly educated workers, and there is no need for additional audits of this sector.24
Judging from real-world observations in high-income economies, this may not be an unrealistic scenario. Evidence indicates that manual workers, or workers with a lower level of formal education, face informal employment opportunities to a substantially larger degree than highly educated workers. Pedersen and Smith (1998), using comprehensive survey data, find that almost half of the informal sector activities in Denmark are carried out within the construction sector. They also find that around 70 % of the total hours worked in the informal sector are carried out within the service or construction sectors.
Potential explanations for why manual, in contrast to highly educated, workers engage in informal activities are that manual workers to a larger extent work in industries which handle cash payments or are less subject to third-party reporting. Firms and workers in industries dealing with cash payments, or which are less subject to third-party reporting, will find it easier, and thus less costly, to conceal their tax evasion. Taking this at face value implies that the concealment costs for highly educated workers, κ h , could be very large. If κ h is assumed to approach infinity, informal employment opportunities facing highly educated workers become vanishingly small, so that essentially no firms post informal sector vacancies for highly educated workers and no highly educated workers allocate search effort to the informal sector. All the results derived in Propositions 1 to 6 cover this special case, including the now clear-cut result that higher punishment fees, or a general increase in the audit rate, encourage more workers to educate themselves. This follows as fewer workers will remain manual workers, since the informal employment opportunities foregone when attaining education have become less attractive. Moreover, the socially optimal audit policy is again one in which \(p_{m}^{\ast }\) is set large enough so as to get \(\psi _{m}^{\ast }>1,\) although not high enough to induce an efficient stock of educated workers.
Multiple equilibria
Again, consider the case when the government can target the audits towards the sectors for manual and highly educated workers. That ψ l =(1+p l α+κ l )/(1+z) can be obtained both through high tax and enforcement rates and through low tax and enforcement rates raises the issue of multiple equilibria.25 The relationship between the punishment rates and the tax rate in each sector can then be written as \(p_{l}\alpha =\bar {\psi }_{l}(1+z)-(1+\kappa _{l})\) where the wedge in each sector is set to some fixed value \(\bar {\psi }_{l}\). A 1-unit increase in z followed by an increase in p l α by \(\bar {\psi _{l}}\) units maintains the relative punishment rate given by \(\bar {\psi }_{l}\).
From the government budget constraint in (20), it is clear that any revenue requirement could then be reaped through such simultaneous increases in z and p l α, were it not for adjustments in the stock of educated workers. This follows as the tax base in terms of producer wages, \({\omega _{l}^{j}}\), and employment rates, \({n_{l}^{j}}\), only depends on ψ l , whereas education falls with higher taxes for given wedges. The fact that the tax base falls with higher taxation through reduced incentives for education opens up the possibility of two equilibria where the government can collect the same revenue although at different levels of tax and enforcement rates.
In the high tax and enforcement economy, very few workers may choose to educate themselves, which reduces the tax base and thus induces modest tax revenues even though tax rates are high (the negatively sloped side of the Laffer curve). In the low tax and enforcement economy, in contrast, many workers find it worthwhile to educate themselves, which induces a large tax base and enables the government to obtain equally high revenues despite low tax rates (the positively sloped side of the Laffer curve). This scenario is potentially possible in our model.
The scenario is shown graphically in Fig. 6 in terms of a Laffer curve with tax revenues, R, on the vertical axis, whereas the horizontal axis captures the tax rate, z, for given wedges, \(\bar {\psi }_{l}\). As the only tax base adjustment taking place is with regard to education, the direct effect is fairly strong, indicating that revenues always tend to increase with z (the filled curve). However, the direct effect becomes weaker as taxation becomes heavier because there are fewer highly educated workers to tax. On the other hand, the response in terms of the number of workers acquiring higher education is stronger when z is low. This is captured by the convex cost function for education. Including auditing costs in the government budget constraint clearly tends to increase the likelihood of being in a situation where an increase in the tax rate, together with increases in p l α so as to keep \(\bar {\psi }_{l}\) constant, no longer increases government revenues, thus also increasing the likelihood of multiple equilibria (the dotted line in Fig. 6).
Fig. 6 The Laffer curve: revenue R as a function of the payroll tax rate, z
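The mechanism behind this discussion can be illustrated with a deliberately stylised numerical sketch. It holds the wedges fixed (so producer wages and employment rates do not move), lets only the educated share respond to z through the illustrative cost function c(e) = c0(1/e − 1), and ignores punishment-fee revenue and auditing costs; every number is hypothetical.

```python
# Stylised sketch: revenue as a function of z with education the only adjusting base.
c0, gain0 = 0.3, 0.6           # education cost scale and pre-tax income gain
A_m, A_h = 0.5, 2.0            # n_F * omega_F for manual / educated workers

def revenue(z):
    e_hat = c0*(1.0+z) / (gain0 + c0*(1.0+z))   # manual share rises with z, i.e. education falls
    base = e_hat*A_m + (1.0-e_hat)*A_h
    return z/(1.0+z) * base

for z in (0.5, 1.0, 2.0, 3.0, 5.0, 10.0, 20.0):
    print(f"z={z:>4}: R={revenue(z):.3f}")
# With these hypothetical numbers revenue peaks around z = 3 and then declines,
# so the same revenue can be collected at a low and at a high tax rate.
```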
In the case of two equilibria, the low tax and enforcement equilibrium is preferable from a welfare point of view. As was seen in Section 5.1, it was optimal to correct for the distortion that too few workers choose to educate themselves. However, it was not optimal to fully correct for this distortion, leaving the educated stock below its socially optimal value, since labour market distortions would then become inefficiently high. Clearly, the high tax and enforcement economy worsens the problem by inducing an educational stock which is, for the same wedge, even further away from what is socially optimal. This leads the government to use the wedges to push the economy further away from an efficient labour market outcome in order to correct for this additional distortion in education. Both the labour market distortions and the educational distortion are therefore larger in this economy at the socially optimal wedges than at the socially optimal wedges chosen in the low tax and enforcement economy.
There has recently been an intensified focus on issues related to tax evasion and informal activities from both a policy and research perspective.26 The study by Kleven et al. (2011), which conducted a large field experiment in Denmark, made it possible to address, and convincingly answer, a number of questions related to tax compliance behaviour that had not been answered before.
This paper uses this knowledge to investigate the general equilibrium implications of informal sector activities on economic performance. A number of questions can be asked. How will informal employment opportunities affect labour market performance and educational attainments? Can informal jobs really be turned into formal jobs by more zealous punishment policies? And if so, to what extent will formal sector jobs replace jobs in the informal sector?
In order to address these questions, we develop a four-sector equilibrium search and matching model with informal sector employment opportunities and educational choice. We find that informal activities reduce the incentives to acquire higher education if informal employment opportunities are mainly available to low-educated workers. More zealous enforcement policies will in this case improve educational incentives, as they reduce the attractiveness of remaining a low-educated worker. Moreover, we find that stricter enforcement policies will create new jobs in the formal sector, although most likely to a lesser extent than the number of jobs destroyed in the informal sector. This leads to an increase in the actual unemployment rates although the official unemployment rates fall. Finally, characterizing the optimal enforcement policies, we find that relatively more audits should be targeted towards the sector employing low-educated workers; otherwise, too low a stock of educated workers could materialize.
Including unemployment insurance and letting a successful audit terminate the match are possible extensions on our agenda for future research.
1 A number of studies have studied various implications of education in search and matching models. See, for example, Acemoglu (1996) and Charlot et al. (2005).
2 Productivity differences between formal and informal work are usually considered to be very important in the literature on informality in low- and middle-income countries. See La Porta and Shleifer (2014).
3 Note that this literature considers, as we do, workers and firms that either operate fully in the informal sector or not at all, rather than partially doing so. The traditional literature on tax evasion, in contrast, focused mainly on under-reporting of income. See Allingham and Sandmo (1972) for a seminal paper on tax evasion where under-reporting of income is modelled as a decision made under uncertainty. Thus, tax evasion can be seen both as an intensive margin decision and as an extensive margin decision, where our focus is on the latter.
4 See also Bosch and Esteban-Pretel (2012) for a model based on a similar set-up calibrated by use of flow data from Brazil.
5 More specifically, they use a Multiple Indicators Multiple Causes (MIMIC) model to analyse and estimate the size of the informal sector of 162 countries around the world. They define the sector as comprising all market-based legal production of goods and services that is deliberately concealed from public authorities, either to avoid payment of taxes or social security contributions, to avoid meeting certain legal labour market standards, or to avoid complying with certain administrative procedures. Thus, the definition does not include crimes like burglary, robbery, and drug dealing.
6 The availability and accessibility of information to the authorities on jurisdictional ownership, accounting records, and banking are divided into five categories according to whether the standards are "in place", "in place but there is need for improvements", or "not in place". The index is constructed as the proportion of the five categories that are in place. Thus, the index takes on values between 0 and 1, where index value 1 is given to countries that have all the standards in place.
7 Relaxed regulation against tax evasion does not automatically make it attractive to work in the informal sector, nor does strict regulation automatically make it unattractive. If taxes are very low, the strictness of the regulation against tax evasion becomes less relevant for tax evasion.
8 Problems with holdups can appear if workers make their educational investment prior to knowing what type of employer they will meet, and likewise if firms make their investments in physical capital prior to knowing what type of worker they will meet, as they pay the full cost of the investment but only reap part of its benefits. As firms may meet a low-educated worker, they have lower incentives to invest in capital, and as workers may meet a firm with low capital, they have lower incentives to invest in human capital. This tends to induce underinvestment in both physical and human capital from a social point of view. This problem is, however, ruled out in our paper as educated workers direct their search towards jobs exclusively for educated workers (see Acemoglu 1996; Acemoglu and Shimer 1999).
9 Everyone receives the transfer R. The government cannot exclude the informal sector workers as the government does not know who the informal sector workers are (if it did, it could punish all of them). We disregard unemployment insurance as it would complicate the model significantly, and in order to keep the formal and informal sectors as symmetric as possible. In the presence of unemployment insurance, informal sector workers would also receive unemployment insurance. As formal sector workers do not receive unemployment insurance, this would tend to raise formal sector wages but would have no direct impact on informal sector wages. Therefore, unemployment insurance would tend to reduce the supply of formal sector jobs relative to informal sector jobs.
10 It is natural to think that α≥z as the punishment fee should at least cover the evaded taxes.
11 When \({y_{l}^{F}}/{y_{l}^{I}}=1\), relative tightness is determined by \({\theta _{l}^{F}}/{\theta _{l}^{I}} = \left (\frac {1-k{\theta _{l}^{F}}\left (1-\sigma _{l}\right)^{\gamma -1}}{1-k{\theta _{l}^{F}}\left (1-\sigma _{l}\right)^{\gamma -1}\psi _{l}}\right)^{2} \gtreqless 1\text { if }\psi _{l}\gtreqless 1.\)
12 An appendix with the equations and derivations is available upon request.
13 See Charlot et al. (2005) for a study that investigates the educational decision in a search and matching framework when education is time consuming.
14 The costs of education can capture a number of things, for example, direct costs of education such as tuition fees. Workers with high ability may face lower costs of this type due to, for example, scholarships. Also, such direct costs of education can be managed by student loans at a time when the worker has no funds and has just entered the labour market. Workers pay these loans back, with interest, over many periods after the education has ended. Thus, although the educational attainment is a once-and-for-all investment, the cost of the education can be paid in future periods. The results on how punishment policies affect labour market and educational outcomes will not change if we let \(c(e)=(r+a)\tilde {c}(e)\) where r+a is the overall interest rate and \(\tilde {c}(e)\) is the cost of attaining the education. The costs of education can also capture indirect costs, such as the effort cost of being a highly educated worker.
15 See the Appendix for the proof that \(o_{h}<o_{m}\) when \(\kappa_{h}>\kappa_{m}\).
16 For highly educated workers, \({\lambda _{h}^{F}}u_{h}(1-\hat {e})=\left (s+a\right){n_{h}^{F}}(1-\hat {e})\), \({\lambda _{h}^{I}}u_{h}(1-\hat {e})=\left (s+a\right){n_{h}^{I}} (1-\hat {e})\) and \({n_{h}^{F}}+{n_{h}^{I}}=1-u_{h}\), and for manual workers, \({\lambda _{m}^{F}}u_{m}\hat {e}=\left (s+a\right){n_{m}^{F}}\hat {e}\), \({\lambda _{m}^{I}}u_{m}\hat {e}=\left (s+a\right){n_{m}^{I}}\hat {e}\), and \({n_{m}^{F}}+{n_{m}^{I}}=1-u_{m}.\)
17 This could, for example, be the case if we have auditing costs in the government budget constraint. It is often assumed that taxes and punishment fees are cheap government instruments whereas audits are costly to carry out.
18 Let us provide an illustration of the rather peculiar case when z increases to such an extent that \(\psi_{l}\) actually falls with the reform. In this case, an increase in the relative punishment of the informal sector instead takes place through a reduction in the informal punishment, which increases the government revenues. This in turn enables a reduction in z which, in this special case, is large enough to increase \(\psi_{l}\). And through the large reduction in z, the punishment of the informal sector has increased relative to the taxation of the formal sector as taxation has fallen significantly. Although the results in the analysis of the impact of fully financed punishment of informal activities on producer wages, search, tightness, unemployment, and employment rates hold irrespective of how the government budget restriction is affected, to stress the intuition, we present the results in terms of the standard scenario in the paper.
19 That is, the corresponding adjustment in z in order to regain a balanced budget implies that z either falls or does not increase to such an extent that it induces \(\psi_{l}\) to fall. See the previous footnote for this special case.
20 See the Appendix for the second-order conditions.
21 When \(\psi_{m}=\psi_{h}=1\) is imposed on the private solution, it follows from (14) and (15) that tightness in the formal and the informal sector is equal and that search must be split equally between the formal and the informal sector, i.e. \(\sigma ^{I}=\frac {1}{2}\) from (11). Imposing \(\sigma ^{I}=\frac {1}{2}\) and \({\theta _{l}^{F}}={\theta _{l}^{I}}\), \(l=m,h\), under the assumption of no discounting, in (14) and (15), yields the same expression as (25).
22 There may, of course, be other more direct instruments if the pure aim is to correct for inefficiencies in the educational level in an economy, but nevertheless, it should be acknowledged that the wedge actually has an impact on the number of educated workers and thereby potentially has an impact on welfare in the economy.
23 See, for example, Kleven et al. (2011).
24 This clearly holds also for the manual sector if concealment costs are higher than the tax rate.
25 The general principle in this section holds also when the government cannot target the audits. It is then not possible for the government to increase the tax rate holding both wedges constant through adjustments in \(p_{l}\alpha\), as \(\kappa_{l}\) differs across the sectors. The government would then increase \(p\alpha\) so that the wedges marginally change. There would then be marginal adjustments in the producer wages and employment rates.
26 The OECD recently initiated the "Global Forum on Transparency and Exchange of Information for Tax Purposes" (OECD 2012), whereas the European Commission conducted the first EU-wide comparable questionnaire in order to increase the knowledge about tax evasion in Europe (see EC (2007)).
The model is given by
$$ \frac{\left(1-{\sigma_{l}^{I}}\right)^{1-\gamma}}{\left({\sigma_{l}^{I}}\right)^{1-\gamma}}= \left(\frac{{\theta_{l}^{F}}}{{\theta_{l}^{I}}}\right) \frac{{y_{l}^{F}}}{{y_{l}^{I}}}\psi_{l},\;l=m,h. $$
where \(\psi _{l}=\frac {1+p\alpha +\kappa _{l}}{1+z}\)
$$ {\omega_{l}^{F}}={w_{l}^{F}}\left(1+z\right)=\frac{1}{2}{y_{l}^{F}} \left(1+k\frac{{\theta_{l}^{F}}}{\left(1-{\sigma_{l}^{I}}\right)^{1-\gamma}}\right),\;l=m,h, $$
$$ {\omega_{l}^{I}}={w_{l}^{I}}\left(1+p\alpha+\kappa_{l}\right)= \frac{1}{2}{y_{l}^{I}}\left(1+\frac{{\theta_{l}^{I}}}{\left({\sigma_{l}^{I}}\right)^{1-\gamma}}k\right),\;l=m,h, $$
$$ k\left(r+s+a\right)\left({\theta_{l}^{F}}\right)^{\frac{1}{2}} = \frac{1}{2}\left(1-\frac{k{\theta_{l}^{F}}}{\left(1-{\sigma_{l}^{I}}\right)^{1-\gamma}}\right),\;l=m,h, $$
$$ k\left(r+s+a\right)\left({\theta_{l}^{I}}\right)^{\frac{1}{2}}= \frac{1}{2}\left(1-\frac{k{\theta_{l}^{I}}}{\left({\sigma_{l}^{I}}\right)^{1-\gamma}}\right),\;l=m,h. $$
$$ u_{l}=\frac{s+a}{s+a+{\lambda_{l}^{F}}+{\lambda_{l}^{I}}}, \;{u_{l}^{o}}=\frac{s+a+{\lambda_{l}^{I}}}{s+a+{\lambda_{l}^{F}}+{\lambda_{l}^{I}}},\;l=h,m. $$
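For completeness, and as a consistency check of our own (combining the flow conditions in footnote 16 with \({n_{l}^{F}}+{n_{l}^{I}}=1-u_{l}\)), the unemployment rate above follows from imposing flow balance in steady state:
$$ {n_{l}^{F}}=\frac{{\lambda_{l}^{F}}u_{l}}{s+a},\quad {n_{l}^{I}}=\frac{{\lambda_{l}^{I}}u_{l}}{s+a},\quad {n_{l}^{F}}+{n_{l}^{I}}=1-u_{l}\;\Longrightarrow\; u_{l}=\frac{s+a}{s+a+{\lambda_{l}^{F}}+{\lambda_{l}^{I}}},\;l=h,m. $$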
The budget constraint can be expressed in terms of producer wages by using \({\omega _{l}^{F}}={w_{l}^{F}}(1+z)\) and \({\omega _{l}^{I}}={w_{l}^{I}}(1+p\alpha +\kappa _{l}),\;l={m,h}.\)
Tightness relative to search intensity
We show that \(\frac {{\theta _{t}^{F}}}{\left (1-{\sigma _{t}^{I}}\right)^{1-\gamma }}<\frac {{\theta _{x}^{F}}}{\left (1-{\sigma _{x}^{I}}\right)^{1-\gamma }}\) when \(\kappa_{t}>\kappa_{x}\) in the following way. Differentiating equations (35), (36), and (32) with respect to \(\kappa_{l}\) gives, around the equilibrium,
$$\frac{d{\theta_{l}^{F}}}{d\kappa_{l}}=\frac{\frac{\left(1-\gamma\right)}{1-{\sigma_{l}^{I}}}\frac{k{\theta_{l}^{F}}}{\left(1-{\sigma_{l}^{I}}\right)^{1-\gamma}}\frac{1}{2}\left(1+\frac{k{\theta_{l}^{F}}}{\left({\sigma_{l}^{I}}\right)^{1-\gamma}}\right)\frac{1}{{\theta_{l}^{I}}}}{D_{l}}\left(1+p\alpha+\kappa_{l}\right)>0, $$
$$\frac{d{\theta_{l}^{I}}}{d\kappa_{l}}=-\frac{\frac{1}{2}\left(1+\frac{k{\theta_{l}^{F}}}{\left(1-{\sigma_{l}^{I}}\right)^{1-\gamma}}\right)\frac{1}{{\theta_{l}^{F}}}\left(1-\gamma\right)k{\theta_{l}^{I}}\left({\sigma_{l}^{I}}\right)^{\gamma-2}}{D_{l}}\left(1+p\alpha+\kappa_{l}\right)<0, $$
$$\frac{d{\sigma_{l}^{I}}}{d\kappa_{l}}=-\frac{\frac{1}{2}\left(1+\frac{k{\theta_{l}^{F}}}{\left(1-{\sigma_{l}^{I}}\right)^{1-\gamma}}\right)\frac{1}{{\theta_{l}^{F}}}\frac{1}{2}\left(1+\frac{k\theta^{I}}{\left({\sigma_{l}^{I}}\right)^{1-\gamma}}\right)\frac{1}{{\theta_{l}^{I}}}}{D_{l}}\left(1+p\alpha+\kappa_{l}\right)<0, $$
$$\begin{array}{@{}rcl@{}} D_{l}&=&\frac{\left(1-\gamma\right)\frac{1}{{\sigma_{l}^{I}}}}{{\theta_{l}^{I}}{\theta_{l}^{F}}4\left(1-{\sigma_{l}^{I}}\right)}\left(\frac{\frac{1}{\left({y_{l}^{F}}/{y_{l}^{I}}\right)\psi_{l}}k{\theta_{l}^{I}}}{\left({\sigma_{l}^{I}}\right)^{1-\gamma}}+1\right)\left(1-\frac{k\theta^{I}}{\left({\sigma_{l}^{I}}\right)^{1-\gamma}}\right)\left(1-{\sigma_{l}^{I}}\right)\\ &&+\left(\frac{k{\theta_{l}^{I}}}{\left({\sigma_{l}^{I}}\right)^{1-\gamma}}+1\right)\left(1-\frac{\frac{1}{\left({y_{l}^{F}}/{y_{l}^{I}}\right)\psi_{l}}k\theta^{I}}{\left({\sigma_{l}^{I}}\right)^{1-\gamma}}\right) \end{array} $$
Now, differentiating \(\frac {{\theta _{l}^{F}}}{\left (1-{\sigma _{l}^{I}}\right)^{1-\gamma }}\) with respect to \(\kappa_{l}\) gives
$${} {{\begin{aligned} \frac{d\frac{{\theta_{l}^{F}}}{\left(1-\sigma_{l}\right)^{1-\gamma}}}{d\kappa_{l}}=\frac{d{\theta_{l}^{F}}\left(1-\sigma_{l}\right)^{\gamma-1}}{d\kappa_{l}}={\theta_{l}^{F}}\left(1-\sigma_{l}\right)^{\gamma-1}\left(\left(\theta^{F}\right)^{-1}\frac{d{\theta_{l}^{F}}}{d\kappa_{l}}+\left(1-\gamma\right)\left(1-\sigma_{l}\right)^{-1}\frac{d\sigma^{I}}{d\kappa_{l}}\right)= \end{aligned}}} $$
$${} {{\begin{aligned} =-\frac{{\theta_{l}^{F}}\left(1-\sigma_{l}\right)^{\gamma-1}}{4D_{l}}\frac{\left(1-\gamma\right)}{\left(1-{\sigma_{l}^{I}}\right)}\left(1+\frac{k{\theta_{l}^{I}}}{\left({\sigma_{l}^{I}}\right)^{1-\gamma}}\right)\frac{1}{{\theta_{l}^{F}}{\theta_{l}^{I}}}\left(1-\frac{k{\theta_{l}^{F}}}{\left(1-{\sigma_{l}^{I}}\right)^{1-\gamma}}\right)\left(1+p\alpha+\kappa_{l}\right)<0. \end{aligned}}} $$
Hence, if \(\kappa_{t}>\kappa_{x}\), then \(\frac {{\theta _{t}^{F}}}{\left (1-\sigma _{t}\right)^{1-\gamma }}<\frac {{\theta _{x}^{F}}}{\left (1-\sigma _{x}\right)^{1-\gamma }}.\)
Existence of \(\hat {e}\in \left (0,1\right)\)
Consider the educational Eq. (37). For a non-trivial solution, there needs to be a net gain in expected income from higher education; thus, \({y_{h}^{F}}/{y_{m}^{F}}>o_{m}/o_{h}\). Moreover, to guarantee a non-trivial interior solution where at least some, but not all, individuals choose to acquire education, we assume that the individual with the highest ability faces a very low cost of education, more specifically c(1)=0, and that the individual with the lowest ability faces a very high cost of education, i.e. \({\lim }_{e\rightarrow 0}c(e)=\infty \).
In the case \(\kappa_{h}\leq\kappa_{m}\), we have \(o_{m}/o_{h}<1\), and hence, \({y_{h}^{F}}/{y_{m}^{F}}>o_{m}/o_{h}\) holds as \({y_{h}^{F}}>{y_{m}^{F}}\). If educated workers face higher concealment costs than manual workers, \(\kappa_{h}>\kappa_{m}\), then we need to assume that the productivity gain of education is large enough to assure that \({y_{h}^{F}}/{y_{m}^{F}}>o_{m}/o_{h}\) holds, which is possible as the right-hand side is independent of \(y_{l}\).
Relative unemployment rates (Proposition 1)
Unemployment is increasing in concealment costs if \(\psi_{l}>1\). Hence, if \(\kappa_{t}>\kappa_{x}\), then \(u_{t}>u_{x}\) if \(\psi_{l}>1\). We show this in the following way: \(u_{t}>u_{x}\) if and only if \(s/\left (s+{\lambda _{x}^{F}}+{\lambda _{x}^{I}}\right)<s/\left (s+{\lambda _{t}^{F}}+{\lambda _{t}^{I}}\right)\), which holds if and only if \({\lambda _{t}^{F}}+{\lambda _{t}^{I}}<{\lambda _{x}^{F}}+{\lambda _{x}^{I}}\). Hence, the condition holds if
$${} {{\begin{aligned} \frac{d\left({\lambda_{l}^{F}}+{\lambda_{l}^{I}}\right)}{d\kappa_{l}}= \frac{d\left[\left(1-{\sigma_{l}^{I}}\right)^{\gamma}\left({\theta_{l}^{F}}\right)^{\frac{1}{2}}+ \left({\sigma_{l}^{I}}\right)^{\gamma}\left({\theta_{l}^{I}}\right)^{\frac{1}{2}}\right]}{d\kappa_{l}} \end{aligned}}} $$
$${} {{\begin{aligned} =\gamma\left(-\left(1-{\sigma_{l}^{I}}\right)^{\gamma-1} \left({\theta_{l}^{F}}\right)^{\frac{1}{2}}+\left({\sigma_{l}^{I}}\right)^{\gamma-1} \left({\theta_{l}^{I}}\right)^{\frac{1}{2}}\right)\frac{d{\sigma_{l}^{I}}}{d\kappa_{l}}+ \frac{1}{2}\left(\frac{\left(1-\sigma^{I}\right)^{\gamma}}{\left({\theta_{l}^{F}}\right)^{\frac{1}{2}}} \frac{d{\theta_{l}^{F}}}{d\kappa_{l}}+\frac{\left(\sigma^{I}\right)^{\gamma}}{\left({\theta_{l}^{I}}\right)^{\frac{1}{2}}} \frac{d{\theta_{l}^{I}}}{d\kappa_{l}}\right)<0. \end{aligned}}} $$
We substitute for the derivatives and the first-order condition for search intensity to obtain the condition equal to
$${} {{\begin{aligned} =\gamma\left(\frac{1}{\psi_{l}\frac{{y_{l}^{F}}}{{y_{l}^{I}}} \left({\theta_{l}^{F}}\right)^{\frac{1}{2}}}-\frac{1}{\left({\theta_{l}^{I}}\right)^{\frac{1}{2}}}\right) \left(1+\frac{k{\theta_{l}^{I}}}{\frac{{y_{l}^{F}}}{{y_{l}^{I}}}\psi_{l}\left(\sigma_{li}^{I}\right)^{1-\gamma}}\right) \left(1+\frac{k{\theta_{l}^{I}}}{\left(\sigma_{li}^{I}\right)^{1-\gamma}}\right) \end{aligned}}} $$
$${} {{\begin{aligned} +\left(1-\gamma\right)\frac{k{\theta_{l}^{I}}}{\left(\sigma_{li}^{I}\right)^{1-\gamma}} \left(\frac{1}{\left({\theta_{l}^{F}}\right)^{\frac{1}{2}}}\frac{1}{\frac{{y_{l}^{F}}}{{y_{l}^{I}}}\psi_{l}} \left(\frac{1}{\frac{{y_{l}^{F}}}{{y_{l}^{I}}}\psi_{l}}+\frac{k{\theta_{l}^{I}}}{\frac{{y_{l}^{F}}}{{y_{l}^{I}}}\psi_{l} \left(\sigma_{li}^{I}\right)^{1-\gamma}}\right)-\frac{1}{\left({\theta_{l}^{I}}\right)^{\frac{1}{2}}}\left(1+ \frac{k{\theta_{l}^{I}}}{\frac{{y_{l}^{F}}}{{y_{l}^{I}}}\psi_{l}\left(\sigma_{li}^{I}\right)^{1-\gamma}}\right)\right) \end{aligned}}} $$
which is negative when \({y_{l}^{F}}/{y_{l}^{I}}\geq1\) and \(\psi_{l}>1\), as then \({\theta _{l}^{F}}>{\theta _{l}^{I}}\) giving \(\frac {1}{\frac {{y_{l}^{F}}}{{y_{l}^{I}}}\psi _{l}}\frac {1}{\left ({\theta _{l}^{F}}\right)^{\frac {1}{2}}}< \frac {1}{\left ({\theta _{l}^{I}}\right)^{\frac {1}{2}}}\) and \(\left (\frac {1}{\frac {{y_{l}^{F}}}{{y_{l}^{I}}}\psi _{l}}+ \frac {1}{\frac {{y_{l}^{F}}}{{y_{l}^{I}}}\psi _{l}}\frac {k\theta ^{I}}{\left ({\sigma _{l}^{I}}\right)^{1-\gamma }}\right)< \left (1+\frac {1}{\frac {{y_{l}^{F}}}{{y_{l}^{I}}}\psi _{l}}\frac {k\theta ^{I}}{\left ({\sigma _{l}^{I}}\right)^{1-\gamma }}\right)\). Hence, unemployment increases with \(\psi_{l}\), and hence, \(u_{t}>u_{x}\) when \({y_{l}^{F}}/{y_{l}^{I}}\geq 1\) and \(\kappa_{t}>\kappa_{x}\).
The official unemployment rate facing t workers is higher than the official unemployment rate facing x workers; \({u_{t}^{o}}>{u_{x}^{o}}\) if and only if \(\left (s+{\lambda _{t}^{I}}\right)/\left (s+{\lambda _{t}^{F}}+{\lambda _{t}^{I}}\right)> \left (s+{\lambda _{x}^{I}}\right)/\left (s+{\lambda _{x}^{F}}+{\lambda _{x}^{I}}\right)\). This holds if and only if \({\lambda _{x}^{F}}\left (s+{\lambda _{t}^{I}}\right)>{\lambda _{t}^{F}}\left (s+{\lambda _{x}^{I}}\right),\) which is true when \({\lambda _{x}^{F}}>{\lambda _{t}^{F}}\) and \({\lambda _{x}^{I}}>{\lambda _{t}^{I}},\) that is, when \(\kappa_{t}>\kappa_{x}\).
Impact of higher punishment on sector allocation (Proposition 2)
Raising the audit rate \(p_{l}\) or the punishment fee \(\alpha\) increases the wedge, \(\psi_{l}=(1+p\alpha+\kappa_{l})/(1+z)\). Differentiating Eqs. (35), (36), and (32) with respect to \(\psi_{l}\) gives, around the equilibrium,
$$\frac{d\theta^{F}}{d\psi_{l}}=\frac{\frac{\left(1-\gamma\right)}{\left(1-{\sigma_{l}^{I}}\right)} \frac{1}{2}\frac{k{\theta_{l}^{F}}}{\left(1-{\sigma_{l}^{I}}\right)^{1-\gamma}} \left(1+\frac{k{\theta_{l}^{I}}}{\left({\sigma_{l}^{I}}\right)^{1-\gamma}}\right) \frac{1}{{\theta_{l}^{I}}}}{D_{l}}\frac{1}{\psi_{l}}>0 $$
$$\frac{d\theta^{I}}{d\psi_{l}}=\frac{-\frac{\left(1-\gamma\right)}{\left({\sigma_{l}^{I}}\right)} \frac{1}{2}\left(1+\frac{k{\theta_{l}^{F}}}{\left(1-{\sigma_{l}^{I}}\right)^{1-\gamma}}\right) \frac{1}{{\theta_{l}^{F}}}\frac{k{\theta_{l}^{I}}}{\left({\sigma_{l}^{I}}\right)^{1-\gamma}}}{D_{l}}\frac{1}{\psi_{l}}<0 $$
$$\frac{d\sigma^{I}}{d\psi_{l}}=\frac{-\frac{1}{2}\frac{1}{{\theta_{l}^{F}}{\theta_{l}^{I}}} \left(1+\frac{k{\theta_{l}^{F}}}{\left(1-{\sigma_{l}^{I}}\right)^{1-\gamma}}\right)\frac{1}{2} \left(1+\frac{k{\theta_{l}^{I}}}{\left({\sigma_{l}^{I}}\right)^{1-\gamma}}\right)}{D_{l}}\frac{1}{\psi_{l}}<0, $$
$$\begin{array}{@{}rcl@{}} D_{l}&=&\frac{\left(1-\gamma\right)}{4{\theta_{l}^{F}}{\theta_{l}^{I}}}\frac{1}{\left(1-{\sigma_{l}^{I}}\right){\sigma_{l}^{I}}}\left\{ \left(1+\frac{k{\theta_{l}^{I}}}{\frac{{y_{l}^{F}}}{{y_{l}^{I}}}\psi_{l}\left({\sigma_{l}^{I}}\right)^{1-\gamma}}\right)\left(1-\frac{k{\theta_{l}^{I}}}{\left({\sigma_{l}^{I}}\right)^{1-\gamma}}\right)\left(1-\sigma^{I}\right)\right.\\ &&+\left.\sigma_{li}^{I}\left(1-\frac{k{\theta_{l}^{I}}}{\frac{{y_{l}^{F}}}{{y_{l}^{I}}}\psi_{l}\left({\sigma_{l}^{I}}\right)^{1-\gamma}}\right)\left(1+\frac{k{\theta_{l}^{I}}}{\left(\sigma_{li}^{I}\right)^{1-\gamma}}\right)\right\}, \end{array} $$
which is positive. Hence, as \(\lambda _{li}^{I}=\left (\sigma _{li}^{I}\right)^{\gamma }\left ({\theta _{l}^{I}}\right)^{\frac {1}{2}}\) and \(\lambda _{li}^{F}=\left (1-\sigma _{li}^{I}\right)^{\gamma }\left ({\theta _{l}^{F}}\right)^{\frac {1}{2}},\) by inspection of Eq. (38), it follows that \(d{n_{l}^{F}}/d\psi _{l}>0,\;d{n_{l}^{I}}/d\psi _{l}<0,\;l={m,h}\). The impact on wages is then
$$ \frac{d{\omega_{l}^{F}}}{d\psi_{l}}=\frac{1}{2}{y_{l}^{F}}k\frac{d\frac{{\theta_{l}^{F}}}{\left(1-\sigma_{li}^{I}\right)^{1-\gamma}}}{d\psi_{l}},\;l=m,h, $$
$$ \frac{d{\omega_{l}^{I}}}{d\psi_{l}}=\frac{1}{2}{y_{l}^{I}}k\frac{d\frac{{\theta_{l}^{I}}}{\left(\sigma_{li}^{I}\right)^{1-\gamma}}}{d\psi_{l}},\;l=m,h, $$
$${} \frac{d\frac{{\theta_{l}^{F}}}{\left(1-\sigma_{l}\right)^{1-\gamma}}}{d\psi_{l}}=\frac{d{\theta_{l}^{F}}\left(1-\sigma_{l}\right)^{\gamma-1}}{d\psi_{l}}={\theta_{l}^{F}}\left(1-\sigma_{l}\right)^{\gamma-1}\left(\frac{1}{{\theta_{l}^{F}}}\frac{d{\theta_{l}^{F}}}{d\psi_{l}}+\left(1-\gamma\right)\left(1-\sigma_{l}\right)^{-1}\frac{d\sigma^{I}}{d\psi_{l}}\right)= $$
$${\kern24pt} =-\frac{\left(1-\sigma_{l}\right)^{\gamma-1}}{4D_{l}\psi_{l}}\frac{1-\gamma}{1-{\sigma_{l}^{I}}}\left(1+\frac{k{\theta_{l}^{I}}}{\left({\sigma_{l}^{I}}\right)^{1-\gamma}}\right)\frac{1}{{\theta_{l}^{I}}}\left(1-\frac{k{\theta_{l}^{F}}}{\left(1-{\sigma_{l}^{I}}\right)^{1-\gamma}}\right)<0. $$
$$\frac{d\frac{{\theta_{l}^{I}}}{\sigma_{l}^{1-\gamma}}}{d\psi_{l}}=\frac{d{\theta_{l}^{I}}\left(\sigma_{l}\right)^{\gamma-1}}{d\psi_{l}}={\theta_{l}^{I}}\left(\sigma_{l}\right)^{\gamma-1}\left(\frac{1}{{\theta_{l}^{I}}}\frac{d{\theta_{l}^{I}}}{d\psi_{l}}-\left(1-\gamma\right)\left(\sigma_{l}\right)^{-1}\frac{d\sigma^{I}}{d\psi_{l}}\right)= $$
$$=-\frac{\left(\sigma_{l}\right)^{\gamma-1}}{4D_{l}\psi_{l}}\frac{1-\gamma}{{\sigma_{l}^{I}}}\left(1-\frac{k{\theta_{l}^{I}}}{\left({\sigma_{l}^{I}}\right)^{1-\gamma}}\right)\frac{1}{{\theta_{l}^{F}}}\left(1+\frac{k{\theta_{l}^{F}}}{\left(1-{\sigma_{l}^{I}}\right)^{1-\gamma}}\right)<0. $$
Impact of higher punishment on unemployment rates (Proposition 3)
Raising the audit rate \(p\) or the punishment fee \(\alpha\) increases the wedge, \(\psi_{l}=(1+p\alpha+\kappa_{l})/(1+z)\). Differentiating Eq. (39) with respect to \(\psi_{l}\) gives
$$\frac{du_{l}}{d\psi_{l}}=-\frac{s}{\left(s+{\lambda_{l}^{I}}+{\lambda_{l}^{F}}\right)^{2}}\left(\frac{d{\lambda_{l}^{F}}}{d\psi_{l}}+\frac{d{\lambda_{l}^{I}}}{d\psi_{l}}\right) $$
$$\frac{d{\lambda_{l}^{F}}}{d\psi_{l}}+\frac{d{\lambda_{l}^{I}}}{d\psi_{l}}=\frac{d\left(1-{\sigma_{l}^{I}}\right)^{\gamma}\left({\theta_{l}^{F}}\right)^{\frac{1}{2}}+d\left({\sigma_{l}^{I}}\right)^{\gamma}\left({\theta_{l}^{I}}\right)^{\frac{1}{2}}}{d\psi_{l}} $$
$$\gamma\left(-\left(1-{\sigma_{l}^{I}}\right)^{\gamma-1}\left({\theta_{l}^{F}}\right)^{\frac{1}{2}}+\left({\sigma_{l}^{I}}\right)^{\gamma-1}\left({\theta_{l}^{I}}\right)^{\frac{1}{2}}\right)\frac{d{\sigma_{l}^{I}}}{d\psi_{l}} $$
$$+\frac{1}{2}\left(\frac{\left(1-{\sigma_{l}^{I}}\right)^{\gamma}}{\left({\theta_{l}^{F}}\right)^{\frac{1}{2}}}\frac{d{\theta_{l}^{F}}}{d\psi_{l}}+\frac{\left({\sigma_{l}^{I}}\right)^{\gamma}}{\left({\theta_{l}^{I}}\right)^{\frac{1}{2}}}\frac{d{\theta_{l}^{I}}}{d\psi_{l}}\right) $$
Substituting for the derivatives and the first-order condition for search intensity, we obtain that \(du_{l}/d\psi_{l}\) has the same sign as
$$\gamma\left(\frac{1}{\left({y_{l}^{F}}/{y_{l}^{I}}\right)\psi_{l}}\frac{1}{\left({\theta_{l}^{F}}\right)^{\frac{1}{2}}}-\frac{1}{\left({\theta_{l}^{I}}\right)^{\frac{1}{2}}}\right)\left(1+\frac{1}{\left({y_{l}^{F}}/{y_{l}^{I}}\right)\psi_{l}}\frac{k{\theta_{l}^{I}}}{\left({\sigma_{l}^{I}}\right)^{1-\gamma}}\right)\left(1+\frac{k{\theta_{l}^{I}}}{\left({\sigma_{l}^{I}}\right)^{1-\gamma}}\right) $$
$$\begin{array}{@{}rcl@{}}{\kern20pt} &+&\left(1-\gamma\right)\frac{k{\theta_{l}^{I}}}{\left({\sigma_{l}^{I}}\right)^{1-\gamma}}\left(\frac{1}{\left({\theta_{l}^{F}}\right)^{\frac{1}{2}}}\frac{1}{\left({y_{l}^{F}}/{y_{l}^{I}}\right)\psi_{l}}\left(\frac{1}{\left({y_{l}^{F}}/{y_{l}^{I}}\right)\psi_{l}}+\frac{1}{\left({y_{l}^{F}}/{y_{l}^{I}}\right)\psi_{l}}\frac{k{\theta_{l}^{I}}}{\left({\sigma_{l}^{I}}\right)^{1-\gamma}}\right)\right.\\ &-&\left.\frac{1}{\left({\theta_{l}^{I}}\right)^{\frac{1}{2}}}\left(1+\frac{1}{\left({y_{l}^{F}}/{y_{l}^{I}}\right)\psi_{l}}\frac{k{\theta_{l}^{I}}}{\left({\sigma_{l}^{I}}\right)^{1-\gamma}}\right)\right) \end{array} $$
$$\frac{{du}_{l}}{d\psi_{l}}\lesseqgtr0\text{ if and only if } {\left({y_{l}^{F}}/{y_{l}^{I}}\right)\psi_{l}\lesseqgtr1.} $$
The impact on the official unemployment rate resulting from an increase in the audit rate or the punishment fee corresponds to
$$\frac{d{u_{l}^{o}}}{d\psi_{l}}=\frac{\left(s+{\lambda_{l}^{I}}+{\lambda_{l}^{F}}\right)\frac{d{\lambda_{l}^{I}}}{d\psi_{l}}-\left(s+{\lambda_{l}^{I}}\right)\left(\frac{d{\lambda_{l}^{F}}}{d\psi_{l}}+\frac{d{\lambda_{l}^{I}}}{d\psi_{l}}\right)}{\left(s+{\lambda_{l}^{I}}+{\lambda_{l}^{F}}\right)^{2}}=\frac{{\lambda_{l}^{F}}\frac{d{\lambda_{l}^{I}}}{d\psi_{l}}-\left(s+{\lambda_{l}^{I}}\right)\left(\frac{d{\lambda_{l}^{F}}}{d\psi_{l}}\right)}{\left(s+{\lambda_{l}^{I}}+{\lambda_{l}^{F}}\right)^{2}} $$
Impact of higher punishment on education (Propositions 4 and 5)
A closer examination of (37) reveals that changes in the audit rates or punishment rates affect the share of educated workers, \(1-\hat {e}\), through \(\psi_{l}\) only, whereas changes in the tax rate, z, have a direct effect on \(1-\hat {e}\) in addition to the effects working through \(\psi_{l}\). Therefore, in order to consider the effects of a fully financed change in the punishment rates on the number of educated workers, we have to account for repercussions on \(1-\hat {e}\) following adjustments in the tax rate. However, let us first consider the impact on \(1-\hat {e}\) of a change in the tax and expected punishment separately:
$$\frac{\partial\left(1-\hat{e}\right)}{\partial\left(p\alpha\right)}|_{z}=-\frac{k}{c^{\prime}(e)\left(1+z\right)}\left({y_{h}^{F}}\frac{d\frac{{\theta_{h}^{F}}}{\left(1-{\sigma_{h}^{I}}\right)^{1-\gamma}}}{d\left(p\alpha\right)}-{y_{m}^{F}}\frac{d\frac{{\theta_{m}^{F}}}{\left(1-{\sigma_{m}^{I}}\right)^{1-\gamma}}}{d\left(p\alpha\right)}\right) $$
$$\frac{\partial\left(1-\hat{e}\right)}{\partial z}|_{p_{l}\alpha}=-\psi_{l}\frac{\partial\left(1-\hat{e}\right)}{\partial\left(p\alpha\right)}|_{z}+\frac{c\left(\hat{e}\right)}{c^{\prime}\left(\hat{e}\right)\left(1+z\right)} $$
Using Eq. (43), we obtain
$$ \frac{d\frac{k{\theta_{l}^{F}}}{\left(1-{\sigma_{l}^{I}}\right)^{1-\gamma}}}{d\left(p\alpha\right)}=-\frac{\frac{k{\theta_{l}^{I}}}{\frac{{y_{l}^{F}}}{{y_{l}^{I}}}\psi_{l}\left({\sigma_{l}^{I}}\right)^{1-\gamma}}}{\frac{\left(1+\frac{k{\theta_{l}^{I}}}{\frac{{y_{l}^{F}}}{{y_{l}^{I}}}\psi_{l}\left({\sigma_{l}^{I}}\right)^{1-\gamma}}\right)\left(1-\frac{k{\theta_{l}^{I}}}{\left({\sigma_{l}^{I}}\right)^{1-\gamma}}\right)}{\left(1-\frac{k{\theta_{l}^{I}}}{\frac{{y_{l}^{F}}}{{y_{l}^{I}}}\psi_{l}\left({\sigma_{l}^{I}}\right)^{1-\gamma}}\right)\left(1+\frac{k{\theta_{l}^{I}}}{\left(\sigma_{li}^{I}\right)^{1-\gamma}}\right)}\frac{1-\sigma^{I}}{{\sigma_{l}^{I}}}+1}\frac{1}{\psi_{l}}\frac{1}{1+z},l=h,m, $$
whereby the educational impacts become
$$\frac{\partial\left(1-\hat{e}\right)}{\partial\left(p\alpha\right)}|_{z}=-\frac{k}{c^{\prime}(e)\left(1+z\right)^{2}}\left({y_{h}^{F}}\frac{{do}_{h}}{d\left(p\alpha\right)}-{y_{m}^{F}}\frac{{do}_{m}}{d\left(p\alpha\right)}\right) $$
where \(o_{l}=\frac {1}{\left ({y_{l}^{F}}/{y_{l}^{I}}\right)\psi _{l}}\frac {k{\theta _{l}^{I}}}{\left ({\sigma _{l}^{I}}\right)^{1-\gamma }}=\frac {k{\theta _{l}^{F}}}{\left (1-{\sigma _{l}^{I}}\right)^{1-\gamma }},l=h,m\) and
$$ \frac{{do}_{l}}{d\psi_{l}}=-\frac{\frac{1}{\psi_{l}}o_{l}}{\frac{\left(o_{l}+1\right)\left(1-\left(\frac{{y_{l}^{F}}}{{y_{l}^{I}}}\right)\psi_{l}o_{l}\right)}{\left(\frac{{y_{l}^{F}}}{{y_{l}^{I}}}\psi_{l}o_{l}+1\right)\left(1-o_{l}\right)}\frac{\left(1-{\sigma_{l}^{I}}\right)}{{\sigma_{l}^{I}}}+1}<0,\;l=h,m. $$
For existence of an interior solution for education, we need \({y_{h}^{F}}o_{h}-{y_{m}^{F}}o_{m}>0\). Hence, education increases if \({y_{h}^{F}}\frac {{do}_{h}}{d\psi _{h}}-{y_{m}^{F}}\frac {{do}_{m}}{d\psi _{m}}>0\). As \(\frac {{do}_{l}}{d\psi _{l}},\;l={h,m}\) is negative, and \({y_{h}^{F}}/{y_{m}^{F}}>o_{m}/o_{h}\), then, for an interior solution for \(\hat {e}\), if
$$ \left\vert \frac{{do}_{m}}{d\psi_{m}}\right\vert /\left\vert \frac{{do}_{h}}{d\psi_{h}}\right\vert >{y_{h}^{F}}/{y_{m}^{F}}>o_{m}/o_{h}, $$
then education increases with \(p\alpha\). Consider the case where \(\kappa_{h}>\kappa_{m}\). As \(\psi_{l}\) increases with \(\kappa_{l}\), then for such a solution to exist, we need that \(\left \vert \frac {{do}_{l}}{d\psi _{l}}\right \vert,\;l={m,h}\) is decreasing in concealment costs, whereby \(\left \vert \frac {{do}_{m}}{d\psi _{m}}\right \vert >\left \vert \frac {{do}_{h}}{d\psi _{h}}\right \vert \). We first show that this is the case. Multiply the numerator and denominator by \(\psi_{l}\) to obtain
$$\left|\frac{{do}_{l}}{d\psi_{l}}\right|=\frac{o_{l}}{\frac{\left(o_{l}+1\right)\left(1-\left(\frac{{y_{l}^{F}}}{{y_{l}^{I}}}\right)\psi_{l}o_{l}\right)}{\left(\frac{{y_{l}^{F}}}{{y_{l}^{I}}}o_{l}+\frac{1}{\psi_{l}}\right)\left(1-o_{l}\right)}\frac{\left(1-{\sigma_{l}^{I}}\right)}{{\sigma_{l}^{I}}}+\psi_{l}},\;l=h,m. $$
Substituting for the tightness equations, \(1-\frac {k{\theta _{l}^{F}}}{\left (1-{\sigma _{l}^{I}}\right)^{1-\gamma }}= 1-o_{l}=2k\left (r+s+a\right)\left ({\theta _{l}^{F}}\right)^{\frac {1}{2}}\) and \(1-\frac {k{\theta _{l}^{I}}}{\left ({\sigma _{l}^{I}}\right)^{1-\gamma }}= 1-\left ({y_{l}^{F}}/{y_{l}^{I}}\right)\psi _{l}o_{l}=2k\left (r+s+a\right)\left ({\theta _{l}^{I}}\right)^{\frac {1}{2}}\) and use the fact that \(\frac {1-{\sigma _{l}^{I}}}{{\sigma _{l}^{I}}}=\left (\frac {{\theta _{l}^{F}}}{{\theta _{l}^{I}}}\right)^{\frac {1}{1-\gamma }} \left (\frac {{y_{l}^{F}}}{{y_{l}^{I}}}\psi _{l}\right)^{\frac {1}{1-\gamma }}\) according to the search equation to obtain
$$ \left|\frac{{do}_{l}}{d\psi_{l}}\right|=\frac{o_{l}}{A_{l}\left(\frac{{\theta_{l}^{F}}}{{\theta_{l}^{I}}}\right)^{\frac{1}{1-\gamma}-\frac{1}{2}}\left(\frac{{y_{l}^{F}}}{{y_{l}^{I}}}\psi_{l}\right)^{\frac{1}{1-\gamma}}+\psi_{l}},\;l=h,m, $$
where \(A_{l}=\frac {\left (o_{l}+1\right)}{\left (\frac {{y_{l}^{F}}}{{y_{l}^{I}}}o_{l}+\frac {1}{\psi _{l}}\right)}.\) Differentiating (47) with respect to \(\psi_{l}\), we obtain the following expression for \(\frac {d\left |\frac {{do}_{l}}{d\psi _{l}}\right |}{d\psi _{l}}\):
$${} {{\begin{aligned} \frac{\frac{{do}_{l}}{d\psi_{l}}\left(\!\!A_{l}\left(\!\frac{{\theta_{l}^{F}}}{{\theta_{l}^{I}}}\right)^{\frac{1}{1-\gamma}-\frac{1}{2}}\left(\!\frac{{y_{l}^{F}}}{{y_{l}^{I}}}\psi_{l}\right)^{\frac{1}{1-\gamma}}\!+\psi_{l}\right)-o_{l}\left(\!\left(\!\frac{{y_{l}^{F}}}{{y_{l}^{I}}}\psi_{l}\right)^{\frac{1}{1-\gamma}}\left(\!\left(\!\frac{{\theta_{l}^{F}}}{{\theta_{l}^{I}}}\right)^{\frac{1}{1-\gamma}-\frac{1}{2}}\left(\frac{{dA}_{l}}{d\psi_{l}}+\frac{1}{\psi_{l}}\frac{A_{l}}{1-\gamma}\right)+A_{l}\frac{d\left(\frac{{\theta_{l}^{F}}}{{\theta_{l}^{I}}}\right)^{\frac{1}{1-\gamma}-\frac{1}{2}}}{d\psi_{l}}\!\right)+1\right)}{\left(A_{l}\left(\frac{{\theta_{l}^{F}}}{{\theta_{l}^{I}}}\right)^{\frac{1}{1-\gamma}-\frac{1}{2}}\left(\frac{{y_{l}^{F}}}{{y_{l}^{I}}}\psi_{l}\right)^{\frac{1}{1-\gamma}}+\psi_{l}\right)^{2}}<0, \end{aligned}}} $$
as substituting for \(\frac {{do}_{l}}{d\psi _{l}}\) using the expression from Eq. (45) gives
$${} {{\begin{aligned} \frac{{dA}_{l}}{d\psi_{l}}=\frac{\frac{{do}_{l}}{d\psi_{l}}\left(\left(\frac{{y_{l}^{F}}}{{y_{l}^{I}}}-1\right)o_{l}+\frac{1}{\psi_{l}}-1\right)+\frac{o_{l}+1}{{\psi_{l}^{2}}}}{\left(\frac{{y_{l}^{F}}}{{y_{l}^{I}}}o_{l}+\frac{1}{\psi_{l}}\right)^{2}}=\frac{\frac{1}{\psi_{l}}o_{l}\left(1-\left(\frac{{y_{l}^{F}}}{{y_{l}^{I}}}-1\right)o_{l}\right)+\frac{o_{l}+1}{{\psi_{l}^{2}}}\frac{\left(o_{l}+1\right)\left(1-\left(\frac{{y_{l}^{F}}}{{y_{l}^{I}}}\right)\psi_{l}o_{l}\right)}{\left(\frac{{y_{l}^{F}}}{{y_{l}^{I}}}\psi_{l}o_{l}+1\right)\left(1-o_{l}\right)}\frac{\left(1-{\sigma_{l}^{I}}\right)}{{\sigma_{l}^{I}}}+\frac{1}{{\psi_{l}^{2}}}}{\left(\frac{{y_{l}^{F}}}{{y_{l}^{I}}}o_{l}+\frac{1}{\psi_{l}}\right)^{2}\left(\frac{\left(o_{l}+1\right)\left(1-\left(\frac{{y_{l}^{F}}}{{y_{l}^{I}}}\right)\psi_{l}o_{l}\right)}{\left(\frac{{y_{l}^{F}}}{{y_{l}^{I}}}\psi_{l}o_{l}+1\right)\left(1-o_{l}\right)}\frac{\left(1-{\sigma_{l}^{I}}\right)}{{\sigma_{l}^{I}}}+1\right)}>0, \end{aligned}}} $$
for \(\frac {{y_{l}^{F}}}{{y_{l}^{I}}}-1<1\) (sufficient condition), and from the equilibrium equations we have \(d\left ({\theta _{h}^{F}}/{\theta _{h}^{I}}\right)/d\psi _{l}>0\) and \(do_{l}/d\psi_{l}<0\).
Hence, as \(\frac {d\left \vert \frac {{do}_{l}}{d\psi _{l}}\right \vert }{d\psi _{l}}<0\), then \(\frac {d\left \vert \frac {{do}_{l}}{d\psi _{l}}\right \vert }{d\kappa _{l}}<0\), so when \(\kappa_{h}>\kappa_{m}\), then \(\left |\frac {{do}_{m}}{d\psi _{m}}\right |>\left |\frac {{do}_{h}}{d\psi _{h}}\right |\). We observe that \(\frac {d\left \vert \frac {{do}_{l}}{d\psi _{l}}\right \vert }{d\psi _{l}}<0\) both because the numerator decreases with \(\psi_{l}\) and because the denominator increases with \(\psi_{l}\). Rewriting the expression determining the sign of \(\frac {\partial \left (1-\hat {e}\right)}{\partial \left (p\alpha \right)}|_{z}\), Eq. (46), as
$$\frac{\frac{o_{m}}{A_{m}\left(\frac{{\theta_{m}^{F}}}{{\theta_{m}^{I}}}\right)^{\frac{1}{1-\gamma}-\frac{1}{2}}\left(\frac{{y_{m}^{F}}}{{y_{m}^{I}}}\psi_{m}\right)^{\frac{1}{1-\gamma}}+\psi_{m}}}{\frac{o_{h}}{A_{h}\left(\frac{{\theta_{h}^{F}}}{{\theta_{h}^{I}}}\right)^{\frac{1}{1-\gamma}-\frac{1}{2}}\left(\frac{{y_{h}^{F}}}{{y_{h}^{I}}}\psi_{h}\right)^{\frac{1}{1-\gamma}}+\psi_{h}}}=g\left(\kappa_{h},\kappa_{m}\right)o_{m}/o_{h}>{y_{h}^{F}}/{y_{m}^{F}}>o_{m}/o_{h}, $$
$$g\left(\kappa_{h},\kappa_{m}\right)=\frac{D_{\frac{{do}_{h}}{d\psi_{h}}}}{D_{\frac{{do}_{m}}{d\psi_{m}}}}=\frac{A_{h}\left(\frac{{\theta_{h}^{F}}}{{\theta_{h}^{I}}}\right)^{\frac{1}{1-\gamma}-\frac{1}{2}}\left(\frac{{y_{h}^{F}}}{{y_{h}^{I}}}\psi_{h}\right)^{\frac{1}{1-\gamma}}+\psi_{h}}{A_{m}\left(\frac{{\theta_{m}^{F}}}{{\theta_{m}^{I}}}\right)^{\frac{1}{1-\gamma}-\frac{1}{2}}\left(\frac{{y_{m}^{F}}}{{y_{m}^{I}}}\psi_{m}\right)^{\frac{1}{1-\gamma}}+\psi_{m}}>1, $$
when \(\kappa_{h}>\kappa_{m}\) and \(\frac {{y_{h}^{F}}}{{y_{h}^{I}}}\geq \frac {{y_{m}^{F}}}{{y_{m}^{I}}}\) (or equivalently \(\frac {{y_{h}^{F}}}{{y_{m}^{F}}}\geq \frac {{y_{h}^{I}}}{{y_{m}^{I}}}\)) as the denominator of \(\left \vert \frac {{do}_{l}}{d\psi _{l}}\right \vert \) increases with \(\psi_{l}\). We conclude that if \(\frac {{y_{h}^{F}}}{{y_{m}^{F}}}\in \left [\frac {o_{m}}{o_{h}},g\left (\kappa _{h},\kappa _{m}\right)\frac {o_{m}}{o_{h}}\right ]\), education increases with \(p\alpha\), and when \(\frac {{y_{h}^{F}}}{{y_{m}^{F}}}\in \left [g\left (\kappa _{h},\kappa _{m}\right)\frac {o_{m}}{o_{h}},\infty \right ]\), education falls with \(p\alpha\).
Impact of higher punishment on unemployment (Proposition 6)
Raising the audit rate \(p\) or the punishment fee \(\alpha\) increases the wedge, \(\psi_{l}=(1+p_{l}\alpha+\kappa_{l})/(1+z)\). Differentiating total unemployment with respect to \(\psi_{l}\) gives
$$\frac{{dU}_{TOT}}{d\psi_{l}}=\frac{d\hat{e}}{d\psi_{l}}\left(u_{m}-u_{h}\right)+\hat{e}\frac{{du}_{m}}{d\psi_{l}}+(1-\hat{e})\frac{{du}_{h} }{d\psi_{l}} $$
The last two terms are positive when \(\left ({y_{l}^{F}}/{y_{l}^{I}}\right)\psi _{l}\) is larger than one (and non-positive when \(\left ({y_{l}^{F}}/{y_{l}^{I}}\right)\psi _{l}\leq1\)). The first term is positive if \(\frac {{y_{h}^{F}}}{{y_{m}^{F}}}\in \left [\frac {o_{m}}{o_{h}},g\left (\kappa _{h},\kappa _{m}\right)\frac {o_{m}}{o_{h}}\right ]\), where \(g(\kappa_{h},\kappa_{m})>1\) when \(\kappa_{h}>\kappa_{m}\) and \(\frac {{y_{h}^{F}}}{{y_{h}^{I}}}\psi _{h}\geq \frac {{y_{m}^{F}}}{{y_{m}^{I}}}\psi _{m}\geq 1\), as then \(\left(u_{m}-u_{h}\right)<(=)0\) and \(\frac {d\hat {e}}{d\psi _{l}}<0\). However, when \(\frac {{y_{l}^{F}}}{{y_{l}^{I}}}\psi _{l}<1\) and \(\kappa_{h}>\kappa_{m}\), then \(\left(u_{m}-u_{h}\right)>0\), and in case \(\frac {d\hat {e}}{d\psi _{l}}<0\), unemployment falls, \(\frac {{dU}_{TOT}}{d\psi _{l}}<0\). If \(\frac {{y_{h}^{F}}}{{y_{m}^{F}}}\in \left [g\left (\kappa _{h},\kappa _{m}\right)\frac {o_{m}}{o_{h}},\infty \right ]\), then \(\frac {d\hat {e}}{d\psi _{l}}>0\) and \(\frac {{dU}_{TOT}}{d\psi _{l}}\) has an ambiguous sign.
Total official unemployment changes according to
$$\frac{{dU}_{TOT}^{o}}{d\psi_{l}}=\frac{d\hat{e}}{d\psi_{l}}\left({u_{m}^{o}}-{u_{h}^{o}}\right)+\hat{e}\frac{d{u_{m}^{o}}}{d\psi_{l}}+(1-\hat{e})\frac{d{u_{h}^{o}}}{d\psi_{l}}<0, $$
where the last two terms are negative. Therefore, when \(\kappa_{h}>\kappa_{m}\) and \(\frac {{y_{h}^{F}}}{{y_{h}^{I}}}\psi _{h}\geq \frac {{y_{m}^{F}}}{{y_{m}^{I}}}\psi _{m}\geq 1\), we have \(\frac {{dU}_{TOT}^{o}}{d\psi _{l}}<0\) when \(\frac {d\hat {e}}{d\psi _{l}}\leq 0,\) as \(\left ({u_{m}^{o}}-{u_{h}^{o}}\right)>0\). When \(\frac {d\hat {e}}{d\psi _{l}}>0\), the sign of \(\frac {{dU}_{TOT}^{o}}{d\psi _{l}}\) is ambiguous.
Socially optimal solution for \({\theta _{m}^{F}},{\theta _{m}^{I}},{\theta _{h}^{F}},{\theta _{h}^{I}},{\sigma _{m}^{I}},{\sigma _{h}^{I}},\hat {e}\)
For simplicity, we here let \({y_{l}^{F}}={y_{l}^{I}},\;l={h,m}.\) We make use of a utilitarian welfare function, obtained by adding all individuals' steady state flow values of welfare, and let \(r+a=r_{a}\). This accounts for the fact that both the formal and the informal economy generate welfare in the economy. The social welfare function is written as
$$W=\hat{e}\tilde{W}_{m}+\int_{\hat{e}}^{1}\tilde{W}_{h}de, $$
$$\tilde{W}_{m}=u_{m}r_{a}U_{m}+\sum_{j=F,I}{n_{m}^{j}}r_{a}{E_{m}^{j}}+\sum_{j=F,I}{n_{m}^{j}}r_{a}{J_{m}^{j}}+\sum_{j=F,I}{v_{m}^{j}}r_{a}{V_{m}^{j}}+{n_{m}^{I}}r_{a}J_{m}^{law}, $$
$$\tilde{W}_{h}=u_{h}r_{a}U_{h}+\sum_{j=F,I}{n_{h}^{j}}r_{a}{E_{h}^{j}}+\sum_{j=F,I}{n_{h}^{j}}r_{a}{J_{h}^{j}}+\sum_{j=F,I}{v_{h}^{j}}r_{a}{V_{h}^{j}}+{n_{h}^{I}}r_{a}J_{h}^{law}-c(e). $$
We assume that firms are owned by rentiers who do not work. This explains the presence of \(\sum _{j={F,I}}{n_{m}^{j}}r_{a}{J_{m}^{j}}+\sum _{j={F,I}}{v_{m}^{j}}r_{a}{V_{m}^{j}}\) and \(\sum _{j={F,I}}{n_{h}^{j}}r_{a}{J_{h}^{j}}+\sum _{j={F,I}}{v_{h}^{j}}r_{a}{V_{h}^{j}}\) in the welfare function. Moreover, we assume that the concealment costs facing tax-evading firms are payments to "lawyers" who only engage in concealing taxable income for other firms. The welfare function therefore includes \({n_{m}^{I}}r_{a}J_{m}^{\text {law}}={n_{m}^{I}}{w_{m}^{I}}\kappa _{m}\) and \({n_{h}^{I}}r_{a}J_{h}^{\text {law}}={n_{h}^{I}}{w_{h}^{I}}\kappa _{h}\). This assumption enables us to disregard the waste that would be associated with tax evasion if these expenses were paid to nobody.
By making use of the asset equations, imposing the flow equilibrium conditions as well as the government budget restriction in (40), and considering the case of no discounting, i.e. r+a→0, we can write the welfare function as
where \(\Theta _{l}=\left (1-{\sigma _{l}^{I}}\right)^{\gamma }{\theta _{l}^{F}}+\left ({\sigma _{l}^{I}}\right)^{\gamma }{\theta _{l}^{I}},~l={m,h}\). This welfare measure is analogous to the welfare measure described in, for example, Pissarides (2000) as it includes aggregate production minus total vacancy costs, i.e. note that \(u_{l}\Theta _{l}k=\left ({v_{l}^{F}}+{v_{l}^{I}}\right)k,~l={m,h}\). With the assumption of risk-neutral individuals, we ignore distributional issues, and hence wages will not feature in the welfare function. To find the socially optimal choice of audit rates for the sector employing manual workers and the sector employing highly educated workers, the welfare function in (49)–(51) is maximized by choice of \(p_{m}\) and \(p_{h}\) subject to the market reactions given by (32), (35), (36), (37), and (39) and the government budget restriction in (40). This yields the following first-order conditions:
$$ \frac{dW}{{dp}_{m}}=\hat{e}\frac{{dW}_{m}}{d\psi_{m}}\frac{d\psi_{m}}{{dp}_{m}}+\frac{dW}{d\left(1-e\right)} \frac{d\left(1-e\right)}{{dp}_{m}}=0, $$
$$ \frac{dW}{{dp}_{h}}=\left(1-\hat{e}\right)\frac{{dW}_{h}}{d\psi_{h}}\frac{d\psi_{h}}{{dp}_{h}}+ \frac{dW}{d\left(1-e\right)}\frac{d(1-e)}{{dp}_{h}}=0 $$
where \(\frac {{dW}_{l}}{d\psi _{l}}=\left [\sum _{j={F,I}}\frac {{dW}_{l}}{d{\theta _{l}^{j}}} \frac {d{\theta _{l}^{j}}}{d\psi _{l}}+\frac {{dW}_{l}}{d{\sigma _{l}^{I}}}\frac {d{\sigma _{l}^{I}}}{d\psi _{l}}\right ],\;l={m,h}.\) Evaluating the first-order conditions at the levels of \(p_{m}\) and \(p_{h}\) ensuring that \(\psi_{m}=\psi_{h}=1\) turns out to be very convenient and gives
$$ \frac{dW}{{dp}_{h}}\mid{}_{\psi_{h}=1}=\frac{dW}{d\left(1-\hat{e}\right)}\frac{d\left(1-\hat{e}\right)}{{dp}_{h}}<0 $$
Let us first derive the socially optimal choice of tightness, search, and stock of educated workers by maximizing the welfare function in (49)–(51) with respect to \({\theta _{m}^{F}}\), \({\theta _{m}^{I}}\), \({\theta _{h}^{F}}\), \({\theta _{h}^{I}}\), \({\sigma _{m}^{I}}\), \({\sigma _{h}^{I}}\), and \(\hat {e}\). The socially optimal solution is solved from the following seven conditions:
From the first-order conditions for tightness in the formal and informal sector for manual and highly educated workers, i.e. \(\frac {\partial W}{{\partial \theta _{l}^{I}}}=0,\;\frac {\partial W}{{\partial \theta _{l}^{F}}}=0,\;l={m,h},\) we get the following conditions: \(2sk\left ({\theta _{l}^{I}}\right)^{\frac {1}{2}}=u_{l}\left [1+k\Theta _{l}\right ]\) and \(2sk\left ({\theta _{l}^{F}}\right)^{\frac {1}{2}}=u_{l}\left [1+k\Theta _{l}\right ],\;l={m,h},\) which gives \({\theta _{l}^{F}}={\theta _{l}^{I}}\). Substitute \({\theta _{l}^{F}}={\theta _{l}^{I}}\) into the first-order condition for search effort, \(\frac {\partial W}{{\partial \sigma _{l}^{I}}}=0\), and the following condition determines the socially optimal level of search: \(\left ({\sigma _{l}^{I}}\right)^{\gamma -1}-\left (1-{\sigma _{l}^{I}}\right)^{\gamma -1}=0\). This yields \({\sigma _{l}^{I}}=\frac {1}{2},\;l={m,h}\). Substitute \({\sigma _{l}^{I}}=\frac {1}{2},\;l={m,h}\) into \(2sk\left ({\theta _{l}^{I}}\right)^{\frac {1}{2}}=u_{l}\left [1+k\Theta _{l}\right ]\) and \(2sk\left ({\theta _{l}^{F}}\right)^{\frac {1}{2}}=u_{l}\left [1+k\Theta _{l}\right ],\;l={m,h},\) which yields the four equations in (57) determining \({\theta _{m}^{F}},{\theta _{m}^{I}},{\theta _{h}^{F}},\) and \({\theta _{h}^{I}}\) in equilibrium. The socially optimal educational stock is determined by \(\partial W/\partial \left (1-\hat {e}\right)=W_{h} \left (\hat {e}\right)-W_{m}=y_{h}\left [1-u_{h} \left [1+k\Theta _{h}\right ]\right ]-y_{m}\left [1-u_{m}\left [1+k\Theta _{m}\right ]\right ]-c\left (\hat {e}\right)=0\). Now use the equations determining the optimal levels of tightness, \(2sk\left ({\theta _{l}^{I}}\right)^{\frac {1}{2}}=u_{l}\left [1+k\Theta _{l}\right ]\) and \(2sk\left ({\theta _{l}^{F}}\right)^{\frac {1}{2}}=u_{l}\left [1+k\Theta _{l}\right ],\;l={m,h},\) and the equation for the optimal educational level given by (58). To show that we have a global maximum, we differentiate W with respect to \({\sigma _{l}^{I}}\), \({\theta _{l}^{I}}\), \({\theta _{l}^{F}},l={m,h}\) and \(1-\hat {e}\) to obtain
$$\left(\sigma_{l}^{I\ast}\right)^{\gamma-1}-\left(1-\sigma_{l}^{I\ast}\right)^{\gamma-1}=0,\;l=m,h, $$
$$-sk\left(\theta_{l}^{\ast I}\right)^{\frac{1}{2}}+\frac{1}{2}\left[1-\frac{k\theta_{l}^{*I}}{\sigma^{1-\gamma}}\right]=0,\;l=m,h, $$
$$-sk\left(\theta_{l}^{\ast F}\right)^{\frac{1}{2}}+\frac{1}{2}\left[1-\frac{k\theta_{l}^{*F}}{\left(1-\sigma\right)^{1-\gamma}}\right]=0,\;l=m,h, $$
$$\left(y_{h}\frac{k\theta_{h}^{\ast I}}{\left({\sigma_{h}^{I}}\right)^{1-\gamma}}-y_{m}\frac{k\theta_{m}^{\ast I}}{\left({\sigma_{m}^{I}}\right)^{1-\gamma}}\right)-c\left(\hat{e}^{\ast}\right)=0. $$
The associated Hessian matrix is then
$$\begin{array}{@{}rcl@{}} \left|\begin{array}{ccccccc} \left(\gamma-1\right)S_{m} & 0 & 0 & 0 & 0 & 0 & 0\\ -\frac{k{\theta_{m}^{I}}}{2\left({\sigma_{m}^{I}}\right)^{2-\gamma}}\left(\gamma-1\right) & {\Delta_{m}^{I}} & 0 & 0 & 0 & 0 & 0\\ \frac{k{\theta_{m}^{F}}}{2\left(1-{\sigma_{m}^{I}}\right)^{2-\gamma}} & 0 & {\Delta_{m}^{F}} & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & \left(\gamma-1\right)S_{h} & 0 & 0 & 0\\ 0 & 0 & 0 & -\frac{k{\theta_{h}^{I}}}{2\left({\sigma_{h}^{I}}\right)^{2-\gamma}}\left(\gamma-1\right) & {\triangle_{h}^{I}} & 0 & 0\\ 0 & 0 & 0 & \frac{k\theta_{h}}{2\left(1-{\sigma_{h}^{I}}\right)^{2-\gamma}}\left(\gamma-1\right) & 0 & {\triangle_{h}^{F}} & 0\\ -y_{m}\left(\gamma-1\right)\frac{k\theta_{m}^{\ast I}}{\left({\sigma_{m}^{I}}\right)^{2-\gamma}} & -y_{m}k\left({\sigma_{m}^{I}}\right)^{\gamma-1} & 0 & \left(\gamma-1\right)y_{h}\frac{k\theta_{h}^{\ast I}}{\left({\sigma_{h}^{I}}\right)^{2-\gamma}} & y_{h}k\left({\sigma_{h}^{I}}\right)^{\gamma-1} & 0 & c'\left(\hat{e}^{\ast}\right) \end{array}\right| \end{array} $$
where \(S_{l}=\left (\left ({\sigma _{l}^{I}}\right)^{\gamma -2}+\left (1-{\sigma _{l}^{I}}\right)^{\gamma -2}\right),\;l={m,h}, \;{\Delta _{l}^{I}}=-\frac {1}{2}\left (sk\left ({\theta _{l}^{I}}\right)^{-\frac {1}{2}}+k\left ({\sigma _{l}^{I}}\right)^{\gamma -1}\right), \;l={m,h}\) and \(\;{\Delta _{l}^{F}}=-\frac {1}{2}\left (sk\left ({\theta _{l}^{F}}\right)^{-\frac {1}{2}}+ k\left (1-{\sigma _{l}^{I}}\right)^{\gamma -1}\right),\;l={m,h}\). Therefore, \(H_{1}=\left (\gamma -1\right)\left (\left ({\sigma _{m}^{I}}\right)^{\gamma -2}+\left (1-{\sigma _{m}^{I}}\right)^{\gamma -2}\right)<0\) and the principal minors alternate in sign, for all variable values, i.e. \(H_{2}=-\left (\gamma -1\right)\left (\left ({\sigma _{m}^{I}}\right)^{\gamma -2}+ \left (1-{\sigma _{m}^{I}}\right)^{\gamma -2}\right){\Delta _{m}^{I}}>0,\ldots,H_{7}=\left (\gamma -1\right) \left (\left ({\sigma _{m}^{I}}\right)^{\gamma -2}+\left (1-{\sigma _{m}^{I}}\right)^{\gamma -2}\right) {\Delta _{m}^{I}}{\Delta _{m}^{F}}\left (\gamma -1\right)\left (\left ({\sigma _{h}^{I}}\right)^{\gamma -2}+ \left (1-{\sigma _{h}^{I}}\right)^{\gamma -2}\right){\Delta _{h}^{I}}{\Delta _{h}^{F}}c^{\prime }\left (\hat {e}^{\ast }\right)<0\), by which we have a global maximum.
Optimal punishment policy does not induce the socially efficient stock of education (Corollary 8)
We evaluate (52) and (53) at \({p_{m}^{e}}\) and \({p_{h}^{e}}\) such that the socially optimal level of education is reached, i.e. \(\frac {dW}{d\left (1-e\right)}=0\). From Proposition 7, this requires that \({\psi _{m}^{e}}>1>{\psi _{h}^{e}}.\) This yields \(\frac {dW}{{dp}_{m}}\mid _{{\psi _{m}^{e}}>1}=\hat {e}\left [\frac {dW}{d{\theta _{m}^{F}}} \frac {d{\theta _{m}^{F}}}{d\psi _{m}}+\frac {dW}{d{\theta _{m}^{I}}}\frac {d{\theta _{m}^{I}}}{d\psi _{m}}+ \frac {dW}{d{\sigma _{m}^{I}}}\frac {d{\sigma _{m}^{I}}}{d\psi _{m}}\right ]\frac {d\psi _{m}}{{dp}_{m}}\) and \(\frac {dW}{{dp}_{h}}\mid _{{\psi _{h}^{e}}<1}=\left (1-\hat {e}\right)\left [\frac {dW}{d{\theta _{h}^{F}}} \frac {d{\theta _{h}^{F}}}{d\psi _{h}}+\frac {dW}{d{\theta _{h}^{I}}}\frac {d{\theta _{h}^{I}}}{d\psi _{h}}+ \frac {dW}{d{\sigma _{h}^{I}}}\frac {d{\sigma _{h}^{I}}}{d\psi _{h}}\right ]\frac {d\psi _{h}}{{dp}_{h}}\). From the derivations of the socially optimal solution for \({\theta _{m}^{F}},{\theta _{m}^{I}},{\theta _{h}^{F}},{\theta _{h}^{I}},{\sigma _{m}^{I}},{\sigma _{h}^{I}}\), it follows that \(\frac {dW}{d{\theta _{l}^{F}}}\mid _{\psi _{l}>1}<0, \frac {dW}{d{\theta _{l}^{I}}}\mid _{\psi _{l}>1}>0,\frac {dW}{{d{\sigma _{l}^{I}}}}\mid _{\psi _{l}>1}>0\) and \(\frac {dW}{d{\theta _{l}^{F}}}\mid _{\psi _{l}<1}>0,\;\frac {dW}{d{\theta _{l}^{I}}}\mid _{\psi _{l}<1}<0,\frac {{dW}_{l}}{d{\sigma _{l}^{I}}}\mid _{\psi _{l}<1}<0\) as the welfare function is maximized at \(\psi _{l}=1\), i.e., \(\frac {dW}{d{\theta _{l}^{F}}}\mid _{\psi _{l}=1}=\frac {{dW}_{l}}{d{\theta _{l}^{I}}}\mid _{\psi _{l}=1}= \frac {{dW}_{l}}{d{\sigma _{l}^{I}}}\mid _{\psi _{l}=1}=0\). It then follows that \(\frac {dW}{{dp}_{m}}\mid _{{\psi _{m}^{e}}>1}<0\) and \(\frac {dW}{{dp}_{h}}\mid _{{\psi _{h}^{e}}<1}>0\).
Optimal punishment policy including auditing costs
The government budget constraint with auditing costs, φ(p), is \(\frac {z\hat {e}{n_{m}^{F}}{w_{m}^{F}}}{1+z}+ \frac {p\alpha \hat {e}{n_{m}^{I}}{w_{m}^{I}}}{1+p\alpha +\kappa _{m}}+ \frac {z\left (1-\hat {e}\right){n_{h}^{F}}{w_{h}^{F}}}{1+z}+ \frac {p\alpha \left (1-\hat {e}\right){n_{h}^{I}}{w_{h}^{I}}}{1+p\alpha +\kappa _{h}}-\varphi (p)=R\), where p is the total intensity of audits, \(p=p_{m}+p_{h}\). Adding costs of auditing has no impact on the positive analyses. The welfare function, however, is equal to \(W=\hat {e}W_{m}+\int _{\hat {e}}^{1}W_{h}de-\varphi (p),\) with first-order conditions for optimal audit rates:
$$\frac{dW}{{dp}_{m}}=\hat{e}\frac{{dW}_{m}}{d\psi_{m}}\frac{d\psi_{m}}{{dp}_{m}}+\frac{dW}{d\left(1-e\right)}\frac{d\left(1-e\right)}{{dp}_{m}}-\varphi^{\prime}(p)=0, $$
$$\frac{dW}{{dp}_{h}}=\left(1-\hat{e}\right)\frac{{dW}_{h}}{d\psi_{h}}\frac{d\psi_{h}}{{dp}_{h}}+\frac{dW}{d\left(1-e\right)}\frac{d\left(1-e\right)}{{dp}_{h}}-\varphi^{\prime}(p)=0, $$
where \(\frac {{dW}_{l}}{d\psi _{l}}=\sum _{j={F,I}}\frac {{dW}_{l}}{d{\theta _{l}^{j}}} \frac {d{\theta _{l}^{j}}}{d\psi _{l}}+\frac {{dW}_{l}}{d{\sigma _{l}^{I}}}\frac {d{\sigma _{l}^{I}}}{d\psi _{l}},\;l={m,h}.\) The optimal level of audits is reduced in both sectors. The result from Proposition 7, that welfare is maximized when the government to a larger extent targets its audits towards the manual sector, i.e. \(p_{m}>p_{h}\) if \(\kappa_{h}\geq\kappa_{m}\), will still hold.
Acemoglu, D (1996) A microfoundation for social increasing returns in human capital accumulation. Q J Econ 111: 779–804.
Acemoglu, D, Shimer R (1999) Holdups and efficiency with search frictions. Int Econ Rev 40: 827–849.
Albrecht, J, Navarro L, Vroman S (2009) The effects of labor market policies in an economy with an informal sector. Econ J 119: 1105–1129.
Almeida, R, Carneiro A (2012) Enforcement of Labor Regulation and Informality. American Economic Journal: Applied Economics 4(3): 64–89.
Andreoni, J, Erard B, Feinstein J (1998) Tax compliance. J Econ Lit 36: 818–860.
Boeri, T, Garibaldi P (2005) Shadow sorting. In: Frankel JA, Pissarides CA (eds) NBER International Seminar on Macroeconomics, 125–163. University of Chicago Press, Chicago.
Bosch, M, Esteban-Pretel J (2012) Job creation and job destruction in the presence of informal markets. J Dev Econ 98(2): 270–286.
Charlot, OL, Decreuse B, Granier P (2005) Adaptability, productivity, and educational incentives in a matching model. Eur Econ Rev 49(4): 1007–1032.
EC (2007) Undeclared work in the European Union, social affairs and equal opportunities and coordination by Directorate General Communication, Brussels.
Fugazza, M, Jacques J-F (2004) Labour market institutions, taxation and the underground economy. J Public Econ 88: 395–418.
Haigner, S, Jenewein S, Schneider F, Wakolbinger F (2011) Dissatisfaction, Fear and Annoyance: Driving Forces of Informal Labor Supply and Demand, Discussion Paper, Department of Economics, University of Linz, Austria. Paper presented at the European Public Choice meeting.
Hvidtfeldt, C, Jensen B, Larsen C (2011) Undeclared work and the Danes, University Press of Southern Denmark, June 2010, English summary reported In: Rockwool Foundation Research Unit, March 2011, Copenhagen, Denmark.
Kleven, H, Knudsen M, Kreiner C, Pedersen S, Saez E (2011) Unwilling or unable to cheat? Evidence from a tax audit experiment in Denmark. Econometrica 79(3): 651–692.
Kolm, A-S, Larsen B (2006) Wages, unemployment, and the underground economy. In: Agell J, Sørensen PB (eds) Tax Policy and Labor Market Performance. MIT Press, Cambridge, Massachusetts.
La Porta, R, Shleifer A (2014) Informality and Development. Journal of Economic Perspectives 28(3): 109–26.
Meghir, C, Narita R, Robin J-M (2015) Wages and informality in developing countries. Am Econ Rev 105(4): 1509–1546.
OECD (2012) The Global Forum on Transparency and Exchange of Information for Tax Purposes, Tax Transparency 2012: Report on Progress. OECD, Paris.
Pedersen, S, Smith N (1998) Sort arbejde og sort løn i Danmark (Black Activities and Black Wages in Denmark). Nationaløkonomisk Tidsskrift no. 136, Copenhagen, Denmark: 289–314.
Pedersen, S (2003) The Shadow Economy in Germany, Great Britain and Scandinavia—a measurement based on questionnaire surveys. The Rockwool Foundation Research Unit, Study no 10, Copenhagen, Denmark.
Pissarides, C (2000) Equilibrium Unemployment Theory. MIT Press, Cambridge.
We want to thank Gerard van den Berg; Per Engström; Tomas Lindström; participants at SAM, EEA, SOLE, CESifo, CIM, and the conference on heterogeneous labour: search friction and human capital investments, Konstanz; Lund University; Bristol University; and two anonymous referees.
Stockholm University, Stockholm, Sweden
Ann-Sofie Kolm
Copenhagen Business School, Frederiksberg, Denmark
Birthe Larsen
Correspondence to Birthe Larsen.
Kolm, A., Larsen, B. Informal unemployment and education. IZA J Labor Econ 5, 8 (2016). https://doi.org/10.1186/s40172-016-0048-6
The informal sector | CommonCrawl |
Volume 18 Supplement 8
Selected articles from the Fifth IEEE International Conference on Computational Advances in Bio and Medical Sciences (ICCABS 2015): Bioinformatics
A greedy alignment-free distance estimator for phylogenetic inference
Sharma V. Thankachan,
Sriram P. Chockalingam,
Yongchao Liu,
Ambujam Krishnan &
Srinivas Aluru
Alignment-free sequence comparison approaches have been garnering increasing interest in various data- and compute-intensive applications such as phylogenetic inference for large-scale sequences. While k-mer based methods are predominantly used in real applications, the average common substring (ACS) approach is emerging as one of the prominent alignment-free approaches. This ACS approach has been further generalized by some recent work, either greedily or exactly, by allowing a bounded number of mismatches in the common substrings.
We present ALFRED-G, a greedy alignment-free distance estimator for phylogenetic tree reconstruction based on the concept of the generalized ACS approach. In this algorithm, we have investigated a new heuristic to efficiently compute the lengths of common strings with mismatches allowed, and have further applied this heuristic to phylogeny reconstruction. Performance evaluation using real sequence datasets shows that our heuristic is able to reconstruct comparable, or even more accurate, phylogenetic tree topologies than the kmacs heuristic algorithm at highly competitive speed.
ALFRED-G is an alignment-free heuristic for evolutionary distance estimation between two biological sequences. This algorithm is implemented in C++ and has been incorporated into our open-source ALFRED software package (http://alurulab.cc.gatech.edu/phylo).
Accurate estimation of the evolutionary distance between two sequences is fundamental and critical to phylogenetic analysis, which aims to reconstruct the correct evolutionary history and estimate the time of divergence between species. One popular approach to evolutionary distance estimation relies on sequence alignment. Typically, the pipeline for alignment-based phylogenetic inference works in three steps. Firstly, we perform all-to-all pairwise sequence alignment to obtain a pairwise distance matrix for the input sequences. The evolutionary distance between two sequences in the matrix is typically inferred from an optimal alignment, e.g. equal to one minus the percent identity of the optimal alignment. Secondly, we construct a guide tree from the pairwise distance matrix and then conduct progressive alignment of multiple sequences following the order determined by the guide tree. Finally, we infer a phylogenetic tree from the resulting multiple alignment using a tree inference program, which can be distance-, parsimony-, Bayesian-, or likelihood-based. Nevertheless, it needs to be stressed that we could also choose to construct a phylogenetic tree directly from the pairwise distance matrix computed in the first step, using a distance-based tree construction algorithm such as the unweighted pair group method with arithmetic mean (UPGMA) [1] or neighbor-joining (NJ) [2].
Although they may have high accuracy, alignment-based approaches involve high computational cost and are therefore slow. This is because pairwise alignment using dynamic programming has a quadratic complexity with respect to sequence length. The problem is even more challenging when constructing a phylogenetic tree for a large number of sequences, especially long sequences (e.g. eukaryotic genomes). In this case, some research efforts have been devoted to accelerating the tree construction using high performance computing architectures [3–6]. In addition to acceleration, alignment-free approaches have emerged as an alternative to alignment-based approaches and have become popular, mainly owing to their speed superiority. For instance, given a collection of d sequences of average length n, the time complexity for pairwise distance matrix computation can be as high as O(d^2 n^2) when using pairwise alignment. In contrast, by using alignment-free exact k-mer (a k-mer is a string of k characters) counting, the whole computation can be done in O(d^2 n) time, significantly reducing the run-time by a factor of n. Moreover, alignment-free approaches are capable of overcoming some difficulties that challenge alignment-based approaches, such as genetic recombination and shuffling during the evolution process.
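To put these bounds in perspective, consider a back-of-the-envelope calculation (our own illustrative numbers, not taken from the cited studies): for d = 100 sequences of average length n = 10^6, the alignment-based distance matrix requires on the order of d^2 n^2 = 10^4 × 10^12 = 10^16 character-level operations, whereas exact k-mer counting requires on the order of d^2 n = 10^4 × 10^6 = 10^10 operations, a reduction by the promised factor of n = 10^6.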
A variety of alignment-free approaches have been proposed, most of which are based on the concept of sequence seeding, which extracts fixed- or variable-length substrings from a given sequence. Based on fixed-length seeding, there are two kinds of alignment-free approaches: exact k-mer counting [7] and spaced k-mer counting [8]. The exact k-mer counting approach builds a k-mer frequency (or occurrence) vector for each sequence and computes the pairwise distance using some distance measure based on the frequency vectors. Example distance measures include the Euclidean distance [9], the Kullback-Leibler divergence [10] and the one proposed by Edgar [11]. Edgar's distance measure models the similarity between two sequences as the fraction of exact k-mers shared by them, and then computes the pairwise distance by subtracting the similarity value from one. This distance measure has been shown to be highly related to genetic distance and has been used in other applications like metagenomic sequence classification [12]. The spaced k-mer counting approach allows character mismatches between k-mers at some predefined positions and usually employs multiple pattern templates in order to improve accuracy.
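To make the exact k-mer counting idea concrete, the sketch below computes a distance in the spirit of Edgar's measure: it counts the k-mer occurrences shared by two sequences and subtracts the resulting similarity fraction from one. This is our own minimal illustration rather than code from any of the cited tools, and the normalisation by the shorter sequence's k-mer count is an assumption made for simplicity.

```cpp
#include <algorithm>
#include <cstddef>
#include <string>
#include <unordered_map>

// Fraction-of-shared-k-mers distance in the spirit of Edgar's measure:
// similarity = (# shared k-mer occurrences) / (min(|X|,|Y|) - k + 1),
// distance   = 1 - similarity.  The normalisation choice is illustrative only.
double kmerDistance(const std::string& X, const std::string& Y, std::size_t k) {
    if (X.size() < k || Y.size() < k) return 1.0;

    // Count every k-mer occurrence in X.
    std::unordered_map<std::string, long> countX;
    for (std::size_t i = 0; i + k <= X.size(); ++i)
        ++countX[X.substr(i, k)];

    // For each k-mer of Y, consume one matching occurrence from X if available.
    long shared = 0;
    for (std::size_t i = 0; i + k <= Y.size(); ++i) {
        auto it = countX.find(Y.substr(i, k));
        if (it != countX.end() && it->second > 0) { --it->second; ++shared; }
    }

    double denom = static_cast<double>(std::min(X.size(), Y.size()) - k + 1);
    return 1.0 - static_cast<double>(shared) / denom;
}
```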
Based on variable-length seeding, there are three kinds of approaches: the average common substring (ACS) method [13], the k-mismatch ACS (k-ACS) method [14, 15] and the mutation distance (K_r) [16]. The distances based on these methods can be computed using suffix trees/arrays. Given two sequences, the ACS method first calculates the length of the longest substring that starts at each position i in one sequence and matches some substring of the other sequence. Subsequently, it averages and normalizes all of the lengths computed to represent the similarity of the two sequences. Finally, the resulting similarity value is used to compute the pairwise distance. The time complexity of the ACS method is directly proportional to the sum of the lengths of the two sequences.
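As an illustration of the ACS definition, the following deliberately naive sketch (our own, not taken from [13]) computes, for each position i of X, the length of the longest prefix of X[i…] that occurs somewhere in Y, and averages these lengths; real implementations obtain the same quantities in linear time with suffix trees or suffix arrays, and the conversion of the score into a symmetric, normalized distance is omitted here.

```cpp
#include <algorithm>
#include <cstddef>
#include <string>

// Naive average common substring (ACS) score of X against Y: for each position
// i in X, find the length of the longest prefix of X[i..] occurring in Y, then
// average these lengths. Shown only to illustrate the definition; it is far
// slower than the suffix-tree based linear-time computation used in practice.
double acsScore(const std::string& X, const std::string& Y) {
    if (X.empty()) return 0.0;
    double total = 0.0;
    for (std::size_t i = 0; i < X.size(); ++i) {
        std::size_t best = 0;
        for (std::size_t j = 0; j < Y.size(); ++j) {
            std::size_t len = 0;
            while (i + len < X.size() && j + len < Y.size() && X[i + len] == Y[j + len])
                ++len;
            best = std::max(best, len);
        }
        total += static_cast<double>(best);
    }
    return total / static_cast<double>(X.size());
}
```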
In contrast, given two sequences, the $k$-ACS method computes the pairwise distance by finding substring pairs with up to $k$ mismatches. Specifically, instead of determining the longest common substrings, the $k$-ACS method finds, for each position in one sequence, the longest substring starting at that position and matching some substring of the other sequence with up to $k$ mismatches. The mutation distance $K_r$ is closely related to ACS; the difference lies only in the conversion from the similarity value to a pairwise distance.
Unlike the ACS method, exact solutions to the $k$-ACS problem involve high computational cost. For example, an algorithm given by Leimeister and Morgenstern [14] takes $O(kn^2)$ time in the worst case, which is certainly not a suitable replacement for alignment-based methods. However, they also proposed a faster heuristic, namely kmacs, that computes an approximation to the $k$-ACS based distance. Another algorithm, by Apostolico et al., runs in $O(n^2/\log n)$ time [17]. This raised an open question: can the exact $k$-ACS based distance be computed in strictly sub-quadratic time? Initial attempts focused on the special case $k=1$ [18, 19]. Later, Aluru et al. [15, 20] answered this question positively by presenting an algorithm with a worst-case run time of $O(n\log^k n)$ for any constant $k$. The algorithm is much more complicated than the original ACS method and even than the $k$-ACS approximation of [14]. Moreover, the practical variant of this algorithm can become quite slow even for moderately large values of $k$ because of its exponential dependency on $k$ [21]. Nevertheless, it has the merit of being the first sub-quadratic time algorithm for exact $k$-ACS computation for any positive integer $k$. A recently proposed algorithm by Pizzi is based on filtering approaches [22]. In summary, on one hand we have a fast approximation algorithm [14], and on the other hand we have an exact (theoretical) algorithm [15] that might work well for small values of $k$ in practice. Inspired by both algorithms, we introduce a new greedy heuristic for alignment-free distance estimation, named ALFRED-G. The heuristic is implemented in C++ and has been incorporated into our open-source ALFRED software package (http://alurulab.cc.gatech.edu/phylo).
We use X and Y to denote the two sequences to be compared. The length of sequence X is denoted by $|\mathsf{X}|$, its $i$th character by $\mathsf{X}[i]$, and the substring that starts at position $i$ and ends at position $j$ by $\mathsf{X}[i \dots j]$. For brevity, we use $\mathsf{X}_i$ to denote the suffix of X starting at position $i$. The total length of X and Y is denoted by $n$. A key data structure in our algorithm is the generalized suffix tree (GST). The GST of X and Y is a compact trie of all suffixes of X and Y. It consists of $n$ leaves and at most $n-1$ internal nodes; each leaf corresponds to a unique suffix of X or Y, and the edges are labeled with sequences of characters. The string-depth of a node $u$ is the length of the string obtained by concatenating the edge labels on the path from the root of the GST to $u$. The space and construction time of the GST are $O(n)$ [23]. For any pair $(i,j)$, $|\mathsf{LCP}(\mathsf{X}_i,\mathsf{Y}_j)|$, the length of the longest common prefix of $\mathsf{X}_i$ and $\mathsf{Y}_j$, equals the string-depth of the lowest common ancestor of the leaves corresponding to $\mathsf{X}_i$ and $\mathsf{Y}_j$; using the GST, it can be computed in constant time. We can also compute $|\mathsf{LCP}_k(\mathsf{X}_i,\mathsf{Y}_j)|$, the length of the longest common prefix of $\mathsf{X}_i$ and $\mathsf{Y}_j$ with the first $k$ mismatches ignored, in $O(k)$ time as follows. Let $z=|\mathsf{LCP}(\mathsf{X}_i,\mathsf{Y}_j)|$; then for any $k\geq 1$,
$$ \left|\mathsf{LCP}_{k}\left(\mathsf{X}_{i}, \mathsf{Y}_{j}\right)\right| = z+1+\left|\mathsf{LCP}_{k-1}\left(\mathsf{X}_{i+z+1}, \mathsf{Y}_{j+z+1}\right)\right| $$
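The recurrence translates directly into a short recursive routine. In the illustrative sketch below, the exact LCP is computed by plain character comparison instead of constant-time lowest-common-ancestor queries on the GST.

```python
def lcp(x, y):
    # exact longest common prefix length of two strings
    l = 0
    while l < len(x) and l < len(y) and x[l] == y[l]:
        l += 1
    return l

def lcp_k(x, y, k):
    z = lcp(x, y)
    if k == 0 or z == len(x) or z == len(y):   # nothing left to skip over
        return z
    # skip one mismatching character and recurse with one fewer allowed mismatch
    return z + 1 + lcp_k(x[z + 1:], y[z + 1:], k - 1)

print(lcp_k("ACGTTT", "ACCTTG", 1))  # 2 exact + 1 skipped mismatch + 2 exact = 5
```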
The $k$-mismatch average common substring of X with respect to Y, denoted by $\mathsf{ACS}_k(\mathsf{X},\mathsf{Y})$, is defined as the average, over all suffixes of X, of the length of the longest prefix of that suffix which appears as a substring of Y within Hamming distance $k$. Specifically, let $\lambda_k(i)=\max_j|\mathsf{LCP}_k(\mathsf{X}_i,\mathsf{Y}_j)|$; then
$$ \mathsf{ACS}_{k}(\mathsf{X}, \mathsf{Y}) = \frac{1}{|\mathsf{X}|}\sum_{i=1}^{|\mathsf{X}|}\lambda_{k}(i) $$
The distance $\mathsf{Dist}_k(\mathsf{X},\mathsf{Y})$ based on $\mathsf{ACS}_k$ is given below [13, 14].
$${} \begin{array}{ll} \mathsf{Dist}_{k}(\mathsf{X},\mathsf{Y}) &= \frac{1}{2} \left(\frac{\log |\mathsf{Y}|} {\mathsf{ACS}_{k}(\mathsf{X},\mathsf{Y})}+\frac{\log |\mathsf{X}|}{\mathsf{ACS}_{k}(\mathsf{Y},\mathsf{X})}\right) -\left(\frac{\log |\mathsf{X}|}{|\mathsf{X}|}+\frac{\log |\mathsf{Y}|}{|\mathsf{Y}|}\right)\\ \end{array} $$
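The two definitions can be evaluated literally with the self-contained sketch below; it uses brute-force quadratic loops in place of the suffix-tree machinery and is meant only to make the formulas concrete, with the example strings chosen arbitrarily.

```python
from math import log

def lcp_k(x, y, k):
    # longest common prefix length with the first k mismatches ignored
    z = 0
    while z < len(x) and z < len(y) and x[z] == y[z]:
        z += 1
    if k == 0 or z == len(x) or z == len(y):
        return z
    return z + 1 + lcp_k(x[z + 1:], y[z + 1:], k - 1)

def acs_k(x, y, k):
    total = 0
    for i in range(len(x)):
        total += max(lcp_k(x[i:], y[j:], k) for j in range(len(y)))  # lambda_k(i)
    return total / len(x)

def dist_k(x, y, k):
    return 0.5 * (log(len(y)) / acs_k(x, y, k) + log(len(x)) / acs_k(y, x, k)) \
           - (log(len(x)) / len(x) + log(len(y)) / len(y))

x, y = "ACGTACGTTGCA", "ACGAACGTTGCC"
print(acs_k(x, y, 1), dist_k(x, y, 1))
```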
Approximating $\mathsf{ACS}_k(\cdot,\cdot)$
Observe that $\mathsf{ACS}_k(\cdot,\cdot)$ can be computed in $O(n^2k)$ time via $|\mathsf{X}|\times|\mathsf{Y}|$ queries of the form $|\mathsf{LCP}_k(\cdot,\cdot)|$, which is clearly not affordable. The first attempt to circumvent this issue was made by Leimeister and Morgenstern [14], who presented a heuristic method, named kmacs, that quickly computes an approximation to $\mathsf{ACS}_k(\mathsf{X},\mathsf{Y})$. The key idea is to replace $\lambda_k(i)$ with $\lambda_k'(i)$ in the equation for $\mathsf{ACS}_k$, where $\alpha_i=\arg\max_j|\mathsf{LCP}(\mathsf{X}_i,\mathsf{Y}_j)|$ and $\lambda_k'(i)=|\mathsf{LCP}_k(\mathsf{X}_i,\mathsf{Y}_{\alpha_i})|$. Using the GST, we can compute $\alpha_i$ for all values of $i$ in $O(n)$ time. Therefore, $\lambda_k'(i)$ for all values of $i$, and the corresponding distance, can be obtained in $O(nk)$ time. Note that the ratio of $\lambda_k(i)$ to $\lambda_k'(i)$ can be as high as $\Theta(n)$. Nonetheless, it has been shown that in most practical cases the average of the latter serves as a good approximation to the average of the former.
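A sketch of this kmacs-style shortcut is shown below: for each suffix it locates a position of a longest exact match (here by brute force rather than via the GST) and then extends that single match with up to $k$ mismatches, yielding the quantity written above as $\lambda_k'(i)$ rather than the true maximum $\lambda_k(i)$.

```python
def lcp(x, y):
    l = 0
    while l < len(x) and l < len(y) and x[l] == y[l]:
        l += 1
    return l

def lcp_k(x, y, k):
    z = lcp(x, y)
    if k == 0 or z == len(x) or z == len(y):
        return z
    return z + 1 + lcp_k(x[z + 1:], y[z + 1:], k - 1)

def kmacs_like_acs(x, y, k):
    total = 0
    for i in range(len(x)):
        # alpha_i: start of a longest exact match in y (ties broken arbitrarily)
        alpha = max(range(len(y)), key=lambda j: lcp(x[i:], y[j:]))
        total += lcp_k(x[i:], y[alpha:], k)   # lambda'_k(i)
    return total / len(x)

print(kmacs_like_acs("ACGTACGTTGCA", "ACGAACGTTGCC", 1))
```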
Our idea is to follow a simple adaptation of Aluru et al.'s exact algorithm [15] for the 1-mismatch case and then use the heuristic approach of Leimeister and Morgenstern [14] to extend the result to $k$ mismatches. Specifically, our approximation to $\mathsf{ACS}_k$ is obtained by replacing $\lambda_k(i)$ in the equation for $\mathsf{ACS}_k$ by $\lambda_k''(i)$, where $\beta_i=\arg\max_j|\mathsf{LCP}_1(\mathsf{X}_i,\mathsf{Y}_j)|$ and $\lambda_k''(i)=|\mathsf{LCP}_k(\mathsf{X}_i,\mathsf{Y}_{\beta_i})|$. To compute $\beta_i$ for $i=1,2,\dots,|\mathsf{X}|$, we first construct the GST and an array $A[1,|\mathsf{X}|]$. Then, for each internal node $u$ in the GST, we process the set $\mathcal{S}(u)$ of suffixes corresponding to the leaves in the subtree of $u$. Let $h$ be the string-depth of $u$. Then $(h+1)$ is the first position in which the prefixes of two suffixes in $\mathcal{S}(u)$ can differ. We sort all suffixes in $\mathcal{S}(u)$ by treating the $(h+1)$th character of all suffixes as identical, or equivalently their first $(h+1)$ characters as the same. To do so, we follow the steps below:
Map each $\mathsf{X}_i \in \mathcal{S}(u)$ to a pair $(\mathsf{X}_i, key)$, where $key$ is the lexicographic rank of the suffix $\mathsf{X}_{i+h+1}$ among all suffixes of X and Y. In other words, $key$ is the lexicographic rank of the suffix obtained by deleting the first $(h+1)$ characters of $\mathsf{X}_i$. Using the GST, we can compute $key$ in constant time.
Likewise, map each $\mathsf{Y}_j \in \mathcal{S}(u)$ to a pair $(\mathsf{Y}_j, key)$, where $key$ is the lexicographic rank of $\mathsf{Y}_{j+h+1}$ among all suffixes of X and Y.
Sort all pairs in the ascending order of key.
For each pair $(\mathsf{X}_i,\cdot)$, find the closest pairs towards the left and the right (if they exist), say $(\mathsf{Y}_a,\cdot)$ and $(\mathsf{Y}_b,\cdot)$, that are created from a suffix of Y, and update $A[i]\leftarrow \arg\max_{j\in\{a,b,A[i]\}}|\mathsf{LCP}_1(\mathsf{X}_i,\mathsf{Y}_j)|$.
After processing all internal nodes as described above, compute the following quantity and report it as our approximation to $\mathsf{ACS}_k(\mathsf{X},\mathsf{Y})$:
$$\frac{1}{|\mathsf{X}|} \sum_{i=1}^{|\mathsf{X}|} \lambda^{\prime\prime}_{k}(i) = \frac{1}{|\mathsf{X}|} \sum_{i=1}^{|\mathsf{X}|} \left|\mathsf{LCP}_{k}\left(\mathsf{X}_{i},\mathsf{Y}_{\beta_{i}}\right)\right| $$
It can easily be verified that $A[i]$ is correctly updated to $\beta_i$ while processing the lowest common ancestor of the leaves corresponding to $\mathsf{X}_i$ and $\mathsf{Y}_{\beta_i}$. The overall run time is $nk+\sum_u |\mathcal{S}(u)|\log|\mathcal{S}(u)| = O(nk+nH\log n)$, where $H$ is the height of the GST; its expected value is $O(\log n)$ [24].
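The suffix-tree bookkeeping described above is what makes the computation fast; the sketch below instead obtains the same target quantity by brute force, i.e. $\beta_i$ followed by a $k$-mismatch extension, and serves only as a quadratic-time reference for what the heuristic approximates.

```python
def lcp_k(x, y, k):
    z = 0
    while z < len(x) and z < len(y) and x[z] == y[z]:
        z += 1
    if k == 0 or z == len(x) or z == len(y):
        return z
    return z + 1 + lcp_k(x[z + 1:], y[z + 1:], k - 1)

def alfredg_reference_acs(x, y, k):
    total = 0
    for i in range(len(x)):
        # beta_i: position in y maximizing the 1-mismatch LCP with x[i:]
        beta = max(range(len(y)), key=lambda j: lcp_k(x[i:], y[j:], 1))
        total += lcp_k(x[i:], y[beta:], k)    # lambda''_k(i)
    return total / len(x)

print(alfredg_reference_acs("ACGTACGTTGCA", "ACGAACGTTGCC", 2))
```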
ALFRED-G is implemented in C++ and is incorporated into our open-source ALFRED software package (http://alurulab.cc.gatech.edu/phylo). The program takes a collection of sequences as input and computes an approximation to $\mathsf{ACS}_k(\cdot,\cdot)$ for all pairs of sequences. We use the open-source libdivsufsort library [25] to construct the suffix array (SA), and the implementations in the SDSL library [26] to build the corresponding LCP table (using Kasai's algorithm [27]) and the range minimum query (RMQ) table (using Bender and Farach-Colton's algorithm [28]). (Note that the operations on a suffix tree can be simulated using the corresponding SA, inverse SA, LCP array and RMQ table.) The SDSL library supports bit-compression techniques that reduce the size of the tables and arrays in exchange for slower query times; however, we do not compress these data structures and instead use 32-bit integers for indices as well as prefix lengths.
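For readers unfamiliar with the LCP construction mentioned above, the following is an illustrative Python rendering of Kasai's algorithm; the actual implementation builds these tables in C++ with libdivsufsort and SDSL, and the naive suffix-array construction here is for brevity only.

```python
def suffix_array(s):
    # naive construction, for illustration only
    return sorted(range(len(s)), key=lambda i: s[i:])

def kasai_lcp(s, sa):
    n = len(s)
    rank = [0] * n
    for r, i in enumerate(sa):
        rank[i] = r
    lcp = [0] * n            # lcp[r] = LCP of the suffixes at sa[r-1] and sa[r]
    h = 0
    for i in range(n):       # visit suffixes in text order
        if rank[i] > 0:
            j = sa[rank[i] - 1]
            while i + h < n and j + h < n and s[i + h] == s[j + h]:
                h += 1
            lcp[rank[i]] = h
            if h > 0:
                h -= 1       # the LCP can shrink by at most one per step
        else:
            h = 0
    return lcp

s = "banana"
sa = suffix_array(s)
print(sa)                # [5, 3, 1, 0, 4, 2]
print(kasai_lcp(s, sa))  # [0, 1, 3, 0, 0, 2]
```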
Benchmark datasets
We have assessed the performance of ALFRED-G for phylogenetic tree reconstruction using three sequence datasets, containing prokaryotic DNA sequences, eukaryotic DNA sequences and protein sequences, respectively. The prokaryotic sequence dataset consists of 27 primate mitochondrial genomes and was previously studied in [16] to assess the performance of alignment-free approaches for phylogenetic tree reconstruction; in that study, a reference tree was constructed from a multiple alignment of the sequences.
The eukaryotic sequence dataset was constructed by Newton et al. [29] from 32 Roseobacter genomes by extracting 70 universal single-copy genes, each completely sequenced in all 32 genomes and having no ambiguous start/stop sites. The 70 genes of each genome were concatenated and aligned with ClustalW in Geneious 4.0 (available from http://www.geneious.com), using Escherichia coli K12 substrain MG1655 as the outgroup. The multiple sequence alignment file is available at http://alurulab.cc.gatech.edu/phylo; the raw sequences corresponding to the 32 Roseobacter genomes were extracted from it and used in our study. We used the phylogenetic tree presented in Newton et al. [29] as the reference tree.
The protein sequence dataset is taken from BAliBASE (v3.0) [30], a popular benchmark for multiple sequence alignment. We used 218 sets of protein sequences from BAliBASE and constructed the reference trees from the corresponding reference alignments using the proml program in PHYLIP [31], which implements the maximum likelihood method. For each parameter setting in our experiments, we report the average RF distance over the 218 trees constructed from this set.
Phylogenetic tree construction and comparison
Given a set of $d$ sequences, we first compute the distance between every pair of sequences to construct a $d\times d$ pairwise distance matrix. Subsequently, the neighbor-joining (NJ) algorithm [2] is applied to the pairwise distance matrix to reconstruct the phylogenetic tree, using the neighbor program in PHYLIP. Finally, the topology of the tree is compared with the reference tree using the Robinson-Foulds (RF) distance metric, computed with the treedist program in PHYLIP. Note that the lower the RF distance, the better the tree topologies match; in particular, an RF distance of zero means that the two trees have identical topologies.
All experiments were performed on an Apple MacBook Pro (mid-2012 model) running Mac OS X 10.10.4 (Yosemite). The machine features a 2.9 GHz dual-core Intel Core i7-3667U processor with 4 MB L3 cache and 8 GB RAM.
As our method is closely related to kmacs, we compared the performance of ALFRED-G with that of kmacs in terms of speed and accuracy (measured by RF distance) for different values of $k$ ranging from 0 to 9. Note that for $k=0$, both kmacs and ALFRED-G reduce to the ACS method.
Figure 1 shows the results for the prokaryotic dataset. For all values of $k$, ALFRED-G provides the same or better accuracy (in terms of RF distance). Interestingly, for $k=4$ and $5$, the phylogenetic tree produced by ALFRED-G coincides exactly with the reference tree (see Fig. 2). We note that the only other alignment-free method able to recreate this exact reference tree is the recently proposed spaced-seed method [8], which, however, needs careful parameter tuning.
RF distance and run-time plots for the prokaryotic dataset
Tree generated by ALFRED-G for the prokaryotic dataset with k=4
Figure 3 shows the results for the eukaryotic dataset. Likewise, our RF distance is never worse than that obtained by kmacs; in particular, for $k=6$, $7$ and $8$ our RF distance is lower, indicating better performance. Figure 4 shows the topological comparison between the tree generated by our approach and the reference tree; the comparison figure was produced with the Dendroscope software [32].
RF distance and run-time plots for the eukaryotic dataset
Reference tree and the tree generated by ALFRED-G for the eukaryotic dataset with k=7 (RF distance = 8)
Figure 5 shows the results for the protein dataset. Here, ALFRED-G and kmacs gave almost the same RF score for each value of $k$. As expected, ALFRED-G is slower than kmacs (by a factor of 2 to 4); however, the difference in run-time is independent of $k$.
RF distance and run-time plots for the BAliBASE protein dataset
In the earlier work by Leimeister and Morgenstern [14], kmacs and the spaced-seed approach [8] were shown to be superior to other alignment-free methods when applied to the aforementioned three datasets. Our experiments show that ALFRED-G is comparable to, and often more accurate than, kmacs, albeit at a higher computational cost. The comparison with spaced-seed is not as straightforward as with kmacs, because spaced-seed has different input parameters and requires tedious tuning of the pattern templates. Nevertheless, we carefully evaluated spaced-seed based on the suggestions in [8]. Our evaluation shows that spaced-seed is able to recover the entire reference tree (i.e. RF distance = 0) for the prokaryotic dataset in just 4 seconds. For the remaining datasets, however, the performance of spaced-seed is roughly comparable to both ours and kmacs.
In this paper, we have introduced a greedy alignment-free approach for estimating the evolutionary distance between two sequences. The core of the heuristic is to identify, for each suffix of sequence X, the longest substring of sequence Y that matches a prefix of that suffix with at most one mismatch, and vice versa. The heuristic is then applied to reconstruct a phylogenetic tree from a collection of sequences that are believed to be sufficiently close and to share an evolutionary relationship. We evaluated the performance of the heuristic on three real datasets (one prokaryotic dataset, one eukaryotic dataset and one protein dataset) in terms of tree-topology RF score and speed. Our experimental results show that the heuristic exactly reconstructs the reference tree topology for the prokaryotic dataset, whereas kmacs does not. On the remaining two datasets, the heuristic also demonstrates comparable or better performance than kmacs. As for speed, it is slightly slower than kmacs.
Although our heuristic has been shown to be effective for phylogenetic inference, some limitations remain and could be addressed in the future. Firstly, for simplicity, our heuristic assumes an evolutionary model with substitutions only, without insertions or deletions. This model may not exactly fit the real evolutionary process underlying a collection of sequences. Nevertheless, our evaluation has shown that even when insertions or deletions exist between sequences (as observed from multiple sequence alignments), their evolutionary distances can still be estimated with reasonable accuracy by our heuristic. It should be noted, however, that insertions or deletions may cause our heuristic to underestimate the similarity values $\mathsf{ACS}_k(\cdot,\cdot)$ between sequences, and thus to overestimate their distances $\mathsf{Dist}_k(\mathsf{X},\mathsf{Y})$.
Secondly, our heuristic assumes that homologous regions between two sequences lie on the same strand, which is not always the case. Given a homologous region, the substring in sequence X may lie on the opposite strand relative to the corresponding homolog in sequence Y. Directly applying our heuristic to such sequences may overestimate the distance, because homologies on opposite strands are not counted in the computation of similarity values. We therefore expect that the estimation accuracy of alignment-free approaches could be further improved by incorporating support for strand differences in homologies.
Thirdly, our heuristic uses only Eq. (3) to estimate the distance from the similarity values computed with Eq. (2). In general, distance equations need to be tuned for different similarity computation approaches, and even for similarity values in different ranges. For example, Edgar [11] used the fractional identity $D$ ($0\leq D\leq 1$) between two sequences as a similarity measure, but proposed two different conversions depending on its value: for $D>0.25$ the distance is computed with the Kimura-style correction $-\ln(1-p-p^2/5)$ applied to the fractional difference $p=1-D$, while for smaller $D$ the distance is retrieved from a pre-computed lookup table. Hence, it may be beneficial to design new distance equations that better match our approach. Finally, considering the generality and speed of our heuristic, we expect that related research in bioinformatics and computational biology can benefit from our algorithm.
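The piecewise conversion just described can be sketched as follows; the variable names are our own and the lookup-table branch is replaced by a hypothetical placeholder rather than the actual precomputed table.

```python
from math import log

def identity_to_distance(D, lookup=lambda D: 3.0):
    # D is the fractional identity; p is the fractional difference
    p = 1.0 - D
    if D > 0.25:
        return -log(1.0 - p - p * p / 5.0)   # Kimura-style correction
    return lookup(D)                         # placeholder for a precomputed table

print(identity_to_distance(0.80))   # corrected distance
print(identity_to_distance(0.10))   # falls back to the (dummy) table
```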
Sokal RR, Michener CD. A statistical method for evaluating systematic relationships. Univ Kans Sci Bull. 1958; 38:1409–38.
Saitou N, Nei M. The neighbor-joining method: a new method for reconstructing phylogenetic trees. Mol Biol Evol. 1987; 4(4):406–25.
Stewart CA, Hart D, Berry DK, Olsen GJ, Wernert EA, Fischer W. Parallel implementation and performance of fastDNAml, a program for maximum likelihood phylogenetic inference. In: Supercomputing, ACM/IEEE 2001 Conference. IEEE: 2001. p. 32–2.
Ott M, Zola J, Stamatakis A, Aluru S. Large-scale maximum likelihood-based phylogenetic analysis on the IBM BlueGene/L. In: Proceedings of the 2007 ACM/IEEE Conference on Supercomputing. ACM: 2007. p. 4.
Liu Y, Schmidt B, Maskell DL. Parallel reconstruction of neighbor-joining trees for large multiple sequence alignments using CUDA. In: Parallel & Distributed Processing, 2009. IPDPS 2009. IEEE International Symposium On. IEEE: 2009. p. 1–8.
Zhou J, Liu X, Stones DS, Xie Q, Wang G. MrBayes on a graphics processing unit. Bioinformatics. 2011; 27(9):1255–61.
Vinga S, Almeida J. Alignment-free sequence comparison-a review. Bioinformatics. 2003; 19(4):513–23.
Leimeister CA, Boden M, Horwege S, Lindner S, Morgenstern B. Fast alignment-free sequence comparison using spaced-word frequencies. Bioinformatics. 2014; 30(14):1991. doi:10.1093/bioinformatics/btu177.
Blaisdell BE. Effectiveness of measures requiring and not requiring prior sequence alignment for estimating the dissimilarity of natural sequences. J Mol Evol. 1989; 29(6):526–37.
Wu TJ, Hsieh YC, Li LA. Statistical measures of DNA sequence dissimilarity under Markov chain models of base composition. Biometrics. 2001; 57(2):441–8.
Edgar RC. MUSCLE: a multiple sequence alignment method with reduced time and space complexity. BMC Bioinforma. 2004; 5(1):1.
Sun Y, Cai Y, Liu L, Yu F, Farrell ML, McKendree W, Farmerie W. ESPRIT: estimating species richness using large collections of 16S rRNA pyrosequences. Nucleic Acids Res. 2009; 37(10):e76.
Ulitsky I, Burstein D, Tuller T, Chor B. The average common substring approach to phylogenomic reconstruction. J Comput Biol. 2006; 13(2):336–50.
Leimeister CA, Morgenstern B. kmacs: the k-mismatch average common substring approach to alignment-free sequence comparison. Bioinformatics. 2014; 30(14):2000–8.
Aluru S, Apostolico A, Thankachan SV. Efficient alignment free sequence comparison with bounded mismatches. In: International Conference on Research in Computational Molecular Biology. Springer: 2015. p. 1–12.
Haubold B, Pfaffelhuber P, Domazet-Loso M, Wiehe T. Estimating mutation distances from unaligned genomes. J Comput Biol. 2009; 16(10):1487–500.
Apostolico A, Guerra C, Landau GM, Pizzi C. Sequence similarity measures based on bounded hamming distance. Theor Comput Sci. 2016; 638:76–90.
Flouri T, Giaquinta E, Kobert K, Ukkonen E. Longest common substrings with k mismatches. Inf Process Lett. 2015; 115(6):643–7.
Manzini G. Longest common prefix with mismatches. In: International Symposium on String Processing and Information Retrieval. Springer: 2015. p. 299–310.
Thankachan SV, Apostolico A, Aluru S. A provably efficient algorithm for the k-mismatch average common substring problem. J Comput Biol. 2016; 23(6):472–82.
Thankachan SV, Chockalingam SP, Liu Y, Apostolico A, Aluru S. Alfred: a practical method for alignment-free distance computation. J Comput Biol. 2016; 23(6):452–60.
Pizzi C. MissMax: alignment-free sequence comparison with mismatches through filtering and heuristics. Algorithms Mol Biol. 2016; 11(1):1.
Weiner P. Linear pattern matching algorithms. In: Proceedings of the 14th Annual Symposium on Switching and Automata Theory (SWAT 1973). IEEE: 1973. p. 1–11.
Devroye L, Szpankowski W, Rais B. A note on the height of suffix trees. SIAM J Comput. 1992; 21(1):48–53.
Mori Y. Libdivsufsort: a lightweight suffix array construction library. 2003.
Gog S, Beller T, Moffat A, Petri M. From theory to practice: Plug and play with succinct data structures. In: International Symposium on Experimental Algorithms. Springer: 2014. p. 326–37.
Kasai T, Lee G, Arimura H, Arikawa S, Park K. Linear-time longest-common-prefix computation in suffix arrays and its applications. In: Annual Symposium on Combinatorial Pattern Matching. Springer: 2001. p. 181–92.
Bender MA, Farach-Colton M. The lca problem revisited. In: Latin American Symposium on Theoretical Informatics. Springer: 2000. p. 88–94.
Newton RJ, Griffin LE, Bowles KM, Meile C, Gifford S, Givens CE, Howard EC, King E, Oakley CA, Reisch CR, et al. Genome characteristics of a generalist marine bacterial lineage. ISME J. 2010; 4(6):784–98.
Thompson JD, Koehl P, Ripp R, Poch O. Balibase 3.0: latest developments of the multiple sequence alignment benchmark. Proteins Struct Funct Bioinforma. 2005; 61(1):127–36.
Felsenstein J. PHYLIP: phylogenetic inference package, version 3.5c. 1993.
Huson DH, Richter DC, Rausch C, Dezulian T, Franz M, Rupp R. Dendroscope: an interactive viewer for large phylogenetic trees. BMC Bioinforma. 2007; 8(1):1.
Thankachan SV, Chockalingam SP, Liu Y, Krishnan A, Aluru S. A greedy alignment-free distance estimator for phylogenetic inference. In: International Conference on Computational Advances in Bio and Medical Sciences (ICCABS). IEEE: 2015. p. 1–1.
This research is supported in part by the U.S. National Science Foundation grant IIS-1416259. We thank the reviewers of this article and its preliminary version [33]. We also thank the authors of [29] for sharing the multiple sequence alignment file for the 32 Roseobacter genomes.
The funding for publication of the article was provided by the U.S. National Science Foundation grant IIS-1416259.
Availability of data and material
Both the datasets and the code are available at http://alurulab.cc.gatech.edu/phylo.
ST conceived the algorithm and wrote the initial manuscript; SC implemented the code and performed some experiments; YL wrote the manuscript; AK performed the experiments; SA conceptualized the study. All authors have read and approved the final manuscript.
About this supplement
This article has been published as part of BMC Bioinformatics Volume 18 Supplement 8, 2017: Selected articles from the Fifth IEEE International Conference on Computational Advances in Bio and Medical Sciences (ICCABS 2015): Bioinformatics. The full contents of the supplement are available online at https://bmcbioinformatics.biomedcentral.com/articles/supplements/volume-18-supplement-8.
Department of Computer Science, University of Central Florida, Orlando, 32816, FL, USA
Sharma V. Thankachan
Institute for Data Engineering and Science, Georgia Institute of Technology, Atlanta, 30332, GA, USA
Sriram P. Chockalingam & Srinivas Aluru
School of Computational Science and Engineering, Georgia Institute of Technology, Atlanta, 30332, GA, USA
Yongchao Liu & Srinivas Aluru
School of Electrical Engineering and Computer Science, Louisiana State University, Baton Rouge, 70703, LA, USA
Ambujam Krishnan
Correspondence to Srinivas Aluru.
From the Fifth IEEE International Conference on Computational Advances in Bio and Medical Sciences (ICCABS 2015), Miami, FL, USA, 15-17 October 2015
Thankachan, S., Chockalingam, S., Liu, Y. et al. A greedy alignment-free distance estimator for phylogenetic inference. BMC Bioinformatics 18, 238 (2017). https://doi.org/10.1186/s12859-017-1658-0
Alignment-free methods
Sequence comparison
Phylogeny reconstruction | CommonCrawl |
5.4: Identical Particles
[ "article:topic", "Bosons", "fermions", "authorname:rfitzpatrick", "Bose-Einstein statistics", "Fermi-Dirac statistics", "showtoc:no", "Slater determinant" ]
Contributed by Richard Fitzpatrick
Professor (Physics) at University of Texas at Austin
Consider a system consisting of two identical particles of mass \(m\). As before, the instantaneous state of the system is specified by the complex wavefunction \(\psi(x_1,x_2,t)\). This wavefunction tells us that the probability of finding the first particle between \(x_1\) and \(x_1+dx_1\), and the second between \(x_2\) and \(x_2+dx_2\), at time \(t\) is \(|\psi(x_1,x_2,t)|^{\,2}\,dx_1\,dx_2\). However, because the particles are identical, this must be the same as the probability of finding the first particle between \(x_2\) and \(x_2+dx_2\), and the second between \(x_1\) and \(x_1+dx_1\), at time \(t\) (because, in both cases, the result of the measurement is exactly the same). Hence, we conclude that
\[|\psi(x_1,x_2,t)|^{\,2} = |\psi(x_2,x_1,t)|^{\,2},\] or \[\psi(x_1,x_2,t) = {\rm e}^{\,{\rm i}\,\varphi}\,\psi(x_2,x_1,t),\]
where \(\varphi\) is a real constant. However, if we swap the labels on particles 1 and 2 (which are, after all, arbitrary for identical particles), and repeat the argument, we also conclude that
\[\psi(x_2,x_1,t) = {\rm e}^{\,{\rm i}\,\varphi}\,\psi(x_1,x_2,t).\]
Hence, \[{\rm e}^{\,2\,{\rm i}\,\varphi} = 1.\]
The only solutions to the previous equation are \(\varphi=0\) and \(\varphi=\pi\). Thus, we infer that, for a system consisting of two identical particles, the wavefunction must be either symmetric or anti-symmetric under interchange of particle labels. That is, either \[\psi(x_2,x_1,t) = \psi(x_1,x_2,t),\] or \[\psi(x_2,x_1,t) = -\psi(x_1,x_2,t).\] The previous argument can easily be extended to systems containing more than two identical particles.
It turns out that the question of whether the wavefunction of a system containing many identical particles is symmetric or anti-symmetric under interchange of the labels on any two particles is determined by the nature of the particles themselves. Particles with wavefunctions that are symmetric under label interchange are said to obey Bose-Einstein statistics, and are called bosons. For instance, photons are bosons. Particles with wavefunctions that are anti-symmetric under label interchange are said to obey Fermi-Dirac statistics, and are called fermions. For instance, electrons, protons, and neutrons are fermions.
Consider a system containing two identical and non-interacting bosons. Let \(\psi(x,E)\) be a properly normalized, single-particle, stationary wavefunction corresponding to a state of definite energy \(E\). The stationary wavefunction of the whole system is written
\[\psi_{E\,{\rm boson}}(x_1,x_2) = \frac{1}{\sqrt{2}}\left[\psi(x_1,E_a)\,\psi(x_2,E_b)+\psi(x_2,E_a)\,\psi(x_1,E_b)\right],\]
when the energies of the two particles are \(E_a\) and \(E_b\). This expression automatically satisfies the symmetry requirement on the wavefunction. Incidentally, because the particles are identical, we cannot be sure which particle has energy \(E_a\), and which has energy \(E_b\)—only that one particle has energy \(E_a\), and the other \(E_b\).
For a system consisting of two identical and non-interacting fermions, the stationary wavefunction of the whole system takes the form
\[\psi_{E\,{\rm fermion}}(x_1,x_2) = \frac{1}{\sqrt{2}}\left[\psi(x_1,E_a)\,\psi(x_2,E_b)-\psi(x_2,E_a)\,\psi(x_1,E_b)\right],\]
Again, this expression automatically satisfies the symmetry requirement on the wavefunction. Note that if \(E_a=E_b\) then the total wavefunction becomes zero everywhere. Now, in quantum mechanics, a null wavefunction corresponds to the absence of a state. We thus conclude that it is impossible for the two fermions in our system to occupy the same single-particle stationary state.
Finally, if the two particles are somehow distinguishable then the stationary wavefunction of the system is simply
\[\psi_{E\,{\rm dist}}(x_1,x_2) = \psi(x_1,E_a)\,\psi(x_2,E_b).\]
Let us evaluate the variance of the distance, \(x_1-x_2\), between the two particles, using the previous three wavefunctions. It is easily demonstrated that if the particles are distinguishable then
\[\langle (x_1-x_2)^{\,2}\rangle_{ {\rm dist}} = \langle x^{\,2}\rangle_a + \langle x^{\,2}\rangle_b - 2\,\langle x\rangle_a\,\langle x\rangle_b,\] where \[\langle x^{\,n}\rangle_{a,b} = \int_{-\infty}^\infty\psi^\ast(x,E_{a,b})\,x^{\,n}\,\psi(x,E_{a,b})\,dx.\]
For the case of two identical bosons, we find
\[\label{ebos} \langle (x_1-x_2)^{\,2}\rangle_{ {\rm boson}} = \langle (x_1-x_2)^{\,2}\rangle_{ {\rm dist}} - 2\,|\langle x\rangle_{ab}|^{\,2},\]
\[\langle x \rangle_{ab} = \int_{-\infty}^\infty \psi^\ast(x,E_a)\,x\,\psi(x,E_b)\,dx.\]
Here, we have assumed that \(E_a\neq E_b\), so that
\[\int_{-\infty}^\infty \psi^\ast(x,E_a)\,\psi(x,E_b)\,dx = 0.\]
Finally, for the case of two identical fermions, we obtain
\[\label{efer} \langle (x_1-x_2)^{\,2}\rangle_{ {\rm fermion}} = \langle (x_1-x_2)^{\,2}\rangle_{ {\rm dist}} + 2\,|\langle x\rangle_{ab}|^{\,2},\]
Equation \ref{ebos} indicates that the symmetry requirement on the total wavefunction of two identical bosons causes the particles to be, on average, closer together than two similar distinguishable particles. Conversely, Equation \ref{efer} indicates that the symmetry requirement on the total wavefunction of two identical fermions causes the particles to be, on average, further apart than two similar distinguishable particles. However, the strength of this effect depends on the square of the magnitude of \(\langle x\rangle_{ab}\), which measures the overlap between the wavefunctions \(\psi(x,E_a)\) and \(\psi(x,E_b)\). It is evident, then, that if these two wavefunctions do not overlap to any great extent then identical bosons or fermions will act very much like distinguishable particles.
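As a quick numerical illustration of this exchange effect, the sketch below evaluates the three variances for the two lowest states of a particle in a one-dimensional box (an arbitrary choice of states) using simple trapezoidal integration; the boson value comes out below, and the fermion value above, the distinguishable-particle value.

```python
import numpy as np

L = 1.0
x = np.linspace(0.0, L, 20001)
integ = lambda f: float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))  # trapezoid rule

def psi(n):
    # nth stationary state of an infinite square well on [0, L]
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

psa, psb = psi(1), psi(2)                                    # states with E_a, E_b
xa, xb = integ(psa * x * psa), integ(psb * x * psb)          # <x>_a, <x>_b
x2a, x2b = integ(psa * x**2 * psa), integ(psb * x**2 * psb)  # <x^2>_a, <x^2>_b
xab = integ(psa * x * psb)                                   # <x>_ab (real here)

dist = x2a + x2b - 2.0 * xa * xb
print("distinguishable:", dist)
print("bosons:         ", dist - 2.0 * abs(xab) ** 2)
print("fermions:       ", dist + 2.0 * abs(xab) ** 2)
```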
For a system containing \(N\) identical and non-interacting fermions, the anti-symmetric stationary wavefunction of the system is written
\[\psi_{E}(x_1,x_2,\ldots x_N) =\frac{1}{\sqrt{N!}} \left| \begin{array}{cccc} \psi(x_1,E_1)&\psi(x_2,E_1)&\ldots&\psi(x_N,E_1)\\[0.5ex] \psi(x_1,E_2)&\psi(x_2,E_2)&\ldots&\psi(x_N,E_2)\\[0.5ex] \vdots&\vdots&\vdots&\vdots\\[0.5ex] \psi(x_1,E_N)&\psi(x_2,E_N)&\ldots&\psi(x_N,E_N) \end{array}\right|.\]
This expression is known as the Slater determinant, and automatically satisfies the symmetry requirements on the wavefunction. Here, the energies of the particles are \(E_1, E_2, \ldots, E_N\). Note, again, that if any two particles in the system have the same energy (i.e., if \(E_i=E_j\) for some \(i\neq j\)) then the total wavefunction is null. We conclude that it is impossible for any two identical fermions in a multi-particle system to occupy the same single-particle stationary state. This important result is known as the Pauli exclusion principle.
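A short numerical sketch of the Slater determinant is given below; the single-particle states (square-well eigenfunctions) and the positions are arbitrary illustrative choices. Assigning two particles the same state produces two identical rows, so the determinant vanishes up to rounding error.

```python
import numpy as np
from math import factorial, sqrt, pi, sin

def phi(n, xj, L=1.0):
    # nth single-particle square-well state evaluated at position xj
    return sqrt(2.0 / L) * sin(n * pi * xj / L)

def slater(positions, quantum_numbers, L=1.0):
    # matrix element: row = state E_n, column = particle position x_j
    M = np.array([[phi(n, xj, L) for xj in positions] for n in quantum_numbers])
    return np.linalg.det(M) / sqrt(factorial(len(positions)))

xs = [0.21, 0.45, 0.78]
print(slater(xs, [1, 2, 3]))   # generic anti-symmetric amplitude
print(slater(xs, [1, 2, 2]))   # two identical rows -> numerically zero
```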
Richard Fitzpatrick (Professor of Physics, The University of Texas at Austin)
Richard Fitzpatrick
Bose-Einstein statistics
Fermi-Dirac statistics
fermions
Slater determinant | CommonCrawl |
2012, 19: 120-130. doi: 10.3934/era.2012.19.120
Locally decodable codes and the failure of cotype for projective tensor products
Jop Briët 1, , Assaf Naor 2, and Oded Regev 3,
Centrum Wiskunde & Informatica (CWI), Science Park 123, 1098 SJ Amsterdam, Netherlands
Courant Institute, New York University, 251 Mercer Street, New York NY 10012, United States
École normale supérieure, Département d'informatique, 45 rue d'Ulm, Paris, France
Received August 2012 Revised October 2012 Published November 2012
It is shown that for every $p\in (1,\infty)$ there exists a Banach space $X$ of finite cotype such that the projective tensor product $l_p\hat\otimes X$ fails to have finite cotype. More generally, if $p_1,p_2,p_3\in (1,\infty)$ satisfy $\frac{1}{p_1}+\frac{1}{p_2}+\frac{1}{p_3}\le 1$ then $l_{p_1}\hat\otimes l_{p_2} \hat\otimes l_{p_3}$ does not have finite cotype. This is proved via a connection to the theory of locally decodable codes.
Keywords: projective tensor product, locally decodable codes.
Mathematics Subject Classification: Primary: 46B07; Secondary: 46B2.
Citation: Jop Briët, Assaf Naor, Oded Regev. Locally decodable codes and the failure of cotype for projective tensor products. Electronic Research Announcements, 2012, 19: 120-130. doi: 10.3934/era.2012.19.120
Jop Briët Assaf Naor Oded Regev | CommonCrawl |
November 2021, 26(11): 5849-5871. doi: 10.3934/dcdsb.2021232
Transitions between metastable long-run consumption behaviors in a stochastic peer-driven consumer network
Jochen Jungeilges 1,2,, , Trygve Kastberg Nilssen 1, , Tatyana Perevalova 3, and Alexander Satov 2,
University of Agder, School of Business and Law, Department of Economics and Finance, Servicebox 422, N-4604 Kristiansand S, Norway
Ural Federal University, Institute of Natural Science and Mathematics, 51 Lenin Avenue, Ekaterinburg 620000, Russian Federation
Ural Federal University, Institute of Natural Science and Mathematics, Ural Mathematical Center, 51 Lenin Avenue, Ekaterinburg 620000, Russian Federation
* Corresponding author: Jochen Jungeilges
The authors want to thank two anonymous referees for their work.
Received November 2020 Revised July 2021 Published November 2021 Early access September 2021
We study behavioral change - as a transition between coexisting attractors - in the context of a stochastic, non-linear consumption model with interdependent agents. Relying on the indirect approach to the analysis of a stochastic dynamic system, and employing a mix of analytical, numerical and graphical techniques, we identify conditions under which such transitions are likely to occur. The stochastic analysis depends crucially on the stochastic sensitivity function technique as it can be applied to the stochastic analogues of closed invariant curves [14], [1]. We find that in a moderate noise environment increased peer influence actually reduces the complexity of observable long-run consumer behavior.
Keywords: Non-invertible maps, metastable attractors, escapes, transition.
Mathematics Subject Classification: Primary: 37G35, 37H20; Secondary: 37N40.
Citation: Jochen Jungeilges, Trygve Kastberg Nilssen, Tatyana Perevalova, Alexander Satov. Transitions between metastable long-run consumption behaviors in a stochastic peer-driven consumer network. Discrete & Continuous Dynamical Systems - B, 2021, 26 (11) : 5849-5871. doi: 10.3934/dcdsb.2021232
I. Bashkirtseva and L. Ryashko, Stochastic sensitivity of the closed invariant curves for discrete-time systems, Phys. A, 410 (2014), 236-243. doi: 10.1016/j.physa.2014.05.037. Google Scholar
I. Bashkirtseva, L. Ryashko and A. Sysolyatina, Analysis of stochastic effects in Kaldor-type business cycle discrete model, Commun. Nonlinear Sci. Numer. Simul., 36 (2016), 446-456. doi: 10.1016/j.cnsns.2015.12.020. Google Scholar
J. Benhabib and R. H. Day, Rational choice and erratic behaviour, Rev. Econom. Stud., 48 (1981), 459-471. doi: 10.2307/2297158. Google Scholar
H. W. Broer, M. Golubitsky and G. Vegter, Geometry of resonance tongues, Singularity Theory, 327–356, World Sci. Publ., Hackensack, NJ, (2007). https://www.researchgate.net/publication/252963138_Geometry_of_resonance_tongues doi: 10.1142/9789812707499_0012. Google Scholar
E. Ekaterinchuk, J. Jungeilges, T. Ryazanova and I. Sushko, Dynamics of a minimal consumer network with bi-directional influence, Commun. Nonlinear Sci. Numer. Simul., 58 (2018), 107-118. doi: 10.1016/j.cnsns.2017.04.007. Google Scholar
E. Ekaterinchuk, J. Jungeilges, T. Ryazanova and I. Sushko, Dynamics of a minimal consumer network with uni-directional influence, Journal of Evolutionary Economics, 27 (2017), 831-857. doi: 10.1007/s00191-017-0517-5. Google Scholar
M. I. Freidlin and A. D. Wentzell, Random Perturbations of Dynamical Systems, 3rd edition, Springer, Heidelberg, 2012. doi: 10.1007/978-3-642-25847-3. Google Scholar
W. Gaertner and J. Jungeilges, A non-linear model of interdependent consumer behaviour, Economics Letters, 27 (1988), 145-150. doi: 10.1016/0165-1765(88)90087-0. Google Scholar
W. Gaertner and J. Jungeilges, "Spindles" and coexisting attractors in a dynamic model of interdependent consumer behavior: A note, Journal of Economic Behavior & Organization, 21 (1993), 223-231. doi: 10.1016/0167-2681(93)90049-U. Google Scholar
J. Jungeilges, E. Maklakova and T. Perevalova, Stochastic sensitivity of bull and bear states, Journal of Economic Interaction and Cooperation, (2021). doi: 10.1007/s11403-020-00313-2. Google Scholar
J. Jungeilges and T. Ryazanova, Transitions in consumption behaviors in a peer-driven stochastic consumer network, Chaos Solitons Fractals, 128 (2019), 144-154. doi: 10.1016/j.chaos.2019.07.042. Google Scholar
J. Jungeilges, T. Ryazanova, A. Mitrofanova and I. Popova, Sensitivity analysis of consumption cycles, Chaos, 28 (2018), 055905, 12 pp. doi: 10.1063/1.5024033. Google Scholar
Z. Li, K. Guo, J. Jiang and L. Hong, Study on critical conditions and transient behavior in noise-induced bifurcations, Control of Self-Organizing Nonlinear Systems, 169–187, Underst. Complex Syst., Springer, [Cham], (2016). doi: 10.1007/978-3-319-28028-8_9. Google Scholar
G. Mil'shtein and L. Ryashko, The first approximation in the quasipotential problem of stability of non-degenerate systems with random perturbations, Journal of Applied Mathematics and Mechanics, 59 (1995), 47-56. Google Scholar
A. Panchuk, CompDTIMe: Computing one-dimensional invariant manifolds for saddle points of discrete time dynamical systems, Gecomplexity Discussion Paper Series 11, Action IS1104 "The EU in the new complex geography of economic systems: Models, tools and policy evaluation", 2015, https://EconPapers.repec.org/RePEc:cst:wpaper:11. Google Scholar
L. Ryashko, Noise-induced transformations in corporate dynamics of coupled chaotic oscillators, Mathematical Methods in the Applied Sciences. doi: 10.1002/mma.6578. Google Scholar
A. N. Silchenko, S. Beri, D. G. Luchinsky and P. V. E. McClintock, Fluctuational transitions through a fractal basin boundary, Phys. Rev. Lett., 91 (2003), 174104. doi: 10.1103/PhysRevLett.91.174104. Google Scholar
E. Slepukhina, L. Ryashko and P. Kügler, Noise-induced early afterdepolarizations in a three-dimensional cardiac action potential model, Chaos, Solitons & Fractals, 131 (2020), 109515. doi: 10.1016/j.chaos.2019.109515. Google Scholar
Y. Tadokoro, H. Tanaka and M. I. Dykman, Noise-induced switching from a symmetry-protected shallow metastable state, Scientific Reports, 10 (2020), 1-10. Google Scholar
J. Xu, T. Zhang and K. Song, A stochastic model of bacterial infection associated with neutrophils, Appl. Math. Comput., 373 (2020), 125025, 12 pp. doi: 10.1016/j.amc.2019.125025. Google Scholar
Z. T. Zhusubaliyev, E. Soukhoterin and E. Mosekilde, Quasiperiodicity and torus breakdown in a power electronic dc/dc converter, Math. Comput. Simulation, 73 (2007), 364-377. doi: 10.1016/j.matcom.2006.06.021. Google Scholar
Figure 1. Bird's eye view of the parameter plane $ D $, where remaining parameters have been fixed at $ (p_x, p_y) = \left(\frac{1}{4}, 1\right), \ $$ (b_1, b_2) = (10,20), \ \alpha_1 = 0.0002, \ \alpha_2 = 0.00052$
Figure 3">Figure 2. Bifurcation diagram for $ D^N $ with $ (p_x, p_y) = \left(\frac{1}{4}, 1\right), \ (b_1, b_2) = (10,20), \ \alpha_1 = 0.0002, \ \alpha_2 = 0.00052 $. $ NS $ indicates the Neimark-Sacker bifurcation curve related to the fixed point. $ SN_3 $ curve gives the loci at which a saddle 3-cycle is born together with the attracting 3-cycle ($ C_3 $) via a saddle-node bifurcation. $ NS_3 $ designates the Neimark-Sacker bifurcation curve of the 3-cycle. The horizontal line through $ D_{21} = 0.0075 $ indicates the interval of parameter values for which our study of transitions between coexisting attractors focuses on. The $ NS $ and $ NS_3 $ curves are crossed twice at $ \star $ (red star) and $ \star $ (green star). Also the saddle node bifurcation curve $ SN_3 $ is intersected twice. The intersection points are indicated by $ \bullet $ (blue circles). Related details are revealed in Figure 3
Figure 2(a) and an enlargement (b) focussing on the interval $ 0.00145 \leq D_{12} \leq 0.001975 $ over which two attractors coexists">Figure 3. For $ D_{21} = 0.0075 $, we give bifurcation diagrams for $ 0 \leq D_{12} \leq 0.00245 $ linked to the horizontal black line in Figure 2(a) and an enlargement (b) focussing on the interval $ 0.00145 \leq D_{12} \leq 0.001975 $ over which two attractors coexists
Figure 4. Bifurcation diagram for the case of additive noise with $ \varepsilon = 0.1 $ ($ D_{21} = 0.0075 $). If the initial value $ (x_{1,0},x_{2,0}) $ lies on the deterministic blue (red) attractor, then elements of the trajectory are colored light blue (red)
Figure 5. Bifurcation diagram for the case of parametric noise with $ \varepsilon = 0.1 $ ($ D_{21} = 0.0075 $). If $ (x_{1,0},x_{2,0}) $ lies on the deterministic blue (red) attractor, then elements of the trajectory are colored light blue (red)
Figure 6. Confidence sets for fixed point $ E $ ($ \bullet $) and 3-cycle $ C_3 $ ($ \bullet $) at $ D_{12} = 0.00195 $, $ D_{21} = 0.0075 $ with trajectories superimposed ($ \varepsilon = 0.1 $ (white), $ \varepsilon = 0.05 $ (grey))
Figure 7. The top panel shows the graph of the sensitivity function for $ \Gamma $, i.e. a plot of the maximum eigenvalue ($ \lambda $) of the sensitivity matrix at a point on $ \Gamma $ versus the angle $ \phi $ identifying the point on the attractor. The subfigures on the bottom give the confidence sets $ \mathcal{C}(\Gamma, \varepsilon = 0.1) $ at $ D_{12} = 0.00157 $ for additive (a) and parametric noise (b)
Figure 8. The figure shows the attractor $ \Gamma_3 $ at $ D_{12} = 0.0017 $ (a), the sensitivity functions for $ \Gamma_3 $ (b) as well as the related confidence sets $ \mathcal{C}(\Gamma_3, \varepsilon = 0.1) $ for additive (c) and parametric noise (d)
Figure 9. 1D bifurcation diagrams ((a),(b)) and critical intensities for coexisting attractors (c) with additive (solid lines) and parametric (dashed lines) noise for $ D_{12} \in D^{ms} $
Figure 10. For $ (D_{12}, D_{21}) = (0.001706,0.0075) $ we show the state space representation of the coexisting attractors $ \Gamma_3 $ (dark red curves) and $ \Gamma $ (blue curve) together with their immediate basins $ \mathcal{B}(\Gamma_3) $ (light red) and $ \mathcal{B}(\Gamma) $ (light blue). The confidence sets are superimposed ($ \varepsilon \in \{ 0.1, 0.2, 0.3\} $). Periodic points (red triangles) of the 3-saddle cycle are exhibited together with its stable (black lines) and unstable (red lines) manifolds. In addition, the unstable fixed point $ E $ (blue circle) and the unstable 3-cylce (periodic point given by red circles) are given
Figure 11. ( $ \Gamma $, $ \Gamma_3 $) at $ (D_{12}, D_{21}) = (0.001706,0.0075) $ with sample trajectory (single simulation run with $ \varepsilon = 0.2 $) superimposed
Jochen Jungeilges Trygve Kastberg Nilssen Tatyana Perevalova Alexander Satov | CommonCrawl |
Intensive Care Medicine Experimental
Electrolyte-based calculation of fluid shifts after infusing 0.9% saline in severe hyperglycemia
Robert Svensson 1, Joachim Zdolsek 2,3, Marcus Malm 4 & Robert G. Hahn 5,6 (ORCID: orcid.org/0000-0002-1528-3803)
Intensive Care Medicine Experimental volume 8, Article number: 59 (2020)
Early treatment of severe hyperglycemia involves large shifts of body fluids that entail a risk of hemodynamic instability. We studied the feasibility of applying a new electrolyte equation that estimates the degree of volume depletion and the distribution of infused 0.9% saline in this setting.
The new equation was applied to plasma and urinary concentrations of sodium and chloride measured before and 30 min after a 30-min infusion of 1 L of 0.9% saline on two consecutive days in 14 patients with severe hyperglycemia (mean age 50 years). The extracellular fluid (ECF) volume was also estimated based on the volume dilution kinetics of chloride.
On day 1, the baseline ECF volume amounted to 11.5 L. The saline infusion expanded the ECF space by 160 mL and the intracellular fluid space by 375 mL. On day 2, the ECF volume was 15.5 L, and twice as much of the infused fluid remained in the ECF space. The chloride dilution kinetics yielded baseline ECF volumes of 11.6 and 15.2 L on day 1 and day 2, respectively. No net uptake of glucose to the cells occurred during the two 1-h measurement periods despite insulin administration in the intervening time period.
The electrolyte equation was feasible to apply in a group of hyperglycemic patients. The ECF space was 3 L smaller than expected on admission but normal on the second day. Almost half of the infused fluid was distributed intracellularly.
High competence is needed to manage fluid, electrolyte, and insulin therapy in patients with poorly controlled diabetes [1]. The severely hyperglycemic patient is always dehydrated, and insulin might aggravate the situation by relocating extracellular fluid (ECF) to the cells, whereby hemodynamic collapse can occur. Adequate matching between fluid and insulin is delicate, and the clinician has to follow early clinical signs to prevent complications [2]. However, few details are known about the fluid shifts that actually occur when treating severely hyperglycemic patients [3].
We have developed an equation system, based on a fluid challenge with 0.9% saline followed by measurements of sodium and chloride, that estimates the severity of the dehydration and also the fluid-induced shift of ECF volume into the cells.
The aim of the present study was to explore the feasibility of applying the new equation to a group of patients likely to be volume depleted. Severely hyperglycemic patients were considered suitable for this purpose. They receive 0.9% saline and are monitored with measurements of sodium and chloride for clinical reasons.
Our hypothesis was that the calculations would enhance knowledge about the severity of the dehydration and the degree of fluid shifts that occur in the treatment of hyperglycemia. The ECF volume was also calculated with volume kinetic analysis of the plasma electrolyte concentrations over time [4], which is an approach that has been validated by bromide and iohexol dilution [5].
An infusion experiment was performed on two consecutive days in 14 fully conscious patients who had been admitted for treatment of poorly controlled diabetes to the intensive care unit (ICU) at the Vrinnevi Hospital in Norrköping, Sweden, between 2014 and 2019. The Regional Ethics Committee of Linköping had approved the protocol (Ref. 2014/123-31), and the study was registered at ClinicalTrials.gov NCT02172092 before any patient was enrolled. Each patient gave his/her oral and written consent for participation.
The patients underwent the first infusion experiment soon after their arrival to the ICU and had the repeat infusion on the next day. Each infusion consisted of 1 L of 0.9% saline over 30 min followed by a free interval of 30 min.
Arterial blood was withdrawn on 9 occasions between 0 and 60 min. The samples were analyzed for plasma concentrations of sodium (Na), chloride (Cl), and ionized calcium (Ca) on a Radiometer ABL 800 FLEX blood gas machine (Radiometer Medical, Copenhagen, Denmark) with a coefficient of variation of approximately 1%.
The patients had a bladder catheter through which the excreted urine was collected, measured, and sampled at 60 min. Urinary electrolytes were measured on the Cobas 8000. The excretion of glucose and electrolytes was taken as the product of the urine volume and the urinary concentration of the solute in question.
Monitoring consisted of pulse oximetry and invasive arterial pressures.
Electrolyte equation
The background to the proposed electrolyte calculation comes from a previously used mass balance equation based on changes in Na [6, 7]. By assuming that the ECF volume at baseline (time 0) is 20% of the body weight [5], we have only one unknown (∆ICF, the flow of fluid to or from the intracellular space) in the following equation, which covers the time period between the baseline and a later time (t). In the present study, we use the time frame from 0 to 60 min.
$$ \frac{\mathrm{Na_o}\,\mathrm{ECF_o} + (\mathrm{infused}-\mathrm{excreted})\,\mathrm{Na}}{\mathrm{ECF_o} + (\mathrm{infused}-\mathrm{excreted})\,\mathrm{volume} + \Delta\mathrm{ICF}} = \mathrm{Na_t} $$
All data relating to Na appear in the numerator and the corresponding data for fluid volumes in the denominator. The measured Nat should equal the left side of the equation if ∆ICF is zero. Any deviation in Nat from its theoretical value means that ∆ICF is separated from zero.
Because ECFo cannot be assumed to be 20% of the body weight in poorly controlled diabetes, we strived to develop an equation that determines both ∆ICF and ECFo. To solve both unknowns, an equation system was set up that considers the changes in both the Na and Cl concentrations after having infused isotonic saline. Chloride occupies the same physiological space as sodium [8, 9], and the content of both ions in 0.9% saline (154 mmol/L) deviate markedly from the plasma concentrations.
By rearrangement, the equation system holds that the final Nat and Clt after infusion of sodium chloride are given by:
$$ \left\{ \begin{aligned} \frac{\mathrm{Na_o}\,\mathrm{ECF_o} + (\mathrm{infused}-\mathrm{excreted})\,\mathrm{Na}}{\mathrm{ECF_o} + (\mathrm{infused}-\mathrm{excreted})\,\mathrm{fluid} + \Delta\mathrm{ICF}} &= \mathrm{Na_t} \\ \frac{\mathrm{Cl_o}\,\mathrm{ECF_o} + (\mathrm{infused}-\mathrm{excreted})\,\mathrm{Cl}}{\mathrm{ECF_o} + (\mathrm{infused}-\mathrm{excreted})\,\mathrm{fluid} + \Delta\mathrm{ICF}} &= \mathrm{Cl_t} \end{aligned} \right. $$
As ∆ICF and ECFo must be the same in both equations, the following rearrangement can be made, letting the electrolyte changes after the fluid challenge estimate ECFo:
$$ \mathrm{ECF_o} = \frac{\left[\mathrm{Na_t}\,(\mathrm{infused}-\mathrm{excreted})\,\mathrm{Cl}\right] - \left[\mathrm{Cl_t}\,(\mathrm{infused}-\mathrm{excreted})\,\mathrm{Na}\right]}{(\mathrm{Na_o}\,\mathrm{Cl_t}) - (\mathrm{Na_t}\,\mathrm{Cl_o})} $$
In turn, ∆ICF that has occurred from baseline to time (t) can be solved as:
$$ \Delta\mathrm{ICF} = \frac{\mathrm{Na_o}\,\mathrm{ECF_o} + (\mathrm{infused}-\mathrm{excreted})\,\mathrm{Na}}{\mathrm{Na_t}} - \mathrm{ECF_o} - (\mathrm{infused}-\mathrm{excreted})\,\mathrm{fluid} $$
The change in ECF volume up to the later time t can now be obtained as
$$ \Delta\mathrm{ECF} = (\mathrm{infused}-\mathrm{excreted})\,\mathrm{fluid} - \Delta\mathrm{ICF} $$
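As an illustration of how these mass-balance expressions fit together, the following Python sketch evaluates ECFo, ∆ICF, and ∆ECF from one set of baseline and 60-min electrolyte data; the numerical inputs are invented placeholders, not values from the study.

```python
# Minimal sketch of the electrolyte mass-balance equations above.
# All numerical inputs are hypothetical placeholders, not study data.

def electrolyte_equation(na_0, na_t, cl_0, cl_t,
                         na_balance, cl_balance, fluid_balance):
    """Return (ECF_o, dICF, dECF) in litres.

    na_0, na_t, cl_0, cl_t : plasma Na and Cl (mmol/L) at baseline and time t
    na_balance, cl_balance : infused minus excreted Na and Cl (mmol)
    fluid_balance          : infused minus excreted fluid volume (L)
    """
    # Baseline ECF volume from the two-ion system (rearranged equation above)
    ecf_0 = (na_t * cl_balance - cl_t * na_balance) / (na_0 * cl_t - na_t * cl_0)
    # Intracellular shift from the Na mass balance
    d_icf = (na_0 * ecf_0 + na_balance) / na_t - ecf_0 - fluid_balance
    # Remaining expansion of the extracellular space
    d_ecf = fluid_balance - d_icf
    return ecf_0, d_icf, d_ecf

# Hypothetical group-mean values, for illustration only
ecf_0, d_icf, d_ecf = electrolyte_equation(
    na_0=130.0, na_t=133.0, cl_0=95.0, cl_t=101.0,
    na_balance=120.0, cl_balance=130.0, fluid_balance=0.55)
print(f"ECF_o = {ecf_0:.1f} L, dICF = {d_icf*1000:.0f} mL, dECF = {d_ecf*1000:.0f} mL")
```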
Volume kinetic analysis
In the present study, a one-compartment volume kinetic analysis was applied to the infused fluid load, using the Cl concentration measured at 0, 10, 20, 30, 35, 40, 45, 50, and 60 min as index of dilution. However, the dilution of Cl could not be applied directly as a considerable amount of Cl was infused. Therefore, the kinetic analysis was based on the volume of distribution of Cl, denoted as v, which constantly changes as fluid is infused and eliminated. The key output measure is still ECFo.
In the kinetic model, fluid is infused into ECFo, which is then expanded by (v − ECFo) (Fig. 1a). Elimination occurs at a rate proportional to the dilution of the ECF volume, i.e., (v − ECFo)/ECFo (dependent variable), by the renal fluid clearance, CL (as 0.9% saline is eliminated via the kidneys). The change of the ECF volume is then described by the following differential equation [4, 5]:
$$ \frac{\mathrm{d}v}{\mathrm{d}t} = \mathrm{infusion\ rate} - \mathrm{CL}\,\frac{v - \mathrm{ECF_o}}{\mathrm{ECF_o}} $$
a Schematic drawing of the model used for the analysis of chloride dilution kinetics. CL is the renal fluid clearance and "Chloride" represents the urinary excretion of chloride ions. b Chloride dilution of the extracellular fluid space (ECF). Open circles are the measured chloride dilutions corrected for the addition of Cl with the 0.9% saline and the connected filled circles the dilution after correction also for urinary losses of chloride
Correction for losses and additions of Cl was done by taking the total amount of chloride in the ECF volume as the product of ECFo (as given by the electrolyte equation) and Clo to which the infused amount of chloride was added. Losses of chloride were taken as the total excreted amount at 60 min divided by the area under the curve (AUC) for the dilution of plasma Cl but considering only a fraction of AUC for the time segment up to each time t [4]. Hence, the volume of distribution of Cl, which is chloride amount/Clt and represented by the symbol v, was given by:
$$ v = \left[\mathrm{ECF_o}\,\mathrm{Cl_o} + \mathrm{infused\ Cl} - \frac{\mathrm{excreted\ Cl\ at\ 60\ min}}{\mathrm{AUC\ for\ dilution\ of\ plasma\ Cl}}\,(t - t_\mathrm{o})\,\mathrm{Cl_t}\right] \Big/ \mathrm{Cl_t} $$
where each of the 9 time segments is denoted (t − to). ECFo was the key outcome measure of the subsequent volume kinetic calculation. The kinetic calculations were performed by least-squares regression based on a Gauss-Newton routine, using Matlab version 4.2 (MathWorks Inc., Natick, Mass.) [4].
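A rough outline of such a fit is sketched below in Python with SciPy rather than the Matlab routine used in the study; the time points mirror the sampling schedule, but the "observed" distribution volumes, starting guesses, and bounds are invented for illustration.

```python
# Sketch of a one-compartment fit of the chloride-dilution model
# dv/dt = infusion rate - CL*(v - ECF_o)/ECF_o, estimating ECF_o and CL
# by least squares. The "observed" volumes below are invented placeholders.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

t_obs = np.array([0, 10, 20, 30, 35, 40, 45, 50, 60.0])      # min
v_obs = np.array([11.6, 11.9, 12.2, 12.4, 12.35, 12.3,
                  12.25, 12.2, 12.1])                          # L, hypothetical

def infusion_rate(t, volume=1.0, duration=30.0):
    """1 L of 0.9% saline given over the first 30 min (L/min)."""
    return volume / duration if t <= duration else 0.0

def simulate(params, t_eval):
    ecf_0, cl = params
    rhs = lambda t, v: [infusion_rate(t) - cl * (v[0] - ecf_0) / ecf_0]
    sol = solve_ivp(rhs, (0, t_eval[-1]), [ecf_0], t_eval=t_eval, max_step=1.0)
    return sol.y[0]

residuals = lambda p: simulate(p, t_obs) - v_obs
fit = least_squares(residuals, x0=[12.0, 0.05], bounds=([1.0, 0.0], [40.0, 1.0]))
ecf_0_hat, cl_hat = fit.x
print(f"ECF_o ~ {ecf_0_hat:.1f} L, CL ~ {cl_hat*1000:.0f} mL/min")
```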
An exploratory analysis of the dilution of plasma Ca was also performed. The dependent variable was then (Cao − Cat)/Cat (here, it is important that the diluted concentration is placed in the denominator). A non-compartment model analysis (NCA) was chosen because the maximum dilution occurred with a delay of 10–15 min from the end of the infusion. The Phoenix 8.2 software (Pharsight, St. Louis, MO) was used for the analysis, which was designed as "exploratory" because urinary losses for Ca were not measured. However, these losses are small in response to infusion of 0.9% saline [10, 11], and they primarily affect the clearance of the infused fluid volume.
A kinetic analysis of the blood hemoglobin data from these patients has been published previously [12].
Group data are presented as the mean and standard deviation (SD), and the paired t test was used for selected statistical comparisons of changes during the experiments. Only mean values were applied to the equations because calculations based on individuals would be too much affected by the precision of the electrolyte measurements. Calculations were conducted using the StatView SE+Graphics v.1.02 software (Abacus Concepts, NJ), and P < 0.05 was accepted as statistically significant.
The sample size was not determined by a power calculation, as the purpose of the study was to assess the feasibility of estimating the body fluid volumes from changes in electrolytes after infusing 0.9% saline.
The 14 patients were aged between 18 and 86 (mean, 50) years and had body weight of 71 (SD, 11) kg. Eight patients had been given 1 L of buffered Ringer's solution before admission to the ICU, and 3 had received 6–10 units of insulin. The second infusion took place 20.7 (4.2) h after the first one. The patients had then received a total of 7.0 (1.9) L of crystalloid fluid and 26 (16) units of insulin since their arrival to the ICU.
Blood and urine chemistry
Blood and urine chemistry and hemodynamic data are shown in Table 1. Before the first infusion, the patients had severe hyperglycemia (mean 35.4 mmol/L). The first infusion reduced the plasma lactate concentration (P < 0.02) but had little acute effect on the systemic acidosis. Plasma glucose decreased by 20% (P < 0.001). There was profuse glucosuria that averaged almost 100 mmol during the hour of study in response to the saline. The glucosuria was only 1/4 as large during the second infusion, when many of the blood chemistry parameters had returned to within the normal range.
Table 1 Fluid balance variables
The electrolyte equation was applied for the entire period between 0 and 60 min. The calculation employed the mean plasma concentrations of Na and Cl at 0 and 60 min and the mean excreted fluid volume and amounts of Na and Cl at 60 min.
On day 1, the ECF volume before the first infusion was initiated amounted to 11.5 L, the expansion of the ECF space from the infusion was 160 mL, and the expansion of the ICF was 375 mL. The remainder of the 1 L of infused fluid had been excreted as urine.
On day 2, the ECF volume before the infusion started was 15.5 L, the expansion of the ECF volume was 377 mL, and the expansion of the ICF was 385 mL.
Volume kinetic analyses
The ECF volume was estimated by volume kinetic analysis based on the dilution of the mean plasma chloride concentration for the 14 patients on the 9 points of measurement. A schematic drawing of the kinetic model is shown in Fig. 1a and the effect of the correction of the dilution for chloride excretion is illustrated in Fig. 1b.
This analysis based on Cl showed that the size of ECFo on day 1 was 11.6 (0.8) L. On day 2, the size of V amounted to 15.2 (1.1) L.
The exploratory analysis based on Ca showed that the ECFo at steady state on day 1 was 11.7 L. On day 2, the size of V amounted to 14.8 L.
Integrated view
Figure 2 highlights the estimates of the ECF volume, and Fig. 3 gives an integrated view of the amounts of glucose in the ECF and urine volumes, and also the flux of glucose to or from the ICF (= the difference between the other two variables). Figure 3 shows that glucose content of the ECF was reduced by 50% between day 1 and day 2, and the glycosuria dropped by 75%. However, the output of glucose from the cells remained at 25–30 mmol/h despite the administration of insulin.
Estimates of the extracellular fluid volume (ECF) on day 1 and day 2 obtained by the electrolyte equation and by kinetic analysis of the dilution data on chloride and calcium
a Glucose content of the extracellular fluid before and after infusion of 0.9% saline on two consecutive days. The data are the product of plasma glucose and the estimates of extracellular fluid volume as obtained by the new electrolyte equation. b Excreted amount of glucose during the 60-min infusion experiment. c Net release of glucose from the body cells during the 60-min experiment needed to account for the excreted glucose and the amount remaining in the extracellular fluid
This study comprises patients who had been admitted to the ICU for treatment of poorly controlled diabetes, most of whom had developed acidosis. Only a few had received modest initial treatment with fluid and insulin before being enrolled in the study. We applied a new mass balance approach based on electrolyte shifts to estimate the distribution of short-term infusions of crystalloid fluid on two consecutive days.
The results show that the ECF space had a volume of approximately 11.5 L on admission, which is almost 3 L less than expected based on tracer measurements in healthy individuals [5]. The total fluid deficit would certainly be much greater if the ICF had been included in this figure.
Most of the subsequently infused fluid distributed equally between the urine and the ICF space, whereas only 1/6 hydrated the ECF. On day 2, the deficit in the ECF had been corrected, and perhaps even slightly overcompensated. The electrolyte equation further implicated, in light of a smaller diuretic response to the saline, that the infused volume now distributed equally between the ECF and the ICF.
Isotonic (0.9%) saline is a fluid composed to remain only in the ECF volume, and the pronounced distribution to the ICF on day 1 may then seem odd. Prior to this study, we believed that administration of insulin would promote relocation of fluid to the ICF. However, not even on day 2 had the insulin reversed the catabolic state and promoted uptake of glucose to the ICF volume. This finding might be due to insulin resistance being maintained by the hyperglycemic environment ("glucose toxicity") [13].
The intracellular distribution of infused fluid seemed to be better explained by the glycosuria. When diabetic hyperglycemia develops, gluconeogenesis and glycogenolysis release glucose that translocates ICF water to the ECF by virtue of osmosis. When this glucose is excreted, the osmotic "power" to hold the translocated fluid in the ECF is lost, and it returns to the cells. The lack of glucose uptake also suggests that glycosuria was the key factor, and perhaps the only one, responsible for the decrease in plasma glucose during the first 24 h of treatment.
Intravenous fluid should be the initial treatment of poorly controlled diabetes, and the fluid shifts reported in the present study highlight why they entail a risk of inducing hemodynamic instability [1, 2]. The effectiveness of insulin seems to be poor and, therefore, the effect of early insulin treatment at this stage should mainly be to alleviate the acidosis. The recommended administration of insulin is 0.1 units/kg/h [14]. The risk of hypovolemic hypotension would probably be much greater if pronounced early uptake of glucose to the ICF occurred.
Bergamasco et al. [15] have developed equations to help estimate fluid volume derangements by comparing measured with estimated normal values of plasma sodium, plasma glucose, and serum osmolality. Olde Engberink et al. compared several similar equations with new experimental data [16]. All these equations make assumptions about the size of body fluid volumes at baseline, which is problematic in the presence of severe fluid derangements. Alternatively, baseline volumes can be measured by radioactive tracer technologies, but these are hardly ethical to apply in acute clinical situations.
Our current approach overcomes these shortcomings by analyzing the electrolyte changes induced by a fluid challenge. No radioactive tracer is needed and no assumptions about body fluid volumes or "normal" electrolyte concentrations have to be made. Only two plasma samples and one urinary sample are required for the calculations, and only two electrolyte concentrations need to be analyzed. The measurements, as well as the fluid load, are likely to be implemented for clinical purposes anyway, which makes ethical aspects a less crucial issue.
The new electrolyte equation was derived from an existing equation used to estimate ICF distribution of hypo-osmotic irrigating fluid in patients with an assumed normal ECF space [6, 7]. The change in plasma sodium resulting from dilution with sodium-free fluid can be predicted by assuming a uniform distribution of sodium (not fluid) within the ECF volume. Any deviation from the predicted change implies that fluid has passed across the cell membrane. However, in poorly controlled diabetes, one cannot assume that ECF is normal. Therefore, we combined the sodium equation with a similar equation using chloride shifts, which allowed the size of ECF to be estimated because both ions should indicate the same flow of fluid across the cell membrane.
The new equation is a mixture of dilution and electrolyte kinetics and provides information about both the degree of pre-infusion volume depletion and the subsequent fluid shift between the ECF and the ICF during the measurement period. The first parameter, but not the second, can also be obtained by applying volume kinetic analysis to a serial analysis of extracellular electrolytes [4, 5].
We used chloride for such a complementary calculation, and estimates of ECFo were quite similar to those found by the electrolyte equation. However, the kinetic analysis of Cl may not be considered a control method because some data were used in both approaches, although the kinetic analyses used many more of the performed measurements. By contrast, the volume kinetic analysis based on plasma Ca was completely independent of the electrolyte equation, as 0.9% saline solution does not contain calcium. Unfortunately, the Ca dilution showed a delayed time course that necessitated the use of an alternative kinetic model not planned for when the study started.
Limitations include that only 1 L of saline was infused, and the changes in the plasma concentrations of sodium and chloride were not large enough to safely overcome the associated measurement errors in individual patients. With the present set-up, we recommend that the equation be applied only to group data. The electrolyte changes should be greater to allow reasonably safe application of the new equation to individual patients. Hence, the electrolyte equation was probably more accurate on day 1 than on day 2 because the 0.9% saline induced greater changes in plasma electrolytes at that time. Infusing a larger volume of saline, 2 L rather than 1 L, might overcome the confounding effect of errors in sampling and analysis of electrolyte concentrations.
The electrolyte equation does not take non-osmotic sodium, which is found in the bone, skin, and glycocalyx, into account. Olde Engberink et al. showed that a significant amount of sodium was stored in the body when hypertonic saline was infused rapidly in sodium-depleted volunteers [16]. However, the surplus of sodium in the 0.9% saline we infused is limited. Furthermore, the glycocalyx is damaged in diabetic hyperglycemia [17] and, therefore, might lack adequate storage capacity [18].
Limitations also include that plasma sodium, but not plasma chloride, is known to have a somewhat lower concentration in the interstitial fluid than in the plasma because most plasma proteins have a negative charge (the Donnan effect). Experiments with radioactive Na and Cl show a 3% smaller ECFo for Cl than for Na, although these calculations were not corrected for urinary losses of tracer electrolytes [9]. The relevance of the above concerns is unclear at this time, but the body volumes obtained by the electrolyte equation might be regarded as functional volumes until validated with isotope dilution. Nevertheless, kinetic analysis based on sodium dilution during infusion of isotonic (5%) mannitol has previously shown excellent correlation with bromide and iohexol measurements of the ECF volume in volunteers [5].
In conclusion, a new mass balance equation based on plasma and urinary electrolytes, as well as fluid volume kinetics based on chloride dilution, showed an extracellular fluid deficit of approximately 3 L in patients with poorly controlled diabetes admitted to the ICU. The first liter of 0.9% saline was distributed mainly to the ICF and the urine. Hydration of the ECF improved when the infusion was repeated on the next day. The decrease in plasma glucose during the first hour of fluid treatment was due to osmotic diuresis.
All data are available on request to the corresponding author.
Kitabchi AE, Umpierrez GE, Murphy MB et al (2004) American Diabetes Association. Hyperglycemic crises in diabetes. Diabetes Care 27(suppl 1):S94–S102
Menzel R, Zander E, Jutzi E (1976) Treatment of diabetic coma with low-dose of insulin. Endokrinologie 67:230–239
Dhatariya KK, Vellanki P (2017) Treatment of diabetic ketoacidosis (DKA)/hyperglycemic hyperosmolar state (HHS): novel advances in the management of hyperglycemic crises (UK versus USA). Curr Diab Rep 17:33
Hahn RG (2003) Measuring the sizes of expandable and non-expandable body fluid spaces by dilution kinetics. Austral Asian J Cancer 2:215–219
Zdolsek J, Lisander B, Hahn RG (2005) Measuring the size of the extracellular space using bromide, iohexol and sodium dilution. Anesth Analg 101:1770–1777
Hahn RG (2001) Natriuresis and "dilutional" hyponatremia after infusion of glycine 1.5%. J Clin Anesth 13:167–174
Hahn RG, Drobin D (2003) Rapid water and slow sodium excretion of Ringer's solution dehydrates cells. Anesth Analg 97:1590–1594
Gamble JL, Robertson JS, Hannigan C, Foster CG, Farr LE (1953) Chloride, bromide, sodium, and sucrose spaces in man. J Clin Invest 32:483–489
Dou Y, Zhu F, Kotanko P (2012) Assessment of extracellular fluid volume and fluid status in hemodialysis patients: current status and technical advances. Semin Dial 25:377–387
Nakamura T, Ichikawa S, Sakamaki T, Sato K, Fujie M, Kurahsina, et al. (1991) Effect of saline infusion on urinary calcium excretion in essential hypertension. Am J Hypertens 4:113–118
Foley KF, Bocuzzi L (2010) Urine calcium: laboratory measurement and clinical utility. Lab Med 41:683–686
Hahn RG, Svensson R, Zdolsek J (2020) Kinetics of crystalloid fluid in hyperglycemia; an open-label exploratory clinical trial. Acta Anaesthesiol Scand 64:1177–1186
Tomás E, Lin Y-S, Dagher Z, Saha A, Luo Z, Ruderman NB (2002) Hyperglycemia and insulin resistance: possible mechanisms. Ann N Y Acad Sci 967:43–51
Karslioglu French E, Donihi AC, Korytkowski MT (2019) Diabetic ketoacidosis and hyperosmolar syndrome: review of acute decompensated diabetes in adult patients. BMJ 365:1114
Bergamasco L, Sainaghi PP, Castello L, Vitale E, Casagranda I, Bartoli E (2012) Assessing water-electrolyte changes of hyperosmolar coma. Exp Clin Endocrinol Diabetes 120:296–302
Olde Engberink RHG, Rorije NMG, van den Born BJH, Voigt L (2017) Quantification of nonosmotic sodium storage capacity following acute hypertonic saline infusion in healthy individuals. Kidney Int 91:738–745
Nieuwdorp M, van Haeften TW, Gouverneur MC, Mooij HL, van Lieshout MH, Levi M, Meijers JC, Holleman F, Hoekstra JB, Vink H, Kastelein JJ, Stroes ES (2006) Loss of endothelial glycocalyx during acute hyperglycemia coincides with endothelial dysfunction and coagulation activation in vivo. Diabetes 55:480–486
Bertram A, Stahl K, Hegermann J, Haller H (2016) The glycocalyx layer. In: Hahn RG (ed) Clinical Fluid Therapy in the Perioperative Setting, 2nd edn. Cambridge University Press, Cambridge, pp 73–81
The authors are grateful to the staff of the intensive care unit at the Vrinnevi Hospital in Norrköping, Sweden, for assistance during the data collection.
Public funds from Region Östergotland, Sweden, were used (LIO – 697501). The fund had no influence on how the study was conducted. Open access funding provided by Karolinska Institute.
Department of Anesthesiology and Intensive Care, Vrinnevi Hospital, Norrköping, Sweden
Robert Svensson
Department of Anesthesiology and Intensive Care, Linköping University, Linköping, Sweden
Joachim Zdolsek
Department of Biomedical and Clinical Sciences (BKV), Linköping University, Linköping, Sweden
Swedish Defence Research Agency, Linköping, Sweden
Marcus Malm
Research Unit, Södertälje Hospital, 152 40, Södertälje, Sweden
Robert G. Hahn
Karolinska Institutet at Danderyds Hospital (KIDS), Stockholm, Sweden
RGH helped in planning the study, performed the kinetic analysis, and co-wrote the manuscript. SR organized and supervised the collection of data. JZ planned the study and co-wrote the manuscript. The authors read and approved the final manuscript.
Correspondence to Robert G. Hahn.
The Regional Ethics Committee of Linköping approved the protocol (Ref. 2014/123-31), and the study was registered at ClinicalTrials.gov NCT02172092 on June 24, 2014, which is before any patient had been recruited (Principal Investigator: Joachim Zdolsek).
RGH holds a grant from Grifols for the study of 20% albumin as infusion fluid. RS and JZ declare that they have no conflict of interest.
Svensson, R., Zdolsek, J., Malm, M. et al. Electrolyte-based calculation of fluid shifts after infusing 0.9% saline in severe hyperglycemia. ICMx 8, 59 (2020). https://doi.org/10.1186/s40635-020-00345-9
Body fluid compartments
physiopathology
Backstepping Control Associated to Modified Space Vector Modulation for Quasi Z-source Inverter Fed by a PEMFC
Oussama Herizi* | Said Barkat
Laboratory of Electrical Engineering, University of M'sila, M'sila 28000, Algeria
Corresponding Author Email:
[email protected]
https://doi.org/10.18280/ejee.210201
In this paper, a backstepping control combined with a modified space vector modulation (MSVM) for a quasi z-source inverter (QZSI) fed by a fuel cell is proposed. The QZSI employs a unique impedance network to couple the main circuit of the inverter to a proton exchange membrane fuel cell (PEMFC). This topology provides an attractive single-stage DC-AC conversion with buck-boost capability, unlike the traditional voltage source inverter (VSI). The MSVM is used to insert the shoot-through state within the traditional switching signals in order to boost the inverter input voltage while keeping the same performance as the traditional SVM. A DC peak voltage controller using the backstepping approach is proposed to overcome the fuel cell voltage fluctuations under load changes, and to reduce the inductor current ripples as well. Comprehensive simulations are presented to prove the effectiveness and the performance of the proposed control strategy under different operating conditions.
quasi z-source inverter, modified space vector modulation, backstepping control, fuel cell
1. Introduction

The DC/AC power converter most used in modern energy conversion systems is the voltage source inverter (VSI). However, the VSI is a buck converter, so the output voltage range is limited to applications requiring voltages smaller than the input DC voltage. To handle this drawback, a DC/DC boost converter is necessary to step up the input DC voltage of the VSI, which is commonly required in applications where both buck and boost operation are demanded. Unfortunately, this leads to high cost, low efficiency, and reduced reliability of the resulting double-stage conversion system. For that reason, a z-source inverter (ZSI) was proposed in [1] as an alternative power conversion concept to overcome the limitations of the traditional VSI. In the ZSI topology, two inductors and two capacitors connected in an X shape are needed to couple the inverter main circuit to the DC source. Consequently, the ZSI achieves voltage buck/boost conversion in one stage, without the need for extra switching devices.
In the same context, an improved z-source topology known as the quasi-z-source inverter (QZSI) has been considered in the literature in order to control both the DC-link voltage and the AC output voltage. This topology presents some advantages, such as a continuous input current that is suitable for fuel cell applications without the need for an additional filter. It also allows the use of shoot-through switching states, which eliminates the need for the dead-times that are used in the traditional inverter to avoid the risk of damaging the inverter circuit [1, 2]. Furthermore, for applications where the DC input source has a wide voltage variation range, such as fuel cells and batteries, the QZSI is a good option. In addition, the QZSI has been used in many other applications such as direct-drive wind generation systems, photovoltaic power systems, and hybrid electric vehicles [3, 4].
On the other hand, different modulation techniques and control strategies are required to control the ZSI/QZSI to obtain the desired phase, frequency, and amplitude of the AC output voltage. There are four important shoot-through modulation techniques commonly proposed in the literature: simple boost control (SBC), maximum boost control, maximum constant boost control (MCBC), and modified space vector modulation (MSVM) [5-16]. In this paper, an MSVM is adopted to insert the shoot-through (ST) state within the traditional switching signals in order to boost the fuel cell voltage. This modulation technique effectively reduces the switch commutation time, decreases the output voltage/current harmonic content, and ensures better utilization of the DC-link voltage. Consequently, the voltage stress and switching losses are reduced significantly compared to the other traditional PWM techniques [5, 6].
Controlling the ZSI/QZSI is an important issue, and several closed-loop control methods have been widely studied in the literature [5-13]. The main approaches used to control the DC-link voltage of the ZSI/QZSI include direct DC-link voltage control, indirect DC-link voltage control, and unified control [10]. Indeed, in [7] the capacitor voltage was controlled by using a PID controller and the modulation index was controlled by the SBC method, while in [8] a sliding mode control method was used to control the ST duty ratio, and a PI controller combined with a neural network for a robust control was adopted in [9]. An example of connecting a bidirectional z-source inverter (BZSI) to the grid during the battery charging/discharging operation mode using a proportional plus resonance (PR) controller was demonstrated in [11]. In all the aforementioned control methods, the capacitor voltage or the DC-link voltage is controlled by regulating the shoot-through duty cycle using different controller types, and the output voltage is controlled by regulating the modulation index using the different shoot-through modulation techniques.
In this paper, a dual-loop capacitor voltage controller using a backstepping approach for a quasi z-source inverter (QZSI) is proposed to overcome the fuel cell voltage variations under load changes, and to reduce the inductor current ripples as well. The MSVM is adopted to control the AC output voltage with less voltage/current harmonic content, and it ensures better utilization of the DC-link voltage.
The paper is organized as follows: the fuel cell and QZSI modeling is introduced in section two. The MSVM technique and the backstepping controllers based on the Lyapunov approach are presented in section three. Section four is devoted to simulation results showing the behavior of the proposed control strategy under different operating conditions, including input voltage changes, load disturbances, and steady-state operation. Finally, a conclusion is pointed out in section five.
2. System Description and Modeling
The proposed system shown in Fig.1 consists of a three-phase QZSI feeding a three-phase linear load. The DC side impedance network couples the fuel cell and the inverter to achieve voltage boost in a single stage. The peak DC-link voltage, defined as ${{\hat{V}}_{dc}}={{V}_{C1}}+{{V}_{C2}}$, is directly controlled by regulating the ST duty ratio using a backstepping approach, while the AC voltage is controlled by the modulation index using MSVM technique.
Figure 1. Backstepping control of QZSI fed by a PEMFC
2.1 Proton exchange membrane fuel cell modeling
Different models of the PEMFC are reported in the literature; Figure 2 shows the I(V) curve per cell. A single fuel cell produces less than 1 V, so multiple fuel cells are connected in series to obtain a higher output voltage. The output fuel cell voltage ${{V}_{fc}}$ is defined as a function of the fuel cell losses [17-19], as follows
${{V}_{fc}}={{E}_{Nernst}}-{{V}_{act}}-{{V}_{ohm}}-{{V}_{conc}}$ (1)
In the above equation, ${{E}_{Nernst}}$ stands for the reversible voltage based on the Nernst equation, given by
${{E}_{Nernst}}=N\left[ \begin{align} & 1.229-0.85\times {{10}^{-3}}({{T}_{fc}}-298.15)+ \\ & 4.3085\times {{10}^{-5}}{{T}_{fc}}(\ln ({{P}_{H2}})+\frac{1}{2}\ln ({{P}_{O2}})) \\ \end{align} \right]$ (2)
where N is the number of cells; Tfc is the operation temperature; PH2 and PO2 are the partial pressures of hydrogen, and oxygen, respectively.
Figure 2. PEM full cell polarization curve
${{V}_{act}}$ is the activation voltage drop due to the rate of reactions on the surface of the electrodes; ${{V}_{ohm}}$ is the ohmic voltage loss from the resistances of proton flow in the electrolyte; ${{V}_{conc}}$ is the voltage loss from the reduction in concentration gases or the transport of mass of oxygen and hydrogen. Their equations are given as follows
${{V}_{act}}=N\frac{R{{T}_{fc}}}{2\alpha F}ln(\frac{{{I}_{fc}}+{{I}_{n}}}{{{I}_{o}}})$ (3)
${{V}_{ohm}}=N{{I}_{fc}}r$ (4)
${{V}_{con}}=Nm\exp (n{{I}_{fc}})$ (5)
where ${{I}_{fc}}$, ${{I}_{o}}$ and ${{I}_{n}}$ are the output current, exchange current, and internal current, respectively; $R$, $\alpha $ and F are the universal gas constant [J/(mol·K)], the charge transfer coefficient, and the Faraday constant [C/mol], respectively; r is the membrane and contact resistance; n and m are constants in the mass transfer voltage.
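For illustration, the polarization model of equations (1)-(5) can be evaluated as in the short Python sketch below; all parameter values (number of cells, temperature, loss coefficients) are generic placeholders rather than the stack parameters used in this work.

```python
# Illustrative sketch of the PEMFC model of equations (1)-(5);
# parameter values are generic placeholders, not the paper's stack data.
import numpy as np

N      = 400          # number of cells (assumed)
T_fc   = 338.15       # stack temperature (K)
P_H2, P_O2 = 1.5, 1.0 # partial pressures (bar)
R, F   = 8.314, 96485.0
alpha  = 0.5          # charge transfer coefficient
I_o, I_n = 0.3, 1.0   # exchange and internal currents (A)
r      = 2e-3         # membrane and contact resistance per cell (ohm)
m_c, n_c = 2e-5, 5e-3 # concentration-loss constants

def v_fc(i_fc):
    e_nernst = N * (1.229 - 0.85e-3 * (T_fc - 298.15)
                    + 4.3085e-5 * T_fc * (np.log(P_H2) + 0.5 * np.log(P_O2)))
    v_act  = N * R * T_fc / (2 * alpha * F) * np.log((i_fc + I_n) / I_o)
    v_ohm  = N * i_fc * r
    v_conc = N * m_c * np.exp(n_c * i_fc)
    return e_nernst - v_act - v_ohm - v_conc

for i in (10, 40, 75):
    print(f"I_fc = {i:3d} A -> V_fc ~ {v_fc(i):.0f} V")
```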
2.2 Quasi Z-source inverter modeling
The operation of the QZSI can be understood from two states: one called the shoot through (ST) state and the other called the non-shoot-through state (NST). The shoot through state appears when the inverter is shorted (two switches in the same leg are turned ON at the same time). In this sequence, the DC-link voltage (${{V}_{DC}}$) is equal to zero and the diode D is OFF, as shown in Figure 3(a). The second state is when the inverter operates in six active vectors and two zero vectors, in this case, the diode is turned ON, and the DC-link voltage is equal to (${{V}_{C1}}+{{V}_{C2}}$), as shown in Figure 3(b).
From Figure 3 and by applying Kirchhoff laws and neglecting Joule and iron losses, the state model of the QZSI fed by a fuel cell source can be obtained as
$\left\{ \begin{align} & {{L}_{1}}(d{{I}_{fc}}/dt)+M(d{{I}_{L2}}/dt)={{V}_{fc}}-{{V}_{C1}}(1-u)+{{V}_{C2}}u \\ & {{L}_{2}}(d{{I}_{L2}}/dt)+M(d{{I}_{fc}}/dt)={{V}_{C2}}(1-u)+{{V}_{C1}}u \\ & {{C}_{1}}(d{{V}_{C1}}/dt)=-{{I}_{L2}}u+{{I}_{fc}}(1-u)-{{I}_{inv}}(1-u) \\ & {{C}_{2}}(d{{V}_{C2}}/dt)=-{{I}_{fc}}u+{{I}_{L2}}(1-u)-{{I}_{inv}}(1-u) \\ \end{align} \right.$ (6)
where u represents the logical control variable that takes two values according to the operating state (u=1 in the shoot-through state, and u=0 in the non-shoot-through state), and Iinv represents the current absorbed by the inverter during the non-shoot-through states. Note that Iinv = 0 when the QZSI operates in the ST state and during the application of the two zero vectors in the NST states, as illustrated in Figure 4.
The averaged model of the QZSI can be written as function of the duty cycle d that represents the mean value of the control variable u, as follows
Figure 3. Operation modes of the QZSI
Figure 4. Logical command, DC-link voltage, and input current of QZSI
$\left\{ \begin{align} & {{L}_{1}}\frac{d{{I}_{fc}}}{dt}+M\frac{d{{I}_{L2}}}{dt}={{V}_{fc}}-{{V}_{C1}}(1-d)+{{V}_{C2}}d \\ & {{L}_{2}}\frac{d{{I}_{L2}}}{dt}+M\frac{d{{I}_{fc}}}{dt}={{V}_{C2}}(1-d)+{{V}_{C1}}d \\ & {{C}_{1}}\frac{d{{V}_{C1}}}{dt}=-{{I}_{L2}}d+{{I}_{fc}}(1-d)-\frac{{{V}_{\alpha }}{{I}_{\alpha }}+{{V}_{\beta }}{{I}_{\beta }}}{{{V}_{C1}}+{{V}_{C2}}} \\ & {{C}_{2}}\frac{d{{V}_{C2}}}{dt}=-{{I}_{fc}}d+{{I}_{L2}}(1-d)-\frac{{{V}_{\alpha }}{{I}_{\alpha }}+{{V}_{\beta }}{{I}_{\beta }}}{{{V}_{C1}}+{{V}_{C2}}} \\ \end{align} \right.$ (7)
where Iinv can be replaced by its mean value ${P}/{({{V}_{C1}}+{{V}_{C2}})}\;$ with $P={{v}_{a}}{{i}_{a}}+{{v}_{b}}{{i}_{b}}+{{v}_{c}}{{i}_{c}}={{V}_{\alpha }}{{I}_{\alpha }}+{{V}_{\beta }}{{I}_{\beta }}$, where (${{v}_{a}}$,${{v}_{b}}$, ${{v}_{c}}$) and (${{i}_{a}}$,${{i}_{b}}$, ${{i}_{c}}$) are the AC voltages and line currents, respectively. The QZSI has the same capacitance and inductance (C1 =C2 =C and L1 = L2 =L), so it is possible to reduce the system order by rewriting (7) as function of the capacitive voltages sum ${{V}_{C}}={{V}_{C1}}+{{V}_{C2}}$ and the inductive currents sum ${{I}_{L}}={{I}_{fc}}+{{I}_{L2}}$ [20]. Finally, the equation (7) becomes
$\left\{ \begin{align} & C\frac{d{{V}_{C}}}{dt}={{I}_{L}}(1-2d)-2\frac{{{V}_{\alpha }}{{I}_{\alpha }}+{{V}_{\beta }}{{I}_{\beta }}}{{{V}_{C}}} \\ & (L+M)\frac{d{{I}_{L}}}{dt}={{V}_{fc}}-{{V}_{C}}(1-2d) \\ \end{align} \right.$ (8)
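Before turning to the steady-state relations, the reduced averaged model (8) can be checked numerically. The Python sketch below integrates it for a fixed shoot-through duty cycle and a constant AC power draw and confirms that the boosted voltage and the matching inductor current form an equilibrium; the mutual inductance and load power used here are assumed values, not simulation parameters from the paper.

```python
# Rough numerical check of the reduced averaged model (8), assuming a
# constant fuel-cell voltage, a fixed shoot-through duty cycle, and a
# constant AC power draw; all values are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

L, M, C = 0.5e-3, 0.5e-3, 500e-6   # H, H, F (M is assumed)
V_fc, d, P_ac = 325.0, 0.2, 10e3   # V, -, W (illustrative operating point)

def qzsi_avg(t, x):
    v_c, i_l = x                   # sum of capacitor voltages, sum of inductor currents
    dv_c = ((1 - 2 * d) * i_l - 2 * P_ac / v_c) / C
    di_l = (V_fc - (1 - 2 * d) * v_c) / (L + M)
    return [dv_c, di_l]

# Start at the expected equilibrium: v_c = V_fc/(1-2d), i_l = 2*P_ac/V_fc
sol = solve_ivp(qzsi_avg, (0, 0.05), [V_fc / (1 - 2 * d), 2 * P_ac / V_fc],
                max_step=1e-5)
print(f"steady-state Vdc_peak ~ {sol.y[0, -1]:.0f} V, I_L ~ {sol.y[1, -1]:.1f} A")
```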
The steady state value of the peak DC link voltage ${{\hat{V}}_{dc}}={{V}_{C}}={{V}_{C1}}+{{V}_{C2}}$ is expressed as
${{\hat{V}}_{dc}}=\frac{{{V}_{fc}}}{1-2d}=B{{V}_{fc}}$ (9)
where B is the boost factor.
$B=\frac{1}{1-2d}\ge 1$ (10)
The output peak phase voltage of the inverter can be expressed as
${{\hat{V}}_{ac}}=m\frac{{{{\hat{V}}}_{dc}}}{2}=mB\frac{{{V}_{fc}}}{2}$ (11)
Equation (11) means that the output voltage can be stepped up and down by controlling m and B.
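A quick numerical reading of equations (9)-(11), with an assumed operating point, is given below.

```python
# Worked example of the boost relations (9)-(11); values are assumed.
V_fc, d, m = 325.0, 0.2, 0.8          # fuel-cell voltage, ST duty cycle, modulation index
B = 1.0 / (1.0 - 2.0 * d)             # boost factor, eq. (10)
V_dc_peak = B * V_fc                  # peak DC-link voltage, eq. (9)
V_ac_peak = m * V_dc_peak / 2.0       # peak output phase voltage, eq. (11)
print(B, V_dc_peak, V_ac_peak)        # ~1.67, ~541.7 V, ~216.7 V
```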
3. QZSI Control Strategy
3.1 MSVM control
The SVM for the three-phase VSI is based on the representation of the voltage vectors in a two-dimensional plane. The desired output voltage can be represented by an equivalent rotating vector ${{V}_{ref}}$ [5, 6]. According to the different values of the switching functions, the inverter has eight kinds of working states, including six active vectors, which are v1(001), v2(010), v3(011), v4(100), v5(101), v6(110), and two zero vectors v0(000) and v7(111), as shown in Figure 5. The αβ plane is divided into six sectors (60 degrees per sector). In each sector, the switching function and time can be calculated from the two vectors adjacent to the reference one according to the following equation, in which the first sector is chosen as an example.
${{V}_{ref}}={{v}_{1}}\frac{{{T}_{1}}}{{{T}_{s}}}+{{v}_{2}}\frac{{{T}_{2}}}{{{T}_{s}}}+{{v}_{0(7)}}\frac{{{T}_{0}}}{{{T}_{s}}}$ (12)
where T1 and T2 are the switching time of active states, T0 is the switching time of zero vectors (V0 or V7), given by
$\left\{ \begin{align} & {{T}_{1}}={{m}_{i}}{{T}_{s}}\sin (\frac{\pi }{3}-\theta +\frac{\pi }{3}(i-1) \\ & {{T}_{2}}={{m}_{i}}{{T}_{s}}\sin (\theta -\frac{\pi }{3}(i-1) \\ & {{T}_{0}}={{T}_{s}}-{{T}_{1}}-{{T}_{2}} \\ \end{align} \right.$ (13)
Figure 5. Traditional SVM of VSI
where i=1,2,...,6 is the sector number; Ts is the switching period; $\theta $ is the angle of the desired output voltage Vref; and mi is the modulation index defined as
${{m}_{i}}=\sqrt{3}\frac{{{V}_{ref}}}{{{{\hat{V}}}_{dc}}}$ (14)
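The sector selection and dwell-time computation of equations (13)-(14) can be sketched as follows; the function and the test values are illustrative only.

```python
# Sketch of the conventional SVM dwell-time calculation of eqs. (13)-(14);
# theta is assumed to lie in [0, 2*pi), numerical inputs are arbitrary.
import math

def svm_times(v_ref, theta, v_dc_peak, t_s):
    """Return sector i and dwell times (T1, T2, T0) for one switching period."""
    m_i = math.sqrt(3) * v_ref / v_dc_peak                 # eq. (14)
    i = int(theta // (math.pi / 3)) + 1                    # sector 1..6
    t1 = m_i * t_s * math.sin(math.pi / 3 - theta + (i - 1) * math.pi / 3)
    t2 = m_i * t_s * math.sin(theta - (i - 1) * math.pi / 3)
    t0 = t_s - t1 - t2
    return i, t1, t2, t0

print(svm_times(v_ref=216.7, theta=math.radians(20), v_dc_peak=541.7, t_s=1e-4))
```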
The three-phase QZSI has an additional zero state when the load terminals are shorted through both the upper and lower switches of any one phase leg, any two phase legs, or all three phase legs. This ST zero state provides the sole buck-boost attribute to the inverter. However, the insertion of the ST state must not influence the active states; this is achieved by applying the ST switching during the zero-state intervals, so the performance of the SVM is not affected. Various modified SVM control methods for the QZSI are presented in the literature; a comparison between these MSVM methods is summarized in [6]. In this paper, the MSVM illustrated in Figure 6 is adopted, in which the total shoot-through time interval is equally divided into six parts per control cycle. In the case of the first sector, for instance, the switching pulses of the MSVM are generated by the following steps
Determine the sector number (here, sector 1),
Calculate the switching times T1, T2 and T0,
Calculate the switching times Tmin, Tmid and Tmax corresponding to the traditional SVM, as follows
$\left\{ \begin{align} & {{T}_{\min }}={{T}_{0}}/4 \\ & {{T}_{mid}}={{T}_{0}}/4+{{T}_{1}}/2 \\ & {{T}_{\max }}={{T}_{0}}/4+{{T}_{1}}/2+{{T}_{2}}/2 \\ \end{align} \right.$ (15)
Calculate the modified switching times
$\left\{ \begin{align}& {{T}_{\min +}}={{T}_{\min }}-{{T}_{st}}/4 \\ & {{T}_{\min -}}={{T}_{\min }}-{{T}_{st}}/12 \\ \end{align} \right.$$\left\{ \begin{align}& {{T}_{mid+}}={{T}_{mid}}-{{T}_{st}}/12 \\ & {{T}_{mid-}}={{T}_{mid}}+{{T}_{st}}/12 \\ \end{align} \right.$$\left\{ \begin{align}& {{T}_{\max +}}={{T}_{\max }}+{{T}_{st}}/12 \\ & {{T}_{\max -}}={{T}_{\max }}+{{T}_{st}}/4 \\ \end{align} \right.$ (16)
Figure 6. MSVM switching moment of three-phase QZSI
where the subscripts + and - denote the modified switching times of the upper and lower switches in one bridge leg, respectively, and Tst is the shoot-through time interval. It should be noted that the maximum shoot-through time interval has to meet (Tst ≤ T0) in order to ensure that the active state times will not be affected [5].
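For sector 1, the timing modification of equations (15)-(16) can be written compactly as below; the helper function and the numerical inputs are illustrative and not part of the original implementation.

```python
# Sketch of the sector-1 MSVM timing of eqs. (15)-(16): the shoot-through
# interval T_st is split and distributed around the conventional switching
# instants; requires T_st <= T0 so the active states are not affected.
def msvm_sector1_times(t1, t2, t0, t_st):
    assert t_st <= t0, "shoot-through must fit inside the zero-state time"
    t_min = t0 / 4
    t_mid = t0 / 4 + t1 / 2
    t_max = t0 / 4 + t1 / 2 + t2 / 2                       # eq. (15)
    upper = (t_min - t_st / 4, t_mid - t_st / 12, t_max + t_st / 12)
    lower = (t_min - t_st / 12, t_mid + t_st / 12, t_max + t_st / 4)
    return upper, lower                                    # eq. (16)

upper, lower = msvm_sector1_times(t1=4.45e-5, t2=2.37e-5, t0=3.18e-5, t_st=1.0e-5)
print(upper, lower)
```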
3.2 Backstepping control of QZSI
The backstepping method is a nonlinear control technique based on the Lyapunov approach, whose design is carried out in several steps. Let us consider the state variables ${{V}_{C}}={{\hat{V}}_{dc}}$ and IL defined by
$\left\{ \begin{align} & {{{\dot{V}}}_{C}}=\frac{1}{C}[(1-2d){{I}_{L}}-2\frac{{{V}_{\alpha }}{{I}_{\alpha }}+{{V}_{\beta }}{{I}_{\beta }}}{{{V}_{C}}}] \\ & {{{\dot{I}}}_{L}}=\frac{1}{L+M}({{V}_{fc}}-{{V}_{C}}(1-2d)) \\ \end{align} \right.$ (17)
First step: peak DC-link voltage controller design
In this step, we look for the virtual control that ensures the asymptotic convergence of the capacitor voltage VC to its reference ${{\hat{V}}_{dc\_ref}}$. So the following error is introduced
${{e}_{1}}={{\hat{V}}_{dc\_ref}}-{{V}_{C}}$ (18)
By differentiating equation (18) and using (17), the following error dynamic equation can be obtained
${{\dot{e}}_{1}}={{\dot{\hat{V}}}_{dc\_ref}}-\frac{1}{C}((1-2d){{I}_{L}}-2\frac{{{V}_{\alpha }}{{I}_{\alpha }}+{{V}_{\beta }}{{I}_{\beta }}}{{{V}_{C}}})$ (19)
The amount (1-2d) can be replaced by its steady state value given by equation (9): $1-2d={{V}_{fc}}/{{\hat{V}}_{dc}}={{V}_{fc}}/{{V}_{C}}$, so the equation (19) becomes
${{\dot{e}}_{1}}={{\dot{\hat{V}}}_{dc\_ref}}-\frac{1}{C}(\frac{{{V}_{fc}}}{Vc}{{I}_{L}}-2\frac{{{V}_{\alpha }}{{I}_{\alpha }}+{{V}_{\beta }}{{I}_{\beta }}}{{{V}_{C}}})$ (20)
Let us check the tracking error stability by choosing the Lyapunov candidate function as below
${{V}_{1}}=\frac{1}{2}e_{1}^{2}$ (21)
Using the derivative of the equation (21), the virtual control reference that stabilizes the tracking error ${{e}_{1}}$ is given by
${{I}_{L\_ref}}=C\frac{{{V}_{C}}}{{{V}_{fc}}}({{K}_{1}}{{e}_{1}}+{{\dot{\hat{V}}}_{dc\_ref}})+2\frac{{{V}_{\alpha }}{{I}_{\alpha }}+{{V}_{\beta }}{{I}_{\beta }}}{{{V}_{fc}}}$ (22)
where K1 is a positive design gain introduced to shape the closed-loop dynamics. The choice of the control law given by equation (22) leads to ${{\dot{V}}_{1}}=-{{K}_{1}}e_{1}^{2}\le 0$, which is a negative semi-definite function, so the tracking error ${{e}_{1}}$ is stabilized.
Second step: control law design
Let us consider the following error between the inductor current and its reference value given by (22).
${{e}_{2}}={{I}_{L\_ref}}-{{I}_{L}}$ (23)
By differentiating the equation (23), the following error dynamic equation can be obtained
${{\dot{e}}_{2}}={{\dot{I}}_{L\_ref}}-\frac{1}{L+M}({{V}_{fc}}-{{V}_{C}}(1-2d))$ (24)
Now, a new Lyapunov function based on the peak DC-link voltage and inductor current errors can be defined as
${{V}_{2}}=\frac{1}{2}e_{1}^{2}+\frac{1}{2}e_{2}^{2}$ (25)
In order to make the derivative of the Lyapunov function given by (25) negative definite, the choice of ${{\dot{e}}_{2}}=-{{K}_{2}}{{e}_{2}}$ is necessary, where K2 is a positive parameter selected so that the inductor current dynamics are faster than the peak DC-link voltage dynamics. The shoot-through duty cycle reference that stabilizes the variables Vc and IL at their desired values is then given by
${{d}_{ref}}=\frac{1}{2}-\frac{{{V}_{fc}}}{2{{V}_{C}}}+\frac{L+M}{2{{V}_{C}}}({{K}_{2}}{{e}_{2}}+{{\dot{I}}_{L\_ref}})$ (26)
The choice of the control law given by (26) guarantees asymptotic stability of the whole system since the derivative of (25) is a negative definite function.
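A minimal discrete-time sketch of the resulting dual-loop law is given below, assuming that the reference derivatives are approximated by backward differences and that the duty cycle is saturated below 0.5; the class name, sampling period, and test signals are illustrative and not taken from the paper.

```python
# Minimal sketch of the two-step backstepping law of eqs. (22) and (26).
# Reference derivatives are approximated by backward differences;
# all signal values fed to the controller here are placeholders.
class BacksteppingQZSI:
    def __init__(self, k1, k2, L, M, C, dt):
        self.k1, self.k2 = k1, k2
        self.L, self.M, self.C, self.dt = L, M, C, dt
        self.prev_vdc_ref = None
        self.prev_il_ref = None

    def step(self, vdc_ref, v_c, i_l, v_fc, p_ac):
        # Outer loop: virtual inductor-current reference, eq. (22)
        e1 = vdc_ref - v_c
        dvdc_ref = 0.0 if self.prev_vdc_ref is None else (vdc_ref - self.prev_vdc_ref) / self.dt
        il_ref = self.C * v_c / v_fc * (self.k1 * e1 + dvdc_ref) + 2.0 * p_ac / v_fc
        # Inner loop: shoot-through duty-cycle reference, eq. (26)
        e2 = il_ref - i_l
        dil_ref = 0.0 if self.prev_il_ref is None else (il_ref - self.prev_il_ref) / self.dt
        d_ref = 0.5 - v_fc / (2.0 * v_c) + (self.L + self.M) / (2.0 * v_c) * (self.k2 * e2 + dil_ref)
        self.prev_vdc_ref, self.prev_il_ref = vdc_ref, il_ref
        return min(max(d_ref, 0.0), 0.49), il_ref   # keep d_ref strictly below 0.5

ctrl = BacksteppingQZSI(k1=500, k2=4000, L=0.5e-3, M=0.5e-3, C=500e-6, dt=1e-4)
print(ctrl.step(vdc_ref=550.0, v_c=541.7, i_l=61.5, v_fc=325.0, p_ac=10e3))
```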
4. Simulation Results
The performance of the proposed backstepping control has been tested using the parameters of the overall system listed in Table 1. To verify the robustness of the proposed control shown in Figure 1, a sudden change of the load resistance at 0.1 s (Figure 7) and a sudden change of the modulation index reference at 0.2 s (Figure 8) are introduced. Figures 9 to 11 present the simulation results of the proposed control under load disturbance and AC voltage reference variation.
Table 1. System parameters
PEMFC
Number of cells N
Rated power Pfc_nom: 25 kW
Maximum operating point [Iend, Vend]: [75 A, 325 V]
Nominal supply pressure [PO2, PH2]: [1, 1.5] bar
QZSI
QZSI inductances L1 = L2: 0.5 mH
QZSI mutual inductance M
QZSI capacitors C1 = C2: 500 µF
Switching frequency f: 10 kHz
DC-link voltage reference Vdc_ref
AC load inductance l: 2 mH
AC voltage frequency f: 50 Hz
PI parameters [KP, KI]: [0.6, 15]
Backstepping parameters [K1, K2]: [500, 4000]
Figure 7. AC load resistance
Figure 8. Modulation index reference
Figure 9. Peak DC-link voltage and its reference
Figure 10. PEMFC voltage Vfc
From these results, it can be noted that the peak DC-link voltage is adequately regulated to its reference, with only slight deviations due to the sudden changes in the load resistance at 0.1 s and in the modulation index reference at 0.2 s, as shown in Figure 9. Figure 10 shows that the fuel cell voltage drops between 0.1 s and 0.2 s because of the load change, which is reflected in the increasing current shown in Figure 12. Figure 13 shows that the shoot-through duty cycle depends strongly on the fuel cell voltage variation, and the relationship between the peak DC-link voltage, the fuel cell voltage, and the shoot-through duty cycle given by equation (9) is verified. From Figure 15, it is clear that the shoot-through time remains less than the zero-state time independently of the introduced load and voltage variations. Furthermore, the modulation index is properly regulated to its reference by the proposed MSVM-backstepping control, and the AC output voltage remains unchanged during the load increase, as shown in Figures 16 and 17. From Figure 18, it can be seen that the AC line current depends directly on the load and on the AC output voltage variations. These simulation results prove the robustness of the proposed control approach and the good dynamics of both DC and AC variables under sudden changes of the load and AC voltage reference.
Figure 11. DC-link voltage Vdc
Figure 12. Inductor current IL
Figure 13. Shoot-trough duty cycle dref
Figure 14. Modulation index and its reference
Figure 15. Shoot trough and zero times
Figure 16. AC output voltage Va
Figure 17. Average AC output voltage <Va> and its reference
Figure 19 shows a comparative study between the backstepping and PI controllers, in which a sudden change in the three-phase resistive load is introduced between 0.2 and 0.6 s. The voltage drop with the PI controller is clearly more significant than with the backstepping controller. Moreover, the proposed backstepping controller achieves a faster response, lower voltage ripple, and better stability for the QZSI when the fuel cell and load variations are large. These results demonstrate the validity of the theoretical design and the robustness of the proposed controller.
Figure 18. AC load currents
Figure 19. Disturbance rejection evaluation by using backstepping controller and PI controller
5. Conclusion

This paper has proposed a backstepping control technique combined with a modified space vector modulation for a three-phase QZSI powered by a PEMFC. In this approach, the peak DC-link voltage is controlled by regulating the ST duty ratio using a backstepping controller, and the AC voltage is controlled by regulating the modulation index using a modified space vector modulation. The proposed control strategy is verified by simulation under different load disturbances and AC voltage reference variations. The simulation results prove the robustness of the proposed control and the good dynamics of the DC and AC variables. It is worth noting that the proposed control method can be useful in other applications such as photovoltaic and wind power generation systems.
[1] Peng, F.Z. (2003). Z-source inverter. IEEE Transactions on Industry Applications, 39(2): 504-510. https://doi.org/10.1109/TIA.2003.808920
[2] Peng, F.Z., Shen, M., Qian, Z. (2004). Maximum boost control of the Z-source inverter. 2004 IEEE 35th Annual Power Electronics Specialists Conference. https://doi.org/10.1109/pesc.2004.1355751
[3] Li, Y., Jiang, S., Cintron-Rivera, J.G., Peng, F.Z. (2013). Modeling and control of quasi-z-source inverter for distributed generation applications. IEEE Transactions on Industrial Electronics, 60(4): 1532-1541. https://doi.org/10.1109/tie.2012.2213551
[4] Adamowicz, M., Guzinski, J., Strzelecki, R., Peng, F.Z., Abu-Rub, H. (2011). High step-up continuous input current LCCT-Z-source inverters for fuel cells. 2011 IEEE Energy Conversion Congress and Exposition. https://doi.org/10.1109/ecce.2011.6064070
[5] Siwakoti, Y.P., Peng, F.Z., Blaabjerg, F., Loh, P.C., Town, G.E., Yang, S. (2015). Impedance-source networks for electric power conversion Part II: Review of control and modulation techniques. IEEE Transactions on Power Electronics, 30(4): 1887-1906. https://doi.org/10.1109/tpel.2014.2329859
[6] Liu, Y., Ge, B., Abu-Rub, H., Peng, F.Z. (2014). Overview of space vector modulations for three-phase Z-source/quasi-Z-source inverters. IEEE Transactions on Power Electronics, 29(4): 2098-2108. https://doi.org/10.1109/tpel.2013.2269539
[7] Ding, X., Qian, Z., Yang, S., Cui, B., Peng, F.Z. (2007). A PID control strategy for DC-link boost voltage in Z-source inverter. APEC 07 - Twenty-Second Annual IEEE Applied Power Electronics Conference and Exposition. https://doi:10.1109/apex.2007
[8] Rajaei, A.H., Kaboli, S., Emadi, A. (2008). Sliding-mode control of z-source inverter. 2008 34th Annual Conference of IEEE Industrial Electronics. https://doi.org/10.1109/iecon.2008.4758081
[9] Rastegar-Fatemi, M.J., Mirzakuchaki, S., Rastegar Fatemi, S. (2008). Wide-range control of output voltage in Z-source inverter by neural network. The International Conference on Electrical Machines and Systems, ICEMS, Orlando, USA, pp. 1653-1658.
[10] Ellabban, O., Van Mierlo, J., Lataire, P. (2011). Capacitor voltage control techniques of the Z-source inverter: A comparative study. EPE Journal, 21(4): 13-24. https://doi.org/10.1080/09398368.2011.11463806
[11] Ellabban, O., Mierlo, J.V., Lataire, P. (2011). Control of a bidirectional Z-source inverter for electric vehicle applications in different operation modes. Journal of Power Electronics, 11(2): 120-131. https://doi.org/10.6113/jpe.2011.11.2.120
[12] Liang, W., Liu, Y., Ge, B., Wang, X. (2018). DC-link voltage control strategy based on multi-dimensional modulation technique for quasi-Z-source cascaded multilevel inverter photovoltaic power system. IEEE Transactions on Industrial Informatics. https://doi.org/10.1109/tii.2018.2863692
[13] Lv, Y., Yu, H., Liu, X. (2018). Switching control of sliding mode and passive control for DC-link voltage of isolated shoot-through Z-source inverter. 2018 Chinese Automation Congress (CAC). https://doi.org/10.1109/cac.2018.8623636
[14] Ellabban, O., Van Mierlo, J., Lataire, P. (2009). Comparison between different PWM control methods for different Z-source inverter topologies. The 13th European Conference on Power Electronics and Applications, Barcelona, 8-10 Sept.
[15] Rostami, H., Khaburi, D.A. (2009). Voltage gain comparison of different control methods of the Z-source inverter. International Conference on Electrical and Electronics Engineering - ELECO 2009, Bursa.
[16] Shen, M., Wang, J., Joseph, A., Peng, F.Z., Tolbert, L.M., Adams, D.J. (2006). Constant boost control of the Z-source inverter to minimize current ripple and voltage stress. IEEE Transactions on Industry Applications, 42(3): 770-778. https://doi.org/10.1109/tia.2006.872927
[17] Na, W. (2011). Ripple current reduction using multi-dimensional sliding mode control for fuel cell DC to DC converter applications. 2011 IEEE Vehicle Power and Propulsion Conference. https://doi.org/10.1109/vppc.2011.6043038
[18] Herizi, O., Barkat, S. (2016). Backstepping control and energy management of hybrid DC source based electric vehicle. 2016 4th International Symposium on Environmental Friendly Energies and Applications (EFEA). https://doi.org/10.1109/efea.2016.7748792
[19] Gao, D., Jin, Z., Zhang, J., Li, J., Ouyang, M. (2016). Development and performance analysis of a hybrid fuel cell/battery bus with an axle integrated electric motor drive system. International Journal of Hydrogen Energy, 41(2): 1161-1169. https://doi.org/10.1016/j.ijhydene.2015.10.046
[20] Battiston, A., Miliani, E.H., Pierfederici, S., Meibody-Tabar, F. (2016). Efficiency improvement of a quasi-Z-source inverter-fed permanent-magnet synchronous machine-based electric vehicle. IEEE Transactions on Transportation Electrification, 2(1): 14-23. https://doi.org/10.1109/tte.2016.2519349
Submodule Structure of Generalized Verma Modules Induced from Generic Gelfand-Zetlin Modules
Algebras and Representation Theory (1998-03-01) 1: 3-26, March 01, 1998
By Mazorchuk, V. S.; Ovsienko, S. A.
For the complex Lie algebra sl(n, C) we study the submodule structure of generalized Verma modules induced from generic Gelfand-Zetlin modules over some subalgebra of type sl(k, C). We obtain necessary and sufficient conditions for the existence of a submodule, generalizing the Bernstein-Gelfand-Gelfand theorem for Verma modules.
On the Existence of Auslander–Reiten Sequences of Group Representations. I
Algebras and Representation Theory (1998-06-01) 1: 97-127, June 01, 1998
By Donkin, Stephen
This is the first part of our study of the existence of Auslander–Reiten sequences of group representations. In this part we consider representations of group schemes in characteristic 0; in Part II we consider representations of group schemes in characteristic p; and in Part III we give applications to representations of groups and Lie algebras.
On Blocks with Nilpotent Coefficient Extensions
Algebras and Representation Theory (1998-03-01) 1: 27-73, March 01, 1998
By Fan, Yun; Puig, Lluis
For modular group algebras over an arbitrary field we define a new type of blocks: blocks with nilpotent extensions, and describe their source algebras. To do so, a general pattern is proposed for the relations between the source algebra of a block and the source algebra of a block appearing in its decomposition in a suitable extension of the field of coefficients.
Seminormal or t-Closed Schemes and Rees Rings
Algebras and Representation Theory (1998-09-01) 1: 255-309, September 01, 1998
By Picavet, Gabriel
We define decent schemes and canonically decent projective schemes. For such schemes, total quotient schemes exist, allowing one to obtain the normalization, seminormalization and t-closure of a decent scheme as a scheme. We exhibit the seminormalization and t-closure of a filtration on a ring. If A is a decent ring and F a regular filtration on A, the associated Rees ring R is decent and Proj(R) is canonically decent. The seminormalization and t-closure of R are Rees rings, and the seminormalization and t-closure of Proj(R) are obtained by using projective morphisms.
Curves on Quasi-Schemes
Algebras and Representation Theory (1998-12-01) 1: 311-351, December 01, 1998
By Smith, S. Paul; Zhang, James J.
This paper concerns curves on noncommutative schemes, hereafter called quasi-schemes. A quasi-scheme X is identified with the category $\mathrm{Mod}\,X$ of quasi-coherent sheaves on it. Let X be a quasi-scheme having a regularly embedded hypersurface Y. Let C be a curve on X which is in 'good position' with respect to Y (see Definition 5.1) – this definition includes a requirement that X be far from commutative in a certain sense. Then C is isomorphic to $\mathbb{V}_n^1$, where n is the number of points of intersection of C with Y. Here $\mathbb{V}_n^1$, or rather $\mathrm{Mod}\,\mathbb{V}_n^1$, is the quotient category $\mathrm{GrMod}\,k[x_1,\ldots,x_n]/\{\mathrm{Kdim} \leqslant n-2\}$ of $\mathbb{Z}^n$-graded modules over the commutative polynomial ring, modulo the subcategory of modules having Krull dimension $\leqslant n-2$. This is a hereditary category which behaves rather like $\mathrm{Mod}\,\mathbb{P}^1$, the category of quasi-coherent sheaves on $\mathbb{P}^1$.
On the Existence of Auslander–Reiten Sequences of Group Representations. III
This is the third and final part of our study of the existence of Auslander–Reiten sequences of group representations. In Part I we considered representations of group schemes in characteristic 0. In Part II we considered representations of group schemes in characteristic p. In this part we give applications to representations of abstract groups and Lie algebras.
A Result on Ext Over Kac–Moody Algebras
Algebras and Representation Theory (1998-06-01) 1: 161-168, June 01, 1998
By Neidhardt, Wayne
We prove the following result for a not necessarily symmetrizable Kac–Moody algebra: Let $x, y \in W$ with $x \geq y$, and let $\lambda \in P^+$. If $n = l(x) - l(y)$, then $\mathrm{Ext}^n_{C(\lambda)}(M(x\cdot\lambda), L(y\cdot\lambda)) = 1$.
On Alperin's Conjecture and Certain Subgroup Complexes
By Külshammer, Burkhard; Robinson, Geoffrey R.
We prove a new formula about local control of the number of p-regular conjugacy classes of a finite group. We then relate the results to Alperin's weight conjecture to obtain new results describing the number of simple modules for a finite group in terms of weights of solvable subgroups. Finally, we use the results to obtain new formulations of Alperin's weight conjecture, and to obtain restrictions on the structure of a minimal counterexample.
Quantum Deformations of α-Stratified Modules
By Futorny, Viatcheslav M.; Melville, Duncan J.
We construct quantum analogues of a class of generalized Verma modules induced from nonsolvable parabolic subalgebras of simple Lie algebras. We show that these quantum modules are true deformations of the underlying classical modules in the sense that the weight-space decomposition is preserved.
On a Projective Generalization of Alperin's Conjecture
By Robinson, Geoffrey R.
In this paper, we prove that a projective generalization of the Knörr–Robinson formulation of Alperin's conjecture holds if the 'ordinary' form holds for a certain quotient group.
November 2016, 15(6): 2301-2328. doi: 10.3934/cpaa.2016038
Finite dimensional smooth attractor for the Berger plate with dissipation acting on a portion of the boundary
George Avalos 1, Pelin G. Geredeli 2, and Justin T. Webster 3
Department of Mathematics, University of Nebraska-Lincoln, Lincoln, Nebraska 68588
Department of Mathematics, Faculty of Science, Hacettepe University, Beytepe 06800, Ankara
Hacettepe University, Ankara, Turkey
Received February 2016 Revised July 2016 Published September 2016
We consider a (nonlinear) Berger plate in the absence of rotational inertia acted upon by nonlinear boundary dissipation. We take the boundary to have two disjoint components: a clamped (inactive) portion and a controlled portion where the feedback is active via a hinged-type condition. We emphasize the damping acts only in one boundary condition on a portion of the boundary. In [24] this type of boundary damping was considered for a Berger plate on the whole boundary and shown to yield the existence of a compact global attractor. In this work we address the issues arising from damping active only on a portion of the boundary, including deriving a necessary trace estimate for $(\Delta u)\big|_{\Gamma_0}$ and eliminating a geometric condition in [24] which was utilized on the damped portion of the boundary.
Additionally, we use recent techniques in the asymptotic behavior of hyperbolic-like dynamical systems [11, 18] involving a "stabilizability" estimate to show that the compact global attractor has finite fractal dimension and exhibits additional regularity beyond that of the state space (for finite energy solutions).
Keywords: Global attractor, boundary dissipation, dissipative dynamical system, nonlinear plate equation.
Mathematics Subject Classification: Primary: 35B41, 74K20; Secondary: 35Q74, 35A0.
Citation: George Avalos, Pelin G. Geredeli, Justin T. Webster. Finite dimensional smooth attractor for the Berger plate with dissipation acting on a portion of the boundary. Communications on Pure & Applied Analysis, 2016, 15 (6) : 2301-2328. doi: 10.3934/cpaa.2016038
J. P. Aubin, Une théorè de compacité, C.R. Acad. Sci. Paris, 256 (1963), 5042-5044. Google Scholar
G. Avalos and I. Lasiecka, Exponential stability of a thermoelastic system without mechanical dissipation, Rend. Istit. Mat. Univ. Trieste, 28 (1997), 1-28. Google Scholar
G. Avalos and I. Lasiecka, Boundary controllability of thermoelastic plates via the free boundary conditions, SIAM J. Control. Optim., 38 (2000), 337-383. doi: 10.1137/S0363012998339836. Google Scholar
A. Babin and M. Vishik, Attractors of Evolution Equations, North-Holland, Amsterdam, 1992. Google Scholar
J. M. Ball, Global attractors for damped semilinear wave equations, Discrete Cont. Dyn. Sys, 10 (2004), 31-52. doi: 10.3934/dcds.2004.10.31. Google Scholar
H. M. Berger, A new approach to the analysis of large deflections of plates, J. Appl. Mech., 22 (1955), 465-472. Google Scholar
V. V. Bolotin, Nonconservative Problems of Elastic Stability, Pergamon Press, Oxford, 1963. Google Scholar
S. C. Brenner and R. Scott, The Mathematical Theory of Finite Element Methods, 15, Springer Science & Business Media, 2008. doi: 10.1007/978-0-387-75934-0. Google Scholar
F. Bucci, I. Chueshov and I. Lasiecka, Global attractor for a composite system of nonlinear wave and plate equations, Comm. Pure and Appl. Anal., 6 (2007), 113-140. Google Scholar
F. Bucci and I. Chueshov, Long-time dynamics of a coupled system of nonlinear wave and thermoelastic plate equations, Dynam. Sys., 22 (2008), 557-586. doi: 10.3934/dcds.2008.22.557. Google Scholar
I. Chueshov, Dynamics of Quasi-Stable Dissipative Systems, Springer, 2015. doi: 10.1007/978-3-319-22903-4. Google Scholar
I. Chueshov, Long-time dynamics of Kirchhoff wave models with strong nonlinear damping, J. Diff. Equs., 252 (2012), 1229-1262. doi: 10.1016/j.jde.2011.08.022. Google Scholar
I. Chueshov, Introduction to the Theory of Infinite Dimensional Dissipative Systems, Acta, Kharkov, 1999, in Russian; English translation: Acta, Kharkov, 2002; http://www.emis.de/monographs/Chueshov/ Google Scholar
I. Chueshov, M. Eller and I. Lasiecka, Finite dimensionality of the attractor for a semilinear wave equation with nonlinear boundary dissipation, Comm. PDE, 29 (2004), 1847-1976. doi: 10.1081/PDE-200040203. Google Scholar
I. Chueshov and I. Lasiecka, Global attractors for von Karman evolutions with a nonlinear boundary dissipation, J. Differ. Equs., 198 (2004), 196-231. doi: 10.1016/j.jde.2003.08.008. Google Scholar
I. Chueshov and I. Lasiecka, Long-time behavior of second-order evolutions with nonlinear damping, Memoires of AMS, 195, 2008. doi: 10.1090/memo/0912. Google Scholar
I. Chueshov and I. Lasiecka, Long-time dynamics of von Karman semi-flows with non-linear boundary/interior damping, J. Differ. Equs., 233 (2008), 42-86. doi: 10.1016/j.jde.2006.09.019. Google Scholar
I. Chueshov and I. Lasiecka, Von Karman Evolution Equations, Springer-Verlag, 2010. doi: 10.1007/978-0-387-87712-9. Google Scholar
I. Chueshov, I. Lasiecka and D. Toundykov, Global attractor for a wave equation with nonlinear localized boundary damping and a source term of critical exponent, J. Dyn. Diff. Equs., 21 (2009), 269-314. doi: 10.1007/s10884-009-9132-y. Google Scholar
I. Chueshov, I. Lasiecka and J. T. Webster, Attractors for delayed, non-rotational von Karman plates with applications to flow-structure interactions without any damping, Comm. in PDE, 39 (2014), 1965-1997. doi: 10.1080/03605302.2014.930484. Google Scholar
P. Ciarlet and P. Rabier, Les Equations de Von Karman, Springer, 1980. Google Scholar
A. Eden and A. J. Milani, Exponential attractors for extensible beam equations, Nonlinearity, 6 (1993), 457-479. Google Scholar
P. Fabrie, C. Galusinski, A. Miranville and S. Zelik, Uniform exponential attractors for a singularly perturbed damped wave equation, Discrete Cont. Dyn. Sys, 10 (2004), 211-238. doi: 10.3934/dcds.2004.10.211. Google Scholar
P. G. Geredeli and J. T. Webster, Qualitative results on the dynamics of a Berger plate with nonlinear boundary damping, Nonlin. Anal: Real World Applications, 31 (2016), 227-256. doi: 10.1016/j.nonrwa.2016.02.002. Google Scholar
P. G. Geredeli, I. Lasiecka and J. T. Webster, Smooth attractors of finite dimension for von Karman evolutions with nonlinear frictional damping localized in a boundary layer, J. Diff. Eqs., 254 (2013), 1193-1229. doi: 10.1016/j.jde.2012.10.016. Google Scholar
P. G. Geredeli and J. T. Webster, Decay rates to equilibrium for nonlinear plate equations with geometrically constrained, degenerate dissipation, Appl. Math. and Optim., 68 (2013), 361-390. Erratum, Appl. Math. and Optim., 70 (2014), 565-566. Google Scholar
J. K. Hale and G. Raugel, Attractors for dissipative evolutionary equations, In International Conference on Differential Equations (Vol. 1, p. 2), 1993, World Scientific River Edge, NJ. Google Scholar
G. Ji and I. Lasiecka, Nonlinear boundary feedback stabilization for a semilinear Kirchhoff plate with dissipation acting only via moments-limiting behavior, JMAA, 229 (1999), 452-479. doi: 10.1006/jmaa.1998.6170. Google Scholar
A. Kh. Khanmamedov, Global attractors for von Karman equations with non-linear dissipation, J. Math. Anal. Appl, 318 (2006), 92-101. doi: 10.1016/j.jmaa.2005.05.031. Google Scholar
J. Lagnese, Boundary Stabilization of Thin Plates, SIAM, 1989. doi: 10.1137/1.9781611970821. Google Scholar
I. Lasiecka and R. Triggiani, Control Theory for Partial Differential Equations, Cambridge University Press, Cambridge, 2000. Google Scholar
I. Lasiecka and R. Triggiani, Sharp trace estimates of solutions to Kirchhoff and Euler-Bernoulli equations, Appl. Math Optim, 28 (1993), 277-306. doi: 10.1007/BF01200382. Google Scholar
V. Kalantarov and S. Zelik, Finite-dimensional attractors for the quasi-linear strongly-damped wave equation, J. Diff. Equs., 247 (2009), 1120-1155. doi: 10.1016/j.jde.2009.04.010. Google Scholar
J. L. Lions, Optimal Control of Systems Governed by Partial Differential Equations, Springer, 1971. Google Scholar
J. L. Lions, Contrôlabilité exacte, perturbations et stabilization de systèmes distribués, Vol. I, Masson, Paris, 1989. Google Scholar
J. Málek and D. Pražak, Large time behavior via the method of $l$-trajectories, J. Diff. Eqs., 181 (2002), 243-279. doi: 10.1006/jdeq.2001.4087. Google Scholar
A. Miranville and S. Zelik, Attractors for dissipative partial differential equations in bounded and unbounded domains, in Handbook of Differential Equations: Evolutionary Equations (M. C. Dafermos and M. Pokorny eds.). doi: 10.1016/S1874-5717(08)00003-0. Google Scholar
V. Pata and S. Zelik, Smooth attractors for strongly damped wave equations, Nonlinearity, 19 (2006), 1495-1506. doi: 10.1088/0951-7715/19/7/001. Google Scholar
J.-P. Puel and M. Tucsnak, Boundary stabilization for the von Karman equations, SIAM J. Control and Optim., 33 (1995), 255-273. doi: 10.1137/S0363012992228350. Google Scholar
D. Pražak, On finite fractal dimension of the global attractor for the wave equation with nonlinear damping, J. Dyn. Diff. Eqs., 14 (2002), 764-776. doi: 10.1023/A:1020756426088. Google Scholar
G. Raugel, Global attractors in partial differential equations, In Handbook of Dynamical Systems (B. Fiedler ed.), v. 2, Elsevier Sciences, Amsterdam, 2002. doi: 10.1016/S1874-575X(02)80038-8. Google Scholar
J. Simon, Compact sets in the space $L^p(0,T;B)$, Annali di Matematica pura ed applicata IV, CXLVI (1987), 65-96. doi: 10.1007/BF01762360. Google Scholar
C. P. Vendhan, A study of Berger equations applied to nonlinear vibrations of elastic plates, Int. J. Mech. Sci, 17 (1975), 461-468. Google Scholar
Grating mathematical phrases---How to correct?
As mathematics educators, we all have come across students using mathematical notation incorrectly (looking at you, $\frac{d}{dx}$ vs $\frac{dy}{dx}$ or $\frac{\infty^2}{\infty}$). My question focuses on "verbal notation." For example, my hackles go up when I hear the following:
"take the prime of $f$" or "$d$-$dx$ the function" or "derive the function" instead of "compute the derivative of $f$" or "find $f'(x)$" (edit: or "differentiate the function"). Double chalkboard-fingernails for "the prime of the prime" and it's ilk.
"anti-derivative the function" instead of "integrate the function" or (even better) "find the indefinite integral of the function"
"minus/minusing $a$ from $b$" instead of "subtract $a$ from $b$" or "compute $b$ minus $a$"
"plus/plussing $a$ and $b$" or instead of "add $a$ and $b$" or "find $a$ plus $b$"
"take the inverse of a fraction" instead of "take the reciprocal of a fraction" (debatable, the "multiplicative inverse of a fraction" does appear in sources)
The list goes on from there--I would be curious to hear your pet peeve phrases! My question is this:
Is it overly picky and pedagogical to correct such phrases? If it is appropriate to correct these phrasings, is it situation dependent (tutoring/recitation/lecture) and how would you do so?
I would like to emphasize that this is a question specifically about phrasing and verbalizing mathematical operations. Assume that the hypothetical student is generally performing the correct operations, is capable of reasonably proper written notation, and "plussing $a$ and $b$" would be the correct step.
Edit: Running list of other phrasing
"vertexes" vs. "vertices", probably applying to pluralizations of many other words as well (axises vs axes, ...). Credit to kcrisman
Opposite of the above, "vertices" or "vertice" to refer to a single object (c.f. $x$-axes etc). Credit to Andreas Blass
Misuses of mathematical verbs such as "Solve 16 + 58" or "Prove the integral." Credit to Jack M
students-mistakes notation
erfink
$\begingroup$ @WeckarE. It is a case of verbing the noun. The action is usually called anti-differentiation, which results in the anti-derivative. $\endgroup$ – Adam Mar 16 '17 at 14:57
$\begingroup$ "Plussing" and "minussing" physically hurt me ears when I hear them. $\endgroup$ – Feathercrown Mar 16 '17 at 20:45
$\begingroup$ @erfink Hmm what is wrong with "Find the indefinite integral of the function"? If I had to bet money I would guess "Find the anti-derivative of the function" would be the correct option, but I can't see how saying indefinite integral makes it wrong. $\endgroup$ – Ovi Mar 16 '17 at 22:35
$\begingroup$ I'm personally much more put off by misuses of mathematical verbs such as "Solve 16 + 58" or "Prove the integral", which show actual conceptual confusion rather than just being ignorance of standard terminology. $\endgroup$ – Jack M Mar 16 '17 at 23:08
$\begingroup$ @JamesFoit The question is not really about using shorthands, but about using the wrong words. The correct and rigorous (and useful for the future) phrase is not necessarily and longer. Precision is important, and we want the verbalizations to remain efficient in the future. $\endgroup$ – Joonas Ilmavirta Mar 17 '17 at 9:29
Personally, I don't think we attend to this sufficiently in lower-level mathematics (where it's actually needed most). Students need that vocabulary to interface with books, future teachers, tutors, other students, etc. I run questions on it in weekly quizzes; and if I had my druthers, it would be a major component of all tests (in addition to application-level stuff).
In my experience, you've got to jump on that stuff as directly, firmly, and as soon as possible to make a difference. Really lead by example that it's a priority for you that students know how to interface with that language for their next step. I never let it go by if it comes up in class; I always address the class with, "Can anyone help me? What's the correct word for this?". At least by the level of college algebra and above my students definitely respond positively to this, and it gets better rapidly.
Some of my non-native English speakers express outright fear the first day when it becomes clear that this is the emphasis, but I do try to reassure them that in some sense we're all in the same boat, and prior students in that situation have rapidly improved and done extremely well. They're usually thankful for that emphasis by the end of the semester.
Daniel R. Collins
$\begingroup$ When did "interface" become a verb? ;-) But I love the point on knowing what the vocab is for future use. Sort of like how if you don't know both prime and Leibniz notation, you are in deep trouble later on. $\endgroup$ – kcrisman Mar 16 '17 at 5:04
$\begingroup$ @kcrisman: Interface is listed as a verb in every dictionary I can find (e.g., merriam-webster.com/dictionary/interface). I don't have the OED to look up full history -- but at least since my Webster's New World, printed 1988. $\endgroup$ – Daniel R. Collins Mar 16 '17 at 10:14
$\begingroup$ @kcrisman Around 1940. At least according to google's ngrams: books.google.com/ngrams/… $\endgroup$ – Kevin Mar 16 '17 at 13:06
$\begingroup$ @kcrisman Funny thing that it was a verb before it was a noun (in the modern context that doesn't solely revolve around aviation). $\endgroup$ – Weckar E. Mar 16 '17 at 15:01
$\begingroup$ I found it much worse that students didn't understand what $=$ meant than that they would "d-dx it" (which by the way is mathematically absolutely fine as it's an operator even if verbally I suppose it's strange since you don't "f 5" if you want to find $f(5)%$). Unfortunately I found that the semester I would have with the students wasn't long enough to break the 8+ years of teachers using $=$ to mean "compute" and not punishing $3x+4=5=1/3$ and $f(x+3)=\sin(x+3)=\sin^2(x+3)$ when computing $h(f(x+3))$ $\endgroup$ – DRF Mar 21 '17 at 11:05
I wish to give a slightly different answer compared to the others.
Strict and Standardized Notations is Very Important
They not only help us communicate better, they also help us think. They prime us to remember things and understand things better. For example, if I see $a^2 + b^2 = c^2$, I think Pythagoras Theorem and right angle triangles. If I see $k^2 + y^2 = t^2$, I don't.
It also allows you to be more accurate, and make sure your logic is not flawed.
Non-standard Notation Could also be helpful
Though rare, not using standard notations could help with thinking about a problem in another way, or coming up with a different sub-field of math.
There are Different Standards/Language is Evolving
Like any other language, mathematical language is evolving. If enough people use a phrase, it is a correct phrase. Different mathematical papers use different standards.
Commonly used Standard Notation could be suboptimal
The first notation for a subfield is usually made by the guy who first ventures into this subfield. Being the first, he is exploring unfamiliar territory, and his standards end up suboptimal. Then more people come in, and each tries to invent a better standard, or a more universal notation, and it ends up like xkcd 927.
Furthermore, it should be noted that different notations are more useful at different times.
Verbal Notations are often much more flexible than written ones
People often don't speak in completely correct sentences. Things are shortened. Words are changed.
Verbal Math is often an attempt to translate a formula to English?
How would you say $(2a + b) \times c$? There is no guide to speaking formulas. Do you say "The product of c and the sum of two-a and b"? That clearly got the multiplicands in reverse, and what is a two-a? Or do you say "Open bracket, two times a plus b, close bracket, times c"?
As another example, take "three x plus four b over seven all over nine". What does that mean?
So I'd say saying "d d x" or "d over d x" is perfectly fine. And if we can say that $f'$ is f prime; why can't be say that it is the prime of $f$, or that the action of differentiating is taking the prime of f?
Not Everyone Uses the Standard Notations/the same standard notations as you do
Unless your students will only be talking to you/other people that strictly follow the standards, they'll need to be flexible.
Are you sure you are right?
Are you sure that the things you find incorrect are actually incorrect, and not just using a particular standard?
Are you sure vertexes is not an allowable pluralization of vertex? Are you sure that the word vertices is not an appendage that is being/has been phased out? Will you insist that data must be plural, and one must use datum for the singular?
Are you sure you are pronouncing $\Omega$ correctly? Do you pronounce it like this or this? The former is more Greek, and is often used by people from certain areas in Europe (and, sometimes if taken to the extreme, sounds like "OH MY GOD"). Something like the latter is more used in America. The common pronunciation for me and my peers is something slightly different from the latter.
Are you sure the word "derive" cannot be used to mean "differentiate"? I cannot support this with evidence, but I remember some sources using derive in that manner, and some sources claiming that derive can indeed mean "differentiate".
You've already mentioned the debatability of inverse. I'm going to claim that using inverse of a fraction instead of reciprocal is perfectly allowed. And I would argue that "minus a from b" is perfectly allowed as well. "Plus a and b" is slightly more awkward. However, without consulting a mathematical grammar guide and dictionary, can you tell me why "plus" can not be used that way in math?
Everything considered, notations are important. You should seek to introduce your students to the different types of verbal notations. They definitely should be able to fluently use the word "differentiate". You should impress that some notations are more proper than others, and should be used most of the time. If called upon, they should be able to use proper notations.
However, it is also important for them to understand and use other "less proper" notations. In general, it is fine to use these "less proper" verbal notations. However, if it leads to a situation where the students are unable to use proper notation, or when the usage of certain verbal notation is hindering communication or thought, proper notation should be emphasized.
Finally, you can simply use the "correct" notation in your speech, and in general, the students will follow. You can also explicitly note, every so often, that while "derive" can be used sometimes, there are other notations, and "differentiate" is generally a better and clearer word.
I feel the need to add onto this answer.
I would first like to draw attention to this question, which has great answers.
To quote some of the quotes given:
"The student of mathematics has to develop a tolerance for ambiguity. Pedantry can be the enemy of insight." - Gila Hanna
As far as possible we have drawn attention in the text to abuse of language, without which any mathematical text runs the risk of pedantry not to say unreadability. - Bourbaki
Also linked in the answers to that question is an article by Terence Tao, who describes the progression of mathematical education in three stages: "pre-rigorous", "rigorous", and "post-rigorous". I'd argue that any sub-field in math is learnt kind of in this manner. I would say that the student should be only be steered toward correct notation in the pre-rigorous stage, and that if notation is to be emphasized, it should be during the "rigorous" stage.
Fluidized Pigeon Reactor
$\begingroup$ Many excellent points; I very much appreciate the devil's advocate point of view (a good portion of why I asked this question). I'll have to think about a few of them to give a reasoned response. A first response would be: why do you feel that "plus $a$ and $b$" is more problematic than "minus $a$ and $b$"? From the perspective of ambiguity, fewer things can go wrong with "plussing" numbers due to it being a symmetric operator... $\endgroup$ – erfink Mar 16 '17 at 7:53
$\begingroup$ I only feel it is more problematic because it sounds weird. It probably sounds weird because the word plus is used in other situations more often. "Minus a from b" sounds like a complete sentence ("minus a and b" would be wrong), but when I hear "plus a and b", I expect something in front of it. I'm ultimately not sure. Most likely it has to do with the frequency I hear these things. $\endgroup$ – Fluidized Pigeon Reactor Mar 16 '17 at 8:11
$\begingroup$ I think that's my point--I can't necessarily define why "plus $a$ and $b$" is wrong from a strictly mathematical perspective, but I can tell you that it feels awful to say out loud. Points to ponder before we fall too deep into a conversation for linguistics.sx =) $\endgroup$ – erfink Mar 16 '17 at 8:18
$\begingroup$ Yes, that is true. It does indeed feel bad to hear sometimes. However, that is a function of our own education, and our own standards, which we got from those who taught us and from those we interacted with. However, what sounds bad to us may not actually be wrong. On the one hand, we have to be careful about saying things like "You should never start a sentence with "because"". On the other hand, we are supposed influence our students to do more proper things. $\endgroup$ – Fluidized Pigeon Reactor Mar 16 '17 at 8:29
$\begingroup$ This leads us to 2 solutions. One is to try to learn about every standard and notation. The other is to make soft suggestions. If something sounds wrong and awkward, but might not be strictly wrong, we might say "It might be better if we rephrase what you said this way". If something is definitely wrong, and should definitely be corrected, I think we can say something much more firm. What do you think? $\endgroup$ – Fluidized Pigeon Reactor Mar 16 '17 at 8:31
Here's my stab at a self-answer:
I think we would all agree that precise written notation is important within mathematics. Unless the context is specifically reverse polish notation, a student writing $+~2~~ 2$ would be bizarre and incorrect. As such, I feel that it is also important to emphasize precision when verbalizing mathematics.
Using the analogy of mathematics as foreign language, it would be strange to learn French with strict emphasis on proper spelling and grammar but to never have pronunciation corrected. While mathematics is primarily a written language, more emphasis is justly placed on written notation. However, I feel that we should also place value on spoken mathematics by correcting such phrasings.
My personal approach follows advice of what I've heard to do when a colleague is using a fancy vocab word incorrectly: try to use the same word in a proper context as soon as possible, rather than a direct "I do not think it means what you think it means." My goal is to point out the mistake while not coming off as nit-picky. My personal approach also tries to be sensitive---humiliating a student, even unintentionally, in front of their peers can be quite damaging.
For example, if a student used one of these phrasings while offering a suggestion or asking a question during class, I would try to parrot the statement back correctly and placing a slight emphasis on the correct phrasing:
"Good---in order to find the critical points, we'll need to compute the derivative of $f$ and ...
"I agree, subtracting $b$ from both sides of the equation will ..."
$\begingroup$ The "parrot the statement back" strategy is dependent on the audience population. With high-functioning, engaged and interested listeners, it is indeed helpful and polite. But with low-functioning students (not engaged, language and listening problems) it is in my experience too subtle, and only explicitly addressing it makes a difference. $\endgroup$ – Daniel R. Collins Mar 17 '17 at 2:50
The standard verb is "(anti)differentiate", right? That's quite a mouthful. Probably okay to correct but with a light heart - make it into a joke, if the context is right. It is useful to be able to use standard terminology, so I hear you.
As an example, I had a graph theory class once where one student consistently said "vertexes" rather than "vertices" - I never once had to correct him after the first week, another student and he made it into a running game. For all I know he tells this same story in his career as a jazz musician (not kidding!).
What you shouldn't do is find ways to shame students who are struggling with the computations, let alone concepts. (I'm not suggesting you are doing this. But it's easy to come across this way, as many of us have experienced.) Bonus points for the first person to use correct terminology? Or pie for the first one to come up with a reason why "prime the prime" would be ambiguous? That last one seems pretty unambiguous to me, by the way - it's annoying more because it focuses on the algebra rather than the idea of acceleration than because of the wording.
As a side note, "verbing the noun" seems to be more and more common, and is probably a normal linguistic change within English in general. This discussion may seem quaint a hundred years from now (imagine smiley emoji/emoticon here).
kcrisman
$\begingroup$ I agree that "prime the prime" is unambiguous in meaning, but sounds really grating. I look at it as there is a symbol ' (prime) that denotes a derivative is being/has been taken, but the operation we performed was not "priming." I would find it similarly strange to hear "take the Sigma of a sequence" instead of "take the sum of a sequence." $\endgroup$ – erfink Mar 16 '17 at 3:31
$\begingroup$ Also, "verbing the noun." Good point. So much for the Queen's English. $\endgroup$ – erfink Mar 16 '17 at 3:31
$\begingroup$ Maybe that question is for linguistics.SX.com :) see e.g. bbc.com/culture/story/… and Bill Watterson's take at gocomics.com/calvinandhobbes/1993/01/25 $\endgroup$ – kcrisman Mar 16 '17 at 4:58
$\begingroup$ "So much for the Queen's English." - or as Shakespeare wrote (in a previous Queen's English) "but me no buts!" Verbing and nouning conjunctions weirds language even worserer ;) $\endgroup$ – alephzero Mar 16 '17 at 8:44
Personally I feel like there are much more important issues for all students I've encountered while TA'ing/teaching in the US than whether they verbalize math correctly. I've generally taught lvl 300 courses (basic calc) and I've uniformly found that the students have incredibly poor notation with very few exceptions.
While verbalizing math badly isn't great, unless your students are drastically different to the ones I've seen, it feels like focusing on that is a bit like making sure the aspiring cook who doesn't know how to turn on the stove is great at naming the ingredients for beef wellington.
DRF
$\begingroup$ I think we're generally referring to similar students in lower-division under-graduate courses. I will agree with you that written notation is more important than verbalization, but this doesn't excuse poor verbalization of mathematics. This is part of why I asked the question---how do we steer and correct verbal mathematics without spending an entire lecture on the subject? $\endgroup$ – erfink Mar 22 '17 at 3:13
Method | Open | Published: 06 May 2019
SCRABBLE: single-cell RNA-seq imputation constrained by bulk RNA-seq data
Tao Peng 1,
Qin Zhu 2,
Penghang Yin 3 &
Kai Tan ORCID: orcid.org/0000-0002-9104-5567 1,2,4,5,6,7
Genome Biology volume 20, Article number: 88 (2019)
Single-cell RNA-seq data contain a large proportion of zeros for expressed genes. Such dropout events present a fundamental challenge for various types of data analyses. Here, we describe the SCRABBLE algorithm to address this problem. SCRABBLE leverages bulk data as a constraint and reduces unwanted bias towards expressed genes during imputation. Using both simulation and several types of experimental data, we demonstrate that SCRABBLE outperforms the existing methods in recovering dropout events, capturing true distribution of gene expression across cells, and preserving gene-gene relationship and cell-cell relationship in the data.
Single-cell RNA sequencing (scRNA-seq) has revolutionized cell biology, enabling studies of heterogeneity and transcriptome dynamics of complex tissues at single-cell resolution. However, a major limitation of scRNA-seq data is the low capturing and sequencing efficiency affecting each cell, resulting in a large proportion of expressed genes with zeros or low read counts, which is known as the "dropout" phenomenon. Such dropout events lead to bias in downstream analysis, such as clustering, classification, differential expression analysis, and pseudo-time analysis. To address this critical challenge, two types of approaches have been developed. One approach adopts analysis strategies that take dropout into consideration. For instance, ZINB-WaVE generates weights for genes and cells using a zero-inflated negative binomial model which in turn is used to detect differential expression [1]. Lun et al. used a pool-and-deconvolute approach to deal with dropout events for accurate normalization of scRNA-seq data [2]. The second approach is direct imputation of scRNA-seq data. Among these methods, MAGIC imputes dropout events by data diffusion based on a Markov transition matrix that defines a kernel distance measure among cells [3]. scImpute [4] first computes dropout probability using a two-component mixture model. It then uses a LASSO model to impute dropout values. Similarly, SAVER [5] also uses a linear regression to impute the missing data. But, it differs from the scImpute by using a Bayesian model to compute the probability of dropout events. DrImpute [6] first conducts consensus clustering of cells followed by imputation by the average value of similar cells. VIPER uses a non-negative sparse regression model to progressively infer local neighborhood cells for imputation [7].
All imputation methods above recover dropout values using scRNA-seq only. Here, we describe the SCRABBLE algorithm for imputing scRNA-seq data by using bulk RNA-seq as a constraint. SCRABBLE only requires consistent cell population between single-cell and bulk data. The bulk data represent the unfractionated composite mixture of all cell types without sorting them into individual types. For many scRNA-seq data, there are usually existing bulk data on the same cell/tissue. And it is becoming increasingly common to collect matched bulk data when a new scRNA-seq experiment is performed. Bulk RNA-seq data allows SCRABBLE to achieve a more accurate estimate of the gene expression distributions across cells than using single-cell data alone. SCRABBLE is based on the framework of matrix regularization that does not impose an assumption of specific statistical distributions for gene expression levels and dropout probabilities. It also does not force the imputation of genes that are not affected by dropout events.
SCRABBLE is based on the mathematical framework of matrix regularization [8]. It imputes dropout data by optimizing an objective function that consists of three terms (Fig. 1). The first term ensures that imputed values for genes with non-zero expression remain as close to their original values as possible, thus minimizing unwanted bias towards expressed genes. The second term ensures the rank of the imputed data matrix to be as small as possible. The rationale is that we only expect a limited number of distinct cell types in a given tissue sample. The third term operates on the bulk RNA-seq data. It ensures consistency between the average gene expression of the aggregated imputed data and the average gene expression of the bulk RNA-seq data. We developed a convex optimization algorithm to minimize the objective function (see the "Methods" section). The existence of an optimal solution is guaranteed mathematically [8].
Schematic overview of the SCRABBLE algorithm. The objective function is shown on the top. It has three terms. The first term represents the difference between the raw scRNA-seq data matrix and its projection of the optimizing matrix. The projection of the optimizing matrix has the same profile of zeros as that of the raw scRNA-seq data. The second term is the rank of the optimizing matrix. The third term represents the difference between the bulk RNA-seq data and the aggregated scRNA-seq data across cells. Here, the bulk data represent the composite mix of all cell types without sorting them into individual types
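To make the optimization concrete, the sketch below writes the three terms as a convex objective and minimizes it with a simple proximal-gradient loop, using the nuclear norm as the usual convex surrogate for rank and singular-value thresholding as its proximal operator. This is a minimal illustration of the idea, not the published SCRABBLE implementation: the function names, the exact weighting of the three terms, the non-negativity projection, and the solver are all assumptions made for the example.

```python
import numpy as np

def svt(M, tau):
    """Singular-value thresholding: proximal operator of tau * (nuclear norm)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def impute_sketch(Y, z, alpha=1.0, beta=1e-4, n_iter=200):
    """Toy SCRABBLE-like imputation (illustrative only, not the published code).

    Y : cells x genes raw scRNA-seq matrix (zeros are candidate dropouts)
    z : gene-wise bulk expression vector, same gene order as Y
    Minimizes  alpha * ||P_Omega(X) - Y||_F^2        (keep observed values)
             + ||X||_*                               (low-rank surrogate for rank)
             + beta  * ||mean_over_cells(X) - z||^2  (bulk consistency)
    """
    n_cells, _ = Y.shape
    omega = Y > 0                                   # observed (non-zero) entries
    X = Y.astype(float).copy()
    step = 1.0 / (2 * alpha + 2 * beta / n_cells)   # crude step size for the smooth part
    for _ in range(n_iter):
        grad = np.zeros_like(X)
        grad[omega] = 2 * alpha * (X[omega] - Y[omega])       # data-fidelity gradient
        grad += (2 * beta / n_cells) * (X.mean(axis=0) - z)   # bulk-consistency gradient
        X = svt(X - step * grad, step)                        # nuclear-norm proximal step
        X = np.maximum(X, 0.0)                                # heuristic: expression >= 0
    return X

# Tiny usage example on random data (bulk vector approximated by the true gene means)
rng = np.random.default_rng(0)
true = rng.gamma(2.0, 2.0, size=(50, 200))
Y = true * rng.binomial(1, 0.4, size=true.shape)   # ~60% of entries dropped out
X_hat = impute_sketch(Y, true.mean(axis=0))
print(np.linalg.norm(X_hat - true) / np.linalg.norm(true))
```

In practice one would stop on a convergence criterion rather than a fixed iteration count; the fixed loop is kept only to keep the sketch short.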
We first evaluated the performance of SCRABBLE using simulated data where the ground truth is known. We used two simulation strategies. Strategy 1 is based on the Splatter method and generates completely synthetic data (Fig. 2a, Additional file 1: Figure S1). Splatter captures many features observed in scRNA-seq data, including zero-inflation, gene-wise dispersion, and differing sequencing depths between cells [9]. Strategy 2 uses a down-sampled real bulk RNA-seq dataset [10] (Fig. 3a, Additional file 1: Figure S3). Here, we introduced dropout events using an exponential function to control the dropout rate (parameter λ) and a Bernoulli process to introduce dropout events at the corresponding dropout rate [4, 11] (see the "Methods" section). Using the two strategies, we simulated data with dropout rates corresponding to 60 to 87% zeros in the data. Moreover, to evaluate the robustness of the imputation methods, we simulated 100 data sets at each dropout rate. It is well known that real RNA-seq data tend to show a characteristic inverse relationship between mean and variance [12]. We confirmed that our simulated data also contain this property using mean-variance plots (Additional file 1: Figures S1 and S3).
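As an illustration of this dropout-simulation step, the sketch below zeroes out entries of a true expression matrix using an exponentially decaying dropout probability followed by a Bernoulli draw. The specific functional form exp(-λ·log(x+1)²) and the parameter name lam are assumptions made for the example; the exact form used in the cited studies may differ.

```python
import numpy as np

def add_dropouts(X_true, lam, seed=0):
    """Introduce dropout events into a true expression matrix.

    Each entry is zeroed with probability p = exp(-lam * log(x + 1)^2), so lowly
    expressed genes are more likely to drop out; smaller lam gives more zeros.
    """
    rng = np.random.default_rng(seed)
    p_drop = np.exp(-lam * np.log1p(X_true) ** 2)   # per-entry dropout probability
    keep = rng.binomial(1, 1.0 - p_drop)            # Bernoulli draw: 1 = observed
    return X_true * keep

# Example: simulate a true matrix and inspect the resulting fraction of zeros
X_true = np.random.default_rng(1).gamma(2.0, 3.0, size=(300, 5000))
X_drop = add_dropouts(X_true, lam=0.1)
print("zero fraction:", np.mean(X_drop == 0))
```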
Performance evaluation using synthetic data. a A representative imputation result using simulated data containing 1000 cells and 800 genes. The data was simulated using the Splatter method [9]. The dropout rate is 83%. b t-SNE plots of the representative imputation results. c MA plots of the representative imputation results. d–f Imputation errors for data with different percentages of zeros in the data (71%, 83%, and 87%). The imputation error is defined as the L2 norm of the difference between the imputed data matrix and the true data matrix. Each boxplot represents the result from 100 simulated datasets. P values are based on Student's t test
Performance evaluation using down-sampled bulk RNA-seq data. a Schematic overview of the simulation strategy. Starting from the bulk RNA-seq data matrix consisting of three types of cells, T1 cells, T2 cells, and T3 cells, the data matrix X1 is obtained by resampling of raw data from the different type cells separately. Then, each element (xij) in the data matrix is perturbed by the normal distribution N(0, 5V) (V is the vector of standard deviation of genes across replicates in the bulk RNA-seq data), and the true data set X2 is generated. Finally, dropout events are introduced in X2 using an exponential function, resulting in the dropout data set X3. b A representative imputation result using simulated data. The dropout rate is 72%. c t-SNE plots of the representative imputation results. d MA plots of the representative imputation results. Imputation errors for data with 60% (e), 65% (f), 72% (g), and 77% (h) dropout rates. Each boxplot represents the result from 100 simulated datasets. P values are based on Student's t test
To evaluate the performance of each method, we define the imputation error as the L2 norm of the difference between the imputed and the true data matrices. Using both types of simulated data across a range of dropout rates, we found that SCRABBLE outperforms four state-of-the-art methods (DrImpute, scImpute, MAGIC, and VIPER) (Figs. 2d–f and 3e–h). More importantly, the performance gain is observed across the full spectrum of gene expression levels (Figs. 2c and 3d, Additional file 1: Figures S2, S4-S6). All other methods led to imputed values that were significantly lower than the true values for > 88% (Fig. 2c) and > 40% (Fig. 3d) of the genes. In contrast, SCRABBLE led to imputed values that were significantly higher than the true values for 1% (Fig. 2c) and 2% (Fig. 3d) of the genes. The imputed data by SCRABBLE also captures the data substructure (i.e., clusters) better as embedded in the true data (Figs. 2b and 3c, Additional file 1: Figures S2, S4-S6).
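In code, this error metric is simply the Frobenius (L2) norm of the difference between the two matrices; the dictionary of method outputs in the comment is hypothetical.

```python
import numpy as np

def imputation_error(X_imputed, X_true):
    """L2 (Frobenius) norm of the difference between imputed and true matrices."""
    return np.linalg.norm(X_imputed - X_true)

# Hypothetical usage: compare several imputation outputs against the truth
# errors = {name: imputation_error(X_hat, X_true) for name, X_hat in imputed.items()}
```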
Besides simulating dropout events, we also used a real scRNA-seq dataset [13] (and matched bulk RNA-seq [14]) for mouse embryonic stem cells (J1 line) where dropout events are identified by comparing the data generated using the Drop-Seq [15] and the SCRB-Seq [16] protocols. At the same sequencing depth, the former protocol has a higher dropout rate [13]. We identified 56 genes that have zero expression in at least 29% of the cells in the Drop-Seq data but non-zero expression levels in all cells in the SCRB-Seq data. We therefore used the expression levels of these 56 genes in the SCRB-Seq data as the gold standard and imputed the Drop-Seq data. We found that SCRABBLE achieves the best performance among all methods in terms of matching the distribution of gene expression between the imputed and gold-standard data (Fig. 4b, Additional file 2: Figure S7). The similarity between distributions is measured using the Kolmogorov-Smirnov test statistic. Like the performance using simulated data, the performance gain by SCRABBLE is observed across the full range of gene expression levels (Additional file 2: Figure S8). Figure 4a shows raw and imputed expression levels of two representative genes, Tmem208 and Naa25 (the rest of the genes are shown in Additional file 2: Figure S7). We observed the same performance gain by SCRABBLE in another set of 17 genes with dropout events in at least 39% of the cells (i.e., higher dropout rate, Additional file 2: Figure S9).
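A hedged sketch of this distribution comparison using SciPy's two-sample Kolmogorov-Smirnov test is shown below; the lognormal toy data stand in for the per-cell expression values of one gene in the gold-standard and imputed datasets, and are not drawn from the real data.

```python
import numpy as np
from scipy.stats import ks_2samp

def ks_agreement(imputed_expr, reference_expr):
    """Two-sample KS statistic; smaller values mean a closer match of distributions."""
    return ks_2samp(imputed_expr, reference_expr).statistic

# Toy per-cell expression values for one gene (stand-ins for SCRB-Seq/FISH vs. imputed)
rng = np.random.default_rng(0)
reference = rng.lognormal(mean=1.0, sigma=0.5, size=500)
imputed = rng.lognormal(mean=1.1, sigma=0.6, size=500)
print(ks_agreement(imputed, reference))
```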
SCRABBLE-imputed gene expression distribution has a better match with gold standards. a Gene expression distributions of two representative genes in true (SCRB-Seq), dropout (Drop-Seq), and imputed data. b Boxplots of the agreement of gene expression distribution between true data (SCRB-Seq) and imputed data using Drop-Seq data as input to the methods. Agreement between the two distributions is measured using the Kolmogorov-Smirnov (KS) test statistic. A set of 56 genes in mouse ES cells is examined. c Gene expression distributions of two representative genes in smRNA FISH data and imputed data. d Boxplots of the agreement of gene expression distribution between smRNA FISH data and imputed data. P values are based on Student's t test
We further assess the performance of SCRABBLE using single-molecule RNA fluorescence in situ hybridization (smRNA FISH) data and scRNA-seq data measured on the same cell type, mouse embryonic stem cell line, E14 [17, 18]. We compared the distributions of the imputed expression and smRNA FISH measurements for the same set of 12 genes across single cells. Overall, the distributions of expression values imputed by SCRABBLE have the highest agreement with the smRNA FISH data (Fig. 4d), suggesting best performance by SCRABBLE. Figure 4c shows raw and imputed expression levels of two representative genes, Esrrb and Tbp (the rest of the genes are shown in Additional file 2: Figure S10).
A major application of scRNA-seq is to better understand the gene-gene and cell-cell relationships in a complex tissue. Thus, a good imputation method should preserve the data structure that reflects the true gene-gene and cell-cell relationships. We computed the gene-gene and cell-cell correlation matrices using the data simulated using strategy 2. Using Pearson correlation, we then determined the similarity between the correlation matrices based on true data and dropout/imputed data. Data imputed by SCRABBLE gave rise to a significantly higher correlation to the true cell-cell correlations than those imputed by the other four methods (Fig. 5b). Figure 5a shows a set of representative cell-cell correlation matrices based on true, dropout, and imputed data. As can be seen, SCRABBLE does the best job in capturing the true cell-cell correlation patterns among the four methods. MAGIC reports a large number of high correlations. However, most of those are false positives judging by the true cell-cell correlation matrix. This is because MAGIC tends to impute counts that are not affected by dropout and thus tends to flatten the data distribution towards the sample mean. Histograms of the correlation values are shown in Additional file 2: Figure S11. We note that all imputation methods tend to distort the true data distribution as suggested by the inflated correlations based on the imputed data (Additional file 2: Figure S11). Nevertheless, the higher agreement of cell-cell correlations using true data and SCRABBLE imputed data is observed using the data simulated with both strategies and across a range of dropout rates (Additional file 2: Figures S12 and S13).
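A minimal sketch of this comparison: build the cell-cell (or gene-gene) correlation matrix for each dataset with numpy, then correlate their upper-triangle entries. The helper name and the choice to drop the diagonal are assumptions for the example; rows with zero variance (e.g., all-zero cells in the dropout data) should be filtered beforehand to avoid NaNs.

```python
import numpy as np
from scipy.stats import pearsonr

def correlation_structure_similarity(X_a, X_b, axis="cells"):
    """Pearson correlation between the cell-cell (axis='cells') or gene-gene
    (axis='genes') correlation matrices of two cells x genes matrices."""
    A = X_a if axis == "cells" else X_a.T
    B = X_b if axis == "cells" else X_b.T
    Ca, Cb = np.corrcoef(A), np.corrcoef(B)     # correlations between rows
    iu = np.triu_indices_from(Ca, k=1)          # drop the trivial diagonal
    r, _ = pearsonr(Ca[iu], Cb[iu])
    return r

# similarity = correlation_structure_similarity(X_true, X_imputed, axis="cells")
```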
SCRABBLE better preserves the true cell-cell and gene-gene relationships in the data. a Representative cell-cell correlation matrices using true, dropout, and imputed data. The dropout rate is 72%. Values are Pearson correlation coefficients. b Pearson correlation between the cell-cell correlation matrices based on true and dropout/imputed data. Boxplots represent 100 sets of simulated data. P values are based on Student's t test. c Representative gene-gene correlation matrices using true, dropout, and imputed data. d Pearson correlation between the gene-gene correlation matrices based on true and dropout/imputed data
For the gene-gene relationship, among the entire set of 5000 genes, data imputed by SCRABBLE results in the highest agreement with the gene-gene correlation pattern based on the true data (Fig. 5c, d). This higher agreement of gene-gene correlations is observed using the data simulated with both strategies and across a range of dropout rates (Additional file 2: Figures S14 and S15). Histograms of the correlation values are shown in Additional file 2: Figure S16.
The imputation procedure could inadvertently distort the clustering result. To evaluate this issue, we next computed the cell-cell and gene-gene correlations using cells/genes stratified based on their cluster membership (for cell-cell correlation) and on whether they are marker genes of a cluster (for gene-gene correlation). For cell-cell correlation, we computed the within- and between-cluster correlations across cells. For gene-gene correlation, we computed the correlations among marker genes and among marker and non-marker genes for a given cluster. For both cell-cell and gene-gene correlations, the distance between the two correlation distributions was quantified using the Kolmogorov-Smirnov (KS) statistic. Finally, the distortion of the clustering result is measured by comparing the KS statistic based on true data and imputed data. For both cell-cell (Additional file 2: Figures S17 and S18) and gene-gene (Additional file 2: Figures S19 and S20) correlations, SCRABBLE gives the smallest distortion compared to the other methods. The same performance gain is observed using the data simulated with strategy 1 (Additional file 2: Figures S21 and S22).
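One way to code this stratified comparison, under the assumption that the within/between split is made on pairwise Pearson correlations and summarized with a two-sample KS statistic (the exact summary used in the analysis may differ):

```python
import numpy as np
from scipy.stats import ks_2samp

def within_between_ks(X, labels):
    """KS statistic separating within-cluster from between-cluster
    cell-cell correlations (X: cells x genes, labels: cluster id per cell)."""
    labels = np.asarray(labels)
    C = np.corrcoef(X)                          # cell-cell correlation matrix
    same = labels[:, None] == labels[None, :]   # True where two cells share a cluster
    iu = np.triu_indices_from(C, k=1)
    within, between = C[iu][same[iu]], C[iu][~same[iu]]
    return ks_2samp(within, between).statistic

# Distortion of the cluster structure: |KS(true data) - KS(imputed data)|
# distortion = abs(within_between_ks(X_true, labels) - within_between_ks(X_imp, labels))
```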
Another way to evaluate the preservation of the gene-gene relationship in a sample is by using pathway annotations, because genes in the same pathway tend to have correlated expression. We applied SCRABBLE to the matched scRNA-seq and bulk RNA-seq data for seven cell types [19]: H1 and H9 (human embryonic stem cell lines), human foreskin fibroblast (HFF), definitive endoderm cells (DEC), endothelial cells (EC), trophoblast (TB)-like cells, and neuronal progenitor cells (NPC). We defined a pathway gene correlation score (PGCS), which measures the increase in the expression correlation among the pathway genes compared to a set of randomly selected genes of the same size. We then computed the difference in PGCS (ΔPGCS) between the imputed data and un-imputed data. For a better imputation method, we expect to see a larger ΔPGCS value. Using pathway annotations from three databases, Ingenuity Pathway Analysis (IPA) [20], Kyoto Encyclopedia of Genes and Genomes (KEGG) [21], and REACTOME [22], we found that SCRABBLE consistently produces larger ΔPGCS values compared to the other four methods (Fig. 6, Additional file 2: Figures S23-S25) in all cell types examined, suggesting that data imputed by SCRABBLE better preserve the gene-gene relationship information in the data.
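A possible way to compute PGCS and ΔPGCS is sketched below, under the assumption that PGCS is the mean absolute pairwise correlation among pathway genes minus the average of the same quantity over random gene sets of equal size (the exact formula is not spelled out here); all names are illustrative.

```python
import numpy as np

def pgcs(X, pathway_idx, n_random=100, seed=None):
    """Pathway gene correlation score: mean pairwise correlation among pathway
    genes minus the same quantity averaged over random gene sets of equal size.
    X: genes x cells matrix; pathway_idx: row indices of pathway genes."""
    rng = np.random.default_rng(seed)

    def mean_pairwise_corr(idx):
        C = np.corrcoef(X[idx, :])
        iu = np.triu_indices_from(C, k=1)
        return np.nanmean(np.abs(C[iu]))           # absolute correlation is an assumption

    random_scores = [
        mean_pairwise_corr(rng.choice(X.shape[0], size=len(pathway_idx), replace=False))
        for _ in range(n_random)
    ]
    return mean_pairwise_corr(pathway_idx) - np.mean(random_scores)

def delta_pgcs(X_imputed, X_raw, pathway_idx):
    # A larger value indicates that imputation strengthened pathway co-expression.
    return pgcs(X_imputed, pathway_idx) - pgcs(X_raw, pathway_idx)
```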
Pairwise expression correlation among pathway genes is improved using imputed data. A pathway gene correlation score (PGCS) measures the increase in expression correlation among pathway genes compared to a set of randomly selected genes of the same size. ΔPGCS is the difference in PGCS between imputed data and un-imputed data. For each data set (dropout or imputed data), a ΔPGCS value is computed for each pathway. Boxplots represent ΔPGCS values for 186 pathways in the IPA database. P values are based on Student's t test. a Human H1 ES cell data (H1). b Human trophoblast (TB)-like cell data. c Human foreskin fibroblast (HFF) cell data
To demonstrate that SCRABBLE can improve downstream analysis, we applied it to the matched scRNA-seq [23] and bulk RNA-seq [24] data of 8 mouse tissues, including fetal brain (4369 cells), fetal liver (2699 cells), kidney (4673 cells), liver (4685 cells), lung (6940 cells), placenta (4346 cells), small intestine (6684 cells), and spleen (1970 cells). Using both raw and imputed scRNA-seq data, multiple cell types (as determined by signature gene expression) can be detected using K-nearest neighbor clustering (Fig. 7a, Additional file 2: Figures S26-S32). This result further demonstrates that SCRABBLE can capture cell heterogeneity in complex tissues even though it uses average gene expression values from the bulk data. To evaluate the clustering quality using either raw or imputed data, we used the Dunn index, which is the ratio of the minimal inter-cluster distance to the maximal intra-cluster distance; a higher Dunn index indicates better separation among clusters. We found that using data imputed by SCRABBLE results in improved clustering quality compared to clustering without imputation or with data imputed by the other four methods (Fig. 7b, Additional file 2: Figures S26-S32).
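For reference, a minimal Python/scipy sketch of the Dunn index as defined above (minimal inter-cluster distance divided by maximal intra-cluster distance) is given below; the function name and the assumption that cells are represented by low-dimensional coordinates (e.g., principal components) are ours.

```python
import numpy as np
from scipy.spatial.distance import cdist

def dunn_index(X, labels):
    """Dunn index: minimal inter-cluster distance / maximal intra-cluster diameter.

    X: cells x features matrix (e.g., PCA coordinates); labels: cluster per cell.
    """
    labels = np.asarray(labels)
    clusters = [X[labels == c] for c in np.unique(labels)]
    # largest within-cluster pairwise distance (cluster diameter)
    max_diameter = max(cdist(c, c).max() for c in clusters)
    # smallest distance between points belonging to two different clusters
    min_separation = min(
        cdist(clusters[i], clusters[j]).min()
        for i in range(len(clusters)) for j in range(i + 1, len(clusters))
    )
    return min_separation / max_diameter
```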
SCRABBLE improves the clustering analysis. a Clustering results using un-imputed and imputed data by various methods. scRNA-seq data was clustered using K-nearest neighbor clustering and visualized using t-SNE. The number of clusters (K) was based on the ones provided by the authors. Cell type of each cluster was identified based on marker genes provided by the authors. b Quantification of cluster quality using the Dunn index
SCRABBLE has three parameters (i.e., α, β, and γ). To evaluate the robustness of SCRABBLE over parameter settings, we varied the values of the three parameters by 0.1-, 0.5-, 2-, and 10-fold and performed imputation using data simulated with strategy 1 at a dropout rate of 83%. We found that the median percentage change in the imputed data before and after changing a parameter is less than 5% for both α and β and less than 15% for γ (Additional file 2: Figure S33), suggesting that SCRABBLE is very robust with regard to parameter setting. The sets of SCRABBLE parameters used in this study are provided in Additional file 3: Table S2. We also benchmarked the running time of SCRABBLE. The higher imputation accuracy of SCRABBLE comes at the price of a slower running time. For datasets containing fewer than 2000 cells, SCRABBLE has a speed better than or comparable to that of VIPER (Additional file 2: Figure S34). As the dataset size exceeds 5000 cells, SCRABBLE is twice as slow as VIPER, mostly due to the computationally expensive process of iterative singular value decomposition.
SCRABBLE addresses several deficiencies of existing methods. First, several methods impute dropout events by using cell-cell distance, as quantified by either Euclidean distance or kernel distance. Such distance measures may not reflect the true relationship among cells. SCRABBLE relies on the framework of matrix regularization which does not use cell-cell distance measure. Second, SCRABBLE borrows information from bulk RNA-seq data to impute dropout data in order to reduce unwanted bias during imputation. Finally, since we transform the mathematical model of SCRABBLE to a convex optimization problem, the existence of the optimal solution is guaranteed mathematically. Our comprehensive analysis using both simulated and real experimental data suggests that SCRABBLE achieves significant improvement in terms of recovering dropout events and preserving cell-cell and gene-gene relationships in the samples. As an example of SCRABBLE's utility to facilitate downstream analysis, we show that using SCRABBLE-imputed data leads to a better clustering quality and helps identify different cell types in complex tissues.
One caveat about our method is the use of average values of bulk RNA-seq data. It may reduce the ability of the method to capture biological heterogeneity in the data. However, we believe the advantage of using bulk data outweighs the disadvantage. Additionally, the other two terms of our model, projection and low rank, enable SCRABBLE to detect heterogeneity and covariation.
As other types of single-cell omics data become more abundant, such as single-cell DNA methylation and ATAC data, our method provides a general framework for imputing and integrating these data for new discoveries.
Here, we describe the SCRABBLE algorithm and software package. SCRABBLE imputes single-cell RNA-seq data by using bulk RNA-seq data both as a constraint and as prior information. We show that leveraging information in bulk RNA-seq data significantly improves the quality of imputed data. With SCRABBLE, existing or newly generated bulk RNA-seq data can be used to increase the utility of single-cell RNA-seq data.
The mathematical model of SCRABBLE
The input to SCRABBLE includes the scRNA-seq and bulk RNA-seq data on consistent cells/tissues. A matrix, X0, represents expression values from scRNA-seq data with columns representing m genes and rows representing n cells. A vector, D, represents the average expression levels of all genes in the bulk RNA-seq data across N samples.
The output matrix \( \hat{X} \) of SCRABBLE is the imputed matrix with the same dimensions as the input matrix X0. The algorithm is based on the following mathematical model:
$$ \hat{X}=\underset{X\ge 0}{\operatorname{argmin}}\left(\frac{1}{2}\left\Vert P_{\Omega}(X)-X_0\right\Vert_F^2+\alpha\,\operatorname{Rank}(X)+\beta \left\Vert aX-D\right\Vert_2^2\right) $$
where \( P_{\Omega}(\cdot) \) is the projection operator that sets \( x_{ij} \) (the element at the ith row and jth column of the matrix X) to zero if (i, j) ∉ Ω and keeps its value unchanged otherwise. Ω is determined by X0: (i, j) ∈ Ω if \( x_{ij}^0\ne 0 \), where \( x_{ij}^0 \) is the element at the ith row and jth column of the matrix X0. Rank(X) is the rank of the matrix X. a is a 1 × n row vector in which each element is \( \frac{1}{n} \). α and β are the parameters of the mathematical model: α is the weight on the rank of the imputed data matrix, and a large α results in reduced heterogeneity across the cells; β is the weight on the agreement between the aggregated scRNA-seq data and the bulk RNA-seq data, and it is proportional to α and to the size of the imputed data matrix.
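To make the roles of the three terms concrete, the following numpy sketch evaluates the objective of Eq. (1) for a candidate matrix X. It is for illustration only (during the actual optimization the rank term is replaced by its nuclear-norm relaxation, as described below), and the function name is hypothetical.

```python
import numpy as np

def scrabble_objective(X, X0, D, alpha, beta):
    """Value of the SCRABBLE objective in Eq. (1) for a candidate matrix X.

    X, X0: n cells x m genes matrices; D: length-m vector of bulk gene means.
    P_Omega keeps entries of X where X0 is nonzero and zeroes the rest.
    """
    omega = (X0 != 0)
    data_fit = 0.5 * np.linalg.norm(np.where(omega, X, 0.0) - X0, "fro") ** 2
    rank_term = alpha * np.linalg.matrix_rank(X)    # relaxed to the nuclear norm
                                                    # during optimization
    a = np.full((1, X.shape[0]), 1.0 / X.shape[0])  # row vector with entries 1/n
    bulk_term = beta * np.linalg.norm(a @ X - D) ** 2
    return data_fit + rank_term + bulk_term
```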
Iterative optimization of the objective function during imputation
Since the objective function in Eq. (1) is not convex due to the rank function, the relaxed form of the objective function is employed to compute the optimal solution as follows.
$$ \hat{X}=\underset{X\ge 0}{\operatorname{argmin}}\left(\frac{1}{2}\left\Vert P_{\Omega}(X)-X_0\right\Vert_F^2+\alpha \left\Vert X\right\Vert_{\ast}+\beta \left\Vert aX-D\right\Vert_2^2\right) $$
where \( \left\Vert \cdot \right\Vert_{\ast} \) is the nuclear norm, which is the convex envelope of the rank function. We use the following three steps to calculate \( \hat{X} \).
Step 1: Convert the original optimization problem into a convex optimization problem with a linear constraint by introducing the auxiliary variable Y.
$$ (\hat{X},\hat{Y})=\operatorname{argmin}\left(\frac{1}{2}\left\Vert P_{\Omega}(X)-X_0\right\Vert_F^2+\alpha \left\Vert Y\right\Vert_{\ast}+\beta \left\Vert aX-D\right\Vert_2^2+\chi_{X\ge 0}\right) $$
such that X − Y = 0.
where χX ≥ 0 is the characteristic function which takes the value of 0 if X ≥ 0 and ∞ otherwise.
Step 2: Convert the constrained optimization problem to the unconstrained optimization problem using the augmented Lagrangian method and solve the unconstrained optimization problem using the alternating direction method of multipliers (ADMM) [25].
$$ (\hat{X},\hat{Y})=\underset{X\ge 0}{\operatorname{argmin}}\left(\frac{1}{2}\left\Vert P_{\Omega}(X)-X_0\right\Vert_F^2+\alpha \left\Vert Y\right\Vert_{\ast}+\beta \left\Vert aX-D\right\Vert_2^2+\chi_{X\ge 0}+\left\langle \Lambda, X-Y\right\rangle_F+\frac{\gamma}{2}\left\Vert X-Y\right\Vert_F^2\right) $$
The ADMM iteration scheme can be written as follows:
$$ X^{k+1}=\operatorname{argmin}\left(\frac{1}{2}\left\Vert P_{\Omega}(X)-X_0\right\Vert_F^2+\beta \left\Vert aX-D\right\Vert_2^2+\chi_{X\ge 0}+\left\langle \Lambda^k, X-Y^k\right\rangle_F+\frac{\gamma}{2}\left\Vert X-Y^k\right\Vert_F^2\right) $$
$$ Y^{k+1}=\operatorname{argmin}\left(\alpha \left\Vert Y\right\Vert_{\ast}+\left\langle \Lambda^k, X^{k+1}-Y\right\rangle_F+\frac{\gamma}{2}\left\Vert X^{k+1}-Y\right\Vert_F^2\right) $$
$$ \Lambda^{k+1}=\Lambda^k+\gamma\left(X^{k+1}-Y^{k+1}\right) $$
We take the derivative with respect to X to obtain the iteration scheme of Eq. (5).
$$ \left({P}_{\varOmega }(X)-{X}_0\right)+\beta {a}^T\left( aX-D\right)+{\Lambda}^k+\gamma \left(X-{Y}^k\right)=0 $$
$$ {P}_{\varOmega }(X)+\left(\beta {a}^Ta+\gamma I\right)X=\gamma {Y}^k+\beta {a}^TD+{X}_0-{\Lambda}^k $$
Let βaTa + γI = W and βaTD + X0 = T
$$ {P}_{\varOmega }(X)+ WX=\gamma {Y}^k+T-{\Lambda}^k $$
Then, we rewrite Eq. (6) as:
$$ \begin{aligned} Y^{k+1} &= \operatorname{argmin}\; \alpha\left\Vert Y\right\Vert_{\ast}+\left\langle \Lambda^k, X^{k+1}-Y\right\rangle_F+\frac{\gamma}{2}\left\Vert X^{k+1}-Y\right\Vert_F^2 \\ &= \operatorname{argmin}\; \frac{\alpha}{\gamma}\left\Vert Y\right\Vert_{\ast}+\left\langle \frac{\Lambda^k}{\gamma}, X^{k+1}-Y\right\rangle_F+\frac{1}{2}\left\Vert X^{k+1}-Y\right\Vert_F^2+\frac{1}{2}\left\Vert \frac{\Lambda^k}{\gamma}\right\Vert_F^2 \\ &= \operatorname{argmin}\; \frac{\alpha}{\gamma}\left\Vert Y\right\Vert_{\ast}+\frac{1}{2}\left\Vert \frac{\Lambda^k}{\gamma}+X^{k+1}-Y\right\Vert_F^2 \end{aligned} $$
Step 3: Based on Eqs. (7), (8), and (9), we could get the following iteration schemes.
$$ {x}_{ij}=\left\{\begin{array}{c}{\left(\frac{\gamma {y}_{ij}^k+{t}_{ij}-{\Lambda}_{ij}^k-\sum \limits_{i=1,j\ne i}^n{w}_{ij}{x}_{ij}}{w_{ii}}\right)}_{+}\kern2.75em \left(i,j\right)\notin \varOmega \\ {}{\left(\frac{\gamma {y}_{ij}^k+{t}_{ij}-{\Lambda}_{ij}^k-\sum \limits_{i=1,j\ne i}^n{w}_{ij}{x}_{ij}}{w_{ii}+1}\right)}_{+}\kern2.5em \left(i,j\right)\in \varOmega \end{array}\right. $$
$$ {\displaystyle \begin{array}{l}{Y}^{k+1}=\mathrm{SVT}\left(\frac{X^{k+1}+{\Lambda}^k}{\gamma },\frac{\alpha }{\gamma}\right)\\ {}{\Lambda}^{k+1}={\Lambda}^k+\gamma \left({X}^{k+1}-{Y}^{k+1}\right)\end{array}} $$
where Eqs. (10) and (11) are the iteration schemes for Eqs. (5) and (6), and SVT(·, ·) represents the singular value thresholding algorithm [26], defined for any matrix Z and τ > 0 as follows:
$$ \operatorname{SVT}\left(Z,\tau \right)=U\operatorname{diag}\left\{\left(\sigma_i-\tau \right)_{+}\right\}V^T $$
Here, \( Z=U\operatorname{diag}\left(\{\sigma_i\}_{1\le i\le r}\right)V^T \) is the singular value decomposition of Z, and the σi are the positive singular values. Λk, Xk, and Yk are the kth-iteration matrices of Λ, X, and Y, respectively. In addition, xij, \( y_{ij}^k \), \( \Lambda_{ij}^k \), wij, and tij are the elements at the ith row and jth column of the matrices X, Yk, Λk, W, and T, respectively. The convergence of ADMM for convex optimization problems has been extensively studied in the literature [25, 27]. Since the objective function in Eq. (2) is convex and non-negative, the problem has at least one global solution. This global structure of the objective function in Eq. (2) allows the above algorithm to converge more quickly than evolutionary algorithms [28]. The penalty parameter γ plays an important role in solving the objective function in Eq. (9) using the singular value thresholding algorithm, in combination with the parameter α. Overall, α, β, and γ are the three necessary parameters of SCRABBLE.
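A minimal numpy sketch of the SVT operator in Eq. (12), together with the Y and Λ updates written exactly as in Eq. (11), is given below. It illustrates the iteration scheme rather than reproducing the released R/MATLAB implementation, and all names are illustrative.

```python
import numpy as np

def svt(Z, tau):
    """Singular value thresholding: U diag((sigma_i - tau)_+) V^T."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    s_thresh = np.maximum(s - tau, 0.0)      # soft-threshold the singular values
    return (U * s_thresh) @ Vt

def y_lambda_update(X_next, Lam, gamma, alpha):
    """Y and Lambda updates following Eq. (11), given the X-update result X_next."""
    Y_next = svt((X_next + Lam) / gamma, alpha / gamma)
    Lam_next = Lam + gamma * (X_next - Y_next)
    return Y_next, Lam_next
```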
Generation of simulated data
We simulated scRNA-seq data consisting of three cell types using the Bioconductor package Splatter (version 1.4.3) [9]. We used the splatSimulateGroup function to generate simulated data with 1000 cells and 800 genes. Three clusters were embedded in each simulated dataset. The size of each cluster was controlled by the parameter "group.prob", set to 0.2, 0.35, and 0.45. The parameter controlling the probability that a gene is differentially expressed in each group was set to 0.045. The location and scale parameters of the log-normal distribution used to generate random multiplication factors were set to 0.1 and 0.4, respectively. Dropout midpoints (parameter "dropout_mid" in Splatter) were used to control the dropout rates in the simulated data. For instance, dropout midpoints of 4, 5, and 5.5 correspond to dropout rates of 71%, 83%, and 87%, respectively. The corresponding bulk RNA-seq data were the mean values of genes in the true scRNA-seq data. The dropout scRNA-seq and bulk RNA-seq data matrices are the inputs to the imputation methods. To assess the performance stability of the methods, we generated 100 datasets for each dropout midpoint.
Generation of simulated data using bulk RNA-seq data
We used the bulk RNA-seq dataset of mouse hair follicles from [10]. In total, the dataset contains 20 different combinations of anatomic sites and developmental time points. We used the following procedure to generate the simulated datasets (Fig. 3a): (1) we randomly selected 8 out of the 20 conditions; (2) for each condition, we generated 100 resampled datasets, and the means and standard deviations of genes were calculated for each condition based on these 100 resampled datasets; (3) 100 new datasets were generated based on the mean and standard deviation of each gene; (4) to reduce the computational cost, we randomly selected 5000 of the 20,721 genes in the above data matrices, so that the final data matrix represents 800 cells and 5000 genes; and (5) we made the dropout rate of each gene in each cell follow an exponential function \( e^{-\lambda \cdot \mathrm{mean\_expression}^2} \) [4, 11], where λ determines the dropout rate of the scRNA-seq data. Zero values are introduced into the simulated data for each gene in each cell based on the Bernoulli distribution defined by the corresponding dropout rate. The corresponding bulk RNA-seq data are the mean values of genes in the scRNA-seq data without dropouts. To assess the performance stability of the methods, we generated 100 datasets for each dropout rate.
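The dropout step (5) can be sketched as follows in Python, under the assumption that mean_expression refers to the per-gene mean of the true counts; the function name, matrix orientation (genes x cells), and toy interface are illustrative.

```python
import numpy as np

def add_dropout(counts, lam, seed=None):
    """Introduce dropout zeros following the exponential model described above.

    counts: genes x cells matrix of true expression values.
    lam: controls the per-gene dropout probability exp(-lam * mean_expression^2).
    Returns (observed scRNA-seq matrix with dropouts, matched bulk profile).
    """
    rng = np.random.default_rng(seed)
    mean_expr = counts.mean(axis=1, keepdims=True)      # per-gene mean expression
    p_drop = np.exp(-lam * mean_expr ** 2)              # dropout probability per gene
    dropped = rng.random(counts.shape) < p_drop         # Bernoulli draw per gene/cell
    observed = np.where(dropped, 0.0, counts)
    bulk = counts.mean(axis=1)                          # bulk data = gene means without dropout
    return observed, bulk
```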
Running of other imputation methods
We benchmarked the DrImpute, scImpute, MAGIC, and VIPER packages in this manuscript. For DrImpute (version 1.0), we used the following default parameter settings described in the Quick Start section of the user manual: ks = 10:15, dists = c("spearman", "pearson"), fast = FALSE, dropout.probability.threshold = 0, n.dropout = 10000, n.background = 10000, and mc.cores = 1. For scImpute (version 0.0.9), we used the following default parameter settings described in the Quick Start section of the user manual: labeled = FALSE, drop_thre = 0.5, and Kcluster = 1 in all analyses. For MAGIC (version 1.3.0, implemented in Python), we used the following default parameter settings: k = 10, a = 15, t = "auto", n_pca = 100, knn_dist = "euclidean", n_jobs = 1, and random_state = None. For VIPER (version 0.1.1), we used the following parameter settings: num = 5000, percentage.cutoff = 0.1, minbool = FALSE, and alpha = 1.
Van den Berge K, Perraudeau F, Soneson C, Love MI, Risso D, Vert JP, Robinson MD, Dudoit S, Clement L. Observation weights unlock bulk RNA-seq tools for zero inflation and single-cell applications. Genome Biol. 2018;19:24.
Lun AT, Bach K, Marioni JC. Pooling across cells to normalize single-cell RNA sequencing data with many zero counts. Genome Biol. 2016;17:75.
van Dijk D, Sharma R, Nainys J, Yim K, Kathail P, Carr AJ, Burdziak C, Moon KR, Chaffer CL, Pattabiraman D, et al. Recovering gene interactions from single-cell data using data diffusion. Cell. 2018;174:716–729 e727.
Li WV, Li JJ. An accurate and robust imputation method scImpute for single-cell RNA-seq data. Nat Commun. 2018;9:997.
Huang M, Wang J, Torre E, Dueck H, Shaffer S, Bonasio R, Murray JI, Raj A, Li M, Zhang NR. SAVER: gene expression recovery for single-cell RNA sequencing. Nat Methods. 2018;15:539–42.
Gong W, Kwak IY, Pota P, Koyano-Nakagawa N, Garry DJ. DrImpute: imputing dropout events in single cell RNA sequencing data. BMC Bioinformatics. 2018;19:220.
Chen M, Zhou X. VIPER: variability-preserving imputation for accurate gene expression recovery in single-cell RNA sequencing studies. Genome Biol. 2018;19:196.
Bertsekas D, Nedic A, Ozdaglar A. Convex analysis and optimization: Athena Scientific; 2003.
Zappia L, Phipson B, Oshlack A. Splatter: simulation of single-cell RNA sequencing data. Genome Biol. 2017;18:174.
Wang Q, Oh JW, Lee HL, Dhar A, Peng T, Ramos R, Guerrero-Juarez CF, Wang X, Zhao R, Cao X, et al. A multi-scale model for hair follicles reveals heterogeneous domains driving rapid spatiotemporal hair growth patterning. Elife. 2017;6:e22772.
Pierson E, Yau C. ZIFA: dimensionality reduction for zero-inflated single-cell gene expression analysis. Genome Biol. 2015;16:241.
Ziegenhain C, Vieth B, Parekh S, Reinius B, Guillaumet-Adkins A, Smets M, Leonhardt H, Heyn H, Hellmann I, Enard W. Comparative analysis of single-cell RNA sequencing methods. Mol Cell. 2017;65:631–643 e634.
Deaton AM, Webb S, Kerr AR, Illingworth RS, Guy J, Andrews R, Bird A. Cell type-specific DNA methylation at intragenic CpG islands in the immune system. Genome Res. 2011;21:1074–86.
Macosko EZ, Basu A, Satija R, Nemesh J, Shekhar K, Goldman M, Tirosh I, Bialas AR, Kamitaki N, Martersteck EM, et al. Highly parallel genome-wide expression profiling of individual cells using nanoliter droplets. Cell. 2015;161:1202–14.
Soumillon M, Cacchiarelli D, Semrau S, van Oudenaarden A, Mikkelsen TS. Characterization of directed differentiation by high-throughput single-cell RNA-seq. bioRxiv. 2014;1:003236.
Semrau S, Goldmann JE, Soumillon M, Mikkelsen TS, Jaenisch R, van Oudenaarden A. Dynamics of lineage commitment revealed by single-cell transcriptomics of differentiating embryonic stem cells. Nat Commun. 2017;8:1096.
Singer ZS, Yong J, Tischler J, Hackett JA, Altinok A, Surani MA, Cai L, Elowitz MB. Dynamic heterogeneity and DNA methylation in embryonic stem cells. Mol Cell. 2014;55:319–31.
Chu LF, Leng N, Zhang J, Hou Z, Mamott D, Vereide DT, Choi J, Kendziorski C, Stewart R, Thomson JA. Single-cell RNA-seq reveals novel regulators of human embryonic stem cell differentiation to definitive endoderm. Genome Biol. 2016;17:173.
Kramer A, Green J, Pollard J Jr, Tugendreich S. Causal analysis approaches in ingenuity pathway analysis. Bioinformatics. 2014;30:523–30.
Kotera M, Hirakawa M, Tokimatsu T, Goto S, Kanehisa M. The KEGG databases and tools facilitating omics analysis: latest developments involving human diseases and pharmaceuticals. Methods Mol Biol. 2012;802:19–39.
Fabregat A, Sidiropoulos K, Garapati P, Gillespie M, Hausmann K, Haw R, Jassal B, Jupe S, Korninger F, McKay S, et al. The Reactome pathway Knowledgebase. Nucleic Acids Res. 2016;44:D481–7.
Han X, Wang R, Zhou Y, Fei L, Sun H, Lai S, Saadatpour A, Zhou Z, Chen H, Ye F, et al. Mapping the mouse cell atlas by Microwell-Seq. Cell. 2018;173:1307.
Shen Y, Yue F, McCleary DF, Ye Z, Edsall L, Kuan S, Wagner U, Dixon J, Lee L, Lobanenkov VV, Ren B. A map of the cis-regulatory sequences in the mouse genome. Nature. 2012;488:116–20.
Boyd SPN, Chu E, Peleato B, Eckstein J. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations Trends Machine Learn. 2011;3:1–122.
Cai J, Candes E, Shen Z. A singular value thresholding algorithm for matrix completion. SIAM J Optim. 2010;20:1956–82.
Deng W, Yin W. On the global and linear convergence of the generalized alternating direction method of multipliers. J Sci Comput. 2016;66:889–916.
Salomon R. Evolutionary algorithms and gradient search: similarities and differences. IEEE Trans Evol Comput. 1998;2:10.
Peng T, Zhu Q, Yin P, Tan K. SCRABBLE: single-cell RNA-seq imputation constrained by bulk RNA-seq data. Source Code GitHub Repository. 2019. https://github.com/tanlabcode/SCRABBLE.
Peng T, Zhu Q, Yin P, Tan K. SCRABBLE: single-cell RNA-seq imputation constrained by bulk RNA-seq data. Source Code Zenodo Repository. 2019. https://doi.org/10.5281/zenodo.2585902.
Peng T, Zhu Q, Yin P, Tan K. SCRABBLE: single-cell RNA-seq imputation constrained by bulk RNA-seq data. Analysis Code GitHub Repository. 2019. https://github.com/tanlabcode/SCRABBLE_PAPER.
Peng T, Zhu Q, Yin P, Tan K. SCRABBLE: single-cell RNA-seq imputation constrained by bulk RNA-seq data. Analysis Code Zenodo Repository. 2019. https://doi.org/10.5281/zenodo.2585885.
We thank the Research Information Services at the Children's Hospital of Philadelphia for providing computing support.
This work was supported by the National Institutes of Health of the USA grants GM104369, GM108716, HG006130, HD089245, and CA233285 (to KT).
SCRABBLE is implemented using both R and MATLAB languages. The software packages are freely available under the MIT license. Source code has been deposited at the GitHub repository (https://github.com/tanlabcode/SCRABBLE) [29] and Zenodo with the access code DOI: https://doi.org/10.5281/zenodo.2585902 [30].
The datasets analyzed in this study are included in this published article and Additional file 3: Table S1. The analysis code used to analyze the datasets is available from the GitHub repository (https://github.com/tanlabcode/SCRABBLE_PAPER) [31] and Zenodo with the access code DOI: https://doi.org/10.5281/zenodo.2585885 [32].
Division of Oncology and Center for Childhood Cancer Research, Children's Hospital of Philadelphia, Philadelphia, PA, 19104, USA
Tao Peng
& Kai Tan
Graduate Group in Genomics and Computational Biology, University of Pennsylvania, Philadelphia, PA, 19104, USA
Qin Zhu
Department of Mathematics, University of California, Los Angeles, CA, 90095, USA
Penghang Yin
Department of Biomedical and Health Informatics, Children's Hospital of Philadelphia, Philadelphia, PA, 19104, USA
Department of Pediatrics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104, USA
Department of Cell and Developmental Biology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104, USA
Department of Genetics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104, USA
TP and KT conceived and designed the study. TP designed and implemented the SCRABBLE algorithm with the help of QZ and PY. QZ and PY provided the additional analytical tools. TP and KT performed the data analysis. KT supervised the overall study. TP and KT wrote the paper. All authors read and approved the final manuscript.
Correspondence to Kai Tan.
Figures S1-S6. Supplementary figures. (PDF 17836 kb)
Figures S7-S34. Supplementary figures. (DOCX 9400 kb)
Tables S1 and S2. Supplementary tables. (PDF 41 kb)
Single-cell RNA-seq
Matrix regularization | CommonCrawl |
Dielectric super-absorbing metasurfaces via PT symmetry breaking
Jianbo Yu,1 Binze Ma,1 Ao Ouyang,1 Pintu Ghosh,1 Hao Luo,1 Arnab Pattanayak,1 Sandeep Kaur,1 Min Qiu,2,3 Pavel Belov,4 and Qiang Li1,*
1State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou 310024, China
2Key Laboratory of 3D Micro/Nano Fabrication and Characterization of Zhejiang Province, School of Engineering, Westlake University, Hangzhou 310024, China
3Institute of Advanced Technology, Westlake Institute for Advanced Study, Hangzhou 310024, China
4Department of Physics and Engineering, ITMO University, Russia
*Corresponding author: [email protected]
Jianbo Yu, Binze Ma, Ao Ouyang, Pintu Ghosh, Hao Luo, Arnab Pattanayak, Sandeep Kaur, Min Qiu, Pavel Belov, and Qiang Li, "Dielectric super-absorbing metasurfaces via PT symmetry breaking," Optica 8, 1290-1295 (2021)
Absorption spectroscopy
Spectral properties
Thermal emission
Original Manuscript: May 6, 2021
Manuscript Accepted: September 11, 2021
Published: October 1, 2021
Dielectric super-absorbing (>50%) metasurfaces, born of the need to break the 50% absorption limit of an ultrathin film, offer an efficient way to manipulate light. However, in previous works, super absorption in dielectric systems was predominantly realized by making two modes reach the degenerate critical coupling condition, which restricted the two modes to be orthogonal. Here, we demonstrate that in nonorthogonal-mode systems, which represent a broader range of metasurfaces, super absorption can be achieved by breaking parity-time (PT) symmetry. As a proof of concept, super absorption (100% in simulation and 71% in experiment) at near-infrared frequencies is achieved in a Si-Ge-Si metasurface with two nonorthogonal modes. Engineering PT symmetry enriches the field of non-Hermitian flat photonics, opening up new possibilities in optical sensing, thermal emission, photovoltaic, and photodetecting devices.
© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
Dielectric super-absorbing (>50%) metasurfaces have attracted growing attention because they break the 50% absorption limit of a film of subwavelength thickness [1] and exhibit advantages over metallic metasurfaces, such as low Ohmic dissipation, out-of-band transparency, and CMOS compatibility [2–9]. Losses in these metasurfaces enable them to exchange energy with the surrounding environment and to exhibit complex eigenvalues, making them non-Hermitian [10–13]. Originating from quantum mechanics [14], non-Hermitian physics has been used to study the characteristics of optical systems, which has led to the emergence of numerous novel phenomena and applications, such as loss-induced optical non-reciprocity [15], loss-induced optical transparency [16], selective thermal emitters [17,18], unidirectional invisibility [19,20], and exceptional-point (EP)-based sensing [21–23]. Parity-time (PT)-symmetric systems are a particular family of non-Hermitian systems that are invariant under the combined action of the $P$ ($i \to i$, $\hat x \to -\hat x$, $\hat p \to -\hat p$) and $T$ ($i \to -i$, $\hat x \to \hat x$, $\hat p \to -\hat p$) operators. The eigenfrequencies of such systems behave distinctly in the PT-symmetric and PT-symmetry-broken regimes, and this characteristic opens up new possibilities for engineering the spectral properties of photonic systems [24–27].
In dielectric non-Hermitian metasurfaces, previous efforts to realize super absorption relied on making two spectrally overlapping modes reach the degenerate critical coupling condition, where the radiative and non-radiative decay rates are equal for each mode [28–41]. This method restricts the two modes to be orthogonal, i.e., the coupling between them must be negligible compared to their losses. Consequently, the eigenfrequencies degenerate at the diabolic point [DP, see the green dashed line in Fig. 1(b)], where the eigenvectors are orthogonal (see Supplement 1). However, when the mode coupling is non-negligible, the strong interaction between different resonances can cause mode splitting, and the two-orthogonal-mode model is invalidated under this condition.
Fig. 1. (a) Diagram of a dual-port photonic system containing two nonorthogonal modes ${\rm M}_1$ and ${\rm M}_2$. (b) Dependence of the lineshape and Q-factors of absorption spectra on decay rates and the coupling coefficient. (c) Top: calculated absorption in the parameter space $({\gamma _{1,{\rm NR}}}/{\gamma _{1,{\rm R}}},{\gamma _{2,{\rm NR}}}/{\gamma _{2,{\rm R}}},\kappa /\sqrt {{\gamma _{1,{\rm R}}}{\gamma _{2,{\rm R}}}})$, with ${f_1} = {f_2}$. The green dots ${P_{1,2,3}}$ denote the unity absorption condition, and the corresponding trajectory is indicated by the green dashed line. Bottom: the value of ${\gamma _{1,{\rm NR}}}/{\gamma _{1,{\rm R}}}$ (the red solid line) and ${\gamma _{2,{\rm NR}}}/{\gamma _{2,{\rm R}}}$ (the blue solid line) under the unity absorption condition.
In this work, we demonstrate super absorption in dielectric metasurfaces by breaking PT symmetry. Our method is based on a two-nonorthogonal-mode model with EP degeneracies (where both eigenvectors and complex eigenfrequencies coalesce), which can represent a broader range of optical systems. As a proof of concept, super absorption at near-infrared (NIR) frequencies is achieved in a Si-Ge-Si metasurface with two nonorthogonal quasi-bound-states-in-the-continuum (QBIC) modes. In this system, PT symmetry breaking is realized by engineering the loss difference between the two modes, and it successfully suppresses the mode splitting. Our work provides a route to engineering light trapping in non-Hermitian flat photonics and thus has broad implications for optical sensing, photodetecting, thermal emission manipulation, and photovoltaic devices.
2. THEORETICAL MODEL
We start by considering a dual-port photonic system supporting two nonorthogonal modes ${\rm M}_1$ and ${\rm M}_2$ [Fig. 1(a)], whose resonant frequencies before coupling are ${f_1}$ and ${f_2}$, respectively. The two modes formed within a single resonator are connected by the near-field coupling coefficient $\kappa$. The radiative decay rate corresponding to mode $j$ ($j = 1,2$) can be expressed as ${\gamma _{j,{\rm R}}} = {\gamma _{je}} + \gamma _{je}^\prime$, where ${\gamma _{je}}$ and $\gamma _{je}^\prime$ are radiative decay rates of mode $j$ to Port 1 and Port 2, respectively. The total decay rate of mode $j$ can be given by ${\gamma _j} = {\gamma _{j,{\rm NR}}} + {\gamma _{j,{\rm R}}}$, where ${\gamma _{j,{\rm NR}}}$ corresponds to the non-radiative decay rate of mode $j $. The amplitude of incoming (outgoing) wave from Port 1 is expressed as ${S_{1 +}}$ (${S_{1 -}}$), whereas ${S_{2 -}}$ represents the amplitude of the outgoing wave from Port 2. This model can describe many dielectric systems such as dielectric meta-atoms on a transparent substrate illuminated from one side by incident light.
The far-field coupling induced by the radiation in two channels is mainly determined by the symmetric properties of the two modes. Here, we suppose that one mode decays symmetrically and the other decays anti-symmetrically into two ports, which leads to a negligible far-field coupling. Otherwise, if the two modes have the same symmetric properties, the total absorption cannot exceed 50% (see Supplement 1, Section 2). The effective Hamiltonian of the two-nonorthogonal-mode system in Fig. 1(a) is given by [42–44]
(1)$$H = \begin{bmatrix} f_1 - i\gamma_1 & \kappa \\ \kappa & f_2 - i\gamma_2 \end{bmatrix}.$$
When the two resonant frequencies before coupling are the same (${f_1} = {f_2} = {f_0}$), the eigenfrequencies can be expressed as
(2)$${f_{\rm eigen}} = {f_0} - {i}\frac{{{\gamma _1} + {\gamma _2}}}{2} \pm \frac{1}{2}\sqrt {4{\kappa ^2} - {\Delta}{\gamma ^2}} .$$
The behavior of ${f_{\rm eigen}}$ is controlled by the coupling coefficient $\kappa$ and the loss difference ${\Delta}\gamma = | {{\gamma _1} - {\gamma _2}} |$. There are three cases: (1) $\kappa = {\Delta}\gamma /2$, where both ${\rm Re}({{f_{\rm eigen}}})$ and ${\rm Im}({{f_{\rm eigen}}})$ of the two modes, along with their associated eigenvectors, coalesce, indicating the emergence of an EP [45]; (2) $\kappa \gt {\Delta}\gamma /2$, where the system is in the passive PT-symmetric phase and only ${\rm Im}({{f_{\rm eigen}}})$ of the two modes coincide; and (3) $\kappa \lt {\Delta}\gamma /2$, where the PT-symmetric phase is broken and only ${\rm Re}({{f_{\rm eigen}}})$ of the two modes coincide. The coupling coefficient at the transition point of the PT-symmetric phase is ${\kappa _{\rm EP}} = {\Delta}\gamma /2$ [see the vertical gray dashed line in Fig. 1(b)].
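The three regimes can be reproduced numerically from Eq. (1) with a few lines of Python: the sketch below diagonalizes the 2×2 Hamiltonian for ${f_1} = {f_2} = {f_0}$ and classifies the phase from $\kappa$ and ${\Delta}\gamma$. The numerical values are arbitrary illustrations, not the parameters of the fabricated metasurface.

```python
import numpy as np

def eigenfrequencies(f0, gamma1, gamma2, kappa):
    """Complex eigenfrequencies of the 2x2 Hamiltonian in Eq. (1) with f1 = f2 = f0."""
    H = np.array([[f0 - 1j * gamma1, kappa],
                  [kappa, f0 - 1j * gamma2]])
    return np.linalg.eigvals(H)

def pt_phase(gamma1, gamma2, kappa, tol=1e-12):
    """Classify the phase from kappa and the loss difference, following Eq. (2)."""
    kappa_ep = abs(gamma1 - gamma2) / 2
    if abs(kappa - kappa_ep) < tol:
        return "exceptional point"
    return "PT-symmetric" if kappa > kappa_ep else "PT-symmetry broken"

# example: fixed loss difference (kappa_EP = 0.5), sweep kappa through the EP
for kappa in (1.0, 0.5, 0.2):
    print(kappa, pt_phase(1.0, 2.0, kappa), eigenfrequencies(200.0, 1.0, 2.0, kappa))
```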
Lineshape and Linewidth Control—The eigenfrequencies are responsible for the lineshape features of the absorption spectra. When $\kappa \gt {\kappa _{\rm EP}}$, the two modes exchange energy strongly with each other, leading to mode splitting: ${\rm Re}({{f_{\rm eigen}}})$ of the two modes differ from each other, and two peaks consequently appear in the absorption spectrum [see the PT symmetry regime in Fig. 1(b)]. When $\kappa \lt {\kappa _{\rm EP}}$, ${\rm Re}({{f_{\rm eigen}}})$ of the two modes coincide and so do the two absorption peaks [see the PT symmetry breaking regime in Fig. 1(b)]. Therefore, to obtain single-peak absorption spectra, we should engineer either $\kappa$ or ${\Delta}\gamma$ to make $\kappa \lt {\kappa _{\rm EP}}$, so that the coupling-induced mode splitting is suppressed. In the PT symmetry breaking regime, the quality factors (Q-factors) are determined by the total loss (${\gamma _1} + {\gamma _2}$) according to the relationship ${Q} \propto {\omega}/({{\gamma _1} + {\gamma _2}})$, indicating that the total loss needs to be suppressed to obtain narrowband absorption spectra [see Fig. 1(b)].
Fig. 2. (a) Schematic of the dielectric metasurface. (b) The electric and magnetic field distribution for the E-QBIC and M-QBIC, respectively. The color shows the field distribution at the plane $z = {h_1}/2$ (left) and $x = {P_x}/4$ (right). The arrows show the direction of the fields. (c) Calculated ${\rm Re}({{f_{\rm eigen}}})$ of the two modes corresponding to varying Ge thickness ${h_2}$. Insets show the corresponding electric field distribution of the eigenmodes. (d) Calculated ${\rm Im}({{f_{\rm eigen}}})$ of the two modes versus Ge thickness ${h_2}$. In the calculation, the following parameters are used: ${h_1} = 0.330\;{\unicode{x00B5}{\rm m}}$, $\theta = 9^\circ$, ${a} = 0.075\;{\unicode{x00B5}{\rm m}}$, $b = 0.225\;{\unicode{x00B5}{\rm m}}$, ${P_y} = 0.740\;{\unicode{x00B5}{\rm m}}$, and ${P_x}$ is tuned from 0.712 µm to 0.733 µm to make either ${\rm Re}({{f_{\rm eigen}}})$ or ${\rm Im}({{f_{\rm eigen}}})$ of the two modes coincide at different ${h_2}$.
Amplitude Control—In PT symmetry breaking regime, the amplitudes of absorption spectra ($ A $) can be optimized to be 100% by tuning the radiative and non-radiative decay rates. The calculated absorption in the parameter space $({\gamma _{1,{\rm NR}}}/{\gamma _{1,{\rm R}}}, {\gamma _{2,{\rm NR}}}/{\gamma _{2,{\rm R}}},\kappa /\sqrt {{\gamma _{1,{\rm R}}}{\gamma _{2,{\rm R}}}})$ is shown in Fig. 1(c), and unity absorption ($A = {100}\%$) is achieved when both Eqs. (3) and (4) are satisfied.
(3)$$\frac{{{\gamma _{1,{\rm NR}}}}}{{{\gamma _{1,{\rm R}}}}} = 1 - \frac{\kappa}{{\sqrt {{\gamma _{1,{\rm R}}}{\gamma _{2,{\rm R}}}}}},$$
(4)$$\frac{{{\gamma _{2,{\rm NR}}}}}{{{\gamma _{2,{\rm R}}}}} = 1 + \frac{\kappa}{{\sqrt {{\gamma _{1,{\rm R}}}{\gamma _{2,{\rm R}}}}}}.$$
Here, we suppose ${\rm M}_1$ decays symmetrically, and ${\rm M}_2$ decays anti-symmetrically. The other case is that ${\rm M}_1$ decays anti-symmetrically, and ${\rm M}_2$ decays symmetrically, which requires $\frac{{{\gamma _{1,{\rm NR}}}}}{{{\gamma _{1,{\rm R}}}}} = 1 + \frac{\kappa}{{\sqrt {{\gamma _{1,{\rm R}}}{\gamma _{2,{\rm R}}}}}}$, and $\frac{{{\gamma _{2,{\rm NR}}}}}{{{\gamma _{2,{\rm R}}}}} = 1 - \frac{\kappa}{{\sqrt {{\gamma _{1,{\rm R}}}{\gamma _{2,{\rm R}}}}}}$ to realize unity absorption (more details are provided in Supplement 1, Section 3). At $\kappa = 0$, the two modes are orthogonal and unity absorption necessitates the radiative decay rate to be equal to the non-radiative decay rate for each mode, which is exactly the degenerate critical coupling condition studied before, see ${P_1}$ in Fig. 1(c). When the orthogonality of the two modes is perturbed by extra coupling ($\kappa \ne 0$), unity absorption can still be achieved as long as the radiative and non-radiative decay rates meet the condition described by Eqs. (3) and (4) [see ${P_2}$ and ${P_3}$ in Fig. 1(c)].
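As a quick numerical check of Eqs. (3) and (4), the short Python sketch below returns the non-radiative decay rates required for unity absorption given the radiative rates and the coupling coefficient, for the case where ${\rm M}_1$ decays symmetrically and ${\rm M}_2$ anti-symmetrically; the values and function name are illustrative.

```python
import numpy as np

def required_nonradiative_rates(gamma1_r, gamma2_r, kappa):
    """Non-radiative decay rates satisfying the unity-absorption conditions
    of Eqs. (3) and (4) (M1 symmetric, M2 anti-symmetric)."""
    ratio = kappa / np.sqrt(gamma1_r * gamma2_r)
    gamma1_nr = gamma1_r * (1.0 - ratio)
    gamma2_nr = gamma2_r * (1.0 + ratio)
    if gamma1_nr < 0:
        raise ValueError("kappa too large: Eq. (3) would require a negative loss")
    return gamma1_nr, gamma2_nr

# kappa = 0 recovers the degenerate critical coupling point P1
print(required_nonradiative_rates(1.0, 1.0, 0.0))   # -> (1.0, 1.0)
print(required_nonradiative_rates(1.0, 1.0, 0.3))   # -> (0.7, 1.3), cf. P2, P3
```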
To sum up, in order to achieve super absorption for a two-nonorthogonal-mode system, we need to engineer the following three parameters: (1) lineshape: decrease the coupling coefficient $\kappa$ or increase the loss difference ${\Delta}\gamma$ to make $\kappa \lt {\kappa _{\rm EP}}$ so that PT symmetry is broken, and thereby the mode splitting is prevented; (2) linewidth: engineer the total loss to control the linewidth, and (3) amplitude: choose two modes with different symmetric properties, and tune the radiative and non-radiative decay rates of the two modes to satisfy Eqs. (3) and (4) so that the absorption can reach 100%.
3. EXAMPLES OF SUPER ABSORPTION VIA PT SYMMETRY BREAKING
To validate the theoretical model, a dielectric metasurface with two nonorthogonal modes is established [Fig. 2(a)]. The Si elliptical cylinders with an orientation angle $\theta$ on a ${\rm SiO}_2$ substrate support magnetic and electric QBIC (M-QBIC and E-QBIC) modes [3,46]. The mode coupling, which is comparable to the low radiative loss of the QBIC modes, mainly arises from the substrate-induced interaction between the electric and magnetic dipole resonances [47] and leads to the destruction of orthogonality. A thin layer of lossy Ge is inserted in the middle of the lossless Si cylinders to introduce the non-radiative loss. In this Si-Ge-Si metasurface, the radiative decay rates ${\gamma _{{\rm E} - {\rm QBIC},{\rm R}}}$ and ${\gamma _{{\rm M} - {\rm QBIC},{\rm R}}}$ increase with the orientation angle $\theta$ and hardly change when the Ge thickness ${h_2}$ varies (see Supplement 1, Section 4). Within the cylinders, the electric field of the E-QBIC mainly concentrates in the lossy Ge layer, while that of the M-QBIC is mainly in the lossless Si layer [Fig. 2(b)]. Therefore, the non-radiative decay rate ${\gamma _{{\rm E} - {\rm QBIC},{\rm NR}}}$ increases faster than ${\gamma _{{\rm M} - {\rm QBIC},{\rm NR}}}$ as the Ge thickness ${h_2}$ increases, which causes the total loss difference ${\Delta}\gamma$ to increase with ${h_2}$.
Fig. 3. Calculated (a) ${\rm Re}({{f_{\rm eigen}}})$ and (b) ${\rm Im}({{f_{\rm eigen}}})$ of the two modes (blue and red) in the parameter space $({h_1}, {h_2})$. ${\rm Re}({{f_{\rm eigen}}})$, ${\rm Im}({{f_{\rm eigen}}})$, and absorption spectra at three typical ${h_2}$ values are shown: (c-e) ${h_2} = 0.010\;{\unicode{x00B5}{\rm m}}$, (f-h) ${h_2} = 0.038\;{\unicode{x00B5}{\rm m}}$, and (i-k) ${h_2} = 0.050\;{\unicode{x00B5}{\rm m}}$. The points marked with a rhombus (d), stars (f, g), and a circle (i) correspond to PT-symmetric phase, EP, and PT symmetry broken phase, respectively.
The PT-symmetric phase is tuned by changing the Ge thickness ${h_2}$ [Figs. 2(c) and 2(d)]. To guarantee that either ${\rm Re}({{f_{\rm eigen}}})$ or ${\rm Im}({{f_{\rm eigen}}})$ of the two modes coincide at different ${h_2}$, ${P_x}$ is varied from 0.712 µm to 0.733 µm in the calculation (see Table 1 in Supplement 1, Section 4). At ${h_2} \lt 0.038\;\unicode{x00B5}{\rm m}$, the total loss difference ${\Delta}\gamma$ is not large enough to compensate for the coupling, indicating that the system is in the PT-symmetric phase; the field distributions are distorted due to strong mode coupling. At ${h_2} = 0.038\;{\unicode{x00B5}{\rm m}}$, both ${\rm Re}({{f_{\rm eigen}}})$ and ${\rm Im}({{f_{\rm eigen}}})$ of the two modes coalesce at the EP. The electric field distributions of the two eigenmodes are the same, since the two corresponding eigenvectors are parallel at the EP. At ${h_2} \gt 0.038\;{\unicode{x00B5}{\rm m}}$, the system is in the PT-symmetry-broken phase. Under this condition, ${\rm Re}({{f_{\rm eigen}}})$ of the two modes coincide, and the weak coupling compared with the loss difference makes the two eigenmodes have distinct field distributions. In the simulation, eigenfrequencies are calculated using the complex refractive index of the lossy Ge, and the eigenmode distributions can be seen in the insets of Fig. 2(c) and Supplement 1, Section 5.
The dependence of the eigenfrequencies on both the Ge thickness ${h_2}$ and the cylinder thickness ${h_1}$ is calculated [Figs. 3(a) and 3(b)]. At ${h_2} = 0.010\;{\unicode{x00B5}{\rm m}}$, PT symmetry is not broken: the two modes have different ${\rm Re}({{f_{\rm eigen}}})$ and the same ${\rm Im}({{f_{\rm eigen}}})$ at the point marked with a rhombus in Fig. 3(d), which leads to an avoided crossing in the absorption spectra. Two peaks can be observed in the absorption spectra, whose amplitudes are less than 0.5 [Fig. 3(e)]. At ${h_2} = 0.038\;{\unicode{x00B5}{\rm m}}$, both ${\rm Re}({{f_{\rm eigen}}})$ and ${\rm Im}({{f_{\rm eigen}}})$ of the two modes coincide at the EP [stars in Figs. 3(f) and 3(g)]. The superposition of the two modes makes the absorption peak larger than 0.5 [Fig. 3(h)]. At ${h_2} = 0.050\;{\unicode{x00B5}{\rm m}}$, PT symmetry is broken and the two modes have the same ${\rm Re}({{f_{\rm eigen}}})$ and different ${\rm Im}({{f_{\rm eigen}}})$ at the point marked with a circle in Fig. 3(i). No mode splitting can be seen due to the coincidence of ${\rm Re}({{f_{\rm eigen}}})$; thus the absorption spectra corresponding to the two modes also show a crossing, and super absorption is obtained [Fig. 3(k)].
The super absorption resulting from breaking PT symmetry is experimentally validated by fabricating Si-Ge-Si metasurfaces with different periods ${P_x}$. Here, the Ge thickness ${h_2} = 0.040\;{\unicode{x00B5}{\rm m}}$ corresponds to the case in which the absorption can reach unity in simulation. The corresponding simulated absorption spectra and eigenfrequencies are plotted in Figs. 4(a) and 4(b), respectively. The PT symmetry breaking condition is marked with the gray dashed line. The measured absorption spectra of the fabricated Si-Ge-Si metasurfaces are provided in Fig. 4(c). At ${P_x} = 0.790\;{\unicode{x00B5}{\rm m}}$, the M-QBIC and E-QBIC modes are spectrally separated. As ${P_x}$ is gradually decreased, the two absorption peaks cross due to the breaking of PT symmetry. When ${P_x}$ is decreased further, the two peaks gradually separate again. At ${P_x} = 0.710\;{\unicode{x00B5}{\rm m}}$, super absorption (71%) with a Q-factor of $\sim\! {41}$ is achieved [Fig. 4(d)]. The deviation of the measured absorption spectra from the simulated results arises from fabrication imperfections, such as a reduced cylinder size after etching. More details regarding the fabrication process and spectral measurements are provided in Supplement 1, Section 6.
Fig. 4. (a) Simulated absorption spectra at different ${P_x}$ with Ge thickness ${h_2} = 0.040\;{\unicode{x00B5}{\rm m}}$. The other parameters are the same as those specified above. (b) Corresponding eigenfrequencies at different ${P_x}$, the solid lines denote the real part, and the dotted lines denote the imaginary part. The vertical gray dashed line denotes PT symmetry broken phase. (c) Measured absorption spectra at different ${P_x}$ with Ge thickness ${h_2} = 0.040\;{\unicode{x00B5}{\rm m}}$. (d) Reflection (blue), transmission (red), and absorption spectra (orange) at ${P_x} = 0.710\;{\unicode{x00B5}{\rm m}}$, which corresponds to the red dashed line in (c).
In conclusion, we demonstrate that super absorption in dielectric metasurfaces can be achieved by breaking PT symmetry. Utilizing two coupled modes, our work provides a novel method of breaking the 50% absorption limit and achieving super absorption in dielectric metasurfaces. In previous works, dielectric super absorption was predominantly realized by making two orthogonal modes reach the degenerate critical coupling condition. To guarantee the orthogonality, the losses of the two modes must be large enough to preclude the influence of mode coupling, which usually makes the Q-factors low ($\sim{10}$) [32–34,36]. Breaking PT symmetry provides an effective way to achieve super absorption in high-Q dielectric systems where the losses are comparable to the mode coupling. Moreover, different from most previous experimental works in which the PT-symmetric phase is broken by decreasing the coupling strength [21,48], the method of increasing the loss difference presented in this work enriches the field of PT-symmetric phase engineering. In addition, the high-Q and out-of-band-transparent properties make this dielectric absorber promising for applications in optical sensing and photodetecting devices. The working frequencies of these devices can be further extended to mid-infrared (MIR) and terahertz (THz) frequencies by replacing Si and Ge with other combinations of materials: for the MIR range, lossless Ge and lossy ${\rm Ge}_2{\rm Sb}_2{\rm Te}_5$ (GST) [49,50] or ${\rm VO}_2$ [51]; and for the THz range, lossless Si and lossy doped Si [52]. Finally, we believe that the study of engineering the PT-symmetric phase in low-loss dielectric systems can lead to non-Hermitian devices with novel properties, such as selective thermal emitters with high coherence and highly sensitive, spectrally tunable metasurfaces with multiple functionalities.
National Key Research and Development Program of China (2017YFA0205700); National Natural Science Foundation of China (61775194, 61950410608, 61975181).
Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
Supplemental document
See Supplement 1 for supporting content.
1. S. J. Kim, J. Park, M. Esfandyarpour, E. F. Pecora, P. G. Kik, and M. L. Brongersma, "Superabsorbing, artificial metal films constructed from semiconductor nanoantennas," Nano Lett. 16, 3801–3808 (2016). [CrossRef]
2. E. Mikheeva, J. B. Claude, M. Salomoni, J. Wenger, J. Lumeau, R. Abdeddaim, A. Ficorella, A. Gola, G. Paternoster, M. Paganoni, E. Auffray, P. Lecoq, and S. Enoch, "CMOS-compatible all-dielectric metalens for improving pixel photodetector arrays," APL Photon. 5, 116105 (2020). [CrossRef]
3. A. Tittl, A. Leitis, M. Liu, F. Yesilkoy, D. Y. Choi, D. N. Neshev, Y. S. Kivshar, and H. Altug, "Imaging-based molecular barcoding with pixelated dielectric metasurfaces," Science 360, 1105–1109 (2018). [CrossRef]
4. K. Vynck, D. Felbacq, E. Centeno, A. I. Cabuz, D. Cassagne, and B. Guizal, "All-dielectric rod-type metamaterials at optical frequencies," Phys. Rev. Lett. 102, 133901 (2009). [CrossRef]
5. A. B. Evlyukhin, S. M. Novikov, U. Zywietz, R. L. Eriksen, C. Reinhardt, S. I. Bozhevolnyi, and B. N. Chichkov, "Demonstration of magnetic dipole resonances of dielectric nanospheres in the visible region," Nano Lett. 12, 3749–3755 (2012). [CrossRef]
6. A. I. Kuznetsov, A. E. Miroshnichenko, M. L. Brongersma, Y. S. Kivshar, and B. Luk'yanchuk, "Optically resonant dielectric nanostructures," Science 354, aag2472 (2016). [CrossRef]
7. J. C. Ginn, I. Brener, D. W. Peters, J. R. Wendt, J. O. Stevens, P. F. Hines, L. I. Basilio, L. K. Warne, J. F. Ihlefeld, P. G. Clem, and M. B. Sinclair, "Realizing optical magnetism from dielectric metamaterials," Phys. Rev. Lett. 108, 097402 (2012). [CrossRef]
8. I. Staude, A. E. Miroshnichenko, M. Decker, N. T. Fofang, S. Liu, E. Gonzales, J. Dominguez, T. S. Luk, D. N. Neshev, I. Brener, and Y. S. Kivshar, "Tailoring directional scattering through magnetic and electric resonances in subwavelength silicon nanodisks," ACS Nano 7, 7824–7832 (2013). [CrossRef]
9. S. Jahani and Z. Jacob, "All-dielectric metamaterials," Nat. Nanotechnol. 11, 23–36 (2016). [CrossRef]
10. R. El-Ganainy, K. G. Makris, M. Khajavikhan, Z. H. Musslimani, S. Rotter, and D. N. Christodoulides, "Non-Hermitian physics and PT symmetry," Nat. Phys. 14, 11–19 (2018). [CrossRef]
11. C. M. Bender, "Making sense of non-Hermitian Hamiltonians," Rep. Prog. Phys. 70, 947–1018 (2007). [CrossRef]
12. K. Kawabata, K. Shiozaki, M. Ueda, and M. Sato, "Symmetry and topology in Non-Hermitian physics," Phys. Rev. X 9, 041015 (2019). [CrossRef]
13. M. A. Miri and A. Alu, "Exceptional points in optics and photonics," Science 363, eaar7709 (2019). [CrossRef]
14. M. Bender and S. Boettcher, "Real spectra in non-Hermitian Hamiltonians having PT symmetry," Phys. Rev. Lett. 80, 5243 (1998). [CrossRef]
15. X. Huang, C. Lu, C. Liang, H. Tao, and Y. C. Liu, "Loss-induced nonreciprocity," Light Sci. Appl. 10, 30 (2021). [CrossRef]
16. A. Guo, G. J. Salamo, D. Duchesne, R. Morandotti, M. Volatier-Ravat, V. Aimez, G. A. Siviloglou, and D. N. Christodoulides, "Observation of PT-symmetry breaking in complex optical potentials," Phys. Rev. Lett. 103, 093902 (2009). [CrossRef]
17. C. F. Doiron and G. V. Naik, "Non-Hermitian selective thermal emitters using metal-semiconductor hybrid resonators," Adv. Mater. 31, 1904154 (2019). [CrossRef]
18. X. Zhang, Z. Zhang, Q. Wang, S. Zhu, and H. Liu, "Controlling thermal emission by parity-symmetric Fano resonance of optical absorbers in metasurfaces," ACS Photon. 6, 2671–2676 (2019). [CrossRef]
19. Z. Lin, H. Ramezani, T. Eichelkraut, T. Kottos, H. Cao, and D. N. Christodoulides, "Unidirectional invisibility induced by PT-symmetric periodic structures," Phys. Rev. Lett. 106, 213901 (2011). [CrossRef]
20. L. Feng, Y. L. Xu, W. S. Fegadolli, M. H. Lu, J. E. Oliveira, V. R. Almeida, Y. F. Chen, and A. Scherer, "Experimental demonstration of a unidirectional reflectionless parity-time metamaterial at optical frequencies," Nat. Mater. 12, 108–113 (2013). [CrossRef]
21. J. H. Park, A. Ndao, W. Cai, L. Hsu, A. Kodigala, T. Lepetit, Y. H. Lo, and B. Kanté, "Symmetry-breaking-induced plasmonic exceptional points and nanoscale sensing," Nat. Phys. 16, 462–468 (2020). [CrossRef]
22. W. Chen, S. K. Ozdemir, G. Zhao, J. Wiersig, and L. Yang, "Exceptional points enhance sensing in an optical microcavity," Nature 548, 192–196 (2017). [CrossRef]
23. H. Hodaei, A. U. Hassan, S. Wittek, H. Garcia-Gracia, R. El-Ganainy, D. N. Christodoulides, and M. Khajavikhan, "Enhanced sensitivity at higher-order exceptional points," Nature 548, 187–191 (2017). [CrossRef]
24. L. Chang, X. Jiang, S. Hua, C. Yang, J. Wen, L. Jiang, G. Li, G. Wang, and M. Xiao, "Parity-time symmetry and variable optical isolation in active-passive-coupled microresonators," Nat. Photonics 8, 524–529 (2014). [CrossRef]
25. T. Kottos, "Broken symmetry makes light work," Nat. Phys. 6, 166–167 (2010). [CrossRef]
26. C. E. Rüter, K. G. Makris, R. El-Ganainy, D. N. Christodoulides, M. Segev, and D. Kip, "Observation of parity-time symmetry in optics," Nat. Phys. 6, 192–195 (2010). [CrossRef]
27. A. Tuniz, T. Wieduwilt, and M. A. Schmidt, "Tuning the effective PT phase of plasmonic eigenmodes," Phys. Rev. Lett. 123, 213903 (2019). [CrossRef]
The Parable of the Dishwasher
Much like free trade, technological unemployment is an issue where the consensus opinion among economists diverges quite sharply from that of the general population.
Enough people think that "robots taking our jobs" is something bad that I've seen a fair number of memes like this:
EVERY TIME you use the Self Checkout you are ELIMINATING JOBS!
But like almost all economists, I think that self-checkouts, robots, and automation in general are a pretty good thing. They do have a few downsides, chiefly in terms of forcing us to make transitions that are costly and painful; but in general I want more robots, not fewer.
To help turn you toward this view, I offer a parable.
Suppose we have a family, the (stereo)typical American family with a father, a mother, and two kids, a boy named Joe and a girl named Sue.
The kids do chores for their allowance, and split them as follows: Joe always does the dishes, and Sue always vacuums the carpet. They both spend about 1 hour per week and they both get paid $10 a week.
But one day, Dad decides to buy a dishwasher. This dramatically cuts down the time it takes Joe to do the dishes; where he used to spend 1 hour washing dishes, now he can load the dishwasher and get it done in 5 minutes.
Mom suggests they just sell back the dishwasher to get rid of the problem.
Dad says that Joe should now only be paid for the 5 minutes he works each week, so he would now be paid $0.83 per week. (He's not buying a lot of video games on that allowance.)
Joe protests that he gets the same amount of work done, so he should be paid the same $10 for doing it.
Sue says it would be unfair for her to have to work so much more than Joe, and has a different solution: They'll trade off the two sets of chores each week, and they should of course get paid the same amount of money for getting the same amount of work done—$10 per kid per week, for an average of 32.5 minutes of work each.
Which of those solutions sounds the most sensible to you?
Mom's solution is clearly the worst, right? It's the Luddite solution, the one that throws away technological progress and makes everything less efficient. Yet that is the solution being offered by people who say "Don't use the self-checkout machine!" Indeed, anyone who speaks of the virtues of "hard work" is really speaking Mom's language here; they should be talking about the virtues of getting things done. The purpose of washing dishes is to have clean dishes, not to "work hard". And likewise, when we construct bridges or make cars or write books or solve equations, our goal should be to get that thing done—not to fulfill some sense of moral obligation to prove our worthiness through hard work.
Joe's solution is what neoclassical economics says should happen—higher productivity should yield higher wages, so the same amount of production should yield the same pay. This seems like it could work, but empirically it rarely happens. There's also something vaguely unfair about it; if productivity increases in your industry but not in someone else's, you get to cut your work hours dramatically while they are stuck working just as hard as before.
Dad's "solution" is clearly terrible, and makes no sense at all. Yet this is what we actually tend to observe—capital owners appropriate all (or nearly all) the benefits of the new technology, and workers get displaced or get ever-smaller wages. (I talked about that in a recent post.)
It's Sue's solution that really seems to make the most sense, isn't it? When one type of work becomes more efficient, people should shift into different types of labor so that people can work fewer hours—and wages should rise enough that incomes remain the same. "Baumol's disease" is not a disease—it is the primary means by which capitalism raises human welfare. (That's why I prefer to use the term "Baumol Effect" instead.)
One problem with this in practice is that sometimes people can't switch into other industries. That's a little hard to imagine in this case, but let's stipulate that for some reason Joe can't do the vacuuming. Maybe he has some sort of injury that makes it painful to use the vacuum cleaner, but doesn't impair his ability to wash dishes. Or maybe he has a severe dust allergy, so bad that the dust thrown up by the vacuum cleaner sends him into fits of coughing.
In that case I think we're back to Joe's solution; he should get paid the same for getting the same amount of work done. I'm actually tempted to say that Sue should get paid more, to compensate her for the unfairness; but in the real world there is a pretty harsh budget constraint there, so we need to essentially pretend that Dad only has $20 per week to give out in allowances. A possible compromise would be to raise Sue up to $12 and cut Joe down to $8; Joe will probably still be better off than he was, because he has that extra 55 minutes of free time each week for which he only had to "pay" $2. This also makes the incentives work out better—Joe doesn't have a reason to malinger and exaggerate his dust allergy just to get out of doing the vacuuming, since he would actually get paid more if he were willing to do the vacuuming; but if his allergy really is that bad, he can still do okay otherwise. (There's a lesson here for the proper structure of Social Security Disability, methinks.)
But you know what really seems like the best solution? Buy a Roomba.
Buy a Roomba, make it Sue's job to spend 5 minutes a week keeping the Roomba working at vacuuming the carpet, and continue paying both kids $10 per week. Give them both 55 minutes more per week to hang out with their friends or play video games. Whether you think of this $10 as a "higher wage" for higher productivity or simply an allowance they get anyway—a basic income—ultimately doesn't matter all that much. The point is that everyone gets enough money and nobody has to work very much, because the robots do everything.
And now, hopefully you see why I think we need more robots, not fewer.
Of course, like any simple analogy, this isn't perfect; it may be difficult to reduce the hours in some jobs or move more people into them. There are a lot of additional frictions and complications that go into the real-world problem of achieving equitable labor markets. But I hope I've gotten across the basic idea that robots are not the problem, and could in fact be the solution–not just to our current labor market woes, but to the very problem of wage labor itself.
My ultimate goal is a world where "work" itself is fundamentally redefined—so that it always means the creative sense "This painting is some of my best work." and not the menial sense "Sweeping this floor is so much work!"; so that human beings do things because we want to do them, because they are worth doing, and not because some employer is holding our food and housing hostage if we don't.
But that will require our whole society to rethink a lot of our core assumptions about work, jobs, and economics in general. We're so invested in the idea that "hard work" is inherently virtuous that we forget that the purpose of an economy is to get things done. Work is not a benefit; work is a cost. Costs are to be reduced. Puritanical sexual norms have been extremely damaging to American society, but time will tell whether the Puritanical work ethic actually does more damage to our long-term future.
What can we do to make the world a better place?
There are an awful lot of big problems in the world: war, poverty, oppression, disease, terrorism, crime… I could go on for awhile, but I think you get the idea. Solving or even mitigating these huge global problems could improve or even save the lives of millions of people.
But precisely because these problems are so big, they can also make us feel powerless. What can one person, or even a hundred people, do against problems on this scale?
The answer is quite simple: Do your share.
No one person can solve any of these problems—not even someone like Bill Gates, though he for one at least can have a significant impact on poverty and disease because he is so spectacularly mind-bogglingly rich; the Gates Foundation has a huge impact because it has as much wealth as the annual budget of the NIH.
But all of us together can have an enormous impact. This post today is about helping you see just how cheap and easy it would be to end world hunger and cure poverty-related diseases, if we simply got enough people to contribute.
The Against Malaria Foundation releases annual reports for all their regular donors. I recently got a report that my donations personally account for 1/100,000 of their total assets. That's terrible. The global population is 7 billion people; in the First World alone it's over 1 billion. I am the 0.01%, at least when it comes to donations to the Against Malaria Foundation.
I've given them only $850. Their total assets are only $80 million. They shouldn't have $80 million—they should have $80 billion. So, please, if you do nothing else as a result of this post, go make a donation to the Against Malaria Foundation. I am entirely serious; if you think you might forget or change your mind, do it right now. Even a dollar would be worth it. If everyone in the First World gave $1, they would get 12 times as much as they currently have.
GiveWell is an excellent source for other places you should donate; they rate charities around the world for their cost-effectiveness in the only way worth doing: Lives saved per dollar donated. They don't just naively look at what percentage goes to administrative costs; they look at how everything is being spent and how many children have their diseases cured.
Until the end of April, UNICEF is offering an astonishing five times matching funds—meaning that if you donate $10, a full $50 goes to UNICEF projects. I have really mixed feelings about donors that offer matching funds (So what you're saying is, you won't give if we don't?), but when they are being offered, use them.
All those charities are focused on immediate poverty reduction; if you're looking for somewhere to give that fights Existential Risk, I highly recommend the Union of Concerned Scientists—one of the few Existential Risk organizations that uses evidence-based projections and recognizes that nuclear weapons and climate change are the threats we need to worry about.
And let's not be too anthropocentrist; there are a lot of other sentient beings on this planet, and Animal Charity Evaluator can help you find which charities will best improve the lives of other animals.
I've just listed a whole bunch of ways you can give money—and that probably is the best thing for you to give; your time is probably most efficiently used working in your own profession whatever that may be—but there are other ways you can contribute as well.
One simple but important change you can make, if you haven't already, is to become vegetarian. Even aside from the horrific treatment of animals in industrial farming, you don't have to believe that animals deserve rights to understand that meat is murder. Meat production is a larger contributor to global greenhouse gas emissions than transportation, so everyone becoming vegetarian would have a larger impact against climate change than taking literally every car and truck in the world off the road. Given that the world population is under 10 billion, that meat accounts for about 18% of greenhouse emissions, and that the IPCC projects climate change will kill between 10 and 100 million people over the next century, roughly every 500 to 5,000 new vegetarians saves a life.
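If you want to see where that range comes from, here is the back-of-envelope arithmetic, using only the round figures quoted above (treat every input as a rough assumption, not a precise estimate):

```python
# Back-of-envelope check of the "500 to 5,000 new vegetarians per life saved" claim.
# Every input is a round figure quoted in the text, not a precise estimate.
world_population = 10e9            # upper bound on world population used above
meat_share_of_emissions = 0.18     # meat's rough share of greenhouse emissions
climate_deaths_low = 10e6          # century-scale climate death projection (low end)
climate_deaths_high = 100e6        # (high end)

# If everyone went vegetarian and deaths scale in proportion to emissions:
lives_saved_low = meat_share_of_emissions * climate_deaths_low    # 1.8 million
lives_saved_high = meat_share_of_emissions * climate_deaths_high  # 18 million

best_case = world_population / lives_saved_high    # ~560 vegetarians per life saved
worst_case = world_population / lives_saved_low    # ~5,600 vegetarians per life saved
print(f"One life saved per {best_case:,.0f} to {worst_case:,.0f} new vegetarians")
```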
You can move your money from a bank to a credit union, as even the worst credit unions are generally better than the best for-profit banks, and the worst for-profit banks are very, very bad. The actual transition can be fairly inconvenient, but a good credit union will provide you with all the same services, and most credit unions link their networks and have online banking, so for example I can still deposit and withdraw from my University of Michigan Credit Union account while in California.
Another thing you can do is reduce your consumption of sweatshop products in favor of products manufactured under fair labor standards. This is harder than it sounds; it can be very difficult to tell what a company's true labor conditions are like, as the worst companies work very hard to hide them (now, if they worked half as hard to improve them… it reminds me of how many students seem willing to do twice as much work to cheat as they would to simply learn the material in the first place).
You should not simply stop buying products that say "Made in China"; in fact, this could be counterproductive. We want products to be made in China; we need products to be made in China. What we have to do is improve labor standards in China, so that products made in China are like products made in Japan or Korea—skilled workers with high-paying jobs in high-tech factories. Presumably it doesn't bother you when something says "Made in Switzerland" or "Made in the UK", because you know their labor standards are at least as high as our own; that's where I'd like to get with "Made in China".
The simplest way to do this is of course to buy Fair Trade products, particularly coffee and chocolate. But most products are not available Fair Trade (there are no Fair Trade computers, and only loose analogues for clothing and shoes).
Moreover, we must not let the perfect be the enemy of the good; companies that have done terrible things in the past may still be the best companies to support, because there are no alternatives that are any better. In order to incentivize improvement, we must buy from the least of all evils for awhile until the new competitive pressure makes non-evil corporations viable. With this in mind, the Fair Labor Association may not be wrong to endorse companies like Adidas and Apple, even though they surely have substantial room to improve. Similarly, few companies on the Ethisphere list are spotless, but they probably are genuinely better than their competitors. (Well, those that have competitors; Hasbro is on there. Name a well-known board game, and odds are it's made by a Hasbro subsidiary: they own Parker Brothers, Milton Bradley, and Wizards of the Coast. Wikipedia has their own category, Hasbro subsidiaries. Maybe they've been trying to tell us something with all those versions of Monopoly?)
I'm not very happy with the current state of labor standards reporting (much less labor standards enforcement), so I don't want to recommend any of these sources too highly. But if you are considering buying from one of three companies and only one of them is endorsed by the Fair Labor Association, it couldn't hurt to buy from that one instead of the others.
Buying from ethical companies will generally be more expensive—but rarely prohibitively so, and this is part of how we use price signals to incentivize better behavior. For about a year, BP gasoline was clearly cheaper than other gasoline, because nobody wanted to buy from BP and they were forced to sell at a discount after the Deepwater Horizon disaster. Their profits tanked as a result. That's the kind of outcome we want—preferably for a longer period of time.
I suppose you could also save money by buying cheaper products and then donate the difference, and in the short run this would actually be most cost-effective for global utility; but (1) nobody really does that; people who buy Fair Trade also tend to donate more, maybe just because they are more generous in general, and (2) in the long run what we actually want is more ethical businesses, not a system where businesses exploit everyone and then we rely upon private charity to compensate us for our exploitation. For similar reasons, philanthropy is a stopgap—and a much-needed one—but not a solution.
Of course, you can vote. And don't just vote in the big name elections like President of the United States. Your personal impact may actually be larger from voting in legislatures and even local elections and ballot proposals. Certainly your probability of being a deciding vote is far larger, though this is compensated by the smaller effect of the resulting policies. Most US states have a website where you can look up any upcoming ballots you'll be eligible to vote on, so you can plan out your decisions well in advance.
You may even want to consider running for office at the local level, though I realize this is a very large commitment. But most local officials run uncontested, which means there is no real democracy at work there at all.
Finally, you can contribute in some small way to making the world a better place simply by spreading the word, as I hope I'm doing right now.
Efficient markets and the Wisdom of Crowds
There is a well-known principle in social science called wisdom of the crowd, popularized in a book called The Wisdom of Crowds by James Surowiecki. It basically says that a group of people who aggregate their opinions can be more accurate than any individual opinion, even that of an expert; it is one of the fundamental justifications for democracy and free markets.
It is also often used to justify what is called the efficient market hypothesis, which in its weak form is approximately true (financial markets are unpredictable, unless you've got inside information or really good tools), but in its strong form is absolutely ludicrous (no, financial markets do not accurately reflect the most rational expectation of future outcomes in the real economy).
This post is about what the wisdom of the crowd actually does—and does not—say, and why it fails to justify the efficient market hypothesis even in its weak form.
The wisdom of the crowd says that when a group of people with a moderate level of accuracy all get together and average their predictions, the resulting estimate is better, on average, than what they came up with individually. A group of people who all "sort of" know something can get together and create a prediction that is much better than any one of them could come up with.
This can actually be articulated as a mathematical theorem, the diversity prediction theorem:
Collective error = average individual error - prediction diversity

In symbols, where the x_i are the individual estimates, \bar{x} is their mean (the collective estimate), \mu is the true value, and n is the number of estimates:

(\bar{x} - \mu)^2 = \frac{1}{n} \sum_i (x_i - \mu)^2 - \frac{1}{n} \sum_i (x_i - \bar{x})^2
This is a mathematical theorem; it's beyond dispute. By the definition of the sample mean, this equation holds.
But in applying it, we must be careful; it doesn't simply say that adding diversity will improve our predictions. Adding diversity will improve our predictions provided that we don't increase average individual error too much.
Here, I'll give some examples. Suppose we are guessing the weight of a Smart car. Person A says 1500 pounds; person B says 3000 pounds. Suppose the true weight is 2000 pounds.
Our collective estimate is the average of 1500 and 3000, which is 2250. So it's a bit high.
Suppose we add person C, who guesses the weight of the car as 1800 pounds. This is closer to the real value, so we'd expect our collective estimate to improve, and it does: It's now 2100 pounds.
But where the theorem can be a bit counter-intuitive is that we can add someone who is not particularly accurate, and still improve the estimate: If we also add person D, who guessed 1400 pounds, this seems like it should make our estimate worse—but it does not. Our new estimate is now 1925 pounds, which is a bit closer to the truth than 2100—and furthermore better than any individual estimate.
However, the theorem does not say that adding someone new will always improve the estimate; if we add person E, who has no idea how cars work and says that the car must weigh 50 pounds, we throw off the estimate so that it is now 1550 pounds. If we add enough such people, we can make the entire estimate wildly inaccurate: Add four more copies of person E and our new estimate of the car's weight is a mere 883 pounds.
In all cases the theorem holds, however. Let's consider the case where adding person E ruined our otherwise very good estimate.
Before we added person E, we had four estimates:
A said 1500, B said 3000, C said 1800, and D said 1400.
Our collective estimate was 1925.
Thus, collective error is (1925 – 2000)^2 = 5625, uh, square pounds? (Variances often have weird units.)
The individual errors are, respectively:
A: (1500 – 2000)^2 = 250,000
B: (3000 – 2000)^2 = 1,000,000
C: (1800 – 2000)^2 = 40,000
D: (1400 – 2000)^2 = 360,000
Average individual error is 412,500. So our collective error is much smaller than our average individual error. The difference is accounted for by prediction diversity.
Prediction diversity is found as the squared distance between each individual estimate and the average estimate (1925):
A: (1500 – 1925)^2 = 180,625
B: (3000 – 1925)^2 = 1,155,625
C: (1800 – 1925)^2 = 15,625
D: (1400 – 1925)^2 = 275,625
Thus, prediction diversity is the average of these, 406,875. And sure enough, 412,500 – 406,875 = 5,625.
When we add on the fifth estimate of 50 and repeat the process, here's what we get:
The new collective estimate is 1550. The prediction diversity went way up; it's now 888,000. But the average error rose even faster, so it is now 1,090,500. As a result, the collective error got a lot worse, and is now 202,500. So adding more people does not always improve your estimates, if those people have no idea what they're doing.
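If you would like to check the arithmetic yourself, here is a minimal Python sketch (a verification aid, not part of the original argument) that reproduces both cases and confirms the identity:

```python
def decompose(estimates, truth):
    """Return (collective error, average individual error, prediction diversity)."""
    n = len(estimates)
    mean = sum(estimates) / n
    collective_error = (mean - truth) ** 2
    avg_individual_error = sum((x - truth) ** 2 for x in estimates) / n
    prediction_diversity = sum((x - mean) ** 2 for x in estimates) / n
    return collective_error, avg_individual_error, prediction_diversity

TRUE_WEIGHT = 2000  # the car's actual weight, in pounds

for label, guesses in [("A-D", [1500, 3000, 1800, 1400]),
                       ("A-E", [1500, 3000, 1800, 1400, 50])]:
    ce, aie, pd = decompose(guesses, TRUE_WEIGHT)
    print(f"{label}: collective error {ce:,.0f} = "
          f"average individual error {aie:,.0f} - diversity {pd:,.0f}")
    assert abs(ce - (aie - pd)) < 1e-9  # the diversity prediction theorem
```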
When it comes to the stock market, most people have no idea what they're doing. Even most financial experts can forecast the market no better than chance.
The wisdom of the crowd holds when most people can basically get it right; maybe their predictions are 75% accurate for binary choices, or within a factor of 2 for quantitative estimates, something like that. Then, each guess is decent, but not great; and by combining a lot of decent estimates we get one really good estimate.
Of course, the diversity prediction theorem does still apply: Most individual investors underperform the stock market as a whole, just as the theorem would say—average individual prediction is worse than collective prediction.
Moreover, stock prices do have something to do with fundamentals, because fundamental analysis does often work, contrary to most forms of the efficient market hypothesis. (It's a very oddly named hypothesis, really; what's "efficient" about a market that is totally unpredictable?)
But in order for stock prices to actually be a good measure of the real value of a company, most of the people buying and selling stock would have to be using fundamental analysis. In order for stocks to reflect real values, stock choices must be based on real values—that's the only mechanism by which real values could ever enter the equation.
While there are definitely a lot of people who use fundamental analysis, it really doesn't seem like there are enough. At least for short-run ups and downs, most decisions seem to be made on a casual form of technical analysis: "It's going up! Buy!" or "It just went down! Buy!" (Yes, you hear both of those; the latter is closer to true for short-run fluctuations, but the real pattern is a bit more complicated than that.)
For the wisdom of the crowd to work, the estimates need to be independent—each person makes a reasonable guess on their own, then we average over all the guesses. When you do this for simple tasks like the weight of a car or the number of jellybeans in a jar, you get some really astonishingly accurate results. Even for harder tasks where people have a vague idea, like the number of visible stars in the sky, you can do pretty well. But if you let people talk about their answers, the aggregate guess often gets much worse, especially if there are no experts in the group. And we definitely talk about stocks an awful lot; one of the best sources for utterly meaningless post hoc statements in the world is the financial news section, which will always find some explanation for any market change, often tenuous at best, and then offer some sort of prediction for what will happen next which is almost always wrong.
This lack of independence fundamentally changes the system. The main thing that people consider when choosing which stocks to buy is which stocks other people are buying. This is called a Keynesian beauty contest; apparently these beauty contests used to be a thing in the 1930s, where you'd send in pictures of your baby and then people would vote on which baby was the cutest—but the key part in Keynes's version is that you win money not based on whether your baby wins, but based on whether the baby you vote for wins. So you don't necessarily vote for the one you think is cutest; you vote for the one you think other people will vote for, which is based on what they think other people will vote for, and so on. There are ways to make that infinite series converge, but there are also lots of cases where it diverges, and in reality what I think happens here is our brains max out and give up. (According to Dennett, we can handle about 7 layers of intentionality before our brains max out.)
A similar process is at work in the stock market, as well as with strategic voting—yet another reason why we should be designing our voting system to disincentivize strategic voting.
What we have then is a system with a feedback loop: We buy Apple because we buy Apple because we buy Apple. (Just as we use Facebook because we use Facebook because we use Facebook.)
Feedback loops can introduce chaotic behavior. Depending on the precise parameters involved, all of this guessing could turn out to converge to the real value of companies—or it could converge to something else entirely, or keep fluctuating all over the place indefinitely. Since the latter seems to be what happens, I think the real parameters are probably in that range of fluctuating instability. (I've actually programmed some simple computer models with parameters in that chaotic range, and they come out pretty darn close to the real behavior of stock markets—much better than the Black-Scholes model, for instance.) If you want a really in-depth analysis of the irrationality of financial markets, I highly recommend Robert Shiller, who after all won a Nobel for this sort of thing.
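To illustrate the general idea, here is a deliberately crude toy (not the model referred to above, and certainly not a forecasting tool): a price series driven mostly by trend-chasing feedback, with only a weak pull toward a fixed fundamental value. Even though the fundamental never moves, the price swings through noise-amplified booms and busts.

```python
import random

random.seed(42)
fundamental = 100.0   # assume a fixed "true" value of the stock
price = 100.0
momentum = 0.0        # last period's price change
prices = []

for day in range(250):                            # roughly one year of trading days
    trend_chasing = 0.9 * momentum                # "it went up, buy!" feedback
    value_pull = 0.05 * (fundamental - price)     # weak anchor to fundamentals
    noise = random.gauss(0, 1)                    # news, moods, randomness
    change = trend_chasing + value_pull + noise
    momentum = change
    price += change
    prices.append(price)

print(f"fundamental: {fundamental:.0f}, final price: {price:.1f}, "
      f"range: {min(prices):.1f} to {max(prices):.1f}")
```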
What does this mean for the efficient market hypothesis? That it's basically a non-starter. We have no reason to believe that stock prices accurately integrate real fundamental information, and many reasons to think they do not. The unpredictability of stock prices could be just that—unpredictability, meaning that stock prices in the short run are simply random, and short-term trading is literally gambling. In the long run they seem to settle out into trends with some relation to fundamentals—but as Keynes said, in the long run we are all dead, and the market can remain irrational longer than you can remain solvent.
Free trade is not the problem. Billionaires are the problem.
One thing that really stuck out to me about the analysis of the outcome of the Michigan primary elections was that people kept talking about trade; when Bernie Sanders, a center-left social democrat, and Donald Trump, a far-right populist nationalist (and maybe even crypto-fascist), are the winners, something strange is at work. The one common element the two victors seemed to share was their opposition to free trade agreements. And while people give many reasons for supporting Trump, many quite baffling, his staunch protectionism is one of the most commonly cited. While Sanders is not as staunchly protectionist, he has definitely opposed many free-trade agreements.
Most of the American middle class feels as though they are running in place, working as hard as they can to stay where they are and never moving forward. The income statistics back them up on this: according to FRED data, real median household income in the US is actually lower than it was a decade ago; it never really did recover from the Second Depression.
As I talk to people about why they think this is, one of the biggest reasons they always give is some variant of "We keep sending our jobs to China." There is this deep-seated intuition most Americans seem to have that the degradation of the middle class is the result of trade globalization. Bernie Sanders speaks about ending this by changes in tax policy and stronger labor regulations (which actually makes some sense); Donald Trump speaks of ending this by keeping out all those dirty foreigners (which appeals to the worst in us); but ultimately, they both are working from the narrative that free trade is the problem.
But free trade is not the problem. Like almost all economists, I support free trade. Free trade agreements might be part of the problem—but that's because a lot of free trade agreements aren't really about free trade. Many trade agreements, especially the infamous TRIPS accord, were primarily about restricting trade—specifically on "intellectual property" goods like patented drugs and copyrighted books. They were about expanding the monopoly power of corporations over their products so that the monopoly applied not just to the United States, but indeed to the whole world. This is the opposite of free trade and everything that it stands for. The TPP was a mixed bag, with some genuinely free-trade provisions (removing tariffs on imported cars) and some awful anti-trade provisions (making patents on drugs even stronger).
Every product we buy as an import is another product we sell as an export. This is not quite true, as the US does run a trade deficit; but our trade deficit is small compared to our overall volume of trade (which is ludicrously huge). Total US exports for 2014, the last full year we've fully tabulated, were $3.306 trillion—roughly the entire budget of the federal government. Total US imports for 2014 were $3.578 trillion. This makes our trade deficit $272 billion, which is 7.6% of our imports, or about 1.5% of our GDP of $18.148 trillion. So to be more precise, every 100 products we buy as imports are 92 products we sell as exports.
If we stopped making all these imports, what would happen? Well, for one thing, millions of people in China would lose their jobs and fall back into poverty. But even if you're just looking at the US specifically, there's no reason to think that domestic production would increase nearly as much as the volume of trade was reduced, because the whole point of trade is that it's more efficient than domestic production alone. It is actually generous to think that by switching to autarky we'd have even half the domestic production that we're currently buying in imports. And then of course countries we export to would retaliate, and we'd lose all those exports. The net effect of cutting ourselves off from world trade would be a loss of about $1.5 trillion in GDP—average income would drop by 8%.
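Here, roughly, is where that $1.5 trillion figure comes from; the 50% replacement rate is the generous assumption stated above, and the trade figures are the 2014 numbers already quoted:

```python
# Back-of-envelope accounting for the autarky scenario, in trillions of 2014 dollars.
exports, imports, gdp = 3.306, 3.578, 18.148

replacement_rate = 0.5                        # generously assume half of imports get made at home
lost_exports = exports                        # retaliation wipes out exports entirely
gained_domestic = replacement_rate * imports  # new domestic production replacing imports

net_change = gained_domestic - lost_exports
print(f"GDP change: {net_change:+.2f} trillion ({net_change / gdp:+.1%} of GDP)")
# roughly -1.5 trillion, or about -8% of GDP
```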
Now, to be fair, there are winners and losers. Offshoring of manufacturing does destroy the manufacturing jobs that are offshored; but at least when done properly, it also creates new jobs by improved efficiency. These two effects are about the same size, so the overall effect is a small decline in the overall number of US manufacturing jobs. It's not nearly large enough to account for the collapsing middle class.
Globalization may be one contributor to rising inequality, as may changes in technology that make some workers (software programmers) wildly more productive as they make other workers (cashiers, machinists, and soon truck drivers) obsolete. But those of us who have looked carefully at the causes of rising income inequality know that this is at best a small part of what's really going on.
The real cause is what Bernie Sanders is always on about: The 1%. Gains in income in the US for the last few decades (roughly as long as I've been alive) have been concentrated in a very small minority of the population—in fact, even 1% may be too coarse. Most of the income gains have actually gone to more like the top 0.5% or top 0.25%, and the most spectacular increases in income have all been concentrated in the top 0.01%.
The story that we've been told—I dare say sold—by the mainstream media (which is, let's face it, owned by a handful of corporations) is that new technology has made it so that anyone who works hard (or at least anyone who is talented and works hard and gets a bit lucky) can succeed or even excel in this new tech-driven economy.
I just gave up on a piece of drivel called Bold that was seriously trying to argue that anyone with a brilliant idea can become a billionaire if they just try hard enough. (It also seemed positively gleeful about the possibility of a cyberpunk dystopia in which corporations use mass surveillance on their customers and competitors—yes, seriously, this was portrayed as a good thing.) If you must read it, please, don't give these people any more money. Find it in a library, or find a free ebook version, or something. Instead you should give money to the people who wrote the book I switched to, Raw Deal, whose authors actually understand what's going on here (though I maintain that the book should in fact be called Uber Capitalism).
When you look at where all the money from the tech-driven "new economy" is going, it's not to the people who actually make things run. A typical wage for a web developer is about $35 per hour, and that's relatively good as far as entry-level tech jobs. A typical wage for a social media intern is about $11 per hour, which is probably less than what the minimum wage ought to be. The "sharing economy" doesn't produce outstandingly high incomes for workers, just outstandingly high income risk because you aren't given a full-time salary. Uber has claimed that its drivers earn $90,000 per year, but in fact their real take-home pay is about $25 per hour. A typical employee at Airbnb makes $28 per hour. If you do manage to find full-time hours at those rates, you can make a middle-class salary; but that's a big "if". "Sharing economy"? Robert Reich has aptly renamed it the "share the crumbs economy".
So where's all this money going? CEOs. The CEO of Uber has net wealth of $8 billion. The CEO of Airbnb has net wealth of $3.3 billion. But they are paupers compared to the true giants of the tech industry: Larry Page of Google has $36 billion. Jeff Bezos of Amazon has $49 billion. And of course who can forget Bill Gates, founder of Microsoft, and his mind-boggling $77 billion.
Can we seriously believe that this is because their ideas were so brilliant, or because they are so talented and skilled? Uber's "brilliant" idea is just to monetize carpooling and automate linking people up. Airbnb's "revolutionary" concept is an app to advertise your bed-and-breakfast. At least Google invented some very impressive search algorithms, Amazon created one of the most competitive product markets in the world, and Microsoft democratized business computing. Of course, none of these would be possible without the invention of the Internet by government and university projects.
As for what these CEOs do that is so skilled? At this point they basically don't do… anything. Any real work they did was in the past, and now it's all delegated to other people; they just rake in money because they own things. They can manage if they want, but most of them have figured out that the best CEOs do very little while CEOs who micromanage typically fail. While I can see some argument for the idea that working hard in the past could merit you owning capital in the future, I have a very hard time seeing how being very good at programming and marketing makes you deserve to have so much money you could buy a new Ferrari every day for the rest of your life.
That's the heuristic I like to tell people, to help them see the absolutely enormous difference between a millionaire and a billionaire: A millionaire is someone who can buy a Ferrari. A billionaire is someone who can buy a new Ferrari every day for the rest of their life. A high double-digit billionaire like Bezos or Gates could buy a new Ferrari every hour for the rest of their life. (Do the math; a Ferrari is about $250,000. Remember that they get a return on capital typically between 5% and 15% per year. With $1 billion, you get $50 to $150 million just in interest and dividends every year, and $100 million is enough to buy 365 Ferraris. As long as you don't have several very bad years in a row on your stocks, you can keep doing this more or less forever—and that's with only $1 billion.)
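Spelled out in code, using the same rough Ferrari price and rates of return:

```python
FERRARI = 250_000            # rough price of a new Ferrari
LOW_R, HIGH_R = 0.05, 0.15   # typical annual returns on capital

# A millionaire can buy one Ferrari outright; a billionaire can buy them from returns alone.
for wealth in (1_000_000_000, 77_000_000_000):
    low = wealth * LOW_R / FERRARI
    high = wealth * HIGH_R / FERRARI
    print(f"${wealth:,}: {low:,.0f} to {high:,.0f} new Ferraris per year "
          f"(one per day needs 365; one per hour needs {24 * 365:,})")
```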
Immigration and globalization are not what is killing the American middle class. Corporatization is what's killing the American middle class. Specifically, the use of regulatory capture to enforce monopoly power and thereby appropriate almost all the gains of new technologies into the hands of a few dozen billionaires. Typically this is achieved through intellectual property, since corporate-owned patents basically just are monopolistic regulatory capture.
Since 1984, US real GDP per capita rose from $28,416 to $46,405 (in 2005 dollars). In that same time period, real median household income only rose from $48,664 to $53,657 (in 2014 dollars). That means that the total amount of income per person in the US rose by 49 log points (63%), while the amount of income that a typical family received only rose 10 log points (10%). If median income had risen at the same rate as per-capita GDP (and if inequality remained constant, it would), it would now be over $79,000, instead of $53,657. That is, a typical family would have $25,000 more than they actually do. The poverty line for a family of 4 is $24,300; so if you're a family of 4 or less, the billionaires owe you a poverty line. You should have three times the poverty line, and in fact you have only two—because they took the rest.
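For readers unfamiliar with log points, here is the whole calculation, using the income figures just quoted:

```python
import math

# GDP per capita (2005 dollars) and median household income (2014 dollars),
# 1984 vs. today, as quoted above.
gdp_1984, gdp_now = 28_416, 46_405
med_1984, med_now = 48_664, 53_657

gdp_growth = gdp_now / gdp_1984
med_growth = med_now / med_1984

print(f"GDP per capita: +{100 * math.log(gdp_growth):.0f} log points (+{gdp_growth - 1:.0%})")
print(f"Median income:  +{100 * math.log(med_growth):.0f} log points (+{med_growth - 1:.0%})")

# If the median household had kept pace with GDP per capita:
counterfactual = med_1984 * gdp_growth
print(f"Counterfactual median income: ${counterfactual:,.0f} vs. actual ${med_now:,} "
      f"(gap: ${counterfactual - med_now:,.0f})")
```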
And let me be very clear: I mean took. I mean stole, in a very real sense. This is not wealth that they created by their brilliance and hard work. This is wealth that they expropriated by exploiting people and manipulating the system in their favor. There is no way that the top 1% deserves to have as much wealth as the bottom 95% combined. They may be talented; they may work hard; but they are not that talented, and they do not work that hard. You speak of "confiscation of wealth" and you mean income taxes? No, this is the confiscation of our nation's wealth.
Those of us who voted for Bernie Sanders voted for someone who is trying to stop it.
Those of you who voted for Donald Trump? Congratulations on supporting someone who epitomizes it.
This is why we must vote our consciences.
As I write, Bernie Sanders has just officially won the Michigan Democratic Primary. It was a close race—he was ahead by about 2% the entire time—so the delegates will be split; but he won.
This is notable because so many forecasters said it was impossible. Before the election, Nate Silver, one of the best political forecasters in the world (and he still deserves that title) had predicted a less than 1% chance Bernie Sanders could win. In fact, had he taken his models literally, he would have predicted a less than 1 in 10 million chance Bernie Sanders could win—I think it speaks highly of him that he was not willing to trust his models quite that far. I got into one of the wonkiest flamewars of all time earlier today debating whether this kind of egregious statistical error should call into question many of our standard statistical methods (I think it should; another good example is the total failure of the Black-Scholes model during the 2008 financial crisis).
Had we trusted the forecasters, held our noses and voted for the "electable" candidate, this would not have happened. But instead we voted our consciences, and the candidate we really wanted won.
It is an unfortunate truth that our system of plurality "first-past-the-post" voting does actually strongly incentivize strategic voting. Indeed, did it not, we wouldn't need primaries in the first place. With a good range voting or even Condorcet voting system, you could basically just vote honestly among all candidates and expect a good outcome. Technically it's still possible to vote strategically in range and Condorcet systems, but it's not necessary the way it is in plurality vote systems.
The reason we need primaries is that plurality voting is not cloneproof; if two very similar candidates ("clones") run that everyone likes, votes will be split between them and the two highly-favored candidates can lose to a less-favored candidate. Condorcet voting is cloneproof in most circumstances, and range voting is provably cloneproof everywhere and always. (Have I mentioned that we should really have range voting?)
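Here is a toy illustration of the clone problem, with entirely made-up ballots; the point is only the mechanism, not a prediction about any real election:

```python
from collections import Counter

# Hypothetical electorate: 60 voters like the two similar candidates A1 and A2
# (split roughly evenly between them), 40 voters prefer B. Ballots are 0-10 scores.
ballots = ([{"A1": 9, "A2": 8, "B": 1}] * 31 +
           [{"A1": 8, "A2": 9, "B": 1}] * 29 +
           [{"A1": 2, "A2": 2, "B": 9}] * 40)

# Plurality: each voter names only a single favorite, so the clones split their support.
plurality = Counter(max(b, key=b.get) for b in ballots)
print("Plurality tally:", dict(plurality))   # B wins 40-31-29 despite 60% preferring A1/A2

# Range voting: highest average score wins, and the clones do not hurt each other.
averages = {c: sum(b[c] for b in ballots) / len(ballots) for c in ballots[0]}
print("Range averages:", {c: round(s, 2) for c, s in averages.items()})
```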
Hillary Clinton and Bernie Sanders are not clones by any means, but they are considerably more similar to one another than either is to Donald Trump or Ted Cruz. If all the Republicans were to immediately drop out besides Trump while Clinton and Sanders stayed in the race, Trump could end up winning because votes were split between Clinton and Sanders. Primaries exist to prevent this outcome; either Sanders or Clinton will be in the final election, but not both (the #BernieOrBust people notwithstanding), so it will be a simple matter of whether they are preferred to Trump, which of course both Clinton and Sanders are. Don't put too much stock in these polls, as polls this early are wildly unreliable. But I think they at least give us some sense of which direction the outcome is likely to be.
Ideally, we wouldn't need to worry about that, and we could just vote our consciences all the time. But in the general election, you really do need to vote a little strategically and choose the better (or less-bad) option among the two major parties. No third-party Presidential candidate has ever gotten close to actually winning an election, and the best they ever seem to do is acting as weak clones undermining other similar candidates, as Ross Perot and Ralph Nader did. (Still, if you were thinking of not voting at all, it is obviously preferable for you to vote for a third-party candidate. If everyone who didn't vote had instead voted for Ralph Nader, Nader would have won by a landslide—and US climate policy would be at least a decade ahead of where it is now, and we might not be already halfway to the 2 C global warming threshold.)
But in the primary? Vote your conscience. Primaries exist to make this possible, and we just showed that it can work. When people actually turn out to vote and support candidates they believe in, they win elections. If the same thing happens in several other states that just happened in Michigan, Bernie Sanders could win this election. And even if he doesn't, he's already gone a lot further than most of the pundits ever thought he could. (Sadly, so has Trump.)
We do not benefit from economic injustice.
Recently I think I figured out why so many middle-class White Americans express so much guilt about global injustice: A lot of people seem to think that we actually benefit from it. Thus, they feel caught between a rock and a hard place; conquering injustice would mean undermining their own already precarious standard of living, while leaving it in place is unconscionable.
The compromise, apparently, is to feel really, really guilty about it, constantly tell people to "check their privilege" in this bizarre form of trendy autoflagellation, and then… never really get around to doing anything about the injustice.
(I guess that's better than the conservative interpretation, which seems to be that since we benefit from this, we should keep doing it, and make sure we elect big, strong leaders who will make that happen.)
So let me tell you in no uncertain words: You do not benefit from this.
If anyone does—and as I'll get to in a moment, that is not even necessarily true—then it is the billionaires who own the multinational corporations that orchestrate these abuses. Billionaires and billionaires only stand to gain from the exploitation of workers in the US, China, and everywhere else.
How do I know this with such certainty? Allow me to explain.
First of all, it is a common perception that prices of goods would be unattainably high if they were not produced on the backs of sweatshop workers. This perception is mistaken. The primary effect of the exploitation is simply to raise the profits of the corporation; there is a secondary effect of raising the price a moderate amount; and even this would be overwhelmed by the long-run dynamic effect of the increased consumer spending if workers were paid fairly.
Let's take an iPad, for example. The price of iPads varies around the world in a combination of purchasing power parity and outright price discrimination; but the top model almost never sells for less than $500. The raw material expenditure involved in producing one is about $370—and the labor expenditure? Just $11. Not $110; $11. If it had been $110, the price could still be kept under $500 and Apple could still turn a profit; the profit would simply be much smaller. That is, even if prices were really so elastic that Americans would refuse to buy an iPad at any more than $500, Apple could still afford to raise the wages they pay (or rather, that their subcontractors pay) workers by an order of magnitude. A worker who currently works 50 hours a week for $10 per day could now make $10 per hour. And the price would not have to change; Apple would simply make less profit, which is why they don't do this. In the absence of pressure to the contrary, corporations will do whatever they can to maximize profits.
Now, in fact, the price probably would go up, because Apple fans are among the most inelastic technology consumers in the world. But suppose it went up to $600, which would mean a 1:1 absorption of these higher labor expenditures into price. Does that really sound like "Americans could never afford this"? A few people right on the edge might decide they couldn't buy it at that price, but it wouldn't be very many—indeed, like any well-managed monopoly, Apple knows to stop raising the price at the point where they start losing more revenue than they gain.
Similarly, half the price of an iPhone is pure profit for Apple, and only 2% goes into labor. Once again, wages could be raised by an order of magnitude and the price would not need to change.
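To make the arithmetic in the last few paragraphs concrete, here is a minimal sketch in Python using the round figures quoted above (a $500 price, roughly $370 in materials, about $11 in labor per unit); the tenfold wage increase and the $600 pass-through price are this post's hypotheticals, not Apple's actual accounting.

# Illustrative unit economics for the iPad example above.
# All figures are the round numbers quoted in the post, not Apple's books.
price = 500.0        # typical minimum selling price (USD)
materials = 370.0    # approximate raw-material cost per unit (USD)
labor = 11.0         # approximate labor cost per unit (USD)

def unit_profit(price, materials, labor):
    """Gross profit per unit, before overhead, marketing, and so on."""
    return price - materials - labor

baseline = unit_profit(price, materials, labor)
tenfold_wages = unit_profit(price, materials, labor * 10)    # wages raised 10x, price unchanged
passed_through = unit_profit(600.0, materials, labor * 10)   # same wage increase, price raised to $600

print(f"Baseline unit profit:   ${baseline:.0f}")        # ~$119
print(f"10x wages, same price:  ${tenfold_wages:.0f}")   # ~$20 -- smaller, but still positive
print(f"10x wages, $600 price:  ${passed_through:.0f}")  # ~$120 -- roughly the baseline margin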
Apple is a particularly obvious example, but it's quite simple to see why exploitative labor cannot be the source of improved economic efficiency. Paying workers less does not make them do better work. Treating people more harshly does not improve their performance. Quite the opposite: People work much harder when they are treated well. In addition, at the levels of income we're talking about, small improvements in wages would result in substantial improvements in worker health, further improving performance. Finally, substitution effect dominates income effect at low incomes. At very high incomes, income effect can dominate substitution effect, so higher wages might result in less work—but it is precisely when we're talking about poor people that it makes the least sense to say they would work less if you paid them more and treated them better.
At most, paying higher wages can redistribute existing wealth, if we assume that the total amount of wealth does not increase. So it's theoretically possible that paying higher wages to sweatshop workers would result in them getting some of the stuff that we currently have (essentially by a price mechanism where the things we want get more expensive, but our own wages don't go up). But in fact our wages are most likely too low as well—wages in the US have become unlinked from productivity, around the time of Reagan—so there's reason to think that a more just system would improve our standard of living also. Where would all the extra wealth come from? Well, there's an awful lot of room at the top.
The top 1% in the US own 35% of net wealth, about as much as the bottom 95%. The 400 billionaires of the Forbes list have more wealth than the entire African-American population combined. (We're double-counting Oprah—but that's it, she's the only African-American billionaire in the US.) So even assuming that the total amount of wealth remains constant (which is too conservative, as I'll get to in a moment), improving global labor standards wouldn't need to pull any wealth from the middle class; it could get plenty just from the top 0.01%.
In surveys, most Americans are willing to pay more for goods in order to improve labor standards—and the amounts that people are willing to pay, while they may seem small (on the order of 10% to 20% more), are in fact clearly enough that they could substantially increase the wages of sweatshop workers. The biggest problem is that corporations are so good at covering their tracks that it's difficult to know whether you are really supporting higher labor standards. The multiple layers of international subcontractors make things even more complicated; the people who directly decide the wages are not the people who ultimately profit from them, because subcontractors are competitive while the multinationals that control them are monopsonists.
But for now I'm not going to deal with the thorny question of how we can actually regulate multinational corporations to stop them from using sweatshops. Right now, I just really want to get everyone on the same page and be absolutely clear about cui bono. If there is a benefit at all, it's not going to you and me.
Why do I keep saying "if"? As so many people will ask me: "Isn't it obvious that if one person gets less money, someone else must get more?" If you've been following my blog at all, you know that the answer is no.
On a single transaction, with everything else held constant, that is true. But we're not talking about a single transaction. We're talking about a system of global markets. Indeed, we're not really talking about money at all; we're talking about wealth.
By paying their workers so little that those workers can barely survive, corporations are making it impossible for those workers to go out and buy things of their own. Since the costs of higher wages are concentrated in one corporation while the benefits of higher wages are spread out across society, there is a Tragedy of the Commons where each corporation acting in its own self-interest undermines the consumer base that would have benefited all corporations (not to mention people who don't own corporations). It does depend on some parameters we haven't measured very precisely, but under a wide range of plausible values, it works out that literally everyone is worse off under this system than they would have been under a system of fair wages.
This is not simply theoretical. We have empirical data about what happened when companies (in the US at least) stopped using an even more extreme form of labor exploitation: slavery.
Because we were on the classical gold standard, GDP growth in the US in the 19th century was extremely erratic, jumping up and down as high as 10 lp and as low as -5 lp. But if you try to smooth out this roller-coaster business cycle, you can see that our growth rate did not appear to be slowed by the ending of slavery:
Looking at the level of real per capita GDP (on a log scale) shows a continuous growth trend as if nothing had changed at all:
In fact, if you average the growth rates (in log points, averaging makes sense) from 1800 to 1860 as antebellum and from 1865 to 1900 as postbellum, you find that the antebellum growth rate averaged 1.04 lp, while the postbellum growth rate averaged 1.77 lp. Over a period of 50 years, that's the difference between growing by a factor of 1.7 and growing by a factor of 2.4. Of course, there were a lot of other factors involved besides the end of slavery—but at the very least it seems clear that ending slavery did not reduce economic growth, which it would have if slavery were actually an efficient economic system.
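For anyone who wants to check that arithmetic, here is a minimal sketch converting the average growth rates quoted above (in log points) into 50-year growth factors; the 1.04 lp and 1.77 lp averages are taken directly from the paragraph above.

import math

# Average annual growth rates in log points (lp), as quoted above.
antebellum_lp = 1.04   # roughly 1800-1860
postbellum_lp = 1.77   # roughly 1865-1900

def growth_factor(rate_lp, years):
    """Total growth factor after a given number of years at a constant rate in log points."""
    return math.exp(rate_lp / 100 * years)

print(f"Antebellum rate over 50 years: x{growth_factor(antebellum_lp, 50):.1f}")  # ~1.7
print(f"Postbellum rate over 50 years: x{growth_factor(postbellum_lp, 50):.1f}")  # ~2.4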
This is a different question from whether slaveowners were irrational in continuing to own slaves. Purely on the basis of individual profit, it was most likely rational to own slaves. But the broader effects on the economic system as a whole were strongly negative. I think that part of why the debate on whether slavery is economically inefficient has never been settled is a confusion between these two questions. One side says "Slavery damaged overall economic growth." The other says "But owning slaves produced a rate of return for investors as high as manufacturing!" Yeah, those… aren't answering the same question. They are in fact probably both true. Something can be highly profitable for individuals while still being tremendously damaging to society.
I don't mean to imply that sweatshops are as bad as slavery; they are not. (Though there is still slavery in the world, and some sweatshops tread a fine line.) What I'm saying is that showing that sweatshops are profitable (no doubt there) or even that they are better than most of the alternatives for their workers (probably true in most cases) does not show that they are economically efficient. Sweatshops are beneficent exploitation—they make workers better off, but in an obviously unjust way. And they only make workers better off compared to the current alternatives; if they were replaced with industries paying fair wages, workers would obviously be much better off still.
And my point is, so would we. While the prices of goods would increase slightly in the short run, in the long run the increased consumer spending by people in Third World countries—which soon would cease to be Third World countries, as happened in Korea and Japan—would result in additional trade with us that would raise our standard of living, not lower it. The only people it is even plausible to think would be harmed are the billionaires who own our multinational corporations; and yet even they might stand to benefit from the improved efficiency of the global economy.
No, you do not benefit from sweatshops. So stop feeling guilty, stop worrying so much about "checking your privilege"—and let's get out there and do something about it.
The real Existential Risk we should be concerned about
There is a rather large subgroup within the rationalist community (loosely defined because organizing freethinkers is like herding cats) that focuses on existential risks, also called global catastrophic risks. Prominent examples include Nick Bostrom and Eliezer Yudkowsky.
Their stated goal in life is to save humanity from destruction. And when you put it that way, it sounds pretty darn important. How can you disagree with wanting to save humanity from destruction?
Well, there are actually people who do (the Voluntary Human Extinction movement), but they are profoundly silly. It should be obvious to anyone with even a basic moral compass that saving humanity from destruction is a good thing.
It's not the goal of fighting existential risk that bothers me. It's the approach. Specifically, they almost all seem to focus on exotic existential risks, vivid and compelling existential risks that are the stuff of great science fiction stories. In particular, they have a rather odd obsession with AI.
Maybe it's the overlap with Singularitarians, and their inability to understand that exponentials are not arbitrarily fast; if you just keep projecting the growth in computing power as growing forever, surely eventually we'll have a computer powerful enough to solve all the world's problems, right? Well, yeah, I guess… if we can actually maintain the progress that long, which we almost certainly can't, and if the problems turn out to be computationally tractable at all (the fastest possible computer that could fit inside the observable universe could not brute-force solve the game of Go, though a heuristic AI did just beat one of the world's best players), and/or if we find really good heuristic methods of narrowing down the solution space… but that's an awful lot of "if"s.
But AI isn't what we need to worry about in terms of saving humanity from destruction. Nor is it asteroid impacts; NASA has been doing a good job watching for asteroids lately, and estimates the current risk of a serious impact (by which I mean something like a city-destroyer or global climate shock, not even a global killer) at around 1/10,000 per year. Alien invasion is right out; we can't even find clear evidence of bacteria on Mars, and the skies are so empty of voices it has been called a paradox. Gamma ray bursts could kill us, and we aren't sure about the probability of that (we think it's small?), but much like brain aneurysms, there really isn't a whole lot we can do to prevent them.
There is one thing that we really need to worry about destroying humanity, and one other thing that could potentially get close over a much longer timescale. The long-range threat is ecological collapse; as global climate change gets worse and the oceans become more acidic and the aquifers are drained, we could eventually reach the point where humanity cannot survive on Earth, or at least where our population collapses so severely that civilization as we know it is destroyed. This might not seem like such a threat, since we would see this coming decades or centuries in advance—but we are seeing it coming decades or centuries in advance, and yet we can't seem to get the world's policymakers to wake up and do something about it. So that's clearly the second-most important existential risk.
But the most important existential risk, by far, no question, is nuclear weapons.
Nuclear weapons are the only foreseeable, preventable means by which humanity could be destroyed in the next twenty minutes.
Yes, that is approximately the time it takes an ICBM to hit its target after launch. There are almost 4,000 strategic nuclear weapons currently deployed on ICBMs, submarine-launched missiles, and bombers, mostly by the US and Russia; counting warheads held in reserve or awaiting dismantlement, the total global stockpile is over 15,000. I apologize for terrifying you by saying that these weapons could be deployed at a moment's notice to wipe out most of human civilization within half an hour, followed by a global ecological collapse and fallout that would endanger the future of the entire human race—but it's the truth. If you're not terrified, you're not paying attention.
I've intentionally linked the Union of Concerned Scientists as one of those sources. Now they are people who understand existential risk. They don't talk about AI and asteroids and aliens (how alliterative). They talk about climate change and nuclear weapons.
We must stop this. We must get rid of these weapons. Next to that, literally nothing else matters.
"What if we're conquered by tyrants?" It won't matter. "What if there is a genocide?" It won't matter. "What if there is a global economic collapse?" None of these things will matter, if the human race wipes itself out with nuclear weapons.
To speak like an economist for a moment, the utility of a global nuclear war must be set at negative infinity. Any detectable reduction in the probability of that event must be considered worth paying any cost to achieve. I don't care if it costs $20 trillion and results in us being taken over by genocidal fascists—we are talking about the destruction of humanity. We can spend $20 trillion (actually the US as a whole does every 14 months!). We can survive genocidal fascists. We cannot survive nuclear war.
The good news is, we shouldn't actually have to pay that sort of cost. All we have to do is dismantle our nuclear arsenal, and get other countries—particularly Russia—to dismantle theirs. In the long run, we will increase our wealth as our efforts are no longer wasted maintaining doomsday machines.
The main challenge is actually a matter of game theory. The surprisingly-sophisticated 1990s cartoon show the Animaniacs basically got it right when they sang: "We'd beat our swords into liverwurst / Down by the East Riverside / But no one wants to be the first!"
The thinking, anyway, is that this is basically a Prisoner's Dilemma. If the US disarms and Russia doesn't, Russia can destroy the US. Conversely, if Russia disarms and the US doesn't, the US can destroy Russia. If neither disarms, we're left where we are. Whether or not the other country disarms, you're always better off not disarming. So neither country disarms.
But I contend that it is not, in fact, a Prisoner's Dilemma. It could be a Stag Hunt; if that's the case, then only multilateral disarmament makes sense, because the best outcome is if we both disarm, but the worst outcome is if we disarm and they don't. Once we expect them to disarm, we have no temptation to renege on the deal ourselves; but if we think there's a good chance they won't, we might not want to either. Stag Hunts have two stable Nash equilibria; one is where both arm, the other where both disarm.
But in fact, I think it may be simply the trivial game.
There aren't actually that many possible symmetric two-player nonzero-sum games (basically it's a question of ordering 4 possibilities, and it's symmetric, so 12 possible games), and one that we never talk about (because it's sort of boring) is the trivial game: If I do the right thing and you do the right thing, we're both better off. If you do the wrong thing and I do the right thing, I'm better off. If we both do the wrong thing, we're both worse off. So, obviously, we both do the right thing, because we'd be idiots not to. Formally, we say that cooperation is a strictly dominant strategy. There's no dilemma, no paradox; the self-interested strategy is the optimal strategy. (I find it kind of amusing that laissez-faire economics basically amounts to assuming that all real-world games are the trivial game.)
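To make the distinction concrete, here is a small sketch with toy payoff matrices (ordinal values chosen purely for illustration, not measured national utilities) for the Prisoner's Dilemma, the Stag Hunt, and the trivial (harmony) game, checking which of them make "disarm" a strictly dominant strategy.

# Payoffs to the row player in symmetric 2x2 games; actions are 'disarm' (cooperate) and 'arm' (defect).
# The numbers are illustrative ordinal values only.
GAMES = {
    "Prisoner's Dilemma": {('disarm', 'disarm'): 3, ('disarm', 'arm'): 0,
                           ('arm', 'disarm'): 5, ('arm', 'arm'): 1},
    "Stag Hunt":          {('disarm', 'disarm'): 4, ('disarm', 'arm'): 1,
                           ('arm', 'disarm'): 3, ('arm', 'arm'): 2},
    "Trivial (harmony)":  {('disarm', 'disarm'): 4, ('disarm', 'arm'): 3,
                           ('arm', 'disarm'): 2, ('arm', 'arm'): 1},
}

def strictly_dominant(payoff):
    """Return the action that is strictly better no matter what the opponent does, if one exists."""
    actions = ('disarm', 'arm')
    for a in actions:
        other = actions[1 - actions.index(a)]
        if all(payoff[(a, b)] > payoff[(other, b)] for b in actions):
            return a
    return None

for name, payoff in GAMES.items():
    print(f"{name:20s} strictly dominant strategy: {strictly_dominant(payoff)}")
# Prisoner's Dilemma: 'arm'; Stag Hunt: None (two stable equilibria); Trivial: 'disarm'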
That is, I don't think the US would actually benefit from nuking Russia, even if we could do so without retaliation. Likewise, I don't think Russia would actually benefit from nuking the US. One of the things we've discovered—the hardest way possible—through human history is that working together is often better for everyone than fighting. Russia could nuke NATO, and thereby destroy all of their largest trading partners, or they could continue trading with us. Even if they are despicable psychopaths who think nothing of committing mass murder (Putin might be, but surely there are people under his command who aren't?), it's simply not in Russia's best interest to nuke the US and Europe. Likewise, it is not in our best interest to nuke them.
Nuclear war is a strange game: The only winning move is not to play.
So I say, let's stop playing. Yes, let's unilaterally disarm, the thing that so many policy analysts are terrified of because they're so convinced we're in a Prisoner's Dilemma or a Stag Hunt. "What's to stop them from destroying us, if we make it impossible for us to destroy them!?" I dunno, maybe basic human decency, or failing that, rationality?
Several other countries have already done this—South Africa unilaterally disarmed, and nobody nuked them. Japan refused to build nuclear weapons in the first place—and I think it says something that they're the only people to ever have them used against them.
Our conventional military is plenty large enough to defend us against all realistic threats, and could even be repurposed to defend against nuclear threats as well, by a method I call credible targeted conventional response. Instead of building ever-larger nuclear arsenals to threaten devastation in the world's most terrifying penis-measuring contest, you deploy covert operatives (perhaps Navy SEALS in submarines, or double agents, or these days even stealth drones) around the world, with the standing order that if they have reason to believe a country initiated a nuclear attack, they will stop at nothing to hunt down and kill the specific people responsible for that attack. Not the country they came from; not the city they live in; those specific people. If a leader is enough of a psychopath to be willing to kill 300 million people in another country, he's probably enough of a psychopath to be willing to lose 150 million people in his own country. He likely has a secret underground bunker that would allow him to survive, at least if humanity as a whole does. So you should be threatening the one thing he does care about—himself. You make sure he knows that if he pushes that button, you'll find that bunker, drop in from helicopters, and shoot him in the face.
The "targeted conventional response" should be clear by now—you use non-nuclear means to respond, and you target the particular leaders responsible—but let me say a bit more about the "credible" part. The threat of mutually-assured destruction is actually not a credible one. It's not what we call in game theory a subgame perfect Nash equilibrium. If you know that Russia has launched 1500 ICBMs to destroy every city in America, you actually have no reason at all to retaliate with your own 1500 ICBMs, and the most important reason imaginable not to. Your people are dead either way; you can't save them. You lose. The only question now is whether you risk taking the rest of humanity down with you. If you have even the most basic human decency, you will not push that button. You will not "retaliate" in useless vengeance that could wipe out human civilization. Thus, your threat is a bluff—it is not credible.
But if your response is targeted and conventional, it suddenly becomes credible. It's exactly reversed; you now have every reason to retaliate, and no reason not to. Your covert operation teams aren't being asked to destroy humanity; they're being tasked with finding and executing the greatest mass murderer in history. They don't have some horrific moral dilemma to resolve; they have the opportunity to become the world's greatest heroes. Indeed, they'd very likely have the whole world (or what's left of it) on their side; even the population of the attacking country would rise up in revolt and the double agents could use the revolt as cover. Now you have no reason to even hesitate; your threat is completely credible. The only question is whether you can actually pull it off, and if we committed the full resources of the United States military to preparing for this possibility, I see no reason to doubt that we could. If a US President can be assassinated by a lone maniac (and yes, that is actually what happened), then the world's finest covert operations teams can assassinate whatever leader pushed that button.
This is a policy that works both unilaterally and multilaterally. We could even assemble an international coalition—perhaps make the UN "peacekeepers" put their money where their mouth is and train the finest special operatives in the history of the world tasked with actually keeping the peace.
Let's not wait for someone else to save humanity from destruction. Let's be the first.
Is America uniquely… mean?
I read this article yesterday which I found both very resonant and very disturbing: At least among First World countries, the United States really does seem uniquely, for lack of a better word, mean.
The formal psychological terminology is social dominance orientation; the political science term is authoritarianism. In economics, we notice the difference due to its effect on income inequality. But all of these concepts are capturing part of a deeper underlying reality that in the age of Trump I am finding increasingly hard to deny. The best predictor of support for Trump is authoritarianism.
Of course I've already talked about our enormous military budget; but then Tennessee had to make their official state rifle a 50-caliber weapon capable of destroying light tanks. There is something especially dominant, aggressive, and violent about American culture.
We are certainly not unique in the world as a whole—actually I think the amount of social dominance orientation, authoritarianism, and inequality in the US is fairly similar to the world average. We are unique in our gun ownership, but our military spending proportional to GDP is not particularly high by world standards—we're just an extremely rich country. But in all these respects we are a unique outlier among First World countries; in many ways we resemble a rich authoritarian petrostate like Qatar rather than a European social democracy like France or the UK. (At least we're not Saudi Arabia?)
More than other First World cultures, Americans believe in hierarchy; they believe that someone should be on top and other people should be on the bottom. More than that, they believe that people "like us" should be on top and people "not like us" should be on the bottom, however that is defined—often in terms of race or religion, but not necessarily.
Indeed, one of the things I find most baffling about this is that it is often more important to people that others be held down than that they themselves be lifted up. This is the only way I can make sense of the fact that people who have watched their wages be drained into the pockets of billionaires for a generation can think that the most important things to do right now are block out illegal immigrants and deport Muslims.
It seems to be that people become convinced that their own status, whatever it may be, is deserved: If they are rich, it is obviously because they are so brilliant and hard-working (something Trump clearly believes about himself, being a textbook example of Narcissistic Personality Disorder); if they are poor, it is obviously because they are so incompetent and lazy. Thus, being lifted up doesn't make sense; why would you give me things I don't deserve?
But then when they see people who are different from them, they know automatically that those people must be by definition inferior, as all who are Not of Our Tribe are by definition inferior. And therefore, any of them who are rich gained their position through corruption or injustice, and all of them who are poor deserve their fate for being so inferior. Thus, it is most vital to ensure that these Not of Our Tribe are held down from reaching high positions they so obviously do not deserve.
I'm fairly sure that most of this happens at a very deep unconscious level; it calls upon ancient evolutionary instincts to love our own tribe, to serve the alpha male, to fear and hate those of other tribes. These instincts may well have served us 200,000 years ago (then again, they may just have been the best our brains could manage at the time); but they are becoming a dangerous liability today.
As E.O. Wilson put it: "The real problem of humanity is the following: we have paleolithic emotions; medieval institutions; and god-like technology."
Yet this cannot be a complete explanation, for there is variation in these attitudes. A purely instinctual theory should say that all human cultures have this to an essentially equal degree; but I started this post by pointing out that the United States appears to have a particularly large amount relative to Europe.
So, there must be something in the cultures or institutions of different nations that makes them either enhance or suppress this instinctual tribalism. There must be something that Europe is doing right, the US is doing wrong, and Saudi Arabia is doing very, very wrong.
Well, the obvious one that sticks out at me is religion. It seems fairly obvious to me that Sweden is less religious than the US, which is less religious than Saudi Arabia.
Data does back me up on this. Religiosity isn't easy to measure, but we have methods of doing so. If we ask people in various countries if religion is very important in their lives, the percentage of people who say yes gives us an indication of how religious that country is.
In Saudi Arabia, 93% say yes. In the United States, 65% say yes. In Sweden, only 17% say yes.
Religiosity tends to be highest in the poorest countries, but the US is an outlier, far too rich for our religion (or too religious for our wealth).
Religiosity also tends to be highest in countries with high inequality—this time, the US fits right in.
The link between religion and inequality is quite clear. It's harder to say which way the causation runs. Perhaps high inequality makes people cling more to religion as a comfort, and getting rid of religion would only mean taking that comfort away. Or, perhaps religion actually makes people believe more in social dominance, and thus is part of what keeps that high inequality in place. It could also be a feedback loop, in which higher inequality leads to higher religiosity which leads to higher inequality.
That said, I think we actually have some evidence that causality runs from religion to inequality, rather than the other way around. The secularization of France took place around the same time as the French Revolution that overthrew the existing economic system and replaced it with one that had substantially less inequality. Iran's government became substantially more based on religion in the latter half of the 20th century, and their inequality soared thereafter.
Above all, Donald Trump dominates the evangelical vote, which makes absolutely no sense if religion is a comfort against inequality—but perfect sense if religion solidifies the tendency of people to think in terms of hierarchy and authoritarianism.
This also makes sense in terms of the content of religion, especially Abrahamic religion; read the Bible and the Qur'an, and you will see that their primary goal seems to be to convince you that some people, namely people who believe in this book, are just better than other people, and we should be in charge because God says so. (And you wouldn't try to argue with God, would you?) They really make no particular effort to convince you that God actually exists; they spend all their argumentative effort on what God wants you to do and who God wants you to put in charge—and for some strange reason it always seems to be the same guys who are writing down "God's words" in the book! What a coincidence!
If religion is indeed the problem, or a large part of the problem, what can we do about it? That's the most difficult part. We've been making absolutely conclusive rational arguments against religion since literally 300 years before Jesus was even born (there has never been a time in human history in which it was rational for an educated person to believe in Christianity or Islam, for the religions did not come into existence until well after the arguments to refute them were well-known!), and the empirical evidence against theism has only gotten stronger ever since; so that clearly isn't enough.
I think what we really need to do at this point is confront the moral monopoly that religion has asserted for itself. The "Moral Majority" was neither, but its name still sort of makes sense to us because we so strongly associate being moral with being religious. We use terms like "Christian" and "generous" almost interchangeably. And whenever you get into a debate about religion, shortly after you have thoroughly demolished any shred of empirical credibility religion still had left, you can basically guarantee that the response will be: "But without God, how can you know right from wrong?"
What is perhaps most baffling about this concept of morality so commonplace in our culture is that not only is the command of a higher authority that rewards and punishes you not the highest level of moral development—it is literally the lowest. Of the six stages of moral thinking Kohlberg documented in children, the reward and punishment orientation exemplified by the Bible and the Qur'an is the very first. I think many of these people really truly haven't gotten past level 1, which is why when you start trying to explain how you base your moral judgments on universal principles of justice and consequences (level 6) they don't seem to have any idea what you're talking about.
Perhaps this is a task for our education system (philosophy classes in middle school?), perhaps we need something more drastic than that, or perhaps it is enough that we keep speaking about it in public. But somehow we need to break up the monopoly that religion has on moral concepts, so that people no longer feel ashamed to say that something is morally wrong without being able to cite a particular passage from a particular book from the Iron Age. Perhaps once we can finally make people realize that morality does not depend on religion, we can finally free them from the grip of religion—and therefore from the grip of authoritarianism and social dominance.
If this is right, then the reason America is so mean is that we are so Christian—and people need to realize that this is not a paradoxical statement.
Will robots take our jobs?
I briefly discussed this topic before, but I thought it deserved a little more depth. Also, the SF author in me really likes writing this sort of post where I get to speculate about futures that are utopian, dystopian, or (most likely) somewhere in between.
The fear is quite widespread, but how realistic is it? Will robots in fact take all our jobs?
Most economists do not think so. Robert Solow famously quipped, "You can see the computer age everywhere but in the productivity statistics." (It never quite seemed to occur to him that this might be a flaw in the way we measure productivity statistics.)
By the usual measure of labor productivity, robots do not appear to have had a large impact. Indeed, their impact appears to have been smaller than almost any other major technological innovation.
Using BLS data (which was formatted badly and thus a pain to clean, by the way—albeit not as bad as the World Bank data I used on my master's thesis, which was awful), I made this graph of the growth rate of labor productivity as usually measured:
The fluctuations are really jagged due to measurement errors, so I also made an annually smoothed version:
Based on this standard measure, productivity has grown more or less steadily during my lifetime, fluctuating with the business cycle around a value of about 3.5% per year (3.4 log points). If anything, the growth rate seems to be slowing down; in recent years it's been around 1.5% (1.5 lp).
This was clearly the time during which robots became ubiquitous—autonomous robots did not emerge until the 1970s and 1980s, and robots became widespread in factories in the 1980s. Then there's the fact that computing power has been doubling every 1.5 years during this period, which is an annual growth rate of 59% (46 lp). So why hasn't productivity grown at anywhere near that rate?
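Before answering, it may help to spell out the rate conversions being compared; this is a minimal sketch using the doubling time and productivity figures quoted above.

import math

def annual_growth(doubling_time_years):
    """Convert a doubling time into an annual growth rate, in percent and in log points."""
    factor = 2 ** (1 / doubling_time_years)
    return (factor - 1) * 100, math.log(factor) * 100

pct, lp = annual_growth(1.5)
print(f"Doubling every 1.5 years: {pct:.0f}% per year ({lp:.0f} lp)")   # ~59% (~46 lp)
print(f"3.5% per year in log points: {math.log(1.035) * 100:.1f} lp")   # ~3.4 lp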
I think the main problem is that we're measuring productivity all wrong. We measure it in terms of money instead of in terms of services. Yes, we try to correct for inflation; but we fail to account for the fact that computers have allowed us to perform literally billions of services every day that could not have been performed without them. You can't adjust that away by plugging into the CPI or the GDP deflator.
Think about it: Your computer provides you the services of all the following:
A decent typesetter and layout artist
A truly spectacular computer (remember, that used to be a profession!)
A highly skilled statistician (who takes no initiative—you must tell her what calculations to do)
A painting studio
A photographer
A video camera operator
A professional orchestra of the highest quality
A decent audio recording studio
Thousands of books, articles, and textbooks
Ideal seats at every sports stadium in the world
And that's not even counting things like social media and video games that can't even be readily compared to services that were provided before computers.
If you added up the value of all of those jobs, the amount you would have had to pay in order to hire all those people to do all those things for you before computers existed, your computer easily provides you with at least $1 million in professional services every year. Put another way, your computer has taken jobs that would have provided $1 million in wages. You do the work of a hundred people with the help of your computer.
This isn't counted in our productivity statistics precisely because it's so efficient. If we still had to pay that much for all these services, it would be included in our GDP and then our GDP per worker would properly reflect all this work that is getting done. But then… whom would we be paying? And how would we have enough to pay that? Capitalism isn't actually set up to handle this sort of dramatic increase in productivity—no system is, really—and thus the market price for work has almost no real relation to the productive capacity of the technology that makes that work possible.
Instead it has to do with scarcity of work—if you are the only one in the world who can do something (e.g. write Harry Potter books), you can make an awful lot of money doing that thing, while something that is far more important but can be done by almost anyone (e.g. feed babies) will pay nothing or next to nothing. At best we could say it has to do with marginal productivity, but marginal in the sense of your additional contribution over and above what everyone else could already do—not in the sense of the value actually provided by the work that you are doing. Anyone who thinks that markets automatically reward hard work or "pay you what you're worth" clearly does not understand how markets function in the real world.
So, let's ask again: Will robots take our jobs?
Well, they've already taken many jobs. There isn't even a clear high-skill/low-skill dichotomy here; robots are just as likely to make pharmacists obsolete as they are truck drivers, just as likely to replace surgeons as they are cashiers.
Labor force participation is declining, though slowly:
Yet I think this also underestimates the effect of technology. As David Graeber points out, most of the new jobs we've been creating seem to be, for lack of a better term, bullshit jobs—jobs that really don't seem like they need to be done, other than to provide people with something to do so that we can justify paying them salaries.
As he puts it:
Again, an objective measure is hard to find, but one easy way to get a sense is to ask: what would happen were this entire class of people to simply disappear? Say what you like about nurses, garbage collectors, or mechanics, it's obvious that were they to vanish in a puff of smoke, the results would be immediate and catastrophic. A world without teachers or dock-workers would soon be in trouble, and even one without science fiction writers or ska musicians would clearly be a lesser place. It's not entirely clear how humanity would suffer were all private equity CEOs, lobbyists, PR researchers, actuaries, telemarketers, bailiffs or legal consultants to similarly vanish. (Many suspect it might markedly improve.)
The paragon of all bullshit jobs is sales. Sales is a job that simply should not exist. If something is worth buying, you should be able to present it to the market and people should choose to buy it. If there are many choices for a given product, maybe we could have some sort of independent product rating agencies that decide which ones are the best. But sales means trying to convince people to buy your product—you have an absolutely overwhelming conflict of interest that makes your statements to customers so utterly unreliable that they are literally not even information anymore. The vast majority of advertising, marketing, and sales is thus, in a fundamental sense, literally noise. Sales contributes absolutely nothing to our economy, and because we spend so much effort on it and advertising occupies so much of our time and attention, it actually takes a great deal away. But sales is one of our most steadily growing labor sectors; once we figure out how to make things without people, we employ the people in trying to convince customers to buy the new things we've made. Sales is also absolutely miserable for many of the people who do it, as I know from personal experience in two different sales jobs that I had to quit before the end of the first week.
Fortunately we have not yet reached the point where sales is the fastest growing labor sector. Currently the fastest-growing jobs fall into three categories: Medicine, green energy, and of course computers—but actually mostly medicine. Yet even this is unlikely to last; one of the easiest ways to reduce medical costs would be to replace more and more medical staff with automated systems. A nursing robot may not be quite as pleasant as a real professional nurse—but if by switching to robots the hospital can save several million dollars a year, they're quite likely to do so.
Certain tasks are harder to automate than others—particularly anything requiring creativity and originality is very hard to replace, which is why I believe that in the 2050s or so there will be a Revenge of the Humanities Majors as all the supposedly so stable and forward-thinking STEM jobs disappear and the only jobs that are left are for artists, authors, musicians, game designers and graphic designers. (Also, by that point, very likely holographic designers, VR game designers, and perhaps even neurostim artists.) Being good at math won't mean anything anymore—frankly it probably shouldn't right now. No human being, not even great mathematical savants, is anywhere near as good at arithmetic as a pocket calculator. There will still be a place for scientists and mathematicians, but it will be the creative aspects of science and math that persist—design of experiments, development of new theories, mathematical intuition to develop new concepts. The grunt work of cleaning data and churning through statistical models will be fully automated.
Most economists appear to believe that we will continue to find tasks for human beings to perform, and this improved productivity will simply raise our overall standard of living. As any ECON 101 textbook will tell you, "scarcity is a fundamental fact of the universe, because human needs are unlimited and resources are finite."
In fact, neither of those claims is true. Human needs are not unlimited; indeed, on Maslow's hierarchy of needs First World countries have essentially reached the point where we could provide the entire population with the whole pyramid, guaranteed, all the time—if we were willing and able to fundamentally reform our economic system.
Resources are not even finite; what constitutes a "resource" depends on technology, as does how accessible or available any given source of resources will be. When we were hunter-gatherers, our only resources were the plants and animals around us. Agriculture turned seeds and arable land into a vital resource. Whale oil used to be a major scarce resource, until we found ways to use petroleum. Petroleum in turn is becoming increasingly irrelevant (and cheap) as solar and wind power mature. Soon the waters of the oceans themselves will be our power source as we refine the deuterium for fusion. Eventually we'll find we need something for interstellar travel that we used to throw away as garbage (perhaps it will in fact be dilithium!) I suppose that if the universe is finite or if FTL is impossible, we will be bound by what is available in the cosmic horizon… but even that is not finite, as the universe continues to expand! If the universe is open (as it probably is) and one day we can harness the dark energy that seethes through the ever-expanding vacuum, our total energy consumption can grow without bound just as the universe does. Perhaps we could even stave off the heat death of the universe this way—we after all have billions of years to figure out how.
If scarcity were indeed this fundamental law that we could rely on, then more jobs would always continue to emerge, producing whatever is next on the list of needs ordered by marginal utility. Life would always get better, but there would always be more work to be done. But in fact, we are basically already at the point where our needs are satiated; we continue to try to make more not because there isn't enough stuff, but because nobody will let us have it unless we do enough work to convince them that we deserve it.
We could continue on this route, making more and more bullshit jobs, pretending that this is work that needs to be done so that we don't have to adjust our moral framework, which requires that people be constantly working for money in order to deserve to live. It's quite likely, in fact, that we will, at least for the foreseeable future. In this future, robots will not take our jobs, because we'll make up excuses to create more.
But that future is more on the dystopian end, in my opinion; there is another way, a better way, the world could be. As technology makes it ever easier to produce as much wealth as we need, we could learn to share that wealth. As robots take our jobs, we could get rid of the idea of jobs as something people must have in order to live. We could build a new economic system: One where we don't ask ourselves whether children deserve to eat before we feed them, where we don't expect adults to spend most of their waking hours pushing papers around in order to justify letting them have homes, where we don't require students to take out loans they'll need decades to repay before we teach them history and calculus.
This second vision is admittedly utopian, and perhaps in the worst way—perhaps there's simply no way to make human beings actually live like this. Perhaps our brains, evolved for the all-too-real scarcity of the ancient savannah, simply are not plastic enough to live without that scarcity, and so create imaginary scarcity by whatever means they can. It is indeed hard to believe that we can make so fundamental a shift. But for a Homo erectus in 500,000 BP, the idea that our descendants would one day turn rocks into thinking machines that travel to other worlds would be pretty hard to believe too.
Will robots take our jobs? Let's hope so.
Living Reviews in Relativity, December 2020, 23:1
Kilonovae
Brian D. Metzger
The coalescences of double neutron star (NS–NS) and black hole (BH)–NS binaries are prime sources of gravitational waves (GW) for Advanced LIGO/Virgo and future ground-based detectors. Neutron-rich matter released from such events undergoes rapid neutron capture (r-process) nucleosynthesis as it decompresses into space, enriching our universe with rare heavy elements like gold and platinum. Radioactive decay of these unstable nuclei powers a rapidly evolving, approximately isotropic thermal transient known as a "kilonova", which probes the physical conditions during the merger and its aftermath. Here I review the history and physics of kilonovae, leading to the current paradigm of day-timescale emission at optical wavelengths from lanthanide-free components of the ejecta, followed by week-long emission with a spectral peak in the near-infrared (NIR). These theoretical predictions, as compiled in the original version of this review, were largely confirmed by the transient optical/NIR counterpart of the first NS–NS merger detected by LIGO/Virgo, GW170817. Using a simple light curve model to illustrate the essential physical processes and their application to GW170817, I then introduce important variations on the standard picture which may be observable in future mergers. These include \(\sim \)hour-long UV precursor emission, powered by the decay of free neutrons in the outermost ejecta layers or shock-heating of the ejecta by a delayed ultra-relativistic outflow; and enhancement of the luminosity from a long-lived central engine, such as an accreting BH or millisecond magnetar. Joint GW and kilonova observations of GW170817 and future events provide a new avenue to constrain the astrophysical origin of the r-process elements and the equation of state of dense nuclear matter.
Keywords: Gravitational waves · Neutron stars · Nucleosynthesis · Black holes · Radiative transfer
This article is a revised version of https://doi.org/10.1007/s41114-017-0006-z.
1 Electromagnetic counterparts of binary neutron star mergers
The discovery in 2015 of gravitational waves (GW) from the inspiral and coalescence of binary black holes (BH) by the Laser Interferometer Gravitational-Wave Observatory (LIGO) and later with its partner observatory, Virgo, has opened an entirely new window on the cosmos (Abbott et al. 2016c). This modest, but rapidly-expanding, sample of BH–BH merger events (Abbott et al. 2019b) is already being used to place constraints on the formation channels of compact binary systems (e.g., Abbott et al. 2016), as well as fundamental tests of general relativity in the previously inaccessible strong-field regime (e.g., Miller 2016; Abbott et al. 2019c). We are fortunate witnesses to the birth of a new field of research: Gravitational-Wave Astronomy.
On August 17, 2017, near the end of their second observing run, Advanced LIGO/Virgo detected its first merger of a double NS binary (Abbott et al. 2017b). This event, like other GW detections, was dubbed GW170817 based on its date of discovery. The individual masses of the binary components measured by the GW signal, \(M_{1}, M_{2} \approx 1.16\)–\(1.60\,M_{\odot }\) (under the assumption of low NS spin) and the precisely-measured chirp mass
$$\begin{aligned} {\mathcal {M}}_c \equiv \frac{(M_1M_2)^{3/5}}{(M_1+M_2)^{1/5}} \underset{\mathrm{GW170817}}{\simeq }1.188\,M_{\odot }, \end{aligned}$$
are fully consistent with being drawn from the known population of Galactic binary NSs (Abbott et al. 2019a), particularly those with GW merger times less than the age of the universe (Zhao and Lattimer 2018). The lack of evidence for tidal interaction between the merging objects in the inspiral GW signal allowed for stringent upper limits to be placed on the tidal deformability and radii of NSs (Abbott et al. 2018; De et al. 2018), properties closely related to the pressure of neutron-rich matter above nuclear saturation density (see Horowitz et al. 2019 for a recent review).
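As an illustrative cross-check (a sketch added here, not part of the original analysis), the chirp mass formula above can be evaluated directly for component masses within the quoted low-spin range; both a near-equal-mass pair and a pair near the edges of the range give values close to the measured \({\mathcal {M}}_c \simeq 1.19\,M_{\odot }\).

def chirp_mass(m1, m2):
    """Chirp mass (m1*m2)**(3/5) / (m1+m2)**(1/5), in the same units as the inputs (solar masses here)."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

print(f"{chirp_mass(1.36, 1.36):.3f} Msun")  # ~1.18 Msun for a near-equal-mass binary
print(f"{chirp_mass(1.60, 1.17):.3f} Msun")  # ~1.19 Msun for an unequal pair within the quoted range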
Beyond information encoded in the GW strain data, the discovery of electromagnetic (EM) emission accompanying the GW chirp has the potential to reveal a much richer picture of these events (Bloom et al. 2009). By identifying the host galaxies of the merging systems, and their precise locations within or around their hosts, we obtain valuable information on the binary formation channels, age of the stellar population, evidence for dynamical formation channels in dense stellar systems, or displacement due to supernova (SN) birth kicks, in a manner similar to techniques long applied to gamma-ray bursts (GRBs) and supernovae (e.g., Fruchter et al. 2006; Fong and Berger 2013). From the host galaxy redshifts, we obtain independent distance estimates to the sources, thus reducing degeneracies in the GW parameter estimation, especially of the binary inclination with respect to the line of sight (e.g., Cantiello et al. 2018; Chen et al. 2019). Redshift measurements also enable the use of GW events as standard sirens to measure the Hubble constant, or more generally, probe the cosmic expansion history (Schutz 1986; Holz and Hughes 2005; Nissanke et al. 2013). Remarkably, all of these opportunities, and many others to be discussed later in this review, became reality with GW170817.
Except in rare circumstances, the mergers of stellar-mass BH–BH binaries are not expected to produce luminous EM emission due to the absence of significant matter surrounding these systems at the time of coalescence. Fruitful synthesis of the GW and EM skies will therefore most likely first be achieved from NS–NS and BH–NS mergers. Given the discovery of a single NS–NS merger in the O1 and O2 observing runs, LIGO/Virgo infer a volumetric rate of 110–\(3840\mathrm {\ Gpc^{-3}\ yr^{-1}}\) (Abbott et al. 2019b), corresponding to an expected NS–NS rate of \(\approx 6\)–\(120\mathrm {\ yr}^{-1}\) once Advanced LIGO/Virgo reach design sensitivity by the early 2020s (Abbott et al. 2017d). The O1/O2 upper limit on the NS–BH merger rate is \(\lesssim 600\mathrm {\ Gpc^{-3}\ yr^{-1}}\) (Abbott et al. 2019b). This range is broadly consistent with theoretical expectations on the rates (e.g., population synthesis models of field binaries; e.g., Dominik et al. 2015) as well as those derived empirically from the known population of Galactic double NS systems (Phinney 1991; Kalogera et al. 2004; Kim et al. 2015; see Abadie et al. 2010 for a review of rate predictions).
Among the greatest challenge to the joint EM/GW endeavor are the large uncertainties in the measured sky positions of the GW sources, which are primarily determined by triangulating the GW arrival times with an array of interferometers. When a detection is made by just the two North American LIGO facilities, sky error regions are very large (e.g., \(\approx 850\mathrm {\ deg}^{2}\) for the BH–BH merger GW150914, though later improved to \(\approx 250 \mathrm {\ deg}^{2}\); Abbott et al. 2016a, b). However, with the addition of the Virgo detector in Italy, and eventually KAGRA in Japan (Somiya 2012) and LIGO-India, these can be reduced to more manageable values of \(\sim 10\) – \(100\mathrm {\ deg^{2}}\) or less (Fairhurst 2011; Nissanke et al. 2013; Rodriguez et al. 2014). Indeed, information from Virgo proved crucial in reducing the sky error region of GW170817 to 30 \(\hbox {deg}^{2}\) (Abbott et al. 2017c), greatly facilitating the discovery of its optical counterpart. Nevertheless, even in the best of cases, the GW-measured sky areas still greatly exceed that covered in a single pointing by most radio, optical, and X-ray telescopes, especially those with the required sensitivity to detect the potentially dim EM counterparts of NS–NS and BH–NS mergers (Metzger and Berger 2012).
Fig. 1 Summary of the electromagnetic counterparts of NS–NS and BH–NS mergers and their dependence on the viewing angle with respect to the axis of the GRB jet. The kilonova, in contrast to the GRB and its afterglow, is relatively isotropic and thus represents the most promising counterpart for the majority of GW-detected mergers. (Image reproduced with permission from Metzger and Berger 2012, copyright by AAS.)
1.1 Gamma-ray bursts
Figure 1 summarizes the EM counterparts of NS–NS and BH–NS mergers as a function of the observer viewing angle relative to the binary axis. Multiple lines of evidence, both observational (e.g., Fong et al. 2014) and theoretical2 (Eichler et al. 1989; Narayan et al. 1992), support an association between NS–NS/BH–NS mergers and the "short duration" class of GRBs. The latter are those bursts with durations in the gamma-ray band less than about 2 s (Nakar 2007; Berger 2014), in contrast to the longer lasting bursts of duration \(\gtrsim 2\) s which are instead associated with the core collapse of very massive stars. For a typical LIGO/Virgo source distance of \(\lesssim 200\) Mpc, any gamma-ray transient with properties matching those of the well-characterized cosmological population of GRBs would easily be detected by the Fermi, Swift or Integral satellites within their fields of view, or even with the less sensitive but all-sky Interplanetary Network of gamma-ray telescopes (Hurley 2013).
The tightly collimated, relativistic outflows responsible for short GRBs are commonly believed to be powered by the accretion of a massive remnant disk onto the compact BH or NS remnant following the merger (e.g., Narayan et al. 1992). This is expected to occur within seconds of the merger, making their temporal association with the termination of the GW chirp unambiguous (the gamma-ray sky is otherwise quiet). Once a GRB is detected, its associated afterglow can in many cases be identified by promptly slewing a sensitive X-ray telescope to the location of the burst. This exercise is now routine with Swift, but may become less so in the future without a suitable replacement mission. Although gamma-ray detectors themselves typically provide poor sky localization, the higher angular resolution of the X-ray telescope allows for the discovery of the optical or radio afterglow; this in turn provides an even more precise position, which can help to identify the host galaxy.
A prompt burst of gamma-ray emission was detected from GW170817 by the Fermi and Integral satellites with a delay of \(\approx 1.7\) s from the end of the inspiral (Abbott et al. 2017d; Goldstein et al. 2017; Savchenko et al. 2017). However, rapid localization of the event was not possible, for two reasons: (1) the merger was outside the field-of-view of the Swift BAT gamma-ray detector and therefore a relatively precise sky position was not immediately available; (2) even if the X-ray telescope had been rapidly slewed to the source, the X-ray afterglow might not have been detectable at such early times. Deep upper limits on the X-ray luminosity of GW170817 at \(t = 2.3\) days (Margutti et al. 2017) reveal a much dimmer event than expected for a cosmological GRB placed at the same distance at a similar epoch. As we discuss below, the delayed rise and low luminosity of the synchrotron afterglow were the result of our viewing angle being far outside the core of the ultra-relativistic GRB jet, unlike the nearly on-axis orientation from which cosmological GRBs are typically viewed (e.g., Ryan et al. 2015).
Although short GRBs are probably the cleanest EM counterparts, their measured rate within the Advanced LIGO detection volume, based on observations prior to GW170817, was expected to be low, probably once per year to once per decade or less (Metzger and Berger 2012). The measured volumetric rate of short GRBs in the local universe of \({\mathcal {R}}_{\mathrm{SGRB}} \sim 5\) \(\hbox {Gpc}^{-3}\) \(\hbox {yr}^{-1}\) (Wanderman and Piran 2015) can be reconciled with the much higher NS–NS merger rate \({\mathcal {R}}_{\mathrm{BNS}} \sim 10^{3}\) \(\hbox {Gpc}^{-3}\) \(\hbox {yr}^{-1}\) (Abbott et al. 2019b) if the gamma-ray emission is beamed into a narrow solid angle \(\ll 4\pi \) by the bulk relativistic motion of the GRB jet (Fong et al. 2015; Troja et al. 2016). Given a typical GRB jet opening angle of \(\theta _{\mathrm{jet}} \approx 0.1\) radians, the resulting beaming fraction of \(f_{\mathrm{b}}^{-1} = \theta _\mathrm{jet}^{2}/2 \sim 1/200 \sim {\mathcal {R}}_{\mathrm{SGRB}}/{\mathcal {R}}_\mathrm{BNS}\) is consistent with most or all short GRBs arising from NS–NS mergers (though uncertainties remain large enough that a contribution from other channels, such as BH–NS mergers, cannot yet be excluded).
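To make the preceding rate comparison concrete, the short script below (a minimal sketch; the numerical values are simply the representative ones quoted above, and the variable names are illustrative) evaluates the geometric beaming fraction of a double-sided jet and the fraction of NS–NS mergers required to supply the beaming-corrected short-GRB rate.

```python
import numpy as np

# Representative values quoted in the text
theta_jet = 0.1      # jet half-opening angle [rad]
rate_sgrb = 5.0      # observed local short-GRB rate [Gpc^-3 yr^-1]
rate_bns = 1.0e3     # NS-NS merger rate [Gpc^-3 yr^-1]

# Beaming fraction of a double-sided jet: 2 * 2*pi*(1 - cos(theta)) / (4*pi)
beaming_fraction = 1.0 - np.cos(theta_jet)   # ~ theta_jet**2 / 2 for small angles

# Beaming-corrected short-GRB rate and implied fraction of mergers producing GRBs
rate_sgrb_corrected = rate_sgrb / beaming_fraction
fraction_of_mergers = rate_sgrb_corrected / rate_bns

print(f"beaming fraction ~ {beaming_fraction:.4f} (theta^2/2 = {theta_jet**2/2:.4f})")
print(f"beaming-corrected SGRB rate ~ {rate_sgrb_corrected:.0f} Gpc^-3 yr^-1")
print(f"implied fraction of NS-NS mergers producing SGRBs ~ {fraction_of_mergers:.1f}")
```

With these round numbers the implied fraction is of order unity, which is the sense in which "most or all" short GRBs can be accommodated by NS–NS mergers.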
While the discovery of gamma-rays from the first GW-detected NS–NS merger came as a surprise to most, its properties were also highly unusual. The isotropic luminosity of the burst was \(\sim 10^{3}\) times smaller than the known population of cosmological short GRBs on which prior rate estimates had been based. A similar burst would have been challenging to detect at even twice the distance of GW170817. Several different theoretical models were proposed to explain the origin of the gamma-ray signal from GW170817 (e.g., Granot et al. 2017; Fraija et al. 2019; Gottlieb et al. 2018; Beloborodov et al. 2018), but in all cases its low luminosity and unusual spectral properties are related to our large viewing angle \(\theta _{\mathrm{obs}} \approx 0.4\) relative to the binary angular momentum (Finstad et al. 2018; Abbott et al. 2019a) being \(\gtrsim 4\) times larger than the opening angle of the jet core, \(\theta _{\mathrm{jet}} \lesssim 0.1\) (e.g., Fong et al. 2017; Mooley et al. 2018; Beniamini et al. 2019). Given that the majority of future GW-detections will occur at greater distances and from even larger \(\theta _{\mathrm{obs}}\) (for which the prompt jet emission is likely to be less luminous) than for GW170817, it remains true that only a fraction of detected NS–NS mergers are likely to be accompanied by detectable gamma-rays (e.g., Mandhai et al. 2018; Howell et al. 2019). Nevertheless, given the unique information encoded in the prompt gamma-ray emission when available (e.g., on the properties of the earliest ejecta and the timing of jet formation relative to binary coalescence), every effort should be made to guarantee the presence of wide-field gamma-ray telescopes in space throughout the next decades.
For the majority of GW-detected mergers viewed at \(\theta _{\mathrm{obs}} \gg \theta _{\mathrm{jet}}\), the most luminous GRB emission will be beamed away from our line of sight by the bulk relativistic motion of the emitting jet material. However, as the relativistic jet slows down by colliding with and shocking the interstellar medium, even off-axis viewers enter the causal emission region of the synchrotron afterglow (e.g., Totani and Panaitescu 2002). Such a delayed off-axis non-thermal afterglow emission was observed from GW170817 at X-ray (e.g., Troja et al. 2017; Margutti et al. 2017; Haggard et al. 2017), radio (e.g., Hallinan et al. 2017; Alexander et al. 2017), and optical frequencies (after the kilonova faded; e.g., Lyman et al. 2018). The afterglow light curve, which showed a gradual rise to a peak at \(t \sim 200\) days followed by extremely rapid fading, reveals details on the angular structure of the jet (e.g., Perna et al. 2003; Lamb and Kobayashi 2017; Lazzati et al. 2018; Xie and MacFadyen 2019). The latter structure may have been imprinted by the relativistic GRB jet as it pierced through the merger ejecta (Nakar and Piran 2017; Lazzati et al. 2017). We return later to the possible signature of shock heating of the ejecta by the jet on the early-time kilonova emission.
1.2 Kilonovae
In addition to the beamed GRB and its afterglow, the mergers of NS–NS and BH–NS binaries are also accompanied by a more isotropic counterpart, commonly known as a 'kilonova' (or, less commonly, 'macronova'). Kilonovae are thermal supernova-like transients lasting days to weeks, which are powered by the radioactive decay of heavy neutron-rich elements synthesized in the expanding merger ejecta (Li and Paczyński 1998). They provide both a robust EM counterpart to the GW chirp, which is expected to accompany a fraction of BH–NS mergers and essentially all NS–NS mergers, as well as a direct probe of the unknown astrophysical origin of the heaviest elements (Metzger et al. 2010b).
This article provides a pedagogical review of kilonovae, including a brief historical background and recent developments in this rapidly evolving field (Sect. 2). Section 3 describes the basic physical ingredients, including the key input from numerical relativity simulations of the merger and its aftermath. For pedagogical reasons, the discussion is organized around a simple toy model for the kilonova light curve (Sect. 4), which synthesizes most of the key ingredients in a common and easy-to-interpret framework. My goal is to make the basic results accessible to anyone with the ability to solve a set of coupled ordinary differential equations.
I begin by introducing the simplest model of lanthanide-rich ejecta heated by radioactivity, which produces a week-long near-infrared (NIR) transient ('red kilonova'; Sect. 4.1.1) and which is preceded in at least some cases by \(\sim \) day-long UV/optical-wavelength emission ('blue kilonova') arising from lanthanide-free components of the ejecta (Sect. 4.1.2). Section 5 describes observations of the thermal UVOIR kilonova emission observed following GW170817 and its theoretical interpretation within this largely pre-existing theoretical framework. I also summarize the lessons GW170817 has provided, about the origin of r-process elements, the equation of state of neutron stars, and the final fate of the merger remnant.
Section 6 explores several variations on this canonical picture, some of which were not possible to test in the case of GW170817 and some of which are ruled out in that event but could be relevant to future mergers, e.g., with different ingoing binary parameters. These include early (\(\sim \) hours-long) 'precursor' emission at ultra-violet wavelengths (UV), which is powered either by the decay of free neutrons in the outermost layers of the ejecta (Sect. 6.1.1) or prompt shock heating of the ejecta by a relativistic outflow such as the GRB jet (Sect. 6.1.2). In Sect. 6.2 we consider the impact on the kilonova signal of energy input from a long-lived accreting BH or magnetar central engine. In Sect. 7, I assess the prospects for discovering kilonovae following short GRBs and for future GW-triggers of NS–NS/BH–NS mergers. I use this opportunity to make predictions for how the diversity of kilonova signals correlates with the GW-measured properties of the binary, which will become testable once EM observations are routinely conducted in coincidence with a large sample of GW-detected merger events. I conclude with some personal thoughts in Sect. 8.
Although I have attempted to make this review self-contained, the material covered is necessarily limited in scope and reflects my own opinions and biases. I refer the reader to a number of other excellent recent reviews, which cover some of the topics discussed briefly here in greater detail: Nakar (2007), Faber and Rasio (2012), Berger (2014), Rosswog (2015), Fan and Hendry (2015), Baiotti and Rezzolla (2017), Baiotti (2019), including other short reviews dedicated exclusively to kilonovae (Tanaka 2016; Yu 2019). I encourage the reader to consult Fernández and Metzger (2016) for a review of the broader range of EM counterparts of NS–NS/BH–NS mergers. A few complementary reviews have appeared since GW170817 overviewing the interpretation of this event (e.g., Miller 2017; Bloom and Sigurdsson 2017; Metzger 2017b; Siegel 2019) or its constraints on the nuclear EOS (e.g., Raithel 2019). I also encourage the reader to consult the initial version of this review, written the year prior to the discovery of GW170817 (Metzger 2017a). Table 1 summarizes key events in the historical development of kilonovae.
Timeline of major developments in kilonova research
Lattimer and Schramm: r-process from BH–NS mergers
Hulse and Taylor: discovery of binary pulsar system PSR 1913+16
Symbalisty and Schramm: r-process from NS–NS mergers
Eichler et al.: GRBs from NS–NS mergers
Davies et al.: first numerical simulation of mass ejection from NS–NS mergers
Li and Paczyński: first kilonova model, with parametrized heating
Freiburghaus et al.: NS–NS dynamical ejecta \(\Rightarrow \) r-process abundances
Kulkarni: kilonova powered by free neutron-decay ("macronova"), central engine
Perley et al.: optical kilonova candidate following GRB 080503
Metzger et al., Roberts et al., Goriely et al.: "kilonova" powered by r-process heating
Barnes and Kasen, Tanaka and Hotokezaka: La/Ac opacities \(\Rightarrow \) NIR spectral peak
Tanvir et al., Berger et al.: NIR kilonova candidate following GRB 130603B
Yu, Zhang, Gao: magnetar-boosted kilonova ("merger-nova")
Metzger and Fernández: blue kilonova from post-merger remnant disk winds
Coulter et al.: kilonova detected from NS–NS merger following GW-trigger
2.1 NS mergers as sources of the r-process
Burbidge et al. (1957) and Cameron (1957) realized that approximately half of the elements heavier than iron are synthesized via the capture of neutrons onto lighter seed nuclei (like iron) in a dense neutron-rich environment in which the timescale for neutron capture is shorter than the \(\beta \)-decay timescale. This 'rapid neutron-capture process', or r-process, occurs along a nuclear path which resides far on the neutron-rich side of the valley of stable isotopes. Despite these works occurring more than 60 years ago, the astrophysical environments giving rise to the r-process remain an enduring mystery, among the greatest in nuclear astrophysics (e.g., Qian and Wasserburg 2007; Arnould et al. 2007; Thielemann et al. 2011; Cowan et al. 2019, for contemporary reviews).
Among the most critical quantities which characterize the viability of a potential r-process event is the electron fraction of the ejecta,
$$\begin{aligned} Y_e \equiv \frac{n_p}{n_n + n_p}, \end{aligned}$$
where \(n_p\) and \(n_n\) are the densities of protons and neutrons, respectively. Ordinary stellar material usually has more protons than neutrons (\(Y_e \ge 0.5\)), while matter with a neutron excess (\(Y_e < 0.5\)) is typically required for the r-process.
Core collapse supernovae have long been considered promising r-process sources. This is in part due to their short delays following star formation, which allow even the earliest generations of metal-poor stars in our Galaxy to be polluted with r-process elements (e.g., Mathews et al. 1992; Sneden et al. 2008). Throughout the 1990s, the high entropy3 neutrino-heated winds from proto-neutron stars (Duncan et al. 1986; Qian and Woosley 1996), which emerge seconds after a successful explosion, were considered the most likely r-process site4 within the core collapse environment (Woosley et al. 1994; Takahashi et al. 1994). However, more detailed calculations of the wind properties (Thompson et al. 2001; Arcones et al. 2007; Fischer et al. 2010; Hüdepohl et al. 2010; Roberts et al. 2010; Martínez-Pinedo et al. 2012; Roberts et al. 2012) later showed that the requisite combination of neutron-rich conditions (\(Y_e \lesssim 0.5\)) and high entropy was unlikely to be realized. Possible exceptions include the rare case of a very massive proto-NS (Cardall and Fuller 1997), or in the presence of non-standard physics such as an eV-mass sterile neutrino (Tamborra et al. 2012; Wu et al. 2014).
Another exception to this canonical picture may occur if the NS is formed rotating rapidly and is endowed with an ultra-strong ordered magnetic field \(B \gtrsim 10^{14}\)–\(10^{15}\) G, similar to those which characterize Galactic magnetars. Magneto-centrifugal acceleration within the wind of such a "millisecond magnetar" to relativistic velocities can act to lower its electron fraction or reduce the number of seed nuclei formed through rapid expansion (Thompson et al. 2004). This could occur during the early supernova explosion phase (Winteler et al. 2012; Nishimura et al. 2015) as well as during the subsequent cooling phase of the proto-NS over several seconds (Thompson 2003; Metzger et al. 2007; Vlasov et al. 2014). Despite the promise of such models, numerical simulations of MHD supernovae are still in a preliminary state, especially when it comes to the accurate neutrino transport needed to determine the ejecta \(Y_e\) and the high-resolution three-dimensional grid needed to capture the growth of non-axisymmetric (magnetic kink or sausage mode) instabilities. The latter can disrupt and slow the expansion rate of jet-like structures (Mösta et al. 2014), rendering the creation of the heaviest (third abundance-peak) r-process elements challenging to obtain (Halevi and Mösta 2018).
The observed rate of hyper-energetic supernovae (the only bona fide MHD-powered explosions largely agreed upon to exist in nature) is only \(\sim 1/1000\) of the total core collapse supernova rate (e.g., Podsiadlowski et al. 2004). Therefore, a higher r-process yield per event \(\gtrsim 10^{-2}\,M_{\odot }\) is required to explain a significant fraction of the Galactic abundances through this channel. However, in scenarios where the r-process takes place in a prompt jet during the supernova explosion, it is inevitable that the r-process material will mix into the outer layers of the supernova ejecta along with the shock-synthesized \(^{56}\)Ni, the latter being responsible for powering the supernova's optical luminosity. As we shall discuss later in the context of kilonovae, such a large abundance of lanthanide elements mixed into the outer ejecta layers would substantially redden the observed colors of the supernova light in a way incompatible with observed hyper-energetic (MHD) supernovae (Siegel et al. 2019).5
Contemporaneously with the discovery of the first binary pulsar (Hulse and Taylor 1975), Lattimer and Schramm (1974, 1976) proposed that the merger of compact star binaries—in particular the collision of BH–NS systems—could give rise to the r-process by the decompression of highly neutron-rich ejecta (Meyer 1989). Symbalisty and Schramm (1982) were the first to suggest NS–NS mergers as the site of the r-process. Blinnikov et al. (1984) and Paczyński (1986) first suggested a connection between NS–NS mergers and GRBs. Eichler et al. (1989) presented a more detailed model for how this environment could give rise to a GRB, albeit one which differs significantly from the current view. Davies et al. (1994) performed the first numerical simulations of mass ejection from merging neutron stars, finding that \(\sim 2\%\) of the binary mass was unbound during the process. Freiburghaus et al. (1999) presented the first explicit calculations showing that the ejecta properties extracted from a hydrodynamical simulation of a NS–NS merger (Rosswog et al. 1999) indeed produces abundance patterns in basic accord with the solar system r-process.
The neutrino-driven wind following a supernova explosion accelerates matter from the proto-NS surface relatively gradually, in which case neutrino absorption reactions on nucleons (particularly \(\nu _e + n \rightarrow p + e^{-}\)) have time to appreciably raise the electron fraction of the wind from its initial low value near the NS surface. By contrast, in NS–NS/BH–NS mergers the different geometry and more dynamical nature of the system allows at least a fraction of the unbound ejecta (tidal tails and disk winds) to avoid strong neutrino irradiation, maintaining a much lower value of \(Y_e \lesssim 0.2\) (Sect. 3.1).
When averaged over the age of the Galaxy, the required production rate of heavy r-process nuclei of mass number \(A > 140\) is \(\sim 2\times 10^{-7} \,M_{\odot }\) \(\hbox {yr}^{-1}\) (Qian 2000). Given a rate \(R_{\mathrm{NS-NS}}\) of detection of NS–NS mergers by Advanced LIGO/Virgo at design sensitivity (defined here as a horizon distance of 200 Mpc for NS–NS mergers), the required r-process mass yield per merger event to explain the entire Galactic abundances is very approximately given by (e.g., Metzger et al. 2009; Vangioni et al. 2016)
$$\begin{aligned} \langle M_{r} \rangle \sim 10^{-2}\,M_{\odot }\left( \frac{R_\mathrm{NS-NS}}{10\,\mathrm{yr^{-1}}}\right) ^{-1}. \end{aligned}$$
As described in Sect. 3.1, numerical simulations of NS–NS/BH–NS mergers find a range of total ejecta masses of \(\langle M_{r} \rangle \sim 10^{-3}{-}10^{-1} \,M_{\odot }\), while \(\langle M_{r} \rangle \approx 0.03-0.06\,M_{\odot }\) was inferred from the kilonova of GW170817 (Sect. 5). Although large uncertainties remain, it is safe to conclude that NS mergers are likely major, if not the dominant, sources of the r-process in the universe.
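The scaling in the equation above can be reproduced with a rough back-of-the-envelope estimate. The sketch below is illustrative only: the assumed number density of Milky-Way-equivalent galaxies, \(n_{\mathrm{MWEG}} \approx 0.01\mathrm {\ Mpc^{-3}}\), is a commonly adopted figure that does not appear in the text, and orientation averaging of the detector sensitivity is ignored.

```python
import numpy as np

# Quantities quoted in the text
Mdot_galaxy = 2e-7     # Galactic production rate of A > 140 r-process nuclei [Msun/yr]
R_detect = 10.0        # assumed NS-NS detection rate at design sensitivity [1/yr]
D_horizon = 200.0      # NS-NS horizon distance [Mpc]

# Assumption (not from the text): number density of Milky-Way-equivalent galaxies
n_MWEG = 0.01          # [Mpc^-3]

V_horizon = 4.0 / 3.0 * np.pi * D_horizon**3     # detection volume [Mpc^3]
rate_per_volume = R_detect / V_horizon            # volumetric merger rate [Mpc^-3 yr^-1]
rate_per_galaxy = rate_per_volume / n_MWEG        # merger rate per MW-like galaxy [1/yr]

M_r_required = Mdot_galaxy / rate_per_galaxy      # required r-process yield per merger [Msun]

print(f"merger rate per Milky Way analog ~ {rate_per_galaxy:.1e} per yr")
print(f"required yield per merger       ~ {M_r_required:.1e} Msun")
```

The result, a few \(\times 10^{-3}\,M_{\odot }\), agrees with the \(\sim 10^{-2}\,M_{\odot }\) scaling of the equation above to within the order-unity factors neglected here.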
Several additional lines of evidence support 'high yield' r-process events consistent with NS–NS/BH–NS mergers being common in our Galaxy, both now and in its early history. These include the detection of \(^{244}\)Pu on the ocean floor at abundances roughly two orders of magnitude lower than expected if the currently active r-process source were frequent, low-yield events like ordinary core collapse supernovae (Wallner et al. 2015; Hotokezaka et al. 2015). Similar arguments show that actinide abundances in the primordial solar system require a rare source for the heaviest r-process elements (Bartos and Marka 2019; Côté et al. 2019b). A fraction of the stars in the ultra-faint dwarf galaxy Reticulum II are highly enriched in r-process elements, indicating that this galaxy was polluted early in its history by a single rare r-process event (Ji et al. 2016). Similarly, the large spread seen in the r-process abundances in many Globular Clusters (e.g., Roederer 2011) may also indicate a rare source acting at low metallicity. Reasonable variations in the ejecta properties of NS mergers could in principle explain the observed variability in the actinide abundances of metal-poor stars (Holmbeck et al. 2019).
Nevertheless, NS mergers are challenged by some observations, which may point to alternative r-process sites. Given the low escape speeds of dwarf galaxies of \(\sim 10\mathrm {\ km\ s}^{-1}\), even moderate velocity kicks to binaries from the process of NS formation would remove the binaries from the galaxy prior to merger (thus preventing the merger ejecta from polluting the next generation of stars). Although a sizable fraction of the Galactic NS–NS binaries have low proper motions and are indeed inferred to have experienced very low supernova kicks (Beniamini et al. 2016), even relatively modest spatial offsets of the merger events from the core of the galaxies make it challenging to retain enough r-process enhanced gas (Bonetti et al. 2019). Another challenge to NS mergers is posed by the short delay times \(\lesssim \)10–100 Myr between star formation and merger which are required to explain stellar populations in the low-metallicity Galactic halo (Safarzadeh et al. 2019) and Globular Clusters (Zevin et al. 2019). Depending on the efficiency of compositional mixing between the merger ejecta and the ISM of the Galaxy, realistic delay time distributions for NS–NS/NS–BH mergers within a consistent picture of structure formation via hierarchical growth (Kelley et al. 2010) were argued to produce chemical evolution histories consistent with observations of the abundances of r-process elements in metal-poor halo stars as a function of their iron abundance (Shen et al. 2015; Ramirez-Ruiz et al. 2015; van de Voort et al. 2015). However, Safarzadeh et al. (2019) come to a different conclusion, while van de Voort et al. (2019) found that r-process production by rare supernovae better fits the abundances of metal-poor stars than NS mergers. Because the delay time distribution of NS mergers at late times after star formation is expected to be similar to that of Type Ia supernovae (which generate most of the iron in the Galaxy), NS mergers are also challenged to explain the observed decrease of [Eu/Fe] with increasing [Fe] at later times in the chemical evolution history of the Galaxy (Hotokezaka et al. 2018; Côté et al. 2019a).
Together, these deficiencies may hint at the existence of an additional high-yield r-process channel beyond NS–NS mergers which can operate at low metallicities with short delay times. The collapse of massive, rotating stars ("collapsars"), which powers the central engines of long-duration gamma-ray bursts (directly observed to occur in dwarf low-metallicity galaxies), is among the most promising contenders (Pruet et al. 2004; Fryer et al. 2006; Siegel et al. 2019). As we shall discuss, the physical conditions of hyper-accreting disks and their outflows following NS mergers, as probed by their kilonova emission, offer an indirect probe of the broadly similar physical conditions which characterize the outflows generated in collapsars. Evidence for the r-process in the outflows of NS merger accretion flows would thus indirectly support an r-process occurring in collapsars as well.
2.2 A brief history of kilonovae
Li and Paczyński (1998) first argued that the radioactive ejecta from a NS–NS or BH–NS merger provides a source for powering thermal transient emission, in analogy with supernovae. They developed a toy model for the light curve, similar to the one we describe in Sect. 4. Given the low mass and high velocity of the ejecta from a NS–NS/BH–NS merger, they concluded that the ejecta will become transparent to its own radiation quickly, producing emission which peaks on a timescale of about one day, much faster than for normal supernovae (which instead peak on a timescale of weeks or longer).
Lacking a model for the nucleosynthesis (the word "r-process" does not appear in their work), Li and Paczyński (1998) parametrized the radioactive heating rate of the ejecta at time t after the merger according to the following prescription,
$$\begin{aligned} {\dot{Q}}_{\mathrm{LP}} = \frac{f M c^{2}}{t}, \end{aligned}$$
where M is the ejecta mass and f was a free parameter. The \(\propto 1/t\) time dependence was motivated by the total heating rate which results from the sum of the radioactive decay heating rate \({\dot{Q}}_i \propto \exp (-t/\tau _i)\) of a large number of isotopes i, under the assumption that their half-lives \(\tau _i\) are distributed equally per logarithmic time (at any time t, the heating rate is dominated by isotopes with half-lives \(\tau _i \sim t\)). Contemporary models, which process the thermodynamic history of the expanding ejecta based on numerical simulations of the merger through a detailed nuclear reaction network, show that the heating rate at late times actually approaches a steeper power law decay \(\propto t^{-\alpha }\), with \(\alpha \approx 1.1\)–1.4 (Metzger et al. 2010b; Roberts et al. 2011; Korobkin et al. 2012), similar to that found for the decay rate of terrestrial radioactive waste (Way and Wigner 1948). Metzger et al. (2010b) and Hotokezaka et al. (2017) describe how this power-law decay can be understood from the basic physics of \(\beta \)-decay and the properties of nuclei on the neutron-rich valley of stability.
Li and Paczyński (1998) left the normalization of the heating rate f, to which the peak luminosity of the kilonova is linearly proportional, as a free parameter, considering a range of models with different values of \(f = 10^{-5}\)–\(10^{-3}\). More recent calculations, described below, show that such high heating rates are extremely optimistic, leading to predicted peak luminosities \(\gtrsim 10^{43}\)–\(10^{44}\mathrm {\ erg\ s}^{-1}\) (Li and Paczyński 1998, their Fig. 2) which exceed even those of supernovae. These over-predictions propagated into other works over the following decade; for instance, Rosswog (2005) predicted that BH–NS mergers are accompanied by transients of luminosity \(\gtrsim 10^{44}\mathrm {\ erg\ s}^{-1}\), which would rival the most luminous transients ever discovered. This unclear theoretical situation led to observational searches for kilonovae following short GRBs which remained inconclusive, since they were forced to parametrize their results (usually non-detections) in terms of the allowed range of f (Bloom et al. 2006; Kocevski et al. 2010) instead of in terms of more meaningful constraints on the ejecta properties such as its mass.
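To illustrate how strongly the choice of heating prescription affects the predicted luminosity scale, the snippet below compares the Li–Paczyński \(\propto 1/t\) prescription (Eq. 4), evaluated at the low end of their considered range of f, with a representative r-process power law. This is a minimal sketch: the \(t^{-1.3}\) normalization of \(10^{10}\mathrm {\ erg\ s^{-1}\ g^{-1}}\) at one day is an assumed, order-of-magnitude value rather than a fit quoted in the text.

```python
import numpy as np

MSUN = 1.989e33      # solar mass [g]
C = 2.998e10         # speed of light [cm/s]
DAY = 86400.0        # [s]
M_EJ = 1e-2 * MSUN   # fiducial ejecta mass [g]

def qdot_li_paczynski(t, f=1e-5):
    """Parametrized heating Qdot = f*M*c^2/t [erg/s]; f = 1e-5 is the low end
    of the range considered by Li & Paczynski (1998)."""
    return f * M_EJ * C**2 / t

def qdot_rprocess(t, q0=1e10, alpha=1.3):
    """Power-law r-process heating, M_ej * q0 * (t/day)^-alpha [erg/s].
    q0 and alpha are illustrative, order-of-magnitude values."""
    return M_EJ * q0 * (t / DAY)**(-alpha)

for t_day in (0.1, 1.0, 7.0):
    t = t_day * DAY
    print(f"t = {t_day:4.1f} d: Li-Paczynski ~ {qdot_li_paczynski(t):.1e} erg/s, "
          f"r-process ~ {qdot_rprocess(t):.1e} erg/s")
```

Even with the smallest f they considered, the parametrized prescription exceeds the representative r-process heating by roughly an order of magnitude around the timescales of peak, consistent with the over-predicted peak luminosities discussed above.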
Metzger et al. (2010b) were the first to determine the true luminosity scale of the radioactively-powered transients of NS mergers by calculating light curve models using radioactive heating rates derived self-consistently from a nuclear reaction network calculation of the r-process, based on the dynamical ejecta trajectories of Freiburghaus et al. (1999).6 Based on their derived peak luminosities being approximately one thousand times brighter than a nova, Metzger et al. (2010b) introduced the term 'kilonova' to describe the EM counterparts of NS mergers powered by the decay of r-process nuclei. They showed that the radioactive heating rate was relatively insensitive to the precise electron fraction of the ejecta and to the assumed nuclear mass model, and they were the first to consider how efficiently the decay products thermalize their energy in the ejecta. Their work highlighted the critical four-way connection, now taken for granted, between kilonovae, short GRBs, GWs from NS–NS/BH–NS mergers, and the astrophysical origin of the r-process.
Prior to Metzger et al. (2010b), it was commonly believed that kilonovae were in fact brighter, or much brighter, than supernovae (Li and Paczyński 1998; Rosswog 2005). One exception is Kulkarni (2005), who assumed that the radioactive power was supplied by the decay of \(^{56}\)Ni or free neutrons. However, \(^{56}\)Ni cannot be produced in the neutron-rich ejecta of a NS merger, while all initially free neutrons are captured into seed nuclei during the r-process (except perhaps in the very outermost, fastest expanding layers of the ejecta; see Sect. 6.1.1). Kulkarni introduced the term "macronovae" for such Nickel/neutron-powered events. Despite its inauspicious physical motivation and limited use in the literature until well after the term kilonova was already in use, many authors continue to use the macronova terminology, in part because this name is not tied to a particular luminosity scale (which may change as our physical models evolve).
Once the radioactive heating rate was determined, attention turned to the yet thornier issue of the ejecta opacity. The latter is crucial since it determines at what time and wavelength the ejecta becomes transparent and the light curve peaks. Given the general lack of experimental data or theoretical models for the opacity of heavy r-process elements, especially in the first and second ionization states of greatest relevance, Metzger et al. (2010b), Roberts et al. (2011) adopted grey opacities appropriate to the Fe-rich ejecta in Type Ia supernovae. However, Kasen et al. (2013) showed that the opacity of r-process elements can be significantly higher than that of Fe, due to the high density of line transitions associated with the complex atomic structures of some lanthanide and actinide elements (Sect. 3.2). This finding was subsequently confirmed by Tanaka and Hotokezaka (2013). As compared to the earlier predictions (Metzger et al. 2010b), these higher opacities push the bolometric light curve to peak later in time (\(\sim 1\) week instead of a \(\sim 1\) day timescale), and at a lower luminosity (Barnes and Kasen 2013). More importantly, the enormous optical wavelength opacity caused by line blanketing moved the spectral peak from optical/UV frequencies to the near-infrared (NIR). Later that year, Tanvir et al. (2013) and Berger et al. (2013) presented evidence for excess infrared emission following the short GRB 130603B on a timescale of about one week using the Hubble Space Telescope.
However, not all of the merger ejecta will necessarily contain lanthanide elements with such a high optical opacity (e.g., Metzger et al. 2008a). While ejecta with a relatively high electron fraction \(0.25 \lesssim Y_{e} \lesssim 0.4\) has enough neutrons to synthesize radioactive r-process nuclei, the ratio of neutrons to lighter seed nuclei is insufficient to reach the relatively heavy lanthanide elements of atomic mass number \(A \gtrsim 140\); such lanthanide-free material (if not blocked by high-opacity, low-\(Y_e\) material further out) produces emission with more rapid evolution and bluer colors, similar to those predicted by the original models (Metzger et al. 2010b; Roberts et al. 2011). Metzger and Fernández (2014) dubbed the emission from high-\(Y_e\), lanthanide-poor ejecta a "blue" kilonova, in contrast to "red" kilonova emission originating from low-\(Y_e\), lanthanide-rich portions of the ejecta (Barnes and Kasen 2013). They further argued that both "blue" and "red" kilonova emission, arising from different components of the merger ejecta, could be seen in the same merger event, at least for some observing geometries. As will be discussed in Sect. 5, such hybrid "blue" + "red" kilonova models came to play an important role in the interpretation of GW170817. Figure 2 is a timeline of theoretical predictions for the peak luminosities, timescales, and spectral peak of the kilonova emission.
Schematic timeline of the development of kilonova models in the space of peak luminosity and peak timescale. The wavelength of the predicted spectral peak is indicated by color as marked in the figure. Shown for comparison are the approximate properties of the "red" and "blue" kilonova emission components observed following GW170817 (e.g., Cowperthwaite et al. 2017; Villar et al. 2017)
3 Basic ingredients
The physics of kilonovae can be understood from basic considerations. Consider the merger ejecta of total mass M, which is expanding at a constant mean velocity v, such that its mean radius is \(R \approx vt\) after a time t following the merger. Perhaps surprisingly, it is not unreasonable to assume spherical symmetry to first order because the ejecta will have a chance to expand laterally over the many orders of magnitude in scale from the merging binary (\(R_{0} \sim 10^{6}\) cm) to the much larger radius (\(R_{\mathrm{peak}} \sim 10^{15}\) cm) at which the kilonova emission peaks (Roberts et al. 2011; Grossman et al. 2014; Rosswog et al. 2014).
The ejecta is extremely hot immediately after being ejected from the vicinity of the merger (Sect. 3.1). This thermal energy cannot, however, initially escape as radiation because of its high optical depth at early times,
$$\begin{aligned} \tau \simeq \rho \kappa R = \frac{3M\kappa }{4\pi R^{2}} \simeq 70\left( \frac{M}{10^{-2}\,M_{\odot }}\right) \left( \frac{\kappa }{{\mathrm{1\,cm}}^{2}{\mathrm{\,g}}^{-1}}\right) \left( \frac{v}{0.1\,\mathrm{{c}}}\right) ^{-2}\left( \frac{t}{\mathrm{1\,day}}\right) ^{-2}, \end{aligned}$$
and the correspondingly long photon diffusion timescale through the ejecta,
$$\begin{aligned} t_{\mathrm{diff}} \simeq \frac{R}{c}\tau = \frac{3M\kappa }{4\pi c R} = \frac{3M\kappa }{4\pi c vt}, \end{aligned}$$
where \(\rho = 3M/(4\pi R^{3})\) is the mean density and \(\kappa \) is the opacity (cross section per unit mass). As the ejecta expands, the diffusion time decreases with time \(t_{\mathrm{diff}} \propto t^{-1}\), until eventually radiation can escape on the expansion timescale, as occurs once \(t_{\mathrm{diff}} = t\) (Arnett 1982). This condition determines the characteristic timescale at which the light curve peaks,
$$\begin{aligned} t_{\mathrm{peak}} \equiv \left( \frac{3 M \kappa }{4\pi \beta v c}\right) ^{1/2} \approx 1.6\,\mathrm{d}\,\,\left( \frac{M}{10^{-2}\,M_{\odot }}\right) ^{1/2}\left( \frac{v}{0.1\,\mathrm{c}}\right) ^{-1/2}\left( \frac{\kappa }{1\,\mathrm{cm^{2}\,g^{-1}}}\right) ^{1/2}, \end{aligned}$$
where the constant \(\beta \approx 3\) depends on the precise density profile of the ejecta (see Sect. 4). For values of the opacity \(\kappa \sim 0.5\)–\(30\mathrm {\ cm^{2}\ g^{-1}}\), which characterize the range from lanthanide-free to lanthanide-rich matter (Tanaka et al. 2019; Table 4), respectively, Eq. (7) predicts characteristic durations \(\sim 1\) day – 1 week.
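The following short script (a minimal sketch with illustrative variable names; constants are rounded) evaluates the optical depth of Eq. (5) and the peak timescale of Eq. (7) for the fiducial parameters and for the opacity range just quoted.

```python
import numpy as np

MSUN = 1.989e33     # solar mass [g]
C = 2.998e10        # speed of light [cm/s]
DAY = 86400.0       # [s]

def optical_depth(t_day, M=1e-2 * MSUN, v=0.1 * C, kappa=1.0):
    """Eq. (5): tau ~ 3*M*kappa / (4*pi*R^2) with R = v*t."""
    R = v * t_day * DAY
    return 3.0 * M * kappa / (4.0 * np.pi * R**2)

def t_peak_days(M=1e-2 * MSUN, v=0.1 * C, kappa=1.0, beta=3.0):
    """Eq. (7): light-curve peak time in days."""
    return np.sqrt(3.0 * M * kappa / (4.0 * np.pi * beta * v * C)) / DAY

print(f"tau(t = 1 d, kappa = 1 cm^2/g)        ~ {optical_depth(1.0):.0f}")
print(f"t_peak(kappa = 1 cm^2/g, fiducial)    ~ {t_peak_days(kappa=1.0):.1f} d")
print(f"t_peak(kappa = 0.5 cm^2/g, lan.-free) ~ {t_peak_days(kappa=0.5):.1f} d")
print(f"t_peak(kappa = 30 cm^2/g, lan.-rich)  ~ {t_peak_days(kappa=30.0):.1f} d")
```

The fiducial case reproduces the \(\approx 1.6\) day estimate of Eq. (7) to within rounding, and the opacity range maps onto the \(\sim 1\) day to \(\sim 1\) week spread quoted above.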
Luminosity versus time after the merger of a range of heating sources relevant to powering kilonovae. Left: sources of radioactive heating include the decay of \(\sim 10^{-2}\,M_{\odot }\) of r-process nuclei, as first modeled in a parametrized way by Li and Paczyński (1998) (Eq. 4, grey band) and then by Metzger et al. (2010b) using a full reaction network, plotted here using the analytic fit of Korobkin et al. (2012) (Eq. 22, black line) and including the thermalization efficiency of Barnes et al. (2016) (Eq. 25). The outermost layers of the ejecta may contain \(\sim 10^{-4}\,M_{\odot }\) free neutrons (red line), which due to their comparatively long half-life can enhance the kilonova emission during the first few hours if present in the outermost layers of the ejecta due to premature freeze-out of the r-process (Sect. 6.1.1). Right: heating sources from a central engine. These include fall-back accretion (blue lines), shown separately for NS–NS (solid line) and BH–NS (dashed line) mergers, based on results by Rosswog (2007) for an assumed jet efficiency \(\epsilon _j = 0.1\) (Eq. 34). Also shown is the rotational energy input from the magnetic dipole spin-down of a stable magnetar remnant with an initial spin period of \(P = 0.7\) ms and dipole field strengths of \(B = 10^{15}\) G (brown lines) and \(10^{16}\) G (orange lines). Dashed lines show the total spin-down luminosity \(L_{\mathrm{sd}}\) (Eq. 36), while solid lines show the effective luminosity available to power optical/X-ray emission once accounting for suppression of the efficiency of thermalization due to the high scattering opacity of \(e^{\pm }\) pairs in the nebula (Eq. 39; Metzger and Piro, 2014). The isotropic luminosity of the temporally-extended X-ray emission observed following the short GRB 080503 is shown with a green line (for an assumed source redshift \(z = 0.3\); Perley et al. 2009)
The temperature of matter freshly ejected at the radius of the merger \(R_0 \lesssim 10^{6}\) cm generally exceeds \(10^{9}{-}10^{10}\) K. However, absent a source of persistent heating, this matter will cool through adiabatic expansion, losing all but a fraction \(\sim R_0/R_{\mathrm{peak}} \sim 10^{-9}\) of its initial thermal energy before reaching the radius \(R_{\mathrm{peak}} = vt_{\mathrm{peak}}\) at which the ejecta becomes transparent (Eq. 7). Such adiabatic losses would leave the ejecta so cold as to be effectively invisible at large distances.
In a realistic situation, the ejecta will be continuously heated, by a combination of sources, at a total rate \({\dot{Q}}(t)\) (Fig. 3). At a minimum, this heating includes contributions from radioactivity due to r-process nuclei and, possibly at early times, free neutrons. More speculatively, the ejecta can also be heated from within by a central engine, such as the emergence of the GRB jet or over longer timescales by the rotational energy of a magnetar remnant. In most cases of relevance, \({\dot{Q}}(t)\) is constant or decreasing with time less steeply than \(\propto t^{-2}\). The peak luminosity of the observed emission then equals the heating rate at the peak time (\(t = t_\mathrm{peak}\)), i.e.,
$$\begin{aligned} L_{\mathrm{peak}} \approx {\dot{Q}}(t_{\mathrm{peak}}), \end{aligned}$$
a result commonly known as "Arnett's Law" (Arnett 1982).
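Combining Eqs. (7) and (8) gives a quick estimate of the peak luminosity. The sketch below is illustrative only: the power-law heating normalization and the 50% thermalization efficiency are assumed round numbers, not values taken from the text.

```python
import numpy as np

MSUN, C, DAY = 1.989e33, 2.998e10, 86400.0

def t_peak_s(M, v, kappa, beta=3.0):
    """Eq. (7): light-curve peak time [s]."""
    return np.sqrt(3.0 * M * kappa / (4.0 * np.pi * beta * v * C))

def peak_luminosity(M, v, kappa, q0=1e10, alpha=1.3, eff=0.5):
    """Arnett's law (Eq. 8): L_peak ~ Qdot(t_peak), with an assumed power-law
    specific heating rate q0*(t/day)^-alpha [erg/s/g] and thermalization
    efficiency 'eff' (both illustrative round numbers)."""
    tp = t_peak_s(M, v, kappa)
    L = eff * M * q0 * (tp / DAY)**(-alpha)
    return L, tp / DAY

for label, kappa in (("lanthanide-free (kappa=0.5)", 0.5),
                     ("lanthanide-rich (kappa=30) ", 30.0)):
    L, tp = peak_luminosity(M=1e-2 * MSUN, v=0.1 * C, kappa=kappa)
    print(f"{label}: t_peak ~ {tp:4.1f} d, L_peak ~ {L:.1e} erg/s")
```

The resulting \(\sim 10^{40}{-}10^{41}\mathrm {\ erg\ s^{-1}}\) scale, roughly a thousand times the luminosity of a nova, is the origin of the name 'kilonova' discussed above.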
Equations (7) and (8) make clear that, in order to quantify the key observables of kilonovae (peak timescale, luminosity, and effective temperature), we must understand three key ingredients:
The mass and velocity of the ejecta from NS–NS/BH–NS mergers, which comprises several distinct components.
The opacity \(\kappa \) of expanding neutron-rich matter.
The variety of sources contributing to the ejecta heating \({\dot{Q}}(t)\), particularly on timescales of \(t_{\mathrm{peak}}\), when the ejecta first becomes transparent.
The remainder of this section addresses the first two issues. The range of different heating sources, which can give rise to different types of kilonovae, are covered in Sect. 4.
3.1 Sources of neutron-rich ejecta
Two broad sources of ejecta characterize NS–NS and BH–NS mergers (see Fernández and Metzger 2016; Shibata and Hotokezaka 2019, for recent reviews). First, there is matter ejected on the dynamical timescale of milliseconds, either by tidal forces or due to compression-induced heating at the interface between merging bodies (Sect. 3.1.1). Debris from the merger, which is not immediately unbound or incorporated into the central compact object, can possess enough angular momentum to circularize into an accretion disk around the central remnant. A disk can also be generated by outwards transport of angular momentum and mass during the post-merger evolution of the central NS remnant prior to BH formation. Outflows from this remnant disk, taking place on longer timescales of up to seconds, provide a second important source of ejecta (Sect. 3.1.2) (Table 2).
Sources of ejecta in NS–NS mergers
| Ejecta type | \(M_{\mathrm{ej}}\,(M_{\odot })\) | \(v_{\mathrm{ej}}\,(c)\) | \(Y_e\) | \(M_{\mathrm{ej}}\) decreases with |
| Tidal tails\(^{\hbox {a}}\) | \(10^{-4}{-}10^{-2}\) | \(0.15{-}0.35\) | \(\lesssim 0.2\) | \(q = M_{2}/M_{1} < 1\) |
| Polar shocked | … | … | … | \(M_{\mathrm{tot}}/M_{\mathrm{TOV}}, R_{\mathrm{ns}}\) |
| Magnetar wind | \(10^{-2}\) | \(0.2{-}1\) | … | \(M_{\mathrm{tot}}/M_{\mathrm{TOV}}\) |
| Disk outflows\(^{\hbox {a}}\) | \(10^{-3}{-}0.1\) | \(0.03{-}0.1\) | \(0.1{-}0.4\) | … |
\(^{\hbox {a}}\)Present in NS–BH mergers
In BH–NS mergers, significant mass ejection and disk formation occur only if the BH has a low mass \(M_{\bullet }\) and is rapidly spinning; in such cases, the NS is tidally disrupted during the very final stages of the inspiral instead of being swallowed whole (giving effectively zero mass ejection). Roughly speaking, the condition for the latter is that the tidal radius of the NS, \(R_\mathrm{t} \propto M_{\bullet }^{1/3}\), exceed the radius of the innermost stable circular orbit of the BH, \(R_{\mathrm{isco}} \propto M_{\bullet }\) (see Foucart 2012; Foucart et al. 2018 for a more precise criterion for mass ejection, calibrated to GR merger simulations). For a NS of radius 12 km and mass \(1.4\,M_{\odot }\), this requires a BH of mass \(\lesssim 4(12)\,M_{\odot }\) for a BH Kerr spin parameter of \(\chi _{\mathrm{BH}} = 0.7(0.95)\). For slowly-spinning BHs (as appears to characterize most of LIGO/Virgo's BH–BH systems), the BH mass range giving rise to tidal disruption (and hence a kilonova or GRB) could be very small.
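As a rough consistency check of this scaling argument (and emphatically not the calibrated criterion of Foucart et al. cited above), the sketch below compares a Newtonian tidal radius, \(R_{\mathrm{t}} \approx R_{\mathrm{NS}}(M_{\bullet }/M_{\mathrm{NS}})^{1/3}\) with the order-unity prefactor set to one, against the Kerr ISCO radius for prograde orbits. At the threshold masses quoted in the text the two radii agree to within the order-unity factors absorbed into the calibrated fits.

```python
import numpy as np

G_MSUN_KM = 1.477          # GM/c^2 for one solar mass, in km
R_NS, M_NS = 12.0, 1.4     # NS radius [km] and mass [Msun]

def r_isco_km(m_bh, chi):
    """Prograde Kerr ISCO radius (Bardeen, Press & Teukolsky 1972), in km."""
    z1 = 1.0 + (1.0 - chi**2)**(1.0/3.0) * ((1.0 + chi)**(1.0/3.0) + (1.0 - chi)**(1.0/3.0))
    z2 = np.sqrt(3.0 * chi**2 + z1**2)
    return (3.0 + z2 - np.sqrt((3.0 - z1) * (3.0 + z1 + 2.0 * z2))) * m_bh * G_MSUN_KM

def r_tidal_km(m_bh):
    """Newtonian tidal radius with unit prefactor (an illustrative simplification)."""
    return R_NS * (m_bh / M_NS)**(1.0/3.0)

for m_bh, chi in ((4.0, 0.7), (12.0, 0.95)):
    print(f"M_BH = {m_bh:4.1f} Msun, chi = {chi:.2f}: "
          f"R_isco ~ {r_isco_km(m_bh, chi):5.1f} km, R_tidal ~ {r_tidal_km(m_bh):5.1f} km")
```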
In the case of a NS–NS merger, the ejecta properties depend sensitively on the fate of the massive NS remnant which is created by the coalescence event. The latter in turn depends sensitively on the total mass of the original NS–NS binary, \(M_{\mathrm{tot}}\) (Shibata and Uryū 2000; Shibata and Taniguchi 2006). For \(M_{\mathrm{tot}}\) above a threshold mass of \(M_{\mathrm{crit}} \sim 2.6-3.9M_\odot \) [covering a range of soft and stiff nuclear-theory based equations of state (EOS), respectively], the remnant collapses to a BH essentially immediately, on the dynamical time of milliseconds or less (Hotokezaka et al. 2011; Bauswein et al. 2013a). Bauswein et al. (2013a) present an empirical fitting formula for the value of \(M_{\mathrm{crit}}\) in terms of the maximum mass \(M_{\mathrm{TOV}}\) of a non-rotating NS (the Tolman–Oppenheimer–Volkoff [TOV] mass) and the NS compactness (see also Köppel et al. 2019), which they find is insensitive to the binary mass ratio \(q = M_{2}/M_{1}\) for \(q \gtrsim 0.7\) (however, see Kiuchi et al. 2019).
Mergers that do not undergo prompt collapse (\(M_{\mathrm{tot}} < M_\mathrm{crit}\)) typically result in the formation of a rapidly-spinning NS remnant of mass \(\sim M_{\mathrm{tot}}\) (after subtracting mass lost through neutrino and GW emission and in the dynamical ejecta), which is at least temporarily stable against gravitational collapse to a BH. The maximum stable mass of a NS exceeds its non-rotating value, \(M_{\mathrm{TOV}}\), if the NS is rapidly spinning close to the break-up velocity (Baumgarte et al. 2000; Özel et al. 2010; Kaplan et al. 2014).
A massive NS remnant, which is supported exclusively by its differential rotation, is known as a hypermassive NS (HMNS). A somewhat less massive NS, which can be supported even by its solid body rotation (i.e., after differential rotation has been removed), is known as a supramassive NS (SMNS). A HMNS is unlikely to survive for more than a few tens to hundreds of milliseconds after the merger before collapsing to a BH, as internal hydro-magnetic torques and gravitational wave radiation remove its differential rotation and it accretes additional mass (Shibata and Taniguchi 2006; Duez et al. 2006; Siegel et al. 2013). In contrast, SMNS remnants must spin down to the point of collapse through the global loss of angular momentum. The latter must take place through less efficient processes, such as magnetic dipole radiation or GW emission arising from small non-axisymmetric distortions of the NS, and hence such objects can in principle survive for much longer before collapsing. Finally, the merger of a particularly low mass binary, which leaves a remnant mass less than \(M_{\mathrm{TOV}}\), will produce an indefinitely stable remnant (Metzger et al. 2008b; Giacomazzo and Perna 2013), from which a BH can never form, even once its angular momentum has been entirely removed. Such cases are likely very rare.
Remnants of NS–NS mergers
| Remnant | Binary mass range | NS lifetime (\(t_{\mathrm{collapse}}\)) | \(\%\) of \(\hbox {mergers}^\mathrm{a}\) |
| Prompt BH | \(M_{\mathrm{tot}} \gtrsim M_{\mathrm{th}}^\mathrm{b} \sim 1.3{-}1.6M_{\mathrm{TOV}}\) | \(\lesssim 1\) ms | \(\sim 0{-}32\) |
| HMNS | \(\sim 1.2M_{\mathrm{TOV}} \lesssim M_{\mathrm{tot}} \lesssim M_{\mathrm{th}}\) | \(\sim 30{-}300\) ms | … |
| SMNS | \(M_{\mathrm{TOV}} \lesssim M_{\mathrm{tot}} \lesssim 1.2M_{\mathrm{TOV}}\) | \(\gg 300\) ms | \(\sim 18{-}65\) |
| Stable NS | \(M_{\mathrm{tot}} < M_{\mathrm{TOV}}\) | \(\infty \) | \(\lesssim 3\) |
\(^\mathrm{a}\)Percentage of mergers allowed by current EOS constraints on NS radii and \(M_{\mathrm{TOV}}\) (Sect. 5.2) assuming the merging extragalactic NS–NS binary population is identical to the known Galactic NS–NS binaries (from Margalit and Metzger 2019)
\(^\mathrm{b}\)The prompt collapse threshold, \(M_{\mathrm{th}}\), depends on both \(M_{\mathrm{TOV}}\) and the NS compactness/radius (see text)
Left: properties of the merger ejecta which affect the EM emission as a function of the binary chirp mass \({\mathcal {M}}_{\mathrm{c}}\) (Eq. 1), taken here as a proxy for the total binary mass \(M_{\mathrm{tot}}\). Vertical dashed lines delineate the threshold masses for different merger remnants as marked, for an example EOS with \(M_{\mathrm{TOV}} = 2.1\,M_{\odot }\) and radius \(R_{1.6} = 12\) km of a \(1.6\,M_{\odot }\) NS. The top panel shows the ejecta kinetic energy, which we take to be the sum of the initial kinetic energy of the ejecta (estimated using fits to numerical relativity simulations; Coughlin et al. 2018, 2019) and, in the case of stable or SMNSs, the rotational energy which can be extracted from the remnant before forming a BH (Margalit and Metzger 2017). The bottom panel shows the ejecta mass, both dynamical and disk wind ejecta, estimated as in Coughlin et al. (2019), where 50% of the disk mass is assumed to be ejected at \(v = 0.15\) c (e.g., Siegel and Metzger 2017). The finite width of the lines results from a range of binary mass ratio \(q = 0.7\)–1, to which the tidal dynamical ejecta is most sensitive. The ejecta mass line is colored qualitatively according to the dominant color of the kilonova emission, which becomes redder for more massive binaries (with shorter-lived remnants) due to their more neutron-rich ejecta (Metzger and Fernández 2014). Right: distribution of BNS merger chirp masses drawn from a NS population representative of Galactic double NSs (Kiziltan et al. 2013). Dashed vertical curves separate the \({\mathcal {M}}_{\mathrm{c}}\) parameter space based on the possible merger outcomes in each region. The fraction of mergers expected to occur in each region (the integral over the PDF within this region) is stated above the region in red (see also Table 3)
Image reproduced with permission from Margalit and Metzger (2019), copyright by the authors
Table 3 summarizes the four possible outcomes of a NS–NS merger and estimates of their expected rates. The left panel of Fig. 4 from Margalit and Metzger (2019) illustrates these mass divisions in terms of the chirp mass \({\mathcal {M}}_{\mathrm{c}} \simeq 0.435M_{\mathrm{tot}}\) (Eq. 1; under the assumption of an equal mass binary \(M_{1} = M_{2}\)) and taking as an example EOS one which predicts a \(1.6\,M_{\odot }\) NS radius \(R_{1.6} = 12\) km and TOV mass \(M_{\mathrm{TOV}} \approx 2.1\,M_{\odot }\). The latter is consistent with the lower limit of \(M_{\mathrm{TOV}} \gtrsim 2-2.1\,M_{\odot }\) set by the discovery of pulsars with similar masses (Demorest et al. 2010; Antoniadis et al. 2013; Cromartie et al. 2019) and the upper limit \(M_{\mathrm{TOV}} \lesssim 2.16\,M_{\odot }\) supported by joint EM/GW observations of GW170817 (Sect. 5.2).
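For reference, the numerical coefficient 0.435 follows directly from the standard definition of the chirp mass (assumed here to be the definition given in Eq. 1) evaluated for an equal-mass binary:
$$\begin{aligned} {\mathcal {M}}_{\mathrm{c}} = \frac{(M_{1} M_{2})^{3/5}}{(M_{1}+M_{2})^{1/5}} \,\overset{M_{1}=M_{2}=M_{\mathrm{tot}}/2}{=}\, \frac{(M_{\mathrm{tot}}^{2}/4)^{3/5}}{M_{\mathrm{tot}}^{1/5}} = 4^{-3/5}\,M_{\mathrm{tot}} \simeq 0.435\,M_{\mathrm{tot}}. \end{aligned}$$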
The right panel of Fig. 4 shows the chirp mass distribution of known Galactic NS–NS binaries (Kiziltan et al. 2013) compared to the allowed ranges in the binary mass thresholds separating different remnant classes (stable NS, SMNS, HMNS, prompt collapse) given current EOS constraints. The chirp mass of GW170817 is fully consistent with being drawn from the Galactic NS–NS population, while indications from the EM observations suggest that a HMNS remnant formed in this event (Sect. 5). If the extra-galactic population of merging NS–NS binaries is indeed similar to the known Galactic population, Margalit and Metzger (2019) predict that \(18\%{-}65\%\) of mergers would result in SMNS remnants, while only a small fraction \(< 3\%\) would produce indefinitely stable NS remnants (Table 3). As we discuss in Sect. 6.2.2, additional energy input from a long-lived magnetar remnant could substantially boost the kilonova emission. The fraction of mergers leading to prompt BH collapse, and hence to relatively little ejecta or disk mass, ranges from tens of percent down to nearly zero.
3.1.1 Dynamical ejecta
NS–NS mergers eject unbound matter through processes that operate on the dynamical time, and which depend primarily on the total binary mass, the mass ratio, and the EOS. Total dynamical ejecta masses typically lie in the range \(10^{-4}\)–\(10^{-2}M_\odot \) for NS–NS mergers (e.g., Hotokezaka et al. 2013a; Radice et al. 2016a; Bovard et al. 2017), with velocities 0.1–0.3 c. For BH–NS mergers, the ejecta mass can be up to \(\sim 0.1M_\odot \) with similar velocities as in the NS–NS case (Kyutoku et al. 2013, 2015; Foucart et al. 2017). The ejecta mass is typically greater for eccentric binaries (East et al. 2012; Gold et al. 2012), although the dynamical interactions giving rise to eccentric mergers require high stellar densities, probably making them rare events compared to circular inspirals (Tsang 2013). Very high NS spin can also enhance the quantity of dynamical ejecta (e.g., Dietrich et al. 2017a; East et al. 2019; Most et al. 2019).
Two main ejection processes operate in NS–NS mergers. First, material at the contact interface between the merging stars is squeezed out by hydrodynamic forces and is subsequently expelled by quasi-radial pulsations of the remnant (Oechslin et al. 2007; Bauswein et al. 2013b; Hotokezaka et al. 2013a), ejecting shock-heated matter in a broad range of angular directions. The second process involves spiral arms from tidal interactions during the merger, which expand outwards in the equatorial plane due to angular momentum transport by hydrodynamic processes. The relative importance of these mechanisms depends on the EOS and the binary mass ratio q, with more asymmetric binaries (lower \(q < 1\)) ejecting greater quantities of mass (Bauswein et al. 2013b; Lehner et al. 2016). The ejecta mass also depends on the BH formation timescale; for the prompt collapses which characterize massive binaries, mass ejection from the contact interface is suppressed due to prompt swallowing of this region. Figure 5 shows the total ejecta mass and mean velocity of the dynamical ejecta inferred from a range of NS–NS simulations compiled from the literature (Bauswein et al. 2013b; Hotokezaka et al. 2013a; Radice et al. 2018b; Sekiguchi et al. 2016; Ciolfi et al. 2017).
In BH–NS mergers, mass is ejected primarily by tidal forces that disrupt the NS, with the matter emerging primarily in the equatorial plane (Kawaguchi et al. 2015). The ejecta from BH–NS mergers also often covers only part of the azimuthal range (Kyutoku et al. 2015), which may introduce a stronger viewing angle dependence on the kilonova emission than for NS–NS mergers.
Another key property of the dynamical ejecta, in addition to the mass and velocity, is its electron fraction, \(Y_e\). Simulations that do not account for weak interactions find the ejecta from NS–NS mergers to be highly neutron-rich, with an electron fraction \(Y_e \lesssim 0.1\), sufficiently low to produce a robust7 abundance pattern for heavy nuclei with \(A \gtrsim 130\) (Goriely et al. 2011; Korobkin et al. 2012; Bauswein et al. 2013b; Mendoza-Temis et al. 2015). More recent merger calculations that include the effects of \(e^\pm \) captures and neutrino irradiation in full general-relativity have shown that the dynamical ejecta may have a wider electron fraction distribution (\(Y_e \sim 0.1{-}0.4\)) than models which neglect weak interactions (Sekiguchi et al. 2015; Radice et al. 2016a). As a result, lighter r-process elements with \(90 \lesssim A \lesssim 130\) are synthesized in addition to third-peak elements (Wanajo et al. 2014). These high-\(Y_e\) ejecta components are distributed in a relatively spherically-symmetric geometry, while the primarily tidally-ejected, lower-\(Y_e\) matter is concentrated closer to the equatorial plane (Fig. 7).
Dynamical ejecta masses and velocities from a range of binary neutron star merger simulations encompassing different numerical techniques, various equations of state, binary mass ratios \(q = 0.65{-}1\), and the effects of neutrinos and magnetic fields, together with the corresponding ejecta parameters inferred from the 'blue' and 'red' kilonova of GW170817. Image reproduced with permission from Siegel (2019)
3.1.2 Disk outflow ejecta
All NS–NS mergers, and those BH–NS mergers which end in NS tidal disruption outside the BH horizon, result in the formation of an accretion disk around the central NS or BH remnant. The disk mass is typically \(\sim 0.01\)–\(0.3\,M_{\odot }\), depending on the total mass and mass ratio of the binary, the spins of the binary components, and the NS EOS (e.g., Oechslin and Janka 2006). Relatively low disk masses are expected in the case of massive binaries that undergo prompt collapse to a BH, because the process of massive disk formation is intimately related to the internal redistribution of mass and angular momentum of the remnant as it evolves from a differentially rotating to solid body state (which has no time to occur in a prompt collapse). Outflows from this disk, over timescales of seconds or longer, represent an important source of ejecta mass which can often dominate that of the dynamical ejecta.
At early times after the disk forms, its mass accretion rate is high and the disk is a copious source of thermal neutrinos (Popham et al. 1999). During this phase, mass loss is driven from the disk surface by neutrino heating, in a manner analogous to neutrino-driven proto-NS winds in core collapse supernovae (Surman et al. 2008; Metzger et al. 2008c). Spiral density waves, which are excited in the disk by the oscillations of the central NS remnant, may also play a role in outwards angular momentum transport and mass ejection during this early phase (Nedora et al. 2019). Time dependent models of the long-term evolution of these remnant tori, which include neutrino emission and absorption, indicate that when BH formation is prompt, the amount of mass ejected through this channel is small, contributing at most a few percent of the outflow, because the neutrino luminosity decreases rapidly in time (Fernández and Metzger 2013; Just et al. 2015). However, if the central NS remnant survives for longer than \(\sim 50\) ms (as a HMNS or SMNS), then the larger neutrino luminosity from the NS remnant ejects a non-negligible amount of mass (\(\sim 10^{-3}M_\odot \), primarily from the NS itself instead of the disk; Dessart et al. 2009; Perego et al. 2014; Martin et al. 2015; Richers et al. 2015). As we discuss below, ejecta from the star could be substantially enhanced if the central remnant has a strong ordered magnetic field (Metzger et al. 2018).
The disk evolves in time due to the outwards transport of angular momentum, as mediated e.g., by spiral density waves or (more generically) magnetic stresses created by MHD turbulence generated by the magneto-rotational instability. Initial time-dependent calculations of this 'viscous spreading' followed the disk evolution over several viscous times using one-zone (Metzger et al. 2008a) and one-dimensional height-integrated (Metzger et al. 2009) models. These works showed that, as the disk evolves and its accretion rate decreases, the disk transitions from a neutrino-cooled state to a radiatively inefficient (geometrically thick disk) state as the temperature, and hence the neutrino cooling rate, decreases over a timescale of seconds (see also Lee et al. 2009; Beloborodov 2008). Significant outflows occur only once the disk enters the radiatively inefficient phase, because viscous turbulent heating and nuclear recombination are then no longer balanced by neutrino cooling (Kohri et al. 2005). This state transition is also accompanied by "freeze-out"8 of weak interactions, leading to the winds being neutron-rich (Metzger et al. 2008a, 2009). Neutron-rich matter is shielded within the degenerate disk midplane, being ejected only once the disk radius has become large enough, and the neutrino luminosity low enough, that weak interactions no longer appreciably raise \(Y_e\) in the outflow.
These early estimates were followed by two-dimensional, axisymmetric hydrodynamical models of the disk evolution, which show that, in the case of prompt BH formation, the electron fraction of the disk outflows lies in the range \(Y_e \sim 0.2\)–0.4 (Fernández and Metzger 2013; Just et al. 2015), sufficient to produce the entire mass range of r-process elements (Just et al. 2015; Wu et al. 2016). The total fraction of the disk mass which is unbound by these "viscously-driven" winds ranges from \(\sim 5\%\) for a slowly spinning BH, to \(\sim 30\%\) for high BH spin \(\chi _{\mathrm{BH}} \simeq 0.95\) (Just et al. 2015; Fernández et al. 2015a); see also Kiuchi et al. (2015), who simulated the long-term evolution of BH–NS disks but without following the electron fraction evolution. These large disk ejecta fractions and neutron-rich ejecta were confirmed by the first 3D GRMHD simulations of the long-term disk evolution (Siegel and Metzger 2017, 2018; Fernández et al. 2019), with Siegel and Metzger (2017) finding that up to 40% of the initial torus may be unbound. The velocity and composition of magnetized disk outflows appears to be sensitive to the strength and geometry of the large-scale net magnetic flux threading the accretion disk (Fernández et al. 2019; Christie et al. 2019).
An even larger fraction of the disk mass (up to \(\sim 90\%\)) is unbound when the central remnant is a long-lived hypermassive or supramassive NS instead of a BH, due to the presence of a hard surface and the higher level of neutrino irradiation from the central remnant (Metzger and Fernández 2014; Fahlman and Fernández 2018). A longer-lived remnant also increases the electron fraction of the ejecta, which increases monotonically with the lifetime of the HMNS (Fig. 6). Most of the ejecta is lanthanide-free (\(Y_e \gtrsim 0.3\)) if the NS survives longer than about 300 ms (Metzger and Fernández 2014; Kasen et al. 2015; Lippuner et al. 2017). Even when BH formation is prompt, simulations with Monte Carlo radiation transport included find that the earliest phases of disk evolution can produce at least a modest quantity of high-\(Y_e\) material (Miller et al. 2019).
Longer-lived remnants produce higher \(Y_e\) disk wind ejecta and bluer kilonovae. Shown here is the mass distribution by electron fraction \(Y_e\) of the disk wind ejecta, calculated for different assumptions about the lifetime, \(t_\mathrm{collapse}\), of the central NS remnant prior to BH formation, from the axisymmetric \(\alpha \)-viscosity hydrodynamical calculations of Metzger and Fernández (2014). A vertical line approximately delineates the ejecta with enough neutrons to synthesize lanthanide elements (\(Y_e \lesssim 0.25\)), which will generate a red kilonova, from that with \(Y_e \gtrsim 0.25\), which is lanthanide-poor and will generate blue kilonova emission. The NS lifetime has a strong effect on the ejecta composition because it is a strong source of electron neutrinos, which convert neutrons in the disk to protons via the process \(\nu _e + n \rightarrow p + e^{-}\). This figure is modified from a version in Lippuner et al. (2017)
The mass ejected by the late disk wind can easily be comparable to, or larger than, that in the dynamical ejecta (e.g., Wu et al. 2016, their Fig. 1). Indeed, the total ejecta mass inferred for GW170817 greatly exceeds that of the dynamical ejecta found in merger simulations (Fig. 5), but is consistent in both its mass and velocity with originating from a disk wind (e.g., Siegel and Metzger 2017). As the disk outflows emerge after the dynamical ejecta, the disk outflow material will be physically located behind the latter (Fig. 7).
Different components of the ejecta from NS–NS mergers and the possible dependence of their kilonova emission on the observer viewing angle, \(\theta _{\mathrm{obs}}\), relative to the binary axis, in the case of a relatively prompt BH formation (left panel) and a long-lived magnetar remnant (right panel). In both cases, the dynamical ejecta in the equatorial plane is highly neutron-rich (\(Y_e \lesssim 0.1\)), producing lanthanides and correspondingly "red" kilonova emission peaking at NIR wavelengths. Mass ejected dynamically in the polar directions may be sufficiently neutron-poor (\(Y_e \gtrsim 0.3\)) to preclude lanthanide production, powering "blue" kilonova emission at optical wavelengths (although this component may be suppressed if BH formation is extremely prompt). The outermost layers of the polar ejecta may contain free neutrons, the decay of which powers a UV transient lasting a few hours following the merger (Sect. 6.1.1). Re-heating of the ejecta by a delayed relativistic outflow (e.g., the GRB jet or a wind from the magnetar remnant) may also contribute to early blue emission (Sect. 6.1.2). The innermost ejecta layers originate from accretion disk outflows, which may emerge more isotropically. When BH formation is prompt, the disk wind ejecta is mainly neutron-rich, powering red kilonova emission (Fernández and Metzger 2013; Just et al. 2015; Wu et al. 2016; Siegel and Metzger 2017). If the NS remnant is instead long-lived relative to the disk lifetime, then neutrino emission can increase \(Y_e\) sufficiently to suppress lanthanide production and result in blue disk wind emission (Fig. 6; e.g., Metzger and Fernández 2014; Perego et al. 2014). Energy input from the central accreting BH or magnetar remnant enhances the kilonova luminosity compared to that exclusively from radioactivity (Sect. 6.2)
Beyond the dynamical and disk wind ejecta, other ejecta sources have been proposed, though these remain more speculative because the physical processes at work are less robust. Mass loss may occur from the differentially rotating NS during the process of angular momentum redistribution (Fujibayashi et al. 2018; Radice et al. 2018b). However, the details of this mechanism and its predictions for the ejecta properties depend sensitively on the uncertain physical source and operation of the "viscosity" currently put into the simulations by hand; unlike the quasi-Keplerian accretion disk on larger radial scales, the inner regions of the NS remnant possess a positive shear profile \(d \Omega /dr > 0\) and are therefore not unstable to the magneto-rotational instability.
Outflows can also occur from the HMNS/SMNS or stable NS remnant as it undergoes Kelvin–Helmholtz contraction and neutrino cooling over a timescale of seconds. At a minimum there will be outflows driven from the NS surface by neutrino heating (Dessart et al. 2009), which typically will possess a relatively low mass-loss rate \(\lesssim 10^{-3}\,M_{\odot }\hbox { s}^{-1}\) and low asymptotic velocity \(\sim 0.1\) c. However, if the NS remnant possesses an ordered magnetic field of strength \(\sim 10^{14}{-}10^{15}\) G, then the mass-loss rate and velocity of such an outflow are substantially enhanced by the centrifugal force along the open magnetic field lines (e.g., Thompson et al. 2004). Metzger et al. (2018) argue that such a magnetar wind, from a HMNS of lifetime \(\sim 0.1{-}1\) s, was responsible for the fastest ejecta in GW170817 (Sect. 5). While the presence of an ordered magnetic field of this strength is physically reasonable, its generation from the smaller scale magnetic field generated during the merger process has yet to be conclusively demonstrated by numerical simulations.
3.2 Ejecta opacity
It is no coincidence that kilonova emission is centered in the optical/IR band, as this is the first spectral window through which the expanding merger ejecta becomes transparent. Figure 8 illustrates semi-quantitatively the opacity of NS merger ejecta near peak light as a function of photon energy.
Schematic illustration of the opacity of the NS merger ejecta as a function of photon energy at a fixed epoch near peak light. The free-free opacity (red line) is calculated assuming singly-ionized ejecta of temperature \(T = 2\times 10^{4}\) K and density \(\rho = 10^{-14}\) g \(\hbox {cm}^{-3}\), corresponding to the mean properties of \(10^{-2} \,M_{\odot }\) of ejecta expanding at \(v = 0.1\) c at \(t =\) 3 days. Line opacities of iron-like elements and lanthanide-rich elements are approximated from Figs. 3 and 7 of Kasen et al. (2013). Bound-free opacities are estimated as that of neutral iron (Verner et al. 1996), which should crudely approximate the behavior of heavier r-process elements. Electron scattering opacity accounts for the Klein–Nishina suppression at energies \(\gg m_e c^{2}\) and (very schematically) for the rise in opacity that occurs above the keV energy scale due to all electrons (including those bound in atoms) contributing to the scattering opacity when the photon wavelength is smaller than the atomic scale. At the highest energies, opacity is dominated by pair creation by gamma-rays interacting with the electric fields of nuclei in the ejecta (shown schematically for Xenon, \(A = 131\), \(Z = 54\)). Not included are possible contributions from r-process dust, or from \(\gamma {-}\gamma \) pair creation opacity at photon energies \(\gg m_e c^{2} \sim 10^{6}\) eV (see Eq. 9)
At the lowest frequencies (radio and far-IR), free-free absorption from ionized gas dominates, as shown with a red line in Fig. 8, and calculated for the approximate ejecta conditions three days post merger. As the ejecta expands, the free-free opacity will decrease rapidly due to the decreasing density (\(\rho \propto t^{-3}\)) and the declining number of free electrons as the ejecta cools and recombines.
At near-IR/optical frequencies, the dominant source of opacity is a dense forest of line (bound-bound) transitions. The magnitude of this effective continuum opacity is determined by the strengths and wavelength density of the lines, which in turn depend sensitively on the ejecta composition. If the ejecta contains elements with relatively simple valence electron shell structures, such as iron, then the resulting opacity is comparatively low (dashed brown line), only moderately higher than the Fe-rich ejecta in Type Ia supernovae (Pinto and Eastman 2000). On the other hand, if the ejecta also contains even a modest fraction of elements with partially-filled f-shell valence shells, such as those in the lanthanide and actinide group, then the opacity can be an order of magnitude or more higher (Kasen et al. 2013; Tanaka and Hotokezaka 2013; Fontes et al. 2015, 2017; Even et al. 2019). In both cases, the opacity rises steeply from the optical into the UV, due to the increasing line density moving to higher frequencies.
Considerable uncertainty remains in current calculations of the La/Ac opacities because the atomic states and line strengths of these complex elements are not measured experimentally. Theoretically, such high-Z atoms represent an unsolved problem in N-body quantum mechanics, with statistical models that must be calibrated to experimental data. Beyond identifying the line transitions themselves, there is considerable uncertainty in how to translate these data into an effective opacity. The commonly employed "line expansion opacity" formalism (Pinto and Eastman 2000), based on the Sobolev approximation and applied to kilonovae by Barnes and Kasen (2013) and Tanaka and Hotokezaka (2013), may break down if the line density is sufficiently high that the wavelength spacing of strong lines becomes comparable to the intrinsic (thermal) width of the lines (Kasen et al. 2013; Fontes et al. 2015, 2017). Nevertheless, the qualitative dichotomy between the opacity of La/Ac-free and La/Ac-bearing ejecta is likely to be robust and will imprint diversity in the kilonova color evolution (Sect. 4.1.2).9
Despite the strong time- and wavelength-dependence of the opacity, for purposes of an estimate it is reasonable to model the kilonova using a constant effective "grey" (wavelength-independent) opacity, \(\kappa \). Including a large range of r-process nuclei in their analysis, Tanaka et al. (2019) found (for temperatures \(T = 5{-}10\times 10^{3}\) K characteristic of the ejecta near the time of peak emission) values of \(\kappa \lesssim 20{-}30\hbox { cm}^{2}\) \(\hbox {g}^{-1}\) for \(Y_{e} \lesssim 0.2\) (sufficient neutrons for the r-process to extend up to or beyond the third abundance peak at \(A \sim 195\) with a large lanthanide mass fraction), \(\kappa \sim 3{-}5\hbox { cm}^{2}\) \(\hbox {g}^{-1}\) for \(Y_{e} \approx 0.25-0.35\) (r-process extending only to the second abundance peak \(A \sim 130\) with a small or zero lanthanide mass fraction) and \(\kappa \sim 1\hbox { cm}^{2}\) \(\hbox {g}^{-1}\) for \(Y_{e} \approx 0.40\) (mainly neutron-rich Fe-group nuclei and a weak r-process). The approximate opacity range corresponding to different ejecta composition is summarized in Table 4.
Throughout the far UV and X-ray bands, bound-free transitions of the partially neutral ejecta dominate the opacity (blue line in Fig. 8). This prevents radiation from escaping the ejecta at these frequencies, unless non-thermal radiation from the central magnetar or BH remnant remains luminous enough to re-ionize the ejecta at late times (Sect. 6.2.2). Margalit et al. (2018) find that an X-ray luminosity \(L_{\mathrm{X}} \gtrsim 10^{42}\) erg/s would be required to ionize \(M_{\mathrm{ej}} \sim 10^{-3}\,M_{\odot }\) of Fe-like ejecta expanding at \(\approx 0.2\) c at \(t \sim 1\) day after the merger. However, a substantially higher luminosity \(L_{\mathrm{X}} \gtrsim 10^{44}{-}10^{45}\) erg \(\hbox {s}^{-1}\) would be needed10 to ionize the greater expected quantity of disk wind ejecta \(M_{\mathrm{ej}} \sim 0.01\)–\(0.1\,M_{\odot }\), especially considering that the bound-free opacity of r-process nuclei will be even higher than Fe. Such high X-ray luminosities are too large to be powered by fall-back accretion onto the central remnant from the merger ejecta, but could be achievable from the rotational energy input from a long-lived magnetar remnant (Fig. 3, right panel).
Colors and sources of kilonova ejecta in NS–NS mergers
| Color | \(Y_e\) | \(A_{\mathrm{max}}^\mathrm{a}\) | \(\kappa ^\mathrm{b}\) (\(\hbox {cm}^{2}\) \(\hbox {g}^{-1}\)) | Sources |
| Red | \(\lesssim 0.2\) | \(\sim 200\) | \(\sim 30\) | Tidal tail dynamical; disk wind (prompt BH/HMNS) |
| Purple | \(\approx 0.25{-}0.35\) | \(\sim 130\) | \(\sim 3\) | Shock-heated dynamical; disk wind (HMNS/SMNS); magnetar wind; "viscous" outflows/spiral arm |
| Blue | \(\approx 0.40\) | \(\lesssim 100\) | \(\sim 1\) | Disk wind (SMNS/stable NS) |
\(^\mathrm{a}\)Maximum atomic mass of nuclei synthesized
\(^\mathrm{b}\)Effective gray opacity (Tanaka et al. 2019)
At hard X-rays and gamma-ray energies, electron scattering, with Klein–Nishina corrections, provides an important opacity (which becomes highly inelastic at energies \(\gtrsim m_e c^{2}\)). For gamma-ray energies \(\gtrsim m_e c^{2}\), the opacity approaches a constant value \(\kappa _{A\gamma } \approx \alpha _\mathrm{fs}\kappa _{T}(Z^{2}/A)\) due to electron/positron pair creation on nuclei, where \(\alpha _{\mathrm{fs}} \simeq 1/137\), and A and Z are the nuclear mass and charge, respectively (e.g., Zdziarski and Svensson 1989). For r-process nuclei with \(Z^{2}/A \gtrsim 10{-}20\) this dominates inelastic scattering at energies \(\gtrsim 10\) MeV. The low opacity \(\lesssim 0.1\hbox { cm}^{2}\) \(\hbox {g}^{-1}\) in the \(\sim \) MeV energy range implies that gamma-rays released by radioactive decay of r-process elements largely escape the ejecta prior to the optical/NIR peak without their energy being thermalized (Sect. 4.1).
Gamma-rays with very high energies \(\gtrsim (m_e c^{2})^{2}/h\nu _{\mathrm{s}} \sim 0.3\mathrm{TeV}(h\nu _{\mathrm{s}}/1\mathrm{eV})^{-1}\) can also create electron/positron pairs by interacting with (more abundant) lower energy optical or X-ray photons of energy \(h\nu _{\mathrm{s}} \ll m_e c^{2}\). The \(\gamma -\gamma \) pair creation optical depth through the ejecta of radius \(R = vt\) is roughly given by
$$\begin{aligned} \tau _{\gamma -\gamma }\simeq & {} \frac{U_{\mathrm{rad}}\sigma _T R}{h\nu _{\mathrm{s}}} \simeq \frac{L \sigma _T}{4\pi R h\nu _{\mathrm{s}} c} \nonumber \\\approx & {} 2\times 10^{3}\left( \frac{L}{10^{42}\,{\mathrm{erg\,s}}^{-1}}\right) \left( \frac{v}{0.2\,\mathrm{c}}\right) ^{-1}\left( \frac{h\nu _{\mathrm{s}}}{\mathrm{1\,eV}}\right) ^{-1}\left( \frac{t}{\mathrm{1\,day}}\right) ^{-1}, \end{aligned}$$
where \(U_{\mathrm{rad}} \simeq L/(4\pi R^{2}c)\) is the energy density of the seed photons of luminosity L. The fact that \(\tau _{\gamma -\gamma } \gg 1\) for characteristic thermal luminosities \(\sim 10^{40}{-}10^{42}\) erg \(\hbox {s}^{-1}\) shows that \(\sim \) GeV–TeV photons from a putative long-lived central engine (e.g., millisecond magnetar; Sect. 6.2.2) will be trapped for days to weeks after the merger. Prompt TeV emission from a NS–NS merger is thus unlikely to come from the merger remnant, but still could be generated within the relativistically-beamed GRB jet on much larger physical scales.
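For concreteness, a minimal numerical check of this estimate (in Python; constants in cgs, and the parameter values are the fiducial ones quoted above):

```python
# Quick check of the gamma-gamma pair-creation optical depth,
# tau_gg ~ L * sigma_T / (4 pi R h nu_s c), with R = v*t.
import math

SIGMA_T = 6.652e-25   # Thomson cross section [cm^2]
C       = 2.998e10    # speed of light [cm/s]
DAY     = 86400.0     # [s]
EV      = 1.602e-12   # [erg]

def tau_gg(L=1e42, v=0.2*C, hnu_s=1.0*EV, t=1.0*DAY):
    """Pair-creation optical depth for a ~TeV photon crossing the ejecta."""
    R = v * t                       # ejecta radius at time t
    return L * SIGMA_T / (4.0 * math.pi * R * hnu_s * C)

print(f"tau_gg ~ {tau_gg():.1e}")   # ~2e3 for the fiducial parameters
```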
4 Unified toy model
Thermal emission following a NS–NS or BH–NS merger (a "kilonova", broadly defined) can be powered by a variety of different energy sources, including radioactivity and central engine activity (see Fig. 3 for a summary of different heating sources). This section describes a simple model for the evolution of the ejecta and its radiation, which we use to motivate the potential diversity of kilonova emission properties. Though ultimately no substitute for full, multi-dimensional, multi-group radiative transfer, this 1D toy model does a reasonable job at the factor of a few level. Some sacrifice in accuracy may be justified in order to facilitate a qualitative understanding, given the other uncertainties on the mass, heating rate, composition, and opacity of the ejecta.
Following the merger, the ejecta velocity structure approaches one of homologous expansion, with the faster matter lying ahead of slower matter (Rosswog et al. 2014). We approximate the distribution of mass with velocity greater than a value v as a power-law,
$$\begin{aligned} M_{v} = M(v/v_{\mathrm{0}})^{-\beta },\,\,\,\, v \ge v_{0}, \end{aligned}$$
where M is the total mass, \(v_{0} \approx 0.1\) c is the average (\(\sim \) minimum) velocity. We adopt a fiducial value of \(\beta \approx 3\), motivated by a power-law fit to the dynamical ejecta in the numerical simulations of Bauswein et al. (2013b). In general the velocity distribution derived from numerical simulations cannot be fit by a single power-law (e.g., Fig. 3 of Piran et al. 2013), but the following analysis can be readily extended to the case of an arbitrary velocity distribution.
In analogy with Eq. (6), radiation escapes from the mass layer \(M_{v}\) on the diffusion timescale
$$\begin{aligned} t_{d,v} \approx \frac{3 M_{v} \kappa _{v}}{4\pi \beta R_v c} \underset{R_v = vt}{=} \frac{M_{v}^{4/3}\kappa _{v}}{4\pi M^{1/3} v_{0} t c}, \end{aligned}$$
where \(\kappa _v\) is the opacity of the mass layer v and the second equality makes use of Eq. (10) with \(\beta = 3\). Equating \(t_{d,v} = t\) gives the mass depth from which radiation peaks for each time t,
$$\begin{aligned} M_{v}(t) = \left\{ \begin{array}{lr} M(t/t_{\mathrm{peak}})^{3/2} , &{} t < t_{\mathrm{peak}}\\ M &{} t > t_{\mathrm{peak}} \\ \end{array} \right. {,} \end{aligned}$$
where \(t_{\mathrm{peak}}\) is the peak time for diffusion out of the whole ejecta mass, e.g., Eq. (7) evaluated for \(v = v_0\). Emission from the outer layers (mass \(M_v < M\)) peaks first, while the luminosity of the innermost shell of mass \(\sim M\) peaks at \(t = t_{\mathrm{peak}}\). The deepest layers usually set the peak luminosity of the total light curve, except when the heating rate and/or opacity are not constant with depth (e.g., if the outer layers are free neutrons instead of r-process nuclei; Sect. 6.1.1).
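This layered diffusion argument can be made concrete with a short helper, obtained by setting \(t_{d,v} = t\) for the innermost shell in Eq. (11) (a sketch assuming \(\beta = 3\) and a constant grey opacity; the normalization follows directly from Eq. 11):

```python
# Diffusion time of the whole ejecta (the "peak" time) and the mass depth
# probed at earlier times (Eq. 12), assuming beta = 3 and constant kappa.
import math

MSUN, C, DAY = 1.989e33, 2.998e10, 86400.0

def t_peak(M=1e-2 * MSUN, kappa=1.0, v0=0.1 * C):
    """Time [s] at which radiation diffuses out of the innermost shell."""
    beta = 3.0
    return math.sqrt(3.0 * M * kappa / (4.0 * math.pi * beta * v0 * C))

def M_depth(t, M=1e-2 * MSUN, kappa=1.0, v0=0.1 * C):
    """Mass depth [g] from which radiation escapes at time t (Eq. 12)."""
    return M * min(1.0, (t / t_peak(M, kappa, v0)) ** 1.5)

print(f"t_peak ~ {t_peak() / DAY:.1f} d for M = 0.01 Msun, kappa = 1 cm^2/g, v0 = 0.1 c")
```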
As the ejecta expands, the radius \(R_v\) of each layer of mass depth \(M_{v}\) and mass \(\delta M_{v}\) evolves according to
$$\begin{aligned} \frac{dR_v}{dt} = v. \end{aligned}$$
The thermal energy \(\delta E_v\) of the layer evolves according to
$$\begin{aligned} \frac{d(\delta E_v)}{dt} = -\frac{\delta E_v}{R_v}\frac{dR_v}{dt} - L_v + {\dot{Q}}, \end{aligned}$$
where the first term accounts for losses due to PdV expansion in the radiation-dominated ejecta. The second term in Eq. (14),
$$\begin{aligned} L_{v} = \frac{\delta E_v}{t_{d,v} + t_{lc,v}}, \end{aligned}$$
accounts for radiative losses (the observed luminosity) and \(t_{lc,v} = R_v/c\) limits the energy loss time to the light crossing time (this becomes important at late times when the layer is optically thin). The third term in Eq. (14),
$$\begin{aligned} {\dot{Q}}(t) = {\dot{Q}}_{r,v} + {\dot{Q}}_{\mathrm{mag}} + {\dot{Q}}_{\mathrm{fb}} \end{aligned}$$
accounts for sources of heating, including radioactivity (\({\dot{Q}}_{r,v}\); Sect. 4.1), a millisecond magnetar (\({\dot{Q}}_{\mathrm{mag}}\); Sect. 6.2.2) or fall-back accretion (\({\dot{Q}}_{\mathrm{fb}}\); Sect. 6.2.1). The radioactive heating rate, being intrinsic to the ejecta, will in general vary between different mass layers v. In the case of magnetar or accretion heating, radiation must diffuse from the central cavity through the entire ejecta shell (Fig. 7, right panel).
One must in general also account for the evolution of the ejecta velocity (Eq. 10) due to acceleration by pressure forces. For radioactive heating, the total energy input \(\int {\dot{Q}}_{r,v}dt\) is less than the initial kinetic energy of the ejecta (Metzger et al. 2010a; Rosswog et al. 2013; Desai et al. 2019), in which case changes to the initial velocity distribution (Eq. 10) are safely ignored. However, free expansion is not a good assumption when there is substantial energy input from a central engine. In such cases, the velocity \(v_0\) of the central shell (of mass M and thermal energy \(E_{v_0}\)) is evolved separately according to
$$\begin{aligned} \frac{d}{dt}\left( \frac{M v_0^{2}}{2}\right) = Mv_0 \frac{dv_0}{dt} = \frac{E_{v_{0}}}{R_0}\frac{dR_0}{dt}, \end{aligned}$$
where the source term on the right hand side balances the PdV loss term in the thermal energy equation (14), and \(R_0\) is the radius of the inner mass shell. Equation (17) neglects two details: (1) special relativistic effects, which become important for low ejecta mass \(\lesssim 10^{-2}\,M_{\odot }\) and the most energetic magnetar engines (Zhang 2013; Gao et al. 2013; Siegel and Ciolfi 2016a, b); (2) the secondary shock driven through the outer ejecta layers by the nebula inflated by a long-lived central engine (e.g., Kasen et al. 2016) and its effects on the outer velocity distribution (e.g., Suzuki and Maeda 2017).
Under the idealization of blackbody emission, the temperature of the thermal emission is
$$\begin{aligned} T_{\mathrm{eff}} = \left( \frac{L_{\mathrm{tot}}}{4\pi \sigma R_\mathrm{ph}^{2}}\right) ^{1/4}, \end{aligned}$$
where \(L_{\mathrm{tot}} = \int _{v_0}L_v \frac{dM_v}{dv}dv \simeq \sum _{v} (L_v \delta M_v)\) is the total luminosity (summed over all mass shells). The radius of the photosphere \(R_{\mathrm{ph}} (t)\) is defined as that of the mass shell at which the optical depth \(\tau _v = \int _{v_0}\frac{dM_v}{dv}\frac{\kappa _v}{4\pi R_{\mathrm{v}}^2}dv \simeq \sum _{v} \left( \frac{\kappa _v \delta M_v}{4\pi R_v^{2}}\right) = 1\). The flux density of the source at photon frequency \(\nu \) is given by
$$\begin{aligned} F_{\nu }(t) = \frac{2\pi h \nu ^{3}}{c^{2}}\frac{1}{\exp \left[ h\nu /kT_{\mathrm{eff}}(t)\right] -1}\frac{R_{\mathrm{ph}}^{2}(t)}{D^{2}}, \end{aligned}$$
where D is the source luminosity distance. We have neglected cosmological effects such as K-corrections, but these can be readily included.
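The mapping from \((L_{\mathrm{tot}}, R_{\mathrm{ph}})\) to an observed magnitude via Eqs. (18)–(19) can be written compactly as follows (a sketch; the 3631 Jy AB zero point is standard but is not quoted in the text):

```python
# Blackbody observables of Eqs. (18)-(19): effective temperature, flux
# density, and AB magnitude for a given luminosity and photospheric radius.
import math

SIGMA_SB = 5.670e-5    # [erg cm^-2 s^-1 K^-4]
H        = 6.626e-27   # [erg s]
K_B      = 1.381e-16   # [erg/K]
C        = 2.998e10    # [cm/s]
MPC      = 3.086e24    # [cm]
JY       = 1e-23       # [erg cm^-2 s^-1 Hz^-1]

def T_eff(L_tot, R_ph):
    return (L_tot / (4.0 * math.pi * SIGMA_SB * R_ph**2)) ** 0.25

def F_nu(nu, L_tot, R_ph, D=100.0 * MPC):
    T = T_eff(L_tot, R_ph)
    bb = 2.0 * math.pi * H * nu**3 / C**2 / math.expm1(H * nu / (K_B * T))
    return bb * R_ph**2 / D**2

def AB_mag(nu, L_tot, R_ph, D=100.0 * MPC):
    return -2.5 * math.log10(F_nu(nu, L_tot, R_ph, D) / (3631.0 * JY))

# e.g., a 1e41 erg/s photosphere of radius 3e14 cm at 100 Mpc, near R band:
print(f"{AB_mag(C / 6.2e-5, 1e41, 3e14):.1f} mag")
```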
For simplicity in what follows, we assume that the opacity \(\kappa _v\) of each mass layer depends entirely on its composition, i.e. we adopt a temperature-independent grey opacity. The most relevant feature of the composition is the mass fraction of lanthanide or actinide elements, which in turn depends most sensitively on the ejecta \(Y_e\) (Sect. 3.2). Following Tanaka et al. (2019, Table 4), one is motivated to take
$$\begin{aligned} \kappa _v(Y_e) = \left\{ \begin{array}{lr} 20-30\,\mathrm{cm^{2}\,g^{-1}} , &{} Y_{e} \lesssim 0.2 \,\,\,\,\,(\mathrm{Red}) \\ 3-5\,\mathrm{cm^{2}\,g^{-1}} &{} Y_e \approx 0.25-0.35 \,\,\,\,\,(\mathrm{Blue/Purple}) \\ 1\,\mathrm{cm^{2}\,g^{-1}} &{} Y_e \approx 0.4\,\,\,\,\, (\mathrm{Blue}) \\ \end{array} \right. , \end{aligned}$$
smoothly interpolating when necessary. We caution that these values were calculated by Tanaka et al. (2019) for ejecta temperatures in the range \(5{-}10\times 10^{3}\) K, i.e. similar to those obtained close to peak light, and therefore may not be appropriate much earlier (the first hours) or later (nebular phase).
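A simple implementation of this prescription (the midpoints of the quoted ranges and the linear interpolation across the transition regions are our own choices, consistent with "smoothly interpolating when necessary"):

```python
# Grey opacity vs. electron fraction following Eq. (20) (values from
# Tanaka et al. 2019, valid near peak light).
def kappa_grey(Ye):
    """Grey opacity [cm^2/g] as a function of electron fraction Ye."""
    if Ye <= 0.20:
        return 25.0          # lanthanide-rich ("red") ejecta, kappa ~ 20-30
    if Ye >= 0.40:
        return 1.0           # lanthanide-free ("blue") ejecta
    if 0.25 <= Ye <= 0.35:
        return 4.0           # "purple" ejecta, kappa ~ 3-5
    if Ye < 0.25:            # linear interpolation for 0.20 < Ye < 0.25
        return 25.0 + (Ye - 0.20) / 0.05 * (4.0 - 25.0)
    return 4.0 + (Ye - 0.35) / 0.05 * (1.0 - 4.0)   # 0.35 < Ye < 0.40
```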
The full emission properties are determined by solving Eq. (14) for \(\delta E_v\), and hence \(L_v\), for a densely sampled distribution of shells of mass \(\delta M_v\) and velocity \(v > v_0\). When considering radioactive heating acting alone, one can fix the velocity distribution (Eq. 10). For an energetic engine, the velocity of the central shell is evolved simultaneously using Eq. (17).
As initial conditions at the ejection radius \(R(t = 0) \approx 10{-}100\) km, it is reasonable to assume that the initial thermal energy is comparable to its final kinetic energy, \(\delta E_{v}(t = 0) \sim (1/2)\delta M_v v^2(t=0)\). If the ejecta expands freely from the site of ejection, the predicted light curves are largely insensitive to the details of this assumption because the initial thermal energy is anyways quickly removed by adiabatic expansion. Section 6.1.2 explores an exception to this rule when the ejecta is re-heated by shock interaction with a delayed outflow from the central engine.
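To make the preceding prescription concrete, the following is a minimal sketch of the toy model (Eqs. 10–19) for the case of free expansion: the ejecta is discretized into velocity shells, each shell's thermal energy is integrated forward with PdV losses, radiative diffusion, and a user-supplied heating rate, and the bolometric luminosity and photospheric temperature are recorded. The default heating rate is a placeholder power law (\(\propto t^{-1.3}\); cf. Sect. 4.1), and all numerical choices (grid, time step, outer velocity cut) are ours.

```python
# Minimal 1D shell implementation of the toy model of Eqs. (10)-(19),
# assuming free expansion (no engine) and a pluggable heating rate.
import numpy as np

MSUN, C, DAY = 1.989e33, 2.998e10, 86400.0
SIGMA_SB = 5.670e-5

def toy_kilonova(M=1e-2*MSUN, v0=0.1*C, beta=3.0, kappa=1.0,
                 qdot=lambda t: 2e10 * 0.5 * (t / DAY)**-1.3,   # erg/s/g placeholder
                 n_shells=100, t_start=100.0, t_end=30*DAY, n_steps=20000):
    # velocity shells sampling the cumulative mass distribution of Eq. (10),
    # truncated at 4 v0 (only ~2% of the mass lies above this for beta = 3)
    v_edges = v0 * np.logspace(0.0, np.log10(4.0), n_shells + 1)
    M_cum = M * (v_edges / v0)**(-beta)          # mass moving faster than v
    dM = M_cum[:-1] - M_cum[1:]                  # mass of each shell
    v = 0.5 * (v_edges[:-1] + v_edges[1:])

    # initial thermal energy ~ kinetic energy at an ejection radius R0 ~ 100 km,
    # adiabatically degraded (E ~ 1/R) to the start of the integration
    R0 = 1e7
    E = 0.5 * dM * v**2 * R0 / (v * t_start)

    t_grid = np.logspace(np.log10(t_start), np.log10(t_end), n_steps)
    out = []
    for i in range(n_steps - 1):
        t, dt = t_grid[i], t_grid[i + 1] - t_grid[i]
        R = v * t
        Mv = M * (v / v0)**(-beta)                               # Eq. (10)
        t_d = 3.0 * Mv * kappa / (4.0 * np.pi * beta * R * C)    # Eq. (11)
        L = E / (t_d + R / C)                                    # Eq. (15)
        E = np.maximum(E + dt * (-E / t + qdot(t) * dM - L), 0.0)  # Eq. (14)

        # photosphere: outermost shell with exterior optical depth >= 1
        tau = np.cumsum((kappa * dM / (4.0 * np.pi * R**2))[::-1])[::-1]
        iph = max(int((tau >= 1.0).sum()) - 1, 0)
        L_tot = L.sum()
        T_eff = (L_tot / (4.0 * np.pi * SIGMA_SB * R[iph]**2))**0.25   # Eq. (18)
        out.append((t, L_tot, T_eff, R[iph]))
    return np.array(out)   # columns: t [s], L_bol [erg/s], T_eff [K], R_ph [cm]

if __name__ == "__main__":
    lc = toy_kilonova()
    i = lc[:, 1].argmax()
    print(f"L_peak ~ {lc[i, 1]:.1e} erg/s at t ~ {lc[i, 0] / DAY:.1f} d")
```

For the default parameters this simple integration peaks on the day timescale at a luminosity of order \(10^{41}\hbox { erg s}^{-1}\), consistent with the analytic estimates below; it is a sketch intended for orientation rather than a substitute for the full radiative-transfer calculations cited above.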
4.1 R-process heating
At a minimum, the ejecta is heated by the radioactive decay of heavy r-process nuclei. This occurs at a rate
$$\begin{aligned} {\dot{Q}}_{r,v} = \delta M_v X_{r,v} {\dot{e}}_r(t), \end{aligned}$$
where \(X_{r,v}\) is the r-process mass fraction in mass layer \(M_v\) and \({\dot{e}}_r\) is the specific heating rate. For neutron-rich ejecta (\(Y_e \lesssim 0.2\)), the latter can be reasonably well approximated by the fitting formula (Korobkin et al. 2012)
$$\begin{aligned} {\dot{e}}_r = 4\times 10^{18}\epsilon _{th,v} \left( 0.5-\pi ^{-1}\arctan [(t-t_0)/\sigma ]\right) ^{1.3}\,\mathrm{erg\,s^{-1}\,g^{-1}}, \end{aligned}$$
where \(t_0 = 1.3\) s and \(\sigma = 0.11\) s are constants, and \(\epsilon _{th,v}\) is the thermalization efficiency (see below). Equation (22) predicts a constant heating rate for the first second (while neutrons are being consumed during the r-process), followed by a power-law decay at later times as nuclei decay back to stability (Metzger et al. 2010b; Roberts et al. 2011); see Figs. 3 and 10. The latter is reasonably well approximated by the expression
$$\begin{aligned} {\dot{e}}_r \underset{t \gg t_0}{\approx }2\times 10^{10}\epsilon _{th,v} \left( \frac{t}{1\,\mathrm{day}}\right) ^{-1.3}\mathrm{erg\,s^{-1}\,g^{-1}} \end{aligned}$$
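In code, the fit of Eq. (22) and its late-time limit Eq. (23) read as follows (a sketch; \(\epsilon _{th}\) is left as an input so that the thermalization model of Eq. (25) can be supplied separately):

```python
# Analytic r-process heating-rate fit of Eq. (22) (Korobkin et al. 2012)
# and its late-time power-law limit, Eq. (23).
import math

DAY = 86400.0

def edot_r(t, eps_th=0.5, t0=1.3, sigma=0.11):
    """Specific heating rate [erg s^-1 g^-1]; t in seconds."""
    return 4e18 * eps_th * (0.5 - math.atan((t - t0) / sigma) / math.pi) ** 1.3

def edot_r_late(t, eps_th=0.5):
    """Late-time (t >> t0) approximation, Eq. (23)."""
    return 2e10 * eps_th * (t / DAY) ** -1.3

# the two expressions agree to within a few percent at t ~ 1 day
print(f"{edot_r(DAY):.2e}  {edot_r_late(DAY):.2e}")
```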
Using Eqs. (7) and (8) the peak luminosity can be estimated as
$$\begin{aligned} L_{\mathrm{peak}}\approx & {} M{\dot{e}}_r(t_{\mathrm{peak}}) \nonumber \\\approx & {} 10^{41}\mathrm{erg\,s^{-1}}\left( \frac{\epsilon _{th,v}}{0.5}\right) \left( \frac{M}{10^{-2}\,M_{\odot }}\right) ^{0.35}\left( \frac{v}{0.1\,\mathrm{c}}\right) ^{0.65}\left( \frac{\kappa }{1\,\mathrm{cm^{2}\,g^{-1}}}\right) ^{-0.65}.\nonumber \\ \end{aligned}$$
Given the reasonably large range in values allowed in NS–NS or BH–NS mergers, \(M \sim 10^{-3}{-}0.1\,M_{\odot }\), \(\kappa \sim 0.5-30\,\hbox {cm}^{2}\hbox { g}^{-1}\), \(v \sim 0.1{-}0.3\) c, one can have \(L_\mathrm{peak} \approx 10^{39}{-}10^{42}\,\hbox {erg s}^{-1}\).
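Evaluating Eq. (24) across this parameter range (illustrative combinations only):

```python
# Peak-luminosity scaling of Eq. (24), an order-of-magnitude estimate.
def L_peak(M=1e-2, v=0.1, kappa=1.0, eps_th=0.5):
    """Peak luminosity [erg/s]; M in Msun, v in units of c, kappa in cm^2/g."""
    return 1e41 * (eps_th / 0.5) * (M / 1e-2) ** 0.35 \
                * (v / 0.1) ** 0.65 * (kappa / 1.0) ** -0.65

for M, v, k in [(1e-3, 0.1, 30.0), (1e-2, 0.1, 1.0), (0.1, 0.3, 0.5)]:
    print(f"M={M} Msun, v={v}c, kappa={k} cm^2/g -> L_peak ~ {L_peak(M, v, k):.1e} erg/s")
```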
The time dependence of \({\dot{e}}_r\) is more complicated for higher \(0.2 \lesssim Y_e \lesssim 0.4\), with 'wiggles' caused by the heating rate being dominated by a few discrete nuclei instead of the large statistical ensemble present at low \(Y_e\) (Korobkin et al. 2012; Martin et al. 2015). However, when averaged over a realistic \(Y_e\) distribution, the heating rate on timescales of days to weeks (of greatest relevance to the peak luminosity; Eq. 8) is constant to within a factor of a few for \(Y_e \lesssim 0.4\) (Lippuner and Roberts 2015; Wu et al. 2019b; see Fig. 10). The radioactive decay power is sensitive to various uncertainties in the assumed nuclear physics (nuclear masses, cross sections, and fission fragment distribution) at the factor of a few level (e.g., Wu et al. 2019b)11, a point we shall return to when discussing GW170817 (Sect. 5).
Radioactive heating occurs through a combination of \(\beta \)-decays, \(\alpha \)-decays, and fission (Metzger et al. 2010b; Barnes et al. 2016; Hotokezaka et al. 2016). The thermalization efficiency, \(\epsilon _{th,v}\), depends on how these decay products share their energies with the thermal plasma. Neutrinos escape from the ejecta without interacting; \(\sim \) MeV gamma-rays are trapped at early times (\(\lesssim 1\) day), but leak out at later times given the low Klein–Nishina-suppressed opacity (Fig. 8; Hotokezaka et al. 2016; Barnes et al. 2016). \(\beta \)-decay electrons, \(\alpha \)-particles, and fission fragments share their kinetic energy effectively with the ejecta via Coulomb collisions (Metzger et al. 2010b) and by ionizing atoms (Barnes et al. 2016). For a fixed energy release rate, the thermalization efficiency is smallest for \(\beta \)-decay, higher for \(\alpha \)-decay, and the highest for fission fragments. The thermalization efficiency of charged particles also depends on the magnetic field orientation within the ejecta, since the particle Larmor radius is generally shorter than the mean free path for Coulomb interactions. Because the actinide yield around mass number \(A \sim 230\) varies significantly with the assumed nuclear mass model, Barnes et al. (2016) find that the effective heating rate including thermalization efficiency can vary by a factor of 2–6, depending on time.
Barnes et al. (2016) find that the combined efficiency from all of these processes typically decreases from \(\epsilon _{ th,v} \sim 0.5\) on a timescale of 1 day to \(\sim 0.1\) at \(t \sim 1\) week (their Fig. 13). In what follows, we adopt the fit provided in their Table 1,
$$\begin{aligned} \epsilon _{th,v}(t) = 0.36\left[ \exp (-a_v t_{\mathrm{day}}) + \frac{\mathrm{ln}(1+2b_v t_{\mathrm{day}}^{d_v})}{2b_v t_{\mathrm{day}}^{d_v}}\right] , \end{aligned}$$
where \(t_{\mathrm{day}} = t/1\) day, and \(\{a_v,b_v,d_v\}\) are constants that will in general depend on the mass and velocity of the layer under consideration. For simplicity, we adopt fixed values of \(a_v = 0.56, b_v = 0.17, d_v = 0.74\), corresponding to a layer with \(M = 10^{-2}\,M_{\odot }\) and \(v_0 = 0.1\) c.
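A sketch of this thermalization fit, combined with the late-time heating rate of Eq. (23) into the per-shell heating rate of Eq. (21); the resulting function can be passed as the heating rate to the toy-model sketch of Sect. 4:

```python
# Thermalization-efficiency fit of Eq. (25) with the fixed coefficients
# quoted in the text (a = 0.56, b = 0.17, d = 0.74), combined with the
# late-time heating rate of Eq. (23) into the shell heating of Eq. (21).
import math

DAY = 86400.0

def eps_th(t, a=0.56, b=0.17, d=0.74):
    """Thermalization efficiency (Barnes et al. 2016 fit); t in seconds."""
    td = t / DAY
    x = 2.0 * b * td ** d
    return 0.36 * (math.exp(-a * td) + math.log1p(x) / x)

def Qdot_r(t, dM, X_r=1.0):
    """Radioactive heating [erg/s] of a shell of mass dM [g], Eq. (21)."""
    edot = 2e10 * (t / DAY) ** -1.3      # Eq. (23) with eps_th factored out
    return dM * X_r * eps_th(t) * edot
```

Passing the corresponding specific rate (per gram), for example `qdot = lambda t: eps_th(t) * 2e10 * (t / DAY)**-1.3`, to the `toy_kilonova` sketch above yields light curves qualitatively similar to those shown in Fig. 9.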
As we shall discuss, the luminosity and color evolution of kilonovae encode information on the total quantity of r-process ejecta and, in principle, the abundance of lanthanide/actinide elements. However, insofar as the lanthanides cover the atomic mass range \(A \sim 140\)–175, kilonova observations at peak light do not readily probe the creation of the heaviest elements, those near the third r-process peak (\(A \sim 195\)) and the transuranic elements (\(A \gtrsim 240\)).
One avenue for probing the formation of ultra-heavy elements is by the light curve's decay at late times, weeks to months after maximum light. At such late times the radioactive heating is often dominated by a few discrete isotopes with well-measured half-lives (e.g., \(^{223}\)Ra [\(t_{1/2} = 11.4\) days], \(^{225}\)Ac [\(t_{1/2} = 10.0\) days], \(^{225}\)Ra [\(t_{1/2} = 14.9\) days], \(^{254}\)Cf [\(t_{1/2} = 60.5\) days]) which could produce distinctive features (e.g., bumps or exponential decay-like features) in the bolometric light curve of characteristic timescale \(\sim t_{1/2}\) (Zhu et al. 2018; Wu et al. 2019b), much in the way that the half-life of \(^{56}\)Co is imprinted in the decay of Type Ia supernovae. The ability in practice to identify individual isotopes through this method will depend on accurate models for the ejecta thermalization entering the nebular phase (Kasen and Barnes 2019; Waxman et al. 2019) as well as dedicated broad-band, multi-epoch follow-up of nearby kilonovae in the NIR (where most of the nebular emission likely emerges) with sensitive facilities like the James Webb Space Telescope (Kasliwal et al. 2019; Villar et al. 2018). Table 5 compiles all r-process isotopes with half-lives in the range 10–100 day (Wu et al. 2019b).
Gamma-ray lines from r-process element decays escape the ejecta within days or less of the merger and could in principle be directly observed from an extremely nearby event \(\lesssim \) 3–10 Mpc with future gamma-ray satellites (Hotokezaka et al. 2016; Korobkin et al. 2019). A related, but potentially more promising near-term strategy is a gamma-ray search for remnants of past NS mergers in our Galaxy (Wu et al. 2019a; Korobkin et al. 2019). Among the most promising isotopes for this purpose is \(^{126}\hbox {Sn}\), which has several lines in the energy range 415–695 keV and resides close to the second r-process peak, because its half-life \(t_{1/2} = 2.3\times 10^{5}\) yr is comparable to the ages of the most recent Galactic merger(s). Wu et al. (2019a) estimate that multiple remnants are detectable as individual sources by next-generation gamma-ray satellites with line sensitivities \(\sim 10^{-6}{-}10^{-8}\,\gamma \hbox { cm}^{-2}\hbox { s}^{-1}\).
r-process nuclei with half-lives \(t_{1/2} = 10{-}100\) days
| Isotope | Decay channel | \(t_{1/2}\) (days) |
| \(^{225}\hbox {Ra}\) | \(\beta ^-\) | … |
| \(^{225}\hbox {Ac}\) | \(\alpha \beta ^-\) to \(^{209}\hbox {Bi}\) | … |
| \(^{246}\hbox {Pu}\) | \(\beta ^-\) to \(^{246}\hbox {Cm}\) | … |
| \(^{147}\hbox {Nd}\) | … | … |
| … | \(\alpha \beta ^-\) to \(^{207}\hbox {Pb}\) | … |
| \(^{140}\hbox {Ba}\) | \(\beta ^-\) to \(^{140}\hbox {Ce}\) | 12.7527(23) |
| \(^{143}\hbox {Pr}\) | … | … |
| \(^{156}\hbox {Eu}\) | … | … |
| \(^{191}\hbox {Os}\) | … | … |
| \(^{253}\hbox {Cf}\) | … | … |
| \(^{253}\hbox {Es}\) | \(\alpha \) | … |
| \(^{234}\hbox {Th}\) | \(\beta ^-\) to \(^{234}\hbox {U}\) | … |
| \(^{233}\hbox {Pa}\) | … | 26.975(13) |
| \(^{141}\hbox {Ce}\) | … | … |
| \(^{103}\hbox {Ru}\) | … | 39.247(3) |
| … | \(\alpha \beta ^-\) to \(^{251}\hbox {Cf}\) | … |
| \(^{181}\hbox {Hf}\) | … | … |
| \(^{203}\hbox {Hg}\) | … | … |
| \(^{89}\hbox {Sr}\) | … | … |
| \(^{91}\hbox {Y}\) | … | … |
| \(^{95}\hbox {Zr}\) | … | … |
| \(^{95}\hbox {Nb}\) | … | … |
| \(^{188}\hbox {W}\) | \(\beta ^-\) to \(^{188}\hbox {Os}\) | … |
Modified from Table II in Wu et al. (2019b)
4.1.1 Red kilonova: lanthanide-bearing ejecta
All NS–NS mergers, and the fraction of BH–NS mergers in which the NS is tidally disrupted before being swallowed by the BH, will unbind at least some highly neutron-rich matter (\(Y_e \lesssim 0.25\)) capable of forming heavy r-process nuclei. This lanthanide-bearing high-opacity material resides within the equatorially-focused tidal tail, or in more spherical outflows from the accretion disk (Fig. 7, top panel). The disk outflows will contain a greater abundance of low-\(Y_e \lesssim 0.25\) material in NS–NS mergers if the BH formation is prompt or the HMNS phase short-lived (Fig. 7, top panel).
The left panel of Fig. 9 shows an example light curve of such a 'red' kilonova, calculated using the toy model assuming an ejecta mass \(M = 10^{-2}\,M_{\odot }\), opacity \(\kappa = 20\hbox { cm}^{2}\) \(\hbox {g}^{-1}\), minimum velocity \(v_0 = 0.1\) c, and velocity index \(\beta =3\), at an assumed distance of 100 Mpc. For comparison, dashed lines show light curves calculated from Barnes et al. (2016), based on a full one-dimensional radiative transfer calculation, for similar parameters. The emission is seen to peak at NIR wavelengths on a timescale of several days to a week at J and K bands (1.2 and 2.2 \(\upmu \)m, respectively).
One notable feature of the light curves calculated using full radiative transfer is the significant suppression of the emission in the UV/optical wavebands due to the high lanthanide opacity. Here, the assumption of a gray opacity made in the toy model results in an overestimation of the UV flux relative to that found by the full radiative transfer calculation (Barnes et al. 2016). This difference results in part because the true line opacity increases strongly moving to higher frequencies due to the higher density of lines in the UV (Fig. 8).
Kilonova light curves in AB magnitudes for a source at 100 Mpc, calculated using the toy model presented in Sect. 4, assuming a total ejecta mass \(M = 10^{-2}\,M_{\odot }\), minimum velocity \(v_0 = 0.1\) c, and gray opacity \(\kappa = 20\hbox { cm}^{2}\hbox { g}^{-1}\). The left panel shows a standard "red" kilonova, corresponding to ejecta bearing lanthanide elements, while the right panel shows a "blue" kilonova poor in lanthanides (\(\kappa = 1\hbox { cm}^{2}\hbox { g}^{-1}\)). Shown for comparison in the red kilonova case with dashed lines are models from Barnes et al. (2016) for \(v = 0.1\) c and \(M = 10^{-2}\,M_{\odot }\). Depending on the relative speeds of the two components and the viewing angle of the observer, both red and blue emission components can be present in a single merger, originating from distinct portions of the ejecta (Fig. 7)
4.1.2 Blue kilonova: lanthanide-free ejecta
In addition to the highly neutron-rich ejecta (\(Y_e \lesssim 0.30\)), some of the matter unbound from a NS–NS merger may contain a lower neutron abundance (\(Y_e \gtrsim 0.30\)) and thus will be free of lanthanide group elements (e.g., Metzger and Fernández 2014; Perego et al. 2014; Wanajo et al. 2014). This low-opacity ejecta can reside either in the polar regions, due to dynamical ejection from the NS–NS merger interface, or in more isotropic outflows from the accretion disk (e.g., Miller et al. 2019). The quantity of high-\(Y_e\) matter will be greatest in cases when BH formation is significantly delayed relative to the lifetime of the accretion disk due to the strong neutrino luminosity of the NS remnant (Fig. 7, right panel).
The right panel of Fig. 9 shows a model otherwise identical to that in the left panel, but which assumes a lower opacity \(\kappa = 1\hbox { cm}^{2}\hbox { g}^{-1}\) more appropriate to lanthanide-free ejecta. The emission now peaks at the visual bands R and I, on a timescale of about 1 day at a level 2–3 magnitudes brighter than the lanthanide-rich case. This luminous, fast-evolving visual signal was key to the discovery of the kilonova counterpart of GW170817 (Sect. 5).
4.1.3 Mixed blue + red kilonova
In general, the total kilonova emission can be thought of as a combination of distinct 'blue' and 'red' components. This is because both high- and low-\(Y_e\) ejecta components can be simultaneously visible following a merger, particularly for viewing angles close to the binary rotation axis (Fig. 7). For viewers closer to the equatorial plane, the blue emission may in some cases be blocked by the high-opacity lanthanide-rich tidal ejecta (Kasen et al. 2015). Thus, although the presence of a days to week-long NIR transient is probably a generic feature of all mergers, the early blue kilonova phase might only be visible or prominent in a fraction of events. On the other hand, if the blue component expands faster than the tidal ejecta (or the latter is negligibly low in mass—e.g., for an equal-mass merger), the early blue emission may be visible from a greater range of angles than just pole-on (e.g., Christie et al. 2019).
It has become common practice following GW170817 to model the total kilonova light curve by adding independent 1D blue (low \(\kappa \)) and red (high \(\kappa \)) models on top of one another (e.g., Villar et al. 2017), i.e. neglecting any interaction between the ejecta components. While extremely useful for obtaining qualitative insight, in detail this assumption will result in quantitative errors in the inferred ejecta properties (e.g., Kasen et al. 2017; Wollaeger et al. 2018). With well-sampled photometry (in both time and frequency), the total ejecta mass should be reasonably well-measured: once the ejecta has become effectively transparent at late times the bolometric luminosity directly traces the radioactive energy input.
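As an illustration of this practice, the summed light curve can be emulated with two independent runs of the toy-model sketch of Sect. 4 (assuming the `toy_kilonova` function and the `MSUN`, `C` constants defined there are in scope; the parameter values below are purely illustrative and loosely follow the opacities of Table 4):

```python
# Illustrative "blue + red" sum of two independent toy-model runs; both
# runs share the same default time grid, so the luminosities add directly.
lc_blue = toy_kilonova(M=0.02 * MSUN, v0=0.25 * C, kappa=0.5)
lc_red  = toy_kilonova(M=0.04 * MSUN, v0=0.15 * C, kappa=10.0)
L_total = lc_blue[:, 1] + lc_red[:, 1]
```

This simple superposition neglects, by construction, the radiative coupling between the components discussed next.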
However, at early times when the ejecta is still opaque, the radial and angular structure of the opacity (i.e., lanthanide abundance \(X_{\mathrm{La}}\)) can couple distinct ejecta components in a way not captured by combining two independent 1D models (e.g., Kasen et al. 2015). Radial dependence of \(X_{\mathrm{La}}\) (e.g., due to a \(Y_e\) gradient) is straightforward to implement in the toy model through a mass shell-dependent value of \(\kappa _v\) (Eq. 11). If \(X_{\mathrm{La}}\) increases with radius (i.e. if the "red" ejecta resides physically outside of the "blue"), then in principle even a low amount of red (high-opacity) fast material can 'reprocess' the radioactive luminosity generated from a much greater mass of blue (low-opacity) slower material residing behind it. Kawaguchi et al. (2018) found that this could in principle lead to an over-estimate of the quantity of blue ejecta if one models the light curve by simply adding independent red and blue components. For these reasons, caution must be taken in naively adding 'blue' and 'red' models and a detailed analysis must take into account not just photometric light curve information, but also spectral features (in the above example, for instance, reprocessing by an outer thin layer of lanthanide-rich matter would generate strong UV line blanketing). The geometry of the blue and red components of the ejecta, and the observer viewing angle, are also in principle distinguishable by their relative levels of polarization (Covino et al. 2017; Matsumoto 2018; Bulla et al. 2019).
5 GW170817: the first LIGO NS–NS merger
As introduced in Sect. 1, the termination of the GW inspiral from LIGO/Virgo's first NS–NS merger GW170817 (Abbott et al. 2017b) was followed within seconds by a short GRB (Goldstein et al. 2017; Savchenko et al. 2017; Abbott et al. 2017d). Roughly 11 hours later, a luminous optical counterpart, dubbed AT2017gfo, was discovered in the galaxy NGC 4993 at a distance of only \(\approx 40\) Mpc (Coulter et al. 2017; Soares-Santos et al. 2017; Arcavi et al. 2017a; Díaz et al. 2017; Hu et al. 2017; Lipunov et al. 2017; Valenti et al. 2017; Troja et al. 2017; Kilpatrick et al. 2017; Smartt et al. 2017; Drout et al. 2017; Evans et al. 2017; Abbott et al. 2017c; McCully et al. 2017; Buckley et al. 2018; Utsumi et al. 2017; Covino et al. 2017). Table 6 summarizes a few key properties of GW170817 as inferred from its GW/EM emission.
Key properties of GW170817
| Quantity | Value |
| Chirp mass, \({\mathcal {M}}_{\mathrm{c}}\) (rest frame) | \(1.188^{+0.004}_{-0.002}\,M_{\odot }\) |
| First NS mass, \(M_{\mathrm{1}}\) | 1.36–\(1.60\,M_{\odot }\) (90%) |
| Second NS mass, \(M_{\mathrm{2}}\) | … |
| Total mass, \(M_{\mathrm{tot}} = M_{1}+M_{2}\) | \(\approx 2.74^{+0.04}_{-0.01}\,M_{\odot }\) |
| Observer angle to orbital axis, \(\theta _{\mathrm{obs}}\) | 19–\(42^{\circ }\) (90%) |
| Blue KN ejecta (\(A_{\mathrm{max}} \lesssim 140\)) | \(\approx 0.01\)–\(0.02\,M_{\odot }\) |
| Red KN ejecta (\(A_{\mathrm{max}} \gtrsim 140\)) | … |
| Light r-process yield (\(A \lesssim 140\)) | … |
| Heavy r-process yield (\(A \gtrsim 140\)) | \(\approx 0.01\,M_{\odot }\) |
| Energy of GRB jet | \(\sim 10^{49}\)–\(10^{50}\) erg |
| ISM density | \(\sim 10^{-5}\)–\(10^{-3}\mathrm {\ cm}^{-3}\) |
[1] Abbott et al. (2017b); [2] Finstad et al. (2018); Mooley et al. (2018); [3] e.g., Nicholl et al. (2017); Kasen et al. (2017); [4] e.g., Chornock et al. (2017); Kasen et al. (2017); [5] e.g., Hallinan et al. (2017); Alexander et al. (2017); [6] e.g., Nicholl et al. (2017); Kasen et al. (2017); [7] e.g., Margutti et al. (2018); Mooley et al. (2018)
The timeline of the discovery was recounted in the capstone paper written jointly by LIGO/Virgo and astronomers involved in the EM follow-up (Abbott et al. 2017c) and will not be recounted here. In limiting the scope of our discussion, we also do not address the host galaxy and environment of the merger and its implication for NS–NS merger formation channels (Blanchard et al. 2017; Hjorth et al. 2017; Im et al. 2017; Levan et al. 2017; Pan et al. 2017), nor shall we discuss the plethora of other science opportunities the kilonova enabled (e.g., H0 cosmology; Abbott et al. 2017a). We also do not touch upon inferences about the GRB jet and its connection to the observed prompt gamma-ray and non-thermal afterglow emission (e.g., Bromberg et al. 2018; Kasliwal et al. 2017; Gottlieb et al. 2018; Murguia-Berthier et al. 2017a; Salafia et al. 2018; Xiao et al. 2017), though possible connections between the jet and the early kilonova emission will be discussed in Sect. 6.1.2.
5.1 The kilonova
AT2017gfo started out blue in color, with a featureless thermal spectrum that peaked at UV frequencies (e.g., Nicholl et al. 2017; McCully et al. 2017; Evans et al. 2017), before rapidly evolving over the course of a few days to become dominated by emission with a spectral peak in the near-infrared (NIR) (Chornock et al. 2017; Pian et al. 2017; Tanvir et al. 2017). Although early blue colors are not uncommon among astrophysical transients (most explosions start hot and thereafter cool from expansion), the very fast evolution of AT2017gfo was completely unlike that seen in any previously known extra-galactic event, making its connection to GW170817 of high significance (even before folding in theoretical priors on the expected properties of kilonovae). Simultaneous optical (e.g., Nicholl et al. 2017; Shappee et al. 2017) and NIR (Chornock et al. 2017) spectra around day 2.5 appeared to demonstrate the presence of distinct optical and NIR emission components. Smartt et al. (2017) observed absorption features in the spectra around 1.5 and 1.75 \(\upmu \)m which they associated with features of Cs i and Te i (light r-process elements). Recently, Watson et al. (2019) made a more convincing case for the presence of absorption lines of Sr ii in multiple epochs, which they furthermore point out is one of the more abundant elements generated by the r-process (despite Sr predominantly being formed in the s-process). In broad brush, the properties of the optical/NIR emission agreed remarkably well with those predicted for an r-process-powered kilonova (Li and Paczyński 1998; Metzger et al. 2010b; Roberts et al. 2011; Barnes and Kasen 2013; Tanaka and Hotokezaka 2013; Grossman et al. 2014; Martin et al. 2015; Tanaka et al. 2017; Wollaeger et al. 2018; Fontes et al. 2017), a conclusion reached nearly unanimously by the community (e.g., Kasen et al. 2017; Drout et al. 2017; Tanaka et al. 2017; Kasliwal et al. 2017; Murguia-Berthier et al. 2017a; Waxman et al. 2018). In discussing the interpretation of AT2017gfo, we start with the most basic and robust inferences that can be made, before moving onto areas where there is less universal agreement.
Bolometric luminosity of the kilonova AT2017gfo associated with GW170817 from Smartt et al. (2017) with uncertainties derived from the range given in the literature (Smartt et al. 2017; Waxman et al. 2018; Cowperthwaite et al. 2017; Arcavi 2018). Also shown are lower limits (empty triangles) on the late-time luminosity as inferred from the Ks band with VLT/HAWK-I (Tanvir et al. 2017) (black) and the 4.5 \(\upmu \)m detections by the Spitzer Space Telescope from Villar et al. (2018; green) and Kasliwal et al. (2019; blue). Colored lines show the ejecta heating rate for models with different values for the ejecta mass and average electron fraction as follows: A (\(Y_e = 0.15\); \(M_{\mathrm{ej}} = 0.04\,M_{\odot }\)), B (\(Y_e = 0.25\); \(M_{\mathrm{ej}} = 0.04\,M_{\odot }\)), C (\(Y_e = 0.35\); \(M_{\mathrm{ej}} = 0.055\,M_{\odot }\)), D (\(Y_e = 0.45\); \(M_{\mathrm{ej}} = 0.03\,M_{\odot }\)). While models \(A-D\) assume the FRDM nuclear mass model (Möller et al. 1995), Model A1 (\(Y_e = 0.15\); \(M_{\mathrm{ej}} = 0.02\,M_{\odot }\)) uses the DZ31 nuclear mass model (Duflo and Zuker 1995). Their corresponding r-process abundance distributions at t = 1 day are shown in the inset. Thermalization is calculated following Kasen and Barnes (2019) for an assumed ejecta velocity of 0.1 c. The black solid (dashed) horizontal lines in the lower right corner represent the approximate observation limits of the NIR (MIR) instruments on the James Webb Space Telescope for a merger at 100 Mpc. Image reproduced with permission from Wu et al. (2019b), copyright by APS
Perhaps the first question one might ask is: What evidence exists that AT2017gfo was powered by r-process heating? and, if so, How much radioactive material was synthesized? Figure 10 from Wu et al. (2019b) shows the bolometric luminosity \(L_{\mathrm{bol}}(t)\) compiled from observations in the literature (Smartt et al. 2017; Cowperthwaite et al. 2017; Waxman et al. 2018; Arcavi 2018) compared to several distinct models for the time-dependent heating rate of r-process decay \({\dot{Q}}_r\) (Eq. 21), in which the authors have varied the mean \(Y_e\) of the ejecta contributing to the heating and the nuclear mass model, the latter being one of the biggest nuclear physics uncertainties.
A first takeaway point is the broad similarity between the observed \(L_{\mathrm{bol}}(t)\) evolution and the power-law-like decay predicted by the decay of a large ensemble of r-process isotopes (Metzger et al. 2010b). Furthermore, the total ejecta mass one requires to match the normalization of \(L_{\mathrm{bol}}\) varies with the assumptions, ranging from \(\approx 0.02\,M_{\odot }\) (\(Y_e= 0.15\); DZ31 mass model) to \(0.06\,M_{\odot }\) (\(Y_e = 0.35\); FRDM mass model). This range broadly agrees with that reported by independent groups modeling GW170817 (see Côté et al. 2018 for a compilation). It is also entirely consistent with the range of ejecta masses predicted from NS–NS mergers (Sect. 3.1), as we elaborate further below. Although non-r-process powered explanations for AT2017gfo can be constructed (e.g., invoking magnetar power; Sect. 6.2.2), they require several additional assumptions and thus are disfavored by Ockham's razor.
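The order of magnitude of these mass estimates can be recovered by inverting Eq. (21) with the late-time heating rate of Eq. (23) at the first observed epoch (\(L_{\mathrm{bol}} \approx 10^{42}\hbox { erg s}^{-1}\) at \(t \approx 11\) hr; see the Fig. 12 caption below), ignoring the diffusion and thermalization corrections applied in the published fits; this is a rough cross-check only:

```python
# Order-of-magnitude ejecta mass implied by the observed luminosity,
# assuming the luminosity tracks the r-process heating rate of Eq. (23).
MSUN, DAY = 1.989e33, 86400.0

def M_ej_estimate(L_bol=1e42, t=11.0 / 24.0 * DAY, eps_th=0.5):
    edot = 2e10 * eps_th * (t / DAY) ** -1.3   # erg/s/g, Eq. (23)
    return L_bol / edot / MSUN                 # in Msun

print(f"M_ej ~ {M_ej_estimate():.2f} Msun")    # ~0.02 Msun
```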
If the yield of r-process elements in GW170817 is at all representative of that of NS–NS mergers in the Universe as a whole (as supported by the similarity of its GW-inferred properties to those of the Galactic binary NS population; e.g., Zhao and Lattimer 2018), then, even adopting the lowest NS–NS merger rate currently allowed by LIGO/Virgo of \(\sim 100\) \(\hbox {Gpc}^{-3}\) \(\hbox {yr}^{-1}\), an order-of-magnitude estimate (Eq. 3) leads to the conclusion that NS–NS mergers are major sources of r-process elements in the universe (e.g., Kasen et al. 2017; Côté et al. 2018). However, given large current uncertainties on the Galactic rate of NS–NS mergers and the precise abundance distribution synthesized in GW170817, it cannot yet be established that mergers are the exclusive, or even dominant, r-process site (see discussion in Sect. 2.1).
With the production and ejection of at least a few hundredths of a solar mass of neutron-rich elements established, the next question is the detailed nature of its composition. Specifically, which r-process elements were formed? Figure 11 shows a compilation of photometric data from the literature on AT2017gfo by Villar et al. (2017). The blue/UV bands (e.g., F225W, F275W) fade rapidly from the first observation at 11 hours, while the NIR bands (e.g., JHK) show a much flatter decay over the first week. The early-time blue emission suggests that the outermost layers of the merger ejecta (at least those dominating the observed emission) are composed of light r-process material with a low opacity (blue kilonova; Sect. 4.1.2) synthesized from merger ejecta with a relatively high12 electron fraction, \(Y_{e} \gtrsim 0.25\). The more persistent late NIR emission instead requires matter with higher opacity, consistent with the inner ejecta layers containing at least a moderate amount of lanthanide or actinide elements (red kilonova; Sect. 4.1.1).
Motivated by the theoretical prediction of distinct lanthanide-free and lanthanide-rich ejecta components (e.g., Metzger and Fernández 2014), many groups interpreted AT2017gfo using mixed models described in the previous section, comprised of 2 or 3 separate ejecta components with different lanthanide abundances (e.g., Kasen et al. 2017; Tanaka et al. 2017; Drout et al. 2017; Kasliwal et al. 2017; Perego et al. 2017; however, see Waxman et al. 2018). As one example, the solid lines in Fig. 11 show a best-fit model from Villar et al. (2018) based on the sum of three spherical gray-opacity kilonova models ("blue", "purple", "red") with respective opacities \(\kappa = (0.5,3,10)\hbox { cm}^{2}\hbox { g}^{-1}\) (similar to those given in Table 4) and from which they infer for the respective components ejecta masses \(M_{\mathrm{ej}} \approx (0.02,0.047,0.011)\,M_{\odot }\) and mean velocities \(v_{\mathrm{ej}} \approx (0.27,0.15,0.14) {\hbox {c}}\). Mapping the opacities back to electron fractions using e.g., Table 4, one infers that most of the ejecta possessed intermediate values of \(Y_e \approx 0.25-0.35\) which generated elements up to the second r-process peak. Smaller quantities of the ejecta had \(Y_e \gtrsim 0.4\) or \(Y_e \lesssim 0.25\), the latter containing a sufficient neutron abundance to produce some lanthanide elements (\(A \gtrsim 140\)) if not nuclei extending up to the third r-process peak (\(A \sim 195\)) or beyond. Despite the unprecedented data set available for AT2017gfo, it is unfortunately not possible to reconstruct the detailed abundance pattern synthesized, for instance to test its consistency with that observed in metal-poor stars or in our solar system (e.g., Hotokezaka et al. 2018).
UVOIR light curves of AT2017gfo from the data set compiled by Villar et al. (2017), along with a best-fit spherically symmetric three-component kilonova model (see text). The data in this figure were originally presented in (Andreoni et al. 2017; Arcavi et al. 2017b; Coulter et al. 2017; Cowperthwaite et al. 2017; Díaz et al. 2017; Drout et al. 2017; Evans et al. 2017; Hu et al. 2017; Kasliwal et al. 2017; Lipunov et al. 2017; Pian et al. 2017; Shappee et al. 2017; Smartt et al. 2017; Tanvir et al. 2017; Troja et al. 2017; Utsumi et al. 2017; Valenti et al. 2017). Image reproduced with permission from Villar et al. (2017), copyright by the authors
From the inferred ejecta mass, velocity, and \(Y_e\) distribution, the next question is: during what phase or phases of the merger and its aftermath was this material released? One thing is clear: the dynamical ejecta alone is insufficient. Fig. 5 shows that the total ejecta mass \(\gtrsim 0.02\,M_{\odot }\) exceeds the predicted dynamical ejecta from essentially all NS–NS merger simulations published to date, while the average velocity of the bulk of the lower-\(Y_e\) ejecta \(\approx 0.1\) c is also significantly less than predicted for the dynamical ejecta. The bulk of the ejecta, particularly the redder low-\(Y_e\) component, is instead most naturally explained as an outflow from the remnant accretion torus created around the central compact object following the merger (Sect. 3.1.2). GRMHD simulations of the post-merger disk evolution demonstrate that \(\approx 40\%\) of the initial mass of the torus (i.e. up to \(\sim 0.08\,M_{\odot }\) in wind ejecta for initial disk masses up to \(\approx 0.2\,M_{\odot }\) predicted by simulations) is unbound at an average velocity of \(v \approx 0.1\) c (e.g., Siegel and Metzger 2017; Fernández et al. 2019). The disk wind ejecta can contain a range of electron fractions (and thus produce blue or red emission), depending e.g., on the lifetime of the central NS remnant prior to BH formation (see Fig. 6).
Potential sources of the fast blue KN ejecta in GW170817
Table reproduced with permission from Metzger et al. (2018), copyright by the authors
| Source | Quantity? | Velocity? | \(Y_e\)? |
| Tidal tail dynamical | Maybe, if \(q \lesssim 0.7^\mathrm{a}\) | \(\checkmark \) | … |
| Shock-heated dynamical | Maybe, if \(R_{1.6} \lesssim 11\mathrm {\ km}^\mathrm{b}\) | … | \(\checkmark \) if NS long-lived |
| Accretion disk outflow | \(\checkmark \) if torus massive | … | … |
| HMNS \(\nu \)-driven wind | … | … | Too high? |
| Spiral wave wind | … | … | … |
| Magnetized HMNS wind | … | … | … |
\(^\mathrm{a}\)Where \(q \equiv M_{1}/M_{2}\) and \(M_{1}, M_{2}\) are the individual NS masses (Dietrich et al. 2017b; Dietrich and Ujevic 2017; Gao et al. 2017)
\(^\mathrm{b}\)However, a small NS radius may be in tension with the creation of a large accretion disk needed to produce the red KN ejecta (Radice et al. 2018c)
The physical source of the lanthanide-poor ejecta (\(Y_e \gtrsim 0.35\)) responsible for powering the early-time blue emission is more open to debate. Table 7 summarizes several possible origins, along with some of their pros and cons. The high velocities \(v_{\mathrm{blue}} \approx 0.2{-}0.3\) c and composition (\(Y_{e} \gtrsim 0.25\)) broadly agree with predictions for the shock-heated dynamical ejecta (e.g., Oechslin and Janka 2006; Sekiguchi et al. 2016; Radice et al. 2016b). However, one concern is the large quantity \(M_{\mathrm{blue}} \gtrsim 10^{-2}\,M_{\odot }\), which again is higher than the total predicted by most merger simulations (Fig. 5), especially considering that only a fraction of the dynamical ejecta will possess a high \(Y_e\). If the blue ejecta is dynamical in origin, this could provide evidence for a small value of the NS radius (Nicholl et al. 2017, Sect. 5.2) because the quantity of shock-heated ejecta appears to grow with the NS compactness (Bauswein et al. 2013b).
The highest velocity tail of the kilonova ejecta might not be an intrinsic property, but instead the result of shock-heating of an originally slower ejecta cloud by a relativistic jet created following some delay after the merger (e.g., Bucciantini et al. 2012; Duffell et al. 2015; Gottlieb et al. 2018; Kasliwal et al. 2017; Bromberg et al. 2018; Piro and Kollmeier 2018; Sect. 6.1.2). However, even models which invoke an early component of cocoon emission (Kasliwal et al. 2017) require a radioactive-powered component of emission which dominates after the first 11 hours that contains a large mass \(\sim 10^{-2}\,M_{\odot }\) of low-opacity (high \(Y_e\)) matter (stated another way, the observations do not require an early extra emission component beyond radioactivity; see Fig. 12). Further disfavoring a jet-related origin for the blue kilonova ejecta is that the kinetic energy of the latter \(M_{\mathrm{blue}}v_{\mathrm{blue}}^{2}/2 \approx 10^{51}\) erg exceeds the kinetic energies of cosmological short gamma-ray bursts (\(\approx 10^{49}{-}10^{50}\) erg; Nakar 2007; Berger 2014), and that of the off-axis jet specifically required to fit the off-axis afterglow of GW170817 (e.g., Margutti et al. 2018), by a large factor \(\sim 10{-}100\).
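A back-of-the-envelope check of this kinetic-energy comparison, as a minimal sketch using representative values from the text (the exact mass and velocity chosen here are illustrative):

```python
# Kinetic energy of the blue kilonova ejecta vs. short-GRB jet energies.
M_sun = 1.989e33           # g
c = 2.998e10               # cm/s

M_blue = 2e-2 * M_sun      # ~10^-2 Msun of fast blue ejecta (illustrative)
v_blue = 0.25 * c          # representative of the inferred 0.2-0.3 c

E_kin = 0.5 * M_blue * v_blue**2
print(f"E_kin(blue) ~ {E_kin:.1e} erg")                        # ~1e51 erg
print(f"ratio to 1e49-1e50 erg jets: ~{E_kin/1e50:.0f}-{E_kin/1e49:.0f}x")
```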
Bolometric kilonova light curve during the first few hours of a NS–NS merger, calculated for several model assumptions that can reproduce the measured luminosity \(L_{\mathrm{bol}} \approx 10^{42}\) erg \(\hbox {s}^{-1}\) of AT2017gfo at \(t \approx 11\) hr (blue uncertainty bar; e.g., Arcavi et al. 2017b; Cowperthwaite et al. 2017; Drout et al. 2017). Black solid lines show how r-process only models change with the assumed timescale \(t_0 = 0.01, 0.1, 1\) s at which the outer ejecta was last "thermalized", i.e. endowed with an internal thermal energy comparable to its asymptotic kinetic energy (at \(t \gtrsim t_0\), the ejecta is heated solely by r-process radioactivity in these models). A small value of \(t_0 \sim 0.01\) s corresponds to a dynamical ejecta origin with no additional heating, while a large value of \(t_0 \sim 0.1{-}1\) s represents the case of a long-lived engine (GRB jet, magnetar wind or accretion disk outflow) which re-heats the ejecta on a timescale \(\sim t_0\). We adopt parameters \(\beta = 3\), \(v_0 = 0.25\,{\hbox {c}}\), \(M = 0.025\,M_{\odot }\), \(\kappa = 0.5\hbox { cm}^{2}\hbox { g}^{-1}\) (except for the \(t_0 = 1\) s case, for which \(M = 0.015\,M_{\odot }\)). Red dashed lines show models with \(t_0 = 0.01\) s but for which the outer layer of mass \(M_n\) is assumed to contain free neutrons instead of r-process nuclei (a model similar to those shown in Fig. 14). Note that the early-time signatures of neutron decay are largely degenerate with late-time shock re-heating of the ejecta. Image reproduced with permission from Metzger et al. (2018), copyright by the authors
Metzger et al. (2018) proposed an alternative source for the blue kilonova ejecta: a magnetized wind which emerges from the HMNS remnant \(\approx 0.1{-}1\) s prior to its collapse to a BH. While the HMNS remnant was proposed as a potential ejecta source for GW170817 (e.g., Evans et al. 2017), the velocity and mass-loss rate of purely neutrino-powered winds (Dessart et al. 2009; Perego et al. 2014) are insufficient to explain those of the observed kilonova. Metzger et al. (2018) emphasize the role that strong magnetic fields play in increasing the mass-loss rate and velocity of the wind through centrifugal slinging (similar to models of magnetized winds from ordinary stars; Belcher and MacGregor 1976). Using a series of 1D wind models, they found that a temporarily stable magnetar remnant with a surface field strength \(B \approx 1{-}3\times 10^{14}\) G can naturally produce the mass, velocity, and composition of the blue kilonova ejecta in GW170817. We return to the role that much longer-lived magnetar remnants can play in the kilonova emission in Sect. 6.2.2.
Recently, using numerical relativity simulations which include approximate neutrino transport and a treatment of the effects of turbulent viscosity in the disk, Nedora et al. (2019) found that spiral density waves generated in the post-merger accretion disk by the central HMNS remnant can lead to the ejection of \(\sim 10^{-2}\,M_{\odot }\) in matter with \(Y_e \gtrsim 0.25\) and velocity \(\sim \)0.15–0.2 c. An open question is whether such spiral waves behave similarly, and can produce ejecta with sufficiently high velocities \(\gtrsim 0.2\) c to explain AT2017gfo, even in the physical case in which the magneto-rotational instability operates simultaneously in the disk. If so, this mechanism would provide an additional promising source of blue kilonova ejecta from a moderately long-lived HMNS.
5.2 Inferences about the neutron star equation of state
GW170817 provided a wealth of information on a wide range of astrophysical topics. One topic closely connected to the focus of this review is the new constraints it enabled on the equation of state (EOS) of nuclear-density matter, which is responsible for determining the internal structure of the NS and setting its key properties, such as its radius and maximum stable mass \(M_{\mathrm{TOV}}\) (Lattimer and Prakash 2016; Özel and Freire 2016).
Even absent a bright EM counterpart, the gravitational waveform can be used to measure or constrain the tidal deformability of the inspiraling stars, prior to their disruption, through the imprint of tidal effects on the phase evolution of the inspiral (e.g., Raithel et al. 2018; De et al. 2018; Abbott et al. 2018). Assuming two stars with the same EOS, observations of GW170817 were used to place limits on the radius of a \(1.6\,M_{\odot }\) NS of \(R_{1.6} = 10.8^{+2.0}_{-1.7}\) km (Abbott et al. 2018). Likewise, in BH–NS mergers, measurement of tidal interactions and of the cut-off GW frequency at which the NS is tidally disrupted by the BH provides an alternative method to measure NS radii (e.g., Kyutoku et al. 2011; Lackey et al. 2014; Pannarale 2013; Pannarale et al. 2015).
Unfortunately, the current generation of GW detectors is far less sensitive to the post-merger signal, and thus to the ultimate fate of the merger remnant, such as whether and when a BH is formed (as was true even for the high signal-to-noise event GW170817; Abbott et al. 2017e). Here, EM observations provide a complementary view. In a BH–NS merger, the presence or absence of an EM counterpart is informative about whether the NS was tidally disrupted and thus can be used to measure its compactness (e.g., Ascenzi et al. 2019). As discussed in Sect. 3.1 and summarized in Table 3, the type of compact remnant which is created by a NS–NS merger (prompt collapse, HMNS, SMNS, or stable NS) depends sensitively on the total binary mass \(M_{\mathrm{tot}}\) relative to various threshold masses, which depend on unknown properties of the EOS, particularly \(M_{\mathrm{TOV}}\) and \(R_{1.6}\) (Fig. 13). Thus, if one can infer the type of remnant produced in a given merger from the EM counterpart, e.g., the kilonova or GRB emission, then by combining this with the GW-measured value of \(M_\mathrm{tot}\) one can constrain the values of \(M_{\mathrm{TOV}}\) and/or \(R_{1.6}\).
The four possible outcomes of a NS–NS merger depend on the total binary mass relative to various threshold masses, each of which is proportional to the maximum mass, \(M_\mathrm{TOV}\), of a non-rotating NS (Table 3). Prompt BH formation or a short-lived HMNS generates ejecta with a relatively low kinetic energy \(\sim 10^{50}{-}10^{51}\) erg (energy stored in the differential rotation of the HMNS remnant can largely be dissipated as heat and thus lost to neutrinos). By contrast, the delayed formation of a BH through spin-down of a SMNS or stable remnant takes place over longer, secular timescales and must be accompanied by the release of substantial rotational energy \(\sim 10^{52}{-}10^{53}\) erg. Unless effectively "hidden" through GW emission, a large fraction of this energy will be transferred to the ejecta kinetic energy (and, ultimately, the ISM forward shock), thus producing a more luminous kilonova and synchrotron afterglow than for a short-lived remnant
Figure credit: Ben Margalit
In GW170817, the large quantity of ejecta \(\gtrsim 0.02\,M_{\odot }\) inferred from the kilonova, and its high electron fraction, strongly disfavored that the merger resulted in a prompt (\(\sim \) dynamical timescale) collapse to a BH. Given that the threshold for prompt collapse depends on the NS compactness (Bauswein et al. 2013a), this enabled Bauswein et al. (2017) to place a lower limit of \(R_{1.6} \gtrsim 10.3{-}10.7\) km (depending on the conservativeness of their assumptions). Radice et al. (2018c) came to a physically-related conclusion (that GW170817 produced a large ejecta mass not present in the case of prompt BH formation), but expressed their results as a lower limit on tidal deformability instead of \(R_{1.6}\) (see also Coughlin et al. 2019 for a joint Bayesian analysis of the EM and GW data).
Going beyond the inference that GW170817 initially formed a NS remnant instead of a prompt collapse BH to infer the stability and lifetime of the remnant becomes trickier. Nevertheless, several independent arguments can be made which, taken together, strongly suggest that the remnant was a relatively short-lived HMNS (\(t_\mathrm{collapse} \lesssim 0.1{-}1\) s), rather than a SMNS or indefinitely-stable NS (Margalit and Metzger 2017; Granot et al. 2017; Bauswein et al. 2017; Perego et al. 2017; Rezzolla et al. 2018; Ruiz et al. 2018; Pooley et al. 2018).
The presence of a significant quantity of lanthanide-rich disk wind ejecta, as inferred from the presence of red kilonova emission, is in tension with the \(Y_e\) distribution predicted had the merger remnant survived longer than several hundred milliseconds (Metzger and Fernández 2014; Lippuner et al. 2017; see Fig. 6).
The kinetic energies of the kilonova ejecta (\(\sim 10^{51}\) erg) and of the off-axis gamma-ray burst jet inferred from the X-ray/radio afterglow (\(\sim 10^{49}{-}10^{50}\) erg) fall short, by a large factor \(\gtrsim 10{-}100\), of the rotational energy that would necessarily be released for a SMNS or stable NS remnant to collapse to a BH (Fig. 13; see further discussion in Sect. 6.2.2).
The formation of an ultra-relativistic GRB jet on a timescale of \(\lesssim 1\) s after the merger is believed by many to require a clean polar funnel only present above a BH (e.g., Murguia-Berthier et al. 2017b; see Sect. 6.2.2 for discussion of this point). All indications from the GW170817 afterglow point to the presence of an off-axis jet with properties consistent with the cosmological short GRB population (e.g., Wu and MacFadyen 2019).
The magnetar spin-down luminosity could power temporally-extended X-ray emission minutes to days after the merger (Sect. 6.2.2); however, the observed X-rays from GW170817 are completely explained by the GRB afterglow without excess emission from a long-lived central remnant being required (Margutti et al. 2017; Pooley et al. 2018; however, see Piro et al. 2019).
Taking the exclusion of a SMNS remnant in GW170817 for granted, and combining this inference with the measured binary mass \(M_{\mathrm{tot}} = 2.74^{+0.04}_{-0.01}\,M_{\odot }\) (Abbott et al. 2017b) from the GW signal, Margalit and Metzger (2017) place an upper limit on the TOV mass of \(M_{\mathrm{TOV}} \lesssim 2.17\,M_{\odot }\) (see also Shibata et al. 2017; Rezzolla et al. 2018; Ruiz et al. 2018; however see Shibata et al. 2019, who argue for a more conservative constraint of \(M_{\mathrm{TOV}} \lesssim 2.30\,M_{\odot }\)). Stated another way, if \(M_{\mathrm{TOV}}\) were much higher than this limit, one would expect the remnant of GW170817 to have survived longer and produced an EM signal markedly different than the one observed. If this result holds up to further scrutiny, it provides the most stringent upper limit on \(M_{\mathrm{TOV}}\) currently available (and one in possible tension with the high NS masses \(\approx 2.4\,M_{\odot }\) suggested for some so-called "black widow" pulsars; e.g., Romani et al. 2015).
The above methods for constraining the NS EOS from pure GW or joint EM/GW data come with systematic uncertainties (most yet to be quantified), albeit different ones than afflict current EM-only methods. However, there is reason to hope these will improve with additional modeling and observations. For instance, if our current theoretical understanding of how the outcome of a NS–NS merger varies with the in-going binary properties (\(M_{\mathrm{tot}}\), q) is correct (Fig. 4), these predicted trends should be verifiable from a sample of future kilonova/afterglow observations (Sect. 7.3). Margalit and Metzger (2019) show that \(\sim \) 10 joint EM–GW detections of sufficient quality to accurately ascertain the merger outcome could constrain the values of \(R_{1.6}\) and \(M_{\mathrm{TOV}}\) to the several percent level, where systematic effects are certain to dominate the uncertainties. The future is bright for EM–GW joint studies of the NS EOS, in tandem with improvements in our ability to understand and even predict the diverse outcomes of NS–NS or BH–NS merger events (Sect. 7.3).
6 Diversity of kilonova signatures
The previous section described the most "vanilla" models of red/blue kilonovae powered exclusively by r-process heating and how the thermal UVOIR emission following GW170817 could be adequately described in this framework. This section explores additional sources of emission which are theoretically predicted by some models but have not yet been observed (at least unambiguously). Either these emission sources were not accessible in GW170817 due to observational limitations, or they could be ruled out in this event but nevertheless may accompany future NS–NS or NS–BH mergers (e.g., for different masses of the in-going binary stars). Though some of these possibilities remain speculative, their consideration is nevertheless useful to define future observational goals and to inform search strategies regarding just how different future mergers could appear from GW170817.
6.1 The first few hours
The UV/optical counterpart of GW170817 was discovered roughly 11 hours after the two stars merged. This delay was largely due to the event taking place over the Indian Ocean, rendering its sky position initially inaccessible to the majority of ground-based follow-up telescopes. However, roughly half of future mergers should take place in the northern hemisphere (above the LIGO detectors) which improves the chances of rapid optical follow-up, potentially within hours or less from the time of coalescence (e.g., Kasliwal and Nissanke 2014). A future wide-field UV satellite (or fleet of satellites) able to rapidly cover GW event error regions could revolutionize the early-time frontier.
Kilonovae are powered by the outwards diffusion of thermal radiation. The earliest time emission therefore probes the fastest, outermost layers of the ejecta. In the most simple-minded and conservative scenario, these layers are also heated by r-process decay, rendering the early-time emission a simple continuation of the r-process kilonova to earlier times. However, due to the higher temperatures when the ejecta is more compact, the ionization states of the outer layers (and hence which elements dominate the line opacity) could differ markedly from those at later times. Full opacity calculations which extend to thermodynamic conditions appropriate to the first few hours of the transient are a necessary ingredient to making more accurate predictions for this early phase (e.g., Tanaka et al. 2019).
However, it is also possible that the outermost layers of the ejecta are even hotter—and thus more luminous—than expected from r-process heating alone. Additional sources of early-time heating include: (1) radioactive decay of free neutrons which may be preferentially present in the fast outer layers (Sect. 6.1.1) or (2) the delayed passage through the ejecta by a relativistic jet or wide-angle outflow (Sect. 6.1.2). The tail-end of such an extra emission component could in principle have contributed to the earliest epochs of optical/UV emission from GW170817 (Arcavi 2018), though the available data is fully consistent with being powered exclusively by r-process heating.
6.1.1 Neutron precursor emission
The majority of the ejecta from a NS–NS merger remains sufficiently dense during its decompression from nuclear densities that all neutrons are captured into nuclei during the r-process (which typically takes place seconds after matter is ejected). However, some NS–NS merger simulations find that a small fraction of the dynamical ejecta (typically a few percent, or \(\sim 10^{-4}\,M_{\odot }\)) can expand sufficiently rapidly that the neutrons in the ejecta do not have time to be captured into nuclei (Bauswein et al. 2013b), i.e., the r-process "freezes out". In the simulations of Bauswein et al. (2013b) this fast-expanding matter, which reaches asymptotic velocities \(v \gtrsim 0.5\) c, originates from the shocked interface between the merging stars and resides on the outermost layers of the polar ejecta (see also Ishii et al. 2018). Equally fast-expanding material could in principle be produced via other mechanisms which take place after the dynamical phase, e.g., the passage of a GRB jet through the ejecta or a magnetized wind from the HMNS remnant (see next section).
Free neutrons, if present in the outer ejecta layers, provide an order of magnitude greater specific heating rate than that produced by r-process nuclei on timescales of tens of minutes to hours (Fig. 3). Metzger et al. (2015) emphasized that such super-heating by a free neutron layer could substantially enhance the early kilonova emission (see also Kulkarni 2005).
An ejecta layer \(\delta M_v\) containing free neutrons experiences a radioactive heating rate of
$$\begin{aligned} {\dot{Q}}_{r,v} = \delta M_v X_{n,v}{\dot{e}}_n(t), \end{aligned}$$
where the initial mass fraction of neutrons,
$$\begin{aligned} X_{n,v} = \frac{2}{\pi } (1-2Y_e)\arctan \left( \frac{M_{n}}{M_v}\right) , \end{aligned}$$
is interpolated in a smooth (but otherwise ad-hoc) manner between the neutron-free inner layers at \(M \gg M_n\) and the neutron-rich outer layers \(M \ll M_n\), which have a maximum mass fraction of \(1- 2Y_e\). The specific heating rate due to neutron \(\beta \)-decay (accounting for energy loss to neutrinos) is given by
$$\begin{aligned} {\dot{e}}_n = 3.2\times 10^{14}\exp [-t/\tau _{n}]\,\mathrm{erg\,s^{-1}\,g^{-1}}, \end{aligned}$$
where \(\tau _n \approx 900\) s is the mean lifetime of the free neutron. The rising fraction of free neutrons in the outermost layers implies a correspondingly smaller fraction of r-process nuclei there, i.e., \(X_{r,v} = 1-X_{n,v}\) in calculating the r-process heating rate from Eq. (21).
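The free-neutron heating parametrization above is easy to evaluate numerically. The following is a minimal sketch (not the code used for the figures) implementing the arctan interpolation and \(\beta \)-decay heating rate from the equations above; the shell mass and evaluation time in the example are arbitrary illustrative choices:

```python
import numpy as np

M_sun = 1.989e33      # g
tau_n = 900.0         # s, mean lifetime of the free neutron (as used above)

def X_n(M_v, M_n=1e-4 * M_sun, Y_e=0.1):
    """Initial free-neutron mass fraction of the layer at exterior mass coordinate M_v."""
    return (2.0 / np.pi) * (1.0 - 2.0 * Y_e) * np.arctan(M_n / M_v)

def edot_n(t):
    """Specific heating rate (erg/s/g) from free-neutron beta decay, including neutrino losses."""
    return 3.2e14 * np.exp(-t / tau_n)

def Qdot_neutron(M_v, dM_v, t, **kwargs):
    """Heating rate (erg/s) of a shell of mass dM_v located at mass coordinate M_v."""
    return dM_v * X_n(M_v, **kwargs) * edot_n(t)

# Example: the outermost ~1e-5 Msun shell, half an hour after merger.
print(f"{Qdot_neutron(1e-5 * M_sun, 1e-5 * M_sun, 1800.0):.2e} erg/s")
```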
Figure 14 shows kilonova light curves, including an outer layer of neutrons of mass \(M_n = 10^{-4}\,M_{\odot }\) and electron fraction \(Y_e = 0.1\). In the left panel, we have assumed that the r-process nuclei which co-exist with the neutrons contain lanthanides, and hence would otherwise (absent the neutrons) produce a "red" kilonova. Neutron heating boosts the UVR luminosities on timescales of hours after the merger by a large factor compared to the otherwise identical case without free neutrons (shown for comparison with dashed lines). Even compared to the early emission predicted from otherwise lanthanide-free ejecta ("blue kilonova"), neutron decay increases the luminosity during the first few hours by a magnitude or more, as shown in the right panel of Fig. 14.
Kilonova light curves, including the presence of free neutrons in the outer \(M_{\mathrm{n}} = 10^{-4}\,M_{\odot }\) mass layers of the ejecta ("neutron precursor" emission), calculated for the same parameters of total ejecta mass \(M = 10^{-2}\,M_{\odot }\) and velocity \(v_0 = 0.1\) c used in Fig. 9. The left panel shows a calculation with an opacity appropriate to lanthanide-bearing nuclei, while the right panel shows an opacity appropriate to lanthanide-free ejecta. Models without a free neutron layer (\(M_{\mathrm{n}} = 0\); Fig. 9) are shown for comparison with dashed lines
How can such a small layer of neutrons have such a large impact on the light curve? The specific heating rate due to free neutrons \({\dot{e}}_n\) (Eq. 28) exceeds that due to r-process nuclei \({\dot{e}}_r\) (Eq. 22) by over an order of magnitude on timescales \(\sim 0.1{-}1\) hr after the merger. This timescale is also, coincidentally, comparable to the photon diffusion time from the inner edge of the neutron mass layer if \(M_{\mathrm{n}} \gtrsim 10^{-5}\,M_{\odot }\). Indeed, setting \(t_\mathrm{d,v} = t\) in Eq. (11), the emission from mass layer \(M_v\) peaks on a timescale
$$\begin{aligned}&t_{\mathrm{peak,v}} \approx \left( \frac{M_{v}^{4/3}\kappa _{v}}{4\pi M^{1/3} v_{0} c}\right) ^{1/2} \nonumber \\&\quad \approx 1.2\,\mathrm{hr}\left( \frac{M_v}{10^{-5}\,M_{\odot }}\right) ^{2/3}\left( \frac{\kappa _v}{10\,\mathrm{cm^{2}\,g^{-1}}}\right) ^{1/2}\left( \frac{v_0}{0.1\, \mathrm c}\right) ^{-1/2}\left( \frac{M}{10^{-2}\,M_{\odot }}\right) ^{-1/6} \nonumber \\ \end{aligned}$$
The total energy released by neutron decay is \(E_n \simeq \int {\dot{e}}_n M_{\mathrm{n}} dt \approx 6\times 10^{45}(M_\mathrm{n}/10^{-5}\,M_{\odot })\mathrm {\ erg}\) for \(Y_e \ll 0.5\). Following adiabatic losses, a fraction \(\tau _{\mathrm{n}}/t_{\mathrm{peak,v}} \sim 0.01{-}0.1\) of this energy is available to be radiated over a timescale \(\sim t_{\mathrm{peak,v}}\). The peak luminosity of the neutron layer is thus approximately
$$\begin{aligned}&L_{\mathrm{peak,n}} \approx \frac{E_n \tau _n}{t_{\mathrm{peak,v}}^{2}} \nonumber \\&\quad \approx 3\times 10^{42}\,\mathrm{erg\,s^{-1}}\left( \frac{M_{v}}{10^{-5}\,M_{\odot }}\right) ^{-1/3}\left( \frac{\kappa _v}{10\,\mathrm{cm^{2}\,g^{-1}}}\right) ^{-1}\left( \frac{v_0}{0.1\, \mathrm c}\right) \left( \frac{M}{10^{-2}\,M_{\odot }}\right) ^{1/3}, \nonumber \\ \end{aligned}$$
and hence is relatively insensitive to the mass of the neutron layer, \(M_{v} = M_{\mathrm{n}}\). This peak luminosity can be \(\sim 10{-}100\) times higher than that of the main r-process powered kilonova peak. The high temperature of the ejecta during the first hours of the merger will typically place the spectral peak in the UV, potentially even in cases when the free neutron-enriched outer layers contain lanthanide elements.
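These two scalings are simple enough to evaluate directly; here is a minimal sketch of Eqs. (29) and (30) with the same fiducial normalizations used in the text:

```python
def t_peak_hr(M_v=1e-5, kappa=10.0, v0=0.1, M=1e-2):
    """Peak time (hr) of the emission from the mass layer M_v (in Msun), Eq. (29)."""
    return 1.2 * (M_v / 1e-5)**(2.0 / 3.0) * (kappa / 10.0)**0.5 \
               * (v0 / 0.1)**-0.5 * (M / 1e-2)**(-1.0 / 6.0)

def L_peak_n(M_v=1e-5, kappa=10.0, v0=0.1, M=1e-2):
    """Peak luminosity (erg/s) of the free-neutron layer, Eq. (30)."""
    return 3e42 * (M_v / 1e-5)**(-1.0 / 3.0) * (kappa / 10.0)**-1.0 \
                * (v0 / 0.1) * (M / 1e-2)**(1.0 / 3.0)

# Fiducial case: M_n = 1e-5 Msun, kappa = 10 cm^2/g, v0 = 0.1 c, M = 1e-2 Msun.
print(f"t_peak ~ {t_peak_hr():.1f} hr, L_peak ~ {L_peak_n():.1e} erg/s")
```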
Additional theoretical and numerical work is needed to assess the robustness of the fast-moving ejecta and its abundance of free neutrons, which thus far has been seen in a single numerical code (Bauswein et al. 2013b). The freeze-out of the r-process, and the resulting abundance of free neutrons, is also sensitive to the expansion rate of the ejecta (Lippuner and Roberts 2015), which must currently be extrapolated from the merger simulations (running at most tens of milliseconds) to the much longer timescales of \(\sim \) 1 second over which neutrons would nominally be captured into nuclei. Figure 14 and Eq. (30) make clear that the neutron emission is sensitive to the opacity of the ejecta at early stages, when the temperatures and ionization states of the ejecta are higher than those employed in extant kilonova opacity calculations.
6.1.2 Shock re-heating (short-lived engine power)
The ejecta from a NS merger is extremely hot \(\gg 10^{10}\) K immediately after becoming unbound from the central remnant or accretion disk. However, due to PdV losses, the temperature drops rapidly \(\propto 1/R\) as the ejecta radius \(R = vt\) expands. Absent additional sources of heating, the internal energy decays in time as
$$\begin{aligned} e_0(t)\simeq & {} 0.68\frac{\rho ^{1/3}s^{4/3}}{a^{1/3}} \nonumber \\\approx & {} 4\times 10^{12}\,\mathrm{erg\,g^{-1}}\left( \frac{s}{20\mathrm{k_b\,b^{-1}}}\right) ^{4/3}\left( \frac{M}{10^{-2}\,M_{\odot }}\right) ^{1/3}\left( \frac{t}{1\,\mathrm day}\right) ^{-1}\left( \frac{v}{0.1\,\mathrm{c}}\right) ^{-1}, \nonumber \\ \end{aligned}$$
where the ejecta density has been set to its mean value \(\rho = 3M/(4\pi R^{3})\) and the entropy s normalized to a value 20 \(k_b\) per baryon typical of the shock-heated polar dynamical or disk wind ejecta (the unshocked tidal tail material can be even colder). As the ejecta expands, it receives heating from the decay of r-process nuclei at the rate given by Eq. (23). The thermal energy input from r-process decay on timescales \(\sim t\) is thus approximately given by
$$\begin{aligned} e_r \sim {\dot{e}}_r t \approx 9\times 10^{14}\left( \frac{t}{1\,\mathrm{day}}\right) ^{-0.3}\,\mathrm{erg\,g}^{-1} \end{aligned}$$
The key point to note is that \(e_r \gg e_0\) within minutes after the merger. This demonstrates why the initial thermal energy of the ejecta can be neglected in calculating the kilonova emission on timescales of follow-up observations of several hours to days (or, specifically, why the toy model light curves calculations presented thus far are insensitive to the precise initial value of \(E_v\)).
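To make this explicit, one can compare the two specific energies numerically; a minimal sketch using the normalizations of Eqs. (31) and (32):

```python
def e_initial(t_day, s=20.0, M=1e-2, v=0.1):
    """Residual specific thermal energy (erg/g) from the initial entropy, Eq. (31)."""
    return 4e12 * (s / 20.0)**(4.0 / 3.0) * (M / 1e-2)**(1.0 / 3.0) \
                / (t_day * (v / 0.1))

def e_rprocess(t_day):
    """Specific energy (erg/g) deposited by r-process decay on timescales ~t, Eq. (32)."""
    return 9e14 * t_day**-0.3

for t_day in (1e-3, 1e-2, 0.1, 1.0):   # ~1.4 min to 1 day
    print(f"t = {t_day:5.3f} d : e_0 ~ {e_initial(t_day):.1e}, e_r ~ {e_rprocess(t_day):.1e} erg/g")
# e_r already exceeds e_0 at t ~ a few minutes, and the gap widens as t^0.7.
```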
However, the early-time luminosity of the kilonova could be substantially boosted from this naive expectation if the ejecta is re-heated at large radii, well after its initial release (i.e. if the ejecta entropy is boosted to a substantially larger value than assumed in Eq. 31).
One way such re-heating could take place is by the passage of a relativistic GRB jet through the polar ejecta, which generates a shocked "cocoon" of hot gas (e.g., Gottlieb et al. 2018; Kasliwal et al. 2017; Piro and Kollmeier 2018). However, the efficiency of this heating process is debated. Duffell et al. (2018) found, using a large parameter study of jet parameters (jet energies \(\sim 10^{48}{-}10^{51}\) erg and opening angles \(\theta \sim 0.07{-}0.4\) covering the range thought to characterize GRBs), that the thermal energy deposited into the ejecta by the jet falls short of that produced by r-process heating on the same timescale by an order of magnitude or more. Jet heating is particularly suppressed when the relativistic jet successfully escapes from the ejecta, as evidenced in GW170817 by late-time afterglow observations (Margutti et al. 2017, 2018; Mooley et al. 2018).
An alternative means to shock-heat the ejecta is by a wind from the magnetized central NS remnant prior to its collapse into a BH (Metzger et al. 2018). Such a wind is expected to have a wide opening angle and to accelerate to trans-relativistic speeds over a characteristic timescale of seconds (e.g., Metzger et al. 2008b).14 This delay in the wind acceleration, set by the Kelvin–Helmholtz cooling of the remnant, would naturally allow the dynamical ejecta time to reach large radii before being hit and shocked by the wind. Although this has not yet been explored in the literature, even time variability in the accretion disk outflows (Sect. 3.1.2) could generate internal shocks and re-heat the wind ejecta over timescales comparable to the disk lifetime \(\lesssim \) seconds (e.g., Fernández et al. 2019).
In all of these mechanisms, re-setting of the ejecta thermal energy at large radii is key to producing luminous emission, because otherwise the freshly-deposited energy is degraded by PdV expansion before being radiated. This is illustrated by Fig. 12, where black lines show how the early-time kilonova light curve is enhanced when the ejecta is re-thermalized (its thermal energy re-set to a value comparable to its kinetic energy, i.e. \(E_v \sim \delta M_v v^2/2\)) at different times, \(t_0\), following its initial ejection. As expected, larger \(t_0\) (later re-thermalization) results in more luminous emission over the first few hours.
But also note that the light curve enhancement from jet/wind re-heating looks broadly similar to that resulting from the outer layers being composed of free neutrons (shown for comparison as red lines in Fig. 12). This degeneracy between free neutrons and delayed shock-heating makes the two physical processes challenging to distinguish observationally (Arcavi 2018; Metzger et al. 2018). Well-sampled early-time light curves, e.g., to search for a subtle bump in the light curve on the neutron decay timescale \(\tau _{\beta } \approx 10^{3}\) s, could be necessary to make progress on the interpretation. Regardless, the first few hours of the kilonova is an important frontier for future EM follow-up efforts: the signal during this time is sensitive to the origin of the ejecta and how it interacts with the central engine (details which are largely washed out at later times when r-process heating takes over).
6.2 Long-lived engine power
The end product of a NS–NS or BH–NS merger is a central compact remnant, either a BH or a massive NS (Sect. 3.1). Sustained energy input from this remnant can produce an additional source of ejecta heating in excess of the minimal contribution from radioactivity, thereby substantially altering the kilonova properties (e.g., Yu et al. 2013; Metzger and Piro 2014; Wollaeger et al. 2019).
Evidence exists for late-time central engine activity following short GRBs, on timescales from minutes to days. A fraction \(\approx 15{-}25\%\) of Swift short bursts are followed by a distinct hump of hard X-ray emission lasting for tens to hundreds of seconds following the initial prompt spike (e.g., Norris and Bonnell 2006; Perley et al. 2009; Kagawa et al. 2015). The isotropic X-ray light curve of such temporally extended emission in GRB 080503 is shown in the bottom panel of Fig. 3 (Perley et al. 2009). Other GRBs exhibit a temporary flattening or "plateau" in their X-ray afterglows lasting \(\approx 10^2{-}10^3\) s (Nousek et al. 2006), which in some cases abruptly ceases (Rowlinson et al. 2010). X-ray flares are also observed at even later timescales of \(\sim \)few days (Perley et al. 2009; Fong et al. 2014). The power output of the central engine required to explain this emission is uncertain by several orders of magnitude because it depends on the radiative efficiency and beaming fraction of the (likely jetted) X-ray emission. Nevertheless, comparison of the left and right panels of Fig. 3 makes clear that the energy input of a central remnant can compete with, or even dominate, that of radioactivity on timescales from minutes to weeks after the merger (e.g., Kisaka et al. 2017).
6.2.1 Fall-back accretion
In addition to the unbound ejecta during a NS–NS/BH–NS merger (or the accretion disk after the merger), a comparable mass could remain gravitationally bound to the central remnant. Depending on the energy distribution of this matter, it will fall back to the center and enter the accretion disk over timescales ranging from seconds to days or longer after the coalescence event (Rosswog 2007; Rossi and Begelman 2009; Chawla et al. 2010; Kyutoku et al. 2015). At late times \(t \gg 0.1\) s, the mass fall-back rate decays as a power-law
$$\begin{aligned} {\dot{M}}_{\mathrm{fb}} \approx {\dot{M}}_{\mathrm{fb}}(t = 0.1\,\mathrm{s})\left( \frac{t}{0.1\,\mathrm{s}}\right) ^{-5/3}, \end{aligned}$$
where the normalization \({\dot{M}}_{\mathrm{fb}}(t = 0.1\,\mathrm{s})\) at the reference time \(t = 0.1\) s can vary from \(\sim 10^{-3}\,M_{\odot }\,\mathrm{s}^{-1}\) in NS–NS mergers, to values up to an order of magnitude larger in BH–NS mergers (Rosswog 2007; Foucart et al. 2015).
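A minimal sketch evaluating this fall-back rate for the two normalizations just quoted (the NS–NS value and the order-of-magnitude-larger BH–NS case):

```python
def Mdot_fb(t, Mdot_ref=1e-3):
    """Fall-back rate (Msun/s) at time t (s), normalized at t = 0.1 s (Eq. 33)."""
    return Mdot_ref * (t / 0.1)**(-5.0 / 3.0)

for label, Mdot_ref in (("NS-NS", 1e-3), ("BH-NS", 1e-2)):
    rates = ", ".join(f"{Mdot_fb(t, Mdot_ref):.1e} (t = {t:g} s)" for t in (1.0, 1e2, 1e4))
    print(f"{label}: {rates} Msun/s")
```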
There are several caveats to the presence of fall-back accretion. Simulations show that disk outflows from the inner accretion flow in BH–NS mergers can stifle the fall-back of material, preventing it from reaching the BH on timescales \(t \gtrsim 100\) ms (Fernández et al. 2015b). Heating due to the r-process over the first \(\sim 1\) second can also unbind matter that was originally marginally bound, generating a cut-off in the fall-back rate after a timescale of seconds or minutes (Metzger et al. 2010a; Desai et al. 2019). Furthermore, matter which does return to the central remnant is only tenuously bound and unable to cool through neutrinos, which may drastically reduce the accretion efficiency (the fraction of \({\dot{M}}_{\mathrm{fb}}\) that remains bound in the disk; Rossi and Begelman 2009). Despite these concerns, some fall-back and accretion by the central remnant is likely over the days-to-weeks timescales of the observed kilonovae.
If matter reaches the central compact object at the rate \({\dot{M}}_{\mathrm{fb}}\) (Eq. 33), then a fraction of the resulting accretion power \(L_{\mathrm{acc}} \propto {\dot{M}}_\mathrm{fb}c^{2}\) would be available to heat the ejecta, contributing to the kilonova luminosity. The accretion flow is still highly super-Eddington throughout this epoch (\(L_{\mathrm{acc}} \gg L_{\mathrm{Edd}} \sim 10^{39}\) erg \(\hbox {s}^{-1}\)) and might be expected to power a collimated ultra-relativistic jet, similar to but weaker than that responsible for generating the earlier GRB. At early times, the jet has sufficient power to propagate through the ejecta, producing high energy emission at larger radii (e.g., powering the short GRB or temporally-extended X-ray emission following the burst). However, as the jet power decreases in time it is more likely to become unstable (e.g., Bromberg and Tchekhovskoy 2016), in which case its Poynting flux or bulk kinetic energy would be deposited as heat behind the ejecta. A mildly-relativistic wind could be driven from the inner fall-back-fed accretion disk, which would emerge into the surroundings and collide/shock against the (potentially slower, but higher mass) ejecta shell, thermalizing the wind's kinetic energy and providing a heat source behind the ejecta (Dexter and Kasen 2013).
Kilonova light curves powered by fall-back accretion, calculated for the same parameters of total ejecta mass \(M = 10^{-2}\,M_{\odot }\) and velocity \(v_0 = 0.1\) c used in Fig. 9, shown separately assuming opacities appropriate to lanthanide-bearing (\(\kappa = 20\hbox { cm}^{2}\hbox { g}^{-1}\); left panel) and lanthanide-free (\(\kappa = 1\hbox { cm}^{2}\hbox { g}^{-1}\); right panel) ejecta. We adopt an ejecta heating rate following Eq. (34) for a constant efficiency \(\epsilon _{\mathrm{j}} = 0.1\) and have normalized the fall-back rate to an optimistic value \({\dot{M}}_{\mathrm{fb}}(t = 0.1\,\mathrm{s}) = 10^{-2}\,M_{\odot }\) \(\hbox {s}^{-1}\)
Heating by fall-back accretion can be crudely parametrized as follows,
$$\begin{aligned} {\dot{Q}}_{\mathrm{fb}}= \epsilon _{j} {\dot{M}}_{\mathrm{fb}} c^{2} \approx 2\times 10^{50}\,\mathrm{erg\,s^{-1}}\left( \frac{\epsilon _{j}}{0.1}\right) \left( \frac{{\dot{M}}_\mathrm{fb}(0.1\,\mathrm{s})}{10^{-3}\,M_{\odot }\,s^{-1}}\right) \left( \frac{t}{0.1\,\mathrm{s}}\right) ^{-5/3}, \end{aligned}$$
where \(\epsilon _{j}\) is a jet/disk wind efficiency factor.15 For optimistic, but not physically unreasonable, values of \(\epsilon _j \sim 0.01-0.1\) and \({\dot{M}}_{\mathrm{fb}}(0.1\mathrm{s}) \sim 10^{-3}\,M_{\odot }\) \(\hbox {s}^{-1}\), Fig. 3 shows that \({\dot{Q}}_{\mathrm{fb}}\) can be comparable to radioactive heating on timescales of days to weeks.
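The comparison with radioactivity can be made explicit with a short sketch: Eq. (34) for the fall-back heating versus an approximate r-process heating rate obtained from Eq. (32) via \(\dot{e}_r \sim e_r/t\) (the ejecta mass used is the fiducial \(10^{-2}\,M_{\odot }\) adopted elsewhere in the text):

```python
M_sun = 1.989e33    # g
c = 2.998e10        # cm/s
day = 86400.0       # s

def Qdot_fb(t, eps_j=0.1, Mdot_ref=1e-3):
    """Fall-back heating rate (erg/s), Eq. (34), with the fiducial normalizations."""
    return eps_j * Mdot_ref * M_sun * c**2 * (t / 0.1)**(-5.0 / 3.0)

def Qdot_r(t, M=1e-2):
    """Approximate total r-process heating rate (erg/s) of ejecta mass M (Msun),
    estimated as edot_r ~ e_r / t with e_r taken from Eq. (32)."""
    t_day = t / day
    return (9e14 * t_day**-0.3 / t) * M * M_sun

for t_day in (0.1, 1.0, 10.0):
    t = t_day * day
    print(f"t = {t_day:4.1f} d : Q_fb ~ {Qdot_fb(t):.1e}, Q_r ~ {Qdot_r(t):.1e} erg/s")
```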
Figure 15 shows toy model light curves calculated assuming the ejecta (mass \(M = 10^{-2}\,M_{\odot }\) and velocity \(v_0 = 0.1\) c) is heated by fall-back accretion according to Eq. (34), under the optimistic assumption that \(\epsilon _{\mathrm{j}}{\dot{M}}_{\mathrm{fb}}(t = 0.1\,\mathrm{s}) = 10^{-3}\,M_{\odot }\) \(\hbox {s}^{-1}\), on the very high end of the values suggested by merger simulations (Rosswog 2007) and expected for BH-powered outflows (\(\epsilon _j \sim 1\)). The two panels show the results separately for ejecta with low (\(\kappa = 1\hbox { cm}^{2}\hbox { g}^{-1}\)) and high (\(\kappa = 20\hbox { cm}^{2}\hbox { g}^{-1}\)) opacity. Fall-back heating enhances the peak brightness, particularly in those bands which peak during the first \(\lesssim 1\) day, by up to a magnitude or more compared to the otherwise similar case with pure r-process heating (reproduced from Fig. 9 and shown for comparison with dashed lines). If the amount of fall-back and the jet/accretion disk wind efficiency are high, we conclude that accretion power could in principle provide a moderate boost to the observed kilonova emission, particularly in cases where the ejecta mass (and thus intrinsic r-process decay power) is particularly low.
Based on the high observed X-ray luminosity from GRB 130603B simultaneous with the excess NIR emission, Kisaka et al. (2016) argue that the latter was powered by reprocessed X-ray emission rather than radioactive heating (Tanvir et al. 2013; Berger et al. 2013). However, the validity of using the observed X-rays as a proxy for the ejecta heating relies on the assumption that the X-ray emission is intrinsically isotropic (i.e., we are peering through a hole in the ejecta shell), as opposed to being geometrically or relativistically beamed as part of a jet-like outflow from the central engine (a substantial beaming correction to the observed isotropic X-ray luminosity would render it too low to power the observed NIR emission). Matsumoto et al. (2018) and Li et al. (2018) made a similar argument that AT2017gfo was powered by a central engine. However, unlike in 130603B, no X-ray emission in excess of the afterglow from the external shocked ISM was observed following GW170817 at the time of the kilonova (Margutti et al. 2017). Furthermore, the observed bolometric light curve is well explained by r-process radioactive decay without the need for an additional central energy source (Fig. 10).
6.2.2 Long-lived magnetar remnants
As described in Sect. 3.1, the type of compact remnant produced by a NS–NS merger (prompt BH formation, hypermassive NS, supramassive NS, or indefinitely stable NS) depends sensitively on the total mass of the binary relative to the poorly constrained TOV mass, \(M_{\mathrm{TOV}}\). A lower limit of \(M_{\mathrm{TOV}} \gtrsim 2\)–\(2.1\,M_{\odot }\) is set by measured pulsar masses (Demorest et al. 2010; Antoniadis et al. 2013; Cromartie et al. 2019), while an upper limit of \(M_{\mathrm{TOV}} \lesssim 2.16\,M_{\odot }\) is suggested for GW170817 (Sect. 5.2). Taken at face value, these limits, combined with the assumption that the measured mass distribution of the Galactic population of binary neutron stars is representative of those in the universe as a whole, lead to the inference that \(\approx 18{-}65\%\) of mergers will result in a long-lived SMNS (Margalit and Metzger 2019) instead of the short-lived HMNS most believe formed in GW170817 (Table 3).
The massive NS remnant created by a NS–NS merger will in general have more than sufficient angular momentum to be rotating near break-up (Radice et al. 2018a; however, see Shibata et al. 2019). A NS of mass \(M_{\mathrm{ns}}\) rotating near its mass-shedding limit possesses a rotational energy,
$$\begin{aligned} E_{\mathrm{rot}} = \frac{1}{2}I\Omega ^{2} \simeq 1\times 10^{53}\left( \frac{I}{I_{\mathrm{LS}}}\right) \left( \frac{M_{\mathrm{ns}}}{2.3 \,M_{\odot }}\right) ^{3/2}\left( \frac{P}{\mathrm{0.7\,ms}}\right) ^{-2}\,\mathrm{erg}, \end{aligned}$$
where \(P = 2\pi /\Omega \) is the rotational period and I is the NS moment of inertia, which we have normalized to an approximate value for a relatively wide class of nuclear equations of state \(I_\mathrm{LS} \approx 1.3\times 10^{45}(M_\mathrm{ns}/1.4\,M_{\odot })^{3/2}\mathrm {\ g\ cm}^{2}\), motivated by Fig. 1 of Lattimer and Schutz (2005). This energy reservoir is enormous, both compared to the kinetic energy of the merger ejecta (\(\approx 10^{50}{-}10^{51}\mathrm {\ erg}\)) and to that released by its radioactive decay. Even if only a modest fraction of \(E_{\mathrm{rot}}\) were to be extracted from the remnant hours to years after the merger by its electromagnetic spin-down, this would substantially enhance the EM luminosity of the merger counterparts (Yu et al. 2013; Gao et al. 2013; Metzger and Piro 2014; Gao et al. 2015; Siegel and Ciolfi 2016a).
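As a quick check of the normalization in Eq. (35), here is a minimal sketch evaluating the rotational energy using the Lattimer–Schutz-motivated moment of inertia quoted above:

```python
import math

def I_LS(M_ns):
    """Approximate NS moment of inertia (g cm^2), motivated by Lattimer & Schutz (2005)."""
    return 1.3e45 * (M_ns / 1.4)**1.5

def E_rot(M_ns=2.3, P_ms=0.7):
    """Rotational energy (erg) of a NS of mass M_ns (Msun) with spin period P (ms), Eq. (35)."""
    Omega = 2.0 * math.pi / (P_ms * 1e-3)
    return 0.5 * I_LS(M_ns) * Omega**2

print(f"E_rot ~ {E_rot():.1e} erg")   # ~1e53 erg for a 2.3 Msun remnant at P = 0.7 ms
```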
This brings us back to a crucial qualitative difference between the formation of a HMNS and a SMNS or stable NS remnant. A HMNS can be brought to the point of collapse by the accretion of mass and redistribution of its internal angular momentum. However, energy dissipated by removing internal differential rotational support can largely be released as heat and thus will escape as neutrino emission (effectively unobservable at typical merger distances). Thus, the angular momentum of the binary can largely be trapped in the spin of the newly-formed BH upon its collapse, rendering most of \(E_{\mathrm{rot}}\) unavailable to power EM emission.
By contrast, a SMNS is (by definition) supported by its solid-body rotation, even once all forms of differential rotation have been removed. Thus, angular momentum must physically be removed from the system to allow the collapse, and the removal of angular momentum brings with it a concomitant amount of rotational energy. The left panel of Fig. 4 shows the "extractable" rotational energy for mergers which leave SMNS remnants and how quickly it grows with decreasing binary chirp mass (proxy for the mass \(M_{\mathrm{tot}}\)).16 This energy budget increases from \(\lesssim 10^{51}\) erg for remnants near the HMNS-SMNS boundary at \(M_{\mathrm{tot}} \approx 1.2M_{\mathrm{TOV}}\) to the full rotational energy \(E_{\mathrm{rot}} \approx 10^{53}\) erg (Eq. 35) for the lowest mass, indefinitely stable remnants \(M_{\mathrm{tot}} \lesssim M_{\mathrm{TOV}}\).
A strong magnetic field provides an agent for extracting rotational energy from the NS remnant via electromagnetic spin-down (the same mechanism at work in ordinary pulsars). MHD simulations show that the original magnetic fields of the NS in a NS–NS merger are amplified to very large values, similar to or exceeding the field strengths of \(10^{15}{-}10^{16}\) G of Galactic magnetars (Price and Rosswog 2006; Zrake and MacFadyen 2013; Kiuchi et al. 2014). However, most of this amplification occurs on small spatial scales, and at early times when the NS is still differentially rotating, resulting in a complex and time-dependent field geometry (Siegel et al. 2014). Nevertheless, by the time the NS enters into a state of solid body rotation (typically within hundreds of milliseconds following the merger), there are reasons to believe the remnant could possess an ordered dipole magnetic field of comparable strength, \(B \sim 10^{15}{-}10^{16}\) G. For instance, an ordered magnetic field can be generated by an \(\alpha -\Omega \) dynamo, driven by the combined action of the remnant's rapid millisecond rotation and the thermal and lepton gradient-driven convection in its cooling interior (Thompson and Duncan 1993).
The spin-down luminosity of an aligned dipole17 rotator is given by (e.g., Philippov et al. 2015)
$$\begin{aligned}&L_{\mathrm{sd}} \nonumber \\&\quad = \left\{ \begin{array}{lr} \frac{\mu ^{2}\Omega ^{4}}{c^{3}} = 7\times 10^{50}\,{\mathrm{erg\,s}}^{-1}\left( \frac{I}{I_\mathrm{LS}}\right) \left( \frac{B}{10^{15}\,\mathrm{G}}\right) ^{2}\left( \frac{P_{\mathrm{0}}}{\mathrm{0.7\,ms}}\right) ^{-4}\left( 1 + \frac{t}{t_{\mathrm{sd}}}\right) ^{-2} , &{} t < t_{\mathrm{collapse}}\\ 0 &{} t > t_{\mathrm{collapse}} \\ \end{array} \right. ,\nonumber \\ \end{aligned}$$
where \(\mu = B R_{\mathrm{ns}}^{3}\) is the dipole moment, \(R_{\mathrm{ns}} = 12\,\mathrm{km}\) is the NS radius, B is the surface equatorial dipole field,
$$\begin{aligned} t_{\mathrm{sd}} = \left. \frac{E_{\mathrm{rot}}}{L_{\mathrm{sd}}}\right| _{t = 0}\simeq 150\,\mathrm{s}\left( \frac{I}{I_\mathrm{LS}}\right) \left( \frac{B}{10^{15}\,\mathrm{G}}\right) ^{-2}\left( \frac{P_{\mathrm{0}}}{\mathrm{0.7\,ms}}\right) ^{2} \end{aligned}$$
is the characteristic spin-down time over which an order unity fraction of the rotational energy is removed, where \(P_{0}\) is the initial spin period and we have assumed a remnant mass of \(M = 2.3\,M_{\odot }\). The initial spin period is typically close to, or slightly exceeds, the mass-shedding limit of \(P = 0.7\) ms.18
The spin-down luminosity in Eq. (36) goes to zero19 when the NS collapses to the BH at time \(t_{\mathrm{collapse}}\). For a stable remnant, \(t_{\mathrm{collapse}} \rightarrow \infty \), but for supramassive remnants, the NS will collapse to a black hole after a finite time which can be estimated20 by equating \(\int _0^{t_\mathrm{collapse}}L_{\mathrm{sd}}dt\) to the extractable rotational energy.
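A minimal sketch of the dipole spin-down described above: the initial luminosity and spin-down time are computed directly from the dipole moment, the assumed NS radius, and Eq. (35), and the collapse time is estimated by integrating \(L_{\mathrm{sd}}\) up to the extractable rotational energy. The value of E_extract in the example is an illustrative input, not a number taken from the text:

```python
import math

c = 2.998e10       # cm/s
R_ns = 1.2e6       # cm (12 km), as assumed above
I = 2.7e45         # g cm^2, roughly I_LS for a 2.3 Msun remnant

def spin_down(B=1e15, P0=0.7e-3):
    """Return (initial spin-down luminosity, spin-down time) for an aligned dipole."""
    mu = B * R_ns**3                       # magnetic dipole moment
    Omega0 = 2.0 * math.pi / P0
    L0 = mu**2 * Omega0**4 / c**3          # ~7e50 erg/s for B = 1e15 G, P0 = 0.7 ms
    E_rot = 0.5 * I * Omega0**2            # rotational energy, Eq. (35)
    return L0, E_rot / L0                  # t_sd ~ 150 s for these parameters

def L_sd(t, B=1e15, P0=0.7e-3):
    """Spin-down luminosity (erg/s) at time t (s), prior to any collapse (Eq. 36)."""
    L0, t_sd = spin_down(B, P0)
    return L0 * (1.0 + t / t_sd)**-2

def t_collapse(E_extract, B=1e15, P0=0.7e-3):
    """Time (s) at which the integrated spin-down output equals the extractable
    rotational energy E_extract (erg); infinite for a stable remnant."""
    L0, t_sd = spin_down(B, P0)
    E_rot = L0 * t_sd
    if E_extract >= E_rot:
        return math.inf
    # integral_0^t L_sd dt' = E_rot * t / (t + t_sd); solve for t
    return t_sd * E_extract / (E_rot - E_extract)

L0, tsd = spin_down()
print(f"L_sd(0) ~ {L0:.1e} erg/s, t_sd ~ {tsd:.0f} s")
print(f"t_collapse(E_extract = 1e52 erg) ~ {t_collapse(1e52):.0f} s")
```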
Yu et al. (2013) suggested21 that magnetic spin-down power, injected by the magnetar behind the merger ejecta over a timescale of days, could enhance the kilonova emission (they termed such events "merger-novae"; see also Gao et al. 2015). Their model was motivated by similar ideas applied to super-luminous supernovae (Kasen and Bildsten 2010; Woosley 2010; Metzger et al. 2014) and is similar in spirit to the 'fall-back powered' emission described in Sect. 6.2.1. Although the spin-down luminosity implied by Eq. (36) is substantial on timescales of hours to days, the fraction of this energy which will actually be thermalized by the ejecta, and hence available to power kilonova emission, may be much smaller.
As in the Crab Nebula, the pulsar wind injects relativistic electron/positron pairs into its surroundings. This wind is generally assumed to undergo shock dissipation or magnetic reconnection near or outside a termination shock, inflating a nebula of relativistic particles (Kennel and Coroniti 1984). Given the high energy densities of the post-NS–NS merger environment, electron/positron pairs heated at the shock cool extremely rapidly via synchrotron and inverse Compton emission inside the nebula (Metzger et al. 2014; Siegel and Ciolfi 2016a, b), producing broadband radiation from the radio to gamma-rays (again similar to conventional pulsar wind nebulae; e.g., Gaensler and Slane 2006).22 A fraction of this non-thermal radiation, in particular that at UV and soft X-ray frequencies, will be absorbed by the neutral ejecta walls and reprocessed to lower, optical/IR frequencies (Metzger et al. 2014), where the lower opacity allows the energy to escape, powering luminous kilonova-like emission.
On the other hand, this non-thermal nebular radiation may also escape directly from the ejecta without being thermalized, e.g., through spectral windows in the opacity. This can occur for hard X-ray energies above the bound-free opacity or (within days or less) for high energy \( \gg \) MeV gamma-rays between the decreasing Klein–Nishina cross section and the rising photo-nuclear and \(\gamma {-}\gamma \) opacities (Fig. 8). Furthermore, if the engine is very luminous and the ejecta mass sufficiently low, the engine can photo-ionize the ejecta shell, allowing radiation to freely escape even from the far UV and softer X-ray bands (where bound-free opacity normally dominates). While such leakage from the nebula provides a potential isotropic high energy counterpart to the merger at X-ray wavelengths (Metzger and Piro 2014; Siegel and Ciolfi 2016a, b; Wang et al. 2016), it also reduces the fraction of the magnetar spin-down luminosity which is thermalized and available to power optical-band radiation.
We parameterize the magnetar contribution to the ejecta heating as
$$\begin{aligned} {\dot{Q}}_{\mathrm{sd}} = \epsilon _{\mathrm{th}}L_{\mathrm{sd}}, \end{aligned}$$
where, as in the fall-back case (Eq. 34), \(\epsilon _\mathrm{th}\) is the thermalization efficiency. We expect \(\epsilon _{\mathrm{th}} \sim 1\) at early times when the ejecta is opaque (unless significant energy escapes in a jet), but the value of \(\epsilon _{\mathrm{th}}\) will decrease as the optical depth of the expanding ejecta decreases, especially if the ejecta becomes ionized by the central engine.
Metzger and Piro (2014) point out another inefficiency, which, unlike radiation leakage, is most severe at early times. High energy \(\gtrsim \) MeV gamma-rays in the nebula behind the ejecta produce copious electron/positron pairs when the compactness is high. These pairs in turn are created with enough energy to Compton upscatter additional seed photons to sufficient energies to produce another generation of pairs (and so on\(\ldots \)). For high compactness \(\ell \gg 1\), this process repeats multiple times, resulting in a 'pair cascade' which acts to transform a significant fraction \(Y \sim 0.01{-}0.1\) of the pulsar spin-down power \(L_{\mathrm{sd}}\) into the rest mass of electron/positron pairs (Svensson 1987; Lightman et al. 1987). Crucially, in order for non-thermal radiation from the central nebula to reach the ejecta and thermalize, it must diffuse radially through this pair cloud, during which time it experiences adiabatic PdV losses. Because at early times the Thomson optical depth of the pair cloud, \(\tau _{\mathrm{n}}\), actually exceeds the optical depth through the ejecta itself, this suppresses the fraction of the magnetar spin-down power which is available to thermalize and power the emission.
Following Metzger and Piro (2014) and Kasen et al. (2016), we account in an approximate manner for the effect of the pair cloud by suppressing the observed luminosity according to,
$$\begin{aligned} L_{\mathrm{obs}} = \frac{L}{1 + (t_{\mathrm{life}}/t)} \end{aligned}$$
where L is the luminosity of the kilonova, calculated as usual from the energy equation (14) using the magnetar heat source (Eq. 38), and
$$\begin{aligned} \frac{t_{\mathrm{life}}}{t} = \frac{\tau _{\mathrm{n}}v}{c(1-A)} \approx \frac{0.6}{1-A}\left( \frac{Y}{0.1}\right) ^{1/2}\left( \frac{L_\mathrm{sd}}{10^{45}\, {\mathrm{erg\,s}}^{-1}}\right) ^{1/2}\left( \frac{v}{0.3\,\mathrm{c}}\right) ^{1/2}\left( \frac{t}{\mathrm{1\, day}}\right) ^{-1/2}\nonumber \\ \end{aligned}$$
is the ratio of the characteristic 'lifetime' of a non-thermal photon in the nebula, \(t_{\mathrm{life}}\), to the ejecta expansion timescale \(\sim t\), where A is the (frequency-averaged) albedo of the ejecta. In what follows we assume \(A = 0.5\).
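A minimal sketch of this suppression factor (Eqs. 39 and 40), using the fiducial pair yield and albedo quoted above; the example luminosities are illustrative:

```python
def t_life_over_t(L_sd, t_day, Y=0.1, v=0.3, A=0.5):
    """Ratio of the nebular photon 'lifetime' to the expansion time t, Eq. (40)."""
    return (0.6 / (1.0 - A)) * (Y / 0.1)**0.5 * (L_sd / 1e45)**0.5 \
           * (v / 0.3)**0.5 * t_day**-0.5

def L_obs(L, L_sd, t_day, **kwargs):
    """Observed luminosity (erg/s) after pair-cloud suppression, Eq. (39)."""
    return L / (1.0 + t_life_over_t(L_sd, t_day, **kwargs))

# Example: a kilonova of intrinsic luminosity 1e42 erg/s at t = 0.1 day, behind a
# nebula fed by spin-down power L_sd = 1e47 erg/s, is suppressed by a factor ~40.
print(f"{L_obs(1e42, 1e47, 0.1):.1e} erg/s")
```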
Kilonova light curves, boosted by spin-down energy from an indefinitely stable magnetar (\(t_{\mathrm{collapse}} = \infty \)), and taking an opacity \(\kappa = 20\hbox { cm}^{2}\hbox { g}^{-1}\) appropriate to lanthanide-rich matter. We assume an ejecta mass \(M = 0.1\,M_{\odot }\) (Metzger and Fernández 2014), initial magnetar spin period \(P_0 = 0.7\) ms, thermalization efficiency \(\epsilon _{\mathrm{th}} = 1\) and magnetic dipole field strength of \(10^{15}\) G (left panel) or \(10^{16}\) G (right panel). Dashed lines show for comparison the purely r-process powered case
For high spin-down power and early times (\(t_{\mathrm{life}} \gg t\)), pair trapping acts to reduce the thermalization efficiency of nebular photons, reducing the effective luminosity of the magnetar-powered kilonova by several orders of magnitude compared to its value were this effect neglected. The bottom panel of Fig. 3 shows the spin-down luminosity \(L_{\mathrm{sd}}\) for stable magnetars with \(P_0 = 0.7\) ms and \(B = 10^{15}, 10^{16}\) G. We also show the spin-down power, 'corrected' by the factor \((1 + t_{\mathrm{life}}/t)^{-1}\), as in Eq. (39) for \(Y = 0.1\).23
Figures 16 and 17 show kilonova light curves, calculated from our toy model but including additional heating of the ejecta due to rotational energy input from an indefinitely stable magnetar with assumed dipole field strengths of \(B = 10^{15}\) G and \(10^{16}\) G, respectively. We again show separately cases in which we assume a high value for the ejecta opacity, \(\kappa = 20\hbox { cm}^{2}\hbox { g}^{-1}\), appropriate for lanthanide-rich ejecta, and a low opacity, \(\kappa = 1\hbox { cm}^{2}\hbox { g}^{-1}\), appropriate to lanthanide-free (light r-process) elements. The latter case is probably the more physical one, because the majority of the disk wind ejecta (which typically dominates the total) is expected to possess a high \(Y_e\) in the presence of a long-lived stable merger remnant (\(t_{\mathrm{life}} \approx \infty \) in Fig. 6).
A long-lived magnetar engine has three main effects on the light curve relative to the normal (pure r-process-powered) case: (1) increase in the peak luminosity, by up to \(4{-}5\) magnitudes; (2) more rapid evolution, i.e. an earlier time of peak light; (3) substantially bluer colors. Feature (1) is simply the result of additional heating from magnetar spin-down, while feature (2) results from the greater ejecta velocity due to the kinetic energy added to the ejecta by the portion of the spin-down energy released before the ejecta has become transparent (that portion going into PdV rather than escaping as radiation). Feature (3) is a simple result of the fact that the much higher luminosity of the transient increases its effective temperature, for an otherwise similar photosphere radius near peak light.
Same as Fig. 16, but calculated for an ejecta opacity \(\kappa = 1\hbox { cm}^{2}\hbox { g}^{-1}\) relevant to lanthanide-free matter
Figures 16 and 17 represent close to the most "optimistic" effects a long-lived magnetar could have on the kilonova light curves. The effects will be more subtle, and closer to the radioactivity-only models, if the magnetar is a less-stable SMNS (that collapses into a BH before all of \(E_\mathrm{rot}\) is released) or if a substantial fraction of its rotational energy escapes as gamma-rays or is radiated by the magnetar through GW emission (e.g., Dall'Osso and Stella 2007; Corsi and Mészáros 2009), rather than being transferred to the ejecta through its magnetic dipole spin-down. Such effects are easy to incorporate into the toy model by cutting off the magnetar spin-down heating for \(t \ge t_{\mathrm{collapse}}\) in Eq. (36), or by including additional losses due to GW radiation into the magnetar spin-down evolution (e.g., Li et al. 2018 and references therein).
7 Observational prospects and strategies
With the basic theory of kilonovae in place, we now discuss several implications for past and present kilonova observations. Table 8 provides rough estimates for the expected range in kilonova luminosities, timescales, and degree of isotropy accompanying NS–NS and BH–NS mergers, for different assumptions about the merger remnant/outcome. Figure 18 illustrates some of this diversity graphically.
Range of kilonova properties

| Merger | Remnant/outcome | \(L_{\mathrm{pk}}^\mathrm{a}\) (\(\hbox {erg s}^{-1}\)) | \(t_{\mathrm{pk}}^\mathrm{b}\) | Color | Isotropic?\(^\mathrm{c}\) |
| --- | --- | --- | --- | --- | --- |
| NS–NS | Prompt collapse \(\Rightarrow \) BH | \(\sim 10^{40}{-}10^{41}\) | \(\sim 3\) day | Mostly red | – |
| | HMNS \(\Rightarrow \) BH | – | – | – | \(\sim \) Y |
| | SMNS \(\Rightarrow \) BH | – | \(\lesssim 1\) day | Mostly blue | – |
| | Stable NS | – | \(\lesssim \) 1 day | – | – |
| BH–NS | \(R_{\mathrm{t}} \gtrsim R_{\mathrm{isco}}^\mathrm{d}\) | – | \(\sim \) 1 week | – | – |
| | \(R_{\mathrm{t}} \lesssim R_{\mathrm{isco}}\) | – | – | – | – |
\(^\mathrm{a}\)Estimated range in peak luminosity. Does not account for extra sources of early-time heating from free neutrons or shocks, which could enhance the peak luminosity in the first hours (Sect. 6.1)
\(^\mathrm{b}\)Estimated peak timescale
\(^\mathrm{c}\)Whether to expect large pole-equatorial (or azimuthal, in the NS–BH case) anisotropy in the kilonova properties
\(^\mathrm{d}\)Whether a BH–NS merger is accompanied by mass ejection depends on whether the NS is tidally disrupted sufficiently far outside of the BH event horizon to generate unbound tidal material and the formation of an accretion disk. Very roughly, this condition translates into a comparison between the tidal radius of the NS, \(R_{\mathrm{t}}\) (which depends on the BH–NS mass ratio and the NS radius), and the radius of the innermost stable circular orbit of the BH, \(R_{\mathrm{isco}}\) (which depends on the BH mass and spin); see further discussion in Sect. 3.1
Schematic illustration mapping different types of mergers and their outcomes to trends in their kilonova light curves. The top panel shows the progenitor system, either an NS–NS or an NS–BH binary, while the middle plane shows the final merger remnant (from left to right: an HMNS that collapses to a BH after time \(t_{\mathrm{collapse}}\), a spinning magnetized NS, a non-spinning BH and a rapidly spinning BH). The bottom panel illustrates the relative amount of UV/blue emission from a neutron precursor (purple), optical emission from lanthanide-free material (blue) and IR emission from lanthanide-containing ejecta (red). Note: the case of a NS–NS merger leading to a slowly spinning black hole is unlikely, given that at a minimum the remnant will acquire the angular momentum of the original binary orbit. Modified from a figure originally presented in Kasen et al. (2015), copyright by the authors
7.1 Kilonova candidates following short GRBs
If short duration GRBs originate from NS–NS or NS–BH mergers, then one way to constrain kilonova models is via optical and NIR follow-up observations of nearby short bursts on timescales of hours to a week. All else being equal, the closest GRBs provide the most stringent constraints; however, the non-thermal afterglow emission—the strength of which can vary from burst to burst—must also be relatively weak, so that it does not outshine the thermal kilonova. Blue kilonova emission similar in luminosity to GW170817 would have been outshone by the afterglow emission in all but a small handful of observed short GRBs, but the longer-lived red kilonova emission has a better chance of sticking out above the fading, relatively blue afterglow.
The NIR excess observed following GRB 130603B (Berger et al. 2013; Tanvir et al. 2013), if powered by the radioactive decay of r-process nuclei, required a total ejecta mass of lanthanide-bearing matter of \(\gtrsim 0.1\,M_{\odot }\) (Barnes et al. 2016). This is \(\sim \) 3–5 times greater than the ejecta mass inferred for GW170817 (Fong et al. 2017; Sect. 5). As with GW170817, the ejecta mass implied by kilonova models of GRB 130603B is too high to be explained by the dynamical ejecta of a NS–NS merger, possibly implicating a BH–NS merger in which the NS was tidally disrupted well outside the BH horizon (Hotokezaka et al. 2013b; Tanaka et al. 2014; Kawaguchi et al. 2016). However, NS–NS mergers can also produce such a high ejecta mass if a large fraction of the remnant accretion disk (which can possess a mass up to \(\sim 0.2\,M_{\odot }\)) is unbound in disk winds (Siegel and Metzger 2017 found that \(\approx 40\%\) of the disk mass could be unbound). Alternatively, the unexpectedly high luminosity of this event could be attributed to energy input from a central engine rather than radioactivity (Kisaka et al. 2016), which for fall-back accretion indeed reproduces the observed luminosity to within an order of magnitude (Fig. 15).
Yang et al. (2015) and Jin et al. (2015, 2016) found evidence for NIR emission in excess of the expected afterglow following the short GRBs 050709 and 060614, indicative of possible kilonova emission. The short GRB 080503 (Perley et al. 2009) showed an optical peak on a timescale of \(\sim 1\) day, which could be explained as a blue kilonova powered by r-process heating (Metzger and Fernández 2014; Kasen et al. 2015) or by a central engine (Metzger and Piro 2014; Gao et al. 2015). These possibilities unfortunately could not be distinguished because the host galaxy of GRB 080503 was not identified, leaving its distance and thus its luminosity unconstrained.24
Gompertz et al. (2018) found three short bursts (GRBs 050509b, 061201, and 080905A) for which, if the reported redshifts are correct, deep upper limits rule out the presence of a kilonova similar to AT2017gfo by several magnitudes (see also Fong et al. 2017). Given the diverse outcomes of NS–NS mergers, and how strongly the properties of the remnant can affect the quantity and composition of the kilonova outflows, variation in the ejecta properties by an order of magnitude or more would not be unexpected. For instance, high-mass mergers that undergo a prompt collapse to a BH eject substantially less mass (particularly of the high-\(Y_e\) kind capable of producing blue kilonova emission), but could still generate an accretion disk of sufficient mass to power a GRB jet.25
Even with deep observations of a particularly nearby burst, Fong et al. (2016a) emphasize the challenge of constraining vanilla blue/red kilonova models with ground-based follow-up of GRBs. This highlights the crucial role played by the Hubble Space Telescope, and in the future by the James Webb Space Telescope (JWST) and Wide Field Infrared Survey Telescope (WFIRST), in such efforts. Fortunately, NS–NS mergers detected by Advanced LIGO at distances \(<200\) Mpc (redshift \(z < 0.045\)) are at least three times closer (\(>2.5\) mags brighter) than the nearest cosmological short GRBs. Nevertheless, the detection rate of well-localized short GRBs is currently higher than that of GW events. For this reason, among others (e.g., the information obtained on GRB jet opening angles from afterglow jet breaks), we advocate continued space-based late-time follow-up of short GRB afterglows to search for kilonova signatures.
7.2 Gravitational-wave follow-up
This review has hopefully made clear that kilonovae are not likely to be homogeneous in their properties, with potentially significant differences expected in their colors and luminosities, depending on the type of merging system, the properties of the in-going binary, and, potentially, our viewing angle relative to the binary inclination (see Figs. 4, 18 and Table 8). Here we discuss prospects and strategies of GW follow-up in the case of different merger outcomes.
Prompt collapse to BH In a NS–NS merger the observed emission is expected to depend sensitively on the lifetime of the central NS remnant, which in turn will depend on the in-going binary mass (Table 3). When BH formation is prompt, the ejecta mass will in most cases be low, \(\ll 10^{-2}\,M_{\odot }\), and radioactivity (and, potentially, fall-back accretion of the tidal tail) will provide the only heating sources. The lack of a HMNS remnant will also result in a greater fraction of the ejecta being lanthanide-rich and thus generating red kilonova emission. While some blue ejecta could still originate from the accretion disk, its relatively low velocity \(\sim 0.1\) c could result in its emission being blocked for equatorial viewing angles. Little or no neutron precursor emission is expected. For a merger at \(\sim 100\) Mpc, a purely red r-process-powered kilonova of ejecta mass \(\sim 3\times 10^{-3}\,M_{\odot }\) would peak over a timescale of a few days in the NIR at \(IJK \sim 23\)–24 (scaling from Fig. 9, right panel). Given such relatively dim emission, only the largest-aperture telescopes (e.g., DECam, Subaru HSC, or LSST) are capable of detecting the low-\(M_{\mathrm{ej}}\) red kilonova of a prompt collapse (Fig. 9, right panel).
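These numbers can be roughly reproduced with standard one-zone scalings, in which the light curve peaks when the photon diffusion time equals the expansion time and the peak luminosity tracks the instantaneous radioactive heating ("Arnett's rule"). The sketch below is an order-of-magnitude estimate only; the grey opacity, heating-rate normalization, and thermalization efficiency are assumed fiducial values, not quantities taken from the detailed models cited above.

```python
import numpy as np

# Constants (cgs); all model parameters below are illustrative assumptions
MSUN, C, DAY = 1.989e33, 2.998e10, 86400.0

def kilonova_peak(M_ej=3e-3, v=0.1, kappa=10.0, f_th=0.5):
    """Order-of-magnitude one-zone estimate of a kilonova peak.

    M_ej  : ejecta mass [Msun]
    v     : mean ejecta velocity [units of c]
    kappa : grey opacity [cm^2/g] (~10 assumed for lanthanide-rich ejecta)
    f_th  : assumed thermalization efficiency of the r-process heating
    """
    M = M_ej * MSUN
    v_ej = v * C
    # Peak time: photon diffusion time ~ expansion time
    t_pk = np.sqrt(3.0 * kappa * M / (4.0 * np.pi * C * v_ej))
    # Approximate specific r-process heating rate ~ 1e10 (t/day)^-1.3 erg/s/g (assumed normalization)
    q_pk = 1e10 * (t_pk / DAY) ** (-1.3)
    L_pk = f_th * q_pk * M          # "Arnett's rule": L_pk ~ heating rate at peak
    return t_pk / DAY, L_pk

t_days, L = kilonova_peak()
print(f"t_pk ~ {t_days:.1f} day, L_pk ~ {L:.1e} erg/s")  # a few days and a few x 10^39-10^40 erg/s
```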
HMNS remnant (GW170817-like) The situation is more promising for lower-mass mergers in which at least a moderately long-lived HMNS remnant forms. Shock-heated matter from the merger interface can generate high-\(Y_e\) dynamical ejecta comparable to or exceeding that of the lanthanide-rich tidal tail. Likewise, outflows from the magnetized HMNS, or from the accretion disk prior to BH formation (Fig. 6), will produce a greater quantity of high-\(Y_e\), lanthanide-free material than in the prompt collapse case. For a merger generating \(\sim 10^{-2}\,M_{\odot }\) of high-\(Y_e\) ejecta (similar to GW170817) at \(\sim 100\) Mpc, the resulting blue kilonova emission could peak at UVR \(\sim 19\)–20 (Fig. 9, right panel) on a timescale of several hours to days. Even if the blue component is somehow absent or is blocked by lanthanide-rich matter, a source at 100 Mpc could still reach \(U \sim 20\) on a timescale of hours if the outer layers of the ejecta contain free neutrons (Fig. 14) or if the ejecta has been shock-heated within a second of first being ejected (Fig. 12). Although not much brighter in magnitude than the later NIR peak, the blue kilonova may be the most promising counterpart for the majority of follow-up telescopes, whose greatest sensitivity lies at optical wavelengths (a fact that the discovery of AT2017gfo has now made obvious). It is thus essential that follow-up observations begin within hours to one day of the GW trigger.
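For planning follow-up depth, it can be useful to convert such peak luminosities into rough apparent magnitudes. The sketch below does this under the crude assumption that most of the quoted luminosity emerges in the observed band (i.e., ignoring bolometric corrections and detailed spectra), so the numbers should be read only as order-of-magnitude guides.

```python
import numpy as np

L_SUN = 3.828e33  # solar luminosity [erg/s]

def apparent_mag(L_peak, d_mpc=100.0):
    """Rough apparent magnitude of a kilonova peak, ignoring bolometric
    corrections (i.e., assuming most of L_peak emerges in the observed band).

    L_peak : peak luminosity [erg/s]
    d_mpc  : distance [Mpc]
    """
    M_abs = 4.74 - 2.5 * np.log10(L_peak / L_SUN)   # absolute magnitude
    mu = 5.0 * np.log10(d_mpc * 1e6 / 10.0)         # distance modulus
    return M_abs + mu

for L in (1e40, 1e41, 1e42):
    print(f"L = {L:.0e} erg/s at 100 Mpc  ->  m ~ {apparent_mag(L):.1f} mag")
```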
SMNS/stable remnant Although potentially rare, the brightest counterparts may arise from the mergers of low-mass NS–NS binaries which generate long-lived SMNS or stable NS remnants (\(t_{\mathrm{collapse}} \gg 300\) ms). Even ignoring the possibility of additional energy input from magnetar spin-down, the quantity of blue disk wind ejecta in this case is substantially enhanced (\(t_{\mathrm{collapse}} = \infty \) in Fig. 6) and could approach \(\sim 0.1\,M_{\odot }\), boosting the peak luminosity of the blue kilonova by a magnitude or more relative to what was observed in GW170817. Allowing also for energy input from the magnetar rotational energy (in addition to radioactivity), the transient at 100 Mpc could reach \(UVI \approx 16-17\) (Figs. 16, 17). However, the precise luminosity is highly uncertain, as it depends on several unknown factors: the dipole magnetic field of the magnetar remnant, the efficiency with which the ejecta thermalizes the magnetar nebula emission (Eq. 38), and the NS collapse time (which in turn depends on the binary mass and the magnetic field strength). Shallower follow-up observations, such as those used in the discovery and follow-up of GW170817, are thus still relevant to kilonova follow-up even for more distant events (they could also be sufficient to detect the on-axis GRB afterglow in the rare case of a face-on merger).
BH–NS mergers BH–NS mergers are "all or nothing" events. If the NS is swallowed whole prior to being tidally disrupted, then little or no kilonova emission is expected. However, in the potentially rare cases when the BH is low in mass and rapidly spinning (in the prograde orbital sense), then the NS is tidally disrupted well outside of the horizon and the quantity of dynamical ejecta can be larger than in NS–NS mergers, by a typical factor of \(\sim 10\) (Sect. 3.1). All else being equal, this results in the kilonova peaking one magnitude brighter in BH–NS mergers. Likewise, the mass fall-back rate in BH–NS mergers can be up to \(\sim 10\) times higher than in NS–NS mergers (Rosswog 2007), enhancing potential accretion-powered contributions to the kilonova emission (Fig. 15, bottom panel).
However, the amount of high-\(Y_e\) ejecta is potentially smaller than in NS–NS mergers, due to the absence in BH–NS mergers of shock-heated ejecta or a magnetar remnant; for the same reason, no neutron precursor is anticipated, unless it can somehow be generated by the GRB jet. The accretion disk outflows could still produce a small quantity of blue ejecta, but its velocity is likely to be sufficiently low, \(\sim 0.1\) c, that it will be blocked by the (faster, more massive) tidal tail, at least for equatorial viewing angles. Taken together, the kilonova emission from BH–NS mergers is more likely to be dominated by the red component, although moderate amounts of high-\(Y_e\) matter and blue emission could still be produced by the disk winds (Just et al. 2015; Fernández et al. 2015b). Unfortunately for purposes of follow-up, any benefit of the higher dynamical ejecta mass to the light curve luminosity may be more than offset by the larger expected source distance, which will typically be \(\approx 2\)–3 times greater than the 200 Mpc horizon characteristic of NS–NS mergers for otherwise equal GW event detection rates. See, e.g., Bhattacharya et al. (2019) for further discussion of the diverse EM counterparts of BH–NS mergers.
Search strategies Several works have explored optimal EM follow-up strategies for GW sources, or ways to achieve lower-latency GW triggers (Metzger and Berger 2012; Cowperthwaite and Berger 2015; Gehrels et al. 2016; Ghosh et al. 2016; Howell et al. 2016; Rana et al. 2017). Extremely low latency (Cannon et al. 2012; Chen and Holz 2017), though crucial to searching for a potential low-frequency radio burst (Kaplan et al. 2016), is generally not essential for kilonova follow-up. One possible exception is the speculative neutron precursor (Sect. 6.1.1), which peaks hours after the merger. However, in this case, the greatest advantage is arguably gained by instead locating the follow-up telescope in North America (Kasliwal and Nissanke 2014), giving a better chance of the source appearing near zenith (since the LIGO detectors are most sensitive to sources directly above or below them).
Future EM follow-up efforts would also be aided if LIGO were to provide more information on the properties of its binaries to the wider astronomy community at the time of the GW trigger (Margalit and Metzger 2019). The predicted EM signal from a NS–NS merger is expected to depend on the binary inclination and the total binary mass, \(M_{\mathrm{tot}}\). The inclination cannot be measured with high precision because it is largely degenerate with the (initially unknown) source distance, though it could be determined once the host galaxy is identified. However, the chirp mass and total binary mass can be determined reasonably accurately in low latency (e.g., Biscoveanu et al. 2019). Once the mapping between EM counterparts and the binary mass is better established, providing the binary mass in low latency could supply crucial information for informing, or even prioritizing, EM follow-up. This is especially important given the cost/scarcity of follow-up resources capable of performing these challenging deep searches over large sky areas.
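For reference, the chirp mass is a simple combination of the component masses, and at fixed chirp mass the total mass depends only weakly on the mass ratio; the short sketch below (with illustrative \(1.4+1.4\,M_{\odot}\) inputs) makes that insensitivity explicit.

```python
def chirp_mass(m1, m2):
    """Chirp mass M_c = (m1 m2)^(3/5) / (m1 + m2)^(1/5), masses in Msun."""
    return (m1 * m2)**0.6 / (m1 + m2)**0.2

def total_mass_from_chirp(m_chirp, q):
    """Invert M_c for the total mass at a given mass ratio q = m2/m1 <= 1.
    Uses M_c = M_tot * (q / (1 + q)^2)^(3/5)."""
    return m_chirp / (q / (1.0 + q)**2)**0.6

mc = chirp_mass(1.4, 1.4)                       # illustrative equal-mass binary
for q in (1.0, 0.9, 0.8, 0.7):
    # M_tot changes by only ~2% between q = 1 and q = 0.7 at fixed chirp mass
    print(f"q = {q:.1f}: M_tot = {total_mass_from_chirp(mc, q):.3f} Msun")
```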
The generally greater sensitivity of telescopes at optical wavelengths, as compared to the infrared, motivates a general strategy by which candidate targets are first identified by wide-field optical telescopes on a timescale of days, and then followed up with spectroscopy or photometry in the NIR over a longer timescale of \(\sim 1\) week. Cowperthwaite and Berger (2015) show that no other known or predicted astrophysical transients are as red and evolve as quickly as kilonovae, thus reducing the number of optical false positives to a manageable level. Follow-up observations of candidates at wavelengths of a few microns could be accomplished, for instance, by the James Webb Space Telescope (Bartos et al. 2016), WFIRST (Gehrels et al. 2015), or a dedicated GW follow-up telescope with better target-of-opportunity capabilities.
Another goal for future kilonova observations would be a spectroscopic measurement of absorption lines from individual r-process elements (however, see Watson et al. 2019, for a possible detection of Sr ii in GW170817). Individual lines are challenging to identify for the simple reason that most of their wavelengths cannot be predicted theoretically with sufficient precision and have not been measured experimentally. Furthermore, near peak light the absorption lines are Doppler-broadened by the substantial velocities \(v \gtrsim 0.1\) c of the ejecta. Broad absorption features were seen in AT2017gfo (e.g., Chornock et al. 2017), but these likely represented a blend of multiple lines. Fortunately, line widths become narrower post-maximum as the photosphere recedes to lower velocity coordinates through the ejecta and nebular lines appear (ejecta velocities as low as \(\sim 0.03\) c are predicted for the disk wind ejecta). Unfortunately, the emission becomes significantly dimmer at these late times and line blending could remain an issue. Spectroscopic IR observations of such dim targets are a compelling science case for future 30-meter telescopes. For instance, the planned Infrared Imaging Spectrograph (IRIS) on the Thirty Meter Telescope (Skidmore et al. 2015) will obtain a signal-to-noise ratio of 10 per wavelength channel (spectral resolution \(R = 4000\)) for a \(K = 25\) mag point source.
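A quick way to see why instrumental resolution is not the limiting factor, while line blending is, is to compare the fractional Doppler width of a line, roughly \(\Delta \lambda /\lambda \sim 2v/c\), with the resolution element \(1/R\) of a spectrograph; the factor of 2 (full width) and the velocities below are illustrative assumptions.

```python
def fractional_width(v_over_c):
    """Approximate full fractional Doppler width of a line, dlambda/lambda ~ 2 v/c."""
    return 2.0 * v_over_c

R = 4000.0                       # assumed spectral resolution (e.g., TMT/IRIS value quoted above)
for v in (0.1, 0.03):            # near-peak photospheric vs. late-time disk-wind velocities
    width = fractional_width(v)
    print(f"v = {v:.2f} c: dlambda/lambda ~ {width:.3f} "
          f"(~{width * R:.0f} resolution elements at R = {R:.0f})")
```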
7.3 Summary of predictions
We conclude with a summary of predictions for future large samples of kilonovae. Once Advanced LIGO/Virgo reach design sensitivity within a few years, NS–NS mergers could be detected as frequently as once per week to once per month. This rate will increase by roughly another factor of \(\sim 8\) with the planned LIGO A+ upgrades in the mid 2020s (Reitze et al. 2019b), and yet further with proposed third-generation GW detectors, such as Einstein Telescope (Punturo et al. 2010) and Cosmic Explorer (Reitze et al. 2019a), which could come online in the 2030s.
As mentioned earlier in this review, high-fidelity first-principles kilonova models are not currently available. Reasons for this include uncertainties in: (1) the predicted ejecta properties (mass, velocity, \(Y_e\)) due to numerical limitations combined with present ignorance of the NS equation of state; (2) the properties of unstable neutron-rich nuclei, which determine the details of the r-process and the radioactive heating rate; (3) radiative transfer in the face of the complex atomic structure of heavy r-process elements and the potential break-down of approximations which are more safely applied to modeling supernovae with less exotic ejecta compositions.
Nevertheless, we can still highlight a few trends which a future sample of joint GW/EM events should bear out if our understanding of these events is even qualitatively correct. These predictions include:
For a total binary mass (\(\sim \) chirp mass) significantly higher than GW170817, the remnant of a NS–NS merger will undergo a prompt collapse to a BH, resulting in a kilonova which is dimmer and redder than AT2017gfo. The blue component of the ejecta, if present at all, will arise from the relatively low-velocity disk wind and is likely to be blocked for equatorial viewers by the lanthanide-rich tidal tail ejecta. A GRB jet can still be produced because a low-mass accretion disk can still form, but it could be less energetic than the off-axis jet inferred for GW170817. Such events could be relatively rare if the prompt collapse threshold mass is high.
For binary masses below the (uncertain) prompt collapse threshold mass, the merger will form a HMNS remnant, perhaps similar to that believed to be generated in GW170817. The strength of the blue kilonova relative to the red kilonova emission will increase with decreasing total binary mass, as the ejecta is dominated by the disk wind and the average \(Y_e\) rises with the collapse time (Fig. 6). BH formation is likely to still be relatively prompt, providing no hindrance to the production of an ultra-relativistic GRB jet and afterglow emission.
Below another uncertain critical binary mass threshold, the merger remnant will survive for minutes or longer as a quasi-stable SMNS or indefinitely stable NS remnant. The disk wind ejecta will be almost entirely blue but a weak red component could still be present from the tidal tail ejecta (even though such an emission component was likely swamped in GW170817). The mean velocity of the ejecta will increase with decreasing remnant mass, as the amount of rotational energy extracted from the remnant prior to BH formation increases (Fig. 4). Less certain is whether a powerful ultra-relativistic GRB jet will form in this case, due to baryon pollution from the wind of the remnant (though exceptions to such an expectation would be highly informative). However, the afterglow produced by the interaction of the jet/kilonova ejecta with the ISM (particularly the radio emission, which is roughly isotropic and generated by even trans-relativistic ejecta) will be even more luminous than in the HMNS case due to the addition of magnetar rotational energy (Metzger and Bower 2014; Horesh et al. 2016; Fong et al. 2016b).
For BH–NS mergers, in the (possibly rare) subset of low-mass/high-spin BHs which disrupt the NS well outside the horizon, the red kilonova will be more luminous, and extend to higher velocities, than in the NS–NS case due to the greater quantity of tidal tail ejecta. As in the prompt collapse of a NS–NS, the blue component—if present—will arise from the relatively low-velocity disk wind and thus could be blocked for equatorial viewers by the lanthanide-rich tidal tail ejecta, which is likely to be more massive than in the NS–NS case.
The first edition of this review was written in the year prior to the discovery of GW170817. In the final section of that article I pondered: "Given the rapid evolution of this field in recent years, it is natural to question the robustness of current kilonova models. What would it mean if kilonova emission is ruled out following a NS–NS merger, even to stringent limits?" I concluded this was untenable: "First, it should be recognized that—unlike, for instance, a GRB afterglow—kilonovae are largely thermal phenomena. The ejection of neutron-rich matter during a NS–NS merger at about ten percent of the speed of light appears to be a robust consequence of the hydrodynamics of such events, which all modern simulations agree upon. Likewise, the fact that decompressing nuclear-density matter will synthesize heavy neutron rich isotopes is also robust. The properties of individual nuclei well off of the stable valley are not well understood, although that will improve soon due to measurements with the new Facility for Rare Isotope Beams (e.g., Horowitz et al. 2019). However, the combined radioactive heating rate from a large ensemble of decaying nuclei is largely statistical in nature and hence is also relatively robust, even if individual isotopes are not; furthermore, most of the isotopes which contribute to the heating on the timescale of days to weeks most relevant to the kilonova peak are stable enough that their masses and half-lives are experimentally measured."
Despite the success that astrophysics theorists and numerical relativists had in anticipating many of the properties of GW170817, the tables may soon be turned as we struggle to catch up to the rich phenomenology which is likely to explode from the new GW/EM science. In particular, much additional work is required on the theory side to transform kilonovae into quantitative probes of the diversity of merger outcomes and nuclear physics. Among the largest remaining uncertainties in kilonova emission is the wavelength-dependent opacity of the ejecta, in particular when it includes lanthanide/actinide isotopes with partially-filled f valence shells (Kasen et al. 2013; Tanaka and Hotokezaka 2013; Fontes et al. 2015). As discussed in Sect. 3.2, the wavelengths and strengths of the enormous number of lines of these elements and their ionization states are not experimentally measured and are impossible to calculate from first principles with multi-body quantum mechanics given current computational capabilities. Furthermore, how to handle radiative transport in cases when the density of strong lines becomes so large that the usual expansion opacity formalism breaks down requires further consideration and simulation work. Another theoretical issue which deserves prompt attention is the robustness of the presence of free neutrons in the outermost layers of the ejecta, given their potentially large impact on the very early-time kilonova optical emission (Sect. 6.1.1). The issue of large-scale magnetic field generation, and its impact on the GRB jet and post-merger outflows (e.g., from the remnant NS or accretion disk), is likely to remain a challenging one for many years to come, and it is hardly unique to this field of astrophysics. Here, I suspect that nature will teach us more than we can deduce ourselves.
With ongoing dedicated effort, as more detections of or constraints on kilonovae become possible over the next few years, we will be in an excellent position to use these observations to probe the physics of binary NS mergers, their remnants, and their role as an origin of the r-process.
https://dcc.ligo.org/LIGO-P1800307/public.
One argument linking short GRBs to NS–NS/BH–NS mergers is the lack of viable alternative models. Accretion-induced collapse of a NS to form a BH was once considered an alternative short GRB model (MacFadyen et al. 2005; Dermer and Atoyan 2006). However, Margalit et al. (2015) showed that even a maximally-spinning NS rotating as a solid body (the expected configuration in such aged systems) will not produce upon collapse a sufficiently massive accretion disk around the newly-formed BH given extant constraints on NS properties (see also Shibata 2003; Camelio et al. 2018).
A high entropy (low density) results in an \(\alpha \)-rich freeze-out of the 3-body and effective 4-body reactions responsible for forming seed nuclei in the wind, similar to big bang nucleosynthesis. The resulting higher ratio of neutrons to seed nuclei (because the protons are trapped in \(\alpha \) particles) then allows the r-process to proceed to heavier elements.
Another r-process mechanism in the core collapse environment results from \(\nu -\)induced spallation in the He shell (Banerjee et al. 2011). This channel is limited to very low metallicity \(Z \lesssim 10^{-3}\) and thus cannot represent the dominant r-process source over the age of the galaxy (though it could be important for the first generations of stars).
Nevertheless, a more deeply embedded source of heavy r-process ejecta would be less conspicuous, as the characteristic signatures of lanthanide elements would only appear well after the supernova's optical peak. Promising in this regard are outflows from a fall-back accretion disk around the central BH or NS, such as those which may be responsible for powering long-duration GRBs (Pruet et al. 2004; Siegel et al. 2019).
As a student entering this field in the mid 2000s, it was clear to me that the optical transients proposed by Li and Paczyński (1998) were not connected in most people's minds with the r-process. Rosswog (2005) in principle had all the information needed to calculate the radioactive heating rate of the ejecta based on the earlier Freiburghaus et al. (1999) calculations, and thus to determine the true luminosity scale of these merger transients well before Metzger et al. (2010b). I make this point not to cast blame, but simply to point out that the concept, now taken for granted, that the radioactive heating rate was something that could actually be calculated with any precision, came as a revelation, at least to a student of the available literature.
This robustness is rooted in 'fission recycling' (Goriely et al. 2005): the low initial \(Y_e\) results in a large neutron-to-seed ratio, allowing the nuclear flow to reach heavy nuclei for which fission is possible (\(A \sim 250\)). The fission fragments are then subject to additional neutron captures, generating more heavy nuclei and closing the cycle.
A useful analogy can be drawn between weak freeze-out in the viscously-expanding accretion disk of a NS merger, and that which occurs in the expanding Universe during the first minutes following the Big Bang. However, unlike a NS merger, the Universe freezes-out proton-rich, due to the lower densities (which favor proton-forming reactions over the neutron-forming ones instead favored under conditions of high electron degeneracy).
Another uncertainty arises because, at low temperatures \(\lesssim 10^{3}\) K, the ejecta may condense from gaseous to solid phase (Takami et al. 2014; Gall et al. 2017). The formation of such r-process dust could act to either increase or decrease the optical/UV opacity, depending on uncertain details such as when the dust condenses and how quickly it grows. Dust formation is already complex and poorly understood in less exotic astrophysical environments (Cherchneff and Dwek 2009; Lazzati and Heger 2016).
This is also consistent with upper limits on non-afterglow contributions to the X-ray emission from a central remnant in GW170817 (for which \(M_{\mathrm{ej}} \gtrsim 0.03\,M_{\odot }\); e.g., Margutti et al. 2017).
The r-process abundance pattern itself is much more sensitive to these nuclear physics uncertainties (Eichler et al. 2015; Wu et al. 2016; Mumpower et al. 2016).
Rosswog et al. (2018) argue that the ejecta must have possessed \(Y_{e} \lesssim 0.3{-}0.35\) to produce a smooth radioactive heating rate consistent with the bolometric light curve (due to the fact that discrete isotopes dominate the heating rate for higher \(Y_e\); e.g., Lippuner and Roberts 2015). However, this makes the over-restrictive assumption that all the ejecta contains a single precise value of \(Y_e\). A small but finite spread in \(Y_e\) about the mean value \({\bar{Y}}_e\) results in a smooth light curve decay consistent with observations even for \({\bar{Y}}_e > 0.35\) (Wanajo 2018; Wu et al. 2019b; see Fig. 10).
The inspiral waveform most precisely encodes the binary chirp mass \({\mathcal {M}}_{\mathrm{c}}\) (Eq. 1). However, the mapping between \(M_{\mathrm{tot}}\) and \({\mathcal {M}}_{\mathrm{c}}\) is only weakly dependent on the binary mass ratio \(q = M_{1}/M_{2}\) for values of \(q \gtrsim 0.7\) characteristic of the known Galactic double NS systems (Margalit and Metzger 2019).
This secular acceleration of the wind is driven by its diminishing baryon-loading rate due to neutrino-driven mass ablation from the neutron star surface following its Kelvin–Helmholtz cooling evolution.
If the jet derives its power from the Blandford–Znajek process, then the jet luminosity actually depends on the magnetic flux threading the BH rather than its accretion rate, at least up to fluxes for which the jet power saturates at \(\epsilon _j \approx 1\) (Tchekhovskoy et al. 2011). Kisaka and Ioka (2015) suggest that the topology of the magnetic field accreted from fall-back could give rise to a complex temporal evolution of the jet power, which differs from the \(\propto t^{-5/3}\) decay predicted by Eq. (34) for \(\epsilon _j = \mathrm{constant}\).
The extractable energy is defined as the difference between the rotational energy at break-up and the rotational energy after the object has spun down to the point of becoming unstable and collapsing into a BH.
Unlike vacuum dipole spin-down, the spin-down rate is not zero for an aligned rotator in the force-free case, which is of greatest relevance to the plasma-dense, post-merger environment.
If the remnant is born with a shorter period, mass shedding or non-axisymmetric instabilities set in, which will result in much more rapid loss of angular momentum to GWs (Shibata et al. 2000), until the NS spin period approaches \(P_0 \gtrsim 0.7\) ms.
The collapse event itself has been speculated to produce a brief (sub-millisecond) electromagnetic flare (Palenzuela et al. 2013) or a fast radio burst (Falcke and Rezzolla 2014) from the detaching magnetosphere; however, no accretion disk, and hence long-lived transient, is likely to be produced (Margalit et al. 2015).
We also assume that dipole spin-down exceeds gravitational wave losses, as is likely valid if the non-axisymmetric components of the interior field are \(\lesssim 100\) times weaker than the external dipole field (Dall'Osso et al. 2009). We have also neglected angular momentum losses due to f-mode instabilities (Doneva et al. 2015).
In fact, Kulkarni (2005) earlier had suggested energy input from a central pulsar as a power source.
Such magnetar nebulae may also be promising sources of high-energy neutrinos for hours to days after the merger (e.g. Fang and Metzger 2017).
We emphasize, however, that when one is actually calculating the light curve, the pair suppression (Eq. 39) should be applied after the luminosity has been calculated using the full spin-down power as the heating source (Eq. 38). This is because the non-thermal radiation trapped by pairs is also available to do PdV work on the ejecta, accelerating it according to Eq. (17).
Rebrightening in the X-ray luminosity, coincident with the optical brightening, was also observed following GRB 080503 (Perley et al. 2009). Whether the optical emission is powered exclusively by r-process heating or not, this could potentially be consistent with non-thermal emission from a central engine (Metzger and Piro 2014; Gao et al. 2015; Siegel and Ciolfi 2016a, b).
If the jet is powered by the Blandford–Znajek mechanism, the disk of mass \(M_{\mathrm{t}}\) need only be sufficiently massive to hold a magnetic field of the requisite strength in place. This is not a very stringent constraint: GRB jets are weak in energy, \(E_{\mathrm{GRB}} \sim 10^{49}\) erg, compared to the maximum accretion power available, \(\sim M_{\mathrm{t}}c^{2} \sim 10^{52}(M_{\mathrm{t}}/10^{-2}\,M_{\odot })\) erg.
I want to thank Gabriel Martínez-Pinedo and Almudena Arcones, my colleagues who developed the nuclear reaction network and assembled the microphysics needed to calculate the late-time radioactive heating, and who were enthusiastic back in 2009 about reviving the relic idea of Burbidge et al. (1956) of an "r-process-powered supernova". I am also indebted to Rodrigo Fernández, Tony Piro, Eliot Quataert, and Daniel Siegel, with whom I worked over many years to make the case to a skeptical community that accretion disk outflows, rather than dynamical material, were the dominant source of ejecta in NS mergers (a prediction which most believe was borne out by the large ejecta mass inferred for GW170817). I want to thank Todd Thompson, who taught me about the r-process and the potentially important role of magnetar winds. I also want to thank Edo Berger, with whom I worked out the practical aspects of what it would take to detect and characterize kilonovae. I also want to acknowledge my many other collaborators on binary neutron star mergers, who helped shape many of the ideas expressed in this article. These include, but are not limited to, Jennifer Barnes, Andrei Beloborodov, Josh Bloom, Geoff Bower, Niccolo Bucciantini, Phil Cowperthwaite, Alessandro Drago, Wen-Fai Fong, Daniel Kasen, Ben Margalit, Raffaella Margutti, Daniel Perley, Eliot Quataert, Antonia Rowlinson, and Meng-Ru Wu. I gratefully acknowledge support from NASA (Grant Number NNX16AB30G) and from the Simons Foundation (Grant Number 606260).
Abadie J et al (2010) Predictions for the rates of compact binary coalescences observable by ground-based gravitational-wave detectors. Class Quantum Grav 27:173001. https://doi.org/10.1088/0264-9381/27/17/173001. arXiv:1003.2480
Abbott BP et al (2016a) Binary black hole mergers in the first advanced LIGO observing run. Phys Rev X 6(4):041015. https://doi.org/10.1103/PhysRevX.6.041015. arXiv:1606.04856
Abbott BP et al (2016b) Localization and broadband follow-up of the gravitational-wave transient GW150914. Astrophys J Lett 826:L13. https://doi.org/10.3847/2041-8205/826/1/L13. arXiv:1602.08492
Abbott BP et al (2016c) Observation of gravitational waves from a binary black hole merger. Phys Rev Lett 116:061102. https://doi.org/10.1103/PhysRevLett.116.061102. arXiv:1602.03837
Abbott BP et al (2016) Astrophysical implications of the binary black-hole merger GW150914. Astrophys J Lett 818(2):L22. https://doi.org/10.3847/2041-8205/818/2/L22. arXiv:1602.03846
Abbott BP et al (2017a) A gravitational-wave standard siren measurement of the Hubble constant. Nature 551:85–88. https://doi.org/10.1038/nature24471. arXiv:1710.05835
Abbott BP et al (2017b) GW170817: observation of gravitational waves from a binary neutron star inspiral. Phys Rev Lett 119(16):161101. https://doi.org/10.1103/PhysRevLett.119.161101. arXiv:1710.05832
Abbott BP et al (2017c) Multi-messenger observations of a binary neutron star merger. Astrophys J Lett 848(2):L12. https://doi.org/10.3847/2041-8213/aa91c9. arXiv:1710.05833
Abbott BP et al (2017d) Gravitational waves and gamma-rays from a binary neutron star merger: GW170817 and GRB 170817A. Astrophys J Lett 848(2):L13. https://doi.org/10.3847/2041-8213/aa920c. arXiv:1710.05834
Abbott BP et al (2017e) Search for post-merger gravitational waves from the remnant of the binary neutron star merger GW170817. Astrophys J Lett 851(1):L16. https://doi.org/10.3847/2041-8213/aa9a35. arXiv:1710.09320
Abbott BP et al (2018) GW170817: measurements of neutron star radii and equation of state. Phys Rev Lett 121(16):161101. https://doi.org/10.1103/PhysRevLett.121.161101. arXiv:1805.11581
Abbott BP et al (2019a) Properties of the binary neutron star merger GW170817. Phys Rev X 9(1):011001. https://doi.org/10.1103/PhysRevX.9.011001. arXiv:1805.11579
Abbott BP et al (2019b) GWTC-1: a gravitational-wave transient catalog of compact binary mergers observed by LIGO and Virgo during the first and second observing runs. Phys Rev X 9:031040. https://doi.org/10.1103/PhysRevX.9.031040. arXiv:1811.12907
Abbott BP et al (2019c) Tests of general relativity with the binary black hole signals from the LIGO-Virgo catalog GWTC-1. arXiv e-prints arXiv:1903.04467
Alexander KD, Berger E, Fong W, Williams PKG, Guidorzi C, Margutti R, Metzger BD, Annis J, Blanchard PK, Brout D (2017) The electromagnetic counterpart of the binary neutron star merger LIGO/Virgo GW170817. VI. Radio constraints on a relativistic jet and predictions for late-time emission from the kilonova ejecta. Astrophys J Lett 848(2):L21. https://doi.org/10.3847/2041-8213/aa905d. arXiv:1710.05457
Andreoni I et al (2017) Follow up of GW170817 and its electromagnetic counterpart by Australian-led observing programmes. Publ Astron Soc Australia 34:e069. https://doi.org/10.1017/pasa.2017.65. arXiv:1710.05846
Antoniadis J et al (2013) A massive pulsar in a compact relativistic binary. Science 340:448. https://doi.org/10.1126/science.1233232. arXiv:1304.6875
Arcavi I (2018) The first hours of the GW170817 kilonova and the importance of early optical and ultraviolet observations for constraining emission models. Astrophys J Lett 855(2):L23. https://doi.org/10.3847/2041-8213/aab267. arXiv:1802.02164
Arcavi I, Hosseinzadeh G, Howell DA, McCully C, Poznanski D, Kasen D, Barnes J, Zaltzman M, Vasylyev S, Maoz D, Valenti S (2017a) Optical emission from a kilonova following a gravitational-wave-detected neutron-star merger. Nature 551(7678):64–66. https://doi.org/10.1038/nature24291. arXiv:1710.05843
Arcavi I, McCully C, Hosseinzadeh G, Howell DA, Vasylyev S, Poznanski D, Zaltzman M, Maoz D, Singer L, Valenti S, Kasen D, Barnes J, Piran T, Fong W (2017b) Optical follow-up of gravitational-wave events with Las Cumbres Observatory. Astrophys J Lett 848(2):L33. https://doi.org/10.3847/2041-8213/aa910f. arXiv:1710.05842
Arcones A, Janka HT, Scheck L (2007) Nucleosynthesis-relevant conditions in neutrino-driven supernova outflows. I. Spherically symmetric hydrodynamic simulations. Astron Astrophys 467:1227–1248. https://doi.org/10.1051/0004-6361:20066983. arXiv:astro-ph/0612582
Arnett WD (1982) Type I supernovae. I. Analytic solutions for the early part of the light curve. Astrophys J 253:785–797. https://doi.org/10.1086/159681
Arnould M, Goriely S, Takahashi K (2007) The \(r\)-process of stellar nucleosynthesis: astrophysics and nuclear physics achievements and mysteries. Phys Rep 450:97–213. https://doi.org/10.1016/j.physrep.2007.06.002. arXiv:0705.4512
Ascenzi S, De Lillo N, Haster CJ, Ohme F, Pannarale F (2019) Constraining the neutron star radius with joint gravitational-wave and short gamma-ray burst observations of neutron star-black hole coalescing binaries. Astrophys J 877(2):94. https://doi.org/10.3847/1538-4357/ab1b15. arXiv:1808.06848
Baiotti L (2019) Gravitational waves from neutron star mergers and their relation to the nuclear equation of state. arXiv e-prints arXiv:1907.08534
Baiotti L, Rezzolla L (2017) Binary neutron-star mergers: a review of Einstein's richest laboratory. Rep Prog Phys. https://doi.org/10.1088/1361-6633/aa67bb. arXiv:1607.03540
Banerjee P, Haxton WC, Qian YZ (2011) Long, cold, early \(r\) process? Neutrino-induced nucleosynthesis in He shells revisited. Phys Rev Lett 106:201104. https://doi.org/10.1103/PhysRevLett.106.201104. arXiv:1103.1193
Barnes J, Kasen D (2013) Effect of a high opacity on the light curves of radioactively powered transients from compact object mergers. Astrophys J 775:18. https://doi.org/10.1088/0004-637X/775/1/18. arXiv:1303.5787
Barnes J, Kasen D, Wu MR, Martínez-Pinedo G (2016) Radioactivity and thermalization in the ejecta of compact object mergers and their impact on kilonova light curves. Astrophys J 829:110. https://doi.org/10.3847/0004-637X/829/2/110. arXiv:1605.07218
Bartos I, Marka S (2019) A nearby neutron-star merger explains the actinide abundances in the early solar system. Nature 569(7754):85–88. https://doi.org/10.1038/s41586-019-1113-7
Bartos I, Huard TL, Márka S (2016) James Webb Space Telescope can detect kilonovae in gravitational wave follow-up search. Astrophys J 816:61. https://doi.org/10.3847/0004-637X/816/2/61. arXiv:1502.07426
Baumgarte TW, Shapiro SL, Shibata M (2000) On the maximum mass of differentially rotating neutron stars. Astrophys J Lett 528:L29–L32. https://doi.org/10.1086/312425. arXiv:astro-ph/9910565
Bauswein A, Baumgarte TW, Janka HT (2013a) Prompt merger collapse and the maximum mass of neutron stars. Phys Rev Lett 111(13):131101. https://doi.org/10.1103/PhysRevLett.111.131101. arXiv:1307.5191
Bauswein A, Goriely S, Janka HT (2013b) Systematics of dynamical mass ejection, nucleosynthesis, and radioactively powered electromagnetic signals from neutron-star mergers. Astrophys J 773:78. https://doi.org/10.1088/0004-637X/773/1/78. arXiv:1302.6530
Bauswein A, Just O, Janka HT, Stergioulas N (2017) Neutron-star radius constraints from GW170817 and future detections. Astrophys J Lett 850:L34. https://doi.org/10.3847/2041-8213/aa9994. arXiv:1710.06843
Belcher JW, MacGregor KB (1976) Magnetic acceleration of winds from solar-type stars. Astrophys J 210:498–507. https://doi.org/10.1086/154853
Beloborodov AM (2008) Hyper-accreting black holes. In: Axelsson M (ed) Cool discs, hot flows: the varying faces of accreting compact objects, AIP conference series, vol 1054. American Institute of Physics, Melville, NY, pp 51–70. https://doi.org/10.1063/1.3002509. arXiv:0810.2690
Beloborodov AM, Lundman C, Levin Y (2018) Relativistic envelopes and gamma-rays from neutron star mergers. arXiv e-prints arXiv:1812.11247
Beniamini P, Hotokezaka K, Piran T (2016) Natal kicks and time delays in merging neutron star binaries: implications for \(r\)-process nucleosynthesis in ultra-faint dwarfs and in the Milky Way. Astrophys J Lett 829:L13. https://doi.org/10.3847/2041-8205/829/1/L13. arXiv:1607.02148
Beniamini P, Petropoulou M, Barniol Duran R, Giannios D (2019) A lesson from GW170817: most neutron star mergers result in tightly collimated successful GRB jets. Mon Not R Astron Soc 483(1):840–851. https://doi.org/10.1093/mnras/sty3093. arXiv:1808.04831
Berger E (2014) Short-duration gamma-ray bursts. Annu Rev Astron Astrophys 52:43–105. https://doi.org/10.1146/annurev-astro-081913-035926. arXiv:1311.2603
Berger E, Fong W, Chornock R (2013) An \(r\)-process kilonova associated with the short-hard GRB 130603B. Astrophys J Lett 774:L23. https://doi.org/10.1088/2041-8205/774/2/L23. arXiv:1306.3960
Bhattacharya M, Kumar P, Smoot G (2019) Mergers of black hole-neutron star binaries and rates of associated electromagnetic counterparts. Mon Not R Astron Soc 486(4):5289–5309. https://doi.org/10.1093/mnras/stz1147. arXiv:1809.00006
Biscoveanu S, Vitale S, Haster CJ (2019) The reliability of the low-latency estimation of binary neutron star chirp mass. arXiv e-prints arXiv:1908.03592
Blanchard PK, Berger E, Fong W, Nicholl M, Leja J, Conroy C, Alexander KD, Margutti R, Williams PKG, Doctor Z, Chornock R, Villar VA, Cowperthwaite PS, Annis J, Brout D, Brown DA, Chen HY, Eftekhari T, Frieman JA, Holz DE, Metzger BD, Rest A, Sako M, Soares-Santos M (2017) The electromagnetic counterpart of the binary neutron star merger LIGO/Virgo GW170817. VII. Properties of the host galaxy and constraints on the merger timescale. Astrophys J Lett 848(2):L22. https://doi.org/10.3847/2041-8213/aa9055. arXiv:1710.05458
Blinnikov SI, Novikov ID, Perevodchikova TV, Polnarev AG (1984) Exploding neutron stars in close binaries. Sov Astro Lett 10:177–179
Bloom JS, Sigurdsson S (2017) A cosmic multimessenger gold rush. Science 358(6361):301–302. https://doi.org/10.1126/science.aaq0321
Bloom JS, Holz DE, Hughes SA, Menou K, Adams A, Anderson SF, Becker A, Bower GC, Brandt N, Cobb B, Cook K, Corsi A, Covino S, Fox D, Fruchter A, Fryer C, Grindlay J, Hartmann D, Haiman Z, Kocsis B, Jones L, Loeb A, Marka S, Metzger B, Nakar E, Nissanke S, Perley DA, Piran T, Poznanski D, Prince T, Schnittman J, Soderberg A, Strauss M, Shawhan PS, Shoemaker DH, Sievers J, Stubbs C, Tagliaferri G, Ubertini P, Wozniak P (2009) Astro2010 decadal survey whitepaper: coordinated science in the gravitational and electromagnetic skies. arXiv e-prints arXiv:0902.1527
Bloom JS et al (2006) Closing in on a short-hard burst progenitor: constraints from early-time optical imaging and spectroscopy of a possible host galaxy of GRB 050509b. Astrophys J 638:354–368. https://doi.org/10.1086/498107. arXiv:astro-ph/0505480
Bonetti M, Perego A, Dotti M, Cescutti G (2019) Neutron star binary orbits in their host potential: effect on early r-process enrichment. arXiv e-prints arXiv:1905.12016
Bovard L, Martin D, Guercilena F, Arcones A, Rezzolla L, Korobkin O (2017) r-process nucleosynthesis from matter ejected in binary neutron star mergers. Phys Rev D 96(12):124005. https://doi.org/10.1103/PhysRevD.96.124005. arXiv:1709.09630
Bromberg O, Tchekhovskoy A (2016) Relativistic MHD simulations of core-collapse GRB jets: 3D instabilities and magnetic dissipation. Mon Not R Astron Soc 456:1739–1760. https://doi.org/10.1093/mnras/stv2591. arXiv:1508.02721
Bromberg O, Tchekhovskoy A, Gottlieb O, Nakar E, Piran T (2018) The \(\gamma \)-rays that accompanied GW170817 and the observational signature of a magnetic jet breaking out of NS merger ejecta. Mon Not R Astron Soc 475(3):2971–2977. https://doi.org/10.1093/mnras/stx3316. arXiv:1710.05897
Bucciantini N, Metzger BD, Thompson TA, Quataert E (2012) Short gamma-ray bursts with extended emission from magnetar birth: jet formation and collimation. Mon Not R Astron Soc 419:1537–1545. https://doi.org/10.1111/j.1365-2966.2011.19810.x. arXiv:1106.4668
Buckley DAH, Andreoni I, Barway S, Cooke J, Crawford SM, Gorbovskoy E, Gromadzki M, Lipunov V, Mao J, Potter SB, Pretorius ML, Pritchard TA, Romero-Colmenero E, Shara MM, Väisänen P, Williams TB (2018) A comparison between SALT/SAAO observations and kilonova models for AT 2017gfo: the first electromagnetic counterpart of a gravitational wave transient—GW170817. Mon Not R Astron Soc 474(1):L71–L75. https://doi.org/10.1093/mnrasl/slx196. arXiv:1710.05855
Bulla M, Covino S, Kyutoku K, Tanaka M, Maund JR, Patat F, Toma K, Wiersema K, Bruten J, Jin ZP, Testa V (2019) The origin of polarization in kilonovae and the case of the gravitational-wave counterpart AT 2017gfo. Nat Astron 3:99–106. https://doi.org/10.1038/s41550-018-0593-y. arXiv:1809.04078
Burbidge EM, Burbidge GR, Fowler WA, Hoyle F (1957) Synthesis of the elements in stars. Rev Mod Phys 29:547–650. https://doi.org/10.1103/RevModPhys.29.547
Burbidge GR, Hoyle F, Burbidge EM, Christy RF, Fowler WA (1956) Californium-254 and supernovae. Phys Rev 103:1145–1149. https://doi.org/10.1103/PhysRev.103.1145
Camelio G, Dietrich T, Rosswog S (2018) Disc formation in the collapse of supramassive neutron stars. Mon Not R Astron Soc 480(4):5272–5285. https://doi.org/10.1093/mnras/sty2181. arXiv:1806.07775
Cameron AGW (1957) Nuclear reactions in stars and nucleogenesis. Publ Astron Soc Pac 69:201. https://doi.org/10.1086/127051
Cannon K, Cariou R, Chapman A, Crispin-Ortuzar M, Fotopoulos N, Frei M, Hanna C, Kara E, Keppel D, Liao L, Privitera S, Searle A, Singer L, Weinstein A (2012) Toward early-warning detection of gravitational waves from compact binary coalescence. Astrophys J 748:136. https://doi.org/10.1088/0004-637X/748/2/136. arXiv:1107.2665
Cantiello M, Jensen JB, Blakeslee JP, Berger E, Levan AJ, Tanvir NR, Raimondo G, Brocato E, Alexander KD, Blanchard PK (2018) A precise distance to the host galaxy of the binary neutron star merger GW170817 using surface brightness fluctuations. Astrophys J 854(2):L31. https://doi.org/10.3847/2041-8213/aaad64. arXiv:1801.06080
Cardall CY, Fuller GM (1997) General relativistic effects in the neutrino-driven wind and \(r\)-process nucleosynthesis. Astrophys J Lett 486:L111–L114. https://doi.org/10.1086/310838. arXiv:astro-ph/9701178
Chawla S, Anderson M, Besselman M, Lehner L, Liebling SL, Motl PM, Neilsen D (2010) Mergers of magnetized neutron stars with spinning black holes: disruption, accretion, and fallback. Phys Rev Lett 105(11):111101. https://doi.org/10.1103/PhysRevLett.105.111101. arXiv:1006.2839
Chen HY, Holz DE (2017) Facilitating follow-up of LIGO-Virgo events using rapid sky localization. Astrophys J 840:88. https://doi.org/10.3847/1538-4357/aa6f0d. arXiv:1509.00055
Chen HY, Vitale S, Narayan R (2019) On the viewing angle of binary neutron star mergers. Phys Rev X 9:031028. https://doi.org/10.1103/PhysRevX.9.031028. arXiv:1807.05226
Cherchneff I, Dwek E (2009) The chemistry of population III supernova ejecta. I. Formation of molecules in the early universe. Astrophys J 703:642–661. https://doi.org/10.1088/0004-637X/703/1/642. arXiv:0907.3621
Chornock R et al (2017) The electromagnetic counterpart of the binary neutron star merger LIGO/Virgo GW170817. IV. Detection of near-infrared signatures of r-process nucleosynthesis with Gemini-South. Astrophys J Lett 848(2):L19. https://doi.org/10.3847/2041-8213/aa905c. arXiv:1710.05454
Christie IM, Lalakos A, Tchekhovskoy A, Fernández R, Foucart F, Quataert E, Kasen D (2019) The role of magnetic field geometry in the evolution of neutron star merger accretion discs. Mon Not R Astron Soc 490(4):4811–4825. https://doi.org/10.1093/mnras/stz2552. arXiv:1907.02079
Ciolfi R, Kastaun W, Giacomazzo B, Endrizzi A, Siegel DM, Perna R (2017) General relativistic magnetohydrodynamic simulations of binary neutron star mergers forming a long-lived neutron star. Phys Rev D 95(6):063016. https://doi.org/10.1103/PhysRevD.95.063016. arXiv:1701.08738
Corsi A, Mészáros P (2009) Gamma-ray burst afterglow plateaus and gravitational waves: multi-messenger signature of a millisecond magnetar? Astrophys J 702(2):1171–1178. https://doi.org/10.1088/0004-637X/702/2/1171. arXiv:0907.2290
Côté B, Fryer CL, Belczynski K, Korobkin O, Chruślińska M, Vassh N, Mumpower MR, Lippuner J, Sprouse TM, Surman R, Wollaeger R (2018) The origin of r-process elements in the Milky Way. Astrophys J 855(2):99. https://doi.org/10.3847/1538-4357/aaad67. arXiv:1710.05875
Côté B, Eichler M, Arcones A, Hansen CJ, Simonetti P, Frebel A, Fryer CL, Pignatari M, Reichert M, Belczynski K (2019a) Neutron star mergers might not be the only source of r-process elements in the Milky Way. Astrophys J 875(2):106. https://doi.org/10.3847/1538-4357/ab10db. arXiv:1809.03525
Côté B, Lugaro M, Reifarth R, Pignatari M, Világos B, Yagüe A, Gibson BK (2019b) Galactic chemical evolution of radioactive isotopes. Astrophys J 878:156. https://doi.org/10.3847/1538-4357/ab21d1. arXiv:1905.07828
Coughlin MW, Dietrich T, Doctor Z, Kasen D, Coughlin S, Jerkstrand A, Leloudas G, McBrien O, Metzger BD, O'Shaughnessy R, Smartt SJ (2018) Constraints on the neutron star equation of state from AT2017gfo using radiative transfer simulations. Mon Not R Astron Soc 480(3):3871–3878. https://doi.org/10.1093/mnras/sty2174. arXiv:1805.09371
Coughlin MW, Dietrich T, Margalit B, Metzger BD (2019) Multimessenger Bayesian parameter inference of a binary neutron star merger. Mon Not R Astron Soc 489(1):L91–L96. https://doi.org/10.1093/mnrasl/slz133. arXiv:1812.04803
Coulter DA, Foley RJ, Kilpatrick CD, Drout MR, Piro AL, Shappee BJ, Siebert MR, Simon JD, Ulloa N, Kasen D, Madore BF, Murguia-Berthier A, Pan YC, Prochaska JX, Ramirez-Ruiz E, Rest A, Rojas-Bravo C (2017) Swope Supernova Survey 2017a (SSS17a), the optical counterpart to a gravitational wave source. Science 358(6370):1556–1558. https://doi.org/10.1126/science.aap9811. arXiv:1710.05452
Covino S, Wiersema K, Fan YZ, Toma K, Higgins AB, Melandri A, D'Avanzo P, Mundell CG, Palazzi E, Tanvir NR, Bernardini MG, Branchesi M, Brocato E, Campana S, di Serego Alighieri S, Götz D, Fynbo JPU, Gao W, Gomboc A, Gompertz B, Greiner J, Hjorth J, Jin ZP, Kaper L, Klose S, Kobayashi S, Kopac D, Kouveliotou C, Levan AJ, Mao J, Malesani D, Pian E, Rossi A, Salvaterra R, Starling RLC, Steele I, Tagliaferri G, Troja E, van der Horst AJ, Wijers RAMJ (2017) The unpolarized macronova associated with the gravitational wave event GW 170817. Nat Astron 1:791–794. https://doi.org/10.1038/s41550-017-0285-z. arXiv:1710.05849
Cowan JJ, Sneden C, Lawler JE, Aprahamian A, Wiescher M, Langanke K, Martínez-Pinedo G, Thielemann FK (2019) Making the heaviest elements in the universe: a review of the rapid neutron capture process. arXiv e-prints arXiv:1901.01410
Cowperthwaite PS, Berger E (2015) A comprehensive study of detectability and contamination in deep rapid optical searches for gravitational wave counterparts. Astrophys J 814:25. https://doi.org/10.1088/0004-637X/814/1/25. arXiv:1503.07869
Cowperthwaite PS, Berger E, Villar VA et al (2017) The electromagnetic counterpart of the binary neutron star merger LIGO/Virgo GW170817. II. UV, optical, and near-infrared light curves and comparison to kilonova models. Astrophys J Lett 848:L17. https://doi.org/10.3847/2041-8213/aa8fc7. arXiv:1710.05840
Cromartie HT, Fonseca E, Ransom SM, Demorest PB, Arzoumanian Z, Blumer H, Brook PR, DeCesar ME, Dolch T, Ellis JA, Ferdman RD, Ferrara EC, Garver-Daniels N, Gentile PA, Jones ML, Lam MT, Lorimer DR, Lynch RS, McLaughlin MA, Ng C, Nice DJ, Pennucci TT, Spiewak R, Stairs IH, Stovall K, Swiggum JK, Zhu WW (2019) Relativistic Shapiro delay measurements of an extremely massive millisecond pulsar. Nat Astron. https://doi.org/10.1038/41550-019-0880-2. arXiv:1904.06759
Dall'Osso S, Stella L (2007) Newborn magnetars as sources of gravitational radiation: constraints from high energy observations of magnetar candidates. Astrophys Space Sci 308(1–4):119–124. https://doi.org/10.1007/s10509-007-9323-0. arXiv:astro-ph/0702075
Dall'Osso S, Shore SN, Stella L (2009) Early evolution of newly born magnetars with a strong toroidal field. Mon Not R Astron Soc 398:1869–1885. https://doi.org/10.1111/j.1365-2966.2008.14054.x. arXiv:0811.4311
Davies MB, Benz W, Piran T, Thielemann FK (1994) Merging neutron stars. I. Initial results for coalescence of noncorotating systems. Astrophys J 431:742–753. https://doi.org/10.1086/174525. arXiv:astro-ph/9401032
De S, Finstad D, Lattimer JM, Brown DA, Berger E, Biwer CM (2018) Tidal deformabilities and radii of neutron stars from the observation of GW170817. Phys Rev Lett 121(9):091102. https://doi.org/10.1103/PhysRevLett.121.091102. arXiv:1804.08583
Demorest PB, Pennucci T, Ransom SM, Roberts MSE, Hessels JWT (2010) A two-solar-mass neutron star measured using Shapiro delay. Nature 467:1081–1083. https://doi.org/10.1038/nature09466. arXiv:1010.5788
Dermer CD, Atoyan A (2006) Collapse of neutron stars to black holes in binary systems: a model for short gamma-ray bursts. Astrophys J Lett 643:L13–L16. https://doi.org/10.1086/504895. arXiv:astro-ph/0601142
Desai D, Metzger BD, Foucart F (2019) Imprints of r-process heating on fall-back accretion: distinguishing black hole-neutron star from double neutron star mergers. Mon Not R Astron Soc 485(3):4404–4412. https://doi.org/10.1093/mnras/stz644. arXiv:1812.04641
Dessart L, Ott CD, Burrows A, Rosswog S, Livne E (2009) Neutrino signatures and the neutrino-driven wind in binary neutron star mergers. Astrophys J 690:1681–1705. https://doi.org/10.1088/0004-637X/690/2/1681. arXiv:0806.4380
Dexter J, Kasen D (2013) Supernova light curves powered by fallback accretion. Astrophys J 772(1):30. https://doi.org/10.1088/0004-637X/772/1/30. arXiv:1210.7240
Díaz MC et al (2017) Observations of the first electromagnetic counterpart to a gravitational-wave source by the TOROS collaboration. Astrophys J Lett 848(2):L29. https://doi.org/10.3847/2041-8213/aa9060. arXiv:1710.05844
Dietrich T, Ujevic M (2017) Modeling dynamical ejecta from binary neutron star mergers and implications for electromagnetic counterparts. Class Quantum Grav 34(10):105014. https://doi.org/10.1088/1361-6382/aa6bb0. arXiv:1612.03665
Dietrich T, Bernuzzi S, Ujevic M, Tichy W (2017a) Gravitational waves and mass ejecta from binary neutron star mergers: effect of the stars' rotation. Phys Rev D 95(4):044045. https://doi.org/10.1103/PhysRevD.95.044045. arXiv:1611.07367
Dietrich T, Ujevic M, Tichy W, Bernuzzi S, Brügmann B (2017b) Gravitational waves and mass ejecta from binary neutron star mergers: effect of the mass ratio. Phys Rev D 95(2):024029. https://doi.org/10.1103/PhysRevD.95.024029. arXiv:1607.06636
Dominik M, Berti E, O'Shaughnessy R, Mandel I, Belczynski K, Fryer C, Holz DE, Bulik T, Pannarale F (2015) Double compact objects. III. Gravitational-wave detection rates. Astrophys J 806:263. https://doi.org/10.1088/0004-637X/806/2/263. arXiv:1405.7016
Doneva DD, Kokkotas KD, Pnigouras P (2015) Gravitational wave afterglow in binary neutron star mergers. Phys Rev D 92:104040. https://doi.org/10.1103/PhysRevD.92.104040. arXiv:1510.00673
Drout MR et al (2017) Light curves of the neutron star merger GW170817/SSS17a: implications for r-process nucleosynthesis. Science 358(6370):1570–1574. https://doi.org/10.1126/science.aaq0049. arXiv:1710.05443
Duez MD, Liu YT, Shapiro SL, Shibata M, Stephens BC (2006) Collapse of magnetized hypermassive neutron stars in general relativity. Phys Rev Lett 96:031101. https://doi.org/10.1103/PhysRevLett.96.031101. arXiv:astro-ph/0510653
Duffell PC, Quataert E, MacFadyen AI (2015) A narrow short-duration GRB jet from a wide central engine. Astrophys J 813:64. https://doi.org/10.1088/0004-637X/813/1/64. arXiv:1505.05538
Duffell PC, Quataert E, Kasen D, Klion H (2018) Jet dynamics in compact object mergers: GW170817 likely had a successful jet. Astrophys J 866(1):3. https://doi.org/10.3847/1538-4357/aae084. arXiv:1806.10616
Duflo J, Zuker AP (1995) Microscopic mass formulas. Phys Rev C 52(1):R23–R27. https://doi.org/10.1103/PhysRevC.52.R23. arXiv:nucl-th/9505011
Duncan RC, Shapiro SL, Wasserman I (1986) Neutrino-driven winds from young, hot neutron stars. Astrophys J 309:141–160. https://doi.org/10.1086/164587
East WE, Pretorius F, Stephens BC (2012) Eccentric black hole-neutron star mergers: effects of black hole spin and equation of state. Phys Rev D 85:124009. https://doi.org/10.1103/PhysRevD.85.124009. arXiv:1111.3055
East WE, Paschalidis V, Pretorius F, Tsokaros A (2019) Binary neutron star mergers: effects of spin and post-merger dynamics. arXiv e-prints arXiv:1906.05288
Eichler D, Livio M, Piran T, Schramm DN (1989) Nucleosynthesis, neutrino bursts and gamma-rays from coalescing neutron stars. Nature 340:126–128. https://doi.org/10.1038/340126a0
Eichler M, Arcones A, Kelic A, Korobkin O, Langanke K, Marketin T, Martinez-Pinedo G, Panov I, Rauscher T, Rosswog S, Winteler C, Zinner NT, Thielemann FK (2015) The role of fission in neutron star mergers and its impact on the \(r\)-process peaks. Astrophys J 808:30. https://doi.org/10.1088/0004-637X/808/1/30. arXiv:1411.0974
Evans PA et al (2017) Swift and NuSTAR observations of GW170817: detection of a blue kilonova. Science 358(6370):1565–1570. https://doi.org/10.1126/science.aap9580. arXiv:1710.05437
Even W, Korobkin O, Fryer CL, Fontes CJ, Wollaeger RT, Hungerford A, Lippuner J, Miller J, Mumpower MR, Misch GW (2019) Composition effects on kilonova spectra and light curves: I. arXiv e-prints arXiv:1904.13298
Faber JA, Rasio FA (2012) Binary neutron star mergers. Living Rev Relativ 15:lrr-2012-8. https://doi.org/10.12942/lrr-2012-8. arXiv:1204.3858
Fahlman S, Fernández R (2018) Hypermassive neutron star disk outflows and blue kilonovae. Astrophys J Lett 869(1):L3. https://doi.org/10.3847/2041-8213/aaf1ab. arXiv:1811.08906 ADSCrossRefGoogle Scholar
Fairhurst S (2011) Source localization with an advanced gravitational wave detector network. Class Quantum Grav 28:105021. https://doi.org/10.1088/0264-9381/28/10/105021. arXiv:1010.6192 ADSCrossRefzbMATHGoogle Scholar
Falcke H, Rezzolla L (2014) Fast radio bursts: the last sign of supramassive neutron stars. Astron Astrophys 562:A137. https://doi.org/10.1051/0004-6361/201321996. arXiv:1307.1409 ADSCrossRefGoogle Scholar
Fan X, Hendry M (2015) Multimessenger astronomy. ArXiv e-prints arXiv:1509.06022
Fang K, Metzger BD (2017) High-energy neutrinos from millisecond magnetars formed from the merger of binary neutron stars. Astrophys J 849(2):153. https://doi.org/10.3847/1538-4357/aa8b6a. arXiv:1707.04263 ADSCrossRefGoogle Scholar
Fernández R, Metzger BD (2013) Delayed outflows from black hole accretion tori following neutron star binary coalescence. Mon Not R Astron Soc 435:502–517. https://doi.org/10.1093/mnras/stt1312. arXiv:1304.6720 ADSCrossRefGoogle Scholar
Fernández R, Metzger BD (2016) Electromagnetic signatures of neutron star mergers in the advanced LIGO era. Annu Rev Nucl Part Sci 66:23–45. https://doi.org/10.1146/annurev-nucl-102115-044819. arXiv:1512.05435 ADSCrossRefGoogle Scholar
Fernández R, Kasen D, Metzger BD, Quataert E (2015a) Outflows from accretion discs formed in neutron star mergers: effect of black hole spin. Mon Not R Astron Soc 446:750–758. https://doi.org/10.1093/mnras/stu2112. arXiv:1409.4426 ADSCrossRefGoogle Scholar
Fernández R, Quataert E, Schwab J, Kasen D, Rosswog S (2015b) The interplay of disc wind and dynamical ejecta in the aftermath of neutron star-black hole mergers. Mon Not R Astron Soc 449:390–402. https://doi.org/10.1093/mnras/stv238. arXiv:1412.5588 ADSCrossRefGoogle Scholar
Fernández R, Tchekhovskoy A, Quataert E, Foucart F, Kasen D (2019) Long-term GRMHD simulations of neutron star merger accretion discs: implications for electromagnetic counterparts. Mon Not R Astron Soc 482(3):3373–3393. https://doi.org/10.1093/mnras/sty2932. arXiv:1808.00461 ADSCrossRefGoogle Scholar
Finstad D, De S, Brown DA, Berger E, Biwer CM (2018) Measuring the viewing angle of GW170817 with electromagnetic and gravitational waves. Astrophys J 860(1):L2. https://doi.org/10.3847/2041-8213/aac6c1. arXiv:1804.04179 ADSCrossRefGoogle Scholar
Fischer T, Whitehouse SC, Mezzacappa A, Thielemann FK, Liebendörfer M (2010) Protoneutron star evolution and the neutrino-driven wind in general relativistic neutrino radiation hydrodynamics simulations. Astron Astrophys 517:A80. https://doi.org/10.1051/0004-6361/200913106. arXiv:0908.1871 CrossRefzbMATHGoogle Scholar
Fong W, Berger E (2013) The locations of short gamma-ray bursts as evidence for compact object binary progenitors. Astrophys J 776:18. https://doi.org/10.1088/0004-637X/776/1/18. arXiv:1307.0819 ADSCrossRefGoogle Scholar
Fong W, Berger E, Metzger BD, Margutti R, Chornock R, Migliori G, Foley RJ, Zauderer BA, Lunnan R, Laskar T, Desch SJ, Meech KJ, Sonnett S, Dickey CM, Hedlund AM, Harding P (2014) Short GRB130603B: discovery of a jet break in the optical and radio afterglows, and a mysterious late-time X-ray excess. Astrophys J 780:118. https://doi.org/10.1088/0004-637X/780/2/118. arXiv:1309.7479 ADSCrossRefGoogle Scholar
Fong W, Berger E, Margutti R, Zauderer BA (2015) A decade of short-duration gamma-ray burst broadband afterglows: energetics, circumburst densities, and jet opening angles. Astrophys J 815:102. https://doi.org/10.1088/0004-637X/815/2/102. arXiv:1509.02922 ADSCrossRefGoogle Scholar
Fong W, Margutti R, Chornock R, Berger E, Shappee BJ, Levan AJ, Tanvir NR, Smith N, Milne PA, Laskar T, Fox DB, Lunnan R, Blanchard PK, Hjorth J, Wiersema K, van der Horst AJ, Zaritsky D (2016a) The afterglow and early-type host galaxy of the short GRB 150101B at \(z=0.1343\). Astrophys J 833:151. https://doi.org/10.3847/1538-4357/833/2/151. arXiv:1608.08626 ADSCrossRefGoogle Scholar
Fong W, Metzger BD, Berger E, Özel F (2016b) Radio constraints on long-lived magnetar remnants in short gamma-ray bursts. Astrophys J 831:141. https://doi.org/10.3847/0004-637X/831/2/141. arXiv:1607.00416 ADSCrossRefGoogle Scholar
Fong W, Berger E, Blanchard PK, Margutti R, Cowperthwaite PS, Chornock R, Alexander KD, Metzger BD, Villar VA, Nicholl M (2017) The electromagnetic counterpart of the binary neutron star merger LIGO/Virgo GW170817. VIII. A comparison to cosmological short-duration gamma-ray bursts. Astrophys J 848(2):L23. https://doi.org/10.3847/2041-8213/aa9018. arXiv:1710.05438 ADSCrossRefGoogle Scholar
Fontes CJ, Fryer CL, Hungerford AL, Hakel P, Colgan J, Kilcrease DP, Sherrill ME (2015) Relativistic opacities for astrophysical applications. High Energy Density Phys 16:53–59. https://doi.org/10.1016/j.hedp.2015.06.002 ADSCrossRefGoogle Scholar
Fontes CJ, Fryer CL, Hungerford AL, Wollaeger RT, Rosswog S, Berger E (2017) A line-smeared treatment of opacities for the spectra and light curves from macronovae. ArXiv e-prints arXiv:1702.02990
Foucart F (2012) Black-hole-neutron-star mergers: disk mass predictions. Phys Rev D 86:124007. https://doi.org/10.1103/PhysRevD.86.124007. arXiv:1207.6304 ADSCrossRefGoogle Scholar
Foucart F, O'Connor E, Roberts L, Duez MD, Haas R, Kidder LE, Ott CD, Pfeiffer HP, Scheel MA, Szilagyi B (2015) Post-merger evolution of a neutron star-black hole binary with neutrino transport. Phys Rev D 91:124021. https://doi.org/10.1103/PhysRevD.91.124021. arXiv:1502.04146 ADSCrossRefGoogle Scholar
Foucart F, Desai D, Brege W, Duez MD, Kasen D, Hemberger DA, Kidder LE, Pfeiffer HP, Scheel MA (2017) Dynamical ejecta from precessing neutron star-black hole mergers with a hot, nuclear-theory based equation of state. Class Quantum Grav 34(4):044002. https://doi.org/10.1088/1361-6382/aa573b. arXiv:1611.01159 ADSCrossRefGoogle Scholar
Foucart F, Hinderer T, Nissanke S (2018) Remnant baryon mass in neutron star-black hole mergers: predictions for binary neutron star mimickers and rapidly spinning black holes. Phys Rev D 98(8):081501. https://doi.org/10.1103/PhysRevD.98.081501. arXiv:1807.00011 ADSCrossRefGoogle Scholar
Fraija N, De Colle F, Veres P, Dichiara S, Barniol Duran R, Galvan-Gamez A (2019) The short GRB 170817A: modelling the off-axis emission and implications on the ejecta magnetization. Astrophys J 871:123. https://doi.org/10.3847/1538-4357/aaf564. arXiv:1710.08514 ADSCrossRefGoogle Scholar
Freiburghaus C, Rosswog S, Thielemann F (1999) \(r\)-process in neutron star mergers. Astrophys J 525:L121–L124. https://doi.org/10.1086/312343 ADSCrossRefGoogle Scholar
Fruchter AS et al (2006) Long \(\gamma \)-ray bursts and core-collapse supernovae have different environments. Nature 441:463–468. https://doi.org/10.1038/nature04787. arXiv:astro-ph/0603537 ADSCrossRefGoogle Scholar
Fryer CL, Herwig F, Hungerford A, Timmes FX (2006) Supernova fallback: a possible site for the r-process. Astrophys J Lett 646(2):L131–L134. https://doi.org/10.1086/507071. arXiv:astro-ph/0606450 ADSCrossRefGoogle Scholar
Fujibayashi S, Kiuchi K, Nishimura N, Sekiguchi Y, Shibata M (2018) Mass ejection from the remnant of a binary neutron star merger: viscous-radiation hydrodynamics study. Astrophys J 860(1):64. https://doi.org/10.3847/1538-4357/aabafd. arXiv:1711.02093 ADSCrossRefGoogle Scholar
Gaensler BM, Slane PO (2006) The evolution and structure of pulsar wind nebulae. Annu Rev Astron Astrophys 44:17–47. https://doi.org/10.1146/annurev.astro.44.051905.092528. arXiv:astro-ph/0601081 ADSCrossRefGoogle Scholar
Gall C, Hjorth J, Rosswog S, Tanvir NR, Levan AJ (2017) Lanthanides or dust in kilonovae: lessons learned from GW170817. Astrophys J Lett 849(2):L19. https://doi.org/10.3847/2041-8213/aa93f9. arXiv:1710.05863 ADSCrossRefGoogle Scholar
Gao H, Ding X, Wu XF, Zhang B, Dai ZG (2013) Bright broadband afterglows of gravitational wave bursts from mergers of binary neutron stars. Astrophys J 771:86. https://doi.org/10.1088/0004-637X/771/2/86. arXiv:1301.0439 ADSCrossRefGoogle Scholar
Gao H, Ding X, Wu XF, Dai ZG, Zhang B (2015) GRB 080503 late afterglow re-brightening: signature of a magnetar-powered merger-nova. Astrophys J 807:163. https://doi.org/10.1088/0004-637X/807/2/163. arXiv:1506.06816 ADSCrossRefGoogle Scholar
Gao H, Cao Z, Ai S, Zhang B (2017) A more stringent constraint on the mass ratio of binary neutron star merger GW170817. Astrophys J Lett 851(2):L45. https://doi.org/10.3847/2041-8213/aaa0c6. arXiv:1711.08577 ADSCrossRefGoogle Scholar
Gehrels N, Spergel D, WFIRST SDT Project (2015) Wide-field infrared survey telescope (WFIRST) mission and synergies with LISA and LIGO-Virgo. J Phys: Conf Ser 610:012007. https://doi.org/10.1088/1742-6596/610/1/012007. arXiv:1411.0313 Google Scholar
Gehrels N, Cannizzo JK, Kanner J, Kasliwal MM, Nissanke S, Singer LP (2016) Galaxy strategy for LIGO-Virgo gravitational wave counterpart searches. Astrophys J 820:136. https://doi.org/10.3847/0004-637X/820/2/136. arXiv:1508.03608 ADSCrossRefGoogle Scholar
Ghosh S, Bloemen S, Nelemans G, Groot PJ, Price LR (2016) Tiling strategies for optical follow-up of gravitational-wave triggers by telescopes with a wide field of view. Astron Astrophys 592:A82. https://doi.org/10.1051/0004-6361/201527712. arXiv:1511.02673 ADSCrossRefGoogle Scholar
Giacomazzo B, Perna R (2013) Formation of stable magnetars from binary neutron star mergers. Astrophys J Lett 771:L26. https://doi.org/10.1088/2041-8205/771/2/L26. arXiv:1306.1608 ADSCrossRefGoogle Scholar
Gold R, Bernuzzi S, Thierfelder M, Brügmann B, Pretorius F (2012) Eccentric binary neutron star mergers. Phys Rev D 86:121501. https://doi.org/10.1103/PhysRevD.86.121501. arXiv:1109.5128 ADSCrossRefGoogle Scholar
Goldstein A et al (2017) An ordinary short gamma-ray burst with extraordinary implications: Fermi-GBM detection of GRB 170817A. Astrophys J Lett 848:L14. https://doi.org/10.3847/2041-8213/aa8f41. arXiv:1710.05446 ADSCrossRefGoogle Scholar
Gompertz BP, Levan AJ, Tanvir NR, Hjorth J, Covino S, Evans PA, Fruchter AS, González-Fernández C, Jin ZP, Lyman JD, Oates SR, O'Brien PT, Wiersema K (2018) The diversity of kilonova emission in short gamma-ray bursts. Astrophys J 860(1):62. https://doi.org/10.3847/1538-4357/aac206. arXiv:1710.05442 ADSCrossRefGoogle Scholar
Goriely S, Demetriou P, Janka HT, Pearson JM, Samyn M (2005) The \(r\)-process nucleosynthesis: a continued challenge for nuclear physics and astrophysics. Nucl Phys A 758:587–594. https://doi.org/10.1016/j.nuclphysa.2005.05.107. arXiv:astro-ph/0410429 ADSCrossRefGoogle Scholar
Goriely S, Bauswein A, Janka HT (2011) \(r\)-process nucleosynthesis in dynamically ejected matter of neutron star mergers. Astrophys J Lett 738:L32. https://doi.org/10.1088/2041-8205/738/2/L32. arXiv:1107.0899 ADSCrossRefGoogle Scholar
Gottlieb O, Nakar E, Piran T, Hotokezaka K (2018) A cocoon shock breakout as the origin of the \(\gamma \)-ray emission in GW170817. Mon Not R Astron Soc 479(1):588–600. https://doi.org/10.1093/mnras/sty1462. arXiv:1710.05896 ADSCrossRefGoogle Scholar
Granot J, Guetta D, Gill R (2017) Lessons from the short GRB 170817A: the first gravitational-wave detection of a binary neutron star merger. Astrophys J 850(2):L24. https://doi.org/10.3847/2041-8213/aa991d. arXiv:1710.06407 ADSCrossRefGoogle Scholar
Grossman D, Korobkin O, Rosswog S, Piran T (2014) The long-term evolution of neutron star merger remnants—II. Radioactively powered transients. Mon Not R Astron Soc 439:757–770. https://doi.org/10.1093/mnras/stt2503. arXiv:1307.2943 ADSCrossRefGoogle Scholar
Haggard D, Nynka M, Ruan JJ, Kalogera V, Cenko SB, Evans P, Kennea JA (2017) A deep Chandra X-ray study of neutron star coalescence GW170817. Astrophys J Lett 848:L25. https://doi.org/10.3847/2041-8213/aa8ede. arXiv:1710.05852 ADSCrossRefGoogle Scholar
Halevi G, Mösta P (2018) r-process nucleosynthesis from three-dimensional jet-driven core-collapse supernovae with magnetic misalignments. Mon Not R Astron Soc 477(2):2366–2375. https://doi.org/10.1093/mnras/sty797. arXiv:1801.08943 ADSCrossRefGoogle Scholar
Hallinan G, Corsi A, Mooley KP, Hotokezaka K, Nakar E, Kasliwal MM, Kaplan DL, Frail DA, Myers ST, Murphy T (2017) A radio counterpart to a neutron star merger. Science 358(6370):1579–1583. https://doi.org/10.1126/science.aap9855. arXiv:1710.05435 ADSCrossRefGoogle Scholar
Hjorth J, Levan AJ, Tanvir NR, Lyman JD, Wojtak R, Schrøder SL, Mandel I, Gall C, Bruun SH (2017) The distance to NGC 4993: the host galaxy of the gravitational-wave event GW170817. Astrophys J Lett 848(2):L31. https://doi.org/10.3847/2041-8213/aa9110. arXiv:1710.05856 ADSCrossRefGoogle Scholar
Holmbeck EM, Sprouse TM, Mumpower MR, Vassh N, Surman R, Beers TC, Kawano T (2019) Actinide production in the neutron-rich ejecta of a neutron star merger. Astrophys J 870(1):23. https://doi.org/10.3847/1538-4357/aaefef. arXiv:1807.06662 ADSCrossRefGoogle Scholar
Holz DE, Hughes SA (2005) Using gravitational-wave standard sirens. Astrophys J 629:15–22. https://doi.org/10.1086/431341. arXiv:astro-ph/0504616 ADSCrossRefGoogle Scholar
Horesh A, Hotokezaka K, Piran T, Nakar E, Hancock P (2016) Testing the magnetar model via a late-time radio observations of two macronova candidates. Astrophys J Lett 819:L22. https://doi.org/10.3847/2041-8205/819/2/L22. arXiv:1601.01692 ADSCrossRefGoogle Scholar
Horowitz CJ, Arcones A, Côté B, Dillmann I, Nazarewicz W, Roederer IU, Schatz H, Aprahamian A, Atanasov D, Bauswein A, Bliss J, Brodeur M, Clark JA, Frebel A, Foucart F, Hansen CJ, Just O, Kankainen A, McLaughlin GC, Kelly JM, Liddick SN, Lee DM, Lippuner J, Martin D, Mendoza-Temis J, Metzger BD, Mumpower MR, Perdikakis G, Pereira J, O'Shea BW, Reifarth R, Rogers AM, Siegel DM, Spyrou A, Surman R, Tang X, Uesaka T, Wang M (2019) r-process nucleosynthesis: connecting rare-isotope beam facilities with the cosmos. J Phys G: Nucl Part Phys 46:083001. https://doi.org/10.1088/1361-6471/ab0849. arXiv:1805.04637 ADSCrossRefGoogle Scholar
Hotokezaka K, Kyutoku K, Okawa H, Shibata M, Kiuchi K (2011) Binary neutron star mergers: dependence on the nuclear equation of state. Phys Rev D 83:124008. https://doi.org/10.1103/PhysRevD.83.124008. arXiv:1105.4370 ADSCrossRefGoogle Scholar
Hotokezaka K, Kiuchi K, Kyutoku K, Okawa H, Sekiguchi YI, Shibata M, Taniguchi K (2013a) Mass ejection from the merger of binary neutron stars. Phys Rev D 87:024001. https://doi.org/10.1103/PhysRevD.87.024001. arXiv:1212.0905 ADSCrossRefGoogle Scholar
Hotokezaka K, Kyutoku K, Tanaka M, Kiuchi K, Sekiguchi Y, Shibata M, Wanajo S (2013b) Progenitor models of the electromagnetic transient associated with the short gamma ray burst 130603B. Astrophys J Lett 778:L16. https://doi.org/10.1088/2041-8205/778/1/L16. arXiv:1310.1623 ADSCrossRefGoogle Scholar
Hotokezaka K, Piran T, Paul M (2015) Short-lived \(^{244}\)Pu points to compact binary mergers as sites for heavy \(r\)-process nucleosynthesis. Nature Phys 11:1042. https://doi.org/10.1038/nphys3574. arXiv:1510.00711 ADSCrossRefGoogle Scholar
Hotokezaka K, Wanajo S, Tanaka M, Bamba A, Terada Y, Piran T (2016) Radioactive decay products in neutron star merger ejecta: heating efficiency and \(\gamma \)-ray emission. Mon Not R Astron Soc 459:35–43. https://doi.org/10.1093/mnras/stw404. arXiv:1511.05580 ADSCrossRefGoogle Scholar
Hotokezaka K, Sari R, Piran T (2017) Analytic heating rate of neutron star merger ejecta derived from Fermi's theory of beta decay. Mon Not R Astron Soc 468:91–96. https://doi.org/10.1093/mnras/stx411. arXiv:1701.02785 ADSCrossRefGoogle Scholar
Hotokezaka K, Beniamini P, Piran T (2018) Neutron star mergers as sites of r-process nucleosynthesis and short gamma-ray bursts. Int J Mod Phys D 27(13):1842005. https://doi.org/10.1142/S0218271818420051. arXiv:1801.01141 ADSMathSciNetCrossRefGoogle Scholar
Howell EJ, Chu Q, Rowlinson A, Gao H, Zhang B, Tingay SJ, Boër M, Wen L (2016) Fast response electromagnetic follow-ups from low latency GW triggers. J Phys: Conf Ser 716:012009. https://doi.org/10.1088/1742-6596/716/1/012009. arXiv:1603.04120 CrossRefGoogle Scholar
Howell EJ, Ackley K, Rowlinson A, Coward D (2019) Joint gravitational wave–gamma-ray burst detection rates in the aftermath of GW170817. Mon Not R Astron Soc 485(1):1435–1447. https://doi.org/10.1093/mnras/stz455. arXiv:1811.09168 ADSCrossRefGoogle Scholar
Hu L, Wu X, Andreoni I, Ashley MCB, Cooke J, Cui X, Du F, Dai Z, Gu B, Hu Y, Lu H, Li X, Li Z, Liang E, Liu L, Ma B, Shang Z, Sun T, Suntzeff NB, Tao C, Udden SA, Wang L, Wang X, Wen H, Xiao D, Su J, Yang J, Yang S, Yuan X, Zhou H, Zhang H, Zhou J, Zhu Z (2017) Optical observations of LIGO source GW 170817 by the Antarctic Survey Telescopes at Dome A, Antarctica. Sci Bull 62:1433–1438. https://doi.org/10.1016/j.scib.2017.10.006. arXiv:1710.05462 CrossRefGoogle Scholar
Hüdepohl L, Müller B, Janka HT, Marek A, Raffelt GG (2010) Neutrino signal of electron-capture supernovae from core collapse to cooling. Phys Rev Lett 104:251101. https://doi.org/10.1103/PhysRevLett.104.251101. arXiv:0912.0260 ADSCrossRefGoogle Scholar
Hulse RA, Taylor JH (1975) Discovery of a pulsar in a binary system. Astrophys J Lett 195:L51–L53. https://doi.org/10.1086/181708 ADSCrossRefGoogle Scholar
Hurley K (2013) All-sky monitoring of high-energy transients. In: Huber MCE, Pauluhn A, Culhane JL, Timothy JG, Wilhelm K, Zehnder A (eds) Observing photons in space: a guide to experimental space astronomy, ISSI Scientific Reports Series, vol 9. Springer, New York, pp 255–260. https://doi.org/10.1007/978-1-4614-7804-1_13 CrossRefGoogle Scholar
Im M, Yoon Y, Lee SKJ, Lee HM, Kim J, Lee CU, Kim SL, Troja E, Choi C, Lim G, Ko J, Shim H (2017) Distance and properties of NGC 4993 as the host galaxy of the gravitational-wave source GW170817. Astrophys J Lett 849(1):L16. https://doi.org/10.3847/2041-8213/aa9367. arXiv:1710.05861 ADSCrossRefGoogle Scholar
Ishii A, Shigeyama T, Tanaka M (2018) Free neutron ejection from shock breakout in binary neutron star mergers. Astrophys J 861(1):25. https://doi.org/10.3847/1538-4357/aac385. arXiv:1805.04909 ADSCrossRefGoogle Scholar
Ji AP, Frebel A, Chiti A, Simon JD (2016) R-process enrichment from a single event in an ancient dwarf galaxy. Nature 531:610–613. https://doi.org/10.1038/nature17425. arXiv:1512.01558 ADSCrossRefGoogle Scholar
Jin ZP, Li X, Cano Z, Covino S, Fan YZ, Wei DM (2015) The light curve of the macronova associated with the long-short burst GRB 060614. Astrophys J Lett 811:L22. https://doi.org/10.1088/2041-8205/811/2/L22. arXiv:1507.07206 ADSCrossRefGoogle Scholar
Jin ZP, Hotokezaka K, Li X, Tanaka M, D'Avanzo P, Fan YZ, Covino S, Wei DM, Piran T (2016) The macronova in GRB 050709 and the GRB-macronova connection. Nature Commun 7:12898. https://doi.org/10.1038/ncomms12898. arXiv:1603.07869 ADSCrossRefGoogle Scholar
Just O, Bauswein A, Pulpillo RA, Goriely S, Janka HT (2015) Comprehensive nucleosynthesis analysis for ejecta of compact binary mergers. Mon Not R Astron Soc 448:541–567. https://doi.org/10.1093/mnras/stv009. arXiv:1406.2687 ADSCrossRefGoogle Scholar
Kagawa Y, Yonetoku D, Sawano T, Toyanago A, Nakamura T, Takahashi K, Kashiyama K, Ioka K (2015) X-raying extended emission and rapid decay of short gamma-ray bursts. Astrophys J 811:4. https://doi.org/10.1088/0004-637X/811/1/4. arXiv:1506.02359 ADSCrossRefGoogle Scholar
Kalogera V, Kim C, Lorimer DR, Burgay M, D'Amico N, Possenti A, Manchester RN, Lyne AG, Joshi BC, McLaughlin MA, Kramer M, Sarkissian JM, Camilo F (2004) Erratum: "The cosmic coalescence rates for double neutron star binaries" (ApJ, 601, L179 [2004]). Astrophys J Lett 614:L137–L138. https://doi.org/10.1086/425868. arXiv:astro-ph/0312101 ADSCrossRefGoogle Scholar
Kaplan DL, Murphy T, Rowlinson A, Croft SD, Wayth RB, Trott CM (2016) Strategies for finding prompt radio counterparts to gravitational wave transients with the Murchison Widefield Array. Publ Astron Soc Australia 33:e050. https://doi.org/10.1017/pasa.2016.43. arXiv:1609.00634 ADSCrossRefGoogle Scholar
Kaplan JD, Ott CD, O'Connor EP, Kiuchi K, Roberts L, Duez M (2014) The influence of thermal pressure on equilibrium models of hypermassive neutron star merger remnants. Astrophys J 790:19. https://doi.org/10.1088/0004-637X/790/1/19. arXiv:1306.4034 ADSCrossRefGoogle Scholar
Kasen D, Barnes J (2019) Radioactive heating and late time kilonova light curves. Astrophys J 876(2):128. https://doi.org/10.3847/1538-4357/ab06c2. arXiv:1807.03319 ADSCrossRefGoogle Scholar
Kasen D, Bildsten L (2010) Supernova light curves powered by young magnetars. Astrophys J 717:245–249. https://doi.org/10.1088/0004-637X/717/1/245. arXiv:0911.0680 ADSCrossRefGoogle Scholar
Kasen D, Badnell NR, Barnes J (2013) Opacities and spectra of the r-process ejecta from neutron star mergers. Astrophys J 774:25. https://doi.org/10.1088/0004-637X/774/1/25. arXiv:1303.5788 ADSCrossRefGoogle Scholar
Kasen D, Fernández R, Metzger BD (2015) Kilonova light curves from the disc wind outflows of compact object mergers. Mon Not R Astron Soc 450:1777–1786. https://doi.org/10.1093/mnras/stv721. arXiv:1411.3726 ADSCrossRefGoogle Scholar
Kasen D, Metzger BD, Bildsten L (2016) Magnetar-driven shock breakout and double-peaked supernova light curves. Astrophys J 821(1):36. https://doi.org/10.3847/0004-637X/821/1/36. arXiv:1507.03645 ADSCrossRefGoogle Scholar
Kasen D, Metzger B, Barnes J, Quataert E, Ramirez-Ruiz E (2017) Origin of the heavy elements in binary neutron-star mergers from a gravitational-wave event. Nature 551:80–84. https://doi.org/10.1038/nature24453. arXiv:1710.05463 ADSCrossRefGoogle Scholar
Kasliwal MM, Nissanke S (2014) On discovering electromagnetic emission from neutron star mergers: the early years of two gravitational wave detectors. Astrophys J Lett 789:L5. https://doi.org/10.1088/2041-8205/789/1/L5. arXiv:1309.1554 ADSCrossRefGoogle Scholar
Kasliwal MM, Kasen D, Lau RM, Perley DA, Rosswog S, Ofek EO, Hotokezaka K, Chary RR, Sollerman J, Goobar A, Kaplan DL (2019) Spitzer mid-infrared detections of neutron star merger GW170817 suggests synthesis of the heaviest elements. Mon Not R Astron Soc L14. https://doi.org/10.1093/mnrasl/slz007. arXiv:1812.08708
Kasliwal MM et al (2017) Illuminating gravitational waves: a concordant picture of photons from a neutron star merger. Science 358(6370):1559–1565. https://doi.org/10.1126/science.aap9455. arXiv:1710.05436 ADSCrossRefGoogle Scholar
Kawaguchi K, Kyutoku K, Nakano H, Okawa H, Shibata M, Taniguchi K (2015) Black hole-neutron star binary merger: dependence on black hole spin orientation and equation of state. Phys Rev D 92:024014. https://doi.org/10.1103/PhysRevD.92.024014. arXiv:1506.05473 ADSCrossRefGoogle Scholar
Kawaguchi K, Kyutoku K, Shibata M, Tanaka M (2016) Models of kilonova/macronova emission from black hole-neutron star mergers. Astrophys J 825:52. https://doi.org/10.3847/0004-637X/825/1/52. arXiv:1601.07711 ADSCrossRefGoogle Scholar
Kawaguchi K, Shibata M, Tanaka M (2018) Radiative transfer simulation for the optical and near-infrared electromagnetic counterparts to GW170817. Astrophys J Lett 865(2):L21. https://doi.org/10.3847/2041-8213/aade02. arXiv:1806.04088 ADSCrossRefGoogle Scholar
Kelley LZ, Ramirez-Ruiz E, Zemp M, Diemand J, Mandel I (2010) The distribution of coalescing compact binaries in the local universe: prospects for gravitational-wave observations. Astrophys J Lett 725:L91–L96. https://doi.org/10.1088/2041-8205/725/1/L91. arXiv:1011.1256 ADSCrossRefGoogle Scholar
Kennel CF, Coroniti FV (1984) Confinement of the Crab pulsar's wind by its supernova remnant. Astrophys J 283:694–709. https://doi.org/10.1086/162356 ADSCrossRefGoogle Scholar
Kilpatrick CD, Foley RJ, Kasen D, Murguia-Berthier A, Ramirez-Ruiz E, Coulter DA, Drout MR, Piro AL, Shappee BJ, Boutsia K, Contreras C, Di Mille F, Madore BF, Morrell N, Pan YC, Prochaska JX, Rest A, Rojas-Bravo C, Siebert MR, Simon JD, Ulloa N (2017) Electromagnetic evidence that SSS17a is the result of a binary neutron star merger. Science 358(6370):1583–1587. https://doi.org/10.1126/science.aaq0073. arXiv:1710.05434 ADSMathSciNetCrossRefGoogle Scholar
Kim C, Perera BBP, McLaughlin MA (2015) Implications of PSR J0737–3039B for the galactic NS–NS binary merger rate. Mon Not R Astron Soc 448:928–938. https://doi.org/10.1093/mnras/stu2729. arXiv:1308.4676 ADSCrossRefGoogle Scholar
Kisaka S, Ioka K (2015) Long-lasting black hole jets in short gamma-ray bursts. Astrophys J Lett 804:L16. https://doi.org/10.1088/2041-8205/804/1/L16. arXiv:1503.06791 ADSCrossRefGoogle Scholar
Kisaka S, Ioka K, Nakar E (2016) X-ray-powered macronovae. Astrophys J 818:104. https://doi.org/10.3847/0004-637X/818/2/104. arXiv:1508.05093 ADSCrossRefGoogle Scholar
Kisaka S, Ioka K, Sakamoto T (2017) Bimodal long-lasting components in short gamma-ray bursts: promising electromagnetic counterparts to neutron star binary mergers. Astrophys J 846(2):142. https://doi.org/10.3847/1538-4357/aa8775. arXiv:1707.00675 ADSCrossRefGoogle Scholar
Kiuchi K, Kyutoku K, Sekiguchi Y, Shibata M, Wada T (2014) High resolution numerical relativity simulations for the merger of binary magnetized neutron stars. Phys Rev D 90:041502. https://doi.org/10.1103/PhysRevD.90.041502. arXiv:1407.2660 ADSCrossRefGoogle Scholar
Kiuchi K, Sekiguchi Y, Kyutoku K, Shibata M, Taniguchi K, Wada T (2015) High resolution magnetohydrodynamic simulation of black hole-neutron star merger: mass ejection and short gamma ray bursts. Phys Rev D 92:064034. https://doi.org/10.1103/PhysRevD.92.064034. arXiv:1506.06811 ADSCrossRefGoogle Scholar
Kiuchi K, Kyutoku K, Shibata M, Taniguchi K (2019) Revisiting the lower bound on tidal deformability derived by AT 2017gfo. Astrophys J Lett 876(2):L31. https://doi.org/10.3847/2041-8213/ab1e45. arXiv:1903.01466 ADSCrossRefGoogle Scholar
Kiziltan B, Kottas A, De Yoreo M, Thorsett SE (2013) The neutron star mass distribution. Astrophys J 778(1):66. https://doi.org/10.1088/0004-637X/778/1/66. arXiv:1011.4291 ADSCrossRefGoogle Scholar
Kocevski D, Thöne CC, Ramirez-Ruiz E, Bloom JS, Granot J, Butler NR, Perley DA, Modjaz M, Lee WH, Cobb BE, Levan AJ, Tanvir N, Covino S (2010) Limits on radioactive powered emission associated with a short-hard GRB 070724A in a star-forming galaxy. Mon Not R Astron Soc 404:963–974. https://doi.org/10.1111/j.1365-2966.2010.16327.x. arXiv:0908.0030 ADSCrossRefGoogle Scholar
Kohri K, Narayan R, Piran T (2005) Neutrino-dominated accretion and supernovae. Astrophys J 629:341–361. https://doi.org/10.1086/431354. arXiv:astro-ph/0502470 ADSCrossRefGoogle Scholar
Köppel S, Bovard L, Rezzolla L (2019) A general-relativistic determination of the threshold mass to prompt collapse in binary neutron star mergers. Astrophys J Lett 872(1):L16. https://doi.org/10.3847/2041-8213/ab0210. arXiv:1901.09977 ADSCrossRefGoogle Scholar
Korobkin O, Rosswog S, Arcones A, Winteler C (2012) On the astrophysical robustness of the neutron star merger \(r\)-process. Mon Not R Astron Soc 426:1940–1949. https://doi.org/10.1111/j.1365-2966.2012.21859.x. arXiv:1206.2379 ADSCrossRefGoogle Scholar
Korobkin O, Hungerford AM, Fryer CL, Mumpower MR, Misch GW, Sprouse TM, Lippuner J, Surman R, Couture AJ, Bloser PF, Shirazi F, Even WP, Vestrand WT, Miller RS (2019) Gamma-rays from kilonova: a potential probe of r-process nucleosynthesis. arXiv e-prints arXiv:1905.05089
Kulkarni SR (2005) Modeling supernova-like explosions associated with gamma-ray bursts with short durations. ArXiv e-prints arXiv:astro-ph/0510256
Kyutoku K, Okawa H, Shibata M, Taniguchi K (2011) Gravitational waves from spinning black hole-neutron star binaries: dependence on black hole spins and on neutron star equations of state. Phys Rev D 84(6):064018. https://doi.org/10.1103/PhysRevD.84.064018. arXiv:1108.1189 ADSCrossRefGoogle Scholar
Kyutoku K, Ioka K, Shibata M (2013) Anisotropic mass ejection from black hole-neutron star binaries: diversity of electromagnetic counterparts. Phys Rev D 88:041503. https://doi.org/10.1103/PhysRevD.88.041503. arXiv:1305.6309 ADSCrossRefGoogle Scholar
Kyutoku K, Ioka K, Okawa H, Shibata M, Taniguchi K (2015) Dynamical mass ejection from black hole-neutron star binaries. Phys Rev D 92:044028. https://doi.org/10.1103/PhysRevD.92.044028. arXiv:1502.05402 ADSCrossRefGoogle Scholar
Lackey BD, Kyutoku K, Shibata M, Brady PR, Friedman JL (2014) Extracting equation of state parameters from black hole-neutron star mergers: aligned-spin black holes and a preliminary waveform model. Phys Rev D 89(4):043009. https://doi.org/10.1103/PhysRevD.89.043009. arXiv:1303.6298 ADSCrossRefGoogle Scholar
Lamb GP, Kobayashi S (2017) Electromagnetic counterparts to structured jets from gravitational wave detected mergers. Mon Not R Astron Soc 472(4):4953–4964. https://doi.org/10.1093/mnras/stx2345. arXiv:1706.03000 ADSCrossRefGoogle Scholar
Lattimer JM, Prakash M (2016) The equation of state of hot, dense matter and neutron stars. Phys Rep 621:127–164. https://doi.org/10.1016/j.physrep.2015.12.005. arXiv:1512.07820 ADSMathSciNetCrossRefGoogle Scholar
Lattimer JM, Schramm DN (1974) Black-hole-neutron-star collisions. Astrophys J Lett 192:L145–L147. https://doi.org/10.1086/181612 ADSCrossRefGoogle Scholar
Lattimer JM, Schramm DN (1976) The tidal disruption of neutron stars by black holes in close binaries. Astrophys J 210:549–567. https://doi.org/10.1086/154860 ADSCrossRefGoogle Scholar
Lattimer JM, Schutz BF (2005) Constraining the equation of state with moment of inertia measurements. Astrophys J 629:979–984. https://doi.org/10.1086/431543. arXiv:astro-ph/0411470 ADSCrossRefGoogle Scholar
Lazzati D, Heger A (2016) The interplay between chemistry and nucleation in the formation of carbonaceous dust in supernova ejecta. Astrophys J 817:134. https://doi.org/10.3847/0004-637X/817/2/134. arXiv:1512.03453 ADSCrossRefGoogle Scholar
Lazzati D, Deich A, Morsony BJ, Workman JC (2017) Off-axis emission of short \(\gamma \)-ray bursts and the detectability of electromagnetic counterparts of gravitational-wave-detected binary mergers. Mon Not R Astron Soc 471(2):1652–1661. https://doi.org/10.1093/mnras/stx1683. arXiv:1610.01157 ADSCrossRefGoogle Scholar
Lazzati D, Perna R, Morsony BJ, Lopez-Camara D, Cantiello M, Ciolfi R, Giacomazzo B, Workman JC (2018) Late time afterglow observations reveal a collimated relativistic jet in the ejecta of the binary neutron star merger GW170817. Phys Rev Lett 120(24):241103. https://doi.org/10.1103/PhysRevLett.120.241103. arXiv:1712.03237 ADSCrossRefGoogle Scholar
Lee WH, Ramirez-Ruiz E, López-Cámara D (2009) Phase transitions and he-synthesis-driven winds in neutrino cooled accretion disks: prospects for late flares in short gamma-ray bursts. Astrophys J Lett 699:L93–L96. https://doi.org/10.1088/0004-637X/699/2/L93. arXiv:0904.3752 ADSCrossRefGoogle Scholar
Lehner L, Liebling SL, Palenzuela C, Caballero OL, O'Connor E, Anderson M, Neilsen D (2016) Unequal mass binary neutron star mergers and multimessenger signals. Class Quantum Grav 33:184002. https://doi.org/10.1088/0264-9381/33/18/184002. arXiv:1603.00501 ADSCrossRefGoogle Scholar
Levan AJ et al (2017) The environment of the binary neutron star merger GW170817. Astrophys J Lett 848(2):L28. https://doi.org/10.3847/2041-8213/aa905f. arXiv:1710.05444 ADSCrossRefGoogle Scholar
Li LX, Paczyński B (1998) Transient events from neutron star mergers. Astrophys J Lett 507:L59–L62. https://doi.org/10.1086/311680. arXiv:astro-ph/9807272 ADSCrossRefGoogle Scholar
Li SZ, Liu LD, Yu YW, Zhang B (2018) What powered the optical transient AT2017gfo associated with GW170817? Astrophys J Lett 861(2):L12. https://doi.org/10.3847/2041-8213/aace61. arXiv:1804.06597 ADSCrossRefGoogle Scholar
Lightman AP, Zdziarski AA, Rees MJ (1987) Effects of electron-positron pair opacity for spherical accretion onto black holes. Astrophys J Lett 315:L113–L118. https://doi.org/10.1086/184871 ADSCrossRefGoogle Scholar
Lippuner J, Roberts LF (2015) r-process lanthanide production and heating rates in kilonovae. Astrophys J 815:82. https://doi.org/10.1088/0004-637X/815/2/82. arXiv:1508.03133 ADSCrossRefGoogle Scholar
Lippuner J, Fernández R, Roberts LF, Foucart F, Kasen D, Metzger BD, Ott CD (2017) Signatures of hypermassive neutron star lifetimes on r-process nucleosynthesis in the disc ejecta from neutron star mergers. Mon Not R Astron Soc 472(1):904–918. https://doi.org/10.1093/mnras/stx1987. arXiv:1703.06216 ADSCrossRefGoogle Scholar
Lipunov VM, Gorbovskoy E, Kornilov VG, Tyurina N, Balanutsa P, Kuznetsov A, Vlasenko D, Kuvshinov D, Gorbunov I, Buckley DAH, Krylov AV, Podesta R, Lopez C, Podesta F, Levato H, Saffe C, Mallamachi C, Potter S, Budnev NM, Gress O, Ishmuhametova Y, Vladimirov V, Zimnukhov D, Yurkov V, Sergienko Y, Gabovich A, Rebolo R, Serra-Ricart M, Israelyan G, Chazov V, Wang X, Tlatov A, Panchenko MI (2017) MASTER optical detection of the first LIGO/Virgo neutron star binary merger GW170817. Astrophys J Lett 850(1):L1. https://doi.org/10.3847/2041-8213/aa92c0. arXiv:1710.05461 ADSCrossRefGoogle Scholar
Lyman JD, Lamb GP, Levan AJ, Mandel I, Tanvir NR, Kobayashi S, Gompertz B, Hjorth J, Fruchter AS, Kangas T (2018) The optical afterglow of the short gamma-ray burst associated with GW170817. Nat Astron 2:751–754. https://doi.org/10.1038/s41550-018-0511-3. arXiv:1801.02669 ADSCrossRefGoogle Scholar
MacFadyen AI, Ramirez-Ruiz E, Zhang W (2005) X-ray flares following short gamma-ray bursts from shock heating of binary stellar companions. ArXiv e-prints arXiv:astro-ph/0510192
Mandhai S, Tanvir N, Lamb G, Levan A, Tsang D (2018) The rate of short-duration gamma-ray bursts in the local universe. Galaxies 6(4):130. https://doi.org/10.3390/galaxies6040130. arXiv:1812.00507 ADSCrossRefGoogle Scholar
Margalit B, Metzger BD (2017) Constraining the maximum mass of neutron stars from multi-messenger observations of GW170817. Astrophys J Lett 850(2):L19. https://doi.org/10.3847/2041-8213/aa991c. arXiv:1710.05938 ADSCrossRefGoogle Scholar
Margalit B, Metzger BD (2019) The multi-messenger matrix: the future of neutron star merger constraints on the nuclear equation of state. Astrophys J Lett 880:L15. https://doi.org/10.3847/2041-8213/ab2ae2. arXiv:1904.11995 ADSCrossRefGoogle Scholar
Margalit B, Metzger BD, Beloborodov AM (2015) Does the collapse of a supramassive neutron star leave a debris disk? Phys Rev Lett 115:171101. https://doi.org/10.1103/PhysRevLett.115.171101. arXiv:1505.01842 ADSCrossRefGoogle Scholar
Margalit B, Metzger BD, Berger E, Nicholl M, Eftekhari T, Margutti R (2018) Unveiling the engines of fast radio bursts, superluminous supernovae, and gamma-ray bursts. Mon Not R Astron Soc 481(2):2407–2426. https://doi.org/10.1093/mnras/sty2417. arXiv:1806.05690 ADSCrossRefGoogle Scholar
Margutti R, Berger E, Fong W, Guidorzi C, Alexander KD, Metzger BD, Blanchard PK, Cowperthwaite PS, Chornock R, Eftekhari T (2017) The electromagnetic counterpart of the binary neutron star merger LIGO/Virgo GW170817. V. Rising X-ray emission from an off-axis jet. Astrophys J 848(2):L20. https://doi.org/10.3847/2041-8213/aa9057. arXiv:1710.05431 ADSCrossRefGoogle Scholar
Margutti R, Alexander KD, Xie X, Sironi L, Metzger BD, Kathirgamaraju A, Fong W, Blanchard PK, Berger E, MacFadyen A, Giannios D, Guidorzi C, Hajela A, Chornock R, Cowperthwaite PS, Eftekhari T, Nicholl M, Villar VA, Williams PKG, Zrake J (2018) The binary neutron star event LIGO/Virgo GW170817 160 days after merger: synchrotron emission across the electromagnetic spectrum. Astrophys J Lett 856:L18. https://doi.org/10.3847/2041-8213/aab2ad. arXiv:1801.03531 ADSCrossRefGoogle Scholar
Martin D, Perego A, Arcones A, Thielemann FK, Korobkin O, Rosswog S (2015) Neutrino-driven winds in the aftermath of a neutron star merger: nucleosynthesis and electromagnetic transients. Astrophys J 813:2. https://doi.org/10.1088/0004-637X/813/1/2. arXiv:1506.05048 ADSCrossRefGoogle Scholar
Martínez-Pinedo G, Fischer T, Lohs A, Huther L (2012) Charged-current weak interaction processes in hot and dense matter and its impact on the spectra of neutrinos emitted from protoneutron star cooling. Phys Rev Lett 109:251104. https://doi.org/10.1103/PhysRevLett.109.251104. arXiv:1205.2793 ADSCrossRefGoogle Scholar
Mathews GJ, Bazan G, Cowan JJ (1992) Evolution of heavy-element abundances as a constraint on sites for neutron-capture nucleosynthesis. Astrophys J 391:719–735. https://doi.org/10.1086/171383 ADSCrossRefGoogle Scholar
Matsumoto T (2018) Polarization of the first-hour macronovae. Mon Not R Astron Soc 481(1):1008–1015. https://doi.org/10.1093/mnras/sty2317. arXiv:1807.04766 ADSCrossRefGoogle Scholar
Matsumoto T, Ioka K, Kisaka S, Nakar E (2018) Is the macronova in GW170817 powered by the central engine? Astrophys J 861(1):55. https://doi.org/10.3847/1538-4357/aac4a8. arXiv:1802.07732 ADSCrossRefGoogle Scholar
McCully C, Hiramatsu D, Howell DA, Hosseinzadeh G, Arcavi I, Kasen D, Barnes J, Shara MM, Williams TB, Väisänen P, Potter SB, Romero-Colmenero E, Crawford SM, Buckley DAH, Cooke J, Andreoni I, Pritchard TA, Mao J, Gromadzki M, Burke J (2017) The rapid reddening and featureless optical spectra of the optical counterpart of GW170817, AT 2017gfo, during the first four days. Astrophys J Lett 848(2):L32. https://doi.org/10.3847/2041-8213/aa9111. arXiv:1710.05853 ADSCrossRefGoogle Scholar
Mendoza-Temis JJ, Wu MR, Langanke K, Martínez-Pinedo G, Bauswein A, Janka HT (2015) Nuclear robustness of the \(r\) process in neutron-star mergers. Phys Rev C 92:055805. https://doi.org/10.1103/PhysRevC.92.055805 ADSCrossRefGoogle Scholar
Metzger BD (2017a) Kilonovae. Living Rev Relativ 20:3. https://doi.org/10.1007/s41114-017-0006-z. arXiv:1610.09381 ADSCrossRefGoogle Scholar
Metzger BD (2017b) Welcome to the multi-messenger era! Lessons from a neutron star merger and the landscape ahead. arXiv e-prints arXiv:1710.05931
Metzger BD, Berger E (2012) What is the most promising electromagnetic counterpart of a neutron star binary merger? Astrophys J 746:48. https://doi.org/10.1088/0004-637X/746/1/48. arXiv:1108.6056 ADSCrossRefGoogle Scholar
Metzger BD, Bower GC (2014) Constraints on long-lived remnants of neutron star binary mergers from late-time radio observations of short duration gamma-ray bursts. Mon Not R Astron Soc 437:1821–1827. https://doi.org/10.1093/mnras/stt2010. arXiv:1310.4506 ADSCrossRefGoogle Scholar
Metzger BD, Fernández R (2014) Red or blue? A potential kilonova imprint of the delay until black hole formation following a neutron star merger. Mon Not R Astron Soc 441:3444–3453. https://doi.org/10.1093/mnras/stu802. arXiv:1402.4803 ADSCrossRefGoogle Scholar
Metzger BD, Piro AL (2014) Optical and X-ray emission from stable millisecond magnetars formed from the merger of binary neutron stars. Mon Not R Astron Soc 439:3916–3930. https://doi.org/10.1093/mnras/stu247. arXiv:1311.1519 ADSCrossRefGoogle Scholar
Metzger BD, Thompson TA, Quataert E (2007) Proto-neutron star winds with magnetic fields and rotation. Astrophys J 659:561–579. https://doi.org/10.1086/512059. arXiv:astro-ph/0608682 ADSCrossRefGoogle Scholar
Metzger BD, Piro AL, Quataert E (2008a) Time-dependent models of accretion discs formed from compact object mergers. Mon Not R Astron Soc 390(2):781–797. https://doi.org/10.1111/j.1365-2966.2008.13789.x. arXiv:0805.4415 ADSCrossRefGoogle Scholar
Metzger BD, Quataert E, Thompson TA (2008b) Short-duration gamma-ray bursts with extended emission from protomagnetar spin-down. Mon Not R Astron Soc 385:1455–1460. https://doi.org/10.1111/j.1365-2966.2008.12923.x. arXiv:0712.1233 ADSCrossRefGoogle Scholar
Metzger BD, Thompson TA, Quataert E (2008c) On the conditions for neutron-rich gamma-ray burst outflows. Astrophys J 676:1130–1150. https://doi.org/10.1086/526418. arXiv:0708.3395 ADSCrossRefGoogle Scholar
Metzger BD, Piro AL, Quataert E (2009) Neutron-rich freeze-out in viscously spreading accretion discs formed from compact object mergers. Mon Not R Astron Soc 396:304–314. https://doi.org/10.1111/j.1365-2966.2008.14380.x. arXiv:0810.2535 ADSCrossRefGoogle Scholar
Metzger BD, Arcones A, Quataert E, Martínez-Pinedo G (2010a) The effects of r-process heating on fallback accretion in compact object mergers. Mon Not R Astron Soc 402(4):2771–2777. https://doi.org/10.1111/j.1365-2966.2009.16107.x. arXiv:0908.0530 ADSCrossRefGoogle Scholar
Metzger BD, Martínez-Pinedo G, Darbha S, Quataert E, Arcones A, Kasen D, Thomas R, Nugent P, Panov IV, Zinner NT (2010b) Electromagnetic counterparts of compact object mergers powered by the radioactive decay of \(r\)-process nuclei. Mon Not R Astron Soc 406:2650–2662. https://doi.org/10.1111/j.1365-2966.2010.16864.x. arXiv:1001.5029 ADSCrossRefGoogle Scholar
Metzger BD, Vurm I, Hascoët R, Beloborodov AM (2014) Ionization break-out from millisecond pulsar wind nebulae: an X-ray probe of the origin of superluminous supernovae. Mon Not R Astron Soc 437:703–720. https://doi.org/10.1093/mnras/stt1922. arXiv:1307.8115 ADSCrossRefGoogle Scholar
Metzger BD, Bauswein A, Goriely S, Kasen D (2015) Neutron-powered precursors of kilonovae. Mon Not R Astron Soc 446:1115–1120. https://doi.org/10.1093/mnras/stu2225. arXiv:1409.0544 ADSCrossRefGoogle Scholar
Metzger BD, Thompson TA, Quataert E (2018) A magnetar origin for the kilonova ejecta in GW170817. Astrophys J 856(2):101. https://doi.org/10.3847/1538-4357/aab095. arXiv:1801.04286 ADSCrossRefGoogle Scholar
Meyer BS (1989) Decompression of initially cold neutron star matter: a mechanism for the \(r\)-process? Astrophys J 343:254–276. https://doi.org/10.1086/167702 ADSCrossRefGoogle Scholar
Miller JM, Ryan BR, Dolence JC, Burrows A, Fontes CJ, Fryer CL, Korobkin O, Lippuner J, Mumpower MR, Wollaeger RT (2019) Full transport model of GW170817-like disk produces a blue kilonova. Phys Rev D 100:023008. https://doi.org/10.1103/PhysRevD.100.023008. arXiv:1905.07477 ADSCrossRefGoogle Scholar
Miller MC (2016) Implications of the gravitational wave event GW150914. Gen Relativ Gravit 48:95. https://doi.org/10.1007/s10714-016-2088-4. arXiv:1606.06526 ADSCrossRefGoogle Scholar
Miller MC (2017) Gravitational waves: a golden binary. Nature 551(7678):36–37. https://doi.org/10.1038/nature24153 ADSCrossRefGoogle Scholar
Möller P, Nix JR, Myers WD, Swiatecki WJ (1995) Nuclear ground-state masses and deformations. Atomic Data Nucl Data Tables 59:185. https://doi.org/10.1006/adnd.1995.1002. arXiv:nucl-th/9308022 ADSCrossRefGoogle Scholar
Mooley KP, Deller AT, Gottlieb O, Nakar E, Hallinan G, Bourke S, Frail DA, Horesh A, Corsi A, Hotokezaka K (2018) Superluminal motion of a relativistic jet in the neutron-star merger GW170817. Nature 561(7723):355–359. https://doi.org/10.1038/s41586-018-0486-3. arXiv:1806.09693 ADSCrossRefGoogle Scholar
Most ER, Papenfort LJ, Tsokaros A, Rezzolla L (2019) Impact of high spins on the ejection of mass in GW170817. Astrophys J 884:40. https://doi.org/10.3847/1538-4357/ab3ebb. arXiv:1904.04220 ADSCrossRefGoogle Scholar
Mösta P, Richers S, Ott CD, Haas R, Piro AL, Boydstun K, Abdikamalov E, Reisswig C, Schnetter E (2014) Magnetorotational core-collapse supernovae in three dimensions. Astrophys J Lett 785:L29. https://doi.org/10.1088/2041-8205/785/2/L29. arXiv:1403.1230 ADSCrossRefGoogle Scholar
Mumpower MR, Surman R, McLaughlin GC, Aprahamian A (2016) The impact of individual nuclear properties on \(r\)-process nucleosynthesis. Prog Part Nucl Phys 86:86–126. https://doi.org/10.1016/j.ppnp.2015.09.001. arXiv:1508.07352 ADSCrossRefGoogle Scholar
Murguia-Berthier A, Ramirez-Ruiz E, Kilpatrick CD, Foley RJ, Kasen D, Lee WH, Piro AL, Coulter DA, Drout MR, Madore BF, Shappee BJ, Pan YC, Prochaska JX, Rest A, Rojas-Bravo C, Siebert MR, Simon JD (2017a) A neutron star binary merger model for GW170817/GRB 170817A/SSS17a. Astrophys J Lett 848(2):L34. https://doi.org/10.3847/2041-8213/aa91b3. arXiv:1710.05453 ADSCrossRefGoogle Scholar
Murguia-Berthier A, Ramirez-Ruiz E, Montes G, De Colle F, Rezzolla L, Rosswog S, Takami K, Perego A, Lee WH (2017b) The properties of short gamma-ray burst jets triggered by neutron star mergers. Astrophys J Lett 835(2):L34. https://doi.org/10.3847/2041-8213/aa5b9e. arXiv:1609.04828 ADSCrossRefGoogle Scholar
Nakar E (2007) Short-hard gamma-ray bursts. Phys Rep 442:166–236. https://doi.org/10.1016/j.physrep.2007.02.005. arXiv:astro-ph/0701748 ADSCrossRefGoogle Scholar
Nakar E, Piran T (2017) The observable signatures of GRB cocoons. Astrophys J 834:28. https://doi.org/10.3847/1538-4357/834/1/28. arXiv:1610.05362 ADSCrossRefGoogle Scholar
Narayan R, Paczyński B, Piran T (1992) Gamma-ray bursts as the death throes of massive binary stars. Astrophys J Lett 395:L83–L86. https://doi.org/10.1086/186493. arXiv:astro-ph/9204001 ADSCrossRefGoogle Scholar
Nedora V, Bernuzzi S, Radice D, Perego A, Endrizzi A, Ortiz N (2019) Spiral-wave wind for the blue kilonova. arXiv e-prints arXiv:1907.04872 ADSCrossRefGoogle Scholar
Nicholl M et al (2017) The electromagnetic counterpart of the binary neutron star merger LIGO/Virgo GW170817. III. Optical and UV spectra of a blue kilonova from fast polar ejecta. Astrophys J Lett 848:L18. https://doi.org/10.3847/2041-8213/aa9029. arXiv:1710.05456 ADSCrossRefGoogle Scholar
Nishimura N, Takiwaki T, Thielemann FK (2015) The r-process nucleosynthesis in the various jet-like explosions of magnetorotational core-collapse supernovae. Astrophys J 810(2):109. https://doi.org/10.1088/0004-637X/810/2/109. arXiv:1501.06567 ADSCrossRefGoogle Scholar
Nissanke S, Holz DE, Dalal N, Hughes SA, Sievers JL, Hirata CM (2013) Determining the Hubble constant from gravitational wave observations of merging compact binaries. ArXiv e-prints arXiv:1307.2638
Norris JP, Bonnell JT (2006) Short gamma-ray bursts with extended emission. Astrophys J 643:266–275. https://doi.org/10.1086/502796. arXiv:astro-ph/0601190 ADSCrossRefGoogle Scholar
Nousek JA et al (2006) Evidence for a canonical gamma-ray burst afterglow light curve in the Swift XRT data. Astrophys J 642:389–400. https://doi.org/10.1086/500724. arXiv:astro-ph/0508332 ADSCrossRefGoogle Scholar
Oechslin R, Janka HT (2006) Torus formation in neutron star mergers and well-localized short gamma-ray bursts. Mon Not R Astron Soc 368:1489–1499. https://doi.org/10.1111/j.1365-2966.2006.10238.x. arXiv:astro-ph/0507099 ADSCrossRefGoogle Scholar
Oechslin R, Janka HT, Marek A (2007) Relativistic neutron star merger simulations with non-zero temperature equations of state. I. Variation of binary parameters and equation of state. Astron Astrophys 467:395–409. https://doi.org/10.1051/0004-6361:20066682. arXiv:astro-ph/0611047 ADSCrossRefGoogle Scholar
Özel F, Freire P (2016) Masses, radii, and the equation of state of neutron stars. Annu Rev Astron Astrophys 54:401–440. https://doi.org/10.1146/annurev-astro-081915-023322. arXiv:1603.02698 ADSCrossRefGoogle Scholar
Özel F, Psaltis D, Ransom S, Demorest P, Alford M (2010) The massive pulsar PSR J1614–2230: linking quantum chromodynamics, gamma-ray bursts, and gravitational wave astronomy. Astrophys J Lett 724:L199–L202. https://doi.org/10.1088/2041-8205/724/2/L199. arXiv:1010.5790 ADSCrossRefGoogle Scholar
Paczyński B (1986) Gamma-ray bursters at cosmological distances. Astrophys J Lett 308:L43–L46. https://doi.org/10.1086/184740 ADSCrossRefGoogle Scholar
Palenzuela C, Lehner L, Ponce M, Liebling SL, Anderson M, Neilsen D, Motl P (2013) Electromagnetic and gravitational outputs from binary-neutron-star coalescence. Phys Rev Lett 111:061105. https://doi.org/10.1103/PhysRevLett.111.061105. arXiv:1301.7074 ADSCrossRefGoogle Scholar
Pan YC, Kilpatrick CD, Simon JD, Xhakaj E, Boutsia K, Coulter DA, Drout MR, Foley RJ, Kasen D, Morrell N, Murguia-Berthier A, Osip D, Piro AL, Prochaska JX, Ramirez-Ruiz E, Rest A, Rojas-Bravo C, Shappee BJ, Siebert MR (2017) The old host-galaxy environment of SSS17a, the first electromagnetic counterpart to a gravitational-wave source. Astrophys J Lett 848(2):L30. https://doi.org/10.3847/2041-8213/aa9116. arXiv:1710.05439 ADSCrossRefGoogle Scholar
Pannarale F (2013) Black hole remnant of black hole-neutron star coalescing binaries. Phys Rev D 88(10):104025. https://doi.org/10.1103/PhysRevD.88.104025. arXiv:1208.5869 ADSCrossRefGoogle Scholar
Pannarale F, Berti E, Kyutoku K, Lackey BD, Shibata M (2015) Gravitational-wave cutoff frequencies of tidally disruptive neutron star-black hole binary mergers. Phys Rev D 92:081504. https://doi.org/10.1103/PhysRevD.92.081504. arXiv:1509.06209 ADSCrossRefGoogle Scholar
Perego A, Rosswog S, Cabezón RM, Korobkin O, Käppeli R, Arcones A, Liebendörfer M (2014) Neutrino-driven winds from neutron star merger remnants. Mon Not R Astron Soc 443:3134–3156. https://doi.org/10.1093/mnras/stu1352. arXiv:1405.6730 ADSCrossRefGoogle Scholar
Perego A, Radice D, Bernuzzi S (2017) AT 2017gfo: an anisotropic and three-component kilonova counterpart of GW170817. Astrophys J Lett 850(2):L37. https://doi.org/10.3847/2041-8213/aa9ab9. arXiv:1711.03982 ADSCrossRefGoogle Scholar
Sample records for black hole physics
Black holes and everyday physics
Bekenstein, J.D.
Black holes have piqued much curiosity. But thus far they have been important only in "remote" subjects like astrophysics and quantum gravity. It is shown that the situation can be improved. By a judicious application of black hole physics, one can obtain new results in "everyday physics". For example, black holes yield a quantum universal upper bound on the entropy-to-energy ratio for ordinary thermodynamical systems which was unknown earlier. It can be checked, albeit with much labor, by ordinary statistical methods. Black holes set a limitation on the number of species of elementary particles - quarks, leptons, neutrinos - which may exist. And black holes lead to a fundamental limitation on the rate at which information can be transferred for given message energy by any communication system. (author)
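The universal entropy-to-energy bound referred to here is presumably the Bekenstein bound; for a weakly gravitating system of total energy $E$ that fits inside a sphere of radius $R$, it is usually quoted as

$$ \frac{S}{E} \le \frac{2\pi k_B R}{\hbar c}, $$

a limit that ordinary thermodynamic systems satisfy with room to spare and that is saturated (up to numerical factors) only by black holes.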
Modified dispersion relations and black hole physics
Ling Yi; Li Xiang; Hu Bo
A modified formulation of the energy-momentum relation is proposed in the context of doubly special relativity. We investigate its impact on black hole physics. It turns out that such a modification will give corrections to both the temperature and the entropy of black holes. In particular, this modified dispersion relation also changes the picture of Hawking radiation greatly when the size of black holes approaches the Planck scale. It can prevent black holes from total evaporation, as a result providing a plausible mechanism to treat the remnant of black holes as a candidate for dark matter
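For orientation, the unmodified semiclassical results that such corrections perturb are the Hawking temperature and Bekenstein-Hawking entropy of a Schwarzschild black hole of mass $M$,

$$ T_H = \frac{\hbar c^3}{8\pi G M k_B}, \qquad S_{BH} = \frac{k_B c^3 A}{4 G \hbar}, \qquad A = 16\pi \left(\frac{GM}{c^2}\right)^2 . $$

Quantum-gravity-motivated modifications are often parametrized as corrections to these expressions, e.g. $S = S_{BH} + \alpha \ln S_{BH} + \dots$ with a model-dependent coefficient $\alpha$; in scenarios of the kind described in this record the corrected temperature remains finite as $M$ approaches the Planck mass, leaving a remnant rather than complete evaporation.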
Surface effects in black hole physics
Damour, T.
This contribution reviews briefly the various analogies which have been drawn between black holes and ordinary physical objects. It is shown how, by concentrating on the properties of the surface of a black hole, it is possible to set up a sequence of tight analogies allowing one to conclude that a black hole is, qualitatively and quantitatively, similar to a fluid bubble possessing a negative surface tension and endowed with finite values of the electrical conductivity and of the shear and bulk viscosities. These analogies are valid simultaneously at the levels of electromagnetic, mechanical and thermodynamical laws. Explicit applications of this framework are worked out (eddy currents, tidal drag). The thermostatic equilibrium of a black hole electrically interacting with its surroundings is discussed, as well as the validity of a minimum entropy production principle in black hole physics. (Auth.)
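For reference, the horizon transport coefficients that make these analogies quantitative are usually quoted (viscosities in geometrized units, $G=c=1$) as

$$ \eta_H = \frac{1}{16\pi}, \qquad \zeta_H = -\frac{1}{16\pi}, \qquad R_H = \frac{4\pi}{c} \approx 377\ \Omega, $$

i.e. a positive surface shear viscosity, a negative surface bulk viscosity, and a surface resistivity equal to the impedance of free space.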
Hagedorn temperature and physics of black holes
Zakharov, V.I.; Mertens, Thomas G.; Verschelde, Henri
A mini-review devoted to some implications of the Hagedorn temperature for black hole physics. The existence of a limiting temperature is a generic feature of string models. The Hagedorn temperature was introduced first in the context of hadronic physics. Nowadays, the emphasis is shifted to fundamental strings which might be a necessary ingredient to obtain a consistent theory of black holes. The point is that, in field theory, the local temperature close to the horizon could be arbitrarily high, and this observation is difficult to reconcile with the finiteness of the entropy of black holes. After preliminary remarks, we review our recent attempt to evaluate the entropy of large black holes in terms of fundamental strings. We also speculate on implications for dynamics of large-N_c gauge theories arising within holographic models
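The limiting temperature arises because the density of single-string states grows exponentially with mass; a minimal sketch of the standard argument (schematic normalization only) is

$$ \rho(m) \sim m^{-a}\, e^{m/T_H} \quad\Rightarrow\quad Z(T) \sim \int^{\infty} dm\, \rho(m)\, e^{-m/T}, $$

which converges only for $T < T_H$: the canonical partition function ceases to exist above the Hagedorn temperature, signalling either a limiting temperature or a phase transition.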
Horowitz, Gary T.; Teukolsky, Saul A.
Black holes are among the most intriguing objects in modern physics. Their influence ranges from powering quasars and other active galactic nuclei, to providing key insights into quantum gravity. We review the observational evidence for black holes, and briefly discuss some of their properties. We also describe some recent developments involving cosmic censorship and the statistical origin of black hole entropy.
Physical effects in gravitational field of black holes
Frolov, V.P.
A large number of problems related to peculiarities of physical processes in a strong gravitational field of black holes has been considered. Energy shift and the complete structure of physical fields for charged sources near a black hole have been investigated. Density matrix and generating functional for quantum effects in stationary black holes have been calculated. Contributions of massless and massive fields to vacuum polarization in black holes have been investigated and influence of quantum effects on the global structure of a black hole has been discussed
BOOK REVIEW: Introduction to Black Hole Physics Introduction to Black Hole Physics
Tanaka, Takahiro
Introduction to Black Hole Physics is a large volume (504 pages), and yet despite this it is still really an introductory text. The book gives an introduction to general relativity, but most of the text is dedicated to attracting the reader's attention to the interesting world of black hole physics. In this sense, the book is very distinct from other textbooks on general relativity. We are told that it was based on the lectures given by Professor Frolov, one of the authors, over the last 30 years. One can obtain the basic ideas about black holes, and also the necessary tips to understand general relativity at a very basic level. For example, in the discussion about particle motion in curved space, the authors start with a brief review of analytical mechanics. The book does not require its readers to have a great deal of knowledge in advance. If you are familiar with such a basic subject, you can simply omit that section. The reason why I especially picked up on this topic as an example is that the book devotes a significant number of pages to geodesic motions in black hole spacetime. One of the main motivations to study black holes is related to how they will actually be observed, once we develop the ability to observe them clearly. The book does explain such discoveries as, for instance, how the motion of a particle is related to a beautiful mathematical structure arising from the hidden symmetry of spacetime, which became transparent via the recent progress in the exploration of black holes in higher dimensions; a concise introduction to this latest topic is deferred to Appendix D, so as not to distract the reader with its mathematical complexities. It should also be mentioned that the book is not limited to general relativistic aspects: quantum fields on a black hole background and Hawking radiation are also covered. Also included are current hot topics, for instance the gravitational waves from a system including black holes, whose first direct detection is
Black Holes from Particle Physics Perspective (1/2)
CERN. Geneva
We review the physics of black holes, both large and small, from a particle physicist's perspective, using particle physics tools for describing concepts such as entropy, temperature and quantum information processing. We also discuss the microscopic picture of black hole formation in high-energy particle scattering, potentially relevant for high-energy accelerator experiments, and some differences and similarities with the signatures of other BSM physics.
Black Holes: Physics and Astrophysics - Stellar-mass, supermassive and primordial black holes
Bekenstein, Jacob D.
I present an elementary primer of black hole physics, including its general relativity basis, all peppered with astrophysical illustrations. Following a brief review of the process of stellar collapse to a black hole, I discuss the gravitational redshift, particle trajectories in gravitational fields, the Schwarzschild and Kerr solutions to Einstein's equations, orbits in Schwarzschild and in Kerr geometry, and the dragging of inertial frames. I follow with a brief review of galactic X-ray binar...
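A few of the textbook formulas behind the topics listed in this primer (standard Schwarzschild results, not specific to this article), for a non-rotating black hole of mass $M$:

$$ r_s = \frac{2GM}{c^2}, \qquad 1+z = \left(1-\frac{r_s}{r}\right)^{-1/2}, \qquad r_{\rm photon} = \frac{3GM}{c^2}, \qquad r_{\rm ISCO} = \frac{6GM}{c^2}, $$

giving the horizon radius, the gravitational redshift of a static emitter at radius $r$, the circular photon orbit, and the innermost stable circular orbit, respectively.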
Andreev reflections and the quantum physics of black holes
Manikandan, Sreenath K.; Jordan, Andrew N.
We establish an analogy between superconductor-metal interfaces and the quantum physics of a black hole, using the proximity effect. We show that the metal-superconductor interface can be thought of as an event horizon and Andreev reflection from the interface is analogous to the Hawking radiation in black holes. We describe quantum information transfer in Andreev reflection with a final state projection model similar to the Horowitz-Maldacena model for black hole evaporation. We also propose the Andreev reflection analogue of Hayden and Preskill's description of a black hole final state, where the black hole is described as an information mirror. The analogy between crossed Andreev reflections and Einstein-Rosen bridges is discussed: our proposal gives a precise mechanism for the apparent loss of quantum information in a black hole by the process of nonlocal Andreev reflection, transferring the quantum information through a wormhole and into another universe. Given these established connections, we conjecture that the final quantum state of a black hole is exactly the same as the ground state wave function of the superconductor/superfluid in the Bardeen-Cooper-Schrieffer (BCS) theory of superconductivity; in particular, the infalling matter and the infalling Hawking quanta, described in the Horowitz-Maldacena model, forms a Cooper pairlike singlet state inside the black hole. A black hole evaporating and shrinking in size can be thought of as the analogue of Andreev reflection by a hole where the superconductor loses a Cooper pair. Our model does not suffer from the black hole information problem since Andreev reflection is unitary. We also relate the thermodynamic properties of a black hole to that of a superconductor, and propose an experiment which can demonstrate the negative specific heat feature of black holes in a growing/evaporating condensate.
Artificial black holes on the threshold of new physics
Thaller, M
Theorists believe that it is necessary to study black holes in order to merge the two theories of relativity and quantum physics. The next generation of accelerators, including the LHC currently under construction in Geneva, should be able to produce mini black holes on demand for scientists to study (2 pages).
Feast, M.W.
This article deals with two questions, namely whether it is possible for black holes to exist, and if the answer is yes, whether we have found any yet. In deciding whether black holes can exist or not, the central role played by the force of gravity in shaping our universe is discussed, and in deciding whether we are likely to find black holes in the universe, the author looks at the way stars evolve, as well as at white dwarfs and neutron stars. He also discusses the problem of how to detect a black hole, possible black holes, a southern black hole, massive black holes, as well as why black holes are studied.
Black Holes and Pulsars in the Introductory Physics Course
Orear, Jay; Salpeter, E. E.
Discusses the phenomenon of formation of white dwarfs, neutron stars, and black holes from dying stars for the purpose of providing college teachers with materials usable in the introductory physics course. (CC)
Brügmann, B.; Ghez, A. M.; Greiner, J.
Recent progress in black hole research is illustrated by three examples. We discuss the observational challenges that were met to show that a supermassive black hole exists at the center of our galaxy. Stellar-size black holes have been studied in x-ray binaries and microquasars. Finally, numerical simulations have become possible for the merger of black hole binaries.
Townsend, P. K.
This paper is concerned with several non-quantum aspects of black holes, with emphasis on theoretical and mathematical issues related to numerical modeling of black hole space-times. Part of the material has a review character, but some new results or proposals are also presented. We review the experimental evidence for the existence of black holes. We propose a definition of the black hole region for any theory governed by a symmetric hyperbolic system of equations. Our definition reproduces the usu...
Applications of hidden symmetries to black hole physics
Frolov, Valeri
This work is a brief review of applications of hidden symmetries to black hole physics. Symmetry is one of the most important concepts in science. In physics and mathematics, symmetry allows one to simplify a problem, and often to make it solvable. According to the Noether theorem, symmetries are responsible for conservation laws. Besides evident (explicit) spacetime symmetries, responsible for conservation of energy, momentum, and angular momentum of a system, there also exist what are called hidden symmetries, which are connected with integrals of motion of higher order in momentum. A remarkable fact is that black holes in four and higher dimensions always possess a set ('tower') of explicit and hidden symmetries which make the equations of motion of particles and light completely integrable. The paper gives a general review of the recently obtained results. The main focus is on understanding why black holes have something (symmetry) to hide at all.
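The prototype of such a hidden symmetry is the Carter constant of Kerr spacetime: besides the energy and axial angular momentum associated with the explicit Killing vectors, geodesics with momentum $p^\mu$ conserve a quantity quadratic in momentum built from a rank-2 Killing tensor $K_{\mu\nu}$ satisfying $\nabla_{(\lambda}K_{\mu\nu)}=0$. With one common sign convention,

$$ E = -\,\xi^{(t)}_{\mu} p^{\mu}, \qquad L_z = \xi^{(\phi)}_{\mu} p^{\mu}, \qquad Q = K_{\mu\nu}\, p^{\mu} p^{\nu}, $$

and together with the mass-shell condition $g_{\mu\nu}p^\mu p^\nu = -m^2$ these four integrals render geodesic motion completely integrable.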
Statistical physics of black holes as quantum-mechanical systems
Giddings, Steven B.
Some basic features of black-hole statistical mechanics are investigated, assuming that black holes respect the principles of quantum mechanics. Care is needed in defining an entropy S_bh corresponding to the number of microstates of a black hole, given that the black hole interacts with its surroundings. An open question is then the relationship between this entropy and the Bekenstein-Hawking entropy S_BH. For a wide class of models with interactions needed to ensure unitary quantum evolutio...
Black Holes: Eliminating Information or Illuminating New Physics?
Sumanta Chakraborty
Black holes, initially thought of as very interesting mathematical and geometric solutions of general relativity, have, over time, come up with surprises and challenges for modern physics. In modern times, they have started to test our confidence in the fundamental understanding of nature. The most serious charge against black holes is that they eat up information, never to release and subsequently erase it. This goes absolutely against the sacred principles of all other branches of fundamental science. This realization has shaken the very base of foundational concepts, both in quantum theory and gravity, which we always took for granted. Attempts to get rid of this charge have led us to a crossroads with concepts held dearly in quantum theory. The sphere of the black hole's tussle with quantum theory has readily and steadily grown, from the advent of Hawking radiation some four decades back, into the domain of quantum information theory in modern times, most aptly put recently in the form of the firewall puzzle. Do black holes really indicate something sinister about their existence, or do they really point towards the troubles of ignoring the fundamental issues our modern theories are seemingly plagued with? In this review, we focus on issues pertaining to black hole evaporation, the development of the information loss paradox, its recent formulation, the leading debates and promising directions in the community.
Constraining jet physics in weakly accreting black holes
Markoff, Sera
Outflowing jets are observed in a variety of astronomical objects, from accreting compact objects in X-ray binaries (XRBs) to active galactic nuclei (AGN), as well as at stellar birth and death. Yet we still do not know exactly what they are comprised of, why and how they form, or their exact relationship with the accretion flow. In this talk I will focus on jets in black hole systems, which provide the ideal test population for studying the relationship between inflow and outflow over an extreme range in mass and accretion rate. I will present several recent results from coordinated multi-wavelength studies of low-luminosity sources. These results not only support similar trends in weakly accreting black hole behavior across the mass scale, but also suggest that the same underlying physical model can explain their broadband spectra. I will discuss how comparisons between small- and large-scale systems are revealing new information about the regions nearest the black hole, providing clues about the creation of these weakest of jets. Furthermore, comparisons between our Galactic center nucleus Sgr A* and other sources at slightly higher accretion rates can elucidate the processes which drive central activity, and pave the way for new tests with upcoming instruments.
Black hole physics. Black hole lightning due to particle acceleration at subhorizon scales.
Aleksić, J; Ansoldi, S; Antonelli, L A; Antoranz, P; Babic, A; Bangale, P; Barrio, J A; Becerra González, J; Bednarek, W; Bernardini, E; Biasuzzi, B; Biland, A; Blanch, O; Bonnefoy, S; Bonnoli, G; Borracci, F; Bretz, T; Carmona, E; Carosi, A; Colin, P; Colombo, E; Contreras, J L; Cortina, J; Covino, S; Da Vela, P; Dazzi, F; De Angelis, A; De Caneva, G; De Lotto, B; de Oña Wilhelmi, E; Delgado Mendez, C; Dominis Prester, D; Dorner, D; Doro, M; Einecke, S; Eisenacher, D; Elsaesser, D; Fonseca, M V; Font, L; Frantzen, K; Fruck, C; Galindo, D; García López, R J; Garczarczyk, M; Garrido Terrats, D; Gaug, M; Godinović, N; González Muñoz, A; Gozzini, S R; Hadasch, D; Hanabata, Y; Hayashida, M; Herrera, J; Hildebrand, D; Hose, J; Hrupec, D; Idec, W; Kadenius, V; Kellermann, H; Kodani, K; Konno, Y; Krause, J; Kubo, H; Kushida, J; La Barbera, A; Lelas, D; Lewandowska, N; Lindfors, E; Lombardi, S; Longo, F; López, M; López-Coto, R; López-Oramas, A; Lorenz, E; Lozano, I; Makariev, M; Mallot, K; Maneva, G; Mankuzhiyil, N; Mannheim, K; Maraschi, L; Marcote, B; Mariotti, M; Martínez, M; Mazin, D; Menzel, U; Miranda, J M; Mirzoyan, R; Moralejo, A; Munar-Adrover, P; Nakajima, D; Niedzwiecki, A; Nilsson, K; Nishijima, K; Noda, K; Orito, R; Overkemping, A; Paiano, S; Palatiello, M; Paneque, D; Paoletti, R; Paredes, J M; Paredes-Fortuny, X; Persic, M; Poutanen, J; Prada Moroni, P G; Prandini, E; Puljak, I; Reinthal, R; Rhode, W; Ribó, M; Rico, J; Rodriguez Garcia, J; Rügamer, S; Saito, T; Saito, K; Satalecka, K; Scalzotto, V; Scapin, V; Schultz, C; Schweizer, T; Shore, S N; Sillanpää, A; Sitarek, J; Snidaric, I; Sobczynska, D; Spanier, F; Stamatescu, V; Stamerra, A; Steinbring, T; Storz, J; Strzys, M; Takalo, L; Takami, H; Tavecchio, F; Temnikov, P; Terzić, T; Tescaro, D; Teshima, M; Thaele, J; Tibolla, O; Torres, D F; Toyama, T; Treves, A; Uellenbeck, M; Vogler, P; Zanin, R; Kadler, M; Schulz, R; Ros, E; Bach, U; Krauß, F; Wilms, J
Supermassive black holes with masses of millions to billions of solar masses are commonly found in the centers of galaxies. Astronomers seek to image jet formation using radio interferometry but still suffer from insufficient angular resolution. An alternative method to resolve small structures is to measure the time variability of their emission. Here we report on gamma-ray observations of the radio galaxy IC 310 obtained with the MAGIC (Major Atmospheric Gamma-ray Imaging Cherenkov) telescopes, revealing variability with doubling time scales faster than 4.8 min. Causality constrains the size of the emission region to be smaller than 20% of the gravitational radius of its central black hole. We suggest that the emission is associated with pulsar-like particle acceleration by the electric field across a magnetospheric gap at the base of the radio jet. Copyright © 2014, American Association for the Advancement of Science.
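A back-of-the-envelope version of the causality argument quoted above, as a minimal sketch; the black hole mass of IC 310, here taken to be roughly 3e8 solar masses, is an assumed illustrative value rather than a number given in this record, and Doppler and redshift factors are ignored.

```python
# Causality size limit from variability: R < c * dt (Doppler/redshift factors ignored).
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_sun = 1.989e30     # kg

dt = 4.8 * 60.0                 # doubling timescale of 4.8 min, in seconds
M_bh = 3e8 * M_sun              # assumed IC 310 black hole mass (illustrative)

R_max = c * dt                  # causal size limit of the emission region
r_g = G * M_bh / c**2           # gravitational radius GM/c^2

print(f"causal size limit   : {R_max:.2e} m")
print(f"gravitational radius: {r_g:.2e} m")
print(f"ratio R_max / r_g   : {R_max / r_g:.2f}")
```

With these illustrative inputs the ratio comes out close to 0.2, consistent with the "20% of the gravitational radius" statement in the abstract.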
Nonextremal stringy black hole
Suzuki, K.
We construct a four-dimensional BPS saturated heterotic string solution from the Taub-NUT solution. It is a nonextremal black hole solution since its Euler number is nonzero. We evaluate its black hole entropy semiclassically. We discuss the relation between the black hole entropy and the degeneracy of string states. The entropy of our string solution can be understood as the microscopic entropy which counts the elementary string states without any complications. copyright 1997 The American Physical Society
Naked black holes
Horowitz, G.T.; Ross, S.F.
It is shown that there are large static black holes for which all curvature invariants are small near the event horizon, yet any object which falls in experiences enormous tidal forces outside the horizon. These black holes are charged and near extremality, and exist in a wide class of theories including string theory. The implications for cosmic censorship and the black hole information puzzle are discussed. copyright 1997 The American Physical Society
Black hole astrophysics
Blandford, R.D.; Thorne, K.S.
Following an introductory section, the subject is discussed under the headings: on the character of research in black hole astrophysics; isolated holes produced by collapse of normal stars; black holes in binary systems; black holes in globular clusters; black holes in quasars and active galactic nuclei; primordial black holes; concluding remarks on the present state of research in black hole astrophysics. (U.K.)
Black Holes in the Cosmos, the Lab, and in Fundamental Physics (3/3)
Black holes present the extreme limits of physics. They are ubiquitous in the cosmos, and in some extra-dimensional scenarios they could be produced at colliders. They have also yielded a puzzle that challenges the foundations of physics. These talks will begin with an overview of the basics of black hole physics, and then briefly summarize some of the exciting developments with cosmic black holes. They will then turn to properties of quantum black holes, and the question of black hole production in high energy collisions, perhaps beginning with the LHC. I will then overview the apparent paradox emerging from Hawking's discovery of black hole evaporation, and what it could be teaching us about the foundations of quantum mechanics and gravity.
Astrophysical black holes
Gorini, Vittorio; Moschella, Ugo; Treves, Aldo; Colpi, Monica
Based on graduate school lectures in contemporary relativity and gravitational physics, this book gives a complete and unified picture of the present status of theoretical and observational properties of astrophysical black holes. The chapters are written by internationally recognized specialists. They cover general theoretical aspects of black hole astrophysics, the theory of accretion and ejection of gas and jets, stellar-sized black holes observed in the Milky Way, the formation and evolution of supermassive black holes in galactic centers and quasars as well as their influence on the dynamics in galactic nuclei. The final chapter addresses analytical relativity of black holes supporting theoretical understanding of the coalescence of black holes as well as being of great relevance in identifying gravitational wave signals. With its introductory chapters the book is aimed at advanced graduate and post-graduate students, but it will also be useful for specialists.
Topics in black-hole physics: geometric constraints on noncollapsing, gravitating systems, and tidal distortions of a Schwarzschild black hole
Redmount, I.H.
This dissertation consists of two studies on the general-relativistic theory of black holes. The first work concerns the fundamental issue of black-hole formation: in it geometric constraints are sought on gravitating matter systems, in the special case of axial symmetry, which determine whether or not those systems undergo gravitational collapse to form black holes. The second project deals with mechanical behavior of a black hole: specifically, the tidal deformation of a static black hole is studied by the gravitational fields of external bodies
Physics and initial data for multiple black hole spacetimes
Bonning, Erin; Marronetti, Pedro; Neilsen, David; Matzner, Richard
An orbiting black hole binary will generate strong gravitational radiation signatures, making these binaries important candidates for detection in gravitational wave observatories. The gravitational radiation is characterized by the orbital parameters, including the frequency and separation at the innermost stable circular orbit (ISCO). One approach to estimating these parameters relies on a sequence of initial data slices that attempt to capture the physics of the inspiral. Using calculations of the binding energy, several authors have estimated the ISCO parameters using initial data constructed with various algorithms. In this paper we examine this problem using conformally Kerr-Schild initial data. We present convergence results for our initial data solutions, and give data from numerical solutions of the constraint equations representing a range of physical configurations. In a first attempt to understand the physical content of the initial data, we find that the Newtonian binding energy is contained in the superposed Kerr-Schild background before the constraints are solved. We examine some deficiencies with the initial data approach to orbiting binaries, especially touching on the effects of prior motion and spin-orbital coupling of the angular momenta. Making rough estimates of these effects, we find that they are not insignificant compared to the binding energy, leaving some doubt of the utility of using initial data to predict ISCO parameters. In computations of specific initial-data configurations we find spin-specific effects that are consistent with analytical estimates
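As a rough illustration of the kind of diagnostic mentioned above, here is a minimal, purely Newtonian sketch of the binding energy of a circular binary; it ignores spin, prior motion, and all the relativistic effects discussed in the abstract, and the helper name and numbers are made up for illustration.

```python
# Newtonian binding energy of a black-hole binary, in units of the total rest-mass energy.
G = 6.674e-11       # m^3 kg^-1 s^-2
c = 2.998e8         # m/s
M_sun = 1.989e30    # kg

def binding_energy_fraction(m1_solar, m2_solar, separation_in_GM_over_c2):
    """E_b / (M c^2) for point masses on a circular orbit at d = x * G*M_total/c^2."""
    m1, m2 = m1_solar * M_sun, m2_solar * M_sun
    M = m1 + m2
    d = separation_in_GM_over_c2 * G * M / c**2
    E_b = -G * m1 * m2 / (2.0 * d)        # Newtonian binding energy of a circular orbit
    return E_b / (M * c**2)

# Equal-mass binary at a separation of 10 GM/c^2 (illustrative numbers only)
print(binding_energy_fraction(10.0, 10.0, 10.0))   # ~ -0.0125
```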
Physics of Rotating and Expanding Black Hole Universe
Seshavatharam U. V. S.
Throughout its journey, the universe follows strong gravity. By unifying the general theory of relativity and quantum mechanics, a simple derivation is given for a rotating black hole's temperature. It is shown that when the rotation speed approaches light speed, the temperature approaches Hawking's black hole temperature. Applying this idea to the cosmic black hole, it is noticed that there is "no cosmic temperature" if there is "no cosmic rotation". Starting from the Planck scale, it is assumed that the universe is a rotating and expanding black hole. Another key assumption is that at any time the cosmic black hole rotates with light speed. For this cosmic sphere as a whole, while in light-speed rotation, the "rate of decrease" in temperature or "rate of increase" in cosmic redshift is a measure of the "rate of cosmic expansion". Since 1992, measured CMBR data indicate that the present CMB is the same in all directions, equal to $2.726$ K, smooth to 1 part in 100,000, and there is no continuous decrease! This directly indicates that at present the rate of decrease in temperature is practically zero and the rate of expansion is practically zero. The universe is isotropic and hence static, and is rotating as a rigid sphere with light speed. At present, galaxies are revolving with speeds proportional to their distances from the cosmic axis of rotation. If the present CMBR temperature is $2.726$ K, the present value of the obtained angular velocity is $2.17 \times 10^{-18}$ rad/sec $\cong 67$ km/sec per Mpc. The present cosmic mass density and cosmic time are fitted with a $\ln(\text{volume ratio})$ parameter. Finally, it can be suggested that dark matter and dark energy are ad hoc and misleading concepts.
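The unit conversion behind the quoted angular velocity can be checked directly; a minimal sketch, where the 2.17e-18 rad/s value is taken from the abstract itself and the megaparsec length is a standard constant.

```python
# Convert an angular velocity in rad/s to the Hubble-like units km/s/Mpc.
Mpc_in_km = 3.0857e19        # kilometres per megaparsec

omega = 2.17e-18             # rad/s, value quoted in the abstract
H_like = omega * Mpc_in_km   # (km/s) per Mpc, since radians are dimensionless

print(f"{H_like:.1f} km/s/Mpc")   # ~ 67 km/s/Mpc
```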
Quantum Mechanics of Black Holes
These lectures give a pedagogical review of dilaton gravity, Hawking radiation, the black hole information problem, and black hole pair creation. (Lectures presented at the 1994 Trieste Summer School in High Energy Physics and Cosmology)
Academic Training Lectures | Black Holes from a Particle Physics Perspective | 18-19 November
Black Holes from a Particle Physics Perspective by Georgi Dvali Tuesday 18 and Wednesday 19 November 2014 from 11 am to 12 noon at CERN ( 40-S2-A01 - Salle Anderson ) Description: We will review the physics of black holes, both large and small, from a particle physicist's perspective, using particle physics tools for describing concepts such as entropy, temperature and quantum information processing. We will also discuss microscopic pictures of black hole formation in high energy particle scattering, potentially relevant for high-energy accelerator experiments, and some differences and similarities with the signatures of other BSM physics. See the Indico page here.
Colliding black hole solution
Ahmed, Mainuddin
A new solution of the Einstein equation in general relativity is found. This solution solves an outstanding problem of thermodynamics and black hole physics. This work also appears to conclude the interpretation of NUT spacetime. (author)
Black-hole thermodynamics
Including black holes in the scheme of thermodynamics has disclosed a deep-seated connection between gravitation, heat and the quantum that may lead us to a synthesis of the corresponding branches of physics.
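The connection alluded to here is usually summarized by the first law of black-hole mechanics and its thermodynamic reading; in geometrized units ($G=c=1$), for a charged, rotating hole,

$$ dM = \frac{\kappa}{8\pi}\, dA + \Omega_H\, dJ + \Phi_H\, dQ, \qquad T_H = \frac{\hbar\kappa}{2\pi k_B}, \qquad S_{BH} = \frac{k_B A}{4\hbar}, $$

with $\kappa$ the surface gravity, $\Omega_H$ the horizon angular velocity and $\Phi_H$ the electrostatic potential at the horizon.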
White dwarfs - black holes
Sexl, R.; Sexl, H.
The physical arguments and problems of relativistic astrophysics are presented in a correct way, but without any higher mathematics. The book is addressed to teachers, experimental physicists, and others with a basic knowledge covering an introductory lecture in physics. The issues dealt with are: fundamentals of general relativity, classical tests of general relativity, curved space-time, stars and planets, pulsars, gravitational collapse and black holes, the search for black holes, gravitational waves, cosmology, cosmogony, and the early universe. (BJ/AK)
The black hole symphony: probing new physics using gravitational waves.
Gair, Jonathan R
The next decade will very likely see the birth of a new field of astronomy as we become able to directly detect gravitational waves (GWs) for the first time. The existence of GWs is one of the key predictions of Einstein's theory of general relativity, but they have eluded direct detection for the last century. This will change thanks to a new generation of laser interferometers that are already in operation or which are planned for the near future. GW observations will allow us to probe some of the most exotic and energetic events in the Universe, the mergers of black holes. We will obtain information about the systems to a precision unprecedented in astronomy, and this will revolutionize our understanding of compact astrophysical systems. Moreover, if any of the assumptions of relativity theory are incorrect, this will lead to subtle, but potentially detectable, differences in the emitted GWs. Our observations will thus provide very precise verifications of the theory in an as yet untested regime. In this paper, I will discuss what GW observations could tell us about known and (potentially) unknown physics.
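One concrete example of the precision referred to above: the leading-order gravitational-wave phase evolution of an inspiral pins down the "chirp mass" combination of the two component masses. A minimal sketch with illustrative masses only:

```python
# Chirp mass of a compact binary, the combination best measured from the
# leading-order gravitational-wave phase evolution.
def chirp_mass(m1, m2):
    """Chirp mass M_c = (m1*m2)**(3/5) / (m1+m2)**(1/5), in the same units as m1, m2."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

# Two illustrative black holes of 30 and 35 solar masses
print(f"{chirp_mass(30.0, 35.0):.1f} solar masses")   # ~ 28.2
```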
Schwarzschild black hole in the background of the Einstein universe: some physical effects
Ramachandra, B S; Vishveshwara, C V
A prototype of an asymptotically non-flat black hole spacetime is that of a Schwarzschild black hole in the background of the Einstein universe, which is a special case of the representation of a black hole in a cosmological background given by Vaidya. Recently, this spacetime has been studied in detail by Nayak et al. They constructed a composite spacetime called the Vaidya-Einstein-Schwarzschild (VES) spacetime. We investigate some of the physical effects inherent to this spacetime. We carry out a background-black hole decomposition of the spacetime in order to separate out the effects due to the background spacetime and the black hole. The physical effects we study include the classical tests - the gravitational redshift, perihelion precession and light bending - and circular geodesics. A detailed classification of geodesics, in general, is also given
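In the limit where the black hole term dominates over the cosmological background, the classical tests listed above reduce to the familiar Schwarzschild formulas; a minimal sketch evaluating them for the Sun and Mercury, with solar-system numbers used purely for illustration and not taken from this record:

```python
import math

# Classical general-relativistic tests around a spherical mass M (Schwarzschild limit).
G = 6.674e-11          # m^3 kg^-1 s^-2
c = 2.998e8            # m/s
M_sun = 1.989e30       # kg
R_sun = 6.96e8         # m

# Light bending for a ray grazing the solar limb: delta = 4*G*M / (c^2 * b)
bending_arcsec = 4 * G * M_sun / (c**2 * R_sun) * 206265.0

# Perihelion precession per orbit: dphi = 6*pi*G*M / (c^2 * a * (1 - e^2))
a_mercury = 5.79e10    # m, semi-major axis of Mercury
e_mercury = 0.2056     # orbital eccentricity of Mercury
precession_rad = 6 * math.pi * G * M_sun / (c**2 * a_mercury * (1 - e_mercury**2))
# convert to the traditional arcseconds per century (Mercury completes ~415 orbits/century)
precession_arcsec_century = precession_rad * 206265.0 * 415.0

print(f"light bending at the limb: {bending_arcsec:.2f} arcsec")                      # ~ 1.75
print(f"perihelion precession    : {precession_arcsec_century:.0f} arcsec/century")   # ~ 43
```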
Black Hole Binaries: The Journey from Astrophysics to Physics
McClintock, Jeffrey E.
This paper is based on a talk presented at the 208th Meeting of the American Astronomical Society in the session on Short-Period Binary Stars. The talk (and this paper in turn) are based on a parent paper, which is a comprehensive review by Remillard and McClintock (2006; hereafter RM06) on the X-ray properties of binary stars that contain a stellar black-hole primary. We refer to these systems as black hole binaries. In this present paper, which follows closely the content of the talk, we give sketches of some of the main topics covered in RM06. For a detailed account of the topics discussed herein and a full list of references (which are provided only sketchily below), see RM06 and also a second review paper by McClintock & Remillard (2006; hereafter MR06). There is one subject that is treated in more detail here than in the two review papers just cited, namely, the measurement of black hole spin; on this topic, see McClintock et al. (2006) for further details and references.
Redundant and physical black hole parameters: Is there an independent physical dilaton charge?
Hajian, K.; Sheikh-Jabbari, M.M.
Black holes, as solutions to gravity theories, are generically identified by a set of parameters. Some of these parameters are associated with black hole physical conserved charges, like ADM charges. There can also be some "redundant parameters." We propose necessary conditions for a parameter to be physical. The conditions are essentially integrability and non-triviality of the charge variations arising from "parametric variations," variations of the solution with respect to the chosen parameters. In addition, we prove that variations of the redundant parameters which do not meet our criteria do not appear in the first law of thermodynamics. As an interesting application, we show that dilaton moduli are redundant parameters for black hole solutions to Einstein–Maxwell–(Axion)–Dilaton theories, because variations in dilaton moduli would render entropy, mass, electric charges or angular momenta non-integrable. Our results are in contrast with the modification of the first law due to scalar charges suggested in the Gibbons–Kallosh–Kol paper [1] and its follow-ups. We also briefly discuss implications of our results for the attractor behavior of extremal black holes.
Black holes. Chapter 6
Penrose, R.
Conditions for the formation of a black hole are considered, as are the properties of black holes. The possibility of Cygnus X-1 being a black hole is discussed. Einstein's theory of general relativity is discussed in relation to the formation of black holes. (U.K.)
Black holes go supersonic
Leonhardt, Ulf [School of Physics and Astronomy, University of St. Andrews (United Kingdom)
In modern physics, the unification of gravity and quantum mechanics remains a mystery. Gravity rules the macroscopic world of planets, stars and galaxies, while quantum mechanics governs the micro-cosmos of atoms, light quanta and elementary particles. However, cosmologists believe that these two disparate worlds may meet at the edges of black holes. Now Luis Garay, James Anglin, Ignacio Cirac and Peter Zoller at the University of Innsbruck in Austria have proposed a realistic way to make an artificial 'sonic' black hole in a tabletop experiment (L J Garay et al. 2000 Phys. Rev. Lett. 85 4643). In the February issue of Physics World, Ulf Leonhardt of the School of Physics and Astronomy, University of St. Andrews, UK, explains how the simulated black holes work. (U.K.)
Search for black holes
Cherepashchuk, Anatolii M
Methods and results of searching for stellar mass black holes in binary systems and for supermassive black holes in galactic nuclei of different types are described. As of now (June 2002), a total of 100 black hole candidates are known. All the necessary conditions Einstein's General Relativity imposes on the observational properties of black holes are satisfied for candidate objects available, thus further assuring the existence of black holes in the Universe. Prospects for obtaining sufficient criteria for reliably distinguishing candidate black holes from real black holes are discussed. (reviews of topical problems)
Black holes and quantum processes in them
The latest achievements in the physics of black holes are reviewed. The problem of quantum particle production in the strong gravitational field of black holes is considered. A parallel with thermodynamics, discovered during the investigation of interactions between black holes and between black holes and the surrounding medium, is also drawn. The gravitational field of rotating black holes is considered. Some cosmological aspects of the evaporation of small black holes are discussed, as well as possibilities to observe them.
Black holes and beyond
Belief in the existence of black holes is the ultimate act of faith for a physicist. First suggested by the English clergyman John Michell in the year 1784, the gravitational pull of a black hole is so strong that nothing - not even light - can escape. Gravity might be the weakest of the fundamental forces but black-hole physics is not for the faint-hearted. Black holes present obvious problems for would-be observers because they cannot, by definition, be seen with conventional telescopes - although before the end of the decade gravitational-wave detectors should be able to study collisions between black holes. Until then astronomers can only infer the existence of a black hole from its gravitational influence on other matter, or from the X-rays emitted by gas and dust as they are dragged into the black hole. However, once this material passes through the 'event horizon' that surrounds the black hole, we will never see it again - not even with X-ray specs. Despite these observational problems, most physicists and astronomers believe that black holes do exist. Small black holes a few kilometres across are thought to form when stars weighing more than about two solar masses collapse under the weight of their own gravity, while supermassive black holes weighing millions of solar masses appear to be present at the centre of most galaxies. Moreover, some brave physicists have proposed ways to make black holes - or at least event horizons - in the laboratory. The basic idea behind these 'artificial black holes' is not to compress a large amount of mass into a small volume, but to reduce the speed of light in a moving medium to less than the speed of the medium and so create an event horizon. The parallels with real black holes are not exact but the experiments could shed new light on a variety of phenomena. The first challenge, however, is to get money for the research. One year on from a high-profile meeting on artificial black holes in London, for
A Dancing Black Hole
Shoemaker, Deirdre; Smith, Kenneth; Schnetter, Erik; Fiske, David; Laguna, Pablo; Pullin, Jorge
Recently, stationary black holes have been successfully simulated for up to times of approximately 600-1000M, where M is the mass of the black hole. Considering that the expected burst of gravitational radiation from a binary black hole merger would last approximately 200-500M, black hole codes are approaching the point where simulations of mergers may be feasible. We will present two types of simulations of single black holes obtained with a code based on the Baumgarte-Shapiro-Shibata-Nakamura formulation of the Einstein evolution equations. One type of simulations addresses the stability properties of stationary black hole evolutions. The second type of simulations demonstrates the ability of our code to move a black hole through the computational domain. This is accomplished by shifting the stationary black hole solution to a coordinate system in which the location of the black hole is time dependent.
Modeling black hole evaporation
Fabbri, Alessandro
The scope of this book is two-fold: the first part gives a fully detailed and pedagogical presentation of the Hawking effect and its physical implications, and the second discusses the backreaction problem, especially in connection with exactly solvable semiclassical models that describe analytically the black hole evaporation process. The book aims to establish a link between the general relativistic viewpoint on black hole evaporation and the new CFT-type approaches to the subject. The detailed discussion on backreaction effects is also extremely valuable.
Scattering from black holes
Futterman, J.A.H.; Handler, F.A.; Matzner, R.A.
This book provides a comprehensive treatment of the propagation of waves in the presence of black holes. While emphasizing intuitive physical thinking in their treatment of the techniques of analysis of scattering, the authors also include chapters on the rigorous mathematical development of the subject. Introducing the concepts of scattering by considering the simplest, scalar wave case of scattering by a spherical (Schwarzschild) black hole, the book then develops the formalism of spin weighted spheroidal harmonics and of plane wave representations for neutrino, electromagnetic, and gravitational scattering. Details and results of numerical computations are given. The techniques involved have important applications (references are given) in acoustical and radar imaging.
Black Hole Paradoxes
Joshi, Pankaj S.; Narayan, Ramesh
We propose here that the well-known black hole paradoxes such as the information loss and teleological nature of the event horizon are restricted to a particular idealized case, which is the homogeneous dust collapse model. In this case, the event horizon, which defines the boundary of the black hole, forms initially, and the singularity in the interior of the black hole at a later time. We show that, in contrast, gravitational collapse from physically more realistic initial conditions typically leads to the scenario in which the event horizon and space-time singularity form simultaneously. We point out that this apparently simple modification can mitigate the causality and teleological paradoxes, and also lends support to two recently suggested solutions to the information paradox, namely, the 'firewall' and 'classical chaos' proposals. (paper)
Black hole critical phenomena without black holes
For large values of Ф black holes do form, and for small values the scalar field ... Configurations on the near side of the ridge ultimately evolve to form black holes, while those ... (Figure caption fragment: the inset shows a bird's eye view looking down on the saddle point.)
Black hole hair removal
Banerjee, Nabamita; Mandal, Ipsita; Sen, Ashoke
Macroscopic entropy of an extremal black hole is expected to be determined completely by its near horizon geometry. Thus two black holes with identical near horizon geometries should have identical macroscopic entropy, and the expected equality between macroscopic and microscopic entropies will then imply that they have identical degeneracies of microstates. An apparent counterexample is provided by the 4D-5D lift relating BMPV black hole to a four dimensional black hole. The two black holes have identical near horizon geometries but different microscopic spectrum. We suggest that this discrepancy can be accounted for by black hole hair - degrees of freedom living outside the horizon and contributing to the degeneracies. We identify these degrees of freedom for both the four and the five dimensional black holes and show that after their contributions are removed from the microscopic degeneracies of the respective systems, the result for the four and five dimensional black holes match exactly.
A note on physical mass and the thermodynamics of AdS-Kerr black holes
McInnes, Brett [Department of Mathematics, National University of Singapore, 10, Lower Kent Ridge Road, 119076 (Singapore); Ong, Yen Chin, E-mail: [email protected], E-mail: [email protected] [Nordic Institute for Theoretical Physics, KTH Royal Institute of Technology Stockholm University, Roslagstullsbacken 23, SE-106 91 Stockholm (Sweden)
As with any black hole, asymptotically anti-de Sitter Kerr black holes are described by a small number of parameters, including a "mass parameter" M that reduces to the AdS-Schwarzschild mass in the limit of vanishing angular momentum. In sharp contrast to the asymptotically flat case, the horizon area of such a black hole increases with the angular momentum parameter a if one fixes M; this appears to mean that the Penrose process in this case would violate the Second Law of black hole thermodynamics. We show that the correct procedure is to fix not M but rather the "physical" mass E = M/(1 - a^2/L^2)^2; this is motivated by the First Law. For then the horizon area decreases with a. We recommend that E always be used as the mass in physical processes: for example, in attempts to "over-spin" AdS-Kerr black holes.
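An illustration of the fixed-M versus fixed-E comparison made above (my own sketch, not taken from the paper): assuming the standard Kerr-AdS relations in units G = c = 1, with horizon function Delta_r = (r^2 + a^2)(1 + r^2/L^2) - 2 M r and horizon area A = 4 pi (r_+^2 + a^2)/(1 - a^2/L^2), the snippet below compares how the area responds to increasing a when the mass parameter M is held fixed and when the physical mass E = M/(1 - a^2/L^2)^2 is held fixed.

import numpy as np

def outer_horizon(M, a, L):
    # Largest real root of Delta_r = r^4/L^2 + (1 + a^2/L^2) r^2 - 2 M r + a^2 = 0
    coeffs = [1.0 / L**2, 0.0, 1.0 + a**2 / L**2, -2.0 * M, a**2]
    roots = np.roots(coeffs)
    return roots[np.abs(roots.imag) < 1e-8].real.max()

def horizon_area(M, a, L):
    # Kerr-AdS horizon area (geometric units), A = 4 pi (r_+^2 + a^2) / Xi, Xi = 1 - a^2/L^2
    xi = 1.0 - a**2 / L**2
    rp = outer_horizon(M, a, L)
    return 4.0 * np.pi * (rp**2 + a**2) / xi

L = 1.0
for a in (0.0, 0.2, 0.4):
    A_fixed_M = horizon_area(1.0, a, L)          # hold the mass parameter M = 1
    M_from_E = 1.0 * (1.0 - a**2 / L**2)**2      # hold the physical mass E = M / Xi^2 = 1 instead
    A_fixed_E = horizon_area(M_from_E, a, L)
    print(f"a = {a:.1f}:  A (M fixed) = {A_fixed_M:.4f}   A (E fixed) = {A_fixed_E:.4f}")

With these conventions the area grows with a at fixed M but shrinks at fixed E, which is the behaviour invoked above to rescue the Second Law.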
Noncommutative black holes
Lopez-Dominguez, J C [Instituto de Fisica de la Universidad de Guanajuato, PO Box E-143, 37150 Leon, Gto. (Mexico); Obregon, O [Instituto de Fisica de la Universidad de Guanajuato, PO Box E-143, 37150 Leon, Gto. (Mexico); Ramirez, C [Facultad de Ciencias Fisico Matematicas, Universidad Autonoma de Puebla, PO Box 1364, 72000 Puebla (Mexico); Sabido, M [Instituto de Fisica de la Universidad de Guanajuato, PO Box E-143, 37150 Leon, Gto. (Mexico)
We study noncommutative black holes, by using a diffeomorphism between the Schwarzschild black hole and the Kantowski-Sachs cosmological model, which is generalized to noncommutative minisuperspace. Through the use of the Feynman-Hibbs procedure we are able to study the thermodynamics of the black hole, in particular, we calculate Hawking's temperature and entropy for the 'noncommutative' Schwarzschild black hole.
Black holes without firewalls
Larjo, Klaus; Lowe, David A.; Thorlacius, Larus
The postulates of black hole complementarity do not imply a firewall for infalling observers at a black hole horizon. The dynamics of the stretched horizon, that scrambles and reemits information, determines whether infalling observers experience anything out of the ordinary when entering a large black hole. In particular, there is no firewall if the stretched horizon degrees of freedom retain information for a time of the order of the black hole scrambling time.
Black holes are hot
Gibbons, G.
Recent work, which has been investigating the use of the concept of entropy with respect to gravitating systems, black holes and the universe as a whole, is discussed. The resulting theory of black holes assigns a finite temperature to them -about 10 -7 K for ordinary black holes of stellar mass -which is in complete agreement with thermodynamical concepts. It is also shown that black holes must continuously emit particles just like ordinary bodies which have a certain temperature. (U.K.)
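The figure of roughly 10^-7 K quoted above follows from Hawking's formula T = hbar c^3 / (8 pi G M k_B); the short sketch below (added here for illustration, not part of the original abstract) evaluates it for stellar masses.

# Hawking temperature T = hbar c^3 / (8 pi G M k_B), evaluated for a few masses.
import math

hbar = 1.054571817e-34   # J s
c    = 2.99792458e8      # m/s
G    = 6.67430e-11       # m^3 kg^-1 s^-2
k_B  = 1.380649e-23      # J/K
M_SUN = 1.98892e30       # kg

def hawking_temperature(M):
    return hbar * c**3 / (8.0 * math.pi * G * M * k_B)

for M in (1.0 * M_SUN, 10.0 * M_SUN):
    print(f"M = {M / M_SUN:4.1f} M_sun  ->  T = {hawking_temperature(M):.2e} K")

For one to ten solar masses this gives a few times 10^-8 to 10^-9 K, i.e. of the order quoted in the abstract.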
Black Hole's 1/N Hair
Dvali, Gia
According to the standard view classically black holes carry no hair, whereas quantum hair is at best exponentially weak. We show that suppression of hair is an artifact of the semi-classical treatment and that in the quantum picture hair appears as an inverse mass-square effect. Such hair is predicted in the microscopic quantum description in which a black hole represents a self-sustained leaky Bose-condensate of N soft gravitons. In this picture the Hawking radiation is the quantum depletion of the condensate. Within this picture we show that quantum black hole physics is fully compatible with continuous global symmetries and that global hair appears with the strength B/N, where B is the global charge swallowed by the black hole. For large charge this hair has dramatic effect on black hole dynamics. Our findings can have interesting astrophysical consequences, such as existence of black holes with large detectable baryonic and leptonic numbers.
Statistical Hair on Black Holes
Strominger, A.
The Bekenstein-Hawking entropy for certain BPS-saturated black holes in string theory has recently been derived by counting internal black hole microstates at weak coupling. We argue that the black hole microstate can be measured by interference experiments even in the strong coupling region where there is clearly an event horizon. Extracting information which is naively behind the event horizon is possible due to the existence of statistical quantum hair carried by the black hole. This quantum hair arises from the arbitrarily large number of discrete gauge symmetries present in string theory. copyright 1996 The American Physical Society
Monopole Black Hole Skyrmions
Moss, Ian G; Shiiki, N; Winstanley, E
Charged black hole solutions with pion hair are discussed. These can be used to study monopole black hole catalysis of proton decay. There also exist multi-black hole skyrmion solutions with BPS monopole behaviour.
What is black hole?
What is a black hole? A possible end phase of a star: a star is a massive, luminous ball of plasma sustained by continuous nuclear burning. When a star exhausts its nuclear fuel, it ends as a white dwarf, a neutron star, or a black hole. A black hole's gravitational field is so powerful that even ...
Beyond the black hole
Boslough, J.
This book is about the life and work of Stephen Hawking. It traces the development of his theories about the universe and particularly black holes, in a biographical context. Hawking's lecture 'Is the end in sight for theoretical physics' is presented as an appendix. In this, he discusses the possibility of achieving a complete, consistent and unified theory of the physical interactions which would describe all possible observations. (U.K.)
Caged black holes: Black holes in compactified spacetimes. I. Theory
Kol, Barak; Sorkin, Evgeny; Piran, Tsvi
In backgrounds with compact dimensions there may exist several phases of black objects including a black hole and a black string. The phase transition between them raises questions and touches on fundamental issues such as topology change, uniqueness, and cosmic censorship. No analytic solution is known for the black hole, and moreover one can expect approximate solutions only for very small black holes, while phase transition physics happens when the black hole is large. Hence we turn to numerical solutions. Here some theoretical background to the numerical analysis is given, while the results will appear in a subsequent paper. The goals for a numerical analysis are set. The scalar charge and tension along the compact dimension are defined and used as improved order parameters which put both the black hole and the black string at finite values on the phase diagram. The predictions for small black holes are presented. The differential and the integrated forms of the first law are derived, and the latter (Smarr's formula) can be used to estimate the 'overall numerical error'. Field asymptotics and expressions for physical quantities in terms of the numerical values are supplied. The techniques include the 'method of equivalent charges', free energy, dimensional reduction, and analytic perturbation for small black holes
Black holes in binary stars
Wijers, R.A.M.J.
Contents: Introduction; Distinguishing neutron stars and black holes; Optical companions and dynamical masses; X-ray signatures of the nature of a compact object; Structure and evolution of black-hole binaries; High-mass black-hole binaries; Low-mass black-hole binaries; Low-mass black holes; Formation of black holes.
Black hole levitron
Arsiwalla, Xerxes D.; Verlinde, Erik P.
We study the problem of spatially stabilizing four dimensional extremal black holes in background electric/magnetic fields. Whilst looking for stationary stable solutions describing black holes placed in external fields we find that taking a continuum limit of Denef et al.'s multicenter supersymmetric black hole solutions provides a supergravity description of such backgrounds within which a black hole can be trapped within a confined volume. This construction is realized by solving for a levitating black hole over a magnetic dipole base. We comment on how such a construction is akin to a mechanical levitron.
Nonsingular black hole
Chamseddine, Ali H. [American University of Beirut, Physics Department, Beirut (Lebanon); I.H.E.S., Bures-sur-Yvette (France); Mukhanov, Viatcheslav [Niels Bohr Institute, Niels Bohr International Academy, Copenhagen (Denmark); Ludwig-Maximilians University, Theoretical Physics, Munich (Germany); MPI for Physics, Munich (Germany)
We consider the Schwarzschild black hole and show how, in a theory with limiting curvature, the physical singularity "inside it" is removed. The resulting spacetime is geodesically complete. The internal structure of this nonsingular black hole is analogous to Russian nesting dolls. Namely, after falling into the black hole of radius r_g, an observer, instead of being destroyed at the singularity, gets for a short time into the region with limiting curvature. After that he re-emerges in the near horizon region of a spacetime described by the Schwarzschild metric of a gravitational radius proportional to r_g^{1/3}. In the next cycle, after passing the limiting curvature, the observer finds himself within a black hole of even smaller radius proportional to r_g^{1/9}, and so on. Finally after a few cycles he will end up in the spacetime where he remains forever at limiting curvature. (orig.)
Black holes and the multiverse
Garriga, Jaume [Departament de Fisica Fonamental i Institut de Ciencies del Cosmos, Universitat de Barcelona, Marti i Franques, 1, Barcelona, 08028 Spain (Spain); Vilenkin, Alexander; Zhang, Jun, E-mail: [email protected], E-mail: [email protected], E-mail: [email protected] [Institute of Cosmology, Tufts University, 574 Boston Ave, Medford, MA, 02155 (United States)
Vacuum bubbles may nucleate and expand during the inflationary epoch in the early universe. After inflation ends, the bubbles quickly dissipate their kinetic energy; they come to rest with respect to the Hubble flow and eventually form black holes. The fate of the bubble itself depends on the resulting black hole mass. If the mass is smaller than a certain critical value, the bubble collapses to a singularity. Otherwise, the bubble interior inflates, forming a baby universe, which is connected to the exterior FRW region by a wormhole. A similar black hole formation mechanism operates for spherical domain walls nucleating during inflation. As an illustrative example, we studied the black hole mass spectrum in the domain wall scenario, assuming that domain walls interact with matter only gravitationally. Our results indicate that, depending on the model parameters, black holes produced in this scenario can have significant astrophysical effects and can even serve as dark matter or as seeds for supermassive black holes. The mechanism of black hole formation described in this paper is very generic and has important implications for the global structure of the universe. Baby universes inside super-critical black holes inflate eternally and nucleate bubbles of all vacua allowed by the underlying particle physics. The resulting multiverse has a very non-trivial spacetime structure, with a multitude of eternally inflating regions connected by wormholes. If a black hole population with the predicted mass spectrum is discovered, it could be regarded as evidence for inflation and for the existence of a multiverse.
Artificial black holes
Visser, Matt; Volovik, Grigory E
Physicists are pondering on the possibility of simulating black holes in the laboratory by means of various "analog models". These analog models, typically based on condensed matter physics, can be used to help us understand general relativity (Einstein's gravity); conversely, abstract techniques developed in general relativity can sometimes be used to help us understand certain aspects of condensed matter physics. This book contains 13 chapters - written by experts in general relativity, particle physics, and condensed matter physics - that explore various aspects of this two-way traffic.
Black-hole driven winds
Punsly, B.M.
This dissertation is a study of the physical mechanism that allows a large scale magnetic field to torque a rapidly rotating, supermassive black hole. This is an interesting problem as it has been conjectured that rapidly rotating black holes are the central engines that power the observed extragalactic double radio sources. Axisymmetric solutions of the curved space-time version of Maxwell's equations in the vacuum do not torque black holes. Plasma must be introduced for the hole to mechanically couple to the field. The dynamical aspect of rotating black holes that couples the magnetic field to the hole is the following. A rotating black hole forces the external geometry of space-time to rotate (the dragging of inertial frames). Inside of the stationary limit surface, the ergosphere, all physical particle trajectories must appear to rotate in the same direction as the black hole as viewed by the stationary observers at asymptotic infinity. In the text, it is demonstrated how plasma that is created on field lines that thread both the ergosphere and the equatorial plane will be pulled by gravity toward the equator. By the aforementioned properties of the ergosphere, the disk must rotate. Consequently, the disk acts like a unipolar generator. It drives a global current system that supports the toroidal magnetic field in an outgoing, magnetically dominated wind. This wind carries energy (mainly in the form of Poynting flux) and angular momentum towards infinity. The spin down of the black hole is the ultimate source of this energy and angular momentum flux
Primary black holes
Novikov, I.; Polnarev, A.
Evidence is sought for the formation of so-called primary black holes at the very origin of the universe. Such black holes would weigh less than 10^13 kg. The formation of a primary black hole is conditional on strong fluctuations of the gravitational field, corresponding roughly to a half of the fluctuation maximally permitted by the general relativity theory. Only big fluctuations of the gravitational field can overcome the forces of the hot gas pressure and compress the originally expanding matter into a black hole. Low-mass black holes have a temperature exceeding that of the black holes formed from stars. A quantum process of particle formation, the so-called evaporation, takes place in the strong gravitational field of a black hole. The lower the mass of the black hole, the shorter the evaporation time. The analyses of processes taking place during the evaporation of low-mass primary black holes show that only a very small proportion of the total mass of the matter in the universe could turn into primary black holes. (M.D.)
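To put numbers on "the lower the mass of the black hole, the shorter the evaporation time", one can use the crude photons-only estimate t_evap ≈ 5120 pi G^2 M^3 / (hbar c^4); the sketch below is an illustration added to this record, not a calculation from the paper, and it ignores greybody factors and the number of emitted particle species.

# Order-of-magnitude Hawking evaporation time, t ~ 5120 pi G^2 M^3 / (hbar c^4).
import math

G    = 6.67430e-11       # m^3 kg^-1 s^-2
hbar = 1.054571817e-34   # J s
c    = 2.99792458e8      # m/s
YEAR = 3.156e7           # s
AGE_OF_UNIVERSE = 1.38e10 * YEAR

def evaporation_time(M):
    return 5120.0 * math.pi * G**2 * M**3 / (hbar * c**4)

for M in (1e11, 1e12, 1e13):   # kg
    t = evaporation_time(M)
    print(f"M = {M:.0e} kg  ->  t ~ {t / YEAR:.1e} yr  "
          f"({t / AGE_OF_UNIVERSE:.1e} x age of the universe)")

Under this rough estimate only primary black holes lighter than a few times 10^11 kg would have evaporated by now, which is why their evaporation is the observationally interesting case.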
White holes and eternal black holes
Hsu, Stephen D H
We investigate isolated white holes surrounded by vacuum, which correspond to the time reversal of eternal black holes that do not evaporate. We show that isolated white holes produce quasi-thermal Hawking radiation. The time reversal of this radiation, incident on a black hole precursor, constitutes a special preparation that will cause the black hole to become eternal. (paper)
Accreting Black Holes
Begelman, Mitchell C.
I outline the theory of accretion onto black holes, and its application to observed phenomena such as X-ray binaries, active galactic nuclei, tidal disruption events, and gamma-ray bursts. The dynamics as well as radiative signatures of black hole accretion depend on interactions between the relatively simple black-hole spacetime and complex radiation, plasma and magnetohydrodynamical processes in the surrounding gas. I will show how transient accretion processes could provide clues to these ...
Artificial black holes: on the threshold of new physics
For several decades now, there has been a fundamental problem with modern physics: we have two systems that describe the universe remarkably well, and scientists realize that these two systems must be made to work together.
The physics of the relativistic counter-streaming instability that drives mass inflation inside black holes
Hamilton, Andrew J.S.; Avelino, Pedro P.
If you fall into a real astronomical black hole (choosing a supermassive black hole, to make sure that the tidal forces do not get you first), then you will probably meet your fate not at a central singularity, but rather in the exponentially growing, relativistic counter-streaming instability at the inner horizon first pointed out by Poisson and Israel (1990), who called it mass inflation. The chief purpose of this paper is to present a clear exposition of the physical cause and consequence of inflation in spherical, charged black holes. Inflation acts like a particle accelerator in that it accelerates cold ingoing and outgoing streams through each other to prodigiously high energies. Inflation feeds on itself: the acceleration is powered by the gravity produced by the streaming energy. The paper: (1) uses physical arguments to develop simple approximations that follow the evolution of inflation from ignition, through inflation itself, to collapse; (2) confirms that the simple approximations capture accurately the results of fully nonlinear one- and two-fluid self-similar models; (3) demonstrates that, counter-intuitively, the smaller the accretion rate, the more rapidly inflation exponentiates; (4) shows that in single perfect fluid models, inflation occurs only if the sound speed equals the speed of light, supporting the physical idea that inflation in single fluids is driven by relativistic counter-streaming of waves; (5) shows that what happens during inflation up to the Planck curvature depends not on the distant past or future, but rather on events happening only a few hundred black hole crossing times into the past or future; (6) shows that, if quantum gravity does not intervene, then the generic end result of inflation is not a general relativistic null singularity, but rather a spacelike singularity at zero radius.
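For orientation (an illustration added here, not taken from the paper): in the spherical charged case the inflating mass function grows roughly like exp(kappa_- v) in advanced time v, where kappa_- is the surface gravity of the inner (Cauchy) horizon of the Reissner-Nordstrom background, kappa_- = (r_+ - r_-)/(2 r_-^2) with r_± = M ± sqrt(M^2 - Q^2) in units G = c = 1.

# Inner-horizon surface gravity of a Reissner-Nordstrom black hole (G = c = 1),
# which roughly sets the exponentiation rate exp(kappa_minus * v) of mass inflation.
import math

def horizons(M, Q):
    disc = math.sqrt(M**2 - Q**2)
    return M + disc, M - disc          # r_plus, r_minus

def kappa_minus(M, Q):
    r_plus, r_minus = horizons(M, Q)
    return (r_plus - r_minus) / (2.0 * r_minus**2)

M = 1.0
for Q in (0.5, 0.9, 0.99):             # illustrative charge-to-mass ratios
    print(f"Q/M = {Q:.2f}  ->  kappa_minus = {kappa_minus(M, Q):.3f}  (in units of 1/M)")

Note how quickly kappa_- grows as Q/M decreases.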
Black hole Berry phase
de Boer, J.; Papadodimas, K.; Verlinde, E.
Supersymmetric black holes are characterized by a large number of degenerate ground states. We argue that these black holes, like other quantum mechanical systems with such a degeneracy, are subject to a phenomenon which is called the geometric or Berry's phase: under adiabatic variations of the
Black holes are warm
Ravndal, F.
Applying Einstein's theory of gravitation to black holes and their interactions with their surroundings leads to the conclusion that the sum of the surface areas of several black holes can never decrease. This is shown to be analogous to entropy in thermodynamics, and the term entropy is thus also applied to black holes. Continuing, expressions are found for the temperature of a black hole and its luminosity. Thermal radiation is shown to lead to explosion of the black hole. Numerical examples are discussed involving the temperature, the mass, the luminosity and the lifetime of black mini-holes. It is pointed out that no explosions corresponding to the prediction have been observed. It is also shown that the principle of conservation of leptons and baryons is broken by hot black holes, but that this need not be a problem. The related concept of instantons is cited. It is thought that understanding of thermal radiation from black holes may be important for the development of a quantized theory of gravitation. (JIW)
Black holes matter
Kragh, Helge Stjernholm
Review essay: Marcia Bartusiak, Black Hole: How an Idea Abandoned by Newtonians, Hated by Einstein, and Gambled On by Hawking Became Loved (New Haven: Yale University Press, 2015).
Quantum black holes
Hooft, G. 't
This article is divided into three parts. First, a systematic derivation of the Hawking radiation is given in three different ways. The information loss problem is then discussed in great detail. The last part contains a concise discussion of black hole thermodynamics. This article was published as chapter 6 of the IOP book "Lectures on General Relativity, Cosmology and Quantum Black Holes" (July 2017).
Lifshitz topological black holes
Mann, R.B.
I find a class of black hole solutions to a (3+1)-dimensional theory of gravity coupled to abelian gauge fields with negative cosmological constant that has been proposed as the dual theory to a Lifshitz theory describing critical phenomena in (2+1) dimensions. These black holes are all asymptotic to a Lifshitz fixed point geometry and depend on a single parameter that determines both their area (or size) and their charge. Most of the solutions are obtained numerically, but an exact solution is also obtained for a particular value of this parameter. The thermodynamic behaviour of large black holes is almost the same regardless of genus, but differs considerably for small black holes. Screening behaviour is exhibited in the dual theory for any genus, but the critical length at which it sets in is genus-dependent for small black holes.
New regular black hole solutions
Lemos, Jose P. S.; Zanchin, Vilson T.
In the present work we consider general relativity coupled to Maxwell's electromagnetism and charged matter. Under the assumption of spherical symmetry, there is a particular class of solutions that correspond to regular charged black holes whose interior region is de Sitter, the exterior region is Reissner-Nordstroem and there is a charged thin-layer in-between the two. The main physical and geometrical properties of such charged regular black holes are analyzed.
Black holes: the membrane paradigm
Thorne, K.S.; Price, R.H.; Macdonald, D.A.
The physics of black holes is explored in terms of a membrane paradigm which treats the event horizon as a two-dimensional membrane embedded in three-dimensional space. A 3+1 formalism is used to split Schwarzschild space-time and the laws of physics outside a nonrotating hole, which permits treatment of the atmosphere in terms of the physical properties of thin slices. The model is applied to perturbed slowly or rapidly rotating and nonrotating holes, and to quantify the electric and magnetic fields and eddy currents passing through a membrane surface which represents a stretched horizon. Features of tidal gravitational fields in the vicinity of the horizon, quasars and active galactic nuclei, the alignment of jets perpendicular to accretion disks, and the effects of black holes at the center of ellipsoidal star clusters are investigated. Attention is also given to a black hole in a binary system and the interactions of black holes with matter that is either near or very far from the event horizon. Finally, a statistical mechanics treatment is used to derive a second law of thermodynamics for a perfectly thermal atmosphere of a black hole.
Black Holes at the LHC: Progress since 2002
Park, Seong Chan
We review recent notable progress in black hole physics focusing on the up-coming super-collider, the LHC. We discuss the classical formation of black holes by particle collision, the greybody factors for higher dimensional rotating black holes, the deep implications of black hole physics for the 'energy-distance' relation, the security issues of the LHC associated with black hole formation and the newly developed Monte-Carlo generators for black hole events.
Neutrino constraints that transform black holes into grey holes
Ruderfer, M.
Existing black hole theory is found to be defective in its neglect of the physical properties of matter and radiation at superhigh densities. Nongravitational neutrino effects are shown to be physically relevant to the evolution of astronomical black holes and their equations of state. Gravitational collapse to supernovae combined with the Davis and Ray vacuum solution for neutrinos limit attainment of a singularity and require black holes to evolve into "grey holes". These allow a better justification than do black holes for explaining the unique existence of galactic masses. (Auth.)
ULTRAMASSIVE BLACK HOLE COALESCENCE
Khan, Fazeel Mahmood; Holley-Bockelmann, Kelly; Berczik, Peter
Although supermassive black holes (SMBHs) correlate well with their host galaxies, there is an emerging view that outliers exist. Henize 2-10, NGC 4889, and NGC 1277 are examples of SMBHs at least an order of magnitude more massive than their host galaxy suggests. The dynamical effects of such ultramassive central black holes are unclear. Here, we perform direct N-body simulations of mergers of galactic nuclei where one black hole is ultramassive to study the evolution of the remnant and the black hole dynamics in this extreme regime. We find that the merger remnant is axisymmetric near the center, while near the large SMBH influence radius, the galaxy is triaxial. The SMBH separation shrinks rapidly due to dynamical friction, and quickly forms a binary black hole; if we scale our model to the most massive estimate for the NGC 1277 black hole, for example, the timescale for the SMBH separation to shrink from nearly a kiloparsec to less than a parsec is roughly 10 Myr. By the time the SMBHs form a hard binary, gravitational wave emission dominates, and the black holes coalesce in a mere few Myr. Curiously, these extremely massive binaries appear to nearly bypass the three-body scattering evolutionary phase. Our study suggests that in this extreme case, SMBH coalescence is governed by dynamical friction followed nearly directly by gravitational wave emission, resulting in a rapid and efficient SMBH coalescence timescale. We discuss the implications for gravitational wave event rates and hypervelocity star production.
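As a back-of-the-envelope check on the "few Myr" figure (an illustration, with masses and separations chosen for convenience rather than taken from the simulations), the standard Peters (1964) circular-orbit decay time t_gw = (5/256) c^5 a^4 / (G^3 m1 m2 (m1 + m2)) can be evaluated directly:

# Peters (1964) gravitational-wave coalescence time for a circular binary.
import math

G     = 6.67430e-11        # m^3 kg^-1 s^-2
c     = 2.99792458e8       # m/s
M_SUN = 1.98892e30         # kg
PC    = 3.0857e16          # m
MYR   = 3.156e13           # s

def t_gw(m1, m2, a):
    """Circular-orbit inspiral time from separation a (all SI units)."""
    return (5.0 / 256.0) * c**5 * a**4 / (G**3 * m1 * m2 * (m1 + m2))

m1 = 1.0e10 * M_SUN        # illustrative "ultramassive" primary
m2 = 1.0e9 * M_SUN         # illustrative secondary
for a_pc in (1.0, 0.1, 0.01):
    print(f"a = {a_pc:5.2f} pc  ->  t_gw ~ {t_gw(m1, m2, a_pc * PC) / MYR:.2e} Myr")

For black holes this massive the decay time drops below a Myr already at separations of order 0.1 pc, consistent with the rapid coalescence described above.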
Black-holes-hedgehogs in the false vacuum and a new physics beyond the Standard Model
Das, C. R.; Laperashvili, L. V.; Sidharth, B. G.; Nielsen, H. B.
In the present talk, we consider the existence of the two degenerate universal vacua: a) the first Electroweak vacuum at v = 246 GeV - the "true vacuum", and b) the second Planck scale "false vacuum" at v_2 ∼ 10^18 GeV. In these vacua, we investigated the different topological defects. The main aim of this paper is an investigation of the hedgehog's configurations as defects of the false vacuum. In the framework of the f(R) gravity, suggested by the authors in their Gravi-Weak Unification model, we obtained a black hole solution, which corresponds to a "hedgehog" - a global monopole "swallowed" by a black hole with mass ∼ 10^19 GeV. These black holes form a lattice-like structure of the vacuum at the Planck scale. Considering the results of the hedgehog lattice theory in the framework of the SU(2) Yang-Mills gauge-invariant theory with hedgehogs in the Wilson loops, we have used the critical value of temperature for the hedgehog's confinement phase. This result gave us the possibility to conclude that there exist triplet Higgs fields which can contribute to the SM at the energy scale ≃ 10^4 - 10^5 GeV. Indicating new physics at the scale 10-100 TeV, these triplet Higgs particles can provide the stability of the EW-vacuum of the SM.
Black holes by analytic continuation
Amati, Daniele
In the context of a two-dimensional exactly solvable model, the dynamics of quantum black holes is obtained by analytically continuing the description of the regime where no black hole is formed. The resulting spectrum of outgoing radiation departs from the one predicted by the Hawking model in the region where the outgoing modes arise from the horizon with Planck-order frequencies. This occurs early in the evaporation process, and the resulting physical picture is unconventional. The theory predicts that black holes will only radiate out an energy of Planck mass order, stabilizing after a transitory period. The continuation from a regime without black hole formation --accessible in the 1+1 gravity theory considered-- is implicit in an S matrix approach and provides in this way a possible solution to the problem of information loss.
Black and white holes
Zeldovich, Ya.; Novikov, I.; Starobinskij, A.
The theory of the origination of white holes is explained as a dual phenomenon with regard to the formation of black holes. Theoretically it is possible to derive the white hole by changing the sign of time in solving the general relativity equation implying the black hole. The white hole represents the amount of particles formed in the vicinity of a singularity. For a distant observer, matter composed of these particles expands and the outer boundaries of this matter approach the gravitational radius R_r from the inside. At t >> R_r/c all radiation or expulsion of matter terminates. For the outside observer the white hole exists for an unlimited length of time. In fact, however, it acquires the properties of a black hole and all processes in it cease. The qualitative difference between a white hole and a black hole is that a white hole is formed as the result of an inner quantum explosion from the singularity to the gravitational radius and not as the result of a gravitational collapse, i.e., the shrinkage of diluted matter towards the gravitational radius. (J.B.)
Black holes new horizons
Hayward, Sean Alan
Black holes, once just fascinating theoretical predictions of how gravity warps space-time according to Einstein's theory, are now generally accepted as astrophysical realities, formed by post-supernova collapse, or as supermassive black holes mysteriously found at the cores of most galaxies, powering active galactic nuclei, the most powerful objects in the universe. Theoretical understanding has progressed in recent decades with a wider realization that local concepts should characterize black holes, rather than the global concepts found in textbooks. In particular, notions such as trapping h
Attempt to explain black hole spin in X-ray binaries by new physics
Bambi, Cosimo
It is widely believed that the spin of black holes in X-ray binaries is mainly natal. A significant spin-up from accretion is not possible. If the secondary has a low mass, the black hole spin cannot change too much even if the black hole swallows the whole stellar companion. If the secondary has a high mass, its lifetime is too short to transfer the necessary amount of matter and spin the black hole up. However, while black holes formed from the collapse of a massive star with solar metallicity are expected to have low birth spin, current spin measurements show that some black holes in X-ray binaries are rotating very rapidly. Here we show that, if these objects are not the Kerr black holes of general relativity, the accretion of a small amount of matter (∼2 M_sun) can make them look like very fast-rotating Kerr black holes. Such a possibility is not in contradiction with any observation and it can explain current spin measurements in a very simple way. (orig.)
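The premise that significant spin-up by accretion is impossible for a low-mass donor can be quantified with Bardeen's classic thin-disc spin-up relation for an initially non-rotating Kerr hole, a* = sqrt(2/3) (M0/M) [4 - sqrt(18 M0^2/M^2 - 2)], valid for M ≤ sqrt(6) M0, where M0 is the mass at zero spin and M the mass after accretion. The sketch below simply tabulates it; the 10 M_sun starting mass is an illustrative choice, not a value from the paper.

# Bardeen (1970) spin-up of an initially non-rotating Kerr black hole
# accreting from the innermost stable circular orbit.
import math

def spin_after_accretion(M0, M):
    """Dimensionless spin a* after growing from mass M0 (at a* = 0) to mass M."""
    if M >= math.sqrt(6.0) * M0:
        return 1.0                      # formula saturates at the extremal value
    x = M0 / M
    return math.sqrt(2.0 / 3.0) * x * (4.0 - math.sqrt(18.0 * x**2 - 2.0))

M0 = 10.0                               # initial mass in solar masses (illustrative)
for dM in (0.5, 1.0, 2.0, 5.0, 10.0):   # accreted rest mass in solar masses
    print(f"accreted {dM:4.1f} M_sun  ->  a* = {spin_after_accretion(M0, M0 + dM):.3f}")

Reaching a* of about 0.9 or more requires the hole to grow by several tens of percent of its own mass, far more than a low-mass companion can supply; this is the standard Kerr expectation that the abstract contrasts with the non-Kerr alternative.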
Black holes, white dwarfs and neutron stars: The physics of compact objects
Shapiro, S.L.; Teukolsky, S.A.
The contents include: Star deaths and the formation of compact objects; White dwarfs; Rotation and magnetic fields; Cold equation of state above neutron drip; Pulsars; Accretion onto black holes; Supermassive stars and black holes; Appendices; and Indexes. This book discusses one aspect of astronomy, compact objects, and provides information on astrophysics and general relativity.
Black holes with halos
Monten, Ruben; Toldo, Chiara
We present new AdS_4 black hole solutions in N = 2 gauged supergravity coupled to vector and hypermultiplets. We focus on a particular consistent truncation of M-theory on the homogeneous Sasaki-Einstein seven-manifold M^{111}, characterized by the presence of one Betti vector multiplet. We numerically construct static and spherically symmetric black holes with electric and magnetic charges, corresponding to M2 and M5 branes wrapping non-contractible cycles of the internal manifold. The novel feature characterizing these nonzero temperature configurations is the presence of a massive vector field halo. Moreover, we verify the first law of black hole mechanics and we study the thermodynamics in the canonical ensemble. We analyze the behavior of the massive vector field condensate across the small-large black hole phase transition and we interpret the process in the dual field theory.
Introducing the Black Hole
Ruffini, Remo; Wheeler, John A.
Discusses the cosmology theory of a black hole, a region where an object loses its identity, but mass, charge, and momentum are conserved. Included are three possible formation processes, theorized properties, and three ways they might eventually be detected. (DS)
Intermediate-Mass Black Holes
Miller, M. Coleman; Colbert, E. J. M.
The mathematical simplicity of black holes, combined with their links to some of the most energetic events in the universe, means that black holes are key objects for fundamental physics and astrophysics. Until recently, it was generally believed that black holes in nature appear in two broad mass ranges: stellar-mass (M ~ 3-20 M⊙), which are produced by the core collapse of massive stars, and supermassive (M ~ 10^6-10^10 M⊙), which are found in the centers of galaxies and are produced by a still uncertain combination of processes. In the last few years, however, evidence has accumulated for an intermediate-mass class of black holes, with M ~ 10^2-10^4 M⊙. If such objects exist they have important implications for the dynamics of stellar clusters, the formation of supermassive black holes, and the production and detection of gravitational waves. We review the evidence for intermediate-mass black holes and discuss future observational and theoretical work that will help clarify numerous outstanding questions about these objects.
Supersymmetric black holes
de Wit, Bernard
The effective action of N = 2, d = 4 supergravity is shown to acquire no quantum corrections in background metrics admitting super-covariantly constant spinors. In particular, these metrics include the Robinson-Bertotti metric (product of two 2-dimensional spaces of constant curvature) with all 8 supersymmetries unbroken. Another example is a set of an arbitrary number of extreme Reissner-Nordström black holes. These black holes break 4 of 8 supersymmetries, leaving the other 4 unbroken. We ha...
Black Holes and Thermodynamics
Wald, Robert M.
We review the remarkable relationship between the laws of black hole mechanics and the ordinary laws of thermodynamics. It is emphasized that - in analogy with the laws of thermodynamics - the validity of the laws of black hole mechanics does not appear to depend upon the details of the underlying dynamical theory (i.e., upon the particular field equations of general relativity). It also is emphasized that a number of unresolved issues arise in "ordinary thermodynamics" in the context of gener...
Erratic Black Hole Regulates Itself
..."'t entirely understand, the other one gets the upper hand." [Chandra X-ray image of GRS 1915+105] The latest Chandra results also show that the wind and the jet carry about the same amount of matter away from the black hole. This is evidence that the black hole is somehow regulating its accretion rate, which may be related to the toggling between mass expulsion via either a jet or a wind from the accretion disk. Self-regulation is a common topic when discussing supermassive black holes, but this is the first clear evidence for it in stellar-mass black holes. "It is exciting that we may be on the track of explaining two mysteries at the same time: how black hole jets can be shut down and also how black holes regulate their growth," said co-author Julia Lee, assistant professor in the Astronomy department at the Harvard-Smithsonian Center for Astrophysics. "Maybe black holes can regulate themselves better than the financial markets!" Although micro-quasars and quasars differ in mass by factors of millions, they should show a similarity in behavior when their very different physical scales are taken into account. "If quasars and micro-quasars behave very differently, then we have a big problem to figure out why, because gravity treats them the same," said Neilsen. "So, our result is actually very reassuring, because it's one more link between these different types of black holes." The timescale for changes in behavior of a black hole should vary in proportion to the mass. For example, an hour-long timescale for changes in GRS 1915 would correspond to about 10,000 years for a supermassive black hole that weighs a billion times the mass of the Sun. "We cannot hope to explore at this level of detail in any single supermassive black hole ...
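The scaling in the last two sentences is easy to verify: characteristic variability timescales scale linearly with black-hole mass, so, assuming a mass of roughly 14 M_sun for GRS 1915+105 (a commonly quoted estimate, used here only as an illustrative input), one hour maps onto about 10^4 years for a 10^9 M_sun black hole:

# Linear scaling of variability timescale with black-hole mass.
M_microquasar = 14.0        # assumed mass of GRS 1915+105, in solar masses (illustrative)
M_quasar      = 1.0e9       # supermassive black hole, in solar masses
hours_per_year = 24.0 * 365.25

t_quasar_years = 1.0 * (M_quasar / M_microquasar) / hours_per_year
print(f"1 hour at {M_microquasar} M_sun  ->  ~{t_quasar_years:.0f} years at 1e9 M_sun")

The result, roughly 8000 years, is within a factor of order unity of the 10,000 years quoted above.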
Spin One Hawking Radiation from Dirty Black Holes
Petarpa Boonserm; Tritos Ngampitipan; Matt Visser
A "clean� black hole is a black hole in vacuum such as the Schwarzschild black hole. However in real physical systems, there are matter fields around a black hole. Such a black hole is called a "dirty black hole�. In this paper, the effect of matter fields on the black hole and the greybody factor is investigated. The results show that matter fields make a black hole smaller. They can increase the potential energy to a black hole to obstruct Hawking radiation to propagate. This causes the gre...
Black hole thermodynamical entropy
Tsallis, Constantino; Cirto, Leonardo J.L.
As early as 1902, Gibbs pointed out that systems whose partition function diverges, e.g. gravitation, lie outside the validity of the Boltzmann-Gibbs (BG) theory. Consistently, since the pioneering Bekenstein-Hawking results, physically meaningful evidence (e.g., the holographic principle) has accumulated that the BG entropy S_BG of a (3+1) black hole is proportional to its area L^2 (L being a characteristic linear length), and not to its volume L^3. Similarly, there exists the area law, so named because, for a wide class of strongly quantum-entangled d-dimensional systems, S_BG is proportional to ln L if d = 1, and to L^{d-1} if d > 1, instead of being proportional to L^d (d ≥ 1). These results violate the extensivity of the thermodynamical entropy of a d-dimensional system. This thermodynamical inconsistency disappears if we realize that the thermodynamical entropy of such nonstandard systems is not to be identified with the BG additive entropy but with appropriately generalized nonadditive entropies. Indeed, the celebrated usefulness of the BG entropy is founded on hypotheses such as relatively weak probabilistic correlations (and their connections to ergodicity, which by no means can be assumed as a general rule of nature). Here we introduce a generalized entropy which, for the Schwarzschild black hole and the area law, can solve the thermodynamic puzzle. (orig.)
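For concreteness, the area scaling discussed here is that of the Bekenstein-Hawking entropy, S_BH = k_B c^3 A / (4 G hbar) with A = 16 pi G^2 M^2 / c^4 for a Schwarzschild hole; the snippet below (an illustration added to this record, not part of the paper) shows the familiar enormous values it yields.

# Bekenstein-Hawking entropy of a Schwarzschild black hole, in units of k_B.
import math

G    = 6.67430e-11       # m^3 kg^-1 s^-2
c    = 2.99792458e8      # m/s
hbar = 1.054571817e-34   # J s
M_SUN = 1.98892e30       # kg

def entropy_in_kB(M):
    area = 16.0 * math.pi * (G * M / c**2)**2        # horizon area, m^2
    return c**3 * area / (4.0 * G * hbar)            # S / k_B

for M in (1.0 * M_SUN, 1.0e6 * M_SUN):
    print(f"M = {M / M_SUN:.0e} M_sun  ->  S/k_B ~ {entropy_in_kB(M):.2e}")

The entropy scales as M^2, i.e. with the horizon area L^2 rather than a volume L^3, which is exactly the non-extensive behaviour the generalized entropy is designed to accommodate.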
Newborn Black Holes
Science Teacher, 2005
Scientists using NASA's Swift satellite say they have found newborn black holes, just seconds old, in a confused state of existence. The holes are consuming material falling into them while somehow propelling other material away at great speeds. "First comes a blast of gamma rays followed by intense pulses of x-rays. The energies involved are much…
A Black Hole in Our Galactic Center
Ruiz, Michael J.
An introductory approach to black holes is presented along with astronomical observational data pertaining to the presence of a supermassive black hole at the center of our galaxy. Concepts of conservation of energy and Kepler's third law are employed so students can apply formulas from their physics class to determine the mass of the black hole…
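The classroom calculation alluded to here is a direct application of Kepler's third law, M = 4 pi^2 a^3 / (G P^2), to a star orbiting the Galactic-centre radio source; with rounded literature values for the star S2 (period about 16 yr, semi-major axis about 1000 AU, used only as illustrative inputs) it already gives the accepted answer to within tens of percent.

# Mass of the Galactic-centre black hole from Kepler's third law, M = 4 pi^2 a^3 / (G P^2).
import math

G     = 6.67430e-11       # m^3 kg^-1 s^-2
AU    = 1.495978707e11    # m
YEAR  = 3.156e7           # s
M_SUN = 1.98892e30        # kg

a = 1000.0 * AU           # approximate semi-major axis of the star S2 (illustrative)
P = 16.0 * YEAR           # approximate orbital period of S2 (illustrative)

M = 4.0 * math.pi**2 * a**3 / (G * P**2)
print(f"M ~ {M / M_SUN:.1e} solar masses")   # roughly 4e6 M_sun

The result, about 4 x 10^6 solar masses, is the standard estimate for the mass of the black hole at the centre of our galaxy.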
What does a black hole look like?
Bailyn, Charles D
Emitting no radiation or any other kind of information, black holes mark the edge of the universe--both physically and in our scientific understanding. Yet astronomers have found clear evidence for the existence of black holes, employing the same tools and techniques used to explore other celestial objects. In this sophisticated introduction, leading astronomer Charles Bailyn goes behind the theory and physics of black holes to describe how astronomers are observing these enigmatic objects and developing a remarkably detailed picture of what they look like and how they interact with their surroundings. Accessible to undergraduates and others with some knowledge of introductory college-level physics, this book presents the techniques used to identify and measure the mass and spin of celestial black holes. These key measurements demonstrate the existence of two kinds of black holes, those with masses a few times that of a typical star, and those with masses comparable to whole galaxies--supermassive black holes...
Irreducible mass, unincreasable angular momentum and isoareal transformations for black hole physics
Calvani, M [Padua Univ. (Italy). Ist. di Astronomia]; Francaviglia, M [Turin Univ. (Italy)]
The concept of unincreasable angular momentum for a Kerr black hole is introduced and related to the isoareal transformations of the horizons. A thermodynamical interpretation is proposed for the new parameter.
Quantum information versus black hole physics: deep firewalls from narrow assumptions
Braunstein, Samuel L.; Pirandola, Stefano
The prevalent view that evaporating black holes should simply be smaller black holes has been challenged by the firewall paradox. In particular, this paradox suggests that something different occurs once a black hole has evaporated to one-half its original surface area. Here, we derive variations of the firewall paradox by tracking the thermodynamic entropy within a black hole across its entire lifetime and extend it even to anti-de Sitter space-times. Our approach sweeps away many unnecessary assumptions, allowing us to demonstrate a paradox exists even after its initial onset (when conventional assumptions render earlier analyses invalid). The most natural resolution may be to accept firewalls as a real phenomenon. Further, the vast entropy accumulated implies a deep firewall that goes `all the way down' in contrast with earlier work describing only a structure at the horizon. This article is part of a discussion meeting issue `Foundations of quantum mechanics and their impact on contemporary society'.
Superradiance energy extraction, black-hole bombs and implications for astrophysics and particle physics
Brito, Richard; Pani, Paolo
This volume gives a unified picture of the multifaceted subject of superradiance, with a focus on recent developments in the field, ranging from fundamental physics to astrophysics. Superradiance is a radiation enhancement process that involves dissipative systems. With a 60 year-old history, superradiance has played a prominent role in optics, quantum mechanics and especially in relativity and astrophysics. In Einstein's General Relativity, black-hole superradiance is permitted by dissipation at the event horizon, which allows energy extraction from the vacuum, even at the classical level. When confined, this amplified radiation can give rise to strong instabilities known as "black-hole bombs", which have applications in searches for dark matter, in physics beyond the Standard Model and in analog models of gravity. This book discusses and draws together all these fascinating aspects of superradiance.
Merging Black Holes
Centrella, Joan
The final merger of two black holes is expected to be the strongest source of gravitational waves for ground-based detectors such as LIGO and VIRGO, as well as for future space-based detectors. Since the merger takes place in the regime of strong dynamical gravity, computing the resulting gravitational waveforms requires solving the full Einstein equations of general relativity on a computer. For many years, numerical codes designed to simulate black hole mergers were plagued by a host of instabilities. However, recent breakthroughs have conquered these instabilities and opened up this field dramatically. This talk will focus on the resulting 'gold rush' of new results that is revealing the dynamics and waveforms of binary black hole mergers, and their applications in gravitational wave detection, testing general relativity, and astrophysics.
Black hole gravitohydromagnetics
Punsly, Brian
Black hole gravitohydromagnetics (GHM) is developed from the rudiments to the frontiers of research in this book. GHM describes plasma interactions that combine the effects of gravity and a strong magnetic field, in the vicinity (ergosphere) of a rapidly rotating black hole. This topic was created in response to the astrophysical quest to understand the central engines of radio loud extragalactic radio sources. The theory describes a "torsional tug of war" between rotating ergospheric plasma and the distant asymptotic plasma that extracts the rotational inertia of the black hole. The recoil from the struggle between electromagnetic and gravitational forces near the event horizon is manifested as a powerful pair of magnetized particle beams (jets) that are ejected at nearly the speed of light. These bipolar jets feed large-scale magnetized plasmoids on scales as large as millions of light years (the radio lobes of extragalactic radio sources). This interaction can initiate jets that transport energy fluxes exc...
Turbulent black holes.
Yang, Huan; Zimmerman, Aaron; Lehner, Luis
We demonstrate that rapidly spinning black holes can display a new type of nonlinear parametric instability-which is triggered above a certain perturbation amplitude threshold-akin to the onset of turbulence, with possibly observable consequences. This instability transfers energy from higher temporal and azimuthal spatial frequencies to lower frequencies-a phenomenon reminiscent of the inverse cascade displayed by (2+1)-dimensional fluids. Our finding provides evidence for the onset of transitory turbulence in astrophysical black holes and predicts observable signatures in black hole binaries with high spins. Furthermore, it gives a gravitational description of this behavior which, through the fluid-gravity duality, can potentially shed new light on the remarkable phenomena of turbulence in fluids.
Anyon black holes
Aghaei Abchouyeh, Maryam; Mirza, Behrouz; Karimi Takrami, Moein; Younesizadeh, Younes
We propose a correspondence between an Anyon Van der Waals fluid and a (2 + 1) dimensional AdS black hole. Anyons are particles with intermediate statistics that interpolates between a Fermi-Dirac statistics and a Bose-Einstein one. A parameter α (0 < α < 1) characterizes this intermediate statistics: one finds a quasi Fermi-Dirac statistics for α > αc, but a quasi Bose-Einstein statistics for α < αc. For α > αc and a range of values of the cosmological constant, there is, however, no event horizon so there is no black hole solution. Thus, for these values of cosmological constants, the AdS Anyon Van der Waals black holes have only quasi Bose-Einstein statistics.
Bringing Black Holes Home
Furmann, John M.
Black holes are difficult to study because they emit no light. To overcome this obstacle, scientists are trying to recreate a black hole in the laboratory. The article gives an overview of the theories of Einstein and Hawking as they pertain to the construction of the Large Hadron Collider (LHC) near Geneva, Switzerland, scheduled for completion in 2006. The LHC will create two beams of protons traveling in opposing directions that will collide and create a plethora of scattered elementary particles. Protons traveling in opposite directions at very high velocities may create particles that come close enough to each other to feel their compacted higher dimensions and create a mega force of gravity that can create tiny laboratory-sized black holes for fractions of a second. The experiments carried out with LHC will be used to test modern string theory and relativity.
Black holes: a slanted overview
Vishveshwara, C.V.
The black hole saga spanning some seventy years may be broadly divided into four phases, namely, (a) the dark ages when little was known about black holes even though they had come into existence quite early through the Schwarzschild solution, (b) the age of enlightenment bringing in deep and prolific discoveries, (c) the age of fantasy that cast black holes in all sorts of extraordinary roles, and (d) the golden age of relativistic astrophysics - to some extent similar to Dirac's characterisation of the development of quantum theory - in which black holes have been extensively used to elucidate a number of astrophysical phenomena. It is impossible to give here even the briefest outline of the major developments in this vast area. We shall only attempt to present a few aspects of black hole physics which have been actively pursued in the recent past. Some details are given in the case of those topics that have not found their way into text books or review articles. (author)
Geometric inequalities for black holes
Dain, Sergio
A geometric inequality in General Relativity relates quantities that have both a physical interpretation and a geometrical definition. It is well known that the parameters that characterize the Kerr-Newman black hole satisfy several important geometric inequalities. Remarkably enough, some of these inequalities also hold for dynamical black holes. These kinds of inequalities, which are valid in the dynamical and strong field regime, play an important role in the characterization of gravitational collapse. They are closely related to the cosmic censorship conjecture. In this talk I will review recent results in this subject. (author)
Control of black hole evaporation?
Ahn, Doyeol
Contradiction between Hawking's semi-classical arguments and the string theory on the evaporation of a black hole has been one of the most intriguing problems in fundamental physics. A final-state boundary condition inside the black hole was proposed by Horowitz and Maldacena to resolve this contradiction. We point out that the original Hawking effect can also be regarded as a separate boundary condition at the event horizon for this scenario. Here, we found that the change of the Hawking boundary condition may affect the information transfer from the initial collapsing matter to the outgoing Hawking radiation during the evaporation process and, as a result, the evaporation process itself, significantly.
Slowly balding black holes
Lyutikov, Maxim; McKinney, Jonathan C.
The 'no-hair' theorem, a key result in general relativity, states that an isolated black hole is defined by only three parameters: mass, angular momentum, and electric charge; this asymptotic state is reached on a light-crossing time scale. We find that the no-hair theorem is not formally applicable for black holes formed from the collapse of a rotating neutron star. Rotating neutron stars can self-produce particles via vacuum breakdown forming a highly conducting plasma magnetosphere such that magnetic field lines are effectively 'frozen in' the star both before and during collapse. In the limit of no resistivity, this introduces a topological constraint which prohibits the magnetic field from sliding off the newly-formed event horizon. As a result, during collapse of a neutron star into a black hole, the latter conserves the number of magnetic flux tubes N_B = eΦ_∞/(πcħ), where Φ_∞ ≅ 2π² B_NS R_NS³/(P_NS c) is the initial magnetic flux through the hemispheres of the progenitor and out to infinity. We test this theoretical result via 3-dimensional general relativistic plasma simulations of rotating black holes that start with a neutron star dipole magnetic field with no currents initially present outside the event horizon. The black hole's magnetosphere subsequently relaxes to the split-monopole magnetic field geometry with self-generated currents outside the event horizon. The dissipation of the resulting equatorial current sheet leads to a slow loss of the anchored flux tubes, a process that balds the black hole on long resistive time scales rather than the short light-crossing time scales expected from the vacuum no-hair theorem.
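A minimal numerical sketch of the flux-tube count quoted in the abstract above; the neutron-star parameters (B_NS, R_NS, P_NS) below are illustrative assumptions, not values taken from the paper, so the result is only an order-of-magnitude figure.

```python
# Order-of-magnitude evaluation of N_B = e*Phi_inf/(pi*c*hbar), with
# Phi_inf ~ 2*pi^2*B_NS*R_NS^3/(P_NS*c), in Gaussian-cgs units.
# The pulsar parameters below are assumed for illustration only.
import math

e_esu = 4.803e-10   # electron charge [esu]
hbar  = 1.055e-27   # reduced Planck constant [erg s]
c     = 2.998e10    # speed of light [cm/s]

B_ns = 1.0e12       # surface dipole field [G]   (assumed)
R_ns = 1.0e6        # neutron-star radius [cm]   (assumed, 10 km)
P_ns = 0.01         # spin period [s]            (assumed, 10 ms)

phi_inf = 2 * math.pi**2 * B_ns * R_ns**3 / (P_ns * c)  # open magnetic flux [G cm^2]
N_B = e_esu * phi_inf / (math.pi * c * hbar)            # conserved flux-tube count

print(f"Phi_inf ~ {phi_inf:.2e} G cm^2")
print(f"N_B     ~ {N_B:.2e}")
```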
Paths toward understanding black holes
Mayerson, D.R.
This work can be summarized as trying to understand aspects of black holes, gravity, and geometry, in the context of supergravity and string theory in high-energy theoretical physics. The two parts of this thesis have been written with entirely different audiences in mind. The first part consists of
Characterizing Black Hole Mergers
Baker, John; Boggs, William Darian; Kelly, Bernard
Binary black hole mergers are a promising source of gravitational waves for interferometric gravitational wave detectors. Recent advances in numerical relativity have revealed the predictions of General Relativity for the strong burst of radiation generated in the final moments of binary coalescence. We explore features in the merger radiation which characterize the final moments of merger and ringdown. Interpreting the waveforms in terms of a rotating implicit radiation source allows a unified phenomenological description of the system from inspiral through ringdown. Common features in the waveforms allow quantitative description of the merger signal which may provide insights for observations of large-mass black hole binaries.
Moulting Black Holes
Bena, Iosif; Chowdhury, Borun D.; de Boer, Jan; El-Showk, Sheer; Shigemori, Masaki
We find a family of novel supersymmetric phases of the D1-D5 CFT, which in certain ranges of charges have more entropy than all known ensembles. We also find bulk BPS configurations that exist in the same range of parameters as these phases, and have more entropy than a BMPV black hole; they can be thought of as coming from a BMPV black hole shedding a "hair" condensate outside of the horizon. The entropy of the bulk configurations is smaller than that of the CFT phases, which indicates that ...
Are black holes springlike?
Good, Michael R. R.; Ong, Yen Chin
A (3+1)-dimensional asymptotically flat Kerr black hole's angular speed Ω_+ can be used to define an effective spring constant, k = mΩ_+². Its maximum value is the Schwarzschild surface gravity, k = κ, which rapidly weakens as the black hole spins down and the temperature increases. The Hawking temperature is expressed in terms of the spring constant: 2πT = κ - k. Hooke's law, in the extremal limit, provides the force F = 1/4, which is consistent with the conjecture of maximum force in general relativity.
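As a quick sanity check of the relation quoted above, the following sketch evaluates both sides of 2πT = κ - k for a Kerr black hole in geometric units (G = c = ħ = k_B = 1), using the standard Kerr horizon quantities; the spin values are arbitrary test points, not taken from the paper.

```python
# Numerical check of 2*pi*T_H = kappa - k for a Kerr black hole, where
# k = M*Omega_+^2 and kappa = 1/(4M) is the Schwarzschild surface gravity.
# Geometric units G = c = hbar = k_B = 1; spin values below are test points.
import math

def kerr_spring_check(M, a):
    r_plus = M + math.sqrt(M**2 - a**2)          # outer horizon radius
    omega_plus = a / (r_plus**2 + a**2)          # horizon angular speed
    k = M * omega_plus**2                        # effective spring constant
    kappa_schw = 1.0 / (4.0 * M)                 # Schwarzschild surface gravity
    # Standard Kerr Hawking temperature: T = (r_+ - M) / (2*pi*(r_+^2 + a^2))
    T_hawking = (r_plus - M) / (2.0 * math.pi * (r_plus**2 + a**2))
    return 2.0 * math.pi * T_hawking, kappa_schw - k

for a in (0.0, 0.3, 0.7, 0.999):
    lhs, rhs = kerr_spring_check(M=1.0, a=a)
    print(f"a = {a:5.3f}:  2*pi*T = {lhs:.6f},  kappa - k = {rhs:.6f}")
```

Both columns agree for every spin, and the spring constant k indeed vanishes at a = 0 and approaches κ = 1/(4M) in the extremal limit.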
Dancing with Black Holes
Aarseth, S. J.
We describe efforts over the last six years to implement regularization methods suitable for studying one or more interacting black holes by direct N-body simulations. Three different methods have been adapted to large-N systems: (i) Time-Transformed Leapfrog, (ii) Wheel-Spoke, and (iii) Algorithmic Regularization. These methods have been tried out with some success on GRAPE-type computers. Special emphasis has also been devoted to including post-Newtonian terms, with application to moderately massive black holes in stellar clusters. Some examples of simulations leading to coalescence by gravitational radiation will be presented to illustrate the practical usefulness of such methods.
Virtual Black Holes
Hawking, Stephen W.
One would expect spacetime to have a foam-like structure on the Planck scale with a very high topology. If spacetime is simply connected (which is assumed in this paper), the non-trivial homology occurs in dimension two, and spacetime can be regarded as being essentially the topological sum of $S^2\times S^2$ and $K3$ bubbles. Comparison with the instantons for pair creation of black holes shows that the $S^2\times S^2$ bubbles can be interpreted as closed loops of virtual black holes. It is ...
Superfluid Black Holes.
Hennigar, Robie A; Mann, Robert B; Tjoa, Erickson
We present what we believe is the first example of a "λ-line" phase transition in black hole thermodynamics. This is a line of (continuous) second order phase transitions which in the case of liquid ^{4}He marks the onset of superfluidity. The phase transition occurs for a class of asymptotically anti-de Sitter hairy black holes in Lovelock gravity where a real scalar field is conformally coupled to gravity. We discuss the origin of this phase transition and outline the circumstances under which it (or generalizations of it) could occur.
Partons and black holes
Susskind, L.; Griffin, P.
A light-front renormalization group analysis is applied to study matter which falls into massive black holes, and the related problem of matter with transplanckian energies. One finds that the rate of matter spreading over the black hole's horizon unexpectedly saturates the causality bound. This is related to the transverse growth behavior of transplanckian particles as their longitudinal momentum increases. This growth behavior suggests a natural mechanism to implement 't Hooft's scenario that the universe is an image of data stored on a 2 + 1 dimensional hologram-like projection
Bumpy black holes
Emparan, Roberto; Figueras, Pau; Martinez, Marina
We study six-dimensional rotating black holes with bumpy horizons: these are topologically spherical, but the sizes of symmetric cycles on the horizon vary non-monotonically with the polar angle. We construct them numerically for the first three bumpy families, and follow them in solution space until they approach critical solutions with localized singularities on the horizon. We find strong evidence of the conical structures that have been conjectured to mediate the transitions to black ring...
Semiclassical Approach to Black Hole Evaporation
Lowe, David A.
Black hole evaporation may lead to massive or massless remnants, or naked singularities. This paper investigates this process in the context of two quite different two dimensional black hole models. The first is the original CGHS model, the second is another two dimensional dilaton-gravity model, but with properties much closer to physics in the real, four dimensional, world. Numerical simulations are performed of the formation and subsequent evaporation of black holes and the results are fou...
Electron-positron pairs in physics and astrophysics: From heavy nuclei to black holes
Ruffini, Remo; Vereshchagin, Gregory; Xue, She-Sheng
Due to the interaction of physics and astrophysics we are witnessing in these years a splendid synthesis of theoretical, experimental and observational results originating from three fundamental physical processes. They were originally proposed by Dirac, by Breit and Wheeler and by Sauter, Heisenberg, Euler and Schwinger. For almost seventy years they have all three been followed by a continued effort of experimental verification on Earth-based experiments. The Dirac process, e+e-→2γ, has been by far the most successful. It has obtained extremely accurate experimental verification and has led as well to an enormous amount of new physics, in possibly one of the most fruitful experimental avenues, opened by the introduction of storage rings in Frascati and followed by the largest accelerators worldwide: DESY, SLAC etc. The Breit-Wheeler process, 2γ→e+e-, although conceptually simple, being the inverse process of the Dirac one, has been by far one of the most difficult to be verified experimentally. Only recently, through the technology based on free electron X-ray laser and its numerous applications in Earth-based experiments, some first indications of its possible verification have been reached. The vacuum polarization process in strong electromagnetic field, pioneered by Sauter, Heisenberg, Euler and Schwinger, introduced the concept of the critical electric field E_c = m_e²c³/(eħ). It has been searched without success for more than forty years by heavy-ion collisions in many of the leading particle accelerators worldwide. The novel situation today is that these same processes can be studied on a much more grandiose scale during the gravitational collapse leading to the formation of a black hole being observed in Gamma Ray Bursts (GRBs). This report is dedicated to the scientific race. The theoretical and experimental work developed in Earth-based laboratories is confronted with the theoretical interpretation of space-based observations of phenomena originating on cosmological
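A back-of-the-envelope evaluation of the critical (Schwinger) field E_c = m_e²c³/(eħ) mentioned in the abstract above, in SI units; this is an illustrative check, not part of the original record.

```python
# Evaluate the critical electric field E_c = m_e^2 * c^3 / (e * hbar) in SI units.
m_e  = 9.109e-31    # electron mass [kg]
c    = 2.998e8      # speed of light [m/s]
e    = 1.602e-19    # elementary charge [C]
hbar = 1.055e-34    # reduced Planck constant [J s]

E_c = m_e**2 * c**3 / (e * hbar)   # critical electric field [V/m]
print(f"E_c ~ {E_c:.2e} V/m")      # about 1.3e18 V/m
```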
Black Holes and Exotic Spinors
J. M. Hoff da Silva
Exotic spin structures are non-trivial liftings of the orthogonal bundle to the spin bundle on orientable manifolds that admit spin structures according to the celebrated Geroch theorem. Exotic spin structures play a role of paramount importance in different areas of physics, from quantum field theory, in particular at Planck length scales, to gravity, and in cosmological scales. Here, we introduce an in-depth panorama of this field, providing black hole physics as the fount of spacetime exoticness. Black holes are then studied as the generators of a non-trivial topology that also can correspond to some inequivalent spin structure. Moreover, we investigate exotic spinor fields in this context and the way exotic spinor fields give rise to new physics. We also calculate the tunneling probability of exotic fermions across a Kerr-Sen black hole, showing that the exotic term does affect the tunneling probability, altering the black hole evaporation rate. Finally we show that it complies with the Hawking temperature universal law.
Black holes and quantum mechanics
Wilczek, Frank
1. Qualitative introduction to black holes: classical, quantum. 2. Model black holes and model collapse process: the Schwarzschild and Reissner-Nordstrom metrics, the Oppenheimer-Volkov collapse scenario. 3. Mode mixing. 4. From mode mixing to radiance.
Quantum aspects of black holes
Beginning with an overview of the theory of black holes by the editor, this book presents a collection of ten chapters by leading physicists dealing with the variety of quantum mechanical and quantum gravitational effects pertinent to black holes. The contributions address topics such as Hawking radiation, the thermodynamics of black holes, the information paradox and firewalls, Monsters, primordial black holes, self-gravitating Bose-Einstein condensates, the formation of small black holes in high energetic collisions of particles, minimal length effects in black holes and small black holes at the Large Hadron Collider. Viewed as a whole the collection provides stimulating reading for researchers and graduate students seeking a summary of the quantum features of black holes.
Aspects of hairy black holes
Anabalón, Andrés, E-mail: [email protected] [Departamento de Ciencias, Facultad de Artes Liberales y Facultad de Ingeniería y Ciencias, Universidad Adolfo Ibáñez, Viña del Mar (Chile); Astefanesei, Dumitru [Instituto de Física, Pontificia Universidad Católica de Valparaíso, Casilla 4059, Valparaíso (Chile)
We review the existence of exact hairy black holes in asymptotically flat, anti-de Sitter and de Sitter space-times. We briefly discuss the issue of stability and the charging of the black holes with a Maxwell field.
When Black Holes Collide
Among the fascinating phenomena predicted by General Relativity, Einstein's theory of gravity, black holes and gravitational waves are particularly important in astronomy. Though once viewed as a mathematical oddity, black holes are now recognized as the central engines of many of astronomy's most energetic cataclysms. Gravitational waves, though weakly interacting with ordinary matter, may be observed with new gravitational wave telescopes, opening a new window to the universe. These observations promise a direct view of the strong gravitational dynamics involving dense, often dark objects, such as black holes. The most powerful of these events may be the merger of two colliding black holes. Though dark, these mergers may briefly release more energy than all the stars in the visible universe, in gravitational waves. General relativity makes precise predictions for the gravitational-wave signatures of these events, predictions which we can now calculate with the aid of supercomputer simulations. These results provide a foundation for interpreting expected observations in the emerging field of gravitational wave astronomy.
Exploring hadron physics in black hole formations: A new promising target of neutrino astronomy
Nakazato, Ken'ichiro; Sumiyoshi, Kohsuke; Suzuki, Hideyuki; Yamada, Shoichi
The detection of neutrinos from massive stellar collapses can teach us a great deal not only about source objects but also about microphysics working deep inside them. In this study we discuss quantitatively the possibility to extract information on the properties of dense and hot hadronic matter from neutrino signals coming out of black-hole-forming collapses of nonrotational massive stars. Based on our detailed numerical simulations we evaluate the event numbers for SuperKamiokande, with neutrino oscillations fully taken into account. We demonstrate that the event numbers from a Galactic event are large enough not only to detect but also to distinguish one hadronic equation of state from another by our statistical method, assuming the same progenitor model and nonrotation. This means that the massive stellar collapse can be a unique probe into hadron physics and will be a promising target of the nascent neutrino astronomy.
A tensorial description of particle perception in black-hole physics
Barbado, Luis C.; Barceló, Carlos; Garay, Luis J.; Jannes, G.
In quantum field theory in curved backgrounds, one typically distinguishes between objective, tensorial quantities such as the renormalized stress-energy tensor (RSET) and subjective, nontensorial quantities such as Bogoliubov coefficients which encode perception effects associated with the specific trajectory of a detector. In this work, we propose a way to treat both objective and subjective notions on an equal tensorial footing. For that purpose, we define a new tensor which we will call the perception renormalized stress-energy tensor (PeRSET). The PeRSET is defined as the subtraction of the RSET corresponding to two different vacuum states. Based on this tensor, we can define perceived energy densities and fluxes. The PeRSET helps us to have a more organized and systematic understanding of various results in the literature regarding quantum field theory in black hole spacetimes. We illustrate the physics encoded in this tensor by working out various examples of special relevance.
Over spinning a black hole?
Bouhmadi-Lopez, Mariam; Cardoso, Vitor; Nerozzi, Andrea; Rocha, Jorge V, E-mail: [email protected], E-mail: [email protected], E-mail: [email protected], E-mail: [email protected] [CENTRA, Department de Fisica, Instituto Superior Tecnico, Av. Rovisco Pais 1, 1049 Lisboa (Portugal)
A possible process to destroy a black hole consists of throwing point particles with sufficiently large angular momentum into the black hole. In the case of Kerr black holes, it was shown by Wald that particles with dangerously large angular momentum are simply not captured by the hole, and thus the event horizon is not destroyed. Here we reconsider this gedanken experiment for black holes in higher dimensions. We show that this particular way of destroying a black hole does not succeed and that Cosmic Censorship is preserved.
Black-hole astrophysics
Bender, P. [Univ. of Colorado, Boulder, CO (United States)]; Bloom, E. [Stanford Linear Accelerator Center, Menlo Park, CA (United States)]; Cominsky, L. [Sonoma State Univ., Rohnert Park, CA (United States). Dept. of Physics and Astronomy]; and others
Black-hole astrophysics is not just the investigation of yet another, even if extremely remarkable type of celestial body, but a test of the correctness of the understanding of the very properties of space and time in very strong gravitational fields. Physicists' excitement at this new prospect for testing theories of fundamental processes is matched by that of astronomers at the possibility to discover and study a new and dramatically different kind of astronomical object. Here the authors review the currently known ways that black holes can be identified by their effects on their neighborhood--since, of course, the hole itself does not yield any direct evidence of its existence or information about its properties. The two most important empirical considerations are determination of masses, or lower limits thereof, of unseen companions in binary star systems, and measurement of luminosity fluctuations on very short time scales.
Braneworld black holes and entropy bounds
Y. Heydarzade
Bousso's D-bound entropy for the various possible black hole solutions on a 4-dimensional brane is checked. It is found that the D-bound entropy here is apparently different from that obtained for the 4-dimensional black hole solutions. This difference is interpreted as the extra loss of information, associated to the extra dimension, when an extra-dimensional black hole is moved outward the observer's cosmological horizon. Also, it is discussed that the N-bound entropy holds for the possible solutions here. Finally, by adopting the recent Bohr-like approach to black hole quantum physics for the excited black holes, the obtained results are written also in terms of the black hole excited states.
Information Retention by Stringy Black Holes
Ellis, John
Building upon our previous work on two-dimensional stringy black holes and its extension to spherically-symmetric four-dimensional stringy black holes, we show how the latter retain information. A key role is played by an infinite-dimensional $W_\infty$ symmetry that preserves the area of an isolated black-hole horizon and hence its entropy. The exactly-marginal conformal world-sheet operator representing a massless stringy particle interacting with the black hole necessarily includes a contribution from $W_\infty$ generators in its vertex function. This admixture manifests the transfer of information between the string black hole and external particles. We discuss different manifestations of $W_\infty$ symmetry in black-hole physics and the connections between them.
Seeding black holes in cosmological simulations
Taylor, P.; Kobayashi, C.
We present a new model for the formation of black holes in cosmological simulations, motivated by the first star formation. Black holes form from high density peaks of primordial gas, and grow via both gas accretion and mergers. Massive black holes heat the surrounding material, suppressing star formation at the centres of galaxies, and driving galactic winds. We perform an investigation into the physical effects of the model parameters, and obtain a `best' set of these parameters by comparing the outcome of simulations to observations. With this best set, we successfully reproduce the cosmic star formation rate history, black hole mass-velocity dispersion relation, and the size-velocity dispersion relation of galaxies. The black hole seed mass is ~10^3 M⊙, which is orders of magnitude smaller than that which has been used in previous cosmological simulations with active galactic nuclei, but suggests that the origin of the seed black holes is the death of Population III stars.
Black hole evaporation in conformal gravity
Bambi, Cosimo; Rachwał, Lesław [Center for Field Theory and Particle Physics and Department of Physics, Fudan University, 220 Handan Road, 200433 Shanghai (China); Modesto, Leonardo [Department of Physics, Southern University of Science and Technology, 1088 Xueyuan Road, Shenzhen 518055 (China); Porey, Shiladitya, E-mail: [email protected], E-mail: [email protected], E-mail: [email protected], E-mail: [email protected] [Department of Physics, Indian Institute of Technology, 208016 Kanpur (India)
We study the formation and the evaporation of a spherically symmetric black hole in conformal gravity. From the collapse of a spherically symmetric thin shell of radiation, we find a singularity-free non-rotating black hole. This black hole has the same Hawking temperature as a Schwarzschild black hole with the same mass, and it completely evaporates either in a finite or in an infinite time, depending on the ensemble. We consider the analysis both in the canonical and in the micro-canonical statistical ensembles. Last, we discuss the corresponding Penrose diagram of this physical process.
Particle accelerators inside spinning black holes.
Lake, Kayll
On the basis of the Kerr metric as a model for a spinning black hole accreting test particles from rest at infinity, I show that the center-of-mass energy for a pair of colliding particles is generically divergent at the inner horizon. This shows not only that classical black holes are internally unstable, but also that Planck-scale physics is a characteristic feature within black holes at scales much larger than the Planck length. The novel feature of the divergence discussed here is that the phenomenon is present only for black holes with rotation, and in this sense it is distinct from the well-known Cauchy horizon instability.
Warped products and black holes
Hong, Soon-Tae
We apply the warped product space-time scheme to the Banados-Teitelboim-Zanelli black holes and the Reissner-Nordstroem-anti-de Sitter black hole to investigate their interior solutions in terms of warped products. It is shown that there exist no discontinuities of the Ricci and Einstein curvatures across event horizons of these black holes
Magnetohydrodynamics near a black hole
Wilson, J.R.
A numerical computer study of hydromagnetic flow near a black hole is presented. First, the equations of motion are developed to a form suitable for numerical computations. Second, the results of calculations describing the magnetic torques exerted by a rotating black hole on a surrounding magnetic plasma and the electric charge that is induced on the surface of the black hole are presented. (auth)
Black Hole Universe Model and Dark Energy
Zhang, Tianxi
Considering the black hole as spacetime and slightly modifying the big bang theory, the author has recently developed a new cosmological model called the black hole universe, which is consistent with Mach's principle and Einsteinian general relativity and self-consistently explains various observations of the universe without difficulties. According to this model, the universe originated from a hot star-like black hole and gradually grew through a supermassive black hole to the present universe by accreting ambient material and merging with other black holes. The entire space is infinitely and hierarchically layered and evolves iteratively. The innermost three layers are the universe in which we live, the outside space called the mother universe, and the inside star-like and supermassive black holes called child universes. The outermost layer has an infinite radius and zero limits for both the mass density and absolute temperature. All layers or universes are governed by the same physics, the Einstein general relativity with the Robertson-Walker metric of spacetime, and tend to expand outward physically. When one universe expands out, a new similar universe grows up from its inside black holes. The origin, structure, evolution, expansion, and cosmic microwave background radiation of the black hole universe have been presented in the recent sequence of American Astronomical Society (AAS) meetings and published in peer-reviewed journals. This study will show how this new model explains the acceleration of the universe and why dark energy is not required. We will also compare the black hole universe model with the big bang cosmology.
Physical process version of the first law of thermodynamics for black holes in Einstein-Maxwell axion-dilaton gravity
Rogatko, Marek [Institute of Physics, Maria Curie-Sklodowska University, 20-031 Lublin (Poland)
We derive general formulae for the first-order variation of the ADM mass and angular momentum for linear perturbations of a stationary background in Einstein-Maxwell axion-dilaton gravity, which is the low-energy limit of the heterotic string theory. All these variations were expressed in terms of the perturbed matter energy-momentum tensor and the perturbed charge current density. Combining these expressions, we arrive at the physical process version of the first law of black-hole dynamics for stationary black holes in the considered theory, which provides strong support for the cosmic censorship hypothesis.
From binary black hole simulation to triple black hole simulation
Bai Shan; Cao Zhoujian; Han, Wen-Biao; Lin, Chun-Yu; Yo, Hwei-Jang; Yu, Jui-Ping
Black hole systems are among the most promising sources for a gravitational wave detection project. Now, China is planning to construct a space-based laser interferometric detector as a follow-on mission of LISA in the near future. Aiming to provide some theoretical support to this detection project on the numerical relativity side, we focus on black hole systems simulation in this work. Considering globular clusters, multiple black hole systems are also likely to exist in our universe and to play a role as sources for the gravitational wave detector we are considering. We will give a progress report in this paper on our black hole system simulation. More specifically, we will present triple black hole simulation together with binary black hole simulation. For triple black hole simulations, a novel perturbational method is proposed.
Black Hole - Neutron Star Binary Mergers
National Aeronautics and Space Administration — Gravitational radiation waveforms for black hole-neutron star coalescence calculations. The physical input is Newtonian physics, an ideal gas equation of state with...
High energy colliders as black hole factories: The end of short distance physics
Giddings, Steven B.; Thomas, Scott
If the fundamental Planck scale is of order of a TeV, as is the case in some extra-dimension scenarios, future hadron colliders such as the CERN Large Hadron Collider will be black hole factories. The nonperturbative process of black hole formation and decay by Hawking evaporation gives rise to spectacular events with up to many dozens of relatively hard jets and leptons with a characteristic ratio of hadronic to leptonic activity of roughly 5:1. The total transverse energy of such events is typically a sizable fraction of the beam energy. Perturbative hard scattering processes at energies well above the Planck scale are cloaked behind a horizon, thus limiting the ability to probe short distances. The high energy black hole cross section grows with energy at a rate determined by the dimensionality and geometry of the extra dimensions. This dependence therefore probes the extra dimensions at distances larger than the Planck scale
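A hedged sketch of the geometric picture described above: the parton-level cross section is taken as sigma ~ pi*r_h^2, with the D-dimensional horizon radius scaling as r_h ~ (1/M_D)*(sqrt(s)/M_D)^(1/(D-3)). Dimension-dependent O(1) prefactors are deliberately dropped and the benchmark M_D = 1 TeV is an assumption, so the numbers are indicative only.

```python
# Illustrative estimate of the black-hole production cross section sigma ~ pi*r_h^2,
# using only the scaling r_h ~ (1/M_D)*(sqrt_s/M_D)**(1/(D-3)) of the D-dimensional
# horizon radius (natural units); O(1) geometric prefactors are intentionally omitted
# and M_D = 1 TeV is an assumed benchmark, not a value from the paper.
import math

GEV_M2_TO_MB = 0.3894   # conversion factor: 1 GeV^-2 = 0.3894 mb

def bh_cross_section_mb(sqrt_s_gev, m_d_gev=1000.0, n_extra=2):
    D = 4 + n_extra                                            # total spacetime dimension
    r_h = (sqrt_s_gev / m_d_gev) ** (1.0 / (D - 3)) / m_d_gev  # horizon radius [GeV^-1]
    return math.pi * r_h**2 * GEV_M2_TO_MB                     # cross section [mb]

for n in (2, 4, 6):
    sigma = bh_cross_section_mb(sqrt_s_gev=5000.0, n_extra=n)
    print(f"n_extra = {n}: sigma(sqrt(s) = 5 TeV) ~ {sigma:.2e} mb")
```

The growth of sigma with sqrt(s) is steeper for fewer extra dimensions, which is the dimensionality dependence the abstract alludes to.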
Physics of the interior of a black hole with an exotic scalar matter
Doroshkevich, Andrey; Shatskiy, Alexander; Hansen, Jakob; Novikov, Dmitriy; Novikov, Igor; Park, Dong-Ho
We use a numerical code to consider the nonlinear processes arising when a Reissner-Nordstroem black hole is irradiated by an exotic scalar field (modeled as a free massless scalar field with an opposite sign for its energy-momentum tensor). These processes are quite different from the processes arising in the case of the same black hole being irradiated by a pulse of a normal scalar field. In our case, we did not observe the creation of a spacelike strong singularity in the T region of the space-time. We investigate the antifocusing effects in the gravity field of the exotic scalar field with the negative energy density and the evolution of the mass function. We demonstrate the process of the vanishing of the black hole when it is irradiated by a strong pulse of an exotic scalar field.
Magnonic Black Holes.
Roldán-Molina, A; Nunez, Alvaro S; Duine, R A
We show that the interaction between the spin-polarized current and the magnetization dynamics can be used to implement black-hole and white-hole horizons for magnons-the quanta of oscillations in the magnetization direction in magnets. We consider three different systems: easy-plane ferromagnetic metals, isotropic antiferromagnetic metals, and easy-plane magnetic insulators. Based on available experimental data, we estimate that the Hawking temperature can be as large as 1 K. We comment on the implications of magnonic horizons for spin-wave scattering and transport experiments, and for magnon entanglement.
Statistical mechanics of black holes
Harms, B.; Leblanc, Y.
We analyze the statistical mechanics of a gas of neutral and charged black holes. The microcanonical ensemble is the only possible approach to this system, and the equilibrium configuration is the one for which most of the energy is carried by a single black hole. Schwarzschild black holes are found to obey the statistical bootstrap condition. In all cases, the microcanonical temperature is identical to the Hawking temperature of the most massive black hole in the gas. U(1) charges in general break the bootstrap property. The problems of black-hole decay and of quantum coherence are also addressed
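For orientation, a small sketch of the standard Hawking temperature T_H = ħc³/(8πGMk_B), which the abstract above identifies with the microcanonical temperature set by the most massive hole in the gas; the masses used are arbitrary examples, not values from the paper.

```python
# Hawking temperature T_H = hbar*c^3/(8*pi*G*M*k_B) for a few example masses (SI units).
import math

G, c, hbar, k_B = 6.674e-11, 2.998e8, 1.055e-34, 1.381e-23
M_SUN = 1.989e30  # solar mass [kg]

def hawking_temperature_K(mass_kg):
    return hbar * c**3 / (8.0 * math.pi * G * mass_kg * k_B)

for m in (1.0 * M_SUN, 10.0 * M_SUN, 1e6 * M_SUN):   # example masses only
    print(f"M = {m/M_SUN:9.1e} M_sun  ->  T_H ~ {hawking_temperature_K(m):.2e} K")
```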
Internal structure of black holes
Cvetic, Mirjam
We review recent progress that sheds light on the internal structure of general black holes. We first summarize properties of general multi-charged rotating black holes both in four and five dimensions. We show that the asymptotic boundary conditions of these general asymptotically flat black holes can be modified such that a conformal symmetry emerges. These subtracted geometries preserve the thermodynamic properties of the original black holes and are of the Lifshitz type, thus describing 'a black hole in the asymptotically conical box'. Recent efforts employ solution generating techniques to construct interpolating geometries between the original black hole and their subtracted geometries. Upon lift to one dimension higher, these geometries become AdS_3 times a sphere, and thus provide a microscopic interpretation of the black hole entropy in terms of a dual two-dimensional conformal field theory. (author)
Micro black holes and the democratic transition
Dvali, Gia; Pujolas, Oriol
Unitarity implies that the evaporation of microscopic quasiclassical black holes cannot be universal in different particle species. This creates a puzzle, since it conflicts with the thermal nature of quasiclassical black holes, according to which all of the species should see the same horizon and be produced with the same Hawking temperatures. We resolve this puzzle by showing that for the microscopic black holes, on top of the usual quantum evaporation time, there is a new time scale which characterizes a purely classical process during which the black hole loses the ability to differentiate among the species and becomes democratic. We demonstrate this phenomenon in a well-understood framework of large extra dimensions, with a number of parallel branes. An initially nondemocratic black hole is the one localized on one of the branes, with its high-dimensional Schwarzschild radius being much shorter than the interbrane distance. Such a black hole seemingly cannot evaporate into the species localized on the other branes that are beyond its reach. We demonstrate that in reality the system evolves classically in time, in such a way that the black hole accretes the neighboring branes. The end result is a completely democratic static configuration, in which all of the branes share the same black hole and all of the species are produced with the same Hawking temperature. Thus, just like their macroscopic counterparts, the microscopic black holes are universal bridges to the hidden sector physics.
Black Holes and Firewalls
Polchinski, Joseph
Our modern understanding of space, time, matter, and even reality itself arose from the three great revolutions of the early twentieth century: special relativity, general relativity, and quantum mechanics. But a century later, this work is unfinished. Many deep connections have been discovered, but the full form of a unified theory incorporating all three principles is not known. Thought experiments and paradoxes have often played a key role in figuring out how to fit theories together. For the unification of general relativity and quantum mechanics, black holes have been an important arena. I will talk about the quantum mechanics of black holes, the information paradox, and the latest version of this paradox, the firewall. The firewall points to a conflict between our current theories of spacetime and of quantum mechanics. It may lead to a new understanding of how these are connected, perhaps based on quantum entanglement.
Black Holes in Higher Dimensions
Reall, Harvey S.
We review black-hole solutions of higher-dimensional vacuum gravity and higher-dimensional supergravity theories. The discussion of vacuum gravity is pedagogical, with detailed reviews of Myers–Perry solutions, black rings, and solution-generating techniques. We discuss black-hole solutions of maximal supergravity theories, including black holes in anti-de Sitter space. General results and open problems are discussed throughout.
Shaping Globular Clusters with Black Holes
Kohler, Susanna
How many black holes lurk within the dense environments of globular clusters, and how do these powerful objects shape the properties of the cluster around them? One such cluster, NGC 3201, is now helping us to answer these questions.
Hunting Stellar-Mass Black Holes
Since the detection of merging black-hole binaries by the Laser Interferometer Gravitational-Wave Observatory (LIGO), the dense environments of globular clusters have received increasing attention as potential birthplaces of these compact binary systems. [Figure caption: The central region of the globular star cluster NGC 3201, as viewed by Hubble. The black hole is in orbit with the star marked by the blue circle. NASA/ESA] In addition, more and more stellar-mass black-hole candidates have been observed within globular clusters, lurking in binary pairs with luminous, non-compact companions. The most recent of these detections, found in the globular cluster NGC 3201, stands alone as the first stellar-mass black hole candidate discovered via radial velocity observations: the black hole's main-sequence companion gave away its presence via a telltale wobble. Now a team of scientists led by Kyle Kremer (CIERA and Northwestern University) is using models of this system to better understand the impact that black holes might have on their host clusters.
A Model Cluster
The relationship between black holes and their host clusters is complicated. Though the cluster environment can determine the dynamical evolution of the black holes, the retention rate of black holes in a globular cluster (i.e., how many remain in the cluster when they are born in supernovae, rather than being kicked out during the explosion) influences how the host cluster evolves. Kremer and collaborators track this complex relationship by modeling the evolution of a cluster similar to NGC 3201 with a Monte Carlo code. The code incorporates physics relevant to the evolution of black holes and black-hole binaries in globular clusters, such as two-body relaxation
Black Hole Area Quantization rule from Black Hole Mass Fluctuations
Schiffer, Marcelo
We calculate the black hole mass distribution function that follows from the random emission of quanta by Hawking radiation and with this function we calculate the black hole mass fluctuation. From a completely different perspective we regard the black hole as a quantum mechanical system with a quantized event horizon area and transition probabilities among the various energy levels and then calculate the mass dispersion. It turns out that there is a perfect agreement between the statistical and ...
Black holes and holography
Mathur, Samir D
The idea of holography in gravity arose from the fact that the entropy of black holes is given by their surface area. The holography encountered in gauge/gravity duality has no such relation however; the boundary surface can be placed at an arbitrary location in AdS space and its area does not give the entropy of the bulk. The essential issues are also different between the two cases: in black holes we get Hawking radiation from the 'holographic surface' which leads to the information issue, while in gauge/gravity duality there is no such radiation. To resolve the information paradox we need to show that there are real degrees of freedom at the horizon of the hole; this is achieved by the fuzzball construction. In gauge/gravity duality we have instead a field theory defined on an abstract dual space; there are no gravitational degrees of freedom at the holographic boundary. It is important to understand the relations and differences between these two notions of holography to get a full understanding of the lessons from the information paradox.
EVIDENCE FOR LOW BLACK HOLE SPIN AND PHYSICALLY MOTIVATED ACCRETION MODELS FROM MILLIMETER-VLBI OBSERVATIONS OF SAGITTARIUS A*
Broderick, Avery E [Canadian Institute for Theoretical Astrophysics, 60 St. George Street, Toronto, ON M5S 3H8 (Canada); Fish, Vincent L; Doeleman, Sheperd S [Massachusetts Institute of Technology, Haystack Observatory, Route 40, Westford, MA 01886 (United States); Loeb, Abraham [Institute for Theory and Computation, Harvard University, Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States)
Millimeter very long baseline interferometry (mm-VLBI) provides the novel capacity to probe the emission region of a handful of supermassive black holes on sub-horizon scales. For Sagittarius A* (Sgr A*), the supermassive black hole at the center of the Milky Way, this provides access to the region in the immediate vicinity of the horizon. Broderick et al. have already shown that by leveraging spectral and polarization information as well as accretion theory, it is possible to extract accretion-model parameters (including black hole spin) from mm-VLBI experiments containing only a handful of telescopes. Here we repeat this analysis with the most recent mm-VLBI data, considering a class of aligned, radiatively inefficient accretion flow (RIAF) models. We find that the combined data set rules out symmetric models for Sgr A*'s flux distribution at the 3.9σ level, strongly favoring length-to-width ratios of roughly 2.4:1. More importantly, we find that physically motivated accretion flow models provide a significantly better fit to the mm-VLBI observations than phenomenological models, at the 2.9σ level. This implies that not only is mm-VLBI presently capable of distinguishing between potential physical models for Sgr A*'s emission, but further that it is sensitive to the strong gravitational lensing associated with the propagation of photons near the black hole. Based upon this analysis we find that the most probable magnitude, viewing angle, and position angle for the black hole spin are a = 0.0^{+0.64,+0.86}, θ = 68°^{+5°,+9°}_{-20°,-28°}, and ξ = -52°^{+17°,+33°}_{-15°,-24°} east of north, where the errors quoted are the 1σ and 2σ uncertainties.
STU black holes and string triality
Behrndt, K.; Kallosh, R.; Rahmfeld, J.; Shmakova, M.; Wong, W.K.
We find double-extreme black holes associated with the special geometry of the Calabi-Yau moduli space with the prepotential F=STU. The area formula is STU-moduli independent and has [SL(2,Z)]^3 symmetry in the space of charges. The dual version of this theory without a prepotential treats the dilaton S asymmetrically versus the T, U moduli. We display the dual relation between new (STU) black holes and stringy (S|TU) black holes using a particular Sp(8,Z) transformation. The area formula of one theory equals that of the dual theory when expressed in terms of dual charges. We analyze the relation of (STU) black holes to the string triality of black holes: (S|TU), (T|US), (U|ST) solutions. In the democratic STU-symmetric version we find that all three S, T, and U duality symmetries are nonperturbative and mix electric and magnetic charges.
Black hole thermodynamics with conical defects
Appels, Michael [Centre for Particle Theory, Durham University,South Road, Durham, DH1 3LE (United Kingdom); Gregory, Ruth [Centre for Particle Theory, Durham University,South Road, Durham, DH1 3LE (United Kingdom); Perimeter Institute,31 Caroline Street North, Waterloo, ON, N2L 2Y5 (Canada); Kubiznák, David [Perimeter Institute,31 Caroline Street North, Waterloo, ON, N2L 2Y5 (Canada)
Recently we have shown https://www.doi.org/10.1103/PhysRevLett.117.131303 how to formulate a thermodynamic first law for a single (charged) accelerated black hole in AdS space by fixing the conical deficit angles present in the spacetime. Here we show how to generalise this result, formulating thermodynamics for black holes with varying conical deficits. We derive a new potential for the varying tension defects: the thermodynamic length, both for accelerating and static black holes. We discuss possible physical processes in which the tension of a string ending on a black hole might vary, and also map out the thermodynamic phase space of accelerating black holes and explore their critical phenomena.
Quantum black holes and Planck's constant
Ross, D.K.
It is shown that the Planck-scale black holes of quantum gravity must obey a consistency condition relating Planck's constant to the integral of the mass of the black holes over time, if the usual path integral formulation of quantum mechanics is to make sense on physical spacetime. It is also shown, using time-dependent perturbation theory in ordinary quantum mechanics, that a massless particle will not propagate on physical spacetime with the black holes present unless the same condition is met. (author)
Cosmic strings and black holes
Aryal, M.; Ford, L.H.; Vilenkin, A.
The metric for a Schwarzschild black hole with a cosmic string passing through it is discussed. The thermodynamics of such an object is considered, and it is shown that S = (1/4)A, where S is the entropy and A is the horizon area. It is noted that the Schwarzschild mass parameter M, which is the gravitational mass of the system, is no longer identical to its energy. A solution representing a pair of black holes held apart by strings is discussed. It is nearly identical to a static, axially symmetric solution given long ago by Bach and Weyl. It is shown how these solutions, which were formerly a mathematical curiosity, may be given a more physical interpretation in terms of cosmic strings
Symmetries of supergravity black holes
Chow, David D K
We investigate Killing tensors for various black hole solutions of supergravity theories. Rotating black holes of an ungauged theory, toroidally compactified heterotic supergravity, with NUT parameters and two U(1) gauge fields are constructed. If both charges are set equal, then the solutions simplify, and then there are concise expressions for rank-2 conformal Killing-Staeckel tensors. These are induced by rank-2 Killing-Staeckel tensors of a conformally related metric that possesses a separability structure. We directly verify the separation of the Hamilton-Jacobi equation on this conformally related metric and of the null Hamilton-Jacobi and massless Klein-Gordon equations on the 'physical' metric. Similar results are found for more general solutions; we mainly focus on those with certain charge combinations equal in gauged supergravity but also consider some other solutions.
Black hole with quantum potential
Ali, Ahmed Farag, E-mail: [email protected] [Department of Physics, Faculty of Science, Benha University, Benha 13518 (Egypt); Khalil, Mohammed M., E-mail: [email protected] [Department of Electrical Engineering, Alexandria University, Alexandria 12544 (Egypt)
In this work, we investigate black hole (BH) physics in the context of quantum corrections. These quantum corrections were introduced recently by replacing classical geodesics with quantal (Bohmian) trajectories and hence form a quantum Raychaudhuri equation (QRE). From the QRE, we derive a modified Schwarzschild metric, and use that metric to investigate BH singularity and thermodynamics. We find that these quantum corrections change the picture of Hawking radiation greatly when the size of the BH approaches the Planck scale. They prevent the BH from total evaporation, predicting the existence of a quantum BH remnant, which may introduce a possible resolution for the catastrophic behavior of Hawking radiation as the BH mass approaches zero. Those corrections also turn the spacelike singularity of the black hole into a timelike one, and hence this may ameliorate the information loss problem.
Quantum effects in black holes
A strict definition of black holes is presented and some properties related to their mass are enumerated. The Hawking quantum effect, i.e. the vacuum instability in the black hole gravitational field, as a result of which the black hole radiates as a heated body, is analyzed. It is shown that in order to obtain results on the black hole radiation it is sufficient to specify the in-vacuum state at a moment in the past when the collapsing body still has a large size and its gravitational field can be neglected. The causes and the location of particle production by the black hole, and also the space-time inside the black hole, are considered.
Particle creation by black holes
Hawking, S.W.
In the classical theory black holes can only absorb and not emit particles. However it is shown that quantum mechanical effects cause black holes to create and emit particles. This thermal emission leads to a slow decrease in the mass of the black hole and to its eventual disappearance: any primordial black hole of mass less than about 10^15 g would have evaporated by now. Although these quantum effects violate the classical law that the area of the event horizon of a black hole cannot decrease, there remains a Generalized Second Law: S + (1/4)A never decreases, where S is the entropy of matter outside black holes and A is the sum of the surface areas of the event horizons. This shows that gravitational collapse converts the baryons and leptons in the collapsing body into entropy. It is tempting to speculate that this might be the reason why the Universe contains so much entropy per baryon. (orig.)
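[Illustrative numerical sketch, not Hawking's own calculation: a rough check of the ~10^15 g figure quoted above using the photons-only lifetime estimate t ≈ 5120 π G² M³/(ħ c⁴); the value of t_universe and the choice M = 10^15 g are the only inputs. Including all emitted particle species shortens the lifetime and pushes the critical initial mass down to a few ×10^14 g, i.e. the same order of magnitude.]

    import math

    G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
    hbar = 1.055e-34      # reduced Planck constant, J s
    c = 2.998e8           # speed of light, m s^-1
    t_universe = 4.35e17  # age of the universe, s (~13.8 Gyr)

    def lifetime(M_kg):
        """Photons-only Hawking evaporation time, t = 5120*pi*G^2*M^3/(hbar*c^4)."""
        return 5120.0 * math.pi * G**2 * M_kg**3 / (hbar * c**4)

    M = 1e12  # kg, i.e. 10^15 g
    t = lifetime(M)
    print(f"t = {t:.2e} s, about {t / t_universe:.0f} times the age of the universe (photons only)")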
Black holes and groups of type E7
Supergravity; groups of type E7; black holes; quantum field theory. ... representation are reviewed, along with a connection between special Kähler geometry and a 'generalization' of groups of type E7.
Do stringy corrections stabilize colored black holes?
Kanti, P.; Winstanley, E.
We consider hairy black hole solutions of Einstein-Yang-Mills-dilaton theory, coupled to a Gauss-Bonnet curvature term, and we study their stability under small, spacetime-dependent perturbations. We demonstrate that stringy corrections do not remove the sphaleronic instabilities of colored black holes with the number of unstable modes being equal to the number of nodes of the background gauge function. In the gravitational sector and in the limit of an infinitely large horizon, colored black holes are also found to be unstable. Similar behavior is exhibited by magnetically charged black holes while the bulk of neutral black holes are proved to be stable under small, gauge-dependent perturbations. Finally, electrically charged black holes are found to be characterized only by the existence of a gravitational sector of perturbations. As in the case of neutral black holes, we demonstrate that for the bulk of electrically charged black holes no unstable modes arise in this sector. (c) 2000 The American Physical Society
Thermal BEC Black Holes
Roberto Casadio
We review some features of Bose–Einstein condensate (BEC) models of black holes obtained by means of the horizon wave function formalism. We consider the Klein–Gordon equation for a toy graviton field coupled to a static matter current in a spherically-symmetric setup. The classical field reproduces the Newtonian potential generated by the matter source, while the corresponding quantum state is given by a coherent superposition of scalar modes with a continuous occupation number. An attractive self-interaction is needed for bound states to form, in which case one finds that (approximately) one mode is allowed, and the system of N bosons can be self-confined in a volume of the size of the Schwarzschild radius. The horizon wave function formalism is then used to show that the radius of such a system corresponds to a proper horizon. The uncertainty in the size of the horizon is related to the typical energy of Hawking modes: it decreases as the black hole mass (i.e. the number of gravitons) increases, in agreement with the semiclassical calculations, a result which does not hold for a single very massive particle. The spectrum of these systems has two components: a discrete ground state of energy m (the bosons forming the black hole) and a continuous spectrum with energy ω > m (representing the Hawking radiation and modeled with a Planckian distribution at the expected Hawking temperature). Assuming the main effect of the internal scatterings is the Hawking radiation, the N-particle state can be collectively described by a single-particle wave-function given by a superposition of a total ground state with energy M = Nm and a Planckian distribution for E > M at the same Hawking temperature. This can be used to compute the partition function and to find the usual area law for the entropy, with a logarithmic correction related to the Hawking component. The backreaction of modes with ω > m is also shown to reduce
Statistical black-hole thermodynamics
Traditional methods from statistical thermodynamics, with appropriate modifications, are used to study several problems in black-hole thermodynamics. Jaynes's maximum-uncertainty method for computing probabilities is used to show that the earlier-formulated generalized second law is respected in statistically averaged form in the process of spontaneous radiation by a Kerr black hole discovered by Hawking, and also in the case of a Schwarzschild hole immersed in a bath of black-body radiation, however cold. The generalized second law is used to motivate a maximum-entropy principle for determining the equilibrium probability distribution for a system containing a black hole. As an application we derive the distribution for the radiation in equilibrium with a Kerr hole (it is found to agree with what would be expected from Hawking's results) and the form of the associated distribution among Kerr black-hole solution states of definite mass. The same results are shown to follow from a statistical interpretation of the concept of black-hole entropy as the natural logarithm of the number of possible interior configurations that are compatible with the given exterior black-hole state. We also formulate a Jaynes-type maximum-uncertainty principle for black holes, and apply it to obtain the probability distribution among Kerr solution states for an isolated radiating Kerr hole
Acceleration of black hole universe
Zhang, T. X.; Frederick, C.
Recently, Zhang slightly modified the standard big bang theory and developed a new cosmological model called the black hole universe, which is consistent with Mach's principle, governed by Einstein's general theory of relativity, and able to explain all observations of the universe. Previous studies accounted for the origin, structure, evolution, expansion, and cosmic microwave background radiation of the black hole universe, which grew from a star-like black hole with several solar masses through a supermassive black hole with billions of solar masses to the present state with hundred billion-trillions of solar masses by accreting ambient matter and merging with other black holes. This paper investigates acceleration of the black hole universe and provides an alternative explanation for the redshift and luminosity distance measurements of type Ia supernovae. The results indicate that the black hole universe accelerates its expansion when it accretes the ambient matter at an increasing rate, in other words, when the second-order derivative of the mass of the black hole universe with respect to time is positive. For a constant deceleration parameter, we can explain the type Ia supernova measurements with the reduced chi-square very close to unity, χ²_red ≈ 1.0012. The expansion and acceleration of the black hole universe are driven by external energy.
On black hole horizon fluctuations
Tuchin, K.L.
A study of the high-angular-momentum particle 'atmosphere' near the Schwarzschild black hole horizon suggested that strong gravitational interactions occur at an invariant distance of the order of M^(1/3) [2]. We present a generalization of this result to the Kerr-Newman black hole case. It is shown that the larger the charge and angular momentum the black hole bears, the larger the invariant distance at which strong gravitational interactions occur. This invariant distance is of order (r_+^2/(r_+ - r_-))^(1/3). This implies that the Planckian structure of the Hawking radiation of extreme black holes is completely broken.
Thermodynamics of Accelerating Black Holes.
Appels, Michael; Gregory, Ruth; Kubizňák, David
We address a long-standing problem of describing the thermodynamics of an accelerating black hole. We derive a standard first law of black hole thermodynamics, with the usual identification of entropy proportional to the area of the event horizon-even though the event horizon contains a conical singularity. This result not only extends the applicability of black hole thermodynamics to realms previously not anticipated, it also opens a possibility for studying novel properties of an important class of exact radiative solutions of Einstein equations describing accelerated objects. We discuss the thermodynamic volume, stability, and phase structure of these black holes.
Glory scattering by black holes
Matzner, R.A.; DeWitte-Morette, C.; Nelson, B.; Zhang, T.
We present a physically motivated derivation of the JWKB backward glory-scattering cross section of massless waves by Schwarzschild black holes. The angular dependence of the cross section is identical with the one derived by path integration, namely, dsigma/dΩ = 4π² lambda⁻¹ B_g² |dB/dtheta|_(theta=π) J_(2s)²(2π lambda⁻¹ B_g sin theta), where lambda is the wavelength, B(theta) is the inverse of the classical deflection function Theta(B), B_g is the glory impact parameter, s is the helicity of the scattered wave, and J_(2s) is the Bessel function of order 2s. The glory rings formed by scalar waves are bright at the center; those formed by polarized waves are dark at the center. For scattering of massless particles by a spherical black hole of mass M, B(theta)/M ≈ 3√3 + 3.48 exp(-theta), theta ≳ π. The numerical values of dsigma/dΩ for this deflection function are found to agree with earlier computer calculations of glory cross sections from black holes.
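[Illustrative sketch, not from the paper itself: the backward glory profile evaluated with the formula and the deflection-function fit quoted above, in units G = c = 1 with the wavelength lambda measured in units of M (these unit choices are assumptions). Setting s = 0 reproduces the bright central spot for scalar waves; s = 1 or 2 (dark center) would use J_2 or J_4.]

    import numpy as np
    from scipy.special import jv

    M = 1.0     # black hole mass (G = c = 1)
    lam = 1.0   # wavelength of the massless wave, in units of M
    s = 0       # helicity: 0 scalar, 1 photon, 2 graviton

    # Fit quoted in the abstract: B(theta)/M ~ 3*sqrt(3) + 3.48*exp(-theta) for theta >~ pi
    B_g = M * (3*np.sqrt(3) + 3.48*np.exp(-np.pi))   # glory impact parameter B(pi)
    dB_dtheta = M * 3.48*np.exp(-np.pi)              # |dB/dtheta| at theta = pi

    theta = np.linspace(0.9*np.pi, np.pi, 200)       # angles near the backward direction
    dsigma_dOmega = (4*np.pi**2/lam) * B_g**2 * dB_dtheta * jv(2*s, 2*np.pi*B_g*np.sin(theta)/lam)**2

    print(f"B_g = {B_g:.3f} M; dsigma/dOmega at theta = pi is {dsigma_dOmega[-1]:.1f} (nonzero only for s = 0)")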
Soft Hair on Black Holes
Hawking, Stephen W.; Perry, Malcolm J.; Strominger, Andrew
It has recently been shown that Bondi-van der Burg-Metzner-Sachs supertranslation symmetries imply an infinite number of conservation laws for all gravitational theories in asymptotically Minkowskian spacetimes. These laws require black holes to carry a large amount of soft (i.e., zero-energy) supertranslation hair. The presence of a Maxwell field similarly implies soft electric hair. This Letter gives an explicit description of soft hair in terms of soft gravitons or photons on the black hole horizon, and shows that complete information about their quantum state is stored on a holographic plate at the future boundary of the horizon. Charge conservation is used to give an infinite number of exact relations between the evaporation products of black holes which have different soft hair but are otherwise identical. It is further argued that soft hair which is spatially localized to much less than a Planck length cannot be excited in a physically realizable process, giving an effective number of soft degrees of freedom proportional to the horizon area in Planck units.
Simulations of black holes in compactified spacetimes
Zilhao, Miguel; Herdeiro, Carlos [Centro de Fisica do Porto, Departamento de Fisica e Astronomia, Faculdade de Ciencias da Universidade do Porto, Rua do Campo Alegre, 4169-007 Porto (Portugal)]; Cardoso, Vitor; Nerozzi, Andrea; Sperhake, Ulrich; Witek, Helvi [Centro Multidisciplinar de Astrofisica, Departamento de Fisica, Instituto Superior Tecnico, Universidade Tecnica de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal)]; Gualtieri, Leonardo, E-mail: [email protected] [Dipartimento di Fisica, Universita di Roma 'Sapienza' and Sezione INFN Roma1, P.A. Moro 5, 00185, Roma (Italy)]
From the gauge/gravity duality to braneworld scenarios, black holes in compactified spacetimes play an important role in fundamental physics. Our current understanding of black hole solutions and their dynamics in such spacetimes is rather poor because analytical tools are capable of handling a limited class of idealized scenarios, only. Breakthroughs in numerical relativity in recent years, however, have opened up the study of such spacetimes to a computational treatment which facilitates accurate studies of a wider class of configurations. We here report on recent efforts of our group to perform numerical simulations of black holes in cylindrical spacetimes.
New class of accelerating black hole solutions
Camps, Joan; Emparan, Roberto
We construct several new families of vacuum solutions that describe black holes in uniformly accelerated motion. They generalize the C metric to the case where the energy density and tension of the strings that pull (or push) on the black holes are independent parameters. These strings create large curvatures near their axis and when they have infinite length they modify the asymptotic properties of the spacetime, but we discuss how these features can be dealt with physically, in particular, in terms of 'wiggly cosmic strings'. We comment on possible extensions and extract lessons for the problem of finding higher-dimensional accelerating black hole solutions.
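[For orientation only; parametrizations differ between papers and this is not necessarily the one used above. A commonly used factorized form of the vacuum C metric, describing a black hole of mass parameter m pulled with acceleration A, is]

\[
ds^2 = \frac{1}{A^2(x-y)^2}\left[G(y)\,dt^2 - \frac{dy^2}{G(y)} + \frac{dx^2}{G(x)} + G(x)\,d\phi^2\right], \qquad G(\xi) = (1-\xi^2)(1+2mA\xi),
\]

[with conical singularities along the axis; a deficit angle delta corresponds to an effective string tension mu through delta = 8 π mu (in units G = c = 1), and changing the period of phi shifts the deficit between the two halves of the axis, i.e. between the string that pulls and the strut that pushes.]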
Lowe, D.A.
Black hole evaporation may lead to massive or massless remnants, or naked singularities. This paper investigates this process in the context of two quite different two-dimensional black hole models. The first is the original Callan-Giddings-Harvey-Strominger (CGHS) model, the second is another two-dimensional dilaton-gravity model, but with properties much closer to physics in the real, four-dimensional, world. Numerical simulations are performed of the formation and subsequent evaporation of black holes and the results are found to agree qualitatively with the exactly solved modified CGHS models, namely, that the semiclassical approximation breaks down just before a naked singularity appears
2T Physics, Weyl Symmetry and the Geodesic Completion of Black Hole Backgrounds
Araya Quezada, Ignacio Jesus
In this thesis, we discuss two different contexts where the idea of gauge symmetry and duality is used to solve the dynamics of physical systems. The first of such contexts is 2T-physics in the worldline in d+2 dimensions, where the principle of Sp(2,R) gauge symmetry in phase space is used to relate different 1T systems in (d − 1) + 1 dimensions, such as a free relativistic particle and a relativistic particle in an arbitrary V(x²) potential. Because each 1T shadow system corresponds to a particular gauge of the underlying symmetry, there is a web of dualities relating them. The dualities between said systems amount to canonical transformations including time and energy, which allows the different systems to be described by different Hamiltonians, and consequently, to correspond to different dynamics in the (d − 1) + 1 phase space. The second context corresponds to a Weyl invariant scalar-tensor theory of gravity, obtained as a direct prediction of 2T gravity, where the Weyl symmetry is used to obtain geodesically complete dynamics both in the context of cosmology and black hole (BH) backgrounds. The geodesic incompleteness of usual Einstein gravity, in the presence of singularities in spacetime, is related to the definition of the Einstein gauge, which fixes the sign and magnitude of the gravitational constant G_N, and therefore misses the existence of antigravity patches, which are expected to arise generically just beyond gravitational singularities. The definition of the Einstein gauge can be generalized by incorporating a sign flip of the gravitational constant G_N at the transitions between gravity and antigravity. This sign is a key aspect that allows us to define geodesically complete dynamics in cosmology and in BH backgrounds, particularly, in the case of the 4D Schwarzschild BH and the 2D stringy BH. The complete nature of particle geodesics in these BH backgrounds is exhibited explicitly at the classical level, and the extension of these results to the
Black-Hole Mass Measurements
Vestergaard, Marianne
The applicability and apparent uncertainties of the techniques currently available for measuring or estimating black-hole masses in AGNs are briefly summarized.
ATLAS simulated black hole event
Pequenão, J
The simulated collision event shown is viewed along the beampipe. The event is one in which a microscopic-black-hole was produced in the collision of two protons (not shown). The microscopic-black-hole decayed immediately into many particles. The colors of the tracks show different types of particles emerging from the collision (at the center).
Nonequatorial tachyon trajectories in Kerr space-time and the second law of black-hole physics
Dhurandhar, S.V.
The behavior of tachyon trajectories (spacelike geodesics) in Kerr space-time is discussed. It is seen that the trajectories may be broadly classified into three types according to the magnitude of the angular momentum of the tachyon. When the magnitude of the angular momentum is large [|h| ≥ a(1 + Γ²)^(1/2), where h and Γ are the angular momentum and energy at infinity and a is the Kerr rotation parameter], Carter's constant of motion Q is non-negative. In the other cases, a negative value for Carter's constant of motion Q is permitted, which happens to be a necessary condition for the tachyon to fall into the singularity. Next, the second law of black-hole physics is investigated in the general case of nonequatorial trajectories. It is shown that nonequatorial tachyons can decrease the area of the Kerr black hole only if it is rotating sufficiently rapidly [a > (4/3√3)M].
Coalescing black hole solution in the De-Sitter universe
A new coalescing black hole solution of the Einstein-Maxwell equations in general relativity is given. The new solution is also found to support the 'Nernst theorem' of thermodynamics in the case of a black hole. Thus this solution may help resolve an outstanding problem of thermodynamics and black hole physics. (author)
Stationary Configurations and Geodesic Description of Supersymmetric Black Holes
Käppeli, Jürg
This thesis contains a detailed study of various properties of supersymmetric black holes. In chapter I an overview of some of the fascinating aspects of black hole physics is provided. In particular, the string theory approach to black hole entropy is discussed. One of the consequences of the
The Membrane Paradigm and black-hole thermodynamics
Thorne, K.S.
A brief overview is given of the theoretical underpinnings of the Membrane Paradigm for black-hole physics. Then those underpinnings are used to elucidate the Paradigm's view that the laws of black-hole thermodynamics (including the statistical origin of black-hole entropy) are just a special case of the laws of thermodynamics for an ordinary, rotating, thermal reservoir
The search for black holes
Torn, K.
Conceivable experimental investigations to prove the existence of black holes are discussed. Binary systems in which a black hole orbits a companion star are in the spotlight. X-radiation emitted by such systems, resulting from accretion of the stellar gas by the black hole and heating of the gas as it falls onto the black hole, might confirm the suggested model. A source of strong X-radiation observed in the Cygnus star cluster and referred to as Cygnus X-1 may thus be identified as a black hole. Direct registration of short X-ray pulses with millisecond intervals might confirm the suggestion. The lack of appropriate astrophysical facilities is pointed out to be the major difficulty on the way to experimental verification.
Black hole final state conspiracies
McInnes, Brett
The principle that unitarity must be preserved in all processes, no matter how exotic, has led to deep insights into boundary conditions in cosmology and black hole theory. In the case of black hole evaporation, Horowitz and Maldacena were led to propose that unitarity preservation can be understood in terms of a restriction imposed on the wave function at the singularity. Gottesman and Preskill showed that this natural idea only works if one postulates the presence of 'conspiracies' between systems just inside the event horizon and states at much later times, near the singularity. We argue that some AdS black holes have unusual internal thermodynamics, and that this may permit the required 'conspiracies' if real black holes are described by some kind of sum over all AdS black holes having the same entropy
String-Corrected Black Holes
Hubeny, V.
We investigate the geometry of four dimensional black hole solutions in the presence of stringy higher curvature corrections to the low energy effective action. For certain supersymmetric two charge black holes these corrections drastically alter the causal structure of the solution, converting seemingly pathological null singularities into timelike singularities hidden behind a finite area horizon. We establish, analytically and numerically, that the string-corrected two-charge black hole metric has the same Penrose diagram as the extremal four-charge black hole. The higher derivative terms lead to another dramatic effect--the gravitational force exerted by a black hole on an inertial observer is no longer purely attractive. The magnitude of this effect is related to the size of the compactification manifold.
Compressibility of rotating black holes
Dolan, Brian P.
Interpreting the cosmological constant as a pressure, whose thermodynamically conjugate variable is a volume, modifies the first law of black hole thermodynamics. Properties of the resulting thermodynamic volume are investigated: the compressibility and the speed of sound of the black hole are derived in the case of nonpositive cosmological constant. The adiabatic compressibility vanishes for a nonrotating black hole and is maximal in the extremal case--comparable with, but still less than, that of a cold neutron star. A speed of sound v_s is associated with the adiabatic compressibility, which is equal to c for a nonrotating black hole and decreases as the angular momentum is increased. An extremal black hole has v_s² = 0.9c² when the cosmological constant vanishes, and more generally v_s is bounded below by c/√2.
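[Reference definitions from the extended-thermodynamics framework the abstract works in; the paper's precise conventions may differ slightly, and the labeling of the density as rho = M/V in the speed-of-sound definition is our assumption.]

\[
P = -\frac{\Lambda}{8\pi}, \qquad dM = T\,dS + \Omega\,dJ + V\,dP, \qquad V = \left(\frac{\partial M}{\partial P}\right)_{S,J},
\]
\[
\kappa_S = -\frac{1}{V}\left(\frac{\partial V}{\partial P}\right)_{S,J}, \qquad v_s^{-2} = \left(\frac{\partial \rho}{\partial P}\right)_{S,J} \ \text{with} \ \rho = \frac{M}{V},
\]

[so the quoted v_s² = 0.9c² for the extremal black hole at vanishing cosmological constant refers to this adiabatic speed of sound.]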
Destruction and recreation of black holes
Bell, Peter M.
Even though the existence of the gravitationally collapsed concentrations of matter in space known as 'black holes' is accepted at all educational levels in our society, the basis for the black hole concept is really only the result of approximate calculations done over 40 years ago. The concept of the black hole is an esoteric subject, and recently the mathematical and physical frailties of the concept have come to light in an interesting round of theoretical shuffling. The recent activity in theorizing about black holes began about 10 years ago, when Cambridge University mathematician Stephen Hawking calculated that black holes could become unstable by losing mass and thus 'evaporate.' Hawking's results were surprisingly well received, considering the lack of theoretical understanding of the relations between quantum mechanics and relativity. (There is no quantized theory of gravitation, even today.) Nonetheless, his semiclassical calculations implied that the rate of 'evaporation' of a black hole would be slower than the rate of degradation of the universe. In fact, based on these and other calculations, the British regard Hawking as 'the nearest thing we have to a new Einstein' [New Scientist, Oct. 9, 1980]. Within the last few months, Frank Tipler, provocative mathematical physicist at the University of Texas, has reexamined Hawking's calculations [Physical Review Letters, 45, 941, 1980], concluding, in simple terms, (1) that because of possible vital difficulties in the assumptions, the very concept of black holes could be wrong; (2) that Hawking's evaporation hypothesis is so efficient that a black hole once created must disappear in less than a second; or (3) that he, Tipler, may be wrong. The latter possibility has been the conclusion of physicist James Bardeen of the University of Washington, who calculated that black hole masses do evaporate but they do so according to Hawking's predicted rate and that Tipler's findings cause only a second
Action growth for black holes in modified gravity
Sebastiani, Lorenzo; Vanzo, Luciano; Zerbini, Sergio
The general form of the action growth for a large class of static black hole solutions in modified gravity, which includes F(R)-gravity models, is computed. The cases of black hole solutions with nonconstant Ricci scalar are also considered, generalizing the results previously found and valid only for black holes with constant Ricci scalar. An argument is put forward to provide a physical interpretation of the results, which seem tightly connected with the generalized second law of black hole thermodynamics.
Discrete quantum spectrum of black holes
Lochan, Kinjalk, E-mail: [email protected]; Chakraborty, Sumanta, E-mail: [email protected]
The quantum genesis of Hawking radiation is a long-standing puzzle in black hole physics. Semi-classically one can argue that the spectrum of radiation emitted by a black hole looks very sparse, unlike what is expected from a thermal object. It was demonstrated through a simple quantum model that a quantum black hole will retain a discrete profile, at least in the weak energy regime. However, it was suggested that this discreteness might be an artifact of the simplicity of the eigen-spectrum of the model considered. Different quantum theories can, in principle, give rise to different complicated spectra and make the radiation from the black hole dense enough in transition lines to make it look continuous in profile. We show that such a hope from a geometry-quantized black hole is not realized as long as large enough black holes are endowed with a classical mass-area relation, in any gravity theory ranging from GR and Lanczos–Lovelock to f(R) gravity. We show that the smallest frequency of emission from the black hole in any quantum description is bounded from below, to be of the order of its inverse mass. That leaves the emission with only two possibilities. It can either be non-thermal, or it can be thermal only with the temperature being much larger than 1/M.
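[A one-line estimate making the order-of-magnitude statement explicit (our addition): if the horizon area is quantized in steps of order the Planck area and the classical mass-area relation A = 16πG²M²/c⁴ holds, then]

\[
\Delta M \simeq \frac{c^4\,\Delta A}{32\pi G^2 M}, \qquad \Delta A \sim \ell_P^2 \;\Longrightarrow\; \omega_{\min} = \frac{\Delta M\,c^2}{\hbar} \sim \frac{1}{M} \ \ (\text{Planck units}),
\]

[which is of the order of the Hawking temperature itself, so the transition lines cannot become dense enough to mimic a continuous thermal profile.]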
What is a black hole
Tipler, F.J.
A definition of a black hole is proposed that should work in any stably causal space-time. This is that a black hole is the closure of the smallest future set that contains all noncosmological trapped surfaces and which has its boundary generated by null geodesic segments that are boundary generators of TIPs. This allows precise definitions of cosmic censorship and white holes. (UK)
Duality invariance of black hole creation rates
Brown, J.D.
Pair creation of electrically charged black holes and its dual process, pair creation of magnetically charged black holes, are considered. It is shown that the creation rates are equal provided the boundary conditions for the two processes are dual to one another. This conclusion follows from a careful analysis of boundary terms and boundary conditions for the Maxwell action. copyright 1997 The American Physical Society
Black hole information, unitarity, and nonlocality
The black hole information paradox apparently indicates the need for a fundamentally new ingredient in physics. The leading contender is nonlocality. Possible mechanisms for the nonlocality needed to restore unitarity to black hole evolution are investigated. Suggestions that such dynamics arises from ultra-planckian modes in Hawking's derivation are investigated and found not to be relevant, in a picture using smooth slices spanning the exterior and interior of the horizon. However, no simul...
String model of black hole microstates
Larsen, F.
The statistical mechanics of black holes arbitrarily far from extremality is modeled by a gas of weakly interacting strings. As an effective low-energy description of black holes the string model provides several highly nontrivial consistency checks and predictions. Speculations on a fundamental origin of the model suggest surprising simplifications in nonperturbative string theory, even in the absence of supersymmetry. copyright 1997 The American Physical Society
Black hole decay as geodesic motion
Gupta, Kumar S.; Sen, Siddhartha
We show that a formalism for analyzing the near-horizon conformal symmetry of Schwarzschild black holes using a scalar field probe is capable of describing black hole decay. The equation governing black hole decay can be identified as the geodesic equation in the space of black hole masses. This provides a novel geometric interpretation for the decay of black holes. Moreover, this approach predicts a precise correction term to the usual expression for the decay rate of black holes
Thermodynamic light on black holes
Davies, P.
The existence of black holes and their relevance to our understanding of the nature of space and time are considered, with especial reference to the application of thermodynamic arguments which can reveal their energy-transfer processes in a new light. The application of thermodynamics to strongly gravitating systems promises some fascinating new insights into the nature of gravity. Situations can occur during gravitational collapse in which existing physics breaks down. Under these circumstances, the application of universal thermodynamical principles might be our only guide. (U.K.)
When Supermassive Black Holes Wander
Are supermassive black holes found only at the centers of galaxies? Definitely not, according to a new study; in fact, galaxies like the Milky Way may harbor several such monsters wandering through their midst. Collecting Black Holes Through Mergers: It's generally believed that galaxies are built up hierarchically, growing in size through repeated mergers over time. Each galaxy in a major merger likely hosts a supermassive black hole, a black hole of millions to billions of times the mass of the Sun, at its center. When a pair of galaxies merges, their supermassive black holes will often sink to the center of the merger via a process known as dynamical friction. There the supermassive black holes themselves will eventually merge in a burst of gravitational waves. [Figure caption: spatial distribution and velocities of wandering supermassive black holes in three of the authors' simulated galaxies, shown in edge-on (left) and face-on (right) views of the galaxy disks; Tremmel et al. 2018.] But if a galaxy the size of the Milky Way was built through a history of many major galactic mergers, are we sure that all its accumulated supermassive black holes eventually merged at the galactic center? A new study suggests that some of these giants might have escaped such a fate and now wander unseen on wide orbits through their galaxies. Black Holes in an Evolving Universe: Led by Michael Tremmel (Yale Center for Astronomy & Astrophysics), a team of scientists has used data from a large-scale cosmological simulation, Romulus25, to explore the possibility of wandering supermassive black holes. The Romulus simulations are uniquely suited to track the formation and subsequent orbital motion of supermassive black holes as galactic halos are built up through mergers over the history of the universe. From these simulations, Tremmel and collaborators find an end total of 316 supermassive black holes residing within the bounds of 26 Milky-Way-mass halos. Of these, roughly a third are
Black Holes Have Simple Feeding Habits
properties of these black holes should be very helpful. In addition to Chandra, three radio arrays (the Giant Meterwave Radio Telescope, the Very Large Array and the Very Long Baseline Array), two millimeter telescopes (the Plateau de Bure Interferometer and the Submillimeter Array), and Lick Observatory in the optical were used to monitor M81. These observations were made simultaneously to ensure that brightness variations because of changes in feeding rates did not confuse the results. Chandra is the only X-ray satellite able to isolate the faint X-rays of the black hole from the emission of the rest of the galaxy. This result confirms less detailed earlier work by Andrea Merloni from the Max Planck Institute for Extraterrestrial Physics (MPE) in Garching, Germany and colleagues that suggested that the basic properties of larger black holes are similar to the smaller ones. Their study, however, was not based on simultaneous, multi-wavelength observations nor the application of a detailed physical model. These results will appear in an upcoming issue of The Astrophysical Journal. NASA's Marshall Space Flight Center, Huntsville, Ala., manages the Chandra program for the agency's Science Mission Directorate. The Smithsonian Astrophysical Observatory controls science and flight operations from the Chandra X-ray Center in Cambridge, Mass.
Black Hole Safari: Tracking Populations and Hunting Big Game
McConnell, N. J.
Understanding the physical connection, or lack thereof, between the growth of galaxies and supermassive black holes is a key challenge in extragalactic astronomy. Dynamical studies of nearby galaxies are building a census of black hole masses across a broad range of galaxy types and uncovering statistical correlations between galaxy bulge properties and black hole masses. These local correlations provide a baseline for studying galaxies and black holes at higher redshifts. Recent measurements have probed the extremes of the supermassive black hole population and introduced surprises that challenge simple models of black hole and galaxy co-evolution. Future advances in the quality and quantity of dynamical black hole mass measurements will shed light upon the growth of massive galaxies and black holes in different cosmic environments.
Thermodynamic studies of different black holes with modifications of entropy
Haldar, Amritendu; Biswas, Ritabrata
In recent years, the thermodynamic properties of black holes have been topics of interest. We investigate thermodynamic properties such as the surface gravity and the Hawking temperature on the event horizon of regular black holes, viz. the Hayward class, and asymptotically AdS (anti-de Sitter) black holes. We also analyze the thermodynamic volume and naive geometric volume of asymptotically AdS black holes and show that the entropy of these black holes is simply the ratio of the naive geometric volume to the thermodynamic volume. We plot the different graphs and interpret them physically. We derive the 'Cosmic-Censorship-Inequality' for both types of black holes. Moreover, we calculate the thermal heat capacity of the aforesaid black holes and study their stabilities in different regimes. Finally, we compute the logarithmic correction to the entropy for both black holes, considering the quantum fluctuations around the thermal equilibrium, and study the corresponding thermodynamics.
A nonsingular rotating black hole
Ghosh, Sushant G.
The spacetime singularities in classical general relativity are inevitable, as predicted by the celebrated singularity theorems. However, it is a general belief that singularities do not exist in Nature and that they are a limitation of general relativity. In the absence of a well-defined quantum gravity, models of regular black holes have been studied. We employ a probability-distribution-inspired mass function m(r) to replace the Kerr black hole mass M to represent a nonsingular rotating black hole that is identified asymptotically (r >> k, k > 0 constant) exactly as the Kerr-Newman black hole, and as the Kerr black hole when k = 0. The radiating counterpart renders a nonsingular generalization of Carmeli's spacetime as well as Vaidya's spacetime, in the appropriate limits. The exponential correction factor changing the geometry of the classical black hole to remove the curvature singularity can also be motivated by quantum arguments. The regular rotating spacetime can also be understood as a black hole of general relativity coupled to nonlinear electrodynamics. (orig.)
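[A sketch of the kind of exponential mass function described; the precise function used in the paper is assumed here to be m(r) = M exp(-k/r), which at least reproduces the limits stated in the abstract.]

\[
m(r) = M e^{-k/r} \simeq M - \frac{Mk}{r} + \dots \ (r \gg k) \;\Longrightarrow\; \frac{2m(r)}{r} \simeq \frac{2M}{r} - \frac{2Mk}{r^2},
\]

[so at large r the geometry is Kerr-Newman-like with an effective charge Q² ≃ 2Mk, it reduces to Kerr when k = 0, and m(r) → 0 as r → 0 fast enough to keep the curvature finite.]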
Black Hole Grabs Starry Snack
This artist's concept shows a supermassive black hole at the center of a remote galaxy digesting the remnants of a star. NASA's Galaxy Evolution Explorer had a 'ringside' seat for this feeding frenzy, using its ultraviolet eyes to study the process from beginning to end. The artist's concept chronicles the star being ripped apart and swallowed by the cosmic beast over time. First, the intact sun-like star (left) ventures too close to the black hole, and its own self-gravity is overwhelmed by the black hole's gravity. The star then stretches apart (middle yellow blob) and eventually breaks into stellar crumbs, some of which swirl into the black hole (cloudy ring at right). This doomed material heats up and radiates light, including ultraviolet light, before disappearing forever into the black hole. The Galaxy Evolution Explorer was able to watch this process unfold by observing changes in ultraviolet light. The area around the black hole appears warped because the gravity of the black hole acts like a lens, twisting and distorting light.
Black holes at neutrino telescopes
Kowalski, M.; Ringwald, A.; Tu, H.
In scenarios with extra dimensions and TeV-scale quantum gravity, black holes are expected to be produced in the collision of light particles at center-of-mass energies above the fundamental Planck scale with small impact parameters. Black hole production and evaporation may thus be studied in detail at the large hadron collider (LHC). But even before the LHC starts operating, neutrino telescopes such as AMANDA/IceCube, ANTARES, Baikal, and RICE have an opportunity to search for black hole signatures. Black hole production in the scattering of ultrahigh energy cosmic neutrinos on nucleons in the ice or water may initiate cascades and through-going muons with distinct characteristics above the Standard Model rate. In this Letter, we investigate the sensitivity of neutrino telescopes to black hole production and compare it to the one expected at the Pierre Auger Observatory, an air shower array currently under construction, and at the LHC. We find that, already with the currently available data, AMANDA and RICE should be able to place sensible constraints in black hole production parameter space, which are competitive with the present ones from the air shower facilities Fly's Eye and AGASA. In the optimistic case that a ultrahigh energy cosmic neutrino flux significantly higher than the one expected from cosmic ray interactions with the cosmic microwave background radiation is realized in nature, one even has discovery potential for black holes at neutrino telescopes beyond the reach of LHC. (orig.)
Thermodynamic theory of black holes
Davies, P.C.W. [King's Coll., London (UK), Dept. of Mathematics]
The thermodynamic theory underlying black hole processes is developed in detail and applied to model systems. It is found that Kerr-Newman black holes undergo a phase transition at a = 0.68M or Q = 0.86M, where the heat capacity has an infinite discontinuity. Above the transition values the specific heat is positive, permitting isothermal equilibrium with a surrounding heat bath. Simple processes and stability criteria for various black hole situations are investigated. The limits for entropically favoured black hole formation are found. The Nernst conditions for the third law of thermodynamics are not satisfied fully for black holes. There is no obvious thermodynamic reason why a black hole may not be cooled down below absolute zero and converted into a naked singularity. Quantum energy-momentum tensor calculations for uncharged black holes are extended to the Reissner-Nordstrom case, and found to be fully consistent with the thermodynamic picture for Q < M. For Q > M the model predicts that 'naked' collapse also produces radiation, with such intensity that the collapsing matter is entirely evaporated away before a naked singularity can form.
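[Illustrative numerical check, in units G = c = ħ = k_B = 1: the a = 0.68M transition quoted above can be recovered by noting that the heat capacity at constant angular momentum, C_J = (∂M/∂T)_J, diverges where (∂T/∂M)_J = 0, i.e. where the Kerr Hawking temperature at fixed J is maximal. The analogous scan at fixed charge reproduces Q = 0.86M.]

    import numpy as np

    J = 1.0
    M = np.linspace(1.0001, 3.0, 200000)       # need M >= sqrt(J) for a horizon to exist
    a = J / M
    rp = M + np.sqrt(M**2 - a**2)              # outer horizon radius
    rm = M - np.sqrt(M**2 - a**2)              # inner horizon radius
    T = (rp - rm) / (4*np.pi*(rp**2 + a**2))   # Kerr Hawking temperature

    dT_dM = np.gradient(T, M)
    i = np.argmax(np.sign(dT_dM[:-1]) != np.sign(dT_dM[1:]))   # first sign change of dT/dM
    print(f"C_J diverges at a/M = {a[i]/M[i]:.3f}")            # prints ~0.681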
Energy level diagrams for black hole orbits
Levin, Janna
A spinning black hole with a much smaller black hole companion forms a fundamental gravitational system, like a colossal classical analog to an atom. In an appealing if imperfect analogy with atomic physics, this gravitational atom can be understood through a discrete spectrum of periodic orbits. Exploiting a correspondence between the set of periodic orbits and the set of rational numbers, we are able to construct periodic tables of orbits and energy level diagrams of the accessible states around black holes. We also present a closed-form expression for the rational q, thereby quantifying zoom-whirl behavior in terms of spin, energy and angular momentum. The black hole atom is not just a theoretical construct, but corresponds to extant astrophysical systems detectable by future gravitational wave observatories.
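[For readers unfamiliar with the taxonomy; the notation below follows the standard periodic-orbit classification, which is assumed to be the one referred to above. The rational number attached to each periodic orbit measures the accumulated periastron precession per radial cycle,]

\[
q = \frac{\Delta\phi}{2\pi} - 1 = \frac{\omega_\phi}{\omega_r} - 1 = w + \frac{v}{z},
\]

[where Δφ is the azimuthal angle swept in one radial period, ω_φ and ω_r are the orbital frequencies, w counts whirls, z counts zooms (leaves), and v fixes the order in which the leaves are traced; an orbit is periodic exactly when q is rational, which is what makes the 'periodic table' construction possible.]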
Correspondence principle for black holes and strings
Horowitz, G.T.; Polchinski, J.
For most black holes in string theory, the Schwarzschild radius in string units decreases as the string coupling is reduced. We formulate a correspondence principle, which states that (i) when the size of the horizon drops below the size of a string, the typical black hole state becomes a typical state of strings and D-branes with the same charges, and (ii) the mass does not change abruptly during the transition. This provides a statistical interpretation of black hole entropy. This approach does not yield the numerical coefficient, but gives the correct dependence on mass and charge in a wide range of cases, including neutral black holes. copyright 1997 The American Physical Society
Lectures on Black Hole Quantum Mechanics
The lectures that follow were originally given in 1992, and written up only slightly later. Since then there have been dramatic developments in the quantum theory of black holes, especially in the context of string theory. None of these are reflected here. The concept of quantum hair, which is discussed at length in the lectures, is certainly of permanent interest, and I continue to believe that in some generalized form it will prove central to the whole question of how information is stored in black holes. The discussion of scattering and emission modes from various classes of black holes could be substantially simplified using modern techniques, and from currently popular perspectives the choice of examples might look eccentric. On the other hand fashions have changed rapidly in the field, and the big questions as stated and addressed here, especially as formulated for "real" black holes (nonextremal, in four-dimensional, asymptotically flat space-time, with supersymmetry broken), remain pertinent even as the tools to address them may evolve. The four lectures I gave at the school were based on two lengthy papers that have now been published, "Black Holes as Elementary Particles," Nuclear Physics B380, 447 (1992) and "Quantum Hair on Black Holes," Nuclear Physics B378, 175 (1992). The unifying theme of this work is to help make plausible the possibility that black holes, although they are certainly unusual and extreme states of matter, may be susceptible to a description using concepts that are not fundamentally different from those we use in describing other sorts of quantum-mechanical matter. In the first two lectures I discussed dilaton black holes. The fact that apparently innocuous changes in the "matter" action can drastically change the properties of a black hole is already very significant: it indicates that the physical properties of small black holes cannot be discussed reliably in the abstract, but must be considered with due regard to the rest of
Does black-hole entropy make sense
Wilkins, D.
Bekenstein and Hawking saved the second law of thermodynamics near a black hole by assigning to the hole an entropy S_h proportional to the area of its event horizon. It is tempting to assume that S_h possesses all the features commonly associated with the physical entropy. Kundt has shown, however, that S_h violates several reasonable physical expectations. This criticism is reviewed, augmenting it as follows: (a) S_h is a badly behaved state function requiring knowledge of the hole's future history; and (b) close analogs of event horizons in other space-times do not possess an 'entropy'. These questions are also discussed: (c) Is S_h suitable for all regions of a black-hole space-time? And (d) should S_h be attributed to the exterior of a white hole? One can retain S_h for the interior (respectively, exterior) of a black (respectively, white) hole, but this is rejected as contrary to the information-theoretic derivation of horizon entropy given by Bekenstein. The total entropy defined by Kundt (all ordinary entropy on a space-section cutting through the hole, no horizon term) and that of Bekenstein-Hawking (ordinary entropy outside the horizon plus a horizon term) appear to be complementary concepts with separate domains of validity. In the most natural choice, an observer inside a black hole will use Kundt's entropy, and one remaining outside that of Bekenstein-Hawking. (author)
Black holes and Higgs stability
Tetradis, Nikolaos
We study the effect of primordial black holes on the classical rate of nucleation of AdS regions within the standard electroweak vacuum. We find that the energy barrier for transitions to the new vacuum, which characterizes the exponential suppression of the nucleation rate, can be reduced significantly in the black-hole background. A precise analysis is required in order to determine whether the existence of primordial black holes is compatible with the form of the Higgs potential at high temperature or density in the Standard Model or its extensions.
Vacuum metastability with black holes
Burda, Philipp [Centre for Particle Theory, Durham University, South Road, Durham, DH1 3LE (United Kingdom)]; Gregory, Ruth [Centre for Particle Theory, Durham University, South Road, Durham, DH1 3LE (United Kingdom); Perimeter Institute, 31 Caroline Street North, Waterloo, ON, N2L 2Y5 (Canada)]; Moss, Ian G. [School of Mathematics and Statistics, Newcastle University, Newcastle Upon Tyne, NE1 7RU (United Kingdom)]
We consider the possibility that small black holes can act as nucleation seeds for the decay of a metastable vacuum, focussing particularly on the Higgs potential. Using a thin-wall bubble approximation for the nucleation process, which is possible when generic quantum gravity corrections are added to the Higgs potential, we show that primordial black holes can stimulate vacuum decay. We demonstrate that for suitable parameter ranges, the vacuum decay process dominates over the Hawking evaporation process. Finally, we comment on the application of these results to vacuum decay seeded by black holes produced in particle collisions.
Orbital resonances around black holes.
Brink, Jeandrew; Geyer, Marisa; Hinderer, Tanja
We compute the length and time scales associated with resonant orbits around Kerr black holes for all orbital and spin parameters. Resonance-induced effects are potentially observable when the Event Horizon Telescope resolves the inner structure of Sgr A*, when space-based gravitational wave detectors record phase shifts in the waveform during the resonant passage of a compact object spiraling into the black hole, or in the frequencies of quasiperiodic oscillations for accreting black holes. The onset of geodesic chaos for non-Kerr spacetimes should occur at the resonance locations quantified here.
Tunnelling from Goedel black holes
Kerner, Ryan; Mann, R. B.
We consider the spacetime structure of Kerr-Goedel black holes, analyzing their parameter space in detail. We apply the tunnelling method to compute their temperature and compare the results to previous calculations obtained via other methods. We claim that it is not possible to have the closed timelike curve (CTC) horizon in between the two black hole horizons and include a discussion of issues that occur when the radius of the CTC horizon is smaller than the radius of both black hole horizons
Quantum mechanics of black holes.
Witten, Edward
The popular conception of black holes reflects the behavior of the massive black holes found by astronomers and described by classical general relativity. These objects swallow up whatever comes near and emit nothing. Physicists who have tried to understand the behavior of black holes from a quantum mechanical point of view, however, have arrived at quite a different picture. The difference is analogous to the difference between thermodynamics and statistical mechanics. The thermodynamic description is a good approximation for a macroscopic system, but statistical mechanics describes what one will see if one looks more closely.
Gravitational polarizability of black holes
Damour, Thibault; Lecian, Orchidea Maria
The gravitational polarizability properties of black holes are compared and contrasted with their electromagnetic polarizability properties. The 'shape' or 'height' multipolar Love numbers h_l of a black hole are defined and computed. They are then compared to their electromagnetic analogs h_l^EM. The Love numbers h_l give the height of the l-th multipolar 'tidal bulge' raised on the horizon of a black hole by faraway masses. We also discuss the shape of the tidal bulge raised by a test-mass m, in the limit where m gets very close to the horizon.
Black hole meiosis
van Herck, Walter; Wyder, Thomas
The enumeration of BPS bound states in string theory needs refinement. Studying partition functions of particles made from D-branes wrapped on algebraic Calabi-Yau 3-folds, and classifying states using split attractor flow trees, we extend the method for computing a refined BPS index, [1]. For certain D-particles, a finite number of microstates, namely polar states, exclusively realized as bound states, determine an entire partition function (elliptic genus). This underlines their crucial importance: one might call them the 'chromosomes' of a D-particle or a black hole. As polar states also can be affected by our refinement, previous predictions on elliptic genera are modified. This can be metaphorically interpreted as 'crossing-over in the meiosis of a D-particle'. Our results improve on [2], provide non-trivial evidence for a strong split attractor flow tree conjecture, and thus suggest that we indeed exhaust the BPS spectrum. In the D-brane description of a bound state, the necessity for refinement results from the fact that tachyonic strings split up constituent states into 'generic' and 'special' states. These are enumerated separately by topological invariants, which turn out to be partitions of Donaldson-Thomas invariants. As modular predictions provide a check on many of our results, we have compelling evidence that our computations are correct.
Quantum physics, mini black holes, and the multiverse debunking common misconceptions in theoretical physics
Nomura, Yasunori; Terning, John; Nekoogar, Farzad
"Modern physics is rife with provocative and fascinating ideas, from quantum mechanics to the multiverse. But as interesting as these concepts are, they are also easy to understand. This book, written with deft hands by true experts in the field, helps to illuminate some of the most important and game-changing ideas in physics today." Sean M. Carroll "The Multiversal book series is equally unique, providing book-length extensions of the lectures with enough additional depth for those who truly want to explore these fields, while also providing the kind of clarity that is appropriate for interested lay people to grasp the general principles involved. " Lawrence M. Krauss Th...
Cosmic microwave background radiation of black hole universe
Zhang, T. X.
Modifying slightly the big bang theory, the author has recently developed a new cosmological model called black hole universe. This new cosmological model is consistent with the Mach principle, Einsteinian general theory of relativity, and observations of the universe. The origin, structure, evolution, and expansion of the black hole universe have been presented in the recent sequence of American Astronomical Society (AAS) meetings and published recently in a scientific journal: Progress in Physics. This paper explains the observed 2.725 K cosmic microwave background radiation of the black hole universe, which grew from a star-like black hole with several solar masses through a supermassive black hole with billions of solar masses to the present universe with hundred billion-trillions of solar masses. According to the black hole universe model, the observed cosmic microwave background radiation can be explained as the black body radiation of the black hole universe, which can be considered as an ideal black body. When a hot and dense star-like black hole accretes its ambient materials and merges with other black holes, it expands and cools down. A governing equation that expresses the possible thermal history of the black hole universe is derived from the Planck law of black body radiation and radiation energy conservation. The result obtained by solving the governing equation indicates that the radiation temperature of the present universe can be ˜2.725 K if the universe originated from a hot star-like black hole, and is therefore consistent with the observation of the cosmic microwave background radiation. A smaller or younger black hole universe usually cools down faster. The characteristics of the original star-like or supermassive black hole are not critical to the physical properties of the black hole universe at present, because matter and radiation are mainly from the outside space, i.e., the mother universe.
Black hole evaporation: a paradigm
Ashtekar, Abhay; Bojowald, Martin
A paradigm describing black hole evaporation in non-perturbative quantum gravity is developed by combining two sets of detailed results: (i) resolution of the Schwarzschild singularity using quantum geometry methods and (ii) time evolution of black holes in the trapping and dynamical horizon frameworks. Quantum geometry effects introduce a major modification in the traditional spacetime diagram of black hole evaporation, providing a possible mechanism for recovery of information that is classically lost in the process of black hole formation. The paradigm is developed directly in the Lorentzian regime and necessary conditions for its viability are discussed. If these conditions are met, much of the tension between expectations based on spacetime geometry and structure of quantum theory would be resolved
Axion-dilaton black holes
Kallosh, R.
In this talk some essential features of stringy black holes are described. The author considers charged U(1) and U(1) x U(1) four-dimensional axion-dilaton black holes. The Hawking temperature and the entropy of all solutions are shown to be simple functions of the squares of supercharges, defining the positivity bounds. Spherically symmetric and multi black hole solutions are presented. The extreme solutions with zero entropy (holons) represent a ground state of the theory and are characterized by elementary dilaton, axion, electric, and magnetic charges. The attractive gravitational and axion-dilaton force is balanced by the repulsive electromagnetic force. The author discusses the possibility of splitting of nearly extreme black holes. 11 refs
Cracking the Einstein code relativity and the birth of black hole physics
Melia, Fulvio
Albert Einstein's theory of general relativity describes the effect of gravitation on the shape of space and the flow of time. But for more than four decades after its publication, the theory remained largely a curiosity for scientists; however accurate it seemed, Einstein's mathematical code—represented by six interlocking equations—was one of the most difficult to crack in all of science. That is, until a twenty-nine-year-old Cambridge graduate solved the great riddle in 1963. Roy Kerr's solution emerged coincidentally with the discovery of black holes that same year and provided fertile testing ground—at long last—for general relativity
Foundations of Black Hole Accretion Disk Theory.
Abramowicz, Marek A; Fragile, P Chris
This review covers the main aspects of black hole accretion disk theory. We begin with the view that one of the main goals of the theory is to better understand the nature of black holes themselves. In this light we discuss how accretion disks might reveal some of the unique signatures of strong gravity: the event horizon, the innermost stable circular orbit, and the ergosphere. We then review, from a first-principles perspective, the physical processes at play in accretion disks. This leads us to the four primary accretion disk models that we review: Polish doughnuts (thick disks), Shakura-Sunyaev (thin) disks, slim disks, and advection-dominated accretion flows (ADAFs). After presenting the models we discuss issues of stability, oscillations, and jets. Following our review of the analytic work, we take a parallel approach in reviewing numerical studies of black hole accretion disks. We finish with a few select applications that highlight particular astrophysical applications: measurements of black hole mass and spin, black hole vs. neutron star accretion disks, black hole accretion disk spectral states, and quasi-periodic oscillations (QPOs).
Do evaporating black holes form photospheres?
MacGibbon, Jane H.; Carr, B. J.; Page, Don N.
Several authors, most notably Heckler, have claimed that the observable Hawking emission from a microscopic black hole is significantly modified by the formation of a photosphere around the black hole due to QED or QCD interactions between the emitted particles. In this paper we analyze these claims and identify a number of physical and geometrical effects which invalidate these scenarios. We point out two key problems. First, the interacting particles must be causally connected to interact, and this condition is satisfied by only a small fraction of the emitted particles close to the black hole. Second, a scattered particle requires a distance ∼E/m_e^2 for completing each bremsstrahlung interaction, with the consequence that it is improbable for there to be more than one complete bremsstrahlung interaction per particle near the black hole. These two effects have not been included in previous analyses. We conclude that the emitted particles do not interact sufficiently to form a QED photosphere. Similar arguments apply in the QCD case and prevent a QCD photosphere (chromosphere) from developing when the black hole temperature is much greater than Λ_QCD, the threshold for QCD particle emission. Additional QCD phenomenological arguments rule out the development of a chromosphere around black hole temperatures of order Λ_QCD. In all cases, the observational signatures of a cosmic or Galactic halo background of primordial black holes or an individual black hole remain essentially those of the standard Hawking model, with little change to the detection probability. We also consider the possibility, as proposed by Belyanin et al. and D. Cline et al., that plasma interactions between the emitted particles form a photosphere, and we conclude that this scenario too is not supported.
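To make the scale of the quoted bremsstrahlung completion distance ∼E/m_e^2 concrete, a back-of-the-envelope comparison with the horizon radius of a hot microscopic black hole can be made by converting natural units via ħc; the 100 GeV particle energy below is an assumed illustrative value, not a number taken from the paper:

import math

hbar_c = 197.3269804      # hbar*c in MeV*fm
m_e = 0.51099895          # electron rest energy, MeV
E = 1.0e5                 # energy of an emitted particle, MeV (100 GeV, assumed for illustration)

L_brem = hbar_c * E / m_e**2          # bremsstrahlung completion distance ~E/m_e^2, in fm
r_s = hbar_c / (4.0 * math.pi * E)    # horizon radius of a black hole with k_B*T_H = E, in fm

print(f"completion distance ~ {L_brem:.2e} fm")
print(f"horizon radius      ~ {r_s:.2e} fm")
print(f"ratio               ~ {L_brem / r_s:.1e}")   # >> 1: the interaction cannot complete near the hole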
Black holes from extended inflation
Hsu, S.D.H.; Lawrence Berkeley Lab., CA
It is argued that models of extended inflation, in which modified Einstein gravity allows a graceful exit from the false vacuum, lead to copious production of black holes. The critical temperature of the inflationary phase transition must be >10^8 GeV in order to avoid severe cosmological problems in a universe dominated by black holes. We speculate on the possibility that the interiors of false vacuum regions evolve into baby universes.
Black holes and cosmic censorship
Hiscock, W.A.
It is widely accepted that the complete gravitational collapse of a body always yields a black hole, and that naked singularities are never produced (the cosmic censorship hypothesis). The local (or strong) cosmic censorship hypothesis states that singularities which are even locally naked (e.g., to an observer inside a black hole) are never produced. This dissertation studies the validity of these two conjectures. The Kerr-Newman metric describes a black hole only when M^2 ≥ a^2 + Q^2 + P^2, where M is the mass of the black hole, a = J/M its specific angular momentum, Q its electric charge, and P its magnetic charge. In the first part of this dissertation, the possibility of converting an extreme Kerr-Newman black hole (M^2 = a^2 + Q^2 + P^2) into a naked singularity by the accretion of test particles is considered. The motion of test particles with a large angular momentum to energy ratio is studied, as well as that of test particles with a large charge to energy ratio. The final state is always found to be a black hole if the angular momentum, electric charge, and magnetic charge of the black hole are all much greater than the corresponding angular momentum, electric charge, and magnetic charge of the test particle. In Part II of this dissertation possible black hole interior solutions are studied. The Cauchy horizons and locally naked timelike singularities of the charged (and/or rotating) solutions are contrasted with the spacelike all-encompassing singularity of the Schwarzschild solution. It is determined which portions of the analytic extension of the Reissner-Nordström solution are relevant to realistic gravitational collapse.
Are Black Holes Elementary Particles?
Ha, Yuan K.
Quantum black holes are the smallest and heaviest conceivable elementary particles. They have a microscopic size but a macroscopic mass. Several fundamental types have been constructed with some remarkable properties. Quantum black holes in the neighborhood of the Galaxy could resolve the paradox of ultra-high energy cosmic rays detected in Earth's atmosphere. They may also play a role as dark matter in cosmology.
Black Hole Complementary Principle and Noncommutative Membrane
Wei Ren
In the spirit of the black hole complementarity principle, we have found the noncommutative membrane of Schwarzschild black holes. In this paper we extend our results to the Kerr black hole and find the same story. We also conjecture that spacetimes are noncommutative on the stretched membrane of the more general Kerr-Newman black hole.
Accretion, primordial black holes and standard cosmology
Primordial black holes evaporate due to Hawking radiation. We find that the evaporation times of primordial black holes increase when accretion of radiation is included. Thus, depending on accretion efficiency, more primordial black holes exist today, which strengthens the conjecture that the primordial black holes ...
Revealing Black Holes with Gaia
Breivik, Katelyn; Chatterjee, Sourav; Larson, Shane L.
We estimate the population of black holes with luminous stellar companions (BH-LCs) in the Milky Way (MW) observable by Gaia. We evolve a realistic distribution of BH-LC progenitors from zero-age to the current epoch taking into account relevant physics, including binary stellar evolution, BH-formation physics, and star formation rate, in order to estimate the BH-LC population in the MW today. We predict that Gaia will discover between 3800 and 12,000 BH-LCs by the end of its 5-year mission, depending on BH natal kick strength and observability constraints. We find that the overall yield, and distributions of eccentricities and masses of observed BH-LCs, can provide important constraints on the strength of BH natal kicks. Gaia-detected BH-LCs are expected to have very different orbital properties compared to those detectable via radio, X-ray, or gravitational-wave observations.
Before Inflation and after Black Holes
Stoltenberg, Henry
This dissertation covers work from three research projects relating to the physics before the start of inflation and information after the decay of a black hole. For the first project, we analyze the cosmological role of terminal vacua in the string theory landscape, and point out that existing work on this topic makes very strong assumptions about the properties of the terminal vacua. We explore the implications of relaxing these assumptions (by including "arrival" as well as "departure" terminals) and demonstrate that the results in earlier work are highly sensitive to their assumption of no arrival terminals. We use our discussion to make some general points about tuning and initial conditions in cosmology. The second project is a discussion of the black hole information problem. Under certain conditions the black hole information puzzle and the (related) arguments that firewalls are a typical feature of black holes can break down. We first review the arguments of Almheiri, Marolf, Polchinski and Sully (AMPS) favoring firewalls, focusing on entanglements in a simple toy model for a black hole and the Hawking radiation. By introducing a large and inaccessible system entangled with the black hole (representing perhaps a de Sitter stretched horizon or inaccessible part of a landscape) we show complementarity can be restored and firewalls can be avoided throughout the black hole's evolution. Under these conditions black holes do not have an "information problem". We point out flaws in some of our earlier arguments that such entanglement might be generically present in some cosmological scenarios, and call out certain ways our picture may still be realized. The third project also examines the firewall argument. A fundamental limitation on the behavior of quantum entanglement known as "monogamy" plays a key role in the AMPS argument. Our goal is to study and apply many-body entanglement theory to consider the entanglement among different parts of Hawking radiation and
Black holes in the universe
Camenzind, M.
While physicists have been grappling with the theory of black holes (BH), as shown by the many contributions to the Einstein year, astronomers have been successfully searching for real black holes in the Universe. Black hole astrophysics began in the 1960s with the discovery of quasars and other active galactic nuclei (AGN) in distant galaxies. Already in the 1960s it became clear that the most natural explanation for the quasar activity is the release of gravitational energy through accretion of gas onto supermassive black holes. The remnants of this activity have now been found in the centers of about 50 nearby galaxies. BH astrophysics received a new twist in the 1970s with the discovery of the X-ray binary (XRB) Cygnus X-1. The X-ray emitting compact object was too massive to be explained by a neutron star. Today, about 20 excellent BH candidates are known in XRBs. On the extragalactic scale, more than 100,000 quasars have been found in large galaxy surveys. At the redshift of the most distant ones, the Universe was younger than one billion years. The most enigmatic black hole candidates identified in the last years are the compact objects behind the Gamma-Ray Bursters. The formation of all these types of black holes is accompanied by extensive emission of gravitational waves. The detection of these strong gravity events is one of the biggest challenges for physicists in the near future.
Stationary black holes as holographs
Racz, Istvan [Yukawa Institute for Theoretical Physics, Kyoto University, Kyoto 606-01 (Japan); MTA KFKI, Reszecske- es Magfizikai Kutatointezet, H-1121 Budapest, Konkoly Thege Miklos ut 29-33 (Hungary)
Smooth spacetimes possessing a (global) one-parameter group of isometries and an associated Killing horizon in Einstein's theory of gravity are investigated. No assumption concerning the asymptotic structure is made; thereby, the selected spacetimes may be considered as generic distorted stationary black holes. First, spacetimes of arbitrary dimension, n ≥ 3, with matter satisfying the dominant energy condition and allowing a non-zero cosmological constant are investigated. In this part, complete characterization of the topology of the event horizon of 'distorted' black holes is given. It is shown that the topology of the event horizon of 'distorted' black holes is allowed to possess a much larger variety than that of the isolated black hole configurations. In the second part, four-dimensional (non-degenerate) electrovac distorted black hole spacetimes are considered. It is shown that the spacetime geometry and the electromagnetic field are uniquely determined in the black hole region once the geometry of the bifurcation surface and one of the electromagnetic potentials are specified there. Conditions guaranteeing the same type of determinacy, in a neighbourhood of the event horizon, on the domain of outer communication side are also investigated. In particular, they are shown to be satisfied in the analytic case.
Atomic structure in black hole
Nagatani, Yukinori
We propose, as a model of black holes, that any black hole has atomic structure in its interior and has no horizon. Our proposal is founded on a mean field approximation of gravity. The structure of our model consists of a (charged) singularity at the center and quantum fluctuations of fields around the singularity; namely, it is quite similar to that of atoms. Any properties of black holes, e.g. entropy, can be explained by the model. The model naturally quantizes black holes. In particular, we find the minimum black hole, whose structure is similar to that of the hydrogen atom and whose Schwarzschild radius is approximately 1.1287 times the Planck length. Our approach is conceptually similar to Bohr's model of the atomic structure, and the concept of the minimum Schwarzschild radius is similar to that of the Bohr radius. The model predicts that black holes carry baryon number, and that this baryon number is rapidly violated. This baryon number violation can be used as verification of the model.
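The quoted minimum Schwarzschild radius of roughly 1.1287 Planck lengths fixes a corresponding minimum mass through r_s = 2GM/c^2; a quick numerical check (my own arithmetic, not part of the paper):

import math

G = 6.67430e-11            # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8           # speed of light, m s^-1
hbar = 1.054571817e-34     # reduced Planck constant, J s

l_P = math.sqrt(hbar * G / c**3)     # Planck length, ~1.616e-35 m
m_P = math.sqrt(hbar * c / G)        # Planck mass,   ~2.176e-8 kg

r_min = 1.1287 * l_P                 # minimum Schwarzschild radius quoted in the abstract
M_min = r_min * c**2 / (2.0 * G)     # invert r_s = 2GM/c^2

print(f"r_min = {r_min:.3e} m")
print(f"M_min = {M_min:.3e} kg  ({M_min / m_P:.3f} Planck masses)")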
Black hole quantum spectrum
Corda, Christian [Institute for Theoretical Physics and Advanced Mathematics (IFM) Einstein-Galilei, Prato (Italy); Istituto Universitario di Ricerca ' ' Santa Rita' ' , Prato (Italy); International Institute for Applicable Mathematics and Information Sciences (IIAMIS), Hyderabad (India)
Introducing a black hole (BH) effective temperature, which takes into account both the non-strictly thermal character of Hawking radiation and the countable behavior of emissions of subsequent Hawking quanta, we recently re-analysed BH quasi-normal modes (QNMs) and interpreted them naturally in terms of quantum levels. In this work we improve such an analysis removing some approximations that have been implicitly used in our previous works and obtaining the corrected expressions for the formulas of the horizon's area quantization and the number of quanta of area and hence also for Bekenstein-Hawking entropy, its subleading corrections and the number of micro-states, i.e. quantities which are fundamental to realize the underlying quantum gravity theory, like functions of the QNMs quantum "overtone" number n and, in turn, of the BH quantum excited level. An approximation concerning the maximum value of n is also corrected. On the other hand, our previous results were strictly corrected only for scalar and gravitational perturbations. Here we show that the discussion holds also for vector perturbations. The analysis is totally consistent with the general conviction that BHs result in highly excited states representing both the "hydrogen atom" and the "quasi-thermal emission" in quantum gravity. Our BH model is somewhat similar to the semi-classical Bohr's model of the structure of a hydrogen atom. The thermal approximation of previous results in the literature is consistent with the results in this paper. In principle, such results could also have important implications for the BH information paradox.
Regular black hole in three dimensions
Myung, Yun Soo; Yoon, Myungseok
We find a new black hole in three-dimensional anti-de Sitter space by introducing an anisotropic perfect fluid inspired by the noncommutative black hole. This is a regular black hole with two horizons. We compare the thermodynamics of this black hole with that of the non-rotating BTZ black hole. The first law of thermodynamics is not compatible with the Bekenstein-Hawking entropy.
Black holes in brane worlds
A Kerr metric describing a rotating black hole is obtained on the three-brane in a five-dimensional Randall-Sundrum brane world by considering a rotating five-dimensional black string in the bulk. We examine the causal structure of this space-time through the geodesic equations.
A Presentation of the Black Hole Stretching Effect
Kontomaris, Stylianos Vasileios; Malamou, Anna
Black holes and the physics behind them is a fascinating topic for students of all levels. The exotic conditions which prevail near a black hole should be discussed and presented to undergraduate students in order to increase their interest in studying physics and to provide useful insights into basic physics concepts, such as non-uniform…
MASSIVE BLACK HOLES IN STELLAR SYSTEMS: 'QUIESCENT' ACCRETION AND LUMINOSITY
Volonteri, M.; Campbell, D.; Mateo, M.; Dotti, M.
Only a small fraction of local galaxies harbor an accreting black hole, classified as an active galactic nucleus. However, many stellar systems are plausibly expected to host black holes, from globular clusters to nuclear star clusters, to massive galaxies. The mere presence of stars in the vicinity of a black hole provides a source of fuel via mass loss of evolved stars. In this paper, we assess the expected luminosities of black holes embedded in stellar systems of different sizes and properties, spanning a large range of masses. We model the distribution of stars and derive the amount of gas available to a central black hole through a geometrical model. We estimate the luminosity of the black holes under simple, but physically grounded, assumptions on the accretion flow. Finally, we discuss the detectability of 'quiescent' black holes in the local universe.
The black hole information paradox apparently indicates the need for a fundamentally new ingredient in physics. The leading contender is nonlocality. Possible mechanisms for the nonlocality needed to restore unitarity to black hole evolution are investigated. Suggestions that such dynamics arise from ultra-Planckian modes in Hawking's derivation are investigated and found not to be relevant, in a picture using smooth slices spanning the exterior and interior of the horizon. However, no simultaneous description of modes that have fallen into the black hole and outgoing Hawking modes can be given without appearance of a large kinematic invariant, or other dependence on ultra-Planckian physics. This indicates that a reliable argument for information loss has not been constructed, and that strong gravitational dynamics is important. Such dynamics has been argued to be fundamentally nonlocal in extreme situations, such as those required to investigate the fate of information
Black holes, qubits and octonions
Borsten, L.; Dahanayake, D.; Duff, M.J.; Ebrahim, H.; Rubens, W.
We review the recently established relationships between black hole entropy in string theory and the quantum entanglement of qubits and qutrits in quantum information theory. The first example is provided by the measure of the tripartite entanglement of three qubits (Alice, Bob and Charlie), known as the 3-tangle, and the entropy of the 8-charge STU black hole of N=2 supergravity, both of which are given by the [SL(2)]^3 invariant hyperdeterminant, a quantity first introduced by Cayley in 1845. Moreover the classification of three-qubit entanglements is related to the classification of N=2 supersymmetric STU black holes. There are further relationships between the attractor mechanism and local distillation protocols and between supersymmetry and the suppression of bit flip errors. At the microscopic level, the black holes are described by intersecting D3-branes whose wrapping around the six compact dimensions T^6 provides the string-theoretic interpretation of the charges and we associate the three-qubit basis vectors, |ABC> (A,B,C = 0 or 1), with the corresponding 8 wrapping cycles. The black hole/qubit correspondence extends to the 56-charge N=8 black holes and the tripartite entanglement of seven qubits, where the measure is provided by Cartan's E_7 ⊃ [SL(2)]^7 invariant. The qubits are naturally described by the seven vertices ABCDEFG of the Fano plane, which provides the multiplication table of the seven imaginary octonions, reflecting the fact that E_7 has a natural structure of an O-graded algebra. This in turn provides a novel imaginary octonionic interpretation of the 56 = 7x8 charges of N=8: the 24 = 3x8 NS-NS charges correspond to the three imaginary quaternions and the 32 = 4x8 R-R to the four complementary imaginary octonions. We contrast this approach with that based on Jordan algebras and the Freudenthal triple system. N=8 black holes (or black strings) in five dimensions are also related to the bipartite entanglement of three qutrits (3-state systems).
Black holes in the early Universe.
Volonteri, Marta; Bellovary, Jillian
The existence of massive black holes (MBHs) was postulated in the 1960s, when the first quasars were discovered. In the late 1990s their reality was proven beyond doubt in the Milky Way and a handful of nearby galaxies. Since then, enormous theoretical and observational efforts have been made to understand the astrophysics of MBHs. We have discovered that some of the most massive black holes known, weighing billions of solar masses, powered luminous quasars within the first billion years of the Universe. The first MBHs must therefore have formed around the time the first stars and galaxies formed. Dynamical evidence also indicates that black holes with masses of millions to billions of solar masses ordinarily dwell in the centers of today's galaxies. MBHs populate galaxy centers today, and shone as quasars in the past; the quiescent black holes that we detect now in nearby bulges are the dormant remnants of this fiery past. In this review we report on basic, but critical, questions regarding the cosmological significance of MBHs. What physical mechanisms led to the formation of the first MBHs? How massive were the initial MBH seeds? When and where did they form? How is the growth of black holes linked to that of their host galaxy? The answers to most of these questions are works in progress, in the spirit of these reports on progress in physics.
A New Cosmological Model: Black Hole Universe
Zhang T. X.
A new cosmological model called black hole universe is proposed. According to this model, the universe originated from a hot star-like black hole with several solar masses, and gradually grew up through a supermassive black hole with billions of solar masses to the present state with hundred billion-trillion solar masses by accreting ambient materials and merging with other black holes. The entire space is structured with infinite layers hierarchically. The innermost three layers are the universe that we are living in, the outside called the mother universe, and the inside star-like and supermassive black holes called child universes. The outermost layer is infinite in radius and limits to zero for both the mass density and absolute temperature. The relationships among all layers or universes can be connected by the universe family tree. Mathematically, the entire space can be represented as a set of all universes. A black hole universe is a subset of the entire space or a subspace. The child universes are null sets or empty spaces. All layers or universes are governed by the same physics - the Einstein general theory of relativity with the Robertson-Walker metric of spacetime - and tend to expand outward physically. The evolution of the space structure is iterative. When one universe expands out, a new similar universe grows up from its inside. The entire life of a universe begins from its birth as a hot star-like or supermassive black hole, passes through growth and cooling, and expands to its death with infinitely large size and zero mass density and absolute temperature. The black hole universe model is consistent with the Mach principle, the observations of the universe, and the Einstein general theory of relativity. Its various aspects can be understood with the well-developed physics without any difficulty. The dark energy is not required for the universe to accelerate its expansion. The inflation is not necessary because the black hole universe
Cosmology with primordial black holes
Lindley, D.
Cosmologies containing a substantial amount of matter in the form of evaporating primordial black holes are investigated. A review of constraints on the numbers of such black holes, including an analysis of a new limit found by looking at the destruction of deuterium by high energy photons, shows that there must be a negligible population of small black holes from the era of cosmological nucleosynthesis onwards, but that there are no strong constraints before this time. The major part of the work is based on the construction of detailed, self-consistent cosmological models in which black holes are continually forming and evaporating. The interest in these models centres on the question of baryon generation, which occurs via the asymmetric decay of a new type of particle which appears as a consequence of the recently developed Grand Unified Theories of elementary particles. Unfortunately, there is so much uncertainty in the models that firm conclusions are difficult to reach; however, it seems feasible in principle that primordial black holes could be responsible for a significant part of the present matter density of the Universe.
Critical point in the phase diagram of primordial quark-gluon matter from black hole physics
Critelli, Renato; Noronha, Jorge; Noronha-Hostler, Jacquelyn; Portillo, Israel; Ratti, Claudia; Rougemont, Romulo
Strongly interacting matter undergoes a crossover phase transition at high temperatures T ~ 10^12 K and zero net-baryon density. A fundamental question in the theory of strong interactions, QCD, is whether a hot and dense system of quarks and gluons displays critical phenomena when doped with more quarks than antiquarks, where net-baryon number fluctuations diverge. Recent lattice QCD work indicates that such a critical point can only occur in the baryon dense regime of the theory, which defies a description from first-principles calculations. Here we use the holographic gauge/gravity correspondence to map the fluctuations of baryon charge in the dense quark-gluon liquid onto a numerically tractable gravitational problem involving the charge fluctuations of holographic black holes. This approach quantitatively reproduces ab initio results for the lowest order moments of the baryon fluctuations and makes predictions for the higher-order baryon susceptibilities and also for the location of the critical point, which is found to be within the reach of heavy-ion collision experiments.
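For orientation (standard lattice-QCD definitions, not results of this paper), the baryon susceptibilities referred to above are the Taylor coefficients of the pressure in the baryon chemical potential, with the second-order one measuring net-baryon number fluctuations:

\[
\chi_n^{B}(T) \;=\; \left.\frac{\partial^{\,n}\,(P/T^{4})}{\partial\,(\mu_{B}/T)^{n}}\right|_{\mu_{B}=0},
\qquad
\chi_2^{B} \;=\; \frac{1}{VT^{3}}\,\bigl\langle (\delta N_{B})^{2} \bigr\rangle ,
\]

and the divergence of such fluctuations at nonzero baryon density is the critical-point signature discussed in the abstract.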
Lee–Wick black holes
Cosimo Bambi
We derive and study an approximate static vacuum solution generated by a point-like source in a higher derivative gravitational theory with a pair of complex conjugate ghosts. The gravitational theory is local and characterized by a high derivative operator compatible with Lee–Wick unitarity. In particular, the tree-level two-point function only shows a pair of complex conjugate poles besides the massless spin two graviton. We show that singularity-free black holes exist when the mass of the source M exceeds a critical value M_crit. For M > M_crit the spacetime structure is characterized by an outer event horizon and an inner Cauchy horizon, while for M = M_crit we have an extremal black hole with vanishing Hawking temperature. The evaporation process leads to a remnant that approaches the zero-temperature extremal black hole state in an infinite amount of time.
The black hole quantum atmosphere
Dey, Ramit; Liberati, Stefano; Pranzetti, Daniele
Ever since the discovery of black hole evaporation, the region of origin of the radiated quanta has been a topic of debate. Recently it was argued by Giddings that the Hawking quanta originate from a region well outside the black hole horizon by calculating the effective radius of a radiating body via the Stefan-Boltzmann law. In this paper we try to further explore this issue and end up corroborating this claim, using both a heuristic argument and a detailed study of the stress energy tensor. We show that the Hawking quanta originate from what might be called a quantum atmosphere around the black hole with energy density and fluxes of particles peaked at about 4 MG, running contrary to the popular belief that these originate from the ultra high energy excitations very close to the horizon. This long distance origin of Hawking radiation could have a profound impact on our understanding of the information and transplanckian problems.
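A heuristic way to see why the emission region can be much larger than the horizon (a back-of-the-envelope check, not the paper's stress-tensor analysis) is to compare the thermal wavelength of a typical Hawking quantum with the Schwarzschild radius; the solar-mass value below is an assumed example:

import math

G = 6.67430e-11; c = 2.99792458e8
hbar = 1.054571817e-34; k_B = 1.380649e-23
b_wien = 2.897771955e-3            # Wien displacement constant, m K

M = 1.989e30                       # example mass: 1 solar mass, kg (assumed)

T_H = hbar * c**3 / (8.0 * math.pi * G * M * k_B)   # Hawking temperature, ~6.2e-8 K
r_s = 2.0 * G * M / c**2                            # Schwarzschild radius, ~3 km
lam_peak = b_wien / T_H                             # Wien peak wavelength of the Hawking spectrum

print(f"T_H    = {T_H:.2e} K")
print(f"r_s    = {r_s:.2e} m")
print(f"lambda = {lam_peak:.2e} m  (~{lam_peak / (G * M / c**2):.0f} GM/c^2)")
# The typical quantum is tens of GM/c^2 across, i.e. it cannot be localized at the horizon.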
Massive Black Holes and Galaxies
Evidence has been accumulating for several decades that many galaxies harbor central mass concentrations that may be in the form of black holes with masses between a few million and a few billion times the mass of the Sun. I will discuss measurements over the last two decades, employing adaptive optics imaging and spectroscopy on large ground-based telescopes, that prove the existence of such a massive black hole in the Center of our Milky Way, beyond any reasonable doubt. These data also provide key insights into its properties and environment. Most recently, a tidally disrupting cloud of gas has been discovered on an almost radial orbit that reached its peri-distance of ~2000 Schwarzschild radii in 2014, promising to be a valuable tool for exploring the innermost accretion zone. Future interferometric studies of the Galactic Center black hole promise to be able to test gravity in its strong field limit.
Black hole formation in perfect fluid collapse
Goswami, Rituparno; Joshi, Pankaj S
We construct here a special class of perfect fluid collapse models which generalizes the homogeneous dust collapse solution in order to include nonzero pressures and inhomogeneities into evolution. It is shown that a black hole is necessarily generated as the end product of continued gravitational collapse, rather than a naked singularity. We examine the nature of the central singularity forming as a result of endless collapse and it is shown that no nonspacelike trajectories can escape from the central singularity. Our results provide some insights into how the dynamical collapse works and into the possible formulations of the cosmic censorship hypothesis, which is as yet a major unsolved problem in black hole physics
Time dependent black holes and scalar hair
Chadburn, Sarah; Gregory, Ruth
We show how to correctly account for scalar accretion onto black holes in scalar field models of dark energy by a consistent expansion in terms of a slow roll parameter. At leading order, we find an analytic solution for the scalar field within our Hubble volume, which is regular on both black hole and cosmological event horizons, and compute the back reaction of the scalar on the black hole, calculating the resulting expansion of the black hole. Our results are independent of the relative size of black hole and cosmological event horizons. We comment on the implications for more general black hole accretion, and the no hair theorems. (paper)
Black holes a very short introduction
Blundell, Katherine
Black holes are a constant source of fascination to many due to their mysterious nature. Black Holes: A Very Short Introduction addresses a variety of questions, including what a black hole actually is, how they are characterized and discovered, and what would happen if you came too close to one. It explains how black holes form and grow—by stealing material that belongs to stars—as well as how many there may be in the Universe. It also explores the large black holes found in the centres of galaxies, and how black holes power quasars and lie behind other spectacular phenomena in the cosmos.
Jet precession in binary black holes
Abraham, Zulema
Supermassive binary black holes are thought to lie at the centres of merging galaxies. The blazar OJ 287 is the poster child of such systems, showing strong and periodic variability across the electromagnetic spectrum. A new study questions the physical origin of this variability.
Black Holes and the Large Hadron Collider
Roy, Arunava
The European Center for Nuclear Research or CERN's Large Hadron Collider (LHC) has caught our attention partly due to the film "Angels and Demons." In the movie, an antimatter bomb attack on the Vatican is foiled by the protagonist. Perhaps just as controversial is the formation of mini black holes (BHs). Recently, the American Physical Society…
Black hole entropy and finite geometry
Levay, P.; Saniga, M.; Vrana, P.; Pracna, Petr
Vol. 79, No. 8 (2009), 084036. ISSN 1550-7998. Keywords: Maxwell-Einstein supergravity; attractors; black hole entropy.
Black holes and compact objects: Quantum aspects
This is a summary of the papers presented in session W2 on a fairly wide-ranging variety of topics in the area of black hole physics and quantum aspects of gravity, including quantum field and string theory in curved spacetimes. In addition, experts in a couple of topical subjects were invited to present short surveys on the ...
The black hole interpretation of string theory
For scattering processes in which both s and t are significantly larger than the Planck mass we have string theory on the one hand, and on the other hand the physics of black hole formation and decay. Both these descriptions are as yet ill understood. It is argued in this paper that a lot of insight
Black hole thermodynamics under the microscope
Falls, Kevin; Litim, Daniel F.
A coarse-grained version of the effective action is used to study the thermodynamics of black holes, interpolating from largest to smallest masses. The physical parameters of the black hole are linked to the running couplings by thermodynamics, and the corresponding equation of state includes quantum corrections for temperature, specific heat, and entropy. If quantum gravity becomes asymptotically safe, the state function predicts conformal scaling in the limit of small horizon area and bounds on black hole mass and temperature. A metric-based derivation for the equation of state and quantum corrections to the thermodynamical, statistical, and phenomenological definition of entropy are also given. Further implications and limitations of our study are discussed.
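For reference, the uncorrected semiclassical relations that such a quantum-corrected equation of state must reduce to for large Schwarzschild black holes are the standard ones (quoted here for context, not taken from the paper):

\[
T_{\rm H} = \frac{\hbar c^{3}}{8\pi G k_{B} M}, \qquad
S_{\rm BH} = \frac{k_{B} c^{3} A}{4 G \hbar}, \qquad
C \equiv \frac{\partial (Mc^{2})}{\partial T_{\rm H}} = -\,\frac{8\pi G k_{B} M^{2}}{\hbar c},
\]

with the negative specific heat of large black holes being among the quantities that the running-coupling corrections modify at small horizon area.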
Geometric inequalities for axially symmetric black holes
A geometric inequality in general relativity relates quantities that have both a physical interpretation and a geometrical definition. It is well known that the parameters that characterize the Kerr-Newman black hole satisfy several important geometric inequalities. Remarkably enough, some of these inequalities also hold for dynamical black holes. This kind of inequality plays an important role in the characterization of gravitational collapse; such inequalities are closely related to the cosmic censorship conjecture. Axially symmetric black holes are the natural candidates to study these inequalities because the quasi-local angular momentum is well defined for them. We review recent results in this subject and we also describe the main ideas behind the proofs. Finally, a list of relevant open problems is presented.
On the interior of (quantum) black holes
Torres, R.
Different approaches to quantum gravity conclude that black holes may possess an inner horizon, in addition to the (quantum corrected) outer 'Schwarzschild' horizon. In this Letter we assume the existence of this inner horizon and explain the physical process that might lead to the tunneling of particles through it. It is shown that the tunneling would produce a flux of particles with a spectrum that deviates from the pure thermal one. Under the appropriate approximation the extremely high temperature of this horizon is calculated for an improved quantum black hole. It is argued that the flux of particles tunneled through the horizons affects the dynamics of the black hole interior leading to an endogenous instability
Varying constants, black holes, and quantum gravity
Carlip, S.
Tentative observations and theoretical considerations have recently led to renewed interest in models of fundamental physics in which certain 'constants' vary in time. Assuming fixed black hole mass and the standard form of the Bekenstein-Hawking entropy, Davies, Davis and Lineweaver have argued that the laws of black hole thermodynamics disfavor models in which the fundamental electric charge e changes. I show that with these assumptions, similar considerations severely constrain 'varying speed of light' models, unless we are prepared to abandon cherished assumptions about quantum gravity. Relaxation of these assumptions permits sensible theories of quantum gravity with 'varying constants', but also eliminates the thermodynamic constraints, though the black hole mass spectrum may still provide some restrictions on the range of allowable models.
Hot Accretion onto Black Holes with Outflow
Park Myeong-Gu
Classic Bondi accretion flow can be generalized to rotating viscous accretion flow. Studies of hot accretion flow onto black holes show that its physical characteristics change from Bondi-like for small gas angular momentum to disk-like for Keplerian gas angular momentum. In particular, the mass accretion rate divided by the Bondi accretion rate is proportional to the viscosity parameter alpha and inversely proportional to the gas angular momentum divided by the Keplerian angular momentum at the Bondi radius, for gas angular momentum comparable to the Keplerian value. The possible presence of outflow will increase the mass inflow rate at the Bondi radius but decrease the mass accretion rate across the black hole horizon by many orders of magnitude. This implies that the growth history of supermassive black holes and their coevolution with host galaxies will be dramatically changed when the accreted gas has angular momentum or develops an outflow.
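The scaling stated above can be written as a one-line toy relation; the normalization k below is an assumed order-unity constant, and the function only illustrates the quoted proportionality, not the paper's actual solution:

def accretion_rate_ratio(alpha, l_over_lK, k=1.0):
    """Toy scaling from the abstract: Mdot / Mdot_Bondi ~ k * alpha / (l / l_K(r_B)),
    stated to hold for gas angular momentum comparable to the Keplerian value at the
    Bondi radius. k is an assumed order-unity normalization, not a value from the paper."""
    if l_over_lK <= 0.0:
        raise ValueError("l_over_lK must be positive")
    return k * alpha / l_over_lK

# Example: alpha = 0.1 and gas angular momentum at half the Keplerian value at r_B
print(accretion_rate_ratio(0.1, 0.5))   # 0.2 Bondi rates (up to the unknown normalization k)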
Cosmological and black hole apparent horizons
Faraoni, Valerio
This book overviews the extensive literature on apparent cosmological and black hole horizons. In theoretical gravity, dynamical situations such as gravitational collapse, black hole evaporation, and black holes interacting with non-trivial environments, as well as the attempts to model gravitational waves occurring in highly dynamical astrophysical processes, require that the concept of event horizon be generalized. Inequivalent notions of horizon abound in the technical literature and are discussed in this manuscript. The book begins with a quick review of basic material in the first one and a half chapters, establishing a unified notation. Chapter 2 reminds the reader of the basic tools used in the analysis of horizons and reviews the various definitions of horizons appearing in the literature. Cosmological horizons are the playground in which one should take baby steps in understanding horizon physics. Chapter 3 analyzes cosmological horizons, their proposed thermodynamics, and several coordinate systems....
PHYSICS OF THE GALACTIC CENTER CLOUD G2, ON ITS WAY TOWARD THE SUPERMASSIVE BLACK HOLE
Burkert, A.; Schartmann, M.; Alig, C. [University Observatory Munich, Scheinerstrasse 1, D-81679 Munich (Germany); Gillessen, S.; Genzel, R.; Fritz, T. K.; Eisenhauer, F., E-mail: [email protected] [Max-Planck-Institute for Extraterrestrial Physics, Giessenbachstrasse 1, 85758 Garching (Germany)
We investigate the origin, structure, and evolution of the small gas cloud G2, which is on an orbit almost straight into the Galactic central supermassive black hole (SMBH). G2 is a sensitive probe of the hot accretion zone of Sgr A*, requiring gas temperatures and densities that agree well with models of captured shock-heated stellar winds. Its mass is equal to the critical mass below which cold clumps would be destroyed quickly by evaporation. Its mass is also constrained by the fact that at apocenter its sound crossing timescale was equal to its infall timescale. Our numerical simulations show that the observed structure and evolution of G2 can be well reproduced if it forms in pressure equilibrium with its surroundings in 1995 at a distance from the SMBH of 7.6 × 10^16 cm. If the cloud had formed at apocenter in the 'clockwise' stellar disk as expected from its orbit, it would be torn into a very elongated spaghetti-like filament by 2011, which is not observed. This problem can be solved if G2 is the head of a larger, shell-like structure that formed at apocenter. Our numerical simulations show that this scenario explains not only G2's observed kinematical and geometrical properties but also the Brγ observations of a low surface brightness gas tail that trails the cloud. In 2013, while passing the SMBH, G2 will break up into a string of droplets that within the next 30 years will mix with the surrounding hot gas and trigger cycles of active galactic nucleus activity.
The effects of baryon physics, black holes and active galactic nucleus feedback on the mass distribution in clusters of galaxies
Martizzi, Davide; Teyssier, Romain; Moore, Ben; Wentz, Tina
The spatial distribution of matter in clusters of galaxies is mainly determined by the dominant dark matter component; however, physical processes involving baryonic matter are able to modify it significantly. We analyse a set of 500 pc resolution cosmological simulations of a cluster of galaxies with mass comparable to Virgo, performed with the AMR code RAMSES. We compare the mass density profiles of the dark, stellar and gaseous matter components of the cluster that result from different assumptions for the subgrid baryonic physics and galaxy formation processes. First, the prediction of a gravity-only N-body simulation is compared to that of a hydrodynamical simulation with standard galaxy formation recipes, and then all results are compared to a hydrodynamical simulation which includes thermal active galactic nucleus (AGN) feedback from supermassive black holes (SMBHs). We find the usual effects of overcooling and adiabatic contraction in the run with standard galaxy formation physics, but very different results are found when implementing SMBHs and AGN feedback. Star formation is strongly quenched, producing lower stellar densities throughout the cluster, and much less cold gas is available for star formation at low redshifts. At redshift z= 0 we find a flat density core of radius 10 kpc in both the dark and stellar matter density profiles. We speculate on the possible formation mechanisms able to produce such cores and we conclude that they can be produced through the coupling of different processes: (I) dynamical friction from the decay of black hole orbits during galaxy mergers; (II) AGN-driven gas outflows producing fluctuations of the gravitational potential causing the removal of collisionless matter from the central region of the cluster; (III) adiabatic expansion in response to the slow expulsion of gas from the central region of the cluster during the quiescent mode of AGN activity.
Extreme black hole with an electric dipole moment
Horowitz, G.T.; Tada, T.
We construct a new extreme black hole solution in a toroidally compactified heterotic string theory. The black hole saturates the Bogomol'nyi bound, has zero angular momentum, but a nonzero electric dipole moment. It is obtained by starting with a higher-dimensional rotating charged black hole, and compactifying one direction in the plane of rotation.
Testing the black hole "no-hair" hypothesis
Cardoso, Vitor
Black holes in General Relativity are very simple objects. This property, that goes under the name of "no-hair," has been refined in the last few decades and admits several versions. The simplicity of black holes makes them ideal testbeds of fundamental physics and of General Relativity itself. Here we discuss the no-hair property of black holes, how it can be measured in the electromagnetic or gravitational window, and what it can possibly tell us about our universe.
Interior structure of rotating black holes. III. Charged black holes
Hamilton, Andrew J. S.
This paper extends to the case of charged rotating black holes the conformally stationary, axisymmetric, conformally separable solutions presented for uncharged rotating black holes in a companion paper. In the present paper, the collisionless fluid accreted by the black hole may be charged. The charge of the black hole is determined self-consistently by the charge accretion rate. As in the uncharged case, hyper-relativistic counterstreaming between ingoing and outgoing streams drives inflation at (just above) the inner horizon, followed by collapse. If both ingoing and outgoing streams are charged, then conformal separability holds during early inflation, but fails as inflation develops. If conformal separability is imposed throughout inflation and collapse, then only one of the ingoing and outgoing streams can be charged: the other must be neutral. Conformal separability prescribes a hierarchy of boundary conditions on the ingoing and outgoing streams incident on the inner horizon. The dominant radial boundary conditions require that the incident ingoing and outgoing number densities be uniform with latitude, but the charge per particle must vary with latitude such that the incident charge densities vary in proportion to the radial electric field. The subdominant angular boundary conditions require specific forms of the incident number- and charge-weighted angular motions. If the streams fall freely from outside the horizon, then the prescribed angular conditions can be achieved by the charged stream, but not by the neutral stream. Thus, as in the case of an uncharged black hole, the neutral stream must be considered to be delivered ad hoc to just above the inner horizon.
Progress towards 3D black hole merger simulations
Seidel, E.
I review recent progress in 3D numerical relativity, focused on simulations involving black holes evolved with singularity avoiding slicings, but also touching on recent results in advanced techniques like black hole excision. After a long series of axisymmetric and perturbative studies of distorted black holes and black hole collisions, similar studies were carried out with full 3D codes. The results showed that such black hole simulations can be carried out extremely accurately, although instabilities plague the simulation at uncomfortably early times. However, new formulations of Einstein's equations allow much more stable 3D evolutions than ever before, enabling the first studies of 3D gravitational collapse to a black hole. With these new formulations, for example, it has been possible to perform the first detailed simulations of 3D grazing collisions of black holes with unequal mass, spin, and with orbital angular momentum. I discuss the 3D black hole physics that can now be studied, and prospects for the future, which look increasingly bright due to recent progress in formulations, black hole excision, new gauge conditions, and larger computers. Simulations may soon be able to provide information about the final plunge of two black holes, of relevance for gravitational wave astronomy. (author)
Erratum: Quantum corrections and black hole spectroscopy
Jiang, Qing-Quan; Han, Yan; Cai, Xu
In my paper [Qing-Quan Jiang, Yan Han, Xu Cai, Quantum corrections and black hole spectroscopy, JHEP 08 (2010) 049], there was an error in deriving the black hole spectroscopy. In this erratum, we attempt to rectify it.
Entropy of black holes with multiple horizons
He, Yun; Ma, Meng-Sen; Zhao, Ren
We examine the entropy of black holes in de Sitter space and black holes surrounded by quintessence. These black holes have multiple horizons, including at least the black hole event horizon and a horizon outside it (the cosmological horizon for de Sitter black holes and the "quintessence horizon" for the black holes surrounded by quintessence). Based on the consideration that the two horizons are not independent of each other, we conjecture that the total entropy of these black holes should not be simply the sum of the entropies of the two horizons, but should have an extra term coming from the correlations between the two horizons. Different from our previous works, in this paper we consider the cosmological constant as the variable and employ an effective method to derive the explicit form of the entropy. We also try to discuss the thermodynamic stabilities of these black holes according to the entropy and the effective temperature.
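Schematically (my paraphrase of the conjecture, with the correlation term left unspecified since its explicit form is derived only in the paper), the proposal is that the total entropy is not additive over the two horizons:

\[
S_{\rm tot} \;=\; \frac{k_{B}\,A_{\rm BH}}{4\,l_{P}^{2}} \;+\; \frac{k_{B}\,A_{\rm CH}}{4\,l_{P}^{2}} \;+\; S_{\rm corr}(A_{\rm BH},A_{\rm CH}),
\]

where A_BH and A_CH are the areas of the black hole horizon and of the outer (cosmological or "quintessence") horizon, and S_corr encodes the correlations between them.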
Black hole entropy, curved space and monsters
Hsu, Stephen D.H.; Reeb, David
We investigate the microscopic origin of black hole entropy, in particular the gap between the maximum entropy of ordinary matter and that of black holes. Using curved space, we construct configurations with entropy greater than the area A of a black hole of equal mass. These configurations have pathological properties and we refer to them as monsters. When monsters are excluded we recover the entropy bound on ordinary matter, S < A^(3/4). This bound implies that essentially all of the microstates of a semiclassical black hole are associated with the growth of a slightly smaller black hole which absorbs some additional energy. Our results suggest that the area entropy of black holes is the logarithm of the number of distinct ways in which one can form the black hole from ordinary matter and smaller black holes, but only after the exclusion of monster states.
Black Holes: A Selected Bibliography.
Fraknoi, Andrew
Offers a selected bibliography pertaining to black holes with the following categories: introductory books; introductory articles; somewhat more advanced articles; readings about Einstein's general theory of relativity; books on the death of stars; articles on the death of stars; specific articles about Supernova 1987A; relevant science fiction…
Black Holes in Our Universe
From Pinholes to Black Holes
Fenimore, Edward E. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
Pinhole photography has made major contributions to astrophysics through the use of "coded apertures". Coded apertures were instrumental in locating gamma-ray bursts and proving that they originate in faraway galaxies, some from the birth of black holes from the first stars that formed just after the big bang.
Black holes and trapped points
Krolak, A.
Black holes are defined and their properties investigated without use of any global causality restriction. Also the boundary at infinity of space-time is not needed. When the causal conditions are brought in, the equivalence with the usual approach is established. (author)
A Black Hole Spectral Signature
Titarchuk, Lev; Laurent, Philippe
An accreting black hole is, by definition, characterized by the drain. Namely, the matter falls into a black hole much the same way as water disappears down a drain: matter goes in and nothing comes out. As this can only happen in a black hole, it provides a way to see "a black hole", a unique observational signature. The accretion proceeds almost in a free-fall manner close to the black hole horizon, where the strong gravitational field dominates the pressure forces. In this paper we present analytical calculations and Monte-Carlo simulations of the specific features of X-ray spectra formed as a result of upscattering of the soft (disk) photons in the converging inflow (CI) into the black hole. The full relativistic treatment has been implemented to reproduce these spectra. We show that spectra in the soft state of black hole systems (BHS) can be described as the sum of a thermal (disk) component and the convolution of some fraction of this component with the CI upscattering spread (Green's) function. The latter boosted photon component is seen as an extended power law at energies much higher than the characteristic energy of the soft photons. We demonstrate the stability of the power spectral index over a wide range of plasma temperatures (0-10 keV) and mass accretion rates (higher than 2 in Eddington units). We also demonstrate that the sharp high-energy cutoff occurs at energies of 200-400 keV, which are related to the average energy of electrons $m_e c^2$ impinging upon the event horizon. The spectrum is practically identical to the standard thermal Comptonization spectrum when the CI plasma temperature is of order 50 keV (typical for the hard state of BHS). In this case one can see the effect of the bulk motion only at high energies, where there is an excess in the CI spectrum with respect to the pure thermal one. Furthermore we demonstrate that the change of spectral shapes from the soft X-ray state to the hard X-ray state is clearly to be
Confluent Heun functions and the physics of black holes: Resonant frequencies, Hawking radiation and scattering of scalar waves
Vieira, H.S., E-mail: [email protected] [Departamento de Física, Universidade Federal da Paraíba, Caixa Postal 5008, CEP 58051-970, João Pessoa, PB (Brazil); Centro de Ciências, Tecnologia e Saúde, Universidade Estadual da Paraíba, CEP 58233-000, Araruna, PB (Brazil); Bezerra, V.B., E-mail: [email protected] [Departamento de Física, Universidade Federal da Paraíba, Caixa Postal 5008, CEP 58051-970, João Pessoa, PB (Brazil)
We apply the confluent Heun functions to study the resonant frequencies (quasispectrum), the Hawking radiation and the scattering process of scalar waves, in a class of spacetimes, namely, the ones generated by a Kerr–Newman–Kasuya spacetime (dyon black hole) and a Reissner–Nordström black hole surrounded by a magnetic field (Ernst spacetime). In both spacetimes, the solutions for the angular and radial parts of the corresponding Klein–Gordon equations are obtained exactly, for massive and massless fields, respectively. The special cases of Kerr and Schwarzschild black holes are analyzed and the solutions obtained, as well as in the case of a Schwarzschild black hole surrounded by a magnetic field. In all these special situations, the resonant frequencies, Hawking radiation and scattering are studied. - Highlights: • Charged massive scalar field in the dyon black hole and massless scalar field in the Ernst spacetime are analyzed. • The confluent Heun functions are applied to obtain the solution of the Klein–Gordon equation. • The resonant frequencies are obtained. • The Hawking radiation and the scattering process of scalar waves are examined.
Charge Fluctuations of an Uncharged Black Hole
In this paper we calculate charge fluctuations of a Schwarzschild black hole of mass $M$ confined within a perfectly reflecting cavity of radius $R$ in thermal equilibrium with various species of radiation and fermions. Charge conservation is constrained by a Lagrange multiplier (the chemical potential). Black hole charge fluctuations are expected owing to continuous absorption and emission of particles by the black hole. For black holes much more massive than $10^{16}$ g, these fluctuations ...
Bosonic instability of charged black holes
Gaina, A.B.; Ternov, I.M.
The processes of spontaneous and induced production and accumulation of charged bosons on quasibound superradiant levels in the field of a Kerr-Newman black hole are analysed. It is shown that bosonic instability may be caused exclusively by the rotation of the black hole. In particular, the Reissner-Nordstrom configuration is stable. In the case of a rotating and charged black hole, the bosonic instability may cause an increase of the charge of the black hole.
Will black holes eventually engulf the Universe?
Martin-Moruno, Prado; Jimenez Madrid, Jose A.; Gonzalez-Diaz, Pedro F.
The Babichev-Dokuchaev-Eroshenko model for the accretion of dark energy onto black holes has been extended to deal with black holes with non-static metrics. The possibility that, for an asymptotic observer, a black hole with large mass will rapidly increase in mass and eventually engulf the Universe at a finite time in the future has been studied by using reasonable values for astronomical parameters. It is concluded that such a phenomenon is forbidden for all black holes in quintessential cosmological models.
Event horizon image within black hole shadow
Dokuchaev, V. I.; Nazarova, N. O.
The external border of the black hole shadow is washed out by radiation from matter plunging into the black hole and approaching the event horizon. This effect will crucially influence the results of future observations by the Event Horizon Telescope. We show that gravitational lensing of the luminous matter plunging into the black hole provides the event horizon visualization within the black hole shadow. The lensed image of the event horizon is formed by the last highly red-shifted photons emitted by t...
Electromagnetic ``black holes'' in hyperbolic metamaterials
Smolyaninov, Igor
We demonstrate that spatial variations of the dielectric tensor components in a hyperbolic metamaterial may lead to formation of electromagnetic ``black holes'' inside this metamaterial. Similar to real black holes, horizon area of the electromagnetic ``black holes'' is quantized in units of the effective ``Planck scale'' squared. Potential experimental realizations of such electromagnetic ``black holes'' will be considered. For example, this situation may be realized in a hyperbolic metamaterial in which the dielectric component exhibits critical opalescence.
Quantum Black Holes As Elementary Particles
Are black holes elementary particles? Are they fermions or bosons? We investigate the remarkable possibility that quantum black holes are the smallest and heaviest elementary particles. We are able to construct various fundamental quantum black holes: the spin-0, spin 1/2, spin-1, and the Planck-charge cases, using the results in general relativity. Quantum black holes in the neighborhood of the Galaxy could resolve the paradox posed by the Greisen-Zatsepin-Kuzmin limit on the energy of cosmi...
Catastrophic Instability of Small Lovelock Black Holes
Takahashi, Tomohiro; Soda, Jiro
We study the stability of static black holes in Lovelock theory, which is a natural higher dimensional generalization of Einstein theory. We show that Lovelock black holes are stable under vector perturbations in all dimensions. However, we prove that small Lovelock black holes are unstable under tensor perturbations in even dimensions and under scalar perturbations in odd dimensions. Therefore, we can conclude that small Lovelock black holes are unstable in any dimension. The instability is ...
Thermodynamics of black-holes in Brans-Dicke gravity
Kim, H.; Kim, Y.
It has recently been argued that non-trivial Brans-Dicke black-hole solutions different from the usual Schwarzschild solution could exist. The authors attempt here to 'censor' these non-trivial Brans-Dicke black-hole solutions by examining their thermodynamic properties. Quantities like the Hawking temperature and entropy of the black holes are computed. The analysis of the behaviors of these thermodynamic quantities appears to show that even in Brans-Dicke gravity, the usual Schwarzschild space-time turns out to be the only physically relevant uncharged black-hole solution.
Scalar-Tensor Black Holes Embedded in an Expanding Universe
Tretyakova, Daria; Latosh, Boris
In this review we focus our attention on scalar-tensor gravity models and their empirical verification in terms of black hole and wormhole physics. We focus on a black hole, embedded in an expanding universe, describing both cosmological and astrophysical scales. We show that in scalar-tensor gravity it is quite common that the local geometry is isolated from the cosmological expansion, so that it does not backreact on the black hole metric. We try to extract common features of scalar-tensor black holes in an expanding universe and point out the gaps that must be filled.
Black holes in loop quantum gravity.
Perez, Alejandro
This is a review of results on black hole physics in the context of loop quantum gravity. The key feature underlying these results is the discreteness of geometric quantities at the Planck scale predicted by this approach to quantum gravity. Quantum discreteness follows directly from the canonical quantization prescription when applied to the action of general relativity that is suitable for the coupling of gravity with gauge fields, and especially with fermions. Planckian discreteness and causal considerations provide the basic structure for the understanding of the thermal properties of black holes close to equilibrium. Discreteness also provides a fresh new look at more (at the moment) speculative issues, such as those concerning the fate of information in black hole evaporation. The hypothesis of discreteness leads, also, to interesting phenomenology with possible observational consequences. The theory of loop quantum gravity is a developing program; this review reports its achievements and open questions in a pedagogical manner, with an emphasis on quantum aspects of black hole physics.
Compensating Scientism through "The Black Hole."
Roth, Lane
The focal image of the film "The Black Hole" functions as a visual metaphor for the sacred, order, unity, and eternal time. The black hole is a symbol that unites the antinomic pairs of conscious/unconscious, water/fire, immersion/emersion, death/rebirth, and hell/heaven. The black hole is further associated with the quest for…
Area spectra of near extremal black holes
Chen, Deyou; Yang, Haitang; Zu, Xiaotao
Motivated by Maggiore's new interpretation of quasinormal modes, we investigate area spectra of a near extremal Schwarzschild-de Sitter black hole and a higher-dimensional near extremal Reissner-Nordstrom-de Sitter black hole. The result shows that the area spectra are equally spaced and irrelevant to the parameters of the black holes. (orig.)
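For background, when quasinormal-mode frequencies are reinterpreted along Maggiore's lines, the area spacing usually quoted for the Schwarzschild case is the value below; it is included here only as orientation and is not taken from the abstract above, which concerns the near-extremal de Sitter cases.

```latex
% Equally spaced horizon-area spectrum in Maggiore's quasinormal-mode
% interpretation (commonly quoted Schwarzschild result):
\Delta A = 8\pi\,\ell_{P}^{2} = \frac{8\pi\hbar G}{c^{3}},
\qquad A_{n} \simeq 8\pi\,\ell_{P}^{2}\, n \quad (n \gg 1).
```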
Extremal black holes in N=2 supergravity
Katmadas, S.
An explanation for the entropy of black holes has been an outstanding problem in recent decades. A special case where this is possible is that of extremal black holes in N=2 supergravity in four and five dimensions. The best developed case is for black holes preserving some supersymmetry (BPS),
New entropy formula for Kerr black holes
González Hernán A.
We introduce a new entropy formula for Kerr black holes inspired by recent results for 3-dimensional black holes and cosmologies with soft Heisenberg hair. We show that Kerr–Taub–NUT black holes also obey the same formula.
On black holes and gravitational waves
Loinger, Angelo
Black holes and gravitational waves are theoretical entities of present-day astrophysics. Various observed phenomena have been associated with the concept of a black hole; until now, nobody has detected gravitational waves. The essays contained in this book aim at showing that the concept of black holes arises from a misinterpretation of general relativity and that gravitational waves cannot exist.
Black Hole Monodromy and Conformal Field Theory
Castro, A.; Lapan, J.M.; Maloney, A.; Rodriguez, M.J.
The analytic structure of solutions to the Klein-Gordon equation in a black hole background, as represented by monodromy data, is intimately related to black hole thermodynamics. It encodes the "hidden conformal symmetry" of a nonextremal black hole, and it explains why features of the inner event
On Quantum Contributions to Black Hole Growth
Spaans, M.
The effects of Wheeler's quantum foam on black hole growth are explored from an astrophysical perspective. Quantum fluctuations in the form of mini ($10^{-5}$ g) black holes can couple to macroscopic black holes and allow the latter to grow exponentially in mass on a time scale of $10^{9}$ years.
Massive Black Hole Implicated in Stellar Destruction
of Alabama who led the study. Irwin and his colleagues obtained optical spectra of the object using the Magellan I and II telescopes in Las Campanas, Chile. These data reveal emission from gas rich in oxygen and nitrogen but no hydrogen, a rare set of signals from globular clusters. The physical conditions deduced from the spectra suggest that the gas is orbiting a black hole of at least 1,000 solar masses. The abundant amount of oxygen and absence of hydrogen indicate that the destroyed star was a white dwarf, the end phase of a solar-type star that has burned its hydrogen leaving a high concentration of oxygen. The nitrogen seen in the optical spectrum remains an enigma. "We think these unusual signatures can be explained by a white dwarf that strayed too close to a black hole and was torn apart by the extreme tidal forces," said coauthor Joel Bregman of the University of Michigan. Theoretical work suggests that the tidal disruption-induced X-ray emission could stay bright for more than a century, but it should fade with time. So far, the team has observed there has been a 35% decline in X-ray emission from 2000 to 2008. The ULX in this study is located in NGC 1399, an elliptical galaxy about 65 million light years from Earth. Irwin presented these results at the 215th meeting of the American Astronomical Society in Washington, DC. NASA's Marshall Space Flight Center in Huntsville, Ala., manages the Chandra program for NASA's Science Mission Directorate in Washington. The Smithsonian Astrophysical Observatory controls Chandra's science and flight operations from Cambridge, Mass. More information, including images and other multimedia, can be found at: http://chandra.harvard.edu and http://chandra.nasa.gov
Dancing around the Black Hole
the implied properties of the central stellar population of young stars will follow. Notes [1]: The team consists of Eric Emsellem (Principal Investigator, Centre de Recherche Astronomique de Lyon, France), Didier Greusard and Daniel Friedli (Geneva Observatory, Switzerland), Francoise Combes (DEMIRM, Paris, France), Herve Wozniak (Marseille Observatory, France), Emmanuel Pecontal (Centre de Recherche Astronomique de Lyon, France) and Stephane Leon (University of Cologne, Germany). [2]: Black Holes represent an extreme physical phenomenon; if the Earth were to become one, it would measure no more than a few millimetres across. The gravitational field around a black hole is so intense that even light can not escape from it. [3]: On its most energetic and dramatic scale, this scenario results in a quasar , a type of object first discovered in 1963. In this case, the highly energetic centre of a galaxy completely outshines the outer structures and the "quasi-stellar object" appears star-like in smaller telescopes. Technical information about the photos PR Photo 25a/01 with NGC 1097 is a reproduction from the ESO LV archive, extracted via the Hypercat facility. It is based on a 2-hour photographic exposure in the R-band (Kodak IIIa-F emulsion + RG630 filtre) with the ESO 1-m Schmidt Telescope at La Silla and covers a field of about 35 x 35 arcmin 2. On this and the following photos, North is up and East is left. PR Photo 25b/01 of the central region of NGC 1808 was reproduced from an H-band (1.6 µm) image obtained with the IRAC2 camera (now decommissioned) at the MPG/ESO 2.2-m telescope on La Silla. The exposure time was 50 sec and the field measures 2.0 x 2.1 arcmin 2 (original pixel size = 0.52 arcsec). PR Photo 25c/01 of the central region of NGC 5728 was obtained at the 3.5-m Canada-France-Hawaii Telescope (CFHT) and the Adaptive-Optics PUEO instrument; the K-band (2.3 µm) exposure lasted 60 sec and the field measures 38 X 38 arcsec 2. PR Photo 25e/01 shows a raw
The funding black hole
Two physics students at the University of Bristol have organised a petition against the recently announced funding cut of £80 million by the body that funds physics research in the UK, the Science and Technology Facilities Council (STFC).
Phase transition for black holes with scalar hair and topological black holes
Myung, Yun Soo
We study phase transitions between black holes with scalar hair and topological black holes in asymptotically anti-de Sitter spacetimes. As the ground state solutions, we introduce the non-rotating BTZ black hole in three dimensions and the topological black hole with hyperbolic horizon in four dimensions. For the temperature matching only, we show that the phase transition between the black hole with scalar hair (Martinez-Troncoso-Zanelli black hole) and the topological black hole is second order, by using the difference between the two free energies. However, we cannot identify the order of the phase transition between the scalar and non-rotating BTZ black holes in three dimensions, although there exists a possible decay of the scalar black hole to the non-rotating BTZ black hole.
Chandra Catches "Piranha" Black Holes
Supermassive black holes have been discovered to grow more rapidly in young galaxy clusters, according to new results from NASA's Chandra X-ray Observatory. These "fast-track" supermassive black holes can have a big influence on the galaxies and clusters that they live in. Using Chandra, scientists surveyed a sample of clusters and counted the fraction of galaxies with rapidly growing supermassive black holes, known as active galactic nuclei (or AGN). The data show, for the first time, that younger, more distant galaxy clusters contained far more AGN than older, nearby ones. Galaxy clusters are some of the largest structures in the Universe, consisting of many individual galaxies, a few of which contain AGN. Earlier in the history of the universe, these galaxies contained a lot more gas for star formation and black hole growth than galaxies in clusters do today. This fuel allows the young cluster black holes to grow much more rapidly than their counterparts in nearby clusters. Illustration of Active Galactic Nucleus Illustration of Active Galactic Nucleus "The black holes in these early clusters are like piranha in a very well-fed aquarium," said Jason Eastman of Ohio State University (OSU) and first author of this study. "It's not that they beat out each other for food, rather there was so much that all of the piranha were able to really thrive and grow quickly." The team used Chandra to determine the fraction of AGN in four different galaxy clusters at large distances, when the Universe was about 58% of its current age. Then they compared this value to the fraction found in more nearby clusters, those about 82% of the Universe's current age. The result was the more distant clusters contained about 20 times more AGN than the less distant sample. AGN outside clusters are also more common when the Universe is younger, but only by factors of two or three over the same age span. "It's been predicted that there would be fast-track black holes in clusters, but we never
Gamma ray bursts of black hole universe
Slightly modifying the standard big bang theory, Zhang recently developed a new cosmological model called black hole universe, which has only a single postulate but is consistent with Mach's principle, governed by Einstein's general theory of relativity, and able to explain existing observations of the universe. In the previous studies, we have explained the origin, structure, evolution, expansion, cosmic microwave background radiation, quasar, and acceleration of black hole universe, which grew from a star-like black hole with several solar masses through a supermassive black hole with billions of solar masses to the present state with hundred billion-trillions of solar masses by accreting ambient matter and merging with other black holes. This study investigates gamma ray bursts of black hole universe and provides an alternative explanation for the energy and spectrum measurements of gamma ray bursts according to the black hole universe model. The results indicate that gamma ray bursts can be understood as emissions of dynamic star-like black holes. A black hole, when it accretes its star or merges with another black hole, becomes dynamic. A dynamic black hole has a broken event horizon and thus cannot hold the inside hot (or high-frequency) blackbody radiation, which flows or leaks out and produces a GRB. A star when it collapses into its core black hole produces a long GRB and releases the gravitational potential energy of the star as gamma rays. A black hole that merges with another black hole produces a short GRB and releases a part of their blackbody radiation as gamma rays. The amount of energy obtained from the emissions of dynamic star-like black holes are consistent with the measurements of energy from GRBs. The GRB energy spectra derived from this new emission mechanism are also consistent with the measurements.
Black Hole Thermodynamics in an Undergraduate Thermodynamics Course.
Parker, Barry R.; McLeod, Robert J.
An analogy, which has been drawn between black hole physics and thermodynamics, is mathematically broadened in this article. Equations similar to the standard partial differential relations of thermodynamics are found for black holes. The results can be used to supplement an undergraduate thermodynamics course. (Author/SK)
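As background for the analogy the article builds on, the textbook correspondence between the first law of black hole mechanics and the ordinary first law of thermodynamics can be written as follows; this is the standard form of the analogy, not necessarily the specific partial-differential relations derived in the article.

```latex
% First law of black hole mechanics vs. ordinary first law
% (geometrized units, G = c = \hbar = k_B = 1):
dM = \frac{\kappa}{8\pi}\, dA + \Omega_{H}\, dJ + \Phi_{H}\, dQ ,
\qquad
dU = T\, dS - p\, dV + \mu\, dN ,
\qquad
T \leftrightarrow \frac{\kappa}{2\pi}, \quad S \leftrightarrow \frac{A}{4}.
```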
Jerusalem lectures on black holes and quantum information
Harlow, D.
These lectures give an introduction to the quantum physics of black holes, including recent developments based on quantum information theory such as the firewall paradox and its various cousins. An introduction is also given to holography and the anti-de Sitter/conformal field theory (AdS/CFT) correspondence, focusing on those aspects which are relevant for the black hole information problem.
Falling into a black hole
Mathur, Samir D.
String theory tells us that quantum gravity has a dual description as a field theory (without gravity). We use the field theory dual to ask what happens to an object as it falls into the simplest black hole: the 2-charge extremal hole. In the field theory description the wavefunction of a particle is spread over a large number of `loops', and the particle has a well-defined position in space only if it has the same `position' on each loop. For the infalling particle we find one definition of ...
Dyonic black hole in heterotic string theory
Jatkar, D.P.; Mukherji, S.
We study some features of the dyonic black hole solution in heterotic string theory on a six-torus. This solution has 58 parameters. Of these, 28 parameters denote the electric charge of the black hole, another 28 correspond to the magnetic charge, and the other two parameters are the mass and the angular momentum of the black hole. We discuss the extremal limit and show that in various limits it reduces to the known black hole solutions. The solutions saturating the Bogomolnyi bound are identified. An explicit solution is presented for the non-rotating dyonic black hole. (orig.)
Black-hole creation in quantum cosmology
Zhong Chao, Wu [Rome, Univ. 'La Sapienza' (Italy), International Center for Relativistic Astrophysics; Specola Vaticana, Vatican City State (Vatican City State, Holy See)]
It is proven that the probability of a black hole created from the de Sitter space-time background, at the WKB level, is the exponential of one quarter of the sum of the black hole and cosmological horizon areas, or the total entropy of the universe. This is true not only for the spherically symmetric cases of the Schwarzschild or Reissner-Nordström black holes, but also for the rotating case of the Kerr black hole and the rotating charged case of the Kerr-Newman black hole. The de Sitter metric is the most probable evolution at the Planckian era of the universe.
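Written as a formula (with horizon areas measured in Planck units), the statement of the abstract reads, schematically:

```latex
% WKB creation probability as stated in the abstract
% (horizon areas measured in Planck units, \ell_P^2 = \hbar G / c^3):
P \;\sim\; \exp\!\left[\frac{A_{\rm bh} + A_{\rm c}}{4\,\ell_{P}^{2}}\right]
      \;=\; e^{\,S_{\rm bh} + S_{\rm c}} \;=\; e^{\,S_{\rm tot}} .
```

Here $A_{\rm bh}$ and $A_{\rm c}$ denote the black hole and cosmological horizon areas, and $S_{\rm tot}$ is the total entropy of the universe referred to in the abstract.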
Black holes escaping from domain walls
Flachi, Antonino; Sasaki, Misao; Pujolas, Oriol; Tanaka, Takahiro
Previous studies concerning the interaction of branes and black holes suggested that a small black hole intersecting a brane may escape via a mechanism of reconnection. Here we consider this problem by studying the interaction of a small black hole and a domain wall composed of a scalar field, and we simulate the evolution of this system when the black hole acquires an initial recoil velocity. We test and confirm previous results; however, unlike the cases previously studied, in the more general set-up considered here we are also able to follow the evolution of the system during the separation, and to illustrate completely how the escape of the black hole takes place.
Giant Black Hole Rips Apart Star
Thanks to two orbiting X-ray observatories, astronomers have the first strong evidence of a supermassive black hole ripping apart a star and consuming a portion of it. The event, captured by NASA's Chandra and ESA's XMM-Newton X-ray Observatories, had long been predicted by theory, but never confirmed. Astronomers believe a doomed star came too close to a giant black hole after being thrown off course by a close encounter with another star. As it neared the enormous gravity of the black hole, the star was stretched by tidal forces until it was torn apart. This discovery provides crucial information about how these black holes grow and affect surrounding stars and gas. "Stars can survive being stretched a small amount, as they are in binary star systems, but this star was stretched beyond its breaking point," said Stefanie Komossa of the Max Planck Institute for Extraterrestrial Physics (MPE) in Germany, leader of the international team of researchers. "This unlucky star just wandered into the wrong neighborhood." While other observations have hinted stars are destroyed by black holes (events known as "stellar tidal disruptions"), these new results are the first strong evidence. Evidence already exists for supermassive black holes in many galaxies, but looking for tidal disruptions represents a completely independent way to search for black holes. Observations like these are urgently needed to determine how quickly black holes can grow by swallowing neighboring stars. Animation of Star Ripped Apart by Giant Black Hole Star Ripped Apart by Giant Black Hole Observations with Chandra and XMM-Newton, combined with earlier images from the German Roentgen satellite, detected a powerful X-ray outburst from the center of the galaxy RX J1242-11. This outburst, one of the most extreme ever detected in a galaxy, was caused by gas from the destroyed star that was heated to millions of degrees Celsius before being swallowed by the black hole. The energy liberated in the process
Black holes, white dwarfs, and neutron stars the physics of compact objects
Shapiro, Stuart Louis
This self-contained textbook brings together many different branches of physics (e.g. nuclear physics, solid state physics, particle physics, hydrodynamics, relativity) to analyze compact objects. The latest astronomical data are assessed.
The stable problem of the black-hole connected region in the Schwarzschild black hole
Tian, Guihua
The stability of the Schwarzschild black hole is studied. Using the Painlevé coordinate, our region can be defined as the black-hole-connected region (r > 2m, see text) of the Schwarzschild black hole or the white-hole-connected region (r > 2m, see text) of the Schwarzschild black hole. We study the stability problems of the black-hole-connected region. The conclusions are: (1) in the black-hole-connected region, the initially regular perturbation fields must have real frequency or complex frequen...
Quantum information erasure inside black holes
Lowe, David A.; Thorlacius, Larus
An effective field theory for infalling observers in the vicinity of a quasi-static black hole is given in terms of a freely falling lattice discretization. The lattice model successfully reproduces the thermal spectrum of outgoing Hawking radiation, as was shown by Corley and Jacobson, but can also be used to model observations made by a typical low-energy observer who enters the black hole in free fall at a prescribed time. The explicit short distance cutoff ensures that, from the viewpoint of the infalling observer, any quantum information that entered the black hole more than a scrambling time earlier has been erased by the black hole singularity. This property, combined with the requirement that outside observers need at least of order the scrambling time to extract quantum information from the black hole, ensures that a typical infalling observer does not encounter drama upon crossing the black hole horizon in a theory where black hole information is preserved for asymptotic observers.
Collision of two rotating Hayward black holes
Gwak, Bogeun [Sejong University, Department of Physics and Astronomy, Seoul (Korea, Republic of)
We investigate the spin interaction and the gravitational radiation thermally allowed in a head-on collision of two rotating Hayward black holes. The Hayward black hole is a regular black hole in a modified Einstein equation, and hence it can be an appropriate model to describe the extent to which the regularity effect in the near-horizon region affects the interaction and the radiation. If one black hole is assumed to be considerably smaller than the other, the potential of the spin interaction can be analytically obtained and is dependent on the alignment of angular momenta of the black holes. For the collision of massive black holes, the gravitational radiation is numerically obtained as the upper bound by using the laws of thermodynamics. The effect of the Hayward black hole tends to increase the radiation energy, but we can limit the effect by comparing the radiation energy with the gravitational waves GW150914 and GW151226. (orig.)
Black holes in the presence of dark energy
Babichev, E O; Dokuchaev, V I; Eroshenko, Yu N
The new, rapidly developing field of theoretical research that studies dark energy interacting with black holes (and, in particular, accreting onto black holes) is reviewed. The term 'dark energy' is meant to cover a wide range of field theory models, as well as perfect fluids with various equations of state, including cosmological dark energy. Various accretion models are analyzed in terms of the simplest test field approximation or by allowing back reaction on the black-hole metric. The behavior of various types of dark energy in the vicinity of Schwarzschild and electrically charged black holes is examined. Nontrivial effects due to the presence of dark energy in the black hole vicinity are discussed. In particular, a physical explanation is given of why the black hole mass decreases when phantom energy is being accreted, a process in which the basic energy conditions of the famous theorem of nondecreasing horizon area in classical black holes are violated. The theoretical possibility of a signal escaping from beneath the black hole event horizon is discussed for a number of dark energy models. Finally, the violation of the laws of thermodynamics by black holes in the presence of noncanonical fields is considered. (reviews of topical problems)
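The statement that phantom accretion decreases the black hole mass follows from the Babichev-Dokuchaev-Eroshenko test-fluid accretion rate, which in schematic form (geometrized units, with the exact order-unity normalization A depending on the equation of state and treated here as an assumption) reads:

```latex
% Test-fluid accretion rate onto a Schwarzschild black hole
% (schematic Babichev-Dokuchaev-Eroshenko form, G = c = 1):
\dot{M} \;=\; 4\pi A\, M^{2}\,\bigl[\rho_{\infty} + p(\rho_{\infty})\bigr],
\qquad A = \mathcal{O}(1),
```

so any fluid with $\rho_{\infty} + p(\rho_{\infty}) < 0$ (phantom energy) gives $\dot{M} < 0$, i.e. the black hole loses mass.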
From Black Holes to Quivers
Manschot, Jan; Sen, Ashoke
Middle cohomology states on the Higgs branch of supersymmetric quiver quantum mechanics - also known as pure Higgs states - have recently emerged as possible microscopic candidates for single-centered black hole micro-states, as they carry zero angular momentum and appear to be robust under wall-crossing. Using the connection between quiver quantum mechanics on the Coulomb branch and the quantum mechanics of multi-centered black holes, we propose a general algorithm for reconstructing the full moduli-dependent cohomology of the moduli space of an arbitrary quiver, in terms of the BPS invariants of the pure Higgs states. We analyze many examples of quivers with loops, including all cyclic Abelian quivers and several examples with two loops or non-Abelian gauge groups, and provide supporting evidence for this proposal. We also develop methods to count pure Higgs states directly.
Dynamics of test black holes
Epikhin, E.N.
A concept of a test object is introduced; this definition also includes small black holes. The test-object approximation makes it possible to introduce unambiguously the concept of a background space-time. Dynamical quantities for test objects are introduced by means of the Noether theorem, which makes it possible to covariantly generalize the Papapetrou energy-momentum pseudotensor to the case of a curved background space-time. Additional use of the radiation approximation and inclusion of the zeroth and first moments of the dynamical quantities lead to the conclusion that the motion of the test object (including small black holes) is governed by the Mathisson-Papapetrou equations. The above results testify to the account taken of the proper gravitational field of the test object in the integrated dynamical quantities.
Some Simple Black Hole Thermodynamics
Lopresto, Michael C.
In his recent popular book The Universe in a Nutshell, Stephen Hawking gives expressions for the entropy and temperature (often referred to as the "Hawking temperature") of a black hole: $S = \frac{k c^{3} A}{4 \hbar G}$ and $T = \frac{\hbar c^{3}}{8 \pi k G M}$, where A is the area of the event horizon, M is the mass, k is Boltzmann's constant, $\hbar = h/2\pi$ (h being Planck's constant), c is the speed of light, and G is the universal gravitational constant. These expressions can be used as starting points for some interesting approximations on the thermodynamics of a Schwarzschild black hole of mass M, which by definition is nonrotating and spherical with an event horizon of radius $R = 2GM/c^{2}$.
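As a quick numerical illustration of these expressions (an independent sketch in Python, not code from the article), the following evaluates the Schwarzschild radius, Hawking temperature, and Bekenstein-Hawking entropy for a few illustrative masses:

```python
# Sketch: evaluate the Schwarzschild radius, Hawking temperature and
# Bekenstein-Hawking entropy quoted above for a black hole of mass M.
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
hbar = 1.055e-34    # reduced Planck constant, J s
k_B = 1.381e-23     # Boltzmann constant, J/K

def schwarzschild_radius(M):
    """R = 2GM/c^2, in metres."""
    return 2 * G * M / c**2

def hawking_temperature(M):
    """T = hbar c^3 / (8 pi k G M), in kelvin."""
    return hbar * c**3 / (8 * math.pi * k_B * G * M)

def bh_entropy(M):
    """S = k c^3 A / (4 hbar G), with A = 4 pi R^2, in J/K."""
    A = 4 * math.pi * schwarzschild_radius(M)**2
    return k_B * c**3 * A / (4 * hbar * G)

if __name__ == "__main__":
    M_sun = 1.989e30  # kg
    for M in (M_sun, 10 * M_sun, 4.0e6 * M_sun):  # stellar-mass and Sgr A*-like masses
        print(f"M = {M:.3e} kg: R = {schwarzschild_radius(M):.3e} m, "
              f"T = {hawking_temperature(M):.3e} K, S = {bh_entropy(M):.3e} J/K")
```

For a solar-mass black hole this gives T of roughly 6 x 10^-8 K and S/k of order 10^77, the familiar orders of magnitude.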
Lifetime of a black hole
Carlitz, R.D.; Willey, R.S.
We study the constraints placed by quantum mechanics upon the lifetime of a black hole. In the context of a moving-mirror analog model for the Hawking radiation process, we conclude that the period of Hawking radiation must be followed by a much longer period during which the remnant mass (of order $m_P$) may be radiated away. We are able to place a lower bound on the time required for this radiation process, which translates into a lower bound for the lifetime of the black hole. Particles which are emitted during the decay of the remnant, like the particles which comprise the Hawking flux, may be uncorrelated with each other. But each particle emitted from the decaying remnant is correlated with one particle emitted as Hawking radiation. The state which results after the remnant has evaporated is one which locally appears to be thermal, but which on a much larger scale is marked by extensive correlations.
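For scale, the initial Hawking-radiation phase itself is usually estimated with the standard semiclassical evaporation time (a textbook estimate quoted here for orientation, not a result of the paper, and it neglects the remnant phase discussed above):

```latex
% Standard semiclassical evaporation-time estimate for a Schwarzschild black hole:
t_{\rm evap} \;\sim\; \frac{5120\,\pi\, G^{2} M^{3}}{\hbar c^{4}}
\;\approx\; 2\times 10^{67}\left(\frac{M}{M_{\odot}}\right)^{3}\ \mathrm{yr} .
```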
Van der Waals black hole
Aruna Rajagopal
In the context of extended phase space, where the negative cosmological constant is treated as a thermodynamic pressure in the first law of black hole thermodynamics, we find an asymptotically AdS metric whose thermodynamics matches exactly that of the Van der Waals fluid. We show that, as a solution of Einstein's equations, the corresponding stress-energy tensor obeys (at least for a certain range of the metric parameters) all three weak, strong, and dominant energy conditions.
Black holes, singularities and predictability
Wald, R.M.
The paper favours the view that singularities may play a central role in quantum gravity. The author reviews the arguments leading to the conclusion, that in the process of black hole formation and evaporation, an initial pure state evolves to a final density matrix, thus signaling a breakdown in ordinary quantum dynamical evolution. Some related issues dealing with predictability in the dynamical evolution, are also discussed. (U.K.)
A black-hole cosmology
Debney, G.; Farnsworth, D.
Motivated by the fact that 2m/r is of order unity for the observable universe, we explore the possibility that a Schwarzschild or black hole cosmological model is appropriate. Luminosity distance and frequency shifts of freely-falling, standard, monochromatic objects are viewed by a freely-falling observer. The observer is inside r=2m. The observer in such a world does not see the same universe as do astronomers. (author)
Brown dwarfs and black holes
Tarter, J.C.
The astronomical missing-mass problem (the discrepancy between the dynamical mass estimate and the sum of individual masses in large groupings) is considered, and possible explanations are advanced. The existence of brown dwarfs (stars not massive enough to shine by nuclear burning) and black holes (extremely high density matter contraction such that gravitation allows no light emission) thus far provides the most plausible solutions
Black holes and random matrices
Cotler, Jordan S.; Gur-Ari, Guy [Stanford Institute for Theoretical Physics, Stanford University,Stanford, CA 94305 (United States); Hanada, Masanori [Stanford Institute for Theoretical Physics, Stanford University,Stanford, CA 94305 (United States); Yukawa Institute for Theoretical Physics, Kyoto University,Kyoto 606-8502 (Japan); The Hakubi Center for Advanced Research, Kyoto University,Kyoto 606-8502 (Japan); Polchinski, Joseph [Department of Physics, University of California,Santa Barbara, CA 93106 (United States); Kavli Institute for Theoretical Physics, University of California,Santa Barbara, CA 93106 (United States); Saad, Phil; Shenker, Stephen H. [Stanford Institute for Theoretical Physics, Stanford University,Stanford, CA 94305 (United States); Stanford, Douglas [Institute for Advanced Study,Princeton, NJ 08540 (United States); Streicher, Alexandre [Stanford Institute for Theoretical Physics, Stanford University,Stanford, CA 94305 (United States); Department of Physics, University of California,Santa Barbara, CA 93106 (United States); Tezuka, Masaki [Department of Physics, Kyoto University,Kyoto 606-8501 (Japan)
We argue that the late time behavior of horizon fluctuations in large anti-de Sitter (AdS) black holes is governed by the random matrix dynamics characteristic of quantum chaotic systems. Our main tool is the Sachdev-Ye-Kitaev (SYK) model, which we use as a simple model of a black hole. We use an analytically continued partition function |Z(β+it)|² as well as correlation functions as diagnostics. Using numerical techniques we establish random matrix behavior at late times. We determine the early time behavior exactly in a double scaling limit, giving us a plausible estimate for the crossover time to random matrix behavior. We use these ideas to formulate a conjecture about general large AdS black holes, like those dual to 4D super-Yang-Mills theory, giving a provisional estimate of the crossover time. We make some preliminary comments about challenges to understanding the late time dynamics from a bulk point of view.
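A minimal numerical sketch of this diagnostic, using a single random GUE matrix as a stand-in for the SYK Hamiltonian (an illustrative assumption, not the paper's actual setup or normalization), computes the analytically continued partition function |Z(β+it)|², which after ensemble averaging can be inspected for the characteristic slope-ramp-plateau structure:

```python
# Sketch: spectral form factor |Z(beta + i t)|^2 for a GUE random matrix,
# used here as a stand-in for the SYK Hamiltonian discussed in the abstract.
import numpy as np

def gue_eigenvalues(n, rng):
    """Eigenvalues of an n x n GUE matrix."""
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    h = (a + a.conj().T) / 2.0
    return np.linalg.eigvalsh(h)

def spectral_form_factor(eigs, beta, times):
    """|Z(beta + i t)|^2 with Z = sum_n exp(-(beta + i t) E_n)."""
    z = np.exp(-np.outer(beta + 1j * times, eigs)).sum(axis=1)
    return np.abs(z) ** 2

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, beta = 512, 1.0
    times = np.logspace(-1, 3, 200)
    # Average over an ensemble of matrices to smooth out late-time fluctuations.
    sff = np.mean([spectral_form_factor(gue_eigenvalues(n, rng), beta, times)
                   for _ in range(20)], axis=0)
    for t, g in zip(times[::40], sff[::40]):
        print(f"t = {t:8.2f}   |Z(beta+it)|^2 = {g:.4e}")
```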
Black hole vacua and rotation
Krishnan, Chethan
Recent developments suggest that the near-region of rotating black holes behaves like a CFT. To understand this better, I propose to study quantum fields in this region. An instructive approach for this might be to put a large black hole in AdS and to think of the entire geometry as a toy model for the 'near-region'. Quantum field theory on rotating black holes in AdS can be well-defined (unlike in flat space), if fields are quantized in the co-rotating-with-the-horizon frame. First, some generalities of constructing Hartle-Hawking Green functions in this approach are discussed. Then as a specific example where the details are easy to handle, I turn to 2+1 dimensions (BTZ), write down the Green functions explicitly starting with the co-rotating frame, and observe some structural similarities they have with the Kerr-CFT scattering amplitudes. Finally, in BTZ, there is also an alternate construction for the Green functions: we can start from the covering AdS 3 space and use the method of images. Using a 19th century integral formula, I show the equality between the boundary correlators arising via the two constructions.
Gravity, quantum theory and the evaporation of black holes. [Review
Wilkins, D C [Tata Inst. of Fundamental Research, Bombay (India)
Recent developments in black hole physics are reviewed. It is pointed out that black hole thermodynamics is a theory of exceptional unity and elegance. Starting from the discovery of thermal emission from black holes (the evaporation process) by Hawking, the four thermodynamic laws they obey, the nonzero temperature and entropy, and the angular momentum and charge of black holes are dealt with. The influence of this thermodynamics on quantum theory and gravitation is discussed in relation to particle creation and quantum gravity. The formation and basic properties of black holes are described in terms of significant milestones. The decade-long development of black hole thermodynamics from 1963-73 is highlighted. The fundamental issues arising in particle physics as a result of these discoveries are discussed.
Can superconducting cosmic strings piercing seed black holes generate supermassive black holes in the early universe?
Lake, Matthew J. [The Institute for Fundamental Study, ' ' The Tah Poe Academia Institute' ' , Naresuan University, Phitsanulok (Thailand); Thailand Center of Excellence in Physics, Ministry of Education, Bangkok (Thailand); Harko, Tiberiu [Department of Physics, Babes-Bolyai University, Cluj-Napoca (Romania); Department of Mathematics, University College London (United Kingdom)
The discovery of a large number of supermassive black holes (SMBH) at redshifts z > 6, when the Universe was only 900 million years old, raises the question of how such massive compact objects could form in a cosmologically short time interval. Each of the standard scenarios proposed, involving rapid accretion of seed black holes or black hole mergers, faces severe theoretical difficulties in explaining the short-time formation of supermassive objects. In this work we propose an alternative scenario for the formation of SMBH in the early Universe, in which energy transfer from superconducting cosmic strings piercing small seed black holes is the main physical process leading to rapid mass increase. As a toy model, the accretion rate of a seed black hole pierced by two antipodal strings carrying constant current is considered. Using an effective action approach, which phenomenologically incorporates a large class of superconducting string models, we estimate the minimum current required to form SMBH with masses of order M = 2 x 10^9 M_⊙ by z = 7.085. This corresponds to the mass of the central black hole powering the quasar ULAS J112001.48+064124.3 and is taken as a test case scenario for early-epoch SMBH formation. For GUT scale strings, the required fractional increase in the string energy density, due to the presence of the current, is of order 10^-7, so that their existence remains consistent with current observational bounds on the string tension. In addition, we consider an 'exotic' scenario, in which an SMBH is generated when a small seed black hole is pierced by a higher-dimensional F-string, predicted by string theory. We find that both topological defect strings and fundamental strings are able to carry currents large enough to generate early-epoch SMBH via our proposed mechanism. (copyright 2017 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)
Radiation transport around Kerr black holes
Schnittman, Jeremy David
This Thesis describes the basic framework of a relativistic ray-tracing code for analyzing accretion processes around Kerr black holes. We begin in Chapter 1 with a brief historical summary of the major advances in black hole astrophysics over the past few decades. In Chapter 2 we present a detailed description of the ray-tracing code, which can be used to calculate the transfer function between the plane of the accretion disk and the detector plane, an important tool for modeling relativistically broadened emission lines. Observations from the Rossi X-Ray Timing Explorer have shown the existence of high frequency quasi-periodic oscillations (HFQPOs) in a number of black hole binary systems. In Chapter 3, we employ a simple "hot spot" model to explain the position and amplitude of these HFQPO peaks. The power spectrum of the periodic X-ray light curve consists of multiple peaks located at integral combinations of the black hole coordinate frequencies, with the relative amplitude of each peak determined by the orbital inclination, eccentricity, and hot spot arc length. In Chapter 4, we introduce additional features to the model to explain the broadening of the QPO peaks as well as the damping of higher frequency harmonics in the power spectrum. The complete model is used to fit the power spectra observed in XTE J1550-564, giving confidence limits on each of the model parameters. In Chapter 5 we present a description of the structure of a relativistic alpha- disk around a Kerr black hole. Given the surface temperature of the disk, the observed spectrum is calculated using the transfer function mentioned above. The features of this modified thermal spectrum may be used to infer the physical properties of the accretion disk and the central black hole. In Chapter 6 we develop a Monte Carlo code to calculate the detailed propagation of photons from a hot spot emitter scattering through a corona surrounding the black hole. The coronal scattering has two major observable
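The "black hole coordinate frequencies" referred to here are commonly taken to be the orbital and epicyclic frequencies of circular equatorial geodesics in the Kerr metric. The sketch below evaluates the standard textbook expressions for a prograde orbit (an illustration under that assumption, not code from the thesis):

```python
# Sketch: Kerr orbital and epicyclic frequencies for a circular, prograde,
# equatorial orbit (standard expressions; r in units of GM/c^2, spin a = Jc/(GM^2)).
import math

G = 6.674e-11
c = 2.998e8
M_SUN = 1.989e30

def kerr_frequencies(M_kg, a, r):
    """Return (nu_phi, nu_r, nu_theta) in Hz for dimensionless spin a and radius r (GM/c^2)."""
    nu0 = c**3 / (2 * math.pi * G * M_kg)          # frequency scale c^3 / (2 pi G M)
    nu_phi = nu0 / (r**1.5 + a)                    # orbital (azimuthal) frequency
    nu_r = nu_phi * math.sqrt(max(0.0, 1 - 6/r + 8*a/r**1.5 - 3*a**2/r**2))
    nu_theta = nu_phi * math.sqrt(1 - 4*a/r**1.5 + 3*a**2/r**2)
    return nu_phi, nu_r, nu_theta

if __name__ == "__main__":
    M, a, r = 10 * M_SUN, 0.5, 5.0                 # illustrative values, outside the ISCO
    nu_phi, nu_r, nu_theta = kerr_frequencies(M, a, r)
    print(f"nu_phi   = {nu_phi:8.1f} Hz")
    print(f"nu_r     = {nu_r:8.1f} Hz")
    print(f"nu_theta = {nu_theta:8.1f} Hz")
    # In the hot-spot picture, QPO peaks appear at integer combinations such as:
    print(f"nu_phi - nu_r = {nu_phi - nu_r:8.1f} Hz (periastron precession)")
```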
Magnetized black holes and black rings in the higher dimensional dilaton gravity
Yazadjiev, Stoytcho S.
In this paper we consider magnetized black holes and black rings in the higher dimensional dilaton gravity. Our study is based on exact solutions generated by applying a Harrison transformation to known asymptotically flat black hole and black ring solutions in higher dimensional spacetimes. The explicit solutions include the magnetized version of the higher dimensional Schwarzschild-Tangherlini black holes, Myers-Perry black holes, and five-dimensional (dipole) black rings. The basic physical quantities of the magnetized objects are calculated. We also discuss some properties of the solutions and their thermodynamics. The ultrarelativistic limits of the magnetized solutions are briefly discussed and an explicit example is given for the D-dimensional magnetized Schwarzschild-Tangherlini black holes
Accretion onto a Kiselev black hole
Jiao, Lei [Hebei University, College of Physical Science and Technology, Baoding (China); Yang, Rongjia [Hebei University, College of Physical Science and Technology, Baoding (China); Hebei University, Hebei Key Lab of Optic-Electronic Information and Materials, Baoding (China)
We consider accretion onto a Kiselev black hole. We obtain the fundamental equations for accretion without the back-reaction. We determine the general analytic expressions for the critical points and the mass accretion rate and find the physical conditions the critical points should fulfill. The case of a polytropic gas is discussed in detail. It turns out that the quintessence parameter plays an important role in the accretion process. (orig.)
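For orientation on what the "critical points" and "mass accretion rate" of such an analysis look like, the sketch below reproduces the classical Newtonian Bondi result for a polytropic gas; it is only an analogue of, not a substitute for, the relativistic Kiselev-background calculation described in the abstract, and the gas parameters are assumed for illustration only.

```python
# Sketch: classical Bondi accretion of a polytropic gas (p ~ rho^gamma) onto a
# point mass, as a Newtonian analogue of a critical-point (transonic) analysis.
import math

G = 6.674e-11
M_SUN = 1.989e30

def bondi_lambda(gamma):
    """Dimensionless eigenvalue lambda(gamma) fixing the transonic accretion rate."""
    if abs(gamma - 1.0) < 1e-12:
        return 0.25 * math.e**1.5                  # isothermal limit
    return 0.25 * (2.0 / (5.0 - 3.0 * gamma)) ** ((5.0 - 3.0 * gamma) / (2.0 * gamma - 2.0))

def bondi(M, rho_inf, cs_inf, gamma):
    """Return (sonic radius in m, accretion rate in kg/s)."""
    r_s = (5.0 - 3.0 * gamma) / 4.0 * G * M / cs_inf**2     # critical (sonic) point
    mdot = 4.0 * math.pi * bondi_lambda(gamma) * (G * M)**2 * rho_inf / cs_inf**3
    return r_s, mdot

if __name__ == "__main__":
    # Illustrative interstellar-medium-like numbers (assumed, not from the paper).
    M, rho_inf, cs_inf, gamma = 10 * M_SUN, 1e-21, 1e4, 1.4
    r_s, mdot = bondi(M, rho_inf, cs_inf, gamma)
    print(f"lambda({gamma}) = {bondi_lambda(gamma):.3f}")
    print(f"sonic radius   r_s  = {r_s:.3e} m")
    print(f"accretion rate Mdot = {mdot:.3e} kg/s")
```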
Newtonian versus black-hole scattering
Siopsis, G.
We discuss non-relativistic scattering by a Newtonian potential. We show that the gray-body factors associated with scattering by a black hole exhibit the same functional dependence as scattering amplitudes in the Newtonian limit, which should be the weak-field limit of any quantum theory of gravity. This behavior arises independently of the presence of supersymmetry. The connection to two-dimensional conformal field theory is also discussed. copyright 1999 The American Physical Society
Black-hole bomb and superradiant instabilities
Cardoso, Vitor; Dias, Oscar J.C.; Lemos, Jose P.S.; Yoshida, Shijun
A wave impinging on a Kerr black hole can be amplified as it scatters off the hole if certain conditions are satisfied, giving rise to superradiant scattering. By placing a mirror around the black hole one can make the system unstable. This is the black-hole bomb of Press and Teukolsky. We investigate in detail this process and compute the growing time scales and oscillation frequencies as a function of the mirror's location. It is found that in order for the system black hole plus mirror to become unstable there is a minimum distance at which the mirror must be located. We also give an explicit example showing that such a bomb can be built. In addition, our arguments enable us to justify why large Kerr-AdS black holes are stable and small Kerr-AdS black holes should be unstable
Is there life inside black holes?
Dokuchaev, V I
Bound inside rotating or charged black holes, there are stable periodic planetary orbits, which neither come out nor terminate at the central singularity. Stable periodic orbits inside black holes exist even for photons. These bound orbits may be defined as orbits of the third kind, following the Chandrasekhar classification of particle orbits in the black hole gravitational field. The existence domain for the third-kind orbits is rather spacious, and thus there is room for life inside supermassive black holes in galactic nuclei. The interiors of supermassive black holes may be inhabited by civilizations that are invisible from the outside. In principle, one can get information from the interiors of black holes by observing their white hole counterparts. (paper)
Hawking radiation and strong gravity black holes
Qadir, A.; Sayed, W.A.
It is shown that the strong gravity theory of Salam et al. places severe restrictions on black hole evaporation. Two major implications are that: mini black holes (down to masses of approximately $10^{-16}$ kg) would be stable in the present epoch; and that some suggested mini black hole mechanisms to explain astrophysical phenomena would not work. The first result implies that f-gravity appears to make black holes much safer by removing the possibility of extremely violent black hole explosions suggested by Hawking. (Auth.)
Charged spinning black holes as particle accelerators
Wei Shaowen; Liu Yuxiao; Guo Heng; Fu Chune
It has recently been pointed out that the spinning Kerr black hole with maximal spin could act as a particle collider with arbitrarily high center-of-mass energy. In this paper, we will extend the result to the charged spinning black hole, the Kerr-Newman black hole. The center-of-mass energy of collision for two uncharged particles falling freely from rest at infinity depends not only on the spin a but also on the charge Q of the black hole. We find that an unlimited center-of-mass energy can be approached with the conditions: (1) the collision takes place at the horizon of an extremal black hole; (2) one of the colliding particles has critical angular momentum; (3) the spin a of the extremal black hole satisfies (1/√(3))≤(a/M)≤1, where M is the mass of the Kerr-Newman black hole. The third condition implies that to obtain an arbitrarily high energy, the extremal Kerr-Newman black hole must have a large value of spin, which is a significant difference between the Kerr and Kerr-Newman black holes. Furthermore, we also show that, for a near-extremal black hole, there always exists a finite upper bound for center-of-mass energy, which decreases with the increase of the charge Q.
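Combining condition (3) with the standard extremality relation for a Kerr-Newman black hole fixes the allowed charge range; spelling out this small step:

```latex
% Extremal Kerr-Newman black hole: a^2 + Q^2 = M^2 (G = c = 1).
% The abstract's condition 1/\sqrt{3} \le a/M \le 1 then gives
\frac{Q^{2}}{M^{2}} \;=\; 1 - \frac{a^{2}}{M^{2}} \;\le\; 1 - \frac{1}{3} \;=\; \frac{2}{3}
\quad\Longrightarrow\quad
0 \;\le\; \frac{Q}{M} \;\le\; \sqrt{2/3} \;\approx\; 0.816 .
```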
5D Black Holes and Matrix Strings
Dijkgraaf, R; Verlinde, Herman L
We derive the world-volume theory, the (non)-extremal entropy and background geometry of black holes and black strings constructed out of the NS IIA fivebrane within the framework of matrix theory. The CFT description of strings propagating in the black hole geometry arises as an effective field theory.
Black holes, wormholes and time machines
Al-Khalili, Jim
Bringing the material up to date, Black Holes, Wormholes and Time Machines, Second Edition captures the new ideas and discoveries made in physics since the publication of the best-selling first edition. While retaining the popular format and style of its predecessor, this edition explores the latest developments in high-energy astroparticle physics and Big Bang cosmology.The book continues to make the ideas and theories of modern physics easily understood by anyone, from researchers to students to general science enthusiasts. Taking you on a journey through space and time, author Jim Al-Khalil
BSW process of the slowly evaporating charged black hole
Wang, Liancheng; He, Feng; Fu, Xiangyun
In this paper, we study the BSW process of a slowly evaporating charged black hole. It is found that the BSW process will also arise near the black hole horizon when the evaporation of the charged black hole is very slow. In this case the background black hole does not have to be an extremal black hole, and it will be approximately an extremal black hole unless it is nearly a huge stationary black hole.
BOOK REVIEW Cracking the Einstein Code: Relativity and the Birth of Black Hole Physics With an Afterword by Roy Kerr Cracking the Einstein Code: Relativity and the Birth of Black Hole Physics With an Afterword by Roy Kerr
Carr, Bernard
General relativity is arguably the most beautiful scientific theory ever conceived but its status within mainstream physics has vacillated since it was proposed in 1915. It began auspiciously with the successful explanation of the precession of Mercury and the dramatic confirmation of light-bending in the 1919 solar eclipse expedition, which turned Einstein into an overnight celebrity. Though little noticed at the time, there was also Karl Schwarzschild's discovery of the spherically symmetric solution in 1916 (later used to predict the existence of black holes) and Alexander Friedmann's discovery of the cosmological solution in 1922 (later confirmed by the discovery of the cosmic expansion). Then for 40 years the theory was more or less forgotten, partly because most physicists were turning their attention to the even more radical developments of quantum theory but also because the equations were too complicated to solve except in situations involving special symmetries or very weak gravitational fields (where general relativity is very similar to Newtonian theory). Furthermore, it was not clear that strong gravitational fields would ever arise in the real universe and, even if they did, it seemed unlikely that Einstein's equations could then be solved. So research in relativity became a quiet backwater as mainstream physics swept forward in other directions. Even Einstein lost interest, turning his attention to the search for a unified field theory. This book tells the remarkable story of how the tide changed in 1963, when the 28-year-old New Zealand mathematician Roy Kerr discovered an exact solution of Einstein's equations which represents a rotating black hole, thereby cracking the code of the title. The paper was just a few pages long, it being left for others to fill in the extensive beautiful mathematics which underlay the result, but it ushered in a golden age of relativity and is now one of the most cited works in physics. Coincidentally, Kerr
Late-time dynamics of rapidly rotating black holes
Glampedakis, K.; Andersson, N.
We study the late-time behaviour of a dynamically perturbed rapidly rotating black hole. Considering an extreme Kerr black hole, we show that the large number of virtually undamped quasinormal modes (that exist for nonzero values of the azimuthal eigenvalue m) combine in such a way that the field (as observed at infinity) oscillates with an amplitude that decays as 1/t at late times. For a near extreme black hole, these modes, collectively, give rise to an exponentially decaying field which, however, is considerably 'long-lived'. Our analytic results are verified using numerical time-evolutions of the Teukolsky equation. Moreover, we argue that the physical mechanism behind the observed behaviour is the presence of a 'superradiance resonance cavity' immediately outside the black hole. We present this new feature in detail, and discuss whether it may be relevant for astrophysical black holes. (author)
Quantitative approaches to information recovery from black holes
Balasubramanian, Vijay [David Rittenhouse Laboratory, University of Pennsylvania, 209 South 33rd Street, Philadelphia, PA 19104 (United States); Czech, Bartlomiej, E-mail: [email protected], E-mail: [email protected] [Department of Physics and Astronomy, University of British Columbia, 6224 Agricultural Road, Vancouver, BC V6T 1Z1 (Canada)
The evaporation of black holes into apparently thermal radiation poses a serious conundrum for theoretical physics: at face value, it appears that in the presence of a black hole, quantum evolution is non-unitary and destroys information. This information loss paradox has its seed in the presence of a horizon causally separating the interior and asymptotic regions in a black hole spacetime. A quantitative resolution of the paradox could take several forms: (a) a precise argument that the underlying quantum theory is unitary, and that information loss must be an artifact of approximations in the derivation of black hole evaporation, (b) an explicit construction showing how information can be recovered by the asymptotic observer, (c) a demonstration that the causal disconnection of the black hole interior from infinity is an artifact of the semiclassical approximation. This review summarizes progress on all these fronts. (topical review)
Quantum Black Hole Model and HAWKING'S Radiation
Berezin, Victor
The black hole model with a self-gravitating charged spherically symmetric dust thin shell as a source is considered. The Schroedinger-type equation for such a model is derived. This equation turns out to be a finite-difference equation. A theory of such an equation is developed, and the general solution is found and investigated in detail. The discrete spectrum of the bound-state energy levels is obtained. All the eigenvalues turn out to be infinitely degenerate. The ground-state wave functions are evaluated explicitly. The quantum black hole states are selected and investigated. It is shown that the obtained black hole mass spectrum is compatible with the existence of Hawking's radiation in the limit of low temperatures, both for large and nearly extreme Reissner-Nordstrom black holes. The above-mentioned infinite degeneracy of the mass (energy) eigenvalues may prove helpful in resolving the well-known information paradox in black hole physics.
SHRINKING THE BRANEWORLD: BLACK HOLE IN A GLOBULAR CLUSTER
Gnedin, Oleg Y.; Maccarone, Thomas J.; Psaltis, Dimitrios; Zepf, Stephen E.
Large extra dimensions have been proposed as a possible solution to the hierarchy problem in physics. In one of the suggested models, the RS2 braneworld model, black holes may evaporate by Hawking radiation faster than in general relativity, on a timescale that depends on the black hole mass and on the asymptotic radius of curvature of the extra dimensions. Thus the size of the extra dimensions can be constrained by astrophysical observations. Here we point out that the black hole, recently discovered in an extragalactic globular cluster, places the strongest upper limit on the size of the extra dimensions in the RS2 model, L ≲ 0.003 mm. This black hole has the virtues of old age and relatively small mass. The derived upper limit is within an order of magnitude of the absolute limit afforded by astrophysical observations of black holes.
Beyond the singularity of the 2-D charged black hole
Giveon, Amit; Rabinovici, Eliezer; Sever, Amit
Two dimensional charged black holes in string theory can be obtained as exact SL(2,R) x U(1)/U(1) quotient CFTs. The geometry of the quotient is induced from that of the group, and in particular includes regions beyond the black hole singularities. Moreover, wavefunctions in such black holes are obtained from gauge invariant vertex operators in the SL(2,R) CFT, hence their behavior beyond the singularity is determined. When the black hole is charged we find that the wavefunctions are smooth at the singularities. Unlike the uncharged case, scattering waves prepared beyond the singularity are not fully reflected; part of the wave is transmitted through the singularity. Hence, the physics outside the horizon of a charged black hole is sensitive to conditions set behind the past singularity. (author)
Stationary black holes: large D analysis
Suzuki, Ryotaku; Tanabe, Kentaro
We consider the effective theory of large D stationary black holes. By solving the Einstein equations with a cosmological constant using the 1/D expansion in near zone of the black hole we obtain the effective equation for the stationary black hole. The effective equation describes the Myers-Perry black hole, bumpy black holes and, possibly, the black ring solution as its solutions. In this effective theory the black hole is represented as an embedded membrane in the background, e.g., Minkowski or Anti-de Sitter spacetime and its mean curvature is given by the surface gravity redshifted by the background gravitational field and the local Lorentz boost. The local Lorentz boost property of the effective equation is observed also in the metric itself. In fact we show that the leading order metric of the Einstein equation in the 1/D expansion is generically regarded as a Lorentz boosted Schwarzschild black hole. We apply this Lorentz boost property of the stationary black hole solution to solve perturbation equations. As a result we obtain an analytic formula for quasinormal modes of the singly rotating Myers-Perry black hole in the 1/D expansion.
Plasma horizons of a charged black hole
Hanni, R.S.
The most promising way of detecting black holes seems to be through electromagnetic radiation emitted by nearby charged particles. The nature of this radiation depends strongly on the local electromagnetic field, which varies with the charge of the black hole. It has often been purported that a black hole with significant charge will not be observed, because the dominance of the Coulomb interaction forces its neutralization through selective accretion. This paper shows that it is possible to balance the electric attraction of particles whose charge is opposite that of the black hole with magnetic forces and (assuming an axisymmetric, stationary solution) covariantly define the regions in which this is possible. A Kerr-Newman hole in an asymptotically uniform magnetic field and a current ring centered about a Reissner-Nordstroem hole are used as examples, because of their relevance to processes through which black holes may be observed. (Auth.)
Siegel modular forms and black hole entropy
Belin, Alexandre; Castro, Alejandra [Institute for Theoretical Physics, University of Amsterdam,Science Park 904, Postbus 94485, 1090 GL Amsterdam (Netherlands); Gomes, João [Institute for Theoretical Physics, University of Amsterdam,Science Park 904, Postbus 94485, 1090 GL Amsterdam (Netherlands); Institute for Theoretical Physics, University of Utrecht,Leuvenlaan 3584 CE Utrecht (Netherlands); Keller, Christoph A. [Department of Mathematics, ETH Zurich,CH-8092 Zurich (Switzerland)
We discuss the application of Siegel Modular Forms to Black Hole entropy counting. The role of the Igusa cusp form χ_10 in the D1D5P system is well-known, and its transformation properties are what allows precision microstate counting in this case. We apply a similar method to extract the Fourier coefficients of other Siegel modular and paramodular forms, and we show that they could serve as candidates for other types of black holes. We investigate the growth of their coefficients, identifying the dominant contributions and the leading logarithmic corrections in various regimes. We also discuss similarities and differences to the behavior of χ_10, and possible physical interpretations of such forms both from a microscopic and gravitational point of view.
Magnetic monopoles near the black hole threshold
Lue, A.; Weinberg, E.J.
We present new analytic and numerical results for self-gravitating SU(2)-Higgs magnetic monopoles approaching the black hole threshold. Our investigation extends to large Higgs self-coupling, λ, a regime heretofore unexplored. When λ is small, the critical solution where a horizon first appears is extremal Reissner-Nordstroem outside the horizon but has a nonsingular interior. When λ is large, the critical solution is an extremal black hole with non-Abelian hair and a mass less than the extremal Reissner-Nordstroem value. The transition between these two regimes is reminiscent of a first-order phase transition. We analyze in detail the approach to these critical solutions as the Higgs expectation value is varied, and compare this analysis with the numerical results. copyright 1999 The American Physical Society
NASA's Chandra Finds Black Holes Are "Green"
Black holes are the most fuel efficient engines in the Universe, according to a new study using NASA's Chandra X-ray Observatory. By making the first direct estimate of how efficient or "green" black holes are, this work gives insight into how black holes generate energy and affect their environment. The new Chandra finding shows that most of the energy released by matter falling toward a supermassive black hole is in the form of high-energy jets traveling at near the speed of light away from the black hole. This is an important step in understanding how such jets can be launched from magnetized disks of gas near the event horizon of a black hole. [Illustration of Fuel for a Black Hole Engine] "Just as with cars, it's critical to know the fuel efficiency of black holes," said lead author Steve Allen of the Kavli Institute for Particle Astrophysics and Cosmology at Stanford University, and the Stanford Linear Accelerator Center. "Without this information, we cannot figure out what is going on under the hood, so to speak, or what the engine can do." Allen and his team used Chandra to study nine supermassive black holes at the centers of elliptical galaxies. These black holes are relatively old and generate much less radiation than quasars, rapidly growing supermassive black holes seen in the early Universe. The surprise came when the Chandra results showed that these "quiet" black holes are all producing much more energy in jets of high-energy particles than in visible light or X-rays. These jets create huge bubbles, or cavities, in the hot gas in the galaxies. [Animation of Black Hole in Elliptical Galaxy] The efficiency of the black hole energy-production was calculated in two steps: first Chandra images of the inner regions of the galaxies were used to estimate how much fuel is available for the black hole; then Chandra images were used to estimate the power required to produce
Boosting jet power in black hole spacetimes.
Neilsen, David; Lehner, Luis; Palenzuela, Carlos; Hirschmann, Eric W; Liebling, Steven L; Motl, Patrick M; Garrett, Travis
The extraction of rotational energy from a spinning black hole via the Blandford-Znajek mechanism has long been understood as an important component in models to explain energetic jets from compact astrophysical sources. Here we show more generally that the kinetic energy of the black hole, both rotational and translational, can be tapped, thereby producing even more luminous jets powered by the interaction of the black hole with its surrounding plasma. We study the resulting Poynting jet that arises from single boosted black holes and binary black hole systems. In the latter case, we find that increasing the orbital angular momenta of the system and/or the spins of the individual black holes results in an enhanced Poynting flux.
The membrane paradigm for black holes
Price, R.H.; Thorne, K.S.
It is now widely accepted that black holes exist and have an astrophysical role, in particular as the likely power source of quasars. To understand this role with ease, the authors and their colleagues have developed a new paradigm for black holes - a new way to picture, think about and describe them. As far as possible it treats black holes as ordinary astrophysical objects, made of real material. A black hole in this description is a spherical or oblate surface made of a thin, electrically conducting membrane. It was the authors' quest to understand the Blandford-Znajek process intuitively that led them to create the membrane paradigm. Their strategy was to translate the general-relativistic mathematics of black holes into the same language of three-dimensional space that is used for magnetized plasmas and to create a new set of black-hole diagrams and pictures to go along with the language. 9 figs
Production of spinning black holes at colliders
Park, S. C.; Song, H. S.
When the Planck scale is as low as a TeV, there will be chances to produce black holes at future colliders. Generally, black holes produced via particle collisions can have non-zero angular momenta. We estimate the production cross-section of rotating black holes in the context of low-energy gravitation theories by taking the effects of rotation into account. The production cross section is shown to be enhanced by a factor of 2-3 over the naive estimate σ = π R_S², where R_S denotes the Schwarzschild radius of a black hole for a given energy. We also point out that the decay spectrum may have a distinguishable angular dependence through the grey-body factor of a rotating black hole. The angular dependence of decaying particles may give a clear signature for the effect of rotating black holes.
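As a rough numerical illustration of the naive geometric estimate quoted above (a sketch only; the paper's rotation-enhanced, higher-dimensional cross section is not reproduced here), the snippet below evaluates σ = π R_S² using the ordinary four-dimensional Schwarzschild radius R_S = 2GM/c², with a mass value chosen purely for illustration.

import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m s^-1

def schwarzschild_radius(mass_kg):
    # Four-dimensional Schwarzschild radius R_S = 2 G M / c^2 (illustrative stand-in
    # for the TeV-scale, higher-dimensional radius used in the paper).
    return 2.0 * G * mass_kg / c**2

def geometric_cross_section(mass_kg):
    # Naive black-hole production cross section sigma = pi * R_S^2.
    return math.pi * schwarzschild_radius(mass_kg)**2

# Hypothetical example: a 10 TeV centre-of-mass energy converted to mass via E = m c^2.
m_10TeV = 10e12 * 1.602e-19 / c**2
print(geometric_cross_section(m_10TeV))  # in m^2; the paper finds a factor 2-3 enhancement for rotating holes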
Hawking temperature of constant curvature black holes
Cai Ronggen; Myung, Yun Soo
The constant curvature (CC) black holes are higher dimensional generalizations of Banados-Teitelboim-Zanelli black holes. It is known that these black holes have the unusual topology of M_{D-1} × S^1, where D is the spacetime dimension and M_{D-1} stands for a conformal Minkowski spacetime in D-1 dimensions. The unusual topology and time-dependence for the exterior of these black holes cause some difficulties to derive their thermodynamic quantities. In this work, by using a globally embedding approach, we obtain the Hawking temperature of the CC black holes. We find that the Hawking temperature takes the same form when using both the static and global coordinates. Also, it is identical to the Gibbons-Hawking temperature of the boundary de Sitter spaces of these CC black holes.
Instability of ultra-spinning black holes
Emparan, Roberto; Myers, Robert C.
It has long been known that, in higher-dimensional general relativity, there are black hole solutions with an arbitrarily large angular momentum for a fixed mass. We examine the geometry of the event horizon of such ultra-spinning black holes and argue that these solutions become unstable at large enough rotation. Hence we find that higher-dimensional general relativity imposes an effective 'Kerr-bound' on spinning black holes through a dynamical decay mechanism. Our results also give indications of the existence of new stationary black holes with 'rippled' horizons of spherical topology. We consider various scenarios for the possible decay of ultra-spinning black holes, and finally discuss the implications of our results for black holes in braneworld scenarios. (author)
Charged topological black hole pair creation
I examine the pair creation of black holes in space-times with a cosmological constant of either sign. I consider cosmological C-metrics and show that the conical singularities in this metric vanish only for three distinct classes of black hole metric, two of which have compact event horizons on each spatial slice. One class is a generalization of the Reissner-Nordstroem (anti-)de Sitter black holes in which the event horizons are the direct product of a null line with a 2-surface with topology of genus g. The other class consists of neutral black holes whose event horizons are the direct product of a null conoid with a circle. In the presence of a domain wall, black hole pairs of all possible types will be pair created for a wide range of mass and charge, including even negative mass black holes. I determine the relevant instantons and Euclidean actions for each case. (orig.)
Reversible Carnot cycle outside a black hole
Xi-Hao, Deng; Si-Jie, Gao
A Carnot cycle outside a Schwarzschild black hole is investigated in detail. We propose a reversible Carnot cycle with a black hole being the cold reservoir. In our model, a Carnot engine operates between a hot reservoir with temperature T_1 and a black hole with Hawking temperature T_H. By naturally extending the ordinary Carnot cycle to the black hole system, we show that the thermal efficiency for a reversible process can reach the maximal efficiency 1 - T_H/T_1. Consequently, black holes can be used to determine the thermodynamic temperature by means of the Carnot cycle. The role of the atmosphere around the black hole is discussed. We show that the thermal atmosphere provides a necessary mechanism to make the process reversible. (general)
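A minimal numerical sketch of the quantities in this abstract (standard Schwarzschild Hawking temperature; the solar-mass black hole and the 300 K hot reservoir are illustrative choices, not values from the paper):

import math

hbar = 1.055e-34   # J s
c    = 2.998e8     # m / s
G    = 6.674e-11   # m^3 kg^-1 s^-2
k_B  = 1.381e-23   # J / K
M_sun = 1.989e30   # kg

def hawking_temperature(mass_kg):
    # T_H = hbar c^3 / (8 pi G M k_B) for a Schwarzschild black hole.
    return hbar * c**3 / (8.0 * math.pi * G * mass_kg * k_B)

def carnot_efficiency(T_hot, T_cold):
    # Reversible (maximal) efficiency 1 - T_cold / T_hot.
    return 1.0 - T_cold / T_hot

T_H = hawking_temperature(M_sun)              # ~6e-8 K for a solar-mass hole
print(T_H, carnot_efficiency(300.0, T_H))     # efficiency is essentially 1 for any laboratory-scale T_1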
Hidden conformal symmetry of extremal black holes
Chen Bin; Long Jiang; Zhang Jiaju
We study the hidden conformal symmetry of extremal black holes. We introduce a new set of conformal coordinates to write the SL(2,R) generators. We find that the Laplacian of the scalar field in many extremal black holes, including Kerr(-Newman), Reissner-Nordstrom, warped AdS_3, and null warped black holes, could be written in terms of the SL(2,R) quadratic Casimir. This suggests that there exist dual conformal field theory (CFT) descriptions of these black holes. From the conformal coordinates, the temperatures of the dual CFTs could be read directly. For the extremal black hole, the Hawking temperature is vanishing. Correspondingly, only the left (right) temperature of the dual CFT is nonvanishing, and the excitations of the other sector are suppressed. In the probe limit, we compute the scattering amplitudes of the scalar off the extremal black holes and find perfect agreement with the CFT prediction.
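For orientation, these are the dual CFT temperatures usually quoted for Kerr in the hidden-conformal-symmetry literature (conventions may differ slightly from the paper's extremal conformal coordinates; r_± are the horizon radii and a = J/M):

\[ T_L = \frac{r_+ + r_-}{4\pi a}, \qquad T_R = \frac{r_+ - r_-}{4\pi a}, \]

so in the extremal limit r_+ → r_- the right-moving temperature T_R vanishes while T_L stays finite, matching the statement above that only one sector of the dual CFT remains excited.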
Einstein's enigma or black holes in my bubble bath
Vishveshwara, C V
A funny rendition of the story of gravitation theory from the early historic origins to the developments in astrophysics, focusing on Albert Einstein's theory of general relativity and black-hole physics.
Measuring spin of black holes in the universe
Measuring spin of black holes in the universe. Department of Physics, Indian Institute of Science, Bangalore. Notes: 74th Annual Meeting of Indian Academy of Science.
Microscopic origin of black hole reentrant phase transitions
Zangeneh, M. Kord; Dehyadegari, A.; Sheykhi, A.; Mann, R. B.
Understanding the microscopic behavior of black hole ingredients has been one of the important challenges in black hole physics during the past decades. In order to shed some light on the microscopic structure of black holes, in this paper we explore a recently observed phenomenon for black holes, namely the reentrant phase transition, by employing the Ruppeiner geometry. Interestingly enough, we observe two properties of the phase behavior of small black holes that lead to the reentrant phase transition: they are correlated and they are of the interaction type. For the range of pressure in which the system undergoes a reentrant phase transition, it transitions from the large black hole phase to the small one, which possesses higher correlation than in the other ranges of pressure. On the other hand, the type of interaction between small black holes near the large/small transition line differs for the usual and reentrant phase transitions. Indeed, for the usual case the dominant interaction is repulsive, whereas for the reentrant case we encounter an attractive interaction. We show that in the reentrant phase transition case the small black holes behave like a bosonic gas, whereas in the usual phase transition case they behave like a quantum anyon gas.
Measuring the spins of accreting black holes
McClintock, Jeffrey E; Narayan, Ramesh; Gou, Lijun; Kulkarni, Akshay; Penna, Robert F; Steiner, James F; Davis, Shane W; Orosz, Jerome A; Remillard, Ronald A
A typical galaxy is thought to contain tens of millions of stellar-mass black holes, the collapsed remnants of once massive stars, and a single nuclear supermassive black hole. Both classes of black holes accrete gas from their environments. The accreting gas forms a flattened orbiting structure known as an accretion disk. During the past several years, it has become possible to obtain measurements of the spins of the two classes of black holes by modeling the x-ray emission from their accretion disks. Two methods are employed, both of which depend upon identifying the inner radius of the accretion disk with the innermost stable circular orbit, whose radius depends only on the mass and spin of the black hole. In the Fe Kα method, which applies to both classes of black holes, one models the profile of the relativistically broadened iron line with a special focus on the gravitationally redshifted red wing of the line. In the continuum-fitting (CF) method, which has so far only been applied to stellar-mass black holes, one models the thermal x-ray continuum spectrum of the accretion disk. We discuss both methods, with a strong emphasis on the CF method and its application to stellar-mass black holes. Spin results for eight stellar-mass black holes are summarized. These data are used to argue that the high spins of at least some of these black holes are natal, and that the presence or absence of relativistic jets in accreting black holes is not entirely determined by the spin of the black hole.
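Both methods rest on the fact that the ISCO radius is a known, monotonic function of spin. A minimal sketch using the standard Bardeen-Press-Teukolsky expression in geometrized units (G = c = 1); the spin values shown are illustrative:

import math

def isco_radius(a, M=1.0, prograde=True):
    # ISCO radius of a Kerr black hole (Bardeen, Press & Teukolsky 1972), in units of M.
    # a is the dimensionless spin parameter J/M^2, 0 <= a < 1.
    z1 = 1.0 + (1.0 - a**2)**(1.0 / 3.0) * ((1.0 + a)**(1.0 / 3.0) + (1.0 - a)**(1.0 / 3.0))
    z2 = math.sqrt(3.0 * a**2 + z1**2)
    sign = -1.0 if prograde else 1.0
    return M * (3.0 + z2 + sign * math.sqrt((3.0 - z1) * (3.0 + z1 + 2.0 * z2)))

print(isco_radius(0.0))    # 6.0 M  (Schwarzschild)
print(isco_radius(0.998))  # ~1.24 M (near-extremal, prograde)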
Gravitational lensing by a Horndeski black hole
Badia, Javier [Instituto de Astronomia y Fisica del Espacio (IAFE, CONICET-UBA), Buenos Aires (Argentina); Eiroa, Ernesto F. [Instituto de Astronomia y Fisica del Espacio (IAFE, CONICET-UBA), Buenos Aires (Argentina); Universidad de Buenos Aires, Ciudad Universitaria Pabellon I, Departamento de Fisica, Facultad de Ciencias Exactas y Naturales, Buenos Aires (Argentina)
In this article we study gravitational lensing by non-rotating and asymptotically flat black holes in Horndeski theory. By adopting the strong deflection limit, we calculate the deflection angle, from which we obtain the positions and the magnifications of the relativistic images. We compare our results with those corresponding to black holes in General Relativity. We analyze the astrophysical consequences in the case of the nearest supermassive black holes. (orig.)
Unified geometric description of black hole thermodynamics
Alvarez, Jose L.; Quevedo, Hernando; Sanchez, Alberto
In the space of thermodynamic equilibrium states we introduce a Legendre invariant metric which contains all the information about the thermodynamics of black holes. The curvature of this thermodynamic metric becomes singular at those points where, according to the analysis of the heat capacities, phase transitions occur. This result is valid for the Kerr-Newman black hole and all its special cases and, therefore, provides a unified description of black hole phase transitions in terms of curvature singularities.
Effective Stringy Description of Schwarzschild Black Holes
Krasnov , Kirill; Solodukhin , Sergey N.
We start by pointing out that certain Riemann surfaces appear rather naturally in the context of wave equations in the black hole background. For a given black hole there are two closely related surfaces. One is the Riemann surface of complexified "tortoise" coordinate. The other Riemann surface appears when the radial wave equation is interpreted as the Fuchsian differential equation. We study these surfaces in detail for the BTZ and Schwarzschild black holes in four and higher dimensions....
Statistical Mechanics and Black Hole Thermodynamics
Carlip, Steven
Black holes are thermodynamic objects, but despite recent progress, the ultimate statistical mechanical origin of black hole temperature and entropy remains mysterious. Here I summarize an approach in which the entropy is viewed as arising from "would-be pure gauge" degrees of freedom that become dynamical at the horizon. For the (2+1)-dimensional black hole, these degrees of freedom can be counted, and yield the correct Bekenstein-Hawking entropy; the corresponding problem in 3+1 dimension...
A New Model of Black Hole Formation
Thayer G. D.
The formation of a black hole and its event horizon are described. Conclusions, which are the result of a thought experiment, show that Schwarzschild [1] was correct: A singularity develops at the event horizon of a newly-formed black hole. The intense gravitational field that forms near the event horizon results in the mass-energy of the black hole accumulating in a layer just inside the event horizon, rather than collapsing into a central singularity.
Observability of Quantum State of Black Hole
David, J R; Mandal, G; Wadia, S R; David, Justin R.; Dhar, Avinash; Mandal, Gautam; Wadia, Spenta R.
We analyze terms subleading to Rutherford in the S-matrix between a black hole and probes of successively higher energies. We show that by an appropriate choice of the probe one can read off the quantum state of the black hole from the S-matrix, staying asymptotically far from the black hole at all times. We interpret the scattering experiment as scattering off classical stringy backgrounds which explicitly depend on the internal quantum numbers of the black hole.
Test fields cannot destroy extremal black holes
Natário, José; Queimada, Leonel; Vicente, Rodrigo
We prove that (possibly charged) test fields satisfying the null energy condition at the event horizon cannot overspin/overcharge extremal Kerr–Newman or Kerr–Newman–anti de Sitter black holes, that is, the weak cosmic censorship conjecture cannot be violated in the test field approximation. The argument relies on black hole thermodynamics (without assuming cosmic censorship), and does not depend on the precise nature of the fields. We also discuss generalizations of this result to other extremal black holes. (paper)
Low-mass black holes as the remnants of primordial black hole formation.
Greene, Jenny E
Bridging the gap between the approximately ten solar mass 'stellar mass' black holes and the 'supermassive' black holes of millions to billions of solar masses are the elusive 'intermediate-mass' black holes. Their discovery is key to understanding whether supermassive black holes can grow from stellar-mass black holes or whether a more exotic process accelerated their growth soon after the Big Bang. Currently, tentative evidence suggests that the progenitors of supermassive black holes were formed as ∼10^4-10^5 M_⊙ black holes via the direct collapse of gas. Ongoing searches for intermediate-mass black holes at galaxy centres will help shed light on this formation mechanism.
White dwarfs - black holes. Weisse Zwerge - schwarze Loecher
Sexl, R; Sexl, H
The physical arguments and problems of relativistic astrophysics are presented in a correct way, but without any higher mathematics. The book is addressed to teachers, experimental physicists, and others with a basic knowledge covering an introductory lecture in physics. The issues dealt with are: fundamentals of general relativity, classical tests of general relativity, curved space-time, stars and planets, pulsars, gravitational collapse and black holes, the search for black holes, gravitational waves, cosmology, cosmogony, and the early universe.
Black holes, information, and the universal coefficient theorem
Patrascu, Andrei T. [Department of Physics and Astronomy, University College London, London WC1E 6BT (United Kingdom)
General relativity is based on the diffeomorphism covariant formulation of the laws of physics while quantum mechanics is based on the principle of unitary evolution. In this article, I provide a possible answer to the black hole information paradox by means of homological algebra and pairings generated by the universal coefficient theorem. The unitarity of processes involving black holes is restored by the demanding invariance of the laws of physics to the change of coefficient structures in cohomology.
Quantum capacity of quantum black holes
Adami, Chris; Bradler, Kamil
The fate of quantum entanglement interacting with a black hole has been an enduring mystery, not the least because standard curved space field theory does not address the interaction of black holes with matter. We discuss an effective Hamiltonian of matter interacting with a black hole that has a precise analogue in quantum optics and correctly reproduces both spontaneous and stimulated Hawking radiation with grey-body factors. We calculate the quantum capacity of this channel in the limit of perfect absorption, as well as in the limit of a perfectly reflecting black hole (a white hole). We find that the white hole is an optimal quantum cloner, and is isomorphic to the Unruh channel with positive quantum capacity. The complementary channel (across the horizon) is entanglement-breaking with zero capacity, avoiding a violation of the quantum no-cloning theorem. The black hole channel on the contrary has vanishing capacity, while its complement has positive capacity instead. Thus, quantum states can be reconstructed faithfully behind the black hole horizon, but not outside. This work sheds new light on black hole complementarity because it shows that black holes can both reflect and absorb quantum states without violating the no-cloning theorem, and makes quantum firewalls obsolete.
Simulations of nearly extremal binary black holes
Giesler, Matthew; Scheel, Mark; Hemberger, Daniel; Lovelace, Geoffrey; Kuper, Kevin; Boyle, Michael; Szilagyi, Bela; Kidder, Lawrence; SXS Collaboration
Astrophysical black holes could have nearly extremal spins; therefore, nearly extremal black holes could be among the binaries that current and future gravitational-wave observatories will detect. Predicting the gravitational waves emitted by merging black holes requires numerical-relativity simulations, but these simulations are especially challenging when one or both holes have mass m and spin S exceeding the Bowen-York limit of S/m^2 = 0.93. Using improved methods we simulate an unequal-mass, precessing binary black hole coalescence, where the larger black hole has S/m^2 = 0.99. We also use these methods to simulate a nearly extremal non-precessing binary black hole coalescence, where both black holes have S/m^2 = 0.994, nearly reaching the Novikov-Thorne upper bound for holes spun up by thin accretion disks. We demonstrate numerical convergence and estimate the numerical errors of the waveforms; we compare numerical waveforms from our simulations with post-Newtonian and effective-one-body waveforms; and we compare the evolution of the black-hole masses and spins with analytic predictions.
Jet Physics of Accreting Super-Massive Black Holes in the Era of the Fermi Gamma-ray Space Telescope
D' Ammando, Filippo, E-mail: [email protected] [Dipartimento di Fisica e Astronomia, Universitá di Bologna, Bologna (Italy); Istituto di Radioastronomia (INAF), Bologna (Italy)
The Fermi Gamma-ray Space Telescope with its main instrument on-board, the Large Area Telescope (LAT), opened a new era in the study of high-energy emission from Active Galactic Nuclei (AGN). When combined with contemporaneous ground- and space-based observations, Fermi-LAT achieves its full capability to characterize the jet structure and the emission mechanisms at work in radio-loud AGN with different black hole mass and accretion rate, from flat spectrum radio quasars to narrow-line Seyfert 1 (NLSy1) galaxies. Here, I discuss important findings regarding the blazar population included in the third LAT catalog of AGN and the γ-ray emitting NLSy1. Moreover, the detection of blazars at redshift beyond three in γ rays allows us to constrain the growth and evolution of heavy black holes over cosmic time, suggesting that the radio-loud phase may be important for a fast black hole growth in the early Universe. Finally, results on extragalactic objects from the third catalog of hard LAT sources are presented.
Black hole physics from two-dimensional dilaton gravity based on the SL(2,R)/U(1) coset model
Nojiri, S.; Oda, I.
We analyze the quantum two-dimensional dilaton gravity model, which is described by the SL(2,R)/U(1) gauged Wess-Zumino-Witten model deformed by a (1,1) operator. We show that the curvature singularity does not appear when the central charge c_matter of the matter fields satisfies 22 < c_matter < 24. In this range, matter shock waves, whose energy-momentum is proportional to δ(x^+ - x_0^+), create a kind of wormholes, i.e., causally disconnected regions. Most of the quantum information in past null infinity is lost in future null infinity but the lost information would be carried by the wormholes. We also discuss the problem of defining the mass of quantum black holes. On the basis of the argument by Regge and Teitelboim, we show that the ADM mass measured by the observer who lives in one of the asymptotically flat regions is finite and does not vanish in general. On the other hand, the Bondi mass is ill defined in this model. Instead of the Bondi mass, we consider the mass measured by observers who live in an asymptotically flat region at first. A class of observers finds the mass of the black hole created by a shock wave changes as the observers' proper time goes by, i.e., they observe Hawking radiation. The measured mass vanishes after the infinite proper time and the black hole evaporates completely. Therefore the total Hawking radiation is positive even when N < 24.
Tidal interactions with Kerr black holes
The tidal deformation of an extended test body falling with zero angular momentum into a Kerr black hole is calculated. Numerical results for infall along the symmetry axis and in the equatorial plane of the black hole are presented for a range of values of a, the specific angular momentum of the black hole. Estimates of the tidal contribution to the gravitational radiation are also given. The tidal contribution in equatorial infall into a maximally rotating Kerr black hole may be of the same order as the center-of-mass contribution to the gravitational radiation
Noncommutative Black Holes at the LHC
Villhauer, Elena Michelle
Based on the latest public results, 13 TeV data from the Large Hadron Collider at CERN has not indicated any evidence of hitherto tested models of quantum black holes, semiclassical black holes, or string balls. Such models have predicted signatures of particles with high transverse momenta. Noncommutative black holes remain an untested model of TeV-scale gravity that offers the starkly different signature of particles with relatively low transverse momenta. Considerations for a search for charged noncommutative black holes using the ATLAS detector will be discussed.
Entropy evaporated by a black hole
Zurek, W.H.
It is shown that the entropy of the radiation evaporated by an uncharged, nonrotating black hole into vacuum in the course of its lifetime is approximately (4/3) times the initial entropy of this black hole. Also considered is a thermodynamically reversible process in which an increase of black-hole entropy is equal to the decrease of the entropy of its surroundings. Implications of these results for the generalized second law of thermodynamics and for the interpretation of black-hole entropy are pointed out
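To make the quoted 4/3 factor concrete, a small sketch using the standard Bekenstein-Hawking entropy of a Schwarzschild hole, S_BH = 4π G M² k_B / (ħ c); the solar-mass input is illustrative only:

import math

hbar, c, G, k_B = 1.055e-34, 2.998e8, 6.674e-11, 1.381e-23
M_sun = 1.989e30  # kg

def bekenstein_hawking_entropy(mass_kg):
    # S_BH = k_B * A * c^3 / (4 G hbar) with A = 16 pi G^2 M^2 / c^4, i.e. 4 pi G M^2 k_B / (hbar c).
    return 4.0 * math.pi * G * mass_kg**2 * k_B / (hbar * c)

S_initial = bekenstein_hawking_entropy(M_sun)
S_radiated = (4.0 / 3.0) * S_initial   # the estimate quoted in the abstract
print(S_initial, S_radiated)           # both in J/K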
On algebraically special perturbations of black holes
Chandrasekhar, S.
Algebraically special perturbations of black holes excite gravitational waves that are either purely ingoing or purely outgoing. Solutions, appropriate to such perturbations of the Kerr, the Schwarzschild, and the Reissner-Nordstroem black-holes, are obtained in explicit forms by different methods. The different methods illustrate the remarkable inner relations among different facets of the mathematical theory. In the context of the Kerr black-hole they derive from the different ways in which the explicit value of the Starobinsky constant emerges, and in the context of the Schwarzschild and the Reissner-Nordstroem black-holes they derive from the potential barriers surrounding them belonging to a special class. (author)
The statistical clustering of primordial black holes
Carr, B.J.
It is shown that Meszaros theory of galaxy formation, in which galaxies form from the density perturbations associated with the statistical fluctuation in the number density of primordial black holes, must be modified if the black holes are initially surrounded by regions of lower radiation density than average (as is most likely). However, even in this situation, the sort of effect Meszaros envisages does occur and could in principle cause galactic mass-scales to bind at the conventional time. In fact, the requirement that galaxies should not form prematurely implies that black holes could not have a critical density in the mass range above 10^5 M(sun). If the mass spectrum of primordial black holes falls off more slowly than m^{-3} (as expected), then the biggest black holes have the largest clustering effect. In this case the black hole clustering theory of galaxy formation reduces to the black hole seed theory of galaxy formation, in which each galaxy becomes bound under the gravitational influence of a single black hole nucleus. The seed theory could be viable only if the early Universe had a soft equation of state until a time exceeding 10^{-4} s or if something prevented black hole formation before 1 s. (orig.)
The horizon of the lightest black hole
Calmet, Xavier [University of Sussex, Physics and Astronomy, Falmer, Brighton (United Kingdom); Casadio, Roberto [Universita di Bologna, Dipartimento di Fisica e Astronomia, Bologna (Italy); I.N.F.N., Sezione di Bologna, Bologna (Italy)
We study the properties of the poles of the resummed graviton propagator obtained by resumming bubble matter diagrams which correct the classical graviton propagator. These poles have been previously interpreted as black holes precursors. Here, we show using the horizon wave-function formalism that these poles indeed have properties which make them compatible with being black hole precursors. In particular, when modeled with a Breit-Wigner distribution, they have a well-defined gravitational radius. The probability that the resonance is inside its own gravitational radius, and thus that it is a black hole, is about one half. Our results confirm the interpretation of these poles as black hole precursors. (orig.)
Rotating black holes and Coriolis effect
Chou, Chia-Jui, E-mail: [email protected] [Department of Electrophysics, National Chiao Tung University, Hsinchu, Taiwan, ROC (China); Wu, Xiaoning, E-mail: [email protected] [Institute of Mathematics, Academy of Mathematics and System Science, CAS, Beijing, 100190 (China); Yang, Yi, E-mail: [email protected] [Department of Electrophysics, National Chiao Tung University, Hsinchu, Taiwan, ROC (China); Yuan, Pei-Hung, E-mail: [email protected] [Institute of Physics, National Chiao Tung University, Hsinchu, Taiwan, ROC (China)
In this work, we consider the fluid/gravity correspondence for general rotating black holes. By using the suitable boundary condition in near horizon limit, we study the correspondence between gravitational perturbation and fluid equation. We find that the dual fluid equation for rotating black holes contains a Coriolis force term, which is closely related to the angular velocity of the black hole horizon. This can be seen as a dual effect for the frame-dragging effect of rotating black hole under the holographic picture.
Black holes with Yang-Mills hair
Kleihaus, B.; Kunz, J.; Sood, A.; Wirschins, M.
In Einstein-Maxwell theory black holes are uniquely determined by their mass, their charge and their angular momentum. This is no longer true in Einstein-Yang-Mills theory. We discuss sequences of neutral and charged SU(N) Einstein-Yang-Mills black holes, which are static spherically symmetric and asymptotically flat, and which carry Yang-Mills hair. Furthermore, in Einstein-Maxwell theory static black holes are spherically symmetric. We demonstrate that, in contrast, SU(2) Einstein-Yang-Mills theory possesses a sequence of black holes, which are static and only axially symmetric
On the thermodynamics of hairy black holes
Anabalón, Andrés [Departamento de Ciencias, Facultad de Artes Liberales y Facultad de Ingeniería y Ciencias, Universidad Adolfo Ibáñez, Viña del Mar (Chile); Astefanesei, Dumitru [Instituto de Física, Pontificia Universidad Católica de Valparaíso, Casilla 4059, Valparaíso (Chile); Choque, David, E-mail: [email protected] [Universidad Técnica Federico Santa María, Av. España 1680, Valparaiso (Chile)
We investigate the thermodynamics of a general class of exact 4-dimensional asymptotically Anti-de Sitter hairy black hole solutions and show that, for a fixed temperature, there are small and large hairy black holes similar to the Schwarzschild–AdS black hole. The large black holes have positive specific heat and so they can be in equilibrium with a thermal bath of radiation at the Hawking temperature. The relevant thermodynamic quantities are computed by using the Hamiltonian formalism and counterterm method. We explicitly show that there are first order phase transitions similar to the Hawking–Page phase transition.
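For comparison with the small/large structure mentioned above, a sketch of the familiar Schwarzschild-AdS_4 benchmark (not the hairy solutions themselves): with f(r) = 1 - 2M/r + r²/L², the temperature T(r_+) = (1 + 3r_+²/L²)/(4π r_+) has a minimum at r_+ = L/√3, so each temperature above that minimum admits a small branch (negative specific heat) and a large branch (positive specific heat).

import math

def temperature_sads(r_plus, L=1.0):
    # Hawking temperature of Schwarzschild-AdS_4: T = (1 + 3 r_+^2 / L^2) / (4 pi r_+).
    return (1.0 + 3.0 * r_plus**2 / L**2) / (4.0 * math.pi * r_plus)

# T decreases for small r_+, reaches a minimum at r_+ = L/sqrt(3), then increases:
for r in (0.1, 0.3, 1.0 / math.sqrt(3.0), 1.0, 3.0):
    print(round(r, 3), temperature_sads(r))
# The large-r_+ branch has dT/dr_+ > 0, i.e. positive specific heat, as for the large hairy black holes discussed above.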
Destroying black holes with test bodies
Jacobson, Ted [Center for Fundamental Physics, University of Maryland, College Park, MD 20742-4111 (United States); Sotiriou, Thomas P, E-mail: [email protected], E-mail: [email protected] [Department of Applied Mathematics and Theoretical Physics, Centre for Mathematical Sciences, University of Cambridge, Wilberforce Road, Cambridge, CB3 0WA (United Kingdom)
If a black hole can accrete a body whose spin or charge would send the black hole parameters over the extremal limit, then a naked singularity would presumably form, in violation of the cosmic censorship conjecture. We review some previous results on testing cosmic censorship in this way using the test body approximation, focusing mostly on the case of neutral black holes. Under certain conditions a black hole can indeed be over-spun or over-charged in this approximation, hence radiative and self-force effects must be taken into account to further test cosmic censorship.
Charged black holes in phantom cosmology
Jamil, Mubasher; Qadir, Asghar; Rashid, Muneer Ahmad [National University of Sciences and Technology, Center for Advanced Mathematics and Physics, Rawalpindi (Pakistan)
In the classical relativistic regime, the accretion of phantom-like dark energy onto a stationary black hole reduces the mass of the black hole. We have investigated the accretion of phantom energy onto a stationary charged black hole and have determined the condition under which this accretion is possible. This condition restricts the mass-to-charge ratio in a narrow range. This condition also challenges the validity of the cosmic-censorship conjecture since a naked singularity is eventually produced due to accretion of phantom energy onto black hole. (orig.)
Schwarzschild black holes can wear scalar wigs.
Barranco, Juan; Bernal, Argelia; Degollado, Juan Carlos; Diez-Tejedor, Alberto; Megevand, Miguel; Alcubierre, Miguel; Núñez, Darío; Sarbach, Olivier
We study the evolution of a massive scalar field surrounding a Schwarzschild black hole and find configurations that can survive for arbitrarily long times, provided the black hole or the scalar field mass is small enough. In particular, both ultralight scalar field dark matter around supermassive black holes and axionlike scalar fields around primordial black holes can survive for cosmological times. Moreover, these results are quite generic in the sense that fairly arbitrary initial data evolve, at late times, as a combination of those long-lived configurations.
Surface geometry of 5D black holes and black rings
Frolov, Valeri P.; Goswami, Rituparno
We discuss geometrical properties of the horizon surface of five-dimensional rotating black holes and black rings. Geometrical invariants characterizing these 3D geometries are calculated. We obtain a global embedding of the 5D rotating black horizon surface into a flat space. We also describe the Kaluza-Klein reduction of the black ring solution (along the direction of its rotation) which, though it is nakedly singular, relates this solution to the 4D metric of a static black hole distorted by the presence of external scalar (dilaton) and vector ('electromagnetic') fields. The properties of the reduced black hole horizon and its embedding in E^3 are briefly discussed.
NASA Observatory Confirms Black Hole Limits
The very largest black holes reach a certain point and then grow no more, according to the best survey to date of black holes made with NASA's Chandra X-ray Observatory. Scientists have also discovered many previously hidden black holes that are well below their weight limit. These new results corroborate recent theoretical work about how black holes and galaxies grow. The biggest black holes, those with at least 100 million times the mass of the Sun, ate voraciously during the early Universe. Nearly all of them ran out of 'food' billions of years ago and went onto a forced starvation diet. [Focus on Black Holes in the Chandra Deep Field North] On the other hand, black holes between about 10 and 100 million solar masses followed a more controlled eating plan. Because they took smaller portions of their meals of gas and dust, they continue growing today. "Our data show that some supermassive black holes seem to binge, while others prefer to graze", said Amy Barger of the University of Wisconsin in Madison and the University of Hawaii, lead author of the paper describing the results in the latest issue of The Astronomical Journal (Feb 2005). "We now understand better than ever before how supermassive black holes grow." One revelation is that there is a strong connection between the growth of black holes and the birth of stars. Previously, astronomers had done careful studies of the birthrate of stars in galaxies, but didn't know as much about the black holes at their centers. [DSS Optical Image of Lockman Hole] "These galaxies lose material into their central black holes at the same time that they make their stars," said Barger. "So whatever mechanism governs star formation in galaxies also governs black hole growth." Astronomers have made an accurate census of both the biggest, active black holes in the distance, and the relatively smaller, calmer ones closer by. Now, for the first
Giant black hole rips star apart
Astronomers believe that a doomed star came too close to a giant black hole after a close encounter with another star threw it off course. As it neared the enormous gravity of the black hole, the star was stretched by tidal forces until it was torn apart. This discovery provides crucial information on how these black holes grow and affect the surrounding stars and gas. "Stars can survive being stretched a small amount, as they are in binary star systems, but this star was stretched beyond its breaking point," said Dr Stefanie Komossa of the Max Planck Institute for Extraterrestrial Physics (MPE) in Germany, who led the international team of researchers. "This unlucky star just wandered into the wrong neighbourhood." While other observations have hinted that stars are destroyed by black holes (events known as 'stellar tidal disruptions'), these new results are the first strong evidence. Observations with XMM-Newton and Chandra, combined with earlier images from the German Roentgensatellite (ROSAT), detected a powerful X-ray outburst from the centre of the galaxy RXJ1242-11. This outburst, one of the most extreme ever detected in a galaxy, was caused by gas from the destroyed star that was heated to millions of degrees before being swallowed by the black hole. The energy liberated in this process is equivalent to that of a supernova. "Now, with all of the data in hand, we have the smoking gun proof that this spectacular event has occurred," said co-author Prof. Guenther Hasinger, also of MPE. The black hole in the centre of RX J1242-11 is estimated to have a mass about 100 million times that of the Sun. By contrast, the destroyed star probably had a mass about equal to that of the Sun, making it a lopsided battle of gravity. "This is the ultimate 'David versus Goliath' battle, but here David loses," said Hasinger. The astronomers estimated that about one hundredth of the mass of the star was ultimately consumed, or accreted, by the black hole. This small
Exploring Jets from a Supermassive Black Hole
collaborators' observations span the enormous radial distance of a thousand to a billion times the radius of the black hole, or about 54 light-days to more than a million light-years. Scale for Change: The width of the jet as a function of radial distance from the black hole, for NGC 4261 (red) compared to the few other jets from nearby supermassive black holes that we've measured. NGC 4261's jets transition from parabolic to conical at around 10,000 times the radius of the black hole (R_S). [Nakahara et al. 2018] The authors' observations of NGC 4261's jets indicate that a transition occurs at 10,000 times the radius of the black hole (that's a little over a light-year from the black hole). At this point, the jets' structures change from parabolic (becoming more tightly beamed) to conical (expanding freely). Around the same location, Nakahara and collaborators also see the radiation profile of one of the jets change, suggesting the physical conditions in the jets transition here as well. This is the first time we've been able to examine jet width this closely for both of the jets emitted from a supermassive black hole. The fact that the structure changes at the same distance for both jets indicates that the shape of these powerful streams is likely governed by global properties of the environment surrounding the galaxy's nucleus, or properties of the jets themselves, rather than by a local condition. The authors next hope to pin down velocities inside NGC 4261's jets to determine where the jets accelerate and decelerate. This nearby powerhouse is clearly going to be a useful laboratory in the future, helping to unveil the secrets of more distant, feeding monsters. Bonus: Curious what these hungry supermassive black holes look like? Check out this artist's imagining of NGC 4261, which shows how it feeds from a large, swirling accretion disk and emits fast-moving, collimated jets. [Original video credit to Dana Berry, Space Telescope Science Institute] Citation: Satomi Nakahara et al 2018 ApJ 854 148
Quantum criticality and black holes
Sachdev, Subir; Mueller, Markus
Many condensed matter experiments explore the finite temperature dynamics of systems near quantum critical points. Often, there are no well-defined quasiparticle excitations, and so quantum kinetic equations do not describe the transport properties completely. The theory shows that the transport coefficients are not proportional to a mean free scattering time (as is the case in the Boltzmann theory of quasiparticles), but are completely determined by the absolute temperature and by equilibrium thermodynamic observables. Recently, explicit solutions of this quantum critical dynamics have become possible via the anti-de Sitter/conformal field theory duality discovered in string theory. This shows that the quantum critical theory provides a holographic description of the quantum theory of black holes in a negatively curved anti-de Sitter space, and relates its transport coefficients to properties of the Hawking radiation from the black hole. We review how insights from this connection have led to new results for experimental systems: (i) the vicinity of the superfluid-insulator transition in the presence of an applied magnetic field, and its possible application to measurements of the Nernst effect in the cuprates, (ii) the magnetohydrodynamics of the plasma of Dirac electrons in graphene and the prediction of a hydrodynamic cyclotron resonance.
Gravitating discs around black holes
Karas, V; Hure, J-M; Semerak, O
Fluid discs and tori around black holes are discussed within different approaches and with the emphasis on the role of disc gravity. First reviewed are the prospects of investigating the gravitational field of a black hole-disc system using analytical solutions of stationary, axially symmetric Einstein equations. Then, more detailed considerations are focused to the middle and outer parts of extended disc-like configurations where relativistic effects are small and the Newtonian description is adequate. Within general relativity, only a static case has been analysed in detail. Results are often very inspiring. However, simplifying assumptions must be imposed: ad hoc profiles of the disc density are commonly assumed and the effects of frame-dragging are completely lacking. Astrophysical discs (e.g. accretion discs in active galactic nuclei) typically extend far beyond the relativistic domain and are fairly diluted. However, self-gravity is still essential for their structure and evolution, as well as for their radiation emission and the impact on the surrounding environment. For example, a nuclear star cluster in a galactic centre may bear various imprints of mutual star-disc interactions, which can be recognized in observational properties, such as the relation between the central mass and stellar velocity dispersion. (topical review)
Superluminality, black holes and EFT
Goon, Garrett [Department of Applied Mathematics and Theoretical Physics, Cambridge University, Cambridge, CB3 0WA (United Kingdom)]; Hinterbichler, Kurt [CERCA, Department of Physics, Case Western Reserve University, 10900 Euclid Ave, Cleveland, OH 44106 (United States)]
Under the assumption that a UV theory does not display superluminal behavior, we ask what constraints on superluminality are satisfied in the effective field theory (EFT). We study two examples of effective theories: quantum electrodynamics (QED) coupled to gravity after the electron is integrated out, and the flat-space galileon. The first is realized in nature, the second is more speculative, but they both exhibit apparent superluminality around non-trivial backgrounds. In the QED case, we attempt, and fail, to find backgrounds for which the superluminal signal advance can be made larger than the putative resolving power of the EFT. In contrast, in the galileon case it is easy to find such backgrounds, indicating that if the UV completion of the galileon is (sub)luminal, quantum corrections must become important at distance scales of order the Vainshtein radius of the background configuration, much larger than the naive EFT strong coupling distance scale. Such corrections would be reminiscent of the non-perturbative Schwarzschild scale quantum effects that are expected to resolve the black hole information problem. Finally, a byproduct of our analysis is a calculation of how perturbative quantum effects alter charged Reissner-Nordstrom black holes.
Kerr black holes with scalar hair.
Herdeiro, Carlos A R; Radu, Eugen
We present a family of solutions of Einstein's gravity minimally coupled to a complex, massive scalar field, describing asymptotically flat, spinning black holes with scalar hair and a regular horizon. These hairy black holes (HBHs) are supported by rotation and have no static limit. Besides mass M and angular momentum J, they carry a conserved, continuous Noether charge Q measuring the scalar hair. HBHs branch off from the Kerr metric at the threshold of the superradiant instability and reduce to spinning boson stars in the limit of vanishing horizon area. They overlap with Kerr black holes for a set of (M, J) values. A single Killing vector field preserves the solutions, tangent to the null geodesic generators of the event horizon. HBHs can exhibit sharp physical differences when compared to the Kerr solution, such as J/M^{2}>1, a quadrupole moment larger than J^{2}/M, and a larger orbital angular velocity at the innermost stable circular orbit. Families of HBHs connected to the Kerr geometry should exist in scalar (and other) models with more general self-interactions.
Noether charge, black hole volume, and complexity
Couch, Josiah; Fischler, Willy; Nguyen, Phuc H. [Theory Group, Department of Physics and Texas Cosmology Center, University of Texas at Austin, 2515 Speedway, C1600, Austin, TX 78712-1192 (United States)]
In this paper, we study the physical significance of the thermodynamic volumes of AdS black holes using the Noether charge formalism of Iyer and Wald. After applying this formalism to study the extended thermodynamics of a few examples, we discuss how the extended thermodynamics interacts with the recent complexity = action proposal of Brown et al. (CA-duality). In particular, we discover that their proposal for the late-time rate of change of complexity has a nice decomposition in terms of thermodynamic quantities reminiscent of the Smarr relation. This decomposition strongly suggests a geometric, and via CA-duality holographic, interpretation for the thermodynamic volume of an AdS black hole. We go on to discuss the role of thermodynamics in complexity = action for a number of black hole solutions, and then point out the possibility of an alternate proposal, which we dub "complexity = volume 2.0". In this alternate proposal the complexity would be thought of as the spacetime volume of the Wheeler-DeWitt patch. Finally, we provide evidence that, in certain cases, our proposal for complexity is consistent with the Lloyd bound whereas CA-duality is not.
Super-horizon primordial black holes
Harada, Tomohiro; Carr, B.J.
We discuss a new class of solutions to the Einstein equations which describe a primordial black hole (PBH) in a flat Friedmann background. Such solutions arise if a Schwarzschild black hole is patched onto a Friedmann background via a transition region. They are possible provided that the black hole event horizon is larger than the cosmological apparent horizon. Such solutions have a number of strange features. In particular, one has to define the black hole and cosmological horizons carefully, and one then finds that the mass contained within the black hole event horizon decreases when the black hole is larger than the Friedmann cosmological apparent horizon, although its area always increases. These solutions involve two distinct future null infinities and are interpreted as the conversion of a white hole into a black hole. Although such solutions may not form from gravitational collapse in the same way as standard PBHs, there is nothing unphysical about them, since all energy and causality conditions are satisfied. Their conformal diagram is a natural amalgamation of the Kruskal diagram for the extended Schwarzschild solution and the conformal diagram for a black hole in a flat Friedmann background. In this paper, such solutions are obtained numerically for a spherically symmetric universe containing a massless scalar field, but it is likely that they exist for more general matter fields and less symmetric systems.
Charged black holes with scalar hair
Fan, Zhong-Ying; Lü, H. [Center for Advanced Quantum Studies, Department of Physics, Beijing Normal University, Beijing 100875 (China)]
We consider a class of Einstein-Maxwell-Dilaton theories, in which the dilaton coupling to the Maxwell field is not the usual single exponential function, but one with a stationary point. The theories admit two charged black holes: one is the Reissner-Nordstrøm (RN) black hole and the other has a varying dilaton. For a given charge, the new black hole in the extremal limit has the same AdS_2 × sphere near-horizon geometry as the RN black hole, but it carries larger mass. We then introduce some scalar potentials and obtain exact charged AdS black holes. We also generalize the results to black p-branes with scalar hair.
Spacetime and orbits of bumpy black holes
Vigeland, Sarah J.; Hughes, Scott A.
Our Universe contains a great number of extremely compact and massive objects which are generally accepted to be black holes. Precise observations of orbital motion near candidate black holes have the potential to determine if they have the spacetime structure that general relativity demands. As a means of formulating measurements to test the black hole nature of these objects, Collins and Hughes introduced ''bumpy black holes'': objects that are almost, but not quite, general relativity's black holes. The spacetimes of these objects have multipoles that deviate slightly from the black hole solution, reducing to black holes when the deviation is zero. In this paper, we extend this work in two ways. First, we show how to introduce bumps which are smoother and lead to better behaved orbits than those in the original presentation. Second, we show how to make bumpy Kerr black holes--objects which reduce to the Kerr solution when the deviation goes to zero. This greatly extends the astrophysical applicability of bumpy black holes. Using Hamilton-Jacobi techniques, we show how a spacetime's bumps are imprinted on orbital frequencies, and thus can be determined by measurements which coherently track the orbital phase of a small orbiting body. We find that in the weak field, orbits of bumpy black holes are modified exactly as expected from a Newtonian analysis of a body with a prescribed multipolar structure, reproducing well-known results from the celestial mechanics literature. The impact of bumps on strong-field orbits is many times greater than would be predicted from a Newtonian analysis, suggesting that this framework will allow observations to set robust limits on the extent to which a spacetime's multipoles deviate from the black hole expectation.
Hawking radiation of a high-dimensional rotating black hole
Zhao, Ren; Zhang, Lichun; Li, Huaifan; Wu, Yueqin [Shanxi Datong University, Institute of Theoretical Physics, Department of Physics, Datong (China)
We extend the classical Damour-Ruffini method and discuss the Hawking radiation spectrum of a high-dimensional rotating black hole, using a tortoise coordinate transformation defined by taking the reaction of the radiation on the spacetime into consideration. Under the condition that energy and angular momentum are conserved, and taking the self-gravitation action into account, we derive Hawking radiation spectra which satisfy the unitary principle of quantum mechanics. It is shown that the process by which the black hole radiates particles with energy ω is a continuous tunneling process. We provide a theoretical basis for further study of the physical mechanism of black-hole radiation. (orig.)
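For orientation, the sketch below contrasts a purely thermal emission factor with a self-gravitation-corrected tunneling rate in the much simpler four-dimensional Schwarzschild case (the familiar Parikh-Wilczek form, in units G = c = ħ = k_B = 1); it is an illustrative toy, not the higher-dimensional rotating calculation of the paper.

```python
import numpy as np

# Toy comparison for a 4D Schwarzschild black hole of mass M (G = c = hbar = k_B = 1):
#   thermal factor   exp(-omega / T_H), with T_H = 1 / (8 pi M)
#   tunneling rate   exp(-8 pi omega (M - omega/2)), i.e. exp(Delta S_BH),
# which reduces to the thermal factor when the emitted energy omega << M.
M = 10.0
T_H = 1.0 / (8.0 * np.pi * M)

for omega in (0.01, 0.05, 0.1, 0.3, 0.5):
    thermal = np.exp(-omega / T_H)
    corrected = np.exp(-8.0 * np.pi * omega * (M - omega / 2.0))
    print(f"omega = {omega:4.2f}   thermal = {thermal:10.3e}   corrected = {corrected:10.3e}")
```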
Renormalized thermodynamic entropy of black holes in higher dimensions
Kim, S.P.; Kim, S.K.; Soh, K.; Yee, J.H.
We study the ultraviolet divergent structures of the matter (scalar) field in a higher D-dimensional Reissner-Nordstroem black hole and compute the matter field contribution to the Bekenstein-Hawking entropy by using the Pauli-Villars regularization method. We find that the matter field contribution to the black hole entropy does not, in general, yield the correct renormalization of the gravitational coupling constants. In particular, we show that the matter field contribution in odd dimensions does not give the term proportional to the area of the black hole event horizon. copyright 1997 The American Physical Society
Classical black holes: the nonlinear dynamics of curved spacetime.
Thorne, Kip S
Numerical simulations have revealed two types of physical structures, made from curved spacetime, that are attached to black holes: tendexes, which stretch or squeeze anything they encounter, and vortexes, which twist adjacent inertial frames relative to each other. When black holes collide, their tendexes and vortexes interact and oscillate (a form of nonlinear dynamics of curved spacetime). These oscillations generate gravitational waves, which can give kicks up to 4000 kilometers per second to the merged black hole. The gravitational waves encode details of the spacetime dynamics and will soon be observed and studied by the Laser Interferometer Gravitational Wave Observatory and its international partners.
Relativistic hydrodynamic evolutions with black hole excision
Duez, Matthew D.; Shapiro, Stuart L.; Yo, H.-J.
We present a numerical code designed to study astrophysical phenomena involving dynamical spacetimes containing black holes in the presence of relativistic hydrodynamic matter. We present evolutions of the collapse of a fluid star from the onset of collapse to the settling of the resulting black hole to a final stationary state. In order to evolve stably after the black hole forms, we excise a region inside the hole before a singularity is encountered. This excision region is introduced after the appearance of an apparent horizon, but while a significant amount of matter remains outside the hole. We test our code by evolving accurately a vacuum Schwarzschild black hole, a relativistic Bondi accretion flow onto a black hole, Oppenheimer-Snyder dust collapse, and the collapse of nonrotating and rotating stars. These systems are tracked reliably for hundreds of M following excision, where M is the mass of the black hole. We perform these tests both in axisymmetry and in full 3+1 dimensions. We then apply our code to study the effect of the stellar spin parameter J/M^2 on the final outcome of gravitational collapse of rapidly rotating n=1 polytropes. We find that a black hole forms only if J/M^2 < 1; when J/M^2 > 1, the collapsing star forms a torus which fragments into nonaxisymmetric clumps, capable of generating appreciable 'splash' gravitational radiation.
BHDD: Primordial black hole binaries code
Kavanagh, Bradley J.; Gaggero, Daniele; Bertone, Gianfranco
BHDD (BlackHolesDarkDress) simulates primordial black hole (PBH) binaries that are clothed in dark matter (DM) halos. The software uses N-body simulations and analytical estimates to follow the evolution of PBH binaries formed in the early Universe.
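The core of such a calculation is orbit integration; the minimal two-body sketch below (Newtonian, leapfrog, no dark-matter dress) only illustrates the kind of evolution being described and is in no way the BHDD implementation or its API.

```python
import numpy as np

def accelerations(x1, x2, m1, m2, G=1.0):
    """Newtonian two-body accelerations on bodies 1 and 2."""
    r = x2 - x1
    inv_d3 = 1.0 / np.linalg.norm(r) ** 3
    return G * m2 * r * inv_d3, -G * m1 * r * inv_d3

def evolve(x1, v1, x2, v2, m1, m2, dt, n_steps):
    """Kick-drift-kick (leapfrog) integration of a black-hole pair."""
    a1, a2 = accelerations(x1, x2, m1, m2)
    for _ in range(n_steps):
        v1 += 0.5 * dt * a1; v2 += 0.5 * dt * a2      # half kick
        x1 += dt * v1;       x2 += dt * v2            # drift
        a1, a2 = accelerations(x1, x2, m1, m2)
        v1 += 0.5 * dt * a1; v2 += 0.5 * dt * a2      # half kick
    return x1, v1, x2, v2

# Equal-mass binary on a bound, mildly eccentric orbit (G = 1 units).
x1, x2 = np.array([-0.5, 0.0]), np.array([0.5, 0.0])
v1, v2 = np.array([0.0, -0.4]), np.array([0.0, 0.4])
x1, v1, x2, v2 = evolve(x1, v1, x2, v2, 1.0, 1.0, dt=1e-3, n_steps=20000)
print("separation after evolution:", np.linalg.norm(x2 - x1))
```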
Black Hole Interior in Quantum Gravity.
Nomura, Yasunori; Sanches, Fabio; Weinberg, Sean J
We discuss the interior of a black hole in quantum gravity, in which black holes form and evaporate unitarily. The interior spacetime appears in the sense of complementarity because of special features revealed by the microscopic degrees of freedom when viewed from a semiclassical standpoint. The relation between quantum mechanics and the equivalence principle is subtle, but they are still consistent.
The quantum structure of black holes
We give an elementary review of black holes in string theory. We discuss black hole entropy from string microstates and Hawking radiation from these states. We then review the structure of two-charge microstates and explore how 'fractionation' can lead to quantum effects over macroscopic length scales of the order of the horizon radius. (topical review)
ATLAS: Black hole production and decay
This track is an example of simulated data modelled for the ATLAS detector on the Large Hadron Collider (LHC) at CERN, which will begin taking data in 2008. These tracks would be produced if a miniature black hole was produced in the proton-proton collision. Such a small black hole would decay instantly to various particles via a process known as Hawking radiation.
Gravitational lensing by a regular black hole
Eiroa, Ernesto F; Sendra, Carlos M, E-mail: [email protected], E-mail: [email protected] [Instituto de Astronomia y Fisica del Espacio, CC 67, Suc. 28, 1428, Buenos Aires (Argentina)]
In this paper, we study a regular Bardeen black hole as a gravitational lens. We find the strong deflection limit for the deflection angle, from which we obtain the positions and magnifications of the relativistic images. As an example, we apply the results to the particular case of the supermassive black hole at the center of our galaxy.
Partition functions for supersymmetric black holes
Manschot, J.
This thesis presents a number of results on partition functions for four-dimensional supersymmetric black holes. These partition functions are important tools to explain the entropy of black holes from a microscopic point of view. Such a microscopic explanation was desired after the association of a
Mass inflation in the loop black hole
Brown, Eric G.; Mann, Robert; Modesto, Leonardo
In classical general relativity the Cauchy horizon within a two-horizon black hole is unstable via a phenomenon known as mass inflation, in which the mass parameter (and the spacetime curvature) of the black hole diverges at the Cauchy horizon. Here we study this effect for loop black holes - quantum gravitationally corrected black holes from loop quantum gravity - whose construction alleviates the r=0 singularity present in their classical counterparts. We use a simplified model of mass inflation, which makes use of the generalized Dray-'t Hooft relation, to conclude that the Cauchy horizon of loop black holes indeed results in a curvature singularity similar to that found in classical black holes. The Dray-'t Hooft relation is of particular utility in the loop black hole because it does not directly rely upon Einstein's field equations. We elucidate some of the interesting and counterintuitive properties of the loop black hole, and corroborate our results using an alternate model of mass inflation due to Ori.
Quantum aspects of black hole entropy
Four dimensional supersymmetric extremal black holes in string-based ... elements in the construction of black holes are our concepts of space and time. They are, thus, almost by definition, the most perfect macroscopic objects there are in ... Appealing to the Cardy formula for the asymptotic degeneracy of these states, one.
Primordial braneworld black holes: significant enhancement of ...
The Randall-Sundrum (RS-II) braneworld cosmological model with a fraction of the total energy density in primordial black holes is considered. Due to their 5d geometry, these black holes undergo modified Hawking evaporation. It is shown that during the high-energy regime, accretion from the surrounding ...
Black Hole Dynamic Potentials
Kabe, Koustubh Ajit
In the following paper, certain black hole dynamic potentials have been developed definitively on the lines of classical thermodynamics. These potentials have been refined in view of the small differences in the equations of the laws of black hole dynamics as given by Bekenstein and those of thermodynamics.
Black holes and the weak cosmic censorship
A theory of black holes is developed under the assumption of the weak cosmic censorship. It includes Hawking's theory of black holes in the future asymptotically predictable space-times as a special case but it also applies to the cosmological situations including models with nonzero cosmological constant of both signs. (author)
Black holes and the strong cosmic censorship
The theory of black holes developed by Hawking in asymptotically flat space-times is generalized so that black holes in the cosmological situations are included. It is assumed that the strong version of the Penrose cosmic censorship hypothesis holds. (author)
Black Hole Entanglement and Quantum Error Correction
Verlinde, E.; Verlinde, H.
It was recently argued in [1] that black hole complementarity strains the basic rules of quantum information theory, such as monogamy of entanglement. Motivated by this argument, we develop a practical framework for describing black hole evaporation via unitary time evolution, based on a holographic
Black hole complementarity: The inside view
David A. Lowe
Within the framework of black hole complementarity, a proposal is made for an approximate interior effective field theory description. For generic correlators of local operators on generic black hole states, it agrees with the exact exterior description in a region of overlapping validity, up to corrections that are too small to be measured by typical infalling observers.
Holographic Lovelock gravities and black holes
de Boer, J.; Kulaxizi, M.; Parnachev, A.
We study holographic implications of Lovelock gravities in AdS spacetimes. For a generic Lovelock gravity in arbitrary spacetime dimensions we formulate the existence condition of asymptotically AdS black holes. We consider small fluctuations around these black holes and determine the constraint on
FEASTING BLACK HOLE BLOWS BUBBLES
A monstrous black hole's rude table manners include blowing huge bubbles of hot gas into space. At least, that's the gustatory practice followed by the supermassive black hole residing in the hub of the nearby galaxy NGC 4438. Known as a peculiar galaxy because of its unusual shape, NGC 4438 is in the Virgo Cluster, 50 million light-years from Earth. These NASA Hubble Space Telescope images of the galaxy's central region clearly show one of the bubbles rising from a dark band of dust. The other bubble, emanating from below the dust band, is barely visible, appearing as dim red blobs in the close-up picture of the galaxy's hub (the colorful picture at right). The background image represents a wider view of the galaxy, with the central region defined by the white box. These extremely hot bubbles are caused by the black hole's voracious eating habits. The eating machine is engorging itself with a banquet of material swirling around it in an accretion disk (the white region below the bright bubble). Some of this material is spewed from the disk in opposite directions. Acting like high-powered garden hoses, these twin jets of matter sweep out material in their paths. The jets eventually slam into a wall of dense, slow-moving gas, which is traveling at less than 223,000 mph (360,000 kph). The collision produces the glowing material. The bubbles will continue to expand and will eventually dissipate. Compared with the life of the galaxy, this bubble-blowing phase is a short-lived event. The bubble is much brighter on one side of the galaxy's center because the jet smashed into a denser amount of gas. The brighter bubble is 800 light-years tall and 800 light-years across. The observations are being presented June 5 at the American Astronomical Society meeting in Rochester, N.Y. Both pictures were taken March 24, 1999 with the Wide Field and Planetary Camera 2. False colors were used to enhance the details of the bubbles. The red regions in the picture denote the hot gas
Reconstructing the massive black hole cosmic history through gravitational waves
Sesana, Alberto; Gair, Jonathan; Berti, Emanuele; Volonteri, Marta
The massive black holes we observe in galaxies today are the natural end-product of a complex evolutionary path, in which black holes seeded in proto-galaxies at high redshift grow through cosmic history via a sequence of mergers and accretion episodes. Electromagnetic observations probe a small subset of the population of massive black holes (namely, those that are active or those that are very close to us), but planned space-based gravitational wave observatories such as the Laser Interferometer Space Antenna (LISA) can measure the parameters of 'electromagnetically invisible' massive black holes out to high redshift. In this paper we introduce a Bayesian framework to analyze the information that can be gathered from a set of such measurements. Our goal is to connect a set of massive black hole binary merger observations to the underlying model of massive black hole formation. In other words, given a set of observed massive black hole coalescences, we assess what information can be extracted about the underlying massive black hole population model. For concreteness we consider ten specific models of massive black hole formation, chosen to probe four important (and largely unconstrained) aspects of the input physics used in structure formation simulations: seed formation, metallicity ''feedback'', accretion efficiency and accretion geometry. For the first time we allow for the possibility of 'model mixing', by drawing the observed population from some combination of the 'pure' models that have been simulated. A Bayesian analysis allows us to recover a posterior probability distribution for the ''mixing parameters'' that characterize the fractions of each model represented in the observed distribution. Our work shows that LISA has enormous potential to probe the underlying physics of structure formation.
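As a cartoon of the 'model mixing' idea (not the authors' LISA analysis), the sketch below assumes two hypothetical pure formation models that predict different binned event distributions, draws a simulated catalogue from a 70/30 mixture, and recovers a grid posterior on the mixing fraction under a flat prior; all numbers are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical binned predictions of two "pure" formation models (probability per bin).
model_A = np.array([0.50, 0.30, 0.15, 0.05])   # e.g. a light-seed scenario
model_B = np.array([0.10, 0.20, 0.30, 0.40])   # e.g. a heavy-seed scenario

# Simulated catalogue of observed mergers drawn from a 70/30 mixture of the two models.
true_f, n_events = 0.7, 200
counts = rng.multinomial(n_events, true_f * model_A + (1.0 - true_f) * model_B)

# Grid posterior on the mixing fraction f (flat prior, multinomial likelihood).
f_grid = np.linspace(0.0, 1.0, 501)
log_post = np.array([np.sum(counts * np.log(f * model_A + (1.0 - f) * model_B + 1e-300))
                     for f in f_grid])
post = np.exp(log_post - log_post.max())
post /= post.sum()

f_mean = float(np.sum(f_grid * post))
print(f"posterior mean mixing fraction: {f_mean:.2f} (true value {true_f})")
```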
Theoretical Frontiers in Black Holes and Cosmology School
Orazi, Emanuele
These lecture notes are dedicated to the most recent theoretical applications of Black Hole solutions in high-energy physics. The main motivation of this volume is to present the latest black hole backgrounds that are relevant for gauge/gravity correspondence. Leading scientists in the field explain effective techniques for finding singular and cosmological solutions embedded in gauged supergravity, shedding light on underlying properties and symmetries. Starting from a basic level, the mathematical structures underlying black holes and cosmologies are revealed, helping the reader grasp the connection between theoretical approaches and physical observations with insights into possible future developments from both a theoretical and experimental point of view. The topics covered in this volume are based on lectures delivered during the "Theoretical Frontiers in Black Holes and Cosmology" school, held in Natal in June 2015.
Bumpy black holes from spontaneous Lorentz violation
Dubovsky, Sergei; Tinyakov, Peter; Zaldarriaga, Matias
We consider black holes in Lorentz violating theories of massive gravity. We argue that in these theories black hole solutions are no longer universal and exhibit a large number of hairs. If they exist, these hairs probe the singularity inside the black hole providing a window into quantum gravity. The existence of these hairs can be tested by future gravitational wave observatories. We generically expect that the effects we discuss will be larger for the more massive black holes. In the simplest models the strength of the hairs is controlled by the same parameter that sets the mass of the graviton (tensor modes). Then the upper limit on this mass coming from the inferred gravitational radiation emitted by binary pulsars implies that hairs are likely to be suppressed for almost the entire mass range of the super-massive black holes in the centers of galaxies
Magnetized black holes and nonlinear electrodynamics
Kruglov, S. I.
A new model of nonlinear electrodynamics with two parameters is proposed. We study the phenomenon of vacuum birefringence, the causality and unitarity in this model. There is no singularity of the electric field in the center of pointlike charges and the total electrostatic energy is finite. We obtain corrections to the Coulomb law at r → ∞. The weak, dominant and strong energy conditions are investigated. A magnetized charged black hole is considered and we evaluate the mass, the metric function and their asymptotics at r → ∞ and r → 0. The magnetic mass of the black hole is calculated. The thermodynamic properties and thermal stability of regular black holes are discussed. We calculate the Hawking temperature of black holes and show that there are first-order and second-order phase transitions. The parameters of the model for which the black hole is stable are found.
Black hole accretion: the quasar powerhouse
A program is described which calculates the effects of material falling into the curved space-time surrounding a rotating black hole. The authors have developed a two-dimensional, general-relativistic hydrodynamics code to simulate fluid flow in the gravitational field of a rotating black hole. Such calculations represent models that have been proposed for the energy sources of both quasars and jets from radio galaxies. In each case, the black hole that powers the quasar or jet would have a mass of about 100 million times the mass of the sun. The black hole would be located in the center of a galaxy whose total mass is 1000 times greater than the black hole mass. (SC)
Mass formula for quasi-black holes
Lemos, Jose P. S.; Zaslavskii, Oleg B.
A quasi-black hole, either nonextremal or extremal, can be broadly defined as the limiting configuration of a body when its boundary approaches the body's quasihorizon. We consider the mass contributions and the mass formula for a static quasi-black hole. The analysis involves careful scrutiny of the surface stresses when the limiting configuration is reached. It is shown that there exists a strict correspondence between the mass formulas for quasi-black holes and pure black holes. This perfect parallelism exists in spite of the difference in derivation and meaning of the formulas in both cases. For extremal quasi-black holes the finite surface stresses give zero contribution to the total mass. This leads to a very special version of Abraham-Lorentz electron in general relativity in which the total mass has pure electromagnetic origin in spite of the presence of bare stresses.
Kerr black holes are not fragile
McInnes, Brett, E-mail: [email protected] [Centro de Estudios Cientificos (CECs), Valdivia (Chile); National University of Singapore (Singapore)]
Certain AdS black holes are 'fragile', in the sense that, if they are deformed excessively, they become unstable to a fundamental non-perturbative stringy effect analogous to Schwinger pair-production [of branes]. Near-extremal topologically spherical AdS-Kerr black holes, which are natural candidates for string-theoretic models of the very rapidly rotating black holes that have actually been observed to exist, do represent a very drastic deformation of the AdS-Schwarzschild geometry. One therefore has strong reason to fear that these objects might be 'fragile', which in turn could mean that asymptotically flat rapidly rotating black holes might be fragile in string theory. Here we show that this does not happen: despite the severe deformation implied by near-extremal angular momenta, brane pair-production around topologically spherical AdS-Kerr-Newman black holes is always suppressed.
Black hole thermodynamics based on unitary evolutions
Feng, Yu-Lei; Chen, Yi-Xin
In this paper, we try to construct black hole thermodynamics based on the fact that the formation and evaporation of a black hole can be described by quantum unitary evolutions. First, we show that the Bekenstein–Hawking entropy S_BH may not be a Boltzmann or thermal entropy. To confirm this statement, we show that the original black hole's 'first law' may not simply be treated as the first law of thermodynamics formally, due to some missing metric perturbations caused by matter. Then, by including those (quantum) metric perturbations, we show that the black hole formation and evaporation can be described effectively in a unitary manner, through a quantum channel between the exterior and interior of the event horizon. In this way, the paradoxes of information loss and the firewall can be resolved effectively. Finally, we show that black hole thermodynamics can be constructed in an ordinary way, by constructing statistical mechanics. (paper)
Shmakova, Marina
We found double-extreme black holes associated with the special geometry of the Calabi-Yau moduli space with the prepotential F = STU. The area formula is STU-moduli independent and has [SL(2, Z)]^3 symmetry in the space of charges. The dual version of this theory without a prepotential treats the dilaton S asymmetrically versus the T,U-moduli. We display the dual relation between new (STU) black holes and stringy (S|TU) black holes using a particular Sp(8,Z) transformation. The area formula of one theory equals the area formula of the dual theory when expressed in terms of dual charges. We analyze the relation of (STU) black holes to the string triality of black holes: (S|TU), (T|US), (U|ST) solutions. In the democratic STU-symmetric version we find that all three S, T and U duality symmetries are non-perturbative and mix electric and magnetic charges.
Magnetic charge, black holes, and cosmic censorship
Hiscock, W.H.
The possibility of converting a Reissner-Nordstroem black hole into a naked singularity by means of test particle accretion is considered. The dually charged Reissner-Nordstroem metric describes a black hole only when M^2 > Q^2 + P^2. The test particle equations of motion are shown to allow test particles with arbitrarily large magnetic charge/mass ratios to fall radially into electrically charged black holes. To determine the nature of the final state (black hole or naked singularity) an exact solution of Einstein's equations representing a spherical shell of magnetically charged dust falling into an electrically charged black hole is studied. Naked singularities are never formed so long as the weak energy condition is obeyed by the infalling matter. The differences between the spherical shell model and an infalling point test particle are examined and discussed.
Thin accretion disk around regular black hole
QIU Tianqi
Penrose's cosmic censorship conjecture says that naked singularities do not exist in nature. So it seems reasonable to further conjecture that not even a singularity exists in nature. In this paper, a regular black hole without a singularity is studied in detail, especially its thin accretion disk, energy flux, radiation temperature and accretion efficiency. It is found that the interaction of the regular black hole is stronger than that of the Schwarzschild black hole. Furthermore, the thin accretion disk loses energy more efficiently as the mass of the black hole decreases. These particular properties may be used to distinguish such regular black holes from other black holes.
Dual jets from binary black holes.
Palenzuela, Carlos; Lehner, Luis; Liebling, Steven L
The coalescence of supermassive black holes--a natural outcome when galaxies merge--should produce gravitational waves and would likely be associated with energetic electromagnetic events. We have studied the coalescence of such binary black holes within an external magnetic field produced by the expected circumbinary disk surrounding them. Solving the Einstein equations to describe black holes interacting with surrounding plasma, we present numerical evidence for possible jets driven by these systems. Extending the process described by Blandford and Znajek for a single, spinning black hole, the picture that emerges suggests that the electromagnetic field extracts energy from the orbiting black holes, which ultimately merge and settle into the standard Blandford-Znajek scenario. Emissions along these jets could potentially be observable at large distances.
Magnetohydrodynamic Simulations of Black Hole Accretion
Avara, Mark J.
Black holes embody one of the few, simple, solutions to the Einstein field equations that describe our modern understanding of gravitation. In isolation they are small, dark, and elusive. However, when a gas cloud or star wanders too close, they light up our universe in a way no other cosmic object can. The processes of magnetohydrodynamics which describe the accretion inflow and outflows of plasma around black holes are highly coupled and nonlinear and so require numerical experiments for elucidation. These processes are at the heart of astrophysics since black holes, once they somehow reach super-massive status, influence the evolution of the largest structures in the universe. It has been my goal, with the body of work comprising this thesis, to explore the ways in which the influence of black holes on their surroundings differs from the predictions of standard accretion models. I have especially focused on how magnetization of the greater black hole environment can impact accretion systems.
Black Holes and Gravitational Properties of Antimatter
Hajdukovic, D
We speculate about the impact of antigravity (i.e. gravitational repulsion between matter and antimatter) on the creation and emission of particles by a black hole. If antigravity is present, a black hole made of matter may radiate particles as a black body, but this should not be true for antiparticles. This may lead to a radical change of the radiation process predicted by Hawking and should be taken into account in preparation for the attempt to create and study mini black holes at CERN. Gravity, including antigravity, is more similar than ever to electrodynamics, and such similarity with a successfully quantized interaction may help in the quantization of gravity.
Exponential fading to white of black holes in quantum gravity
Barceló, Carlos; Carballo-Rubio, Raúl; Garay, Luis J
Quantization of the gravitational field may allow the existence of a decay channel of black holes into white holes with an explicit time-reversal symmetry. The definition of a meaningful decay probability for this channel is studied in spherically symmetric situations. As a first nontrivial calculation, we present the functional integration over a set of geometries using a single-variable function to interpolate between black-hole and white-hole geometries in a bounded region of spacetime. This computation gives a finite result which depends only on the Schwarzschild mass and a parameter measuring the width of the interpolating region. The associated probability distribution displays an exponential decay law in the latter parameter, with a mean lifetime inversely proportional to the Schwarzschild mass. In physical terms this would imply that matter collapsing to a black hole from a finite radius bounces back elastically and instantaneously, with negligible time delay as measured by external observers. These results invite us to reconsider the ultimate nature of astrophysical black holes, providing a possible mechanism for the formation of black stars instead of proper general relativistic black holes. The existence of both this decay channel and black stars can be tested in future observations of gravitational waves. (paper)
Rotating Hayward's regular black hole as particle accelerator
Amir, Muhammed; Ghosh, Sushant G.
Recently, Bañados, Silk and West (BSW) demonstrated that the extremal Kerr black hole can act as a particle accelerator with arbitrarily high center-of-mass energy (E_CM) when the collision takes place near the horizon. The rotating Hayward regular black hole, apart from mass (M) and angular momentum (a), has a new parameter g (g>0 is a constant) that provides a deviation from the Kerr black hole. We demonstrate that for each g, with M=1, there exist critical values a_E and r_H^E which correspond to a regular extremal black hole with degenerate horizons; a_E decreases whereas r_H^E increases with increasing g, while a < a_E describes a regular non-extremal black hole with outer and inner horizons. We apply the BSW process to the rotating Hayward regular black hole, for different g, and demonstrate numerically that E_CM diverges in the vicinity of the horizon for the extremal cases, thereby suggesting that a rotating regular black hole can also act as a particle accelerator and thus in turn provide a suitable framework for Planck-scale physics. For the non-extremal case, there always exists a finite upper bound on E_CM, which increases with the deviation parameter g.
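For reference, the frame-invariant center-of-mass energy that underlies BSW-type estimates can be written as below (assuming metric signature (-,+,+,+) and G = c = 1); the divergence arises when the inner product of the two four-velocities grows without bound for fine-tuned angular momenta near an extremal horizon.

```latex
% Center-of-mass energy of two colliding point particles with four-momenta p_1, p_2
% and rest masses m_1, m_2 (signature -,+,+,+ and G = c = 1):
\begin{aligned}
  E_{\mathrm{CM}}^{2} &= -\,g_{\mu\nu}\,(p_1 + p_2)^{\mu}(p_1 + p_2)^{\nu}
                       = m_1^{2} + m_2^{2} - 2\,g_{\mu\nu}\,p_1^{\mu}p_2^{\nu},\\
  E_{\mathrm{CM}}    &= \sqrt{2}\,m_0\,\sqrt{1 - g_{\mu\nu}\,u_1^{\mu}u_2^{\nu}}
                       \qquad\text{for } m_1 = m_2 = m_0 .
\end{aligned}
```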
Gauss-Bonnet black holes in dS spaces
Cai Ronggen; Guo Qi
We study the thermodynamic properties associated with the black hole horizon and cosmological horizon for the Gauss-Bonnet solution in de Sitter space. When the Gauss-Bonnet coefficient is positive, a locally stable small black hole appears in the case of spacetime dimension d=5, the stable small black hole disappears, and the Gauss-Bonnet black hole is always unstable quantum mechanically when d≥6. On the other hand, the cosmological horizon is found to be always locally stable independent of the spacetime dimension. But the solution is not globally preferred; instead, the pure de Sitter space is globally preferred. When the Gauss-Bonnet coefficient is negative, there is a constraint on the value of the coefficient, beyond which the gravity theory is not well defined. As a result, there is not only an upper bound on the size of black hole horizon radius at which the black hole horizon and cosmological horizon coincide with each other, but also a lower bound depending on the Gauss-Bonnet coefficient and spacetime dimension. Within the physical phase space, the black hole horizon is always thermodynamically unstable and the cosmological horizon is always stable; furthermore, as in the case of the positive coefficient, the pure de Sitter space is still globally preferred. This result is consistent with the argument that the pure de Sitter space corresponds to an UV fixed point of dual field theory
Quantum Black Holes as Holograms in AdS Braneworlds
Emparan, Roberto; Fabbri, Alessandro; Kaloper, Nemanja
We propose a new approach for using the AdS/CFT correspondence to study quantum black hole physics. The black holes on a brane in an AdS_{D+1} braneworld that solve the classical bulk equations are interpreted as duals of quantum-corrected D-dimensional black holes, rather than classical ones, of a conformal field theory coupled to gravity. We check this explicitly in D=3 and D=4. In D=3 we reinterpret the existing exact solutions on a flat membrane as states of the dual 2+1 CFT. We show that states with a sufficiently large mass really are 2+1 black holes where the quantum corrections dress the classical conical singularity with a horizon and censor it from the outside. On a negatively curved membrane, we reinterpret the classical bulk solutions as quantum-corrected BTZ black holes. In D=4 we argue that the bulk solution for the brane black hole should include a radiation component in order to describe a quantum-corrected black hole in the 3+1 dual. Hawking radiation of the conformal field is then ...
On the outside of cold black holes
Bicak, J.
Some general features of the behaviour of fields and particles around extreme (or nearly extreme) black holes are outlined, with emphasis on their simplicity. Simple solutions representing interacting electromagnetic and gravitational perturbations of an extreme Reissner-Nordstroem black hole are presented. The motion of the hole in an asymptotically uniform weak electric field is examined as an application and ''Newton's second law'' is thus explicitly verified for a geometrodynamical object. (author)
Reflection, radiation, and interference near the black hole horizon
Kuchiev, M.Yu.
The event horizon of black holes is capable of reflection: there is a finite probability for any particle that approaches the horizon to bounce back. The albedo of the horizon depends on the black hole temperature and the energy of the incoming particle. The reflection shares its physical origins with the Hawking process of radiation; both of them arise as consequences of the mixing of the incoming and outgoing waves that takes place due to quantum processes on the event horizon
Black hole constraints on the running-mass inflation model
Leach, Samuel M; Grivell, Ian J; Liddle, Andrew R
The running-mass inflation model, which has strong motivation from particle physics, predicts density perturbations whose spectral index is strongly scale-dependent. For a large part of parameter space the spectrum rises sharply to short scales. In this paper we compute the production of primordial black holes, using both analytic and numerical calculation of the density perturbation spectra. Observational constraints from black hole production are shown to exclude a large region of otherwise...
Black hole as a wormhole factory
Sung-Won Kim
There have been lots of debates about the final fate of an evaporating black hole and the singularity hidden by an event horizon in quantum gravity. However, on general grounds, one may argue that a black hole stops radiating at the Planck mass, (ħc/G)^{1/2} ∼ 10^{-5} g, where the radiated energy is comparable to the black hole's mass. And also, it has been argued that there would be a wormhole-like structure, known as "spacetime foam", due to large fluctuations below the Planck length, (ħG/c^3)^{1/2} ∼ 10^{-33} cm. In this paper, as an explicit example, we consider an exact classical solution which represents nicely those two properties in a recently proposed quantum gravity model based on different scaling dimensions between space and time coordinates. The solution, called a "Black Wormhole", consists of two different states, depending on its mass parameter M and an IR parameter ω: for the black hole state (with ωM^2 > 1/2), a non-traversable wormhole occupies the interior region of the black hole around the singularity at the origin, whereas for the wormhole state (with ωM^2 < 1/2), the interior wormhole is exposed to an outside observer as the black hole horizon disappears through evaporation. The black hole state becomes thermodynamically stable as it approaches the merging point where the interior wormhole throat and the black hole horizon merge, and the Hawking temperature vanishes at the exact merging point (with ωM^2 = 1/2). This solution suggests a "Generalized Cosmic Censorship" in which a wormhole-like structure protects the naked singularity even after the black hole evaporation. One could understand the would-be wormhole inside the black hole horizon as the result of microscopic wormholes created by "negative" energy quanta which have entered the black hole horizon in the Hawking radiation process; the quantum black hole could be a wormhole factory! It is found that this speculative picture may be consistent with the
Dynamical evolution of quasicircular binary black hole data
Alcubierre, Miguel; Bruegmann, Bernd; Diener, Peter; Guzman, F. Siddhartha; Hawke, Ian; Hawley, Scott; Herrmann, Frank; Pollney, Denis; Thornburg, Jonathan; Koppitz, Michael; Seidel, Edward
We study the fully nonlinear dynamical evolution of binary black hole data, whose orbital parameters are specified via the effective potential method for determining quasicircular orbits. The cases studied range from the Cook-Baumgarte innermost stable circular orbit (ISCO) to significantly beyond that separation. In all cases we find the black holes to coalesce (as determined by the appearance of a common apparent horizon) in less than half an orbital period. The results of the numerical simulations indicate that the initial holes are not actually in quasicircular orbits, but that they are in fact nearly plunging together. The dynamics of the final horizon are studied to determine physical parameters of the final black hole, such as its spin, mass, and oscillation frequency, revealing information about the inspiral process. We show that considerable resolution is required to extract accurate physical information from the final black hole formed in the merger process, and that the quasinormal modes of the final hole are strongly excited in the merger process. For the ISCO case, by comparing physical measurements of the final black hole formed to the initial data, we estimate that less than 3% of the total energy is radiated in the merger process
Notes on Phase Transition of Nonsingular Black Hole
Ma Meng-Sen; Zhao Ren
On the belief that a black hole is a thermodynamic system, we study the phase transition of nonsingular black holes. If the black hole entropy takes the form of the Bekenstein-Hawking area law, the black hole mass M is no longer the internal energy of the black hole thermodynamic system. Using the thermodynamic quantities, we calculate the heat capacity, thermodynamic curvature and free energy. It is shown that there will be a larger black hole/smaller black hole phase transition for the nonsingular black hole. At the critical point, the second-order phase transition appears. (paper)
Instability of charged anti-de Sitter black holes
Gwak, Bogeun; Lee, Bum-Hoon; Ro, Daeho
We have studied the instability of charged anti-de Sitter black holes in four or more dimensions under fragmentation. The unstable black holes can be broken into two black holes under fragmentation. The instability depends not only on the mass and charge of the black hole but also on the ratio between the fragmented black hole and its predecessor. We have found that near-extremal black holes are unstable, while Schwarzschild-AdS black holes are stable; this behavior is qualitatively similar in four and higher dimensions. The detailed instabilities are numerically investigated.
Unveiling the edge of time black holes, white holes, wormholes
Gribbin, John
Acclaimed science writer John Gribbin recounts dramatic stories that have led scientists to believe black holes and their more mysterious kin are not only real, but might actually provide a passage to other universes and travel through time.
Moving mirrors, black holes, and cosmic censorship
Ford, L.H.; Roman, T.A.
We examine negative-energy fluxes produced by mirrors moving in two-dimensional charged-black-hole backgrounds. If there exist no constraints on such fluxes, then one might be able to manipulate them to achieve a violation of cosmic censorship by shooting a negative-energy flux into an extreme Q=M or near-extreme Reissner-Nordstroem black hole. However, if the magnitude of the change in the mass of the hole |ΔM|, resulting from the absorption of this flux, is small compared to the normal quantum uncertainty in the mass expected from the uncertainty principle ΔEΔT≥1, then such changes should not be macroscopically observable. We argue that, given certain (physically reasonable) restrictions on the trajectory of the mirror, this indeed seems to be the case. More specifically, we show that |ΔM| and ΔT, the ''effective lifetime'' of any naked singularity thus produced, are limited by an inequality of the form |ΔM|ΔT<1. We then conclude that the negative-energy fluxes produced by two-dimensional moving mirrors do not lead to a classically observable violation of cosmic censorship
Extremal vacuum black holes in higher dimensions
Figueras, Pau; Lucietti, James; Rangamani, Mukund; Kunduri, Hari K.
We consider extremal black hole solutions to the vacuum Einstein equations in dimensions greater than five. We prove that the near-horizon geometry of any such black hole must possess an SO(2,1) symmetry in a special case where one has an enhanced rotational symmetry group. We construct examples of vacuum near-horizon geometries using the extremal Myers-Perry black holes and boosted Myers-Perry strings. The latter lead to near-horizon geometries of black ring topology, which in odd spacetime dimensions have the correct number of rotational symmetries to describe an asymptotically flat black object. We argue that a subset of these correspond to the near-horizon limit of asymptotically flat extremal black rings. Using this identification we provide a conjecture for the exact 'phase diagram' of extremal vacuum black rings with a connected horizon in odd spacetime dimensions greater than five.
Rotating black holes at future colliders. III. Determination of black hole evolution
Ida, Daisuke; Oda, Kin-ya; Park, Seong Chan
TeV scale gravity scenario predicts that the black hole production dominates over all other interactions above the scale and that the Large Hadron Collider will be a black hole factory. Such higher-dimensional black holes mainly decay into the standard model fields via the Hawking radiation whose spectrum can be computed from the greybody factor. Here we complete the series of our work by showing the greybody factors and the resultant spectra for the brane-localized spinor and vector field emissions for arbitrary frequencies. Combining these results with the previous works, we determine the complete radiation spectra and the subsequent time evolution of the black hole. We find that, for a typical event, well more than half a black hole mass is emitted when the hole is still highly rotating, confirming our previous claim that it is important to take into account the angular momentum of black holes
Thermodynamics of a class of regular black holes with a generalized uncertainty principle
Maluf, R. V.; Neves, Juliano C. S.
In this article, we present a study on thermodynamics of a class of regular black holes. Such a class includes Bardeen and Hayward regular black holes. We obtained thermodynamic quantities like the Hawking temperature, entropy, and heat capacity for the entire class. As part of an effort to indicate some physical observable to distinguish regular black holes from singular black holes, we suggest that regular black holes are colder than singular black holes. Besides, contrary to the Schwarzschild black hole, that class of regular black holes may be thermodynamically stable. From a generalized uncertainty principle, we also obtained the quantum-corrected thermodynamics for the studied class. Such quantum corrections provide a logarithmic term for the quantum-corrected entropy.
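As a quick numerical illustration of the "regular black holes are colder" statement, the sketch below compares the Hawking temperature of a Bardeen black hole, computed from the standard surface-gravity relation T = f'(r_+)/(4π) with the usual Bardeen metric function assumed, against the Schwarzschild value at the same horizon radius; it is a toy check in G = c = ħ = 1 units, not the paper's generalized-uncertainty-principle analysis.

```python
import numpy as np

def bardeen_temperature(r_plus, g):
    """Hawking temperature T = f'(r_+)/(4 pi) for the Bardeen metric
    f(r) = 1 - 2 M r^2 / (r^2 + g^2)^(3/2), with M eliminated via f(r_+) = 0:
    T = (r_+^2 - 2 g^2) / (4 pi r_+ (r_+^2 + g^2))."""
    return (r_plus**2 - 2.0 * g**2) / (4.0 * np.pi * r_plus * (r_plus**2 + g**2))

def schwarzschild_temperature(r_plus):
    """Schwarzschild limit g -> 0: T = 1 / (4 pi r_+)."""
    return 1.0 / (4.0 * np.pi * r_plus)

r_plus = 10.0
for g in (0.0, 0.5, 1.0, 2.0):
    print(f"g = {g:3.1f}   T_Bardeen = {bardeen_temperature(r_plus, g):.6f}"
          f"   T_Schwarzschild = {schwarzschild_temperature(r_plus):.6f}")
```

For any g > 0 the Bardeen temperature at fixed horizon radius comes out lower than the Schwarzschild one, consistent with the qualitative claim above.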
LIGO Finds Lightest Black-Hole Binary
Wednesday evening the Laser Interferometer Gravitational-wave Observatory (LIGO) collaboration quietly mentioned that they'd found gravitational waves from yet another black-hole binary back in June. This casual announcement reveals what is so far the lightest pair of black holes we've watched merge, opening the door for comparisons to the black holes we've detected by electromagnetic means.

A Routine Detection

[Figure: The chirp signal of GW170608 detected by LIGO Hanford and LIGO Livingston. (LIGO collaboration 2017)]

After the fanfare of the previous four black-hole-binary merger announcements over the past year and a half, as well as the announcement of the one neutron-star binary merger in August, GW170608 marks our entry into the era in which gravitational-wave detections are officially routine.

GW170608, a gravitational-wave signal from the merger of two black holes roughly a billion light-years away, was detected in June of this year. This detection occurred after we'd already found gravitational waves from several black-hole binaries with the two LIGO detectors in the U.S., but before the Virgo interferometer came online in Europe and increased the joint ability of the detectors to localize sources.

[Figure: Mass estimates for the two components of GW170608 using different models. (LIGO collaboration 2017)]

Overall, GW170608 is fairly unremarkable: it was detected by both LIGO Hanford and LIGO Livingston some 7 ms apart, and the signal looks not unlike those of the previous LIGO detections. But because we're still in the early days of gravitational-wave astronomy, every discovery is still remarkable in some way! GW170608 stands out as the lightest pair of black holes we've yet seen merge, with component masses before the merger estimated at 12 and 7 times the mass of the Sun.

Why Size Matters

With the exception of GW151226, the gravitational-wave signal discovered on Boxing Day last year, all of the black holes that have been discovered by LIGO/Virgo have been quite large: the masses
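Given the quoted component masses, a quick back-of-the-envelope chirp-mass estimate (the mass combination the waveform constrains best) looks like the sketch below; this is a rough check, not the collaboration's parameter estimation.

```python
def chirp_mass(m1, m2):
    """Chirp mass M_c = (m1 * m2)**(3/5) / (m1 + m2)**(1/5), same units as the inputs."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

# Component masses quoted above for GW170608, in solar masses.
print(f"chirp mass ~ {chirp_mass(12.0, 7.0):.1f} solar masses")
```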
Sizes of Black Holes Throughout the Universe
What is the distribution of sizes of black holes in our universe? Can black holes of any mass exist, or are there gaps in their possible sizes? The shape of this black-hole mass function has been debated for decades, and the dawn of gravitational-wave astronomy has only spurred further questions.

Mind the Gaps

The starting point for the black-hole mass function lies in the initial mass function (IMF) for stellar black holes: the beginning size distribution of black holes after they are born from stars. Instead of allowing for the formation of stellar black holes of any mass, theoretical models propose two gaps in the black-hole IMF:
- An upper mass gap at 50-130 solar masses, due to the fact that stellar progenitors of black holes in this mass range are destroyed by pair-instability supernovae.
- A lower mass gap below 5 solar masses, which is argued to arise naturally from the mechanics of supernova explosions.

[Figure: Missing black-hole (BH) formation channels due to the existence of the lower gap (LG) and the upper gap (UG) in the initial mass function. a) The number of BHs at all scales is lowered because no BH can merge with BHs in the LG to form a larger BH. b) The missing channel responsible for the break at 10 solar masses, resulting from the LG. c) The missing channel responsible for the break at 60 solar masses, due to the interaction between the LG and the UG. (Christian et al. 2018)]

We can estimate the IMF for black holes by scaling a typical IMF for stars and then adding in these theorized gaps. But is this initial distribution of black-hole masses the same as the distribution that we observe in the universe today?

The Influence of Mergers

Based on recent events, the answer appears to be no! Since the first detections of gravitational waves in September 2015, we now know that black holes can merge to form bigger black holes. An initial distribution of black-hole masses must therefore evolve over time, as mergers cause the depletion of low-mass black holes and an increase in
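A toy version of the gapped initial mass function described above, with an assumed Salpeter-like slope and hard-edged gaps (the actual construction scales a stellar IMF and treats the gap edges more carefully), might look like this sketch:

```python
import numpy as np

def toy_black_hole_imf(m, slope=-2.35, lower_gap_max=5.0, upper_gap=(50.0, 130.0)):
    """Un-normalized dN/dm for a toy black-hole IMF: a power law with an assumed
    Salpeter-like slope, zeroed below `lower_gap_max` (the lower mass gap) and
    inside `upper_gap` (the pair-instability gap). Masses in solar masses."""
    m = np.asarray(m, dtype=float)
    dndm = m ** slope
    dndm[m < lower_gap_max] = 0.0
    dndm[(m >= upper_gap[0]) & (m <= upper_gap[1])] = 0.0
    return dndm

masses = np.array([3.0, 8.0, 30.0, 60.0, 150.0])
for m, w in zip(masses, toy_black_hole_imf(masses)):
    print(f"m = {m:6.1f} M_sun   dN/dm (unnormalized) = {w:.5f}")
```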
Black hole formation in a contracting universe
Quintin, Jerome; Brandenberger, Robert H., E-mail: [email protected], E-mail: [email protected] [Department of Physics, McGill University, 3600 rue University, Montréal, QC, H3A 2T8 Canada (Canada)]
We study the evolution of cosmological perturbations in a contracting universe. We aim to determine under which conditions density perturbations grow to form large inhomogeneities and collapse into black holes. Our method consists in solving the cosmological perturbation equations in complete generality for a hydrodynamical fluid. We then describe the evolution of the fluctuations over the different length scales of interest and as a function of the equation of state for the fluid, and we explore two different types of initial conditions: quantum vacuum and thermal fluctuations. We also derive a general requirement for black hole collapse on sub-Hubble scales, and we use the Press-Schechter formalism to describe the black hole formation probability. For a fluid with a small sound speed (e.g., dust), we find that both quantum and thermal initial fluctuations grow in a contracting universe, and the largest inhomogeneities that first collapse into black holes are of Hubble size and the collapse occurs well before reaching the Planck scale. For a radiation-dominated fluid, we find that no black hole can form before reaching the Planck scale. In the context of matter bounce cosmology, it thus appears that only models in which a radiation-dominated era begins early in the cosmological evolution are robust against the formation of black holes. Yet, the formation of black holes might be an interesting feature for other models. We comment on a number of possible alternative early universe scenarios that could take advantage of this feature.
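The Press-Schechter step mentioned above reduces, for Gaussian fluctuations, to an error-function integral over the tail above a collapse threshold; the sketch below uses an assumed threshold value and illustrative variances rather than the paper's contracting-universe results, and quotes the raw tail fraction (the Press-Schechter prescription conventionally multiplies this by two).

```python
from math import erfc, sqrt

def collapse_fraction(sigma, delta_c=0.45):
    """Fraction of Gaussian-distributed regions whose smoothed density contrast
    exceeds the collapse threshold delta_c: 0.5 * erfc(delta_c / (sqrt(2) * sigma)).
    delta_c = 0.45 is an assumed, commonly quoted illustrative threshold."""
    return 0.5 * erfc(delta_c / (sqrt(2.0) * sigma))

for sigma in (0.05, 0.1, 0.2, 0.4):
    print(f"sigma = {sigma:4.2f}   collapse fraction = {collapse_fraction(sigma):.3e}")
```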
Particle creation rate for dynamical black holes
Firouzjaee, Javad T. [School of Astronomy, Institute for Research in Fundamental Sciences (IPM), Tehran (Iran, Islamic Republic of); University of Oxford, Department of Physics (Astrophysics), Oxford (United Kingdom); Ellis, George F.R. [University of Cape Town, Mathematics and Applied Mathematics Department, Rondebosch (South Africa)
We present the particle creation probability rate around a general black hole as an outcome of quantum fluctuations. Using the uncertainty principle for these fluctuations, we derive a new ultraviolet frequency cutoff for the radiation spectrum of a dynamical black hole. Using this frequency cutoff, we define the probability creation rate function for such black holes. We consider a dynamical Vaidya model and calculate the probability creation rate for this case when its horizon is in a slowly evolving phase. Our results show that one can expect the usual Hawking radiation emission process in the case of a dynamical black hole when it has a slowly evolving horizon. Moreover, calculating the probability rate for a dynamical black hole gives a measure of when Hawking radiation can be killed off by an incoming flux of matter or radiation. Our result strongly suggests that we have to revise the Hawking radiation expectation for primordial black holes that have grown substantially since they were created in the early universe. We also infer that this frequency cutoff can be a parameter that shows the primordial black hole growth at the emission moment.
The eclectic approach to gravitational waves from black hole collisions
Baker, J.
I present the first results in a new program intended to make the best use of all available technologies to provide an effective understanding of waves from inspiraling black hole binaries in time for imminent observations. In particular, I address the problem of combining the close-limit approximation describing ringing black holes and full numerical relativity, required for essentially nonlinear interactions. The results demonstrate the effectiveness of our approach using general methods for a model problem, the head-on collision of black holes. Our method allows a more direct physical understanding of these collisions, indicating clearly when nonlinear methods are important. The success of this method supports our expectation that this unified approach will be able to provide astrophysically relevant results for black hole binaries in time to assist gravitational wave observations.
Black holes as critical point of quantum phase transition.
Dvali, Gia; Gomez, Cesar
We reformulate the quantum black hole portrait in the language of modern condensed matter physics. We show that black holes can be understood as a graviton Bose-Einstein condensate at the critical point of a quantum phase transition, identical to what has been observed in systems of cold atoms. The Bogoliubov modes that become degenerate and nearly gapless at this point are the holographic quantum degrees of freedom responsible for the black hole entropy and the information storage. They have no (semi)classical counterparts and become inaccessible in this limit. These findings indicate a deep connection between the seemingly remote systems and suggest a new quantum foundation of holography. They also open an intriguing possibility of simulating black hole information processing in table-top labs.
The immediate environment of an astrophysical black hole
Contopoulos, I.
In view of the upcoming observations with the Event Horizon Telescope (EHT), we present our thoughts on the immediate environment of an astrophysical black hole. We are concerned that two approximations used in general relativistic magnetohydrodynamic numerical simulations, namely numerical density floors implemented near the base of the black hole jet, and a magnetic field that comes from large distances, may mislead our interpretation of the observations. We predict that three physical processes will manifest themselves in EHT observations, namely dynamic pair formation just above the horizon, electromagnetic energy dissipation along the boundary of the black hole jet, and a region of weak magnetic field separating the black hole jet from the disc wind.
Hawking radiation from dilatonic black holes via anomalies
Jiang Qingquan; Cai Xu; Wu Shuangqing
Recently, Hawking radiation from a Schwarzschild-type black hole via a gravitational anomaly at the horizon has been derived by Robinson and Wilczek. Their result shows that, in order to demand general coordinate covariance at the quantum level to hold in the effective theory, the flux of the energy-momentum tensor required to cancel the gravitational anomaly at the horizon of the black hole is exactly equal to that of (1+1)-dimensional blackbody radiation at the Hawking temperature. In this paper, we attempt to apply the analysis to derive Hawking radiation from the event horizons of static, spherically symmetric dilatonic black holes with arbitrary coupling constant α, and that from the rotating Kaluza-Klein (α=√(3)) as well as the Kerr-Sen (α=1) black holes via an anomalous point of view. Our results support Robinson and Wilczek's opinion. In addition, the properties of the obtained physical quantities near the extreme limit are qualitatively discussed
Pair creation of dilaton black holes in extended inflation
Bousso, R.
Dilatonic charged Nariai instantons mediate the nucleation of black hole pairs during extended chaotic inflation. Depending on the dilaton and inflaton fields, the black holes are described by one of two approximations in the Lorentzian regime. For each case we find Euclidean solutions that satisfy the no boundary proposal. The complex initial values of the dilaton and inflaton are determined, and the pair creation rate is calculated from the Euclidean action. Similar to standard inflation, black holes are abundantly produced near the Planck boundary, but highly suppressed later on. An unusual feature we find is that the earlier in inflation the dilatonic black holes are created, the more highly charged they can be.
Simulating merging binary black holes with nearly extremal spins
Lovelace, Geoffrey; Scheel, Mark A.; Szilagyi, Bela
Astrophysically realistic black holes may have spins that are nearly extremal (i.e., close to 1 in dimensionless units). Numerical simulations of binary black holes are important tools both for calibrating analytical templates for gravitational-wave detection and for exploring the nonlinear dynamics of curved spacetime. However, all previous simulations of binary-black-hole inspiral, merger, and ringdown have been limited by an apparently insurmountable barrier: the merging holes' spins could not exceed 0.93, which is still a long way from the maximum possible value in terms of the physical effects of the spin. In this paper, we surpass this limit for the first time, opening the way to explore numerically the behavior of merging, nearly extremal black holes. Specifically, using an improved initial-data method suitable for binary black holes with nearly extremal spins, we simulate the inspiral (through 12.5 orbits), merger and ringdown of two equal-mass black holes with equal spins of magnitude 0.95 antialigned with the orbital angular momentum.
Dispelling Black Hole Pathologies Through Theory and Observation
Spivey R. J.
Astrophysical black holes are by now routinely identified with metrics representing eternal black holes obtained as exact mathematical solutions of Einstein's field equations. However, the mere existence and discovery of stationary solutions is no guarantee that they can be attained through dynamical processes. If a straightforward physical caveat is respected throughout a spacetime manifold then the ingress of matter across an event horizon is prohibited, in accordance with Einstein's expectation. As black hole formation and growth would be inhibited, the various pathological traits of black holes such as information loss, closed timelike curves and singularities of infinite mass density would be obviated. Gravitational collapse would not terminate with the formation of black holes possessing event horizons but asymptotically slow as the maximal time dilation between any pair of worldlines tends towards infinity. The remnants might be better described as dark holes, often indistinguishable from black holes except in certain astrophysically important cases. The absence of trapped surfaces circumvents topological censorship, with potentially observable consequences for astronomy, as exemplified by the remarkable electromagnetic characteristics, extreme energetics and abrupt extinction of quasars within low redshift galaxies.
Massive Binary Black Holes in the Cosmic Landscape
Colpi, Monica; Dotti, Massimo
Binary black holes occupy a special place in our quest for understanding the evolution of galaxies along cosmic history. If massive black holes grow at the center of (pre-)galactic structures that experience a sequence of merger episodes, then dual black holes form as inescapable outcome of galaxy assembly, and can in principle be detected as powerful dual quasars. But, if the black holes reach coalescence, during their inspiral inside the galaxy remnant, then they become the loudest sources of gravitational waves ever in the universe. The Laser Interferometer Space Antenna is being developed to reveal these waves that carry information on the mass and spin of these binary black holes out to very large look-back times. Nature seems to provide a pathway for the formation of these exotic binaries, and a number of key questions need to be addressed: How do massive black holes pair in a merger? Depending on the properties of the underlying galaxies, do black holes always form a close Keplerian binary? If a binary forms, does hardening proceed down to the domain controlled by gravitational wave back reaction? What is the role played by gas and/or stars in braking the black holes, and on which timescale does coalescence occur? Can the black holes accrete on flight and shine during their pathway to coalescence? After outlining key observational facts on dual/binary black holes, we review the progress made in tracing their dynamics in the habitat of a gas-rich merger down to the smallest scales ever probed with the help of powerful numerical simulations. N-Body/hydrodynamical codes have proven to be vital tools for studying their evolution, and progress in this field is expected to grow rapidly in the effort to describe, in full realism, the physics of stars and gas around the black holes, starting from the cosmological large scale of a merger. If detected in the new window provided by the upcoming gravitational wave experiments, binary black holes will provide a deep view
The capacity to transmit classical information via black holes
Adami, Christoph; Ver Steeg, Greg
One of the most vexing problems in theoretical physics is the relationship between quantum mechanics and gravity. According to an argument originally by Hawking, a black hole must destroy any information that is incident on it because the only radiation that a black hole releases during its evaporation (the Hawking radiation) is precisely thermal. Surprisingly, this claim has never been investigated within a quantum information-theoretic framework, where the black hole is treated as a quantum channel to transmit classical information. We calculate the capacity of the quantum black hole channel to transmit classical information (the Holevo capacity) within curved-space quantum field theory, and show that the information carried by late-time particles sent into a black hole can be recovered with arbitrary accuracy, from the signature left behind by the stimulated emission of radiation that must accompany any absorption event. We also show that this stimulated emission turns the black hole into an almost-optimal quantum cloning machine, where the violation of the no-cloning theorem is ensured by the noise provided by the Hawking radiation. Thus, rather than threatening the consistency of theoretical physics, Hawking radiation manages to save it instead.
Configurational entropy of anti-de Sitter black holes
Braga, Nelson R.F.; Rocha, Roldão da
Recent studies indicate that the configurational entropy is a useful tool to investigate the stability and/or the relative dominance of states for diverse physical systems. Recent examples comprise the connection between the variation of this quantity and the relative fraction of light mesons and glueballs observed in hadronic processes. Here we develop a technique for defining a configurational entropy for an AdS-Schwarzschild black hole. The achieved result corroborates consistency with the Hawking–Page phase transition. Namely, the dominance of the black hole configurational entropy will be shown to increase with the temperature. In order to verify the consistency of the new procedure developed here, we also consider the case of black holes in flat space-time. For such a black hole, it is known that evaporation leads to instability. The configurational entropy obtained for the flat space case is thoroughly consistent with the physical expectation. In fact, we show that the smaller the black holes, the more unstable they are. So, the configurational entropy furnishes a reliable measure for stability of black holes.
Braga, Nelson R.F., E-mail: [email protected] [Instituto de Física, Universidade Federal do Rio de Janeiro, Caixa Postal 68528, RJ 21941-972 (Brazil); Rocha, Roldão da, E-mail: [email protected] [Centro de Matemática, Computação e Cognição, Universidade Federal do ABC – UFABC, 09210-580, Santo André (Brazil)
BOOK REVIEW: Black Holes, Cosmology and Extra Dimensions
Frolov, Valeri P.
flatness of the Universe, the horizon problem and isotropy of cosmological microwave background. All this material is covered in chapter seven. Chapter eight contains brief discussion of several popular inflation models. Chapter nine is devoted to the problem of the large-scale structure formation from initial quantum vacuum fluctuation during the inflation and the spectrum of the density fluctuations. It also contains remarks on the baryonic asymmetry of the Universe, baryogenesis and primordial black holes. Part III covers the material on extra dimensions. It describes how Einstein gravity is modified in the presence of one or more additional spatial dimensions and how these extra dimensions are compactified in the Kaluza-Klein scheme. The authors also discuss how extra dimensions may affect low energy physics. They present examples of higher-dimensional generalizations of the gravity with higher-in-curvature corrections and discuss a possible mechanism of self-stabilization of an extra space. A considerable part of the chapter 10 is devoted to cosmological models with extra dimensions. In particular, the authors discuss how extra dimensions can modify 'standard' inflation models. At the end of this chapter they make several remarks on a possible relation of the value of fundamental constants in our universe with the existence of extra dimensions. Finally, in chapter 11 they demonstrate that several observable properties of the Universe are closely related with the special value of the fundamental physical constants and their fine tuning. They give interesting examples of such fine tuning and summarize many other cases. The book ends with discussion of a so-called 'cascade birth of universes in multidimensional spaces' model, proposed by one of the authors. As is evident from this brief summary of topics presented in the book, many interesting areas of modern gravity and cosmology are covered. However, since the subject is so wide, this inevitably implies that the
Black hole dynamics at large D
We demonstrate that the classical dynamics of black holes can be reformulated as a dynamical problem of a codimension one membrane moving in flat space. This membrane - roughly the black hole event horizon - carries a conserved charge current and stress tensor which source radiation. This `membrane paradigm' may be viewed as a simplification of the equations of general relativity at large D, and suggests the possibility of using 1/D as a useful expansion parameter in the analysis of complicated four dimensional solutions of general relativity, for instance the collision between two black holes.
Black hole ringdown echoes and howls
Nakano, Hiroyuki; Sago, Norichika; Tagoshi, Hideyuki; Tanaka, Takahiro
Recently the possibility of detecting echoes of ringdown gravitational waves from binary black hole mergers was shown. The presence of echoes is expected if the black hole is surrounded by a mirror that reflects gravitational waves near the horizon. Here, we present slightly more sophisticated templates motivated by a waveform which is obtained by solving the linear perturbation equation around a Kerr black hole with a complete reflecting boundary condition in the stationary traveling wave approximation. We estimate that the proposed template can bring about a 10% improvement in the signal-to-noise ratio.
Quantum chaos and the black hole horizon
Thanks to AdS/CFT, the analogy between black holes and thermal systems has become a practical tool, shedding light on thermalization, transport, and entanglement dynamics. Continuing in this vein, recent work has shown how chaos in the boundary CFT can be analyzed in terms of high energy scattering right on the horizon of the dual black hole. The analysis revolves around certain out-of-time-order correlation functions, which are simple diagnostics of the butterfly effect. We will review this work, along with a general bound on these functions that implies black holes are the most chaotic systems in quantum mechanics.
Black Holes and the Information Paradox
't Hooft, Gerard
In electromagnetism, like charges repel, opposite charges attract. A remarkable feature of the gravitational force is that like masses attract. This gives rise to an instability: the more mass you have, the stronger the attractive force, until an inevitable implosion follows, leading to a "black hole". It is in the black hole where an apparent conflict between Einstein's General Relativity and the laws of Quantum Mechanics becomes manifest. Most physicists now agree that a black hole should be described by a Schrödinger equation, with a Hermitean Hamiltonian, but this requires a modification of general relativity. Both General Relativity and Quantum mechanics are shaking on their foundations.
Fast plunges into Kerr black holes
Hadar, Shahar [Racah Institute of Physics, Hebrew University,Jerusalem 91904 (Israel); Porfyriadis, Achilleas P.; Strominger, Andrew [Center for the Fundamental Laws of Nature, Harvard University,Cambridge, MA 02138 (United States)
Most extreme-mass-ratio-inspirals of small compact objects into supermassive black holes end with a fast plunge from an eccentric last stable orbit. For rapidly rotating black holes such fast plunges may be studied in the context of the Kerr/CFT correspondence because they occur in the near-horizon region where dynamics are governed by the infinite dimensional conformal symmetry. In this paper we use conformal transformations to analytically solve for the radiation emitted from fast plunges into near-extreme Kerr black holes. We find perfect agreement between the gravity and CFT computations.
Black hole entropy, universality, and horizon constraints
Carlip, Steven [Department of Physics, University of California, Davis, CA 95616 (United States)
To ask a question about a black hole in quantum gravity, one must restrict initial or boundary data to ensure that a black hole is actually present. For two-dimensional dilaton gravity, and probably a much wider class of theories, I show that the imposition of a 'stretched horizon' constraint modifies the algebra of symmetries at the horizon, allowing the use of conformal field theory techniques to determine the asymptotic density of states. The result reproduces the Bekenstein-Hawking entropy without any need for detailed assumptions about the microscopic theory. Horizon symmetries may thus offer an answer to the problem of universality of black hole entropy.
Stationary Black Holes: Uniqueness and Beyond
Heusler Markus
The spectrum of known black hole solutions to the stationary Einstein equations has increased in an unexpected way during the last decade. In particular, it has turned out that not all black hole equilibrium configurations are characterized by their mass, angular momentum and global charges. Moreover, the high degree of symmetry displayed by vacuum and electro-vacuum black hole space-times ceases to exist in self-gravitating non-linear field theories. This text aims to review some of the recent developments and to discuss them in the light of the uniqueness theorem for the Einstein-Maxwell system.
Primordial black holes from fifth forces
Amendola, Luca; Rubio, Javier; Wetterich, Christof
Primordial black holes can be produced by a long-range attractive fifth force stronger than gravity, mediated by a light scalar field interacting with nonrelativistic "heavy" particles. As soon as the energy fraction of heavy particles reaches a threshold, the fluctuations rapidly become nonlinear. The overdensities collapse into black holes or similar screened objects, without the need for any particular feature in the spectrum of primordial density fluctuations generated during inflation. We discuss whether such primordial black holes can constitute the total dark matter component in the Universe.
Piotr T. Chruściel
The spectrum of known black-hole solutions to the stationary Einstein equations has been steadily increasing, sometimes in unexpected ways. In particular, it has turned out that not all black-hole-equilibrium configurations are characterized by their mass, angular momentum and global charges. Moreover, the high degree of symmetry displayed by vacuum and electro-vacuum black-hole spacetimes ceases to exist in self-gravitating non-linear field theories. This text aims to review some developments in the subject and to discuss them in light of the uniqueness theorem for the Einstein-Maxwell system.
Entropy Inequality Violations from Ultraspinning Black Holes.
Hennigar, Robie A; Mann, Robert B; Kubizňák, David
We construct a new class of rotating anti-de Sitter (AdS) black hole solutions with noncompact event horizons of finite area in any dimension and study their thermodynamics. In four dimensions these black holes are solutions to gauged supergravity. We find that their entropy exceeds the maximum implied from the conjectured reverse isoperimetric inequality, which states that for a given thermodynamic volume, the black hole entropy is maximized for Schwarzschild-AdS space. We use this result to suggest more stringent conditions under which this conjecture may hold.
Depilating Global Charge From Thermal Black Holes
March-Russell, John; Wilczek, Frank
At a formal level, there appears to be no difficulty involved in introducing a chemical potential for a globally conserved quantum number into the partition function for space-time including a black hole. Were this possible, however, it would provide a form of black hole hair, and contradict the idea that global quantum numbers are violated in black hole evaporation. We demonstrate dynamical mechanisms that negate the formal procedure, both for topological charge (Skyrmions) and complex scalar-field charge. Skyrmions collapse to the horizon; scalar-field charge fluctuates uncontrollably.
Surprise: Dwarf Galaxy Harbors Supermassive Black Hole
The surprising discovery of a supermassive black hole in a small nearby galaxy has given astronomers a tantalizing look at how black holes and galaxies may have grown in the early history of the Universe. Finding a black hole a million times more massive than the Sun in a star-forming dwarf galaxy is a strong indication that supermassive black holes formed before the buildup of galaxies, the astronomers said. The galaxy, called Henize 2-10, 30 million light-years from Earth, has been studied for years, and is forming stars very rapidly. Irregularly shaped and about 3,000 light-years across (compared to 100,000 for our own Milky Way), it resembles what scientists think were some of the first galaxies to form in the early Universe. "This galaxy gives us important clues about a very early phase of galaxy evolution that has not been observed before," said Amy Reines, a Ph.D. candidate at the University of Virginia. Supermassive black holes lie at the cores of all "full-sized" galaxies. In the nearby Universe, there is a direct relationship -- a constant ratio -- between the masses of the black holes and that of the central "bulges" of the galaxies, leading them to conclude that the black holes and bulges affected each others' growth. Two years ago, an international team of astronomers found that black holes in young galaxies in the early Universe were more massive than this ratio would indicate. This, they said, was strong evidence that black holes developed before their surrounding galaxies. "Now, we have found a dwarf galaxy with no bulge at all, yet it has a supermassive black hole. This greatly strengthens the case for the black holes developing first, before the galaxy's bulge is formed," Reines said. Reines, along with Gregory Sivakoff and Kelsey Johnson of the University of Virginia and the National Radio Astronomy Observatory (NRAO), and Crystal Brogan of the NRAO, observed Henize 2-10 with the National Science Foundation's Very Large Array radio telescope and
Inferences about Supernova Physics from Gravitational-Wave Measurements: GW151226 Spin Misalignment as an Indicator of Strong Black-Hole Natal Kicks.
O'Shaughnessy, Richard; Gerosa, Davide; Wysocki, Daniel
The inferred parameters of the binary black hole GW151226 are consistent with nonzero spin for the most massive black hole, misaligned from the binary's orbital angular momentum. If the black holes formed through isolated binary evolution from an initially aligned binary star, this misalignment would then arise from a natal kick imparted to the first-born black hole at its birth during stellar collapse. We use simple kinematic arguments to constrain the characteristic magnitude of this kick, and find that a natal kick v_{k}≳50 km/s must be imparted to the black hole at birth to produce misalignments consistent with GW151226. Such large natal kicks exceed those adopted by default in most of the current supernova and binary evolution models.
Stability and fluctuations in black hole thermodynamics
Ruppeiner, George
I examine thermodynamic fluctuations for a Kerr-Newman black hole in an extensive, infinite environment. This problem is not strictly solvable because full equilibrium with such an environment cannot be achieved by any black hole with mass M, angular momentum J, and charge Q. However, if we consider one (or two) of M, J, or Q to vary so slowly compared with the others that we can regard it as fixed, instances of stability occur, and thermodynamic fluctuation theory could plausibly apply. I examine seven cases with one, two, or three independent fluctuating variables. No knowledge about the thermodynamic behavior of the environment is needed. The thermodynamics of the black hole is sufficient. Let the fluctuation moment for a thermodynamic quantity X be √(⟨(ΔX)²⟩). Fluctuations at fixed M are stable for all thermodynamic states, including that of a nonrotating and uncharged environment, corresponding to average values J=Q=0. Here, the fluctuation moments for J and Q take on maximum values. That for J is proportional to M. For the Planck mass it is 0.3990(ℎ/2π). That for Q is 3.301e, independent of M. In all cases, fluctuation moments for M, J, and Q go to zero at the limit of the physical regime, where the temperature goes to zero. With M fluctuating there are no stable cases for average J=Q=0. But, there are transitions to stability marked by infinite fluctuations. For purely M fluctuations, this coincides with a curve which Davies identified as a phase transition.
You have probably heard of dark matter, which is different from dark energy, but do you know what it is or why scientists think it exists?
Why Do Astronomers Think There is Dark Matter?
01-Feb-2020 0:00 written by: Kevin McLin
The evidence for dark matter goes back to the 1930s, or even further if we broaden our meaning somewhat. For example, people long thought that there was a planet in close proximity to the Sun, even closer than Mercury. They thought this was true because Mercury had unexplained perturbations in its orbital motions. Put another way, Mercury did not follow the predictions of Newton's law of gravitation and his laws of motion. One possible explanation for the discrepancy could have been that the laws were simply wrong. But they worked for other objects, so why not for Mercury?
To understand why these discrepancies are a problem, we have to know what Newton's laws are. We'll begin with the Law of Universal Gravitation. If you already understand Newton's laws, you can skip the next couple of sections.
Newton's Law of Gravitation
Newton's law of gravitation describes the force that one object exerts on another due to the gravitational effect. It says that the force is proportional to the product of the masses of the two interacting objects. In other words, the bigger the product of the masses, the bigger the force exerted. It doesn't matter if there is one large object and one small one, or if both objects have the same mass. As long as the product of the masses is constant, the force exerted will be the same.
The force also depends on the distance separating the two objects, and it becomes smaller as the objects get farther apart. However, it does not diminish in a simple linear fashion. The force is inversely proportional to the square of the distance between the two. The "inversely" part means that the force involves the reciprocal of the distance, but the "square of the distance" part means that we have to use the square of the distance, not the distance itself.
Mathematically we can write all of these dependencies as is done below.
$$ \vec{F}_{12} = -\frac{G m_1 m_2}{r^2}\hat{r} $$
This equation provides the force \(\vec{F}_{12}\), meaning "the force of object 1 upon object 2." The symbols \(m_1\) and \(m_2\) are the respective masses of the objects, and \(r\) is the distance separating them. The arrow above \(\vec{F}_{12}\) and the hat above \(\hat{r}\) remind us that these are vector quantities; each has both a size and a direction. In particular, \(\hat{r}\) is called a unit vector because it has unit length, or in other words, a length of 1. It has no units of measure (no meters or inches or anything like that) and is there only to provide a direction; it always points along the radial direction in a polar coordinate system centered on one of the objects and toward the other object. The negative sign tells us that the direction of the force is opposite to the radial direction for a coordinate system centered on that mass. A picture will probably make this more clear.
The diagram below at left depicts the geometry. It shows \(\vec{F}_{12}\), the force that object 1 exerts upon object 2. Note that the coordinate system is centered on the first object. That is why the unit vector \(\hat{r}\) begins there and points toward object 2. The force felt by object 2 from the gravity of object 1 (\(\vec{F}_{12}\) ) is directed along the same line, but it points in the opposite direction. As is customary, the force acting on object 2 is shown at object 2, but like any vector, it can be moved around as we like. As long as we don't change either of its length or direction, we do not change the vector.
A similar diagram could be used to show the force that object 2 exerts on object 1. We could call this force \(\vec{F}_{21}\). The force is depicted in the diagram on the right. The two forces are equal in strength and opposite in direction, and so \(\vec{F}_{21} =- \vec{F}_{12}\) (see Newton's Third Law, below). This fact is also clear because we do not change the strength of the force by changing the order of multiplication of the masses in Newton's gravitational law: \(m_1 m_2 = m_2 m_1\).
The symbol \(G\) in Newton's law is called the gravitational constant. It keeps track of the system of units we are using to measure distance, mass and force. In the SI system of units (mass in kilograms, distance in meters, force in newtons) the gravitational constant has the numerical value below. For other systems of units (measuring force in pounds, distance in feet, etc.) it will generally have a different value.
$$ G = 6.67 \times 10^{-11}\rm\, N\,m^2\,kg^{-2} $$
One thing to note in passing is the tiny value of \(G\). The \(10^{-11}\) is a very, very small number. It means that in order to get an appreciable force we need to have a lot of mass. Alternatively, we could place the masses exceptionally close together so that \(r\) is small. As an example, if we have two \(\rm 1\,kg\) masses that are \(\rm 1\,m\) apart, we get a force so small as to be completely negligible in most cases.
$$ F = \frac{\rm (6.67 \times 10^{-11}\rm\, N\,m^2\,kg^{-2})(1\,kg)(1\,kg) }{(\rm 1\,m)^2} = 6.67 \times 10^{-11}\rm\, N $$
An example you can try yourself is to calculate how close these two objects would have to be to create a force of 6 or 7 newtons between them. You will find that it is quite a small number; compare it to the average distance between the proton and electron in a hydrogen atom, which is about \(\rm 10^{-10}\,m\).
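If you want to check the arithmetic, here is a minimal Python sketch that evaluates the force for the two 1 kg masses and then inverts the law to find the separation needed for a force of a few newtons (the 6.5 N target is just an arbitrary value picked for the exercise):

```python
import math

# Newton's law of universal gravitation, F = G*m1*m2 / r^2, in SI units.
G = 6.67e-11  # N m^2 kg^-2

def gravitational_force(m1, m2, r):
    """Magnitude of the gravitational force between two point masses (SI units)."""
    return G * m1 * m2 / r**2

# Two 1 kg masses separated by 1 m: a tiny force.
print(f"F = {gravitational_force(1.0, 1.0, 1.0):.3e} N")

# Invert the law: r = sqrt(G*m1*m2 / F) gives the separation for a chosen force.
F_target = 6.5
r_needed = math.sqrt(G * 1.0 * 1.0 / F_target)
print(f"Separation for {F_target} N: {r_needed:.2e} m")
```

The required separation comes out to a few millionths of a meter, which you can compare with the \(\rm 10^{-10}\,m\) scale of the hydrogen atom mentioned above.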
As far as we can tell, this law works everywhere in the universe. Well, almost everywhere. We'll get into some of that later. In particular, it should describe the motions of all the planets as they orbit the Sun, it should describe the motion of the Moon orbiting Earth, and it should describe the motion of a rock falling on the surface of Earth. All of these motions are dominated by gravity to the extent that other forces, like air resistance from Earth's atmosphere, for example, can be neglected. These non-gravitational effects can be neglected for rocks that are not falling too fast, but we cannot ignore them for falling feathers or pieces of paper. They are certainly not important for the motions of objects in outer space.
Newton's Laws of Motion
In addition to his Law of Universal Gravitation, Isaac Newton formulated three Laws of Motion. These laws describe all the motions of all bodies we observe around us, at least in the classical view of the world. They break down when we try to use them to describe atomic or subatomic particles, or when we attempt to use them to predict the motion of bodies traveling close to the speed of light. For those special (but often important) conditions we need new physics as revealed by The Special Theory of Relativity or by quantum mechanics. For all other situations, Newton's framework works exceedingly well. We briefly state each of his laws below, and then we explain their general meaning in what follows.
Newton's First Law of Motion:
An object in motion moves in a straight line at constant speed unless acted upon by a net force.
Newton's Second Law of Motion:
The net force exerted on an object, or in other words, the sum of all the forces acting on it, is equal to the product of its mass and acceleration. Or mathematically:
\(\vec{F}_{net}=m\vec{a}\)
Newton's Third Law of Motion:
Every force exerted by one object upon a second induces an equal and opposite reactive force exerted by the second upon the first.
Motion in a Gravitational Field
All of the laws of motion are generally true for any force, but the second law allows us to make quantitative predictions in specific situations. For example, we can use it to analyze the motion of objects under the influence of gravity alone. All we have to do is set the force, \(\vec{F}_{net}\), in Newton's Second Law equal to the gravitational force, as below.
$$ m_2\vec{a}_2 = -\frac{G m_1 m_2}{r^2} \hat{r} $$
Note that the subscripts here have a formal meaning. On the left we are using Newton's Second Law to write the net force on mass 2. Further, with the equal sign we are saying that this (net) force is the same as the gravitational force between mass 1 and mass 2. This equation can be simplified in a number of ways in order to make its meaning more clear. First, we can cancel the common factor of \(m_2\) by dividing both sides of the equation by that value. In addition, we can remove the vector signs and the minus sign. All these do is remind us of the direction of the acceleration, and that it is opposite to the direction of the unit vector \(\hat{r}\). But we know that the acceleration of object 2 points directly at object 1; we are dealing with an attractive force that always acts along the line connecting the two gravitating bodies, so the direction is not in question. What we are really interested in, as a result, is just the strength of the acceleration.
Finally, we will rename the mass \(m_1\) to a capital \(M\). Since we have rid ourselves of \(m_2\), we don't really need to keep the subscript 1 on the mass of object 1. It just makes our notation redundant and cumbersome, so we will get rid of it. What's more, using an upper case \(M\) for the mass creating the gravity is a common convention, and we will follow it. This does not change anything, it just replaces the name we are using for the mass of the object creating the gravitational field that accelerates the second mass. After these simplifications, the equation can be written as below.
$$ a = \frac{GM}{r^2} $$
A Brief Aside for Clarification
While we have assumed that object 2 is orbiting in the gravitational field of object 1, we could have done the opposite. In fact, the objects each orbit in the gravitational field of the other. So, for example, as Earth orbits the Sun, it is also true that the Sun orbits Earth. More accurately, we should say that they both orbit their common center of mass. Because the Sun contains so much more mass than Earth, the center of mass of the Earth-Sun pair is nearly coincident with the center of the Sun. So in this particular example the distinction is not usually important. That is not true for objects of more similar masses. Two stars in a binary system are one example. If the stars have the same mass, then the center of mass of the system is halfway between the two. In that case, the stars orbit - that is, they follow paths centered on - a point in space that is completely empty. This small detail is a good thing to keep in mind when thinking about gravity. Do not make the mistake of thinking that less massive objects "orbit around" more massive ones. In fact, according to Newton's law, all objects with mass produce gravity, and all objects with mass respond (that is, they move) to the gravitational field produced by other objects. It's just that objects with more mass move less than objects with less mass. This effect has been used by astronomers to detect planets around other stars as they undergo small motions under the influence of the gravity of the planets orbiting them.
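To make the center-of-mass idea concrete, here is a minimal Python sketch that locates the barycenter of the Earth-Sun pair, using the standard masses of the two bodies and their mean separation:

```python
# Distance of the Earth-Sun barycenter from the Sun's center.
M_sun = 1.989e30      # kg
M_earth = 5.97e24     # kg
d = 1.496e11          # mean Earth-Sun distance, m

# The center of mass lies along the line joining the two bodies, offset from the
# Sun's center in proportion to Earth's share of the total mass.
r_cm = M_earth * d / (M_sun + M_earth)

print(f"Barycenter lies about {r_cm/1000:.0f} km from the Sun's center")
```

The barycenter lies only about 450 km from the Sun's center, deep inside the Sun itself, which is why treating the Sun as fixed is such a good approximation in this case.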
The equation above gives the gravitational acceleration caused by an object with a mass \(M\) located a distance \(r\) away. It is often called the gravitational field and denoted by a lower case \(g\). We can use it, to take one example, to compute the surface gravity of Earth, which has a mass of \(\rm 5.98\times 10^{24}\,kg\) and a radius of \(\rm 6378\,km\). Note that to do the calculation we must convert the radius from kilometers to meters.
$$ g = \frac{\rm (6.67\times 10^{-11}\,N\,m^2\,kg^{-2})(5.98\times 10^{24}\,kg)}{\rm (6.378\times 10^6 \,m)^2} = \rm 9.81\,m\,s^{-2}$$
This is the value of Earth's surface gravity, a number first measured by Galileo more than 400 years ago, though he expressed it in different units, of course. It is the same for all falling objects regardless of their mass (or their "weight" if you prefer, though be careful: weight and mass, while related, are not the same). While it's true that more massive objects feel a larger force from gravity (Newton's gravitational law), they also have a larger inertia, and so are more difficult to accelerate (Newton's Second Law). The two effects exactly cancel each other, and so all objects fall at the same rate, regardless of their mass.
We should point out here that we might have pulled a little bit of subterfuge above. To understand how, consider the following: Newton's gravitational law essentially gives us a definition of a quantity we could call gravitational mass. It depends only upon the gravitational interaction between two massive bodies. The Second Law is a definition of something we might call inertial mass. It describes how an object's acceleration depends on its mass and the force exerted on it. We set them equal in our analysis above when we canceled the \(m_2\) in the Second Law with the \(m_2\) in the gravitational law. There is no reason that the two have to be the same, and there are some theories of particle physics that predict small differences between them. Experiments have been run in an attempt to measure these possible differences, but so far, no differences have been found. Maybe that is because there actually are no differences, or maybe it is because our current experiments lack the sensitivity required to measure them. In any case, it is something to keep in mind as you continue to read through this post. All of its conclusions depend upon the equivalence of gravitational and inertial mass.
You might also have objected to the distance we used in our computation above. If we stand upon the surface of Earth, our distance from the surface is zero, so why should the 6378 km from the surface to the center be the distance that enters the calculation? The answer is that, gravitationally speaking, the distance that matters is the distance to the center. That is because a spherical object like Earth has a gravitational field outside its volume that is identical to that of an object with the same mass located at a point at its center. Newton was the first person to prove this assertion mathematically. If you are curious to know how the proof works, have a look in an introductory calculus-based text on physics. In any event, Newton's gravitational law describes the gravitational force between any two such point masses, so our computation turns out to be the acceleration created by a point-Earth located at the center of the actual Earth. The treatment is valid (to very good approximation) anywhere outside Earth's volume.
The result above gives a theoretical underpinning to Galileo's discovery that all objects fall at the same rate when under gravity's sole influence. Galileo had conducted experiments to arrive at his conclusion, but it was not until Newton proposed his laws of motion and gravitation that the phenomenon was understood theoretically.
We could repeat this same calculation for all the planets, for the moon, or basically for any point in space. For example, we could compute the value of Earth's gravitational acceleration at the distance of the Moon. We would only have to replace the radius of Earth with the distance from Earth to the Moon. This distance is 384402 kilometers. You can substitute this value for Earth's radius, above, if you like. Then you will see how fast the Moon is falling toward Earth. Just don't forget to convert from kilometers to meters before you do the substitution. If you neglect this step, your answer will be off from the correct one by a factor of 1000 squared, i.e., a million!
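As a quick numerical check of both calculations, the short Python sketch below evaluates \(g = GM/r^2\) at Earth's surface and again at the Moon's distance, with the kilometer-to-meter conversions written out explicitly:

```python
# Gravitational acceleration g = G*M / r^2 at two distances from Earth's center.
G = 6.67e-11          # N m^2 kg^-2
M_earth = 5.98e24     # kg
r_surface = 6378e3    # Earth's radius, converted from km to m
r_moon = 384402e3     # Earth-Moon distance, converted from km to m

def gravitational_acceleration(M, r):
    """Acceleration produced by a point mass M at distance r (SI units)."""
    return G * M / r**2

print(f"g at Earth's surface: {gravitational_acceleration(M_earth, r_surface):.2f} m/s^2")
print(f"g at the Moon's distance: {gravitational_acceleration(M_earth, r_moon):.5f} m/s^2")
```

The first number reproduces the \(\rm 9.81\,m\,s^{-2}\) found above; the second, about \(\rm 0.0027\,m\,s^{-2}\), is how fast the Moon is falling toward Earth.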
Circular Motion
We are now ready to explore how the mass of an object can be determined by observing the gravitational effects it has on another object. Recall that, according to the First Law of Motion, an object moves in a straight line at constant speed unless it is acted upon by a net force. So if the object either speeds up or changes direction, then there must be a net force acting. Let us use this idea to find a means to measure the mass of gravitating objects.
To begin, consider one of the planets orbiting the Sun. The planets' orbits are slightly distorted from a circle; they are ellipses, of which a circle is a special case. However, their orbital paths are close enough to circular that we do not make a large error by assuming they are perfectly circular. Certainly our reasoning will still be valid, even if our mathematical results will be in slight disagreement with reality.
A general result from Newtonian mechanics is that an object moving at a constant speed along a circular path has a constant acceleration, called the centripetal acceleration. Centripetal means center-seeking. This acceleration does not cause the moving object to speed up or slow down, it only changes the direction of the object's velocity. Further, the acceleration vector points toward the center of the circular path (in the minus \(\hat{r}\) direction), so it really is center seeking. We have a somewhat more detailed discussion of this in the blog post What is an Orbit?
The mathematical expression for the centripetal acceleration is shown below. The acceleration is shown with a subscript \(c\) to emphasize that we are specifically referring to centripetal acceleration.
$$ \vec{a}_c = -\frac{v^2}{r} \hat{r} $$
The figure below shows the geometry. An object with mass \(m\) moves along a circular path with radius \(r\). The object moves at a constant speed, and at each point along the path the velocity is tangent to the path, as shown by the vector \(\vec{v}\). The radial unit vector, \(\hat{r}\), always points from the center of the circle outward toward the position of the object at any particular moment in time. The acceleration vector \(\vec{a}\) points inward from the object toward the center, opposite the direction of \(\hat{r}\).
For a planet in orbit about the Sun, the centripetal force is provided by the Sun's gravity. That means we can set the centripetal acceleration equal to the gravitational acceleration in this case. We then arrive at the following expression, in which the relationship between the motion of the orbiting object (its speed, \(v\)) to the mass of the gravitating body is becoming clear. In this case, the mass of the gravitating body is \(M\), the mass of the Sun.
$$ \frac{v^2}{r} = \frac{GM}{r^2}$$
There is a common factor of \(r\) in the denominator of both sides of this equation. We can therefore simplify things by multiplying the entire equation by \(r\), canceling the factor on the left and leaving only a single factor on the right. We then have the expression below.
$$ v^2 = \frac{GM}{r}$$
We now see the relationship between motion and mass quite clearly. An object with greater mass causes an orbiting object to move faster than a lower-mass object does. Keep in mind, we are referring to the mass of the object creating the gravity, not the mass of the orbiting object. So for the solar system, we consider that the Sun is creating the gravity. The planets are orbiting in the gravitational field created by the Sun. The mass of a given planet does not matter: for a given orbital radius, a planet of any mass will orbit at the speed given above. Additionally, as the orbit gets bigger (as \(r\) increases) the velocity gets smaller. So planets farther from the Sun move more slowly than planets near to the Sun, the mass of the planet notwithstanding.
We can find exactly what we want, the mass in terms of the velocity, by rearranging. If we multiply the equation by \(r\) and divide by \(G\), we then have a final equation that gives the mass in terms of the (presumably) measurable speed of the orbiting object.
$$ M = \frac{v^2 r}{G} $$
This equation provides the mass of a gravitating object in terms of the effect it has on the motions of objects near it. In particular, we can use it to determine the mass of the Sun if we know the orbital speed of any of the planets. And we do.
The speed of Earth in its orbit is about \(\rm 30\,km\,s^{-1}\), which we can determine by noting that its orbital radius is 149 million kilometers, and it requires a year (365.26 days) to orbit one time. Using these numbers and plugging into the expression above gives us the mass of the Sun. We have, of course, remembered to convert the orbital radius from kilometers to meters and the orbital period from days to seconds.
$$ M_\odot = \frac{\rm (3\times 10^4\,m\,s^{-1})^2 (1.49\times 10^{11}\,m)}{\rm 6.67\times 10^{-11}\,N\,m^2\,kg^{-2}} = \rm 2\times 10^{30}\,kg$$
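The same arithmetic can be written as a short Python sketch, computing Earth's orbital speed from its circumference and period and then the Sun's mass from \(M = v^2 r/G\):

```python
import math

G = 6.67e-11                 # N m^2 kg^-2
r_orbit = 1.49e11            # Earth's orbital radius, m
period = 365.26 * 24 * 3600  # one year, s

# Orbital speed from circumference / period, then the Sun's mass from M = v^2 r / G.
v = 2 * math.pi * r_orbit / period
M_sun = v**2 * r_orbit / G

print(f"Earth's orbital speed: {v/1000:.1f} km/s")
print(f"Mass of the Sun: {M_sun:.2e} kg")
```

Both printed values match those in the text: roughly 30 km/s and about \(2\times 10^{30}\,\rm kg\).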
We could repeat this calculation for each of the other seven planets, plus asteroids, etc. Each would give the same answer for the mass of the Sun to within uncertainties. Of course, we remind ourselves that the planets are not on circular orbits. They follow ellipses, and this contributes to some of the variation we would get. However, it is illustrative of the methods used to measure the masses of objects in space. More sophisticated mathematical treatments yield better mass determinations, and they follow along essentially the same lines of reasoning. Namely, motions of objects in a gravitational field can be used to infer the mass of the object producing the gravity that is affecting the motion of the orbiting bodies.
So what does all this have to do with the orbit of Mercury, and what does that have to do with dark matter? Everything.
If we know the mass of the gravity source, we can predict the motions of objects in that gravity field. On the other hand, if we can observe the motions of objects in space, we can infer the mass that must be present to produce those motions. Both effects play into the observations of Mercury's orbital motions.
The Puzzle of Mercury's Orbit
It was long known that Mercury's orbit does not follow the predictions made by Newton's laws. The mass of the Sun was well known because of calculations like the one above, therefore it should have been possible to accurately predict the motion of Mercury. However, this was not the case. Mercury exhibited very small, but quite noticeable, differences in its orbit from those predicted by Newtonian gravity. It was assumed that there must be some extra gravity causing its motions to be perturbed. The source of this extra gravity was postulated to be an unseen planet, and the planet was given the name Vulcan. Astronomers searched for Vulcan in the twilight sky for many years. (Mercury is always near to the Sun, and so only visible in twilight. So would be the case for Vulcan.) Despite the most careful searches, no solid detection of Vulcan was ever made. Several false alarms were reported, but they always evaporated upon further observations. To this day there has not been a planet seen between Mercury and the Sun. Apparently, Vulcan does not exist.
So, how was the issue of Mercury's strange orbital motion finally solved? It took a new and better understanding of gravity. In other words, new physics was needed. This was provided by Albert Einstein in 1915 with his Theory of General Relativity. In fact, the first test Einstein made of his new theory, before he had even announced it, was to use it to compute the orbit of Mercury. The results of his calculations matched Mercury's motions exactly.
In regions where gravity is strong, like the vicinity of the Sun (where Mercury orbits), General Relativity (GR) gives slightly different predictions for the motions of objects than Newton's laws do. It turns out that GR is more accurate in these regions. Where gravity is weaker, the two theories give essentially the same predictions. In fact, when gravity is weak, GR, which is an extremely difficult theory mathematically, reduces to Newtonian gravity to an excellent approximation. It is only when gravity becomes strong that the two diverge. In those regions general relativity must be used.
A discussion of general relativity is outside the scope of what we can address here, and I bring it up only as an illustration. When we see motions of objects in space that we cannot explain given the amount of gravitating mass we see, there are two possible conclusions. The first is that there are unseen bodies present, creating extra gravity and perturbing the motions of objects. We can then look for those objects and try to reconcile the orbital motions with the gravitating mass. The other possibility is that our application of the gravitational equations is in error, and that we need a different theory of gravity. The latter turned out to be the explanation for Mercury, though for decades astronomers believed the solution was in the existence of unseen matter in the form of the planet Vulcan.
But sometimes things have gone the other way. The discovery of Neptune was greatly aided by the realization that Uranus moved in ways not explained by the gravity of the Sun - and the known planets - alone. Uranus's orbital motions suggested that an unseen body tugged at it from a place farther out in the solar system. That body, the planet Neptune, was eventually found. It was this success of Newton's gravitational theory in the 19th century that emboldened scientists to postulate the existence of Vulcan, and to spend so many years vainly looking for it. It is also the example of Neptune's discovery that suggests the presence of unseen matter can sometimes be the explanation for puzzling motions of celestial objects.
Evidence for Dark Matter - I. Galaxy Clusters
In the 1930s, astronomers began to study the universe beyond our galaxy. It was only in the mid-1920s that Edwin Hubble had presented the evidence that the universe even extended that far. Prior to that, many astronomers had thought that the nebulae that were, in fact, external galaxies were no more than clouds of gas within the confines of the Milky Way. Hubble proved that this idea was wrong.
An astronomer at the California Institute of Technology (Caltech) named Fritz Zwicky soon began to study a group of these external galaxies in the constellation Coma Berenices (Berenice's Hair), called the Coma cluster. He measured the motions of the galaxies within the cluster and found them to be exceedingly fast. Zwicky then counted up the amount of matter he could see, basically the galaxies themselves, which allowed him to assign a total mass to the cluster. When he compared the galaxy speeds to the mass, he saw that there was far, far too little mass to hold the cluster together. Instead, given the speed of the galaxies and the paltry amount of mass, the Coma Cluster should be flying apart.
Of course, it is possible that the Coma cluster is, in fact, flying apart. But it seems unlikely. The universe, it turns out, is full of such galaxy clusters, and one has to wonder why we should see them at all if they are not stable. Zwicky understood this too, so he postulated the existence of additional mass that was invisible to him. Being German-Swiss originally, he coined a German term for this missing mass, "dunkle Materie," or dark matter.
Zwicky's method of mass determination was not the same as that described above. He used energy considerations, not forces and accelerations. Specifically, he used the Virial Theorem. Nonetheless, the methods are complementary, not at odds. Both should (and do) give the same results. It's just that for some systems, forces and accelerations provide the simplest approach. For others, it is easier to use energy. In either case, we arrive at a mathematical relationship like the one above, with \(M \approx v^2 r / G\), with \(v\) being a characteristic speed for components in the system and \(r\) being a characteristic size.
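To see how such a dynamical estimate works in practice, here is a rough Python sketch of the order-of-magnitude calculation. The numbers used, a characteristic galaxy speed of about 1000 km/s and a radius of about 3 Mpc, are illustrative values assumed here for a Coma-like cluster, not measurements taken from Zwicky's paper:

```python
# Order-of-magnitude dynamical mass estimate, M ~ v^2 * r / G,
# using illustrative values for a Coma-like cluster of galaxies.
G = 6.67e-11                  # N m^2 kg^-2
Mpc = 3.086e22                # meters per megaparsec
M_sun = 2e30                  # kg

v = 1000e3                    # characteristic galaxy speed, m/s (assumed ~1000 km/s)
r = 3 * Mpc                   # characteristic cluster radius (assumed ~3 Mpc)

M_dynamical = v**2 * r / G
print(f"Dynamical mass ~ {M_dynamical:.1e} kg ~ {M_dynamical / M_sun:.1e} solar masses")
```

A dynamical mass of order \(10^{15}\) solar masses is far more than the visible galaxies alone would suggest, which is the kind of mismatch Zwicky noticed.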
Zwicky's initial estimates of the amount of dark matter were far too large because he used an incorrect conversion from light (basically the brightness of galaxies, which he was able to measure) to mass (which he had to infer from the light he saw). This was in the early days of our understanding of galaxies, and subsequent research has brought the amount of mass associated with a given amount of light down considerably. Nonetheless, even with the current much lower conversions, the motions of individual galaxies within clusters (as well as the temperature of the x-ray emitting gas they contain) suggest that they have approximately six times as much matter as we see in them. This result holds true for all galaxy clusters, not just Coma. So the dark matter puzzle has not gone away. An image of the Coma cluster from the NASA website Astronomy Picture of the Day (APOD) is shown below.
Evidence for Dark Matter - II. Galaxy Rotation Curves
Clusters are not the only evidence for dark matter. In the 1970s, an astronomer named Vera Rubin began to study the rotation of spiral galaxies. What she found was unexpected. To understand why, let's consider the rotation of the planets around the Sun. According to our analysis above, their velocity should fall off as the inverse square root of their distance from the Sun. Up above, we learned that the velocity squared of an object orbiting another object is given by the following expression.
$$v^2 = \frac{GM}{r}$$
We can take the square root of both sides of this equation to arrive at the equation below. We have rewritten the square root of the product \(GM\) as a single constant, \(k\), to emphasize the dependence on the orbital radius, \(r\). For example, the equation predicts the orbital speeds of all of the planets.
To compare this prediction to observation, we can plot the observed planet orbital speeds with a plot of the equation on the same graph. The figure at right below shows such a comparison. You can see that the prediction matches the observations extremely well.
$$v = \frac{k}{\sqrt{r}}$$
This \( r^{-\frac{1}{2}} \) dependence is what we expect for a system like the Sun and planets, in which nearly all the mass is located in the central object. But what about a galaxy?
The mass distribution of galaxies is very different from the mass distribution of planetary systems orbiting a star. Galaxies do seem to have a mass concentration near their center, but they also have considerable mass away from the center. As a result, we expect that the velocity of the stars in a galaxy will not fall off as rapidly as the velocity of the planets in the solar system does. However, it is not clear what sort of dependence to expect. It just depends on how the mass in galaxies is distributed. Galaxy mass dependence with radius is what Vera Rubin set out to explore when she began her study of spiral galaxies. To determine the mass distribution, she made careful measurements of the orbital speeds of stars in galaxies and plotted them against the distance of the stars from the galactic centers of the systems she observed. Example plots from one of her papers (Rubin et al., Astrophysical Journal, 289, 81, 1985) are shown at right.
These graphs, which are called rotation curves, can be used to infer the radial dependence of gravitating mass in each of the galaxies plotted. Eight galaxies are shown here. The horizontal axis for the plots is angular distance from the galaxy center, in arc seconds. The vertical axis for each shows recession speed from Earth. Each is centered on the recession speed of the galaxy, which is typically several hundred or several thousand km/s. The half of the galaxy that is rotating away from us has a slightly higher recession speed, while the side of the galaxy that is rotating toward us has a slightly lower recession speed. We could subtract the system recession speed, which is the speed of the galaxy center, but that has not been done for these plots.
One difference from the solar system plot is immediately noticeable: the velocities of stars in the outer parts of these galaxies are higher than the rotation speeds near their centers. That is expected because as we move away from the center of a galaxy we are adding gravitating mass inside the orbits of the stars. However, what was surprising to astronomers was how the rotation curves all flatten out after some distance, and how they essentially remain constant out to the edge of measurements. Let's look at the implications of the observed flattening of rotation curves.
From our discussion above, we know that the velocity of the stellar motion depends on the attracting mass and the size of the stellar orbits. We can rewrite the equation from above as below, but now we have expressly written the gravitating mass with a radial dependency, \(M = M(r)\). This reminds us that we no longer have a dominant massive object inside the orbit. Instead, the gravitating mass increases significantly as we move outward from the center, and so we expect that the rotation speed will increase, too.
Of course, the velocity does increase with radius in the inner parts of the galaxies. The plots clearly show this. However, at some radius, call it \(r_0\), the velocity becomes constant in all the galaxies, and each has its own particular value for the characteristic radius \(r_0\).
$$ v(r)^2 = \frac{GM(r)}{r}$$
If the velocity is remaining constant as the distance \(r\) increases, it must also be true that the mass \(M\) increases, and it must do so at exactly the same rate. So we must have an expression for the mass, at least in the region of the galaxies where the rotation speed is constant, that is like the one below. \(M_0\) and \(r_0\) are both constant parameters.
$$ M(r) = M_0 \left ( \frac{r}{r_0} \right )$$
With this sort of radial dependence for mass, the variable \(r\) cancels out of the equation, leaving a constant velocity.
$$ v^2 = \left (\frac{G}{r} \right ) \left [ M_0 \left ( \frac{r}{r_0} \right ) \right ] = \frac{GM_0}{r_0}$$
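To see the difference numerically, the short sketch below (with arbitrary but galaxy-like example values for \(M_0\) and \(r_0\)) evaluates both cases: the Keplerian speed for a central point mass \(M_0\), and the speed when the enclosed mass grows as \(M(r) = M_0 (r/r_0)\):

```python
import numpy as np

G = 6.674e-11        # m^3 kg^-1 s^-2
M0 = 2.0e40          # example enclosed mass at r0 (~1e10 solar masses), kg
r0 = 3.0e19          # example characteristic radius (~1 kpc), m

r = np.linspace(r0, 10 * r0, 5)

v_point = np.sqrt(G * M0 / r)               # all mass at the center: v falls as r^(-1/2)
v_flat = np.sqrt(G * (M0 * r / r0) / r)     # M(r) = M0 * r/r0: v stays constant

for ri, vp, vf in zip(r, v_point, v_flat):
    print(f"r = {ri:.2e} m   point-mass v = {vp/1e3:6.1f} km/s   flat-curve v = {vf/1e3:6.1f} km/s")
```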
The need for dark matter becomes apparent when you take a look at a photograph of a spiral galaxy, like the one below. You will see that they are bright in the center and become progressively dimmer toward the edges. This is because the stellar density drops with radius.
Image Credit: NASA, ESA, S. Bianchi (Università degli Studi Roma Tre University), A. Laor (Technion-Israel Institute of Technology), and M. Chiaberge (ESA, STScI, and JHU)
One solution would be to make up the increasing mass with material that does not shine, like gas or dust. However, it turns out that gas and dust both shine quite brightly if you observe them in the right form of light. For example, we can observe the atomic gas via radio emission from neutral hydrogen, called HI. In areas that contain hydrogen molecules instead of atoms, emission from other molecular species (like CO) is visible. The dust, on the other hand, is visible by infrared emission. So we actually can observe these constituents, and none of them can explain the constant rotation speed of galaxies. What's more, the emission from neutral hydrogen extends many times farther from the center than the stars, dust and molecular gas do. Even at these distances the rotation speed remains constant, indicating that the mass within the galaxy continues to increase with radius as far out as we can measure.
Only a small fraction of the mass is observed directly at these distances by electromagnetic radiation. Instead, the vast majority of the mass is completely invisible, and this invisible material dominates the total mass of the entire galaxy by as much as a factor of ten. Somewhat confusingly, the material composing the bulk of the mass is called dark matter. But it's not really dark, it's invisible. Invisible matter would be a much more descriptive and less confusing name for it.
Evidence for Dark Matter - III. Gravitational Lensing
Abell 2218 galaxy cluster and gravitational lensing system.
Credit: NASA, ESA, and Johan Richard (Caltech, USA) Acknowledgement: Davide de Martin & James Long (ESA/Hubble)
The image above illustrates the third type of powerful evidence for dark matter: gravitational lensing. This phenomenon is not part of Newtonian physics, according to which gravity would arguably not have any effect on light at all. Because light has no mass, it would not feel any force according to Newton's Law of Universal Gravitation. I say arguably because there is another view of Newtonian gravity in which it is thought of as an acceleration (the little g, discussed above) and not a force, per se. In that case, everything could experience the acceleration, even light. However, if you compute the deflection caused by a Newtonian acceleration as light passes a massive object, you find that you predict only half the amount of deflection that is observed. This deflection was first measured during a solar eclipse in 1919, after it had been predicted by general relativity - Einstein, again. The predictions of GR match the deflection observed precisely.
As mentioned before, GR is too much to go into in this post. However, we will go so far as to say that in the GR view of gravity, it is neither a force nor an acceleration. It is a distortion of space and time (spacetime, in the parlance of relativity, both special and general). This distortion is a compression or stretching of space, and similarly, a slowing or speeding up of time, and the effects are local and vary as one moves around in spacetime. The distortion (which is gravity) causes the trajectories of objects to appear to bend. It also makes it seem as if there is a gravitational acceleration near massive objects. But there is no acceleration in 4-dimensional (3-space plus 1-time dimension) spacetime. The apparent acceleration is an illusion that appears in 3-dimensional space when you consider it separate from time, as is the case with Newtonian physics. Even if you didn't follow all of that, you probably get that it is a completely different view of what gravity is and how it works. What's more, it does a better job than Newtonian gravity of predicting how gravity behaves and how it affects objects in the universe. All objects.
Just as Newtonian dynamics can be used to infer the gravitating mass in a dynamical system, so general relativity can be used to infer the mass in a system with gravitational lensing, which is itself a dynamical system, of course. When this is done, we again find that the amount of mass required to cause the observed bending of starlight is much greater than the mass that can be directly observed in light, radio, x-rays, infrared, etc. So again, we have strong evidence for invisible material affecting the gravitational field, or in other words, for dark matter.
The picture above is a Hubble Space Telescope image of the galaxy cluster called Abell 2218. If you look closely you will notice many arc structures that are concentric around the cluster center. You can also see arcs centered around some of the more massive cluster members. These arcs are actually images of background galaxies that have been distorted by the gravity (spacetime distortion) of the foreground cluster mass. Astronomers call such a system a gravitational lens because the mass of the cluster acts like a lens to magnify and distort objects in the background. A system like this, with very prominent and obvious arcs, is called a strong lens, and it provides a powerful means to determine the mass in the cluster. Analysis of the arcs provides a way to measure not just the total mass in the lens, but also the mass distribution required to form the many images seen. So gravitational lensing alone suggests the existence of dark matter, because the amount of mass required to create the lensed images is always much more than is observed via electromagnetic waves. Furthermore, the mass determined by lensing agrees with the mass determined for clusters using more traditional methods based on the velocities of the cluster members, as Zwicky first used back in the 1930s.
Bullet Cluster. For more information about this image:
http://chandra.harvard.edu/press/06_releases/press_082106.html
The second image provides the most powerful evidence for dark matter yet seen. It shows a collision between two galaxy clusters, but there is a lot going on here and it needs a bit of explaining. There are actually three images superimposed. The first is the optical image of the galaxies, and it does not show anything particularly unusual. There is not even any strong lensing as observed for Abell 2218. However, careful analysis of galaxies in this field shows that they are slightly distorted by the mass of the clusters. The distortion is not enough to be seen in any individual galaxy, but when considered as a whole, the galaxies tend to be curved into arcs centered on the mass in the foreground, just as is the case for the individual galaxy arcs in the strong lens of Abell 2218. Analysis of this weak lensing effect shows that the mass of the clusters is concentrated in the areas shaded blue. Note that these areas coincide with the two clusters in the image. Not a big deal, you might think, but just wait.
Finally, there is the red region. This is x-ray emission that has been measured by the Chandra X-Ray Observatory, a counterpart to Hubble that observes the universe in the higher energy regime of x-rays. This Chandra image shows the presence of very hot gas, with a temperature of several million kelvin.
Putting this all together, we have the following picture: Two clusters collided with each other quite recently (cosmically speaking). The member galaxies of each cluster pretty much passed right through one another. That would be expected since the clusters are mostly empty space, and the distance between galaxies in each cluster is much larger than the galaxies themselves. You can think of it sort of like two swarms of bees that pass through one another. The bees will seldom, if ever, run into one another, and the swarms will continue along on their journeys unmolested. A similar condition exists when two galaxies collide because the distance between stars in a galaxy is much larger than the stars themselves, to an even greater degree, in fact.
But look at the distribution of the x-ray emitting gas. It has stopped in the middle of the two galaxy clusters, presumably the point of the collision.
All galaxy clusters contain this hot gas. In fact, there is as much mass in cluster gas as there is in the stars in the member galaxies. However, if two clouds of gas collide, they do not pass through one another. The gas particles (atoms, in this case) interact very strongly via collisions, or in other words, through strong electromagnetic interactions. All of their kinetic energy (energy of motion) has gone into heating them up, while they themselves have pretty much stopped dead in their tracks. This is what we see has happened from the Chandra image.
So the collision has stripped the gas out of the two clusters, but the galaxies and the bulk of the mass (revealed by weak lensing) have passed through. This indicates that the material that strongly interacts via electromagnetic effects - the gas component of the mass that can collide - has stopped. The components that do not interact via electromagnetic effects, the galaxies and whatever is causing the lensing, have passed right through and continued on their original trajectories. This is exactly what would be expected for a dark matter particle that has mass, and so interacts via gravity, but that has no ability to respond via electric or magnetic interactions.
Most astronomers consider this system, and a couple others like it, to be the final proof needed for the existence of dark matter, an invisible type of particle that interacts only, or at least primarily, by gravitational effects. Dark matter is invisible because, lacking any electromagnetic potency, it does not interact with any electromagnetic radiation, not visible light, not x-rays or gamma-rays, not ultraviolet, not infrared, not radio. Nothing. But it has mass/energy, and according to general relativity, anything that has mass or energy will both create gravity (spacetime distortion) and respond to it.
What is the Dark Matter?
The discussion above gives a general overview of the reasoning behind the belief among most astronomers and physicists that the vast majority of the matter in the universe is invisible. When scientists do a census, the dark component of matter is more than 85% of the total. And by "invisible," we mean a type of matter that does not interact via electromagnetism. It is not detectable using light (photons) of any kind. Such matter is clearly not composed of the standard kinds of particles, leptons (basically electrons) and baryons (particles made of quarks, like protons and neutrons), since they both interact strongly with photons. Well, almost.
One type of lepton, the neutrino, does not interact with electromagnetic radiation. Could neutrinos be the dark matter? For a time, many physicists and astronomers thought they might be. However, when the details of neutrino dark matter are worked out, they turn out to have the wrong properties. Indeed, they are dark, but the dark matter in galaxies and galaxy clusters is far too concentrated to be predominantly neutrinos. Because of their tiny mass, which is almost zero, in fact, they are moving very fast. And because they interact so weakly, they cannot cool down (and thus slow down). Cooling down means that a substance transfers its kinetic energy to some other object or substance, and material that does not interact with anything cannot do that. Of course, neutrinos do interact via gravity, but gravity is too weak to account for much cooling for a fast-moving particle. For this reason, there is no way for neutrinos to collapse into the concentrated structures that we observe in galaxies and galaxy clusters. Neutrinos are a type of dark matter, but they can only be a tiny fraction of what is seen dominating galaxies and clusters. That dark matter must be something else.
So what could it be? In truth, we don't know. We know from its distribution in space (collapsed into concentrated lumps) that it must be "cold." That means it must be a particle that is fairly massive, so that even at high temperatures it would move slowly. Only in that way could it have managed to cool via gravitational interactions, the only kind available to it, and collapse when the universe was young. It could have thereby seeded the collapsed structures we see today.
There are no particles in the zoo of known particles that have the right properties to account for such an early collapse. The dark matter must be some kind of as yet unknown particle. Scientists have proposed a number of candidates, but to date, none of them has been detected. Thus far the dark matter is only detected indirectly via the motions of the stars and galaxies, and in the bending of light in gravitational lenses.
© 2020 StarWerk.net / Kevin McLin. All images © 2020 Kevin McLin unless otherwise noted.
Candidate Attitude and Spacecraft Conventions
Creation date: 2019-02-14 22:18:49 Update date: 2021-01-13 20:15:14
Policy: Expert Review
Authority: CCSDS.MOIMS.NAV
OID: 1.3.112.4.57.4
13 records in registry
Direction Cosine Matrix
Represents the orientation of a frame B with respect to a frame A, the coordinate transformation from frame A to frame B.
$$M_{BA}$$
1.3.112.4.57.4.1
Quaternion
The first three elements form the vector part of the quaternion, the fourth is a scalar element. Defined as $$Q=\left[ \begin{array} {c} Q1\\ Q2\\ Q3\\ QC\end{array}\right]=\left[\begin{array}{c}e_{1}\mathrm{sin}(\frac{\phi}{2})\\ e_{2}\mathrm{sin}(\frac{\phi}{2})\\ e_{3}\mathrm{sin}(\frac{\phi}{2})\\ \mathrm{cos}(\frac{\phi}{2})\\ \end{array}\right]$$
where \(e_{1}, e_{2}, e_{3}\) are the three elements of the Euler rotation axis (unit vector) and \(\phi\) is the Euler rotation angle. The quaternion represents the coordinate transformation from frame A to frame B.
$$Q=\left[\begin{array}{c}Q1\\ Q2\\ Q3\\ QC\end{array}\right]$$
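As an illustration only (the function name and example rotation are not part of the registry), a minimal Python sketch of this scalar-last quaternion built from an Euler axis and angle:

```python
import numpy as np

def quaternion_from_axis_angle(e, phi):
    """Scalar-last quaternion [Q1, Q2, Q3, QC] from the Euler axis e (unit 3-vector)
    and the Euler rotation angle phi in radians, per the convention above."""
    e = np.asarray(e, dtype=float)
    e = e / np.linalg.norm(e)              # ensure the rotation axis is a unit vector
    s = np.sin(phi / 2.0)
    return np.array([e[0] * s, e[1] * s, e[2] * s, np.cos(phi / 2.0)])

# Example: a 90-degree rotation about the third axis of frame A.
print(quaternion_from_axis_angle([0.0, 0.0, 1.0], np.pi / 2.0))   # ~[0, 0, 0.7071, 0.7071]
```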
Quaternion Derivative
Rate of change of the quaternion. The quaternion evolves in time according to $$\dot{Q}=\frac{1}{2}\Omega(\omega)\mathrm{Q}$$ where
$$\Omega(\omega)=\left[\begin{array}{cccc} 0 & \omega_{z} & -\omega_{y} & \omega_{x}\\-\omega_{z} & 0 & \omega_{x} & \omega_{y}\\ \omega_{y} & -\omega_{x} & 0 & \omega_{z}\\ -\omega_{x} & -\omega_{y} & -\omega_{z} & 0 \end{array}\right]$$
and \(\omega\) is the angular velocity.
$$\dot{Q}=\left[\begin{array}{c}\dot{Q1}\\ \dot{Q2}\\ \dot{Q3}\\ \dot{QC} \end{array}\right]$$
s\(^{-1}\)
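A short illustrative sketch (not registry content) that evaluates this kinematic relation for an assumed body rate:

```python
import numpy as np

def omega_matrix(w):
    """The 4x4 matrix Omega(omega) defined above (scalar-last quaternion convention)."""
    wx, wy, wz = w
    return np.array([
        [0.0,  wz, -wy,  wx],
        [-wz, 0.0,  wx,  wy],
        [ wy, -wx, 0.0,  wz],
        [-wx, -wy, -wz, 0.0],
    ])

def quaternion_rate(q, w):
    """Qdot = 0.5 * Omega(omega) * Q."""
    return 0.5 * omega_matrix(w) @ np.asarray(q, dtype=float)

# Example: identity attitude, spinning at 0.1 rad/s about the body z-axis.
q = np.array([0.0, 0.0, 0.0, 1.0])
w = np.array([0.0, 0.0, 0.1])              # rad/s, resolved in the spacecraft body frame
print(quaternion_rate(q, w))               # ~[0, 0, 0.05, 0]
```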
Angular Velocity
The rotational rate of frame B with respect to frame A. The vector direction is the instantaneous axis of rotation of frame B with respect to frame A and the vector magnitude is the instantaneous rate of this rotation. The subscript indicates the frame in which the angular velocity is resolved. Here SC refers to the spacecraft body frame.
$$\omega_{sc}=\left[\begin{array}{c} \omega_{x}\\ \omega_{y}\\ \omega_{z}\end{array}\right]$$
rad/s
Euler Angles
Euler angles are used to represent a rotation from an initial frame A to a final frame B as a product of three successive rotations about reference unit vectors, the angles of these rotations are the Euler angles. There are 12 possible sequences. The rotation sequence and the rotation angles are specified when providing the final transformation matrix (direction cosine matrix). For example, \(M_{BA}=M_{312}=M_{3}(\phi)M_{1}(\theta)M_{2}(\psi)\). \(M_{2}(\psi)\) is the first rotation of \(\psi\) about the 2nd axis of the initial frame A. \(M_{1}(\theta)\) is the second rotation of the angle \(\theta\) about the 1st axis of the intermediate frame. \(M_{3}(\phi)\) is the 3rd rotation of the angle \(\phi\) around the 3rd axis of the second intermediate frame, completing the transformation into the final frame B. Mathematically this is written as
$$M_{312} = \left[ \begin{array}{ccc} \mathrm{cos}(\phi) & \mathrm{sin}(\phi) & 0 \\ -\mathrm{sin}(\phi) & \mathrm{cos}(\phi) & 0 \\ 0 & 0 & 1 \end{array} \right] \left[ \begin{array}{ccc} 1 & 0 & 0 \\ 0 & \mathrm{cos}(\theta) & \mathrm{sin}(\theta) \\ 0 & -\mathrm{sin}(\theta) & \mathrm{cos}(\theta) \end{array} \right] \left[ \begin{array}{ccc} \mathrm{cos}(\psi) & 0 & -\mathrm{sin}(\psi) \\ 0 & 1 & 0 \\ \mathrm{sin}(\psi) & 0 & \mathrm{cos}(\psi) \end{array} \right]$$
\(M_{312}\) (\(\phi\),\(\theta\),\(\psi\))
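The 3-1-2 example can be reproduced numerically; the sketch below (illustrative code, not registry content) builds the three elementary rotations with the sign conventions shown above and multiplies them in the stated order:

```python
import numpy as np

def M1(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, s], [0.0, -s, c]])

def M2(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, -s], [0.0, 1.0, 0.0], [s, 0.0, c]])

def M3(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])

def dcm_312(phi, theta, psi):
    """M_BA for the 3-1-2 sequence: M3(phi) @ M1(theta) @ M2(psi)."""
    return M3(phi) @ M1(theta) @ M2(psi)

print(dcm_312(0.1, 0.2, 0.3))
```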
Euler Rates
The time derivatives of the Euler angle representation. They represent the rotation rates of the individual transformations represented in the three angle Euler angle rotation sequence. The transformation between Euler rates and angular velocity is not orthogonal. The angles are written in the same order as the Euler angle sequence, with a dot to indicate differentiation.
$$\left[\begin{array}{ccc}\dot{\phi} & \dot{\theta} & \dot{\psi}\end{array}\right ]$$
Inertia
The moment of inertia tensor is a symmetric 3x3 matrix, expressed in a coordinate frame attached to the center of mass of the spacecraft body. The subscript indicates the frame in which the inertia is resolved.
$$I_{SC}=\left[ \begin{array}{ccc} I_{XX} & -I_{XY} & -I_{XZ}\\ -I_{XY} & I_{YY} & -I_{YZ}\\ -I_{XZ} & -I_{YZ} & I_{ZZ}\end{array}\right]$$
$$I_{SC}$$
kg-m\(^{2}\)
Angular Momentum
Defined for a rigid body as the product of the inertia (\(I\)) and the angular velocity (\(\omega\)). If a spacecraft contains devices which contribute angular momentum they are added to the momentum generated by the spacecraft body. The subscript indicates the coordinate frame in which the momentum is resolved. The superscript indicates what elements are included in the momentum. For example, \(W\) indicates a reaction wheel, \(B\) indicates just the spacecraft body, \(C\) indicates the total system momentum, i.e., the momentum about the system center of mass.
$$H^{B}_{SC}=I_{SC}\omega_{SC}$$
$$H^{C}_{SC}=I_{SC}\omega_{SC}+H^{W}_{SC}$$
$$H^{W}_{SC}=M_{SC,W}I_{W}\omega_{W}$$
Note that in the 2nd equation above the inertia (\(I_{SC}\) ) includes the inertia of the wheels transverse to their spin axes, but not the inertia along the spin axes. The second term is the momentum contribution of the wheels along their spin axes only, the momentum along the transverse direction is included in the first term. \(M_{SC,W}\) in the third equation is the direction cosine matrix (see above) defining the transformation from the wheel frame to the spacecraft body frame, \(I_{W}\) is the wheel spin axis inertia, and \(\omega_{W}\) is the angular velocity in the spin direction.
$$H^{B}_{SC}, H^{C}_{SC}, H^{W}_{SC}$$
N-m-s
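As a numerical illustration of these definitions (the single-wheel setup and all numbers below are assumptions for the example, not registry values), a minimal sketch of the total system momentum:

```python
import numpy as np

def system_momentum(I_sc, w_sc, M_sc_w, I_w, w_w):
    """H^C_SC = I_SC * omega_SC + M_SC,W * I_W * omega_W, with the wheel term
    resolved into the spacecraft body frame."""
    return I_sc @ w_sc + M_sc_w @ (I_w * w_w)

I_sc = np.diag([120.0, 100.0, 80.0])       # spacecraft inertia, kg-m^2
w_sc = np.array([0.001, -0.002, 0.0005])   # body rate, rad/s
M_sc_w = np.eye(3)                         # wheel frame aligned with the body frame
I_w = 0.05                                 # wheel spin-axis inertia, kg-m^2
w_w = np.array([300.0, 0.0, 0.0])          # wheel spin rate along its spin axis, rad/s

print(system_momentum(I_sc, w_sc, M_sc_w, I_w, w_w))   # N-m-s
```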
Torque Vector
Torque (\(T\)) is the rate of change of angular momentum. For example, in the spacecraft body frame (SC)
$$T_{SC}=\dot{H}^{C}_{SC} + \omega_{SC}\times H^{C}_{SC}$$
$$T_{SC}$$
Nutation
The angle between a spacecraft principal moment of inertia axis and the angular momentum vector.
$$\theta$$
1.3.112.4.57.4.10
Spin Axis
The axis about which a spacecraft is spinning, often closely aligned with the major principal axis of inertia.
Spin Rate
The rotation rate about the Spin Axis.
$$\omega$$
Phase
Rotation angle about the Spin Axis.
$$\Phi$$
Registry feed
Attitude and Spacecraft Conventions [1.3.112.4.57.4] registry
Attitude and spacecraft conventions record: Quaternion [1.3.112.4.57.4.2]
Updated on 2019-02-15 13:33:38 by Audric Schiltknecht [1.3.112.4.2.2]
Attitude and spacecraft conventions record: Phase [1.3.112.4.57.4.13]
Published on 2019-02-15 13:29:16 by Audric Schiltknecht [1.3.112.4.2.2]
Attitude and spacecraft conventions record: Spin Rate [1.3.112.4.57.4.12]
Attitude and spacecraft conventions record: Spin Axis [1.3.112.4.57.4.11]
Attitude and spacecraft conventions record: Nutation [1.3.112.4.57.4.10]
Attitude and spacecraft conventions record: Torque Vector [1.3.112.4.57.4.9]
Attitude and spacecraft conventions record: Angular Momentum [1.3.112.4.57.4.8]
Attitude and spacecraft conventions record: Inertia [1.3.112.4.57.4.7]
Attitude and spacecraft conventions record: Euler Rates [1.3.112.4.57.4.6]
Attitude and spacecraft conventions record: Euler Angles [1.3.112.4.57.4.5]
Design Science
The novelty 'sweet spot' of invention
Published online by Cambridge University Press: 07 November 2017
Yuejun He and Jianxi Luo
Singapore University of Technology and Design, Engineering Product Development Pillar, Singapore, 487372, Singapore
Invention arises from novel combinations of prior technologies. However, prior studies of creativity have suggested that overly novel combinations may be harmful to invention. Apart from the factors of expertise, market, etc., there may be such a thing as 'too much' or 'too little' novelty that will determine an invention's future value, but little empirical evidence exists in the literature. Using technical patents as the proxy of inventions, our analysis of 3.9 million patents identifies a clear 'sweet spot' in which the mix of novel combinations of prior technologies favors an invention's eventual success. Specifically, we found that the invention categories with the highest mean values and hit rates have moderate novelty in the center of their combination space and high novelty in the extreme of their combination space. Too much or too little central novelty suppresses the positive contribution of extreme novelty in the invention. Furthermore, the combination of scientific and broader knowledge beyond patentable technologies creates additional value for invention and enlarges the advantage of the novelty sweet spot. These findings may further enable data-driven methods both for assessing invention novelty and for profiling inventors, and may inspire a new strand of data-driven design research and practice.
Keywords: novelty, invention, technology combination
Design Science, Volume 3, 2017, e21
DOI: https://doi.org/10.1017/dsj.2017.23
Distributed as Open Access under a CC-BY 4.0 license (http://creativecommons.org/licenses/by/4.0/)
Copyright © The Author(s) 2017
Novelty is an essential element of invention (Lubart 1994; Sternberg & Lubart 1996; Luo 2015). High novelty implies an increase in the variability which can result in both breakthrough and failure (Fleming 2001). Empirical studies on the impact of novel versus conventional design stimuli on creative output have reported mixed results (Chan & Schunn 2015a). Despite its clear value to invention, excessive novelty may also harm invention by introducing challenges to its embodiment, product development, manufacture, and user adoption (Luo 2015). In the pursuit of invention, inventors are faced with a 'novelty dilemma'. Prior engineering design research has speculated that a novelty 'sweet spot' may exist which delivers the best invention outcome (Fu et al. 2013); there may be such a thing as 'too much' or 'too little' novelty that will determine an invention's future value.
If the novelty sweet spot does exist, the next question is where it is – how much novelty is needed for inventions in the sweet spot? In this research, we aim to answer this question empirically by drawing on the combination theory of invention to analyze novelty. Prior studies of creativity considered an invention as the recombination of prior technologies or knowledge, and suggested that uncommon combinations give rise to novelty (Simonton 1999; Fleming 2001; Ward 2001; Arthur 2007; Basnet & Magee 2016). Thus, an invention's novelty can be measured as the frequency with which the prior technologies the invention recombines had previously been combined (Uzzi et al. 2013; Chan & Schunn 2015a). Several recent studies have utilized the massive data of patents as the proxy of inventions to analyze technology combinations (Youn et al. 2015; Kim et al. 2016). We follow these prior works to analyze 3.9 million technical patents from the United States Patent and Trademark Office (USPTO) to explore the novelty sweet spot in the combination space of invention.
Our empirical results show a clear novelty 'sweet spot' at which the suitable level of novelty of prior technology combinations favors an invention's eventual success. We also found that combination of scientific and broader knowledge beyond patentable technologies creates additional value for the patented invention and increases the advantage of the novelty sweet spot. The identification and nuanced understanding of the novelty sweet spot contributes to the design creativity literature and can guide inventors to pursue more valuable inventions in practice. Our methodology may also contribute to the growing literature on the data-driven evaluation of design creativity. Below, we will first review the related literature and introduce our methodology and then report our empirical findings.
To explore the novelty sweet spot for invention, this research primarily draws on the literature of combination theory and novelty measurement.
2.1 Design combination, novelty and outcome
The prior literature has suggested that novelty arises from uncommon combinations. Simonton (1999) argued that an invention is the recombination of existing technologies and that the novelty of the invention is the result of unconventional combinations of prior technologies. Arthur (2007) proposed that invention results from recursive problem solving by combining existing technologies. Youn et al. (2015) used patent classification codes as a proxy of technologies to analyze the multi-classification of US patents and found that the major driver of modern invention has indeed been the combination of existing technologies rather than the introduction of new technologies. Luo & Wood (2017) found a trend that patented inventions have been combining the knowledge of broader domains over the past three decades. Synthesizing the anecdotal accounts of creative writing and laboratory investigations, Ward (2001) noted that new properties can arise out of conceptual combination. Basnet & Magee (2016) focused on the combination of analogical transfers in the cognitive process and argued that new inventive ideas are created by using the combinatorial analogical transfer of existing ideas. Nickerson (2015) stressed that creative work can be performed by thousands of people through collective design, i.e., designers modify and combine each other's work in a process called remixing.
Combinations can involve different sources and lead to different outcomes. With empirical studies and a psychoanalytic interpretation of poem-creation activities, Rothenberg (1980) proposed two combinatory thinking processes that foster creativity, i.e., Janusian thinking which conceives two or more opposite or antithetical entities simultaneously and homospatial thinking which conceives two or more discrete entities occupying the same space. Analyzing scientific publications, Uzzi et al. (2013) found that the scientific papers that have the greatest impact (with an outstanding number of future citations) are grounded in a mass of exceptionally conventional combinations of prior work and a minor insertion of highly novel combinations. Kim et al. (2016) found that patents that add novel combinations of their co-classifications to conventional combinations are most likely to become 'hits'. Based on US patent citation data, Fleming (2001, 2007) found that although novel combinations based on rareness in historical occurrences lead to less useful inventions on average, they also give rise to the variability that can result in both breakthrough and failure.
Despite a robust link between distant combinations and the increased novelty of concepts, prior laboratory experiments or empirical studies have reported mixed results on the impact of distant combinations on design outcome (Chan & Schunn 2015a). It is naturally difficult for scientists and inventors to retrieve, absorb, and integrate technologies across unfamiliar domains. For instance, Chan & Schunn (2015a) found that the direct effects of far combinations have a mean zero effect on creative concept generation, and iterations are important for converting distant combinations into creative concepts. Kaufman & Baer (2004) found that creativity is more domain-specific than general, and it is naturally difficult for people to effectively combine technologies across distant domains. Forbus et al. (1995) found that during knowledge retrieval, superficial reminding is much more frequent than structural reminding, which makes it difficult to achieve design novelty from a distant enough combination. Based on an analysis of 2.8 million inventors' 3.9 million patents, Alstott et al. (2017a) found that inventors are far more likely to obtain new patents in new domains that are more related to their previous patents than in more distant domains.
Similarly, studies of design-by-analogy have suggested that novelty may arise when design is conceived by analogy across distant domains (Gick & Holyoak 1980; Gentner & Markman 1997; Ward 2001; Chan et al. 2011), but Gick & Holyoak (1980) found that human subjects often fail to notice the relevance of an analogy because of the cognitive distance between the potential solution and the target problem. Chan & Schunn (2015b) conducted an in vivo study and found that distant analogies do not lead directly to creative concepts via large leaps but instead increase the concept generation rate. Chan, Dow and Schunn (2015) found that conceptually closer rather than more distant stimuli appear to be more beneficial to design because of their easier perception and more obvious connection to the design problem. Fu et al. (2013) found that if stimuli are too distant from the design problem, they can become harmful to the design process and outcome. Accordingly, they argued that there might be such a thing as too 'near' and too 'far' in design analogies and that the stimuli from the 'middle ground' may be desirable for developing creative solutions.
Taken together, the literature has suggested that novel combinations are fundamental for invention and particularly crucial for breakthroughs, but excessively novel combinations may be ineffective and lead to poor results. Meanwhile, conventional combinations, compared with novel ones, are relatively easy and effective. The most desirable design outcome may arise in the middle ground, suggesting a hypothesis that there is a novelty 'sweet spot' of prior technology combinations. Our research aims to empirically test this hypothesis and to identify the novelty sweet spot. To do so, the evaluation of novelty is required.
2.2 Evaluation of design novelty
To evaluate novelty, design researchers have proposed various definitions, metrics, and methods. In general, novelty indicates that an invention is new, original, unexpected, and surprising (e.g., Sternberg & Lubart 1999; Simonton 2000; Kaufman & Baer 2004). Weisberg (2006) suggested that novelty is subjective to the experience of the evaluator. Therefore, novelty can be defined with reference to the previous ideas of the individuals concerned or relative to the entirety of human history (Boden 1996). For instance, Oman et al. (2013) measured the 'novelty' of a new concept as its uniqueness across all functional dimensions relative to a group of comparable ideas. Simonton (1999) associated the novelty of the invention with the commonality of its combinations of prior technologies. Some researchers have suggested that novelty can be measured by comparing the observed situation with the random one. For example, Uzzi et al. (2013) proposed calculation of the novelty of a combination of scientific fields by comparing the observed frequency of the combination with the random frequency of the combination in the randomized samples. Kim et al. (2016) calculated a relative likelihood that each pair of classification codes is put together at random and a deviation from the empirical observation to assess a patent's overall novelty.
Traditionally, invention evaluation has been carried out using an expert group approach and based on experts' subjective opinions, intuitions, or experiences (Amabile 1996). Various procedures and techniques have been proposed to facilitate expert groups and analyze their opinions. Sarkar & Chakrabarti (2007) introduced the function–behavior–structure (FBS) model and the SAPPhIRE model using product characteristics to measure product novelty. Brown (2015) presented a simple framework for computational design creativity evaluation, which contains agent judging, the set of aspects, knowledge about the designer, etc. Grace et al. (2015) developed a typology of expectations that, when violated, produce surprise and contribute to creativity.
Evaluation relying on expert opinions is naturally subjective and limited in terms of the data sample size. As a result, it is difficult either to apply rigorous mathematics for evaluation or to test theoretical hypotheses with statistical significance. Meanwhile, there is an increasing call for a computational and data-driven evaluation of design novelty (Brown 2015; He & Luo 2017). Recent studies have developed methods to analyze patent documents to evaluate patented inventions. Patent documents contain rich details, and there are also millions of patents in the public patent databases, which enable a more rigorous and systematic data-driven evaluation of the novelty of patented inventions. For instance, Fleming (2001) analyzed how frequently the co-classes of a patent were assigned to other patents in the history to indicate the novelty of this patent from a recombination perspective. He & Luo (2017) analyzed how frequently a pair of patent classes had appeared together in the references of previous patents to assess its conventionality or novelty.
In this paper, we introduce a data-driven method to measure the novelty of patented inventions, using the extensive data existing in the USPTO patent database, and we then test the hypothesis on the existence of the novelty sweet spot of invention.
3 Data and method
3.1 Data
In this study, we used patents as the proxy of inventions, with awareness of patents' limitations (e.g., not all inventions are patented). Our analysis involved approximately 3.9 million utility patents granted from 1976 to 2016 contained in the USPTO (United States Patent and Trademark Office) patent database, including the 601,715 patents that were granted in the 1990s with five or more references to prior patents, along with the patents that they cite (i.e., backward references) and the patents that cite them (i.e., forward citations). Our focus on patents in the 1990s ensures that their backward references and forward citations are sufficiently covered, because most citations fall within a time lag of 10 years (Trajtenberg 1990; Hall, Jaffe & Trajtenberg 2001). The data on patent documents that we used in our analysis are the patent classifications and references (Figure 1). Each patent is assigned to one or more patent classes by the USPTO examiners, indicating which types of technology it embodies. In this study, we used IPC4 (four-digit International Patent Classification) marking 631 patent classes as the proxy of technology fields, as commonly carried out in the innovation literature (Breschi, Lissoni & Malerba 2003; Boschma, Heimeriks & Balland 2014; Kay et al. 2014; Rigby 2015; Alstott et al. 2017b; Yan & Luo 2017).
Figure 1. Example of a patent document (US Patent 5410453).
3.2 Method
Figure 2 depicts the structure of our overall research method to statistically compute the novelty profile and the value of a patented invention to identify the novelty sweet spot. The details of each step of the method are described in the following subsections.
Figure 2. Flow diagram of the research method.
3.2.1 Novelty of technology combination
A pair of patent classes assigned to a patent's references approximates a recombination of existing technology fields in an invention. With this information, one can calculate the frequency at which a combination of technology fields has occurred in historical inventions' references to indicate the combination's novelty. Figure 3 illustrates the procedure to extract all pairs of patent classes assigned to a patent's references. The first step is to identify the referenced patents of a focal patent (column I). The second step is to identify the classes of these referenced patents (column II). On this basis, the list of all class pairs in the patent's references is extracted (column III). For example, class A that is assigned to patent 1 (one of the patent references of the focal patent) forms a class pair with class B that is also assigned to patent 1, with class A that is assigned to patent 2, and with classes B, C, and D that are assigned to patent 3, and so on.
Figure 3. Illustrative procedure of extracting the class pairs in the reference list of a patent.
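As one straightforward reading of this procedure (a sketch using the made-up patents and classes of Figure 3, not the authors' code), the classes of the referenced patents can be pooled and every pair of class mentions enumerated:

```python
from itertools import combinations

# Classes assigned to each referenced patent of a focal patent (illustrative data).
references = {
    "patent_1": ["A", "B"],
    "patent_2": ["A"],
    "patent_3": ["B", "C", "D"],
}

# Pool every class mention across the references, then list all unordered pairs,
# including repeated pairs and same-class pairs across different referenced patents.
mentions = [c for classes in references.values() for c in classes]
pairs = [tuple(sorted(p)) for p in combinations(mentions, 2)]
print(pairs)
```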
Following the frequency-based approach to evaluate novelty (Uzzi et al. 2013), we first computed the frequency of a pair of patent classes co-occurring in the references of individual patents in history. Because it is difficult to perceive how large or small a frequency value is, benchmarking is needed. We normalized the empirically observed frequency value by comparing it with the same metric for the comparable randomized citation networks to indicate the novelty of the combination. Such a normalized value relative to comparable random situations is called the '$z$-score' in the network science literature, indicating the extent to which the empirical observation deviates from expectations in comparable but randomized settings. The formula for the $z$-score is
(1) $$\begin{eqnarray}z_{ij}=\frac{x-\mu}{\sigma},\end{eqnarray}$$
where $z_{ij}$ is the relative co-occurrence frequency of the pair of classes $i$ and $j$, $x$ is the empirically observed co-occurrence frequency of classes $i$ and $j$, $\mu$ is the average expected co-occurrence frequency of classes $i$ and $j$ in comparable randomized citation networks, and $\sigma$ is its standard deviation. The average expected value and the standard deviation were calculated based on an ensemble of 10 randomized reference lists of the same patents in the randomized citation networks.
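A minimal sketch of this computation, assuming the observed co-occurrence count and the counts from the randomized ensemble are already in hand (the numbers below are invented for illustration):

```python
import numpy as np

def z_score(observed, randomized_counts):
    """z = (x - mu) / sigma, with mu and sigma taken from the randomized ensemble."""
    mu = np.mean(randomized_counts)
    sigma = np.std(randomized_counts)
    return (observed - mu) / sigma

# A class pair observed together 4 times, against an ensemble whose randomized
# counts are much higher: strongly negative z-score, hence high novelty.
z = z_score(4, [60, 55, 58, 62, 59, 61, 57, 63, 56, 60])
print(f"z-score = {z:.1f}, novelty score = {-z:.1f}")
```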
Figure 4 illustrates an example of how the empirical citation network was randomized. In the citation network, the nodes are patents and the links are backward references, i.e., an arrow goes from a citing patent to a cited patent. Specifically, we randomly selected a pair of citing-to-cited links with the same citing and cited years (i.e., the years in which the citing and cited patents were granted) and swapped the cited patents. For example, in Figure 4, link $a$ and link $b$ can be switched by swapping the cited patents, but link $a$ and link $c$ cannot be switched. As a result, the random swapping procedure preserved all of the numbers of forward and backward citations of each patent and the year lags of the citations, which makes the randomized networks comparable with the empirical network.
Figure 4. How the empirical citation network was randomized by swapping the cited patents of randomly selected citing-to-cited links with the same citing and cited years.
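A simplified sketch of this swap step (with made-up citation links and without the full bookkeeping of the paper): each link is a (citing patent, cited patent, citing year, cited year) tuple, and cited patents are swapped only between links that share both years, which preserves citation counts and year lags:

```python
import random

def randomize_citations(links, n_swaps, seed=0):
    """Swap cited patents between randomly chosen links with matching years."""
    rng = random.Random(seed)
    links = [list(l) for l in links]
    for _ in range(n_swaps):
        a, b = rng.sample(range(len(links)), 2)
        if links[a][2] == links[b][2] and links[a][3] == links[b][3]:
            links[a][1], links[b][1] = links[b][1], links[a][1]
    return [tuple(l) for l in links]

links = [
    ("P10", "P1", 1995, 1990),
    ("P11", "P2", 1995, 1990),
    ("P12", "P3", 1996, 1991),
    ("P13", "P4", 1996, 1991),
]
print(randomize_citations(links, n_swaps=100))
```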
We calculated the $z$-score of any pair of the 631 IPC4 classes that were assigned to the patents appearing as references of all US patents granted from 1990 to 1999. A lower $z$-score, indicating less frequent historical occurrences, suggests higher novelty. Thereafter, we used the additive inverse of the $z$-score to measure the novelty of the combination represented by the patent class pair. For example, if the $z$-score of a class pair is $-100$, its novelty score is 100. If the $z$-score of another class pair is 5, its novelty score is $-5$. The first pair has a higher novelty score than the latter pair.
3.2.2 Novelty profile of a patented invention
It should be noted that each invention is often composed of multiple combinations of technology fields (i.e., each patent has a set of pairs of classes in the list of its references) and each of the combinations has a different degree of novelty (i.e., each of the class pairs has a $z$ -score). Thus, the combination space of a patented invention can be viewed as a network of patent classes whose pairings are denoted as weighted links according to the $z$ -scores of the pairs (see Figure 5(a) for an example). In other words, each invention can be characterized by a spectrum of novelty values given by its combinations, i.e., each patent can be profiled by a spectrum of $z$ -scores given by the patent class pairs in its references. This spectrum of novelty values can be summarized in a cumulative distribution of the $z$ -scores (see Figure 5(b) for an example).
Figure 5. The technology combination space of US patent 5473937 entitled 'Temperature sensing apparatus'. (a) Network of classes assigned to the references of the patent, connected according to the $z$ -scores (red link, minimum $z$ -score; blue link, median $z$ -score). (b) Cumulative distribution of the class pairs by their $z$ -scores.
To investigate the spectrum of novelty values, we first considered the median value of the above distribution, which is located at the center of the distribution, thus indicating the novelty in the central area of an invention's combinations. In the world of invention, the extreme or outlier is as meaningful as the average (Fleming 2007; Girotra, Terwiesch & Ulrich 2010). Therefore, we are also concerned with the novelty of the most novel combination in the extreme of the spectrum. In brief, to profile the novelty of an invention, we analyzed both the novelty of the center and the novelty of the extreme in the space of its combinations. Specifically, we defined and quantified the central novelty of a patented invention as the additive inverse of the median $z$-score in the distribution and its extreme novelty as the additive inverse of the minimum $z$-score in the distribution.
It should be noted that the $z$-scores in the distribution for a patent were calculated based on the historical data on the co-occurrence of patent class pairs until the granting year of the focal patent, because extreme novelty and central novelty are relative to the past and present artifacts and should change over time as newer technologies are developed (Weisberg 2006). As a robustness check (see the Supplementary Appendix available at https://doi.org/10.1017/dsj.2017.23), we also generated the analysis results with the $z$-scores calculated based on the co-occurrences of patent class pairs in the granting year of the focal patent. The qualitative patterns in the main text hold.
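Given the spectrum of $z$-scores for one patent, the two novelty measures follow directly; a minimal sketch with an invented spectrum:

```python
import numpy as np

def novelty_profile(z_scores):
    """Central novelty = -(median z-score); extreme novelty = -(minimum z-score)."""
    z = np.asarray(z_scores, dtype=float)
    return -np.median(z), -np.min(z)

z_scores = [120.0, 35.0, 10.0, 2.0, -15.0, -80.0]   # hypothetical class-pair z-scores
print(novelty_profile(z_scores))                    # (central = -6.0, extreme = 80.0)
```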
Figure 6 shows the distributions of all patents in the 1990s according to their median $z$ -scores and minimum $z$ -scores. The median $z$ -score distribution patterns changed little over the two five-year periods in the 1990s. From 1990 to 1994, 6.18% of the patents had a median $z$ -score below 0, whereas 6.36% of the patents had a negative median $z$ -score from 1995 to 1999. Moreover, there was no obvious change in the minimum $z$ -score distribution over time. From 1990 to 1994, 55.58% of the patents had a minimum $z$ -score below 0, whereas from 1995 to 1999, 59.01% of the patents did so.
Figure 6. Patent distribution according to $z$ -scores. (a) Cumulative distribution according to median $z$ -scores (i.e., additive inverse of central novelty). (b) Cumulative distribution according to minimum $z$ -scores (i.e., additive inverse of extreme novelty).
Figure 7. The central–extreme novelty space and the locations of the three patents in Table 1.
With the definition of and the method to compute both the central novelty and the extreme novelty of an invention, we can now assess and position an invention in a two-dimensional space defined by central novelty and extreme novelty (Figure 7). Because the values of central novelty and extreme novelty are highly dispersed, we divided them into 10 (equally sized) categories. A few example patents (Table 1) are located in the respective categories in the 10-by-10 matrix according to their central and extreme novelty values. These patents were all granted in 1995 but differ in their central novelty, extreme novelty, and realized invention values (to be defined below).
Table 1. Example patents of different central novelty, extreme novelty, and invention value
3.2.3 Value of a patented invention
The value of an invention is realized when it is endowed with utility and economic and social significance. Prior empirical studies have shown strong evidence that the number of a patent's forward citations (i.e., the citations it receives after being granted) is highly correlated with the value it has achieved, as indicated by expert opinions, awards, and market value (Harhoff et al. 1999; Hall, Jaffe & Trajtenberg 2000). For example, the patent for crystalline silicoaluminophosphates held by Union Carbide Corporation (patent #4310440) describes an important compound. With its widespread uses as a catalyst in other inventions, the patent created great economic value for its holder and received 229 citations through 1995 as the most cited patent since 1976 granted by USPTO (Hall, Jaffe & Trajtenberg 2000). Thus, we followed the literature to approximate the value of a patented invention by the count of its forward citations, normalized by the average forward citation count of all of the patents granted in the same patent class and the same year. The normalization allows a comparative analysis across fields and years. The formula for the value ($v_{i}$) of a patented invention $i$ is
(2) $$\begin{eqnarray}v_{i}=\frac{a_{i}}{\bar{a}},\end{eqnarray}$$
where $a_{i}$ denotes the total count of forward citations received by patent $i$ and $\bar{a}$ denotes the average count of forward citations received by all of the patents granted in the same year and in the same IPC4 class as patent $i$ .
We were also interested in the subset of inventions that achieved outstanding value and are considered to be breakthrough inventions. In this paper, we defined the top 5% of patents in terms of the normalized forward citation count (i.e., the invention value) as 'hit inventions'. In our analysis, the variable 'hit invention' of a patent is 1 if the patent has a top 5% normalized forward citation count and 0 otherwise. We also ran robustness tests using top 1%, 3%, and 10% as alternatives to define a hit invention (see the Supplementary Appendix).
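A minimal sketch of the value normalization of Equation (2) and the hit flag, assuming the forward-citation counts and the class-year mean are already computed (the cohort below is invented for illustration):

```python
import numpy as np

def invention_values(citations, class_year_mean):
    """v_i = a_i / a_bar: forward citations normalized by the class-year mean."""
    return np.asarray(citations, dtype=float) / class_year_mean

def hit_flags(values, top_share=0.05):
    """Flag the top `top_share` of patents by normalized value as hit inventions."""
    values = np.asarray(values, dtype=float)
    threshold = np.quantile(values, 1.0 - top_share)
    return (values >= threshold).astype(int)

citations = [0, 1, 2, 3, 5, 8, 13, 21, 34, 120]     # hypothetical class-year cohort
values = invention_values(citations, class_year_mean=np.mean(citations))
print(values)
print(hit_flags(values, top_share=0.10))
```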
3.2.4 Descriptive statistics
Table 2 reports the descriptive statistics for the variables based on our data set of 601,715 utility patents in the USPTO patent database that were granted in the 1990s and have five or more references to prior patents.
We primarily analyzed the patents in our total data set to associate central and extreme novelty with invention value.
Table 2. Descriptive statistics of the key variables
4.1 Novelty sweet spot
The association between central novelty and mean invention value follows a parabola or inverted-U curve (Figure 8a). Invention value increases initially until the 60th percentile of central novelty and declines from the 60th percentile onward. The highest mean invention value appears at a 'sweet spot' of the 40th–60th percentiles of central novelty, i.e., a medium level of novelty in the center of their combinations. The association between extreme novelty and mean invention values follows a cubic curve moving upward (Figure 8b). The highest mean invention value appears at the highest level of extreme novelty. The associations between central or extreme novelty and hit invention rates (Figures 8c and d) follow the same patterns. These patterns are further confirmed by multivariable regression analyses (see Table S1 in the Supplementary Appendix).
Figure 8. Mean invention values and hit invention rates of patents of different central and extreme novelty percentiles. (a) Mean invention values with confidence intervals ( $\pm 1.96$ standard errors of the mean) of patents equally distributed over 10 central novelty levels. (b) Mean invention values with confidence intervals ( $\pm 1.96$ standard errors of the mean) of patents equally distributed over 10 extreme novelty levels. (c) Hit invention rates of patents equally distributed over 10 central novelty levels. (d) Hit invention rates of patents equally distributed over 10 extreme novelty levels.
Figure 9(a) shows the distribution of patents in cells of a 10-by-10 matrix by their central and extreme novelty. More patents lie along the diagonal of the central–extreme novelty matrix, implying that, to some extent, the central novelty and extreme novelty of patents are correlated. In the central–extreme novelty matrix, patents are concentrated in the regions in which the central novelty and extreme novelty are simultaneously low or high, i.e., the bottom left and upper right corners. The matrix further enables a two-dimensional comparison of the realized invention values of patents in different regions of the central–extreme novelty space.
Figure 9(b) reports the average invention value of each category of patents in the central–extreme novelty matrix. For interest in the most significant inventions, we also report the probability of achieving the top 5% invention value for patents in each category of the central–extreme novelty matrix (Figure 9c). Figures 9(b) and (c) both exhibit a similar sweet spot, i.e., the regions of medium central novelty (the 30th–60th percentiles) and high extreme novelty (the 90th–100th percentiles) in the central–extreme novelty space, where the highest mean invention values and hit invention rates are located. Notably, the value sweet spot in Figures 9(b) and (c) is away from the popular spots in Figure 9(a) that have the highest concentrations of inventions. Only 2.17% of the patents are located in the sweet spot, despite its high mean value and high rates of hit inventions.
Figure 9. Central–extreme novelty matrix. Each cell represents a category of patents according to their percentiles of central and extreme novelty. Gray indicates a lack of data. (a) Patent distribution across the space. The number in the dashed box is the sweet spot's share of total patents. (b) Mean invention value, i.e., average normalized forward citation. (c) Hit invention rate, i.e., probability of top 5% invention value.
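A rough sketch of how such a 10-by-10 matrix can be tabulated, assuming per-patent central_novelty and extreme_novelty scores are already available (their construction is described earlier in the paper); the decile binning and the sweet-spot region follow the figure captions, and the code is illustrative rather than the authors' actual pipeline.

```python
import pandas as pd

# Assumes 'patents' carries central_novelty, extreme_novelty, value, and hit columns.
patents["central_bin"] = pd.qcut(patents["central_novelty"], 10, labels=False)
patents["extreme_bin"] = pd.qcut(patents["extreme_novelty"], 10, labels=False)

# 10-by-10 matrices: patent counts, mean invention value, and hit rate per cell.
counts = patents.pivot_table(index="extreme_bin", columns="central_bin",
                             aggfunc="size", fill_value=0)
mean_value = patents.pivot_table(index="extreme_bin", columns="central_bin",
                                 values="value", aggfunc="mean")
hit_rate = patents.pivot_table(index="extreme_bin", columns="central_bin",
                               values="hit", aggfunc="mean")

# Share of patents in the sweet spot: 30th-60th percentile central novelty
# (decile bins 3-5) and 90th-100th percentile extreme novelty (bin 9).
in_sweet_spot = patents["central_bin"].between(3, 5) & (patents["extreme_bin"] == 9)
print(in_sweet_spot.mean())
```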
The realization of a high-value invention requires sufficiently but not excessively novel or conventional combinations in the center. Therefore, in the interest of maximizing invention value, there is such a thing as 'too novel' or 'too conventional', but only for the combination center of an invention. Meanwhile, a higher extreme novelty monotonically increases the value of an invention regardless of its central novelty. However, for patents with medium central novelty, an increase in extreme novelty increases the invention value more significantly than an increase at the low or high central novelty levels.
4.2 Novelty sweet spot with/without non-patent references
Many patents also cite non-patent references (NPRs) such as scientific papers, technical reports, and books, which may imply a broader scope of the combined knowledge embodied in the inventions. Prior study has shown that patents citing NPRs present a higher average value measured by forward citation counts than those citing only patents, particularly when the patented invention combines technologies from a wider scope of fields (Fleming & Sorenson Reference Fleming and Sorenson2004). Our results (Figure 10) show that patents with NPRs present generally higher invention values than patents without NPRs in every cell of the central–extreme novelty space. More specifically, the value added by NPRs, indicated by the gap between the two surfaces in each panel of Figure 10, is maximized in the sweet spot of the central–extreme novelty matrix, i.e., medium central novelty and high extreme novelty. In brief, the combination of scientific and broader knowledge beyond patentable technologies may create more valuable inventions and enlarge the value advantage of the novelty sweet spot.
Figure 10. Novelty sweet spot with/without non-patent references (e.g., scientific papers, technical reports, books, etc.) by central and extreme novelty percentiles. Each cell in the base matrix represents a category of patents. (a) Mean invention value, i.e., average normalized forward citation. (b) Hit invention rate, i.e., probability of top 5% invention value.
The foregoing findings hold true when the data sample includes the patents with no fewer than 20, 30, and 50 reference IPC4 pairs (Figures S2–S5 in the Supplementary Appendix), when $z$ -scores are calculated using the data of co-occurrences of class pairs only in the granting year of the focal patent (Figures S6–S9 in the Supplementary Appendix) rather than the historical data until the granting year, when extreme novelty is alternatively defined as the $z$ -score of the 3rd, 5th, 8th, or 10th percentile of the $z$ -score distribution of a patent (Figures S2–S4 in the Supplementary Appendix), and when we change the definition of a hit invention to one that is among the top 10%, 3%, and 1% in terms of normalized forward citation counts (Figure S5 in the Supplementary Appendix). The detailed robustness tests are reported in the Supplementary Appendix.
In brief, we have identified a clear 'sweet spot' of invention in the central–extreme novelty space, with a statistical analysis of approximately 600,000 patents in the USPTO database. This finding supports the prior conjecture from engineering design research (Fu et al. Reference Fu, Chan, Cagan, Kotovsky, Schunn and Wood2013). Knowledge of the specified sweet spot may further enable data-driven methods for assessing novelty and profiling inventors. It may also provide some guidance for engineering designers to enhance the value of their potential inventions. Below, we discuss several applications to make sense of this new understanding.
4.3 Potential applications
First, the central–extreme novelty matrix can be used as a data-driven tool to computationally assess the novelty of a new invention. For instance, one can assess the central novelty and extreme novelty of a new invention and locate it in the central–extreme novelty matrix. Figure 7 presented a few examples of individual patents positioned in the matrix. For a patented invention, the desirable situation is to lie in the sweet spot. In particular, such a data-driven assessment of the novelty of a new patent application may aid in patent validity examination.
Furthermore, the central–extreme novelty matrix can also be used to profile individual inventors, companies, states, and countries by visualizing the novelty structures of their invention portfolios. The desirable portfolio would have most patents concentrated in the sweet spot of the central–extreme novelty space; in reality, this concentration is gradual. For instance, Figure 11 presents an example that visualizes the patent portfolios of two countries, the USA and China, within the central–extreme novelty space, revealing their differences. To ensure consistency for comparison, only USPTO patents from the USA and China are analyzed here.
Figure 11. Patent distributions of the USA and China by central and extreme novelty percentiles. Each cell represents a category of patents. Gray indicates a lack of data. (a) Patent distribution of the USA from 1996 to 2005 and (b) from 2006 to 2015; (c) patent distribution of China from 1996 to 2005 and (d) from 2006 to 2015. The numbers in the dashed boxes are the shares of the patents in the sweet spot, i.e., the region with the 30th–60th percentiles of central novelty and the 90th–100th percentiles of extreme novelty.
The USPTO patents of the USA are concentrated around both the high (i.e., upper right) and the low (i.e., bottom left) novelty corners of the matrix and are distributed over the sweet spot. Despite an increase in total patents, this patent distribution profile has changed little over the past two decades (Figures 11a and b), and the percentage of patents within the sweet spot has increased from 2.93% to 3.37%. In contrast, the USPTO patents of China were concentrated at the low novelty corner at first but have exhibited a shift toward the upper right corner over the past two decades (Figures 11c and d), which represents both high central novelty and high extreme novelty. However, the share of the patents in the sweet spot dropped from 2.10% to 1.00%. Such a visual comparison suggests that the USA has a generally more valuable patent portfolio, with an increasing portion of patents in the sweet spot; conversely, China has been producing more novel patents over time, but is losing sight of the potential value of its patented inventions. For interested readers, the novelty profiles of additional countries can be found in the Supplementary Appendix.
Figure 12. Patent distributions of different technology domains by central and extreme novelty percentiles. Each cell represents a category of patents. Gray indicates a lack of data. (a) Distribution of nanotechnology patents from 1996 to 2005 and (b) from 2006 to 2015; (c) distribution of hybrid electric vehicle patents from 1996 to 2005 and (d) from 2006 to 2015. The patents for the corresponding domains are extracted from the special patent categories '903 – Hybrid Electric Vehicles' and '977 – Nanotechnology' created by the USPTO, among nine art-collection classes whose three-digit IDs start with the number 9. The numbers in the dashed boxes are the shares of the patents in the sweet spot, i.e., the region with the 30th–60th percentiles of central novelty and the 90th–100th percentiles of extreme novelty.
Likewise, the same visual assessment can be applied to technical design domains for comparative and trend analyses. For example, Figure 12 visualizes the distributions of nanotechnology patents and hybrid electric vehicle patents in the USPTO patent database over two decades. The nanotechnology patents are concentrated at the upper right corner of both high central and extreme novelty, with a tendency to disperse toward lower extreme novelty over time (Figures 12a and b). The percentage of nanotechnology patents within the sweet spot dropped significantly, from 3.92% to 2.07%. In contrast, the concentration of hybrid electric vehicle patents shifted markedly from the center of the novelty matrix to the upper right corner over time, suggesting a general increase in the portion of more novel inventions in this domain (Figures 12c and d). However, such an increase in proportion did not take place in the favorable sweet spot for value; the percentage of patents in the sweet spot was almost unchanged (from 1.50% to 1.48%). Nanotechnology inventions were generally more novel and were more present in the valuable sweet spot, whereas hybrid electric vehicle inventions increased in the more novel categories but not in the sweet spot. Such differences in the visualized novelty profiles of domains may result from or reflect their different technical natures and development stages.
In brief, the central–extreme novelty matrix, together with the knowledge of the sweet spot in the matrix space, may enable more systematic, consistent, and efficient data-driven evaluation (of novelty and value) of inventions or new technologies than traditional approaches using subjective opinions of experts (Hennessey & Amabile Reference Hennessey and Amabile2010). Thus, it will have a broad impact on general inventive practices as well as innovation management and policy.
Our findings contribute to both creativity theories and inventive practices. The most important finding is a specific 'sweet spot' in the central–extreme novelty space. Too much or too little novelty in the center may limit the future value realization of the invention and suppress the positive value contribution of extreme novelty to an invention. To pursue hit inventions or breakthroughs, inventors should be aware of the sweet spot at the beginning of the design process. One can use sufficient but not excessive domain-specific technologies to form a moderately novel center and infuse a small number of technologies from distant domains to form a highly novel extreme in the combination space. This finding about the novelty 'sweet spot' is aligned with those of Fu et al. (Reference Fu, Chan, Cagan, Kotovsky, Schunn and Wood2013) and Chan et al. (Reference Chan, Dow and Schunn2015), despite different definitions of novelty, different types of experiments, and different correlation factors. Another important finding is that the combination of scientific and broader knowledge apart from patentable technologies generally creates value for an invention, and it reinforces the added value of the sweet spot over other regions in the central–extreme novelty space. This finding suggests that inventors searching broadly for scientific and non-patentable knowledge in the invention process may find more valuable inventive opportunities.
Our findings favor T-shaped inventors, who are equipped with basic scientific knowledge in various domains and deep design expertise in a specific domain. Such inventors with the T-shaped knowledge structure are less likely to be trapped by the conventional wisdoms of their domains of specialization, and can consistently explore, leverage, and engage technologies from distant domains for invention. This type of domain-crossing exploration is more effective if the inventors engage scientific and broader knowledge to comprehend and integrate technologies across domains. Our results support the movements of engineering education to cultivate holistic inventors with such a T-shaped knowledge structure to promote innovation, and deal with the growing complexity in technological inventions and the invention process (Luo & Wood Reference Luo and Wood2017).
Furthermore, we demonstrate the use of the 'central–extreme novelty matrix' to profile the novelty structures of the patent portfolios of different countries and of different technology domains. The visual analysis reveals that the USA had a generally more valuable patent portfolio and an increasing concentration of patents in the sweet spot, while China was losing sight of the value of inventions (i.e., it had a decreasing portion of patents in the sweet spot), despite producing more novel patents over time. We also visually found that nanotechnology inventions were highly novel but had a decreasing portion in the valuable sweet spot, whereas hybrid electric vehicle inventions shifted their concentration to the more novel categories but not into the sweet spot over the past two decades. The novelty matrix and the knowledge of the sweet spot can be further applied to assess and compare the invention portfolios of individual inventors, companies, states, countries, and industries.
In summary, this paper contributes a scientific understanding of what novelty structure is most likely to give inventions greater value. Such an understanding is valuable for inventive practices in all fields. This paper also contributes a promising novelty evaluation tool, i.e., the central–extreme novelty matrix. It can characterize the two-dimensional novelty structures of inventions and the patent portfolios of inventors at different aggregation levels, including persons, organizations, regions, etc. This new understanding and our methodological contributions are expected to inspire and enhance creativity in design practices, engineering education, innovation management and policy, etc., across fields.
The study has limitations. For example, our method relies on a statistical analysis of the data on patent references; thus, we focused on patents with at least five references. As a result, we may have neglected highly novel patents with few references. In addition, we only analyzed direct references, although indirect references may also have implications for the combination space of invention. For co-occurrences, pair is the simplest and most generic unit of analysis. Other forms of co-occurrences, such as triples or specific topological structures, can be considered for further study to explore additional insights into novelty structures. In addition, the USPTO patent database is just one of many patent databases worldwide. It would be interesting to conduct a similar analysis using other patent databases, such as the patents filed in European, Chinese, and Japanese patent offices, to explore whether our findings in this paper will hold or vary.
This study can move forward in a few directions for future research. First, alternative measures of the novelty spectra can be explored to assess inventions based on patent data. Second, we plan to work with industrial companies and government organizations to apply our work and findings (e.g., the central–extreme novelty matrix and the 'sweet spot') for impact on innovation practices. Third, new studies may bring new insights into invention by using the novelty matrix to assess and compare the novelty profiles across different ranges of patents, e.g., system patents versus device patents, singular patents versus grandparent–parent–child families of patents, etc. Fourth, a data-driven invention evaluation tool can be developed to automate the novelty assessment and the visualization functions, as we preliminarily presented in this paper. Thus, laymen (e.g., engineers, managers, patent lawyers, policy makers) can use the tool to quantitatively and visually evaluate inventions and patent portfolios at different scales. Furthermore, we hope that a systematic model to predict the value of new inventions can be developed in future research by incorporating additional factors that affect invention value with central and extreme novelty.
This research is funded by SUTD-MIT International Design Centre (IDG31600105) and Singapore Ministry of Education Tier 2 Academic Research Grant (MOE2013-T2-2-167).
Supplementary data is available at https://doi.org/10.1017/dsj.2017.23.
1 The dataset was downloaded from PatentsView, available at http://www.patentsview.org/.
Alstott, J., Triulzi, G., Yan, B. & Luo, J. 2017a Inventors' movements and performance across technology domains. Design Science; in press.
Alstott, J., Triulzi, G., Yan, B. & Luo, J. 2017b Mapping technology space by normalizing technology relatedness networks. Scientometrics 110, 443–479.
Amabile, T. M. 1996 Creativity in Context: Update to 'The Social Psychology of Creativity'. Westview Press.
Arthur, W. B. 2007 The structure of invention. Research Policy 36, 274–287.
Basnet, S. & Magee, C. L. 2016 Modeling of technological performance trends using design theory. Design Science 2, e8.
Boden, M. A. 1996 Dimensions of Creativity. MIT Press.
Boschma, R., Heimeriks, G. & Balland, P.-A. 2014 Scientific knowledge dynamics and relatedness in biotech cities. Research Policy 43, 107–114.
Breschi, S., Lissoni, F. & Malerba, F. 2003 Knowledge-relatedness in firm technological diversification. Research Policy 32, 69–87.
Brown, D. C. 2015 Computational design creativity evaluation. Design Computing and Cognition'14. Springer.
Chan, J., Dow, S. P. & Schunn, C. D. 2015 Do the best design ideas (really) come from conceptually distant sources of inspiration? Design Studies 36, 31–58.
Chan, J., Fu, K., Schunn, C., Cagan, J., Wood, K. & Kotovsky, K. 2011 On the benefits and pitfalls of analogies for innovative design: ideation performance based on analogical distance, commonness, and modality of examples. Journal of Mechanical Design 133, 081004.
Chan, J. & Schunn, C. 2015a The importance of iteration in creative conceptual combination. Cognition 145, 104–115.
Chan, J. & Schunn, C. D. 2015b The impact of analogies on creative concept generation: lessons from an in vivo study in engineering design. Cognitive Science 39, 126–155.
Fleming, L. 2001 Recombinant uncertainty in technological search. Management Science 47, 117–132.
Fleming, L. 2007 Breakthroughs and the 'long tail' of innovation. MIT Sloan Management Review 49, 69.
Fleming, L. & Sorenson, O. 2004 Science as a map in technological search. Strategic Management Journal 25, 909–928.
Forbus, K. D., Gentner, D. & Law, K. 1995 MAC/FAC: a model of similarity-based retrieval. Cognitive Science 19, 141–205.
Fu, K., Chan, J., Cagan, J., Kotovsky, K., Schunn, C. & Wood, K. 2013 The meaning of 'near' and 'far': the impact of structuring design databases and the effect of distance of analogy on design output. Journal of Mechanical Design 135, 021007.
Gentner, D. & Markman, A. B. 1997 Structure mapping in analogy and similarity. American Psychologist 52, 45.
Gick, M. L. & Holyoak, K. J. 1980 Analogical problem solving. Cognitive Psychology 12, 306–355.
Girotra, K., Terwiesch, C. & Ulrich, K. T. 2010 Idea generation and the quality of the best idea. Management Science 56, 591–605.
Grace, K., Maher, M. L., Fisher, D. & Brady, K. 2015 Modeling expectation for evaluating surprise in design creativity. Design Computing and Cognition'14. Springer.
Hall, B. H., Jaffe, A. B. & Trajtenberg, M. 2000 Market Value and Patent Citations: A First Look. National Bureau of Economic Research.
Hall, B. H., Jaffe, A. B. & Trajtenberg, M. 2001 The NBER Patent Citation Data File: Lessons, Insights and Methodological Tools. National Bureau of Economic Research.
Harhoff, D., Narin, F., Scherer, F. M. & Vopel, K. 1999 Citation frequency and the value of patented inventions. Review of Economics and Statistics 81, 511–515.
He, Y. & Luo, J. 2017 Novelty, conventionality, and value of invention. Design Computing and Cognition'16. Springer.
Hennessey, B. A. & Amabile, T. M. 2010 Creativity. Annual Review of Psychology 61, 569–598.
Kaufman, J. C. & Baer, J. 2004 Hawking's Haiku, Madonna's math: why it is hard to be creative in every room of the house. In Creativity: From Potential to Realization (ed. Sternberg, R. J., Grigorenko, E. L. & Singer, J. L.), pp. 3–19.
Kay, L., Newman, N., Youtie, J., Porter, A. L. & Rafols, I. 2014 Patent overlay mapping: visualizing technological distance. Journal of the Association for Information Science and Technology 65, 2432–2443.
Kim, D., Cerigo, D. B., Jeong, H. & Youn, H. 2016 Technological novelty profile and invention's future impact. EPJ Data Science 5, 8.
Lubart, T. 1994 Product-centered self-evaluation and the creative process. Unpublished doctoral dissertation, Yale University, New Haven, CT.
Luo, J. 2015 The united innovation process: integrating science, design, and entrepreneurship as sub-processes. Design Science 1, e2.
Luo, J. & Wood, K. L. 2017 The growing complexity in invention process. Research in Engineering Design 1–15.
Nickerson, J. V. 2015 Collective design: remixing and visibility. Design Computing and Cognition'14. Springer.
Oman, S. K., Tumer, I. Y., Wood, K. & Seepersad, C. 2013 A comparison of creativity and innovation metrics and sample validation through in-class design projects. Research in Engineering Design 24, 65–92.
Rigby, D. L. 2015 Technological relatedness and knowledge space: entry and exit of US cities from patent classes. Regional Studies 49, 1922–1937.
Rothenberg, A. 1980 The emerging goddess: the creative process in art, science, and other fields. Journal of Aesthetics and Art Criticism 39 (2), 206–209.
Sarkar, P. & Chakrabarti, A. 2007 Development of a method for assessing design creativity. Guidelines for a Decision Support Method Adapted to NPD Processes.
Simonton, D. K. 1999 Creativity as blind variation and selective retention: Is the creative process Darwinian? Psychological Inquiry 10, 309–328.
Simonton, D. K. 2000 Creativity: cognitive, personal, developmental, and social aspects. American Psychologist 55, 151.
Sternberg, R. J. & Lubart, T. I. 1996 Investing in creativity. American Psychologist 51, 677.
Sternberg, R. J. & Lubart, T. I. 1999 The concept of creativity: prospects and paradigms. Handbook of Creativity 1, 3–15.
Trajtenberg, M. 1990 A penny for your quotes: patent citations and the value of innovations. The Rand Journal of Economics 21 (1), 172–187.
Uzzi, B., Mukherjee, S., Stringer, M. & Jones, B. 2013 Atypical combinations and scientific impact. Science 342, 468–472.
Ward, T. B. 2001 Creative cognition, conceptual combination, and the creative writing of Stephen R. Donaldson. American Psychologist 56, 350.
Weisberg, R. W. 2006 Creativity: Understanding Innovation in Problem Solving, Science, Invention, and The Arts. John Wiley & Sons.
Yan, B. & Luo, J. 2017 Measuring technological distance for patent mapping. Journal of the Association for Information Science and Technology 68, 423–437.
Youn, H., Strumsky, D., Bettencourt, L. M. & Lobo, J. 2015 Invention as a combinatorial process: evidence from US patents. Journal of The Royal Society Interface 12, 20150272.
EFSA Panel on Dietetic Products, Nutrition and Allergies; European Food Safety Authority (EFSA), Parma, Italy (2011). "Scientific Opinion on the substantiation of health claims related to L-theanine from Camellia sinensis (L.) Kuntze (tea) and improvement of cognitive function (ID 1104, 1222, 1600, 1601, 1707, 1935, 2004, 2005), alleviation of psychological stress (ID 1598, 1601), maintenance of normal sleep (ID 1222, 1737, 2004) and reduction of menstrual discomfort (ID 1599) pursuant to Article 13(1) of Regulation (EC) No 1924/2006". EFSA Journal. 9 (6): 2238. doi:10.2903/j.efsa.2011.2238.
Caffeine (Examine.com; FDA adverse events) is of course the most famous stimulant around. But consuming 200mg or more a day, I have discovered the downside: it is addictive and has a nasty withdrawal - headaches, decreased motivation, apathy, and general unhappiness. (It's a little amusing to read academic descriptions of caffeine addiction; if caffeine were a new drug, I wonder what Schedule it would be in and if people might be even more leery of it than modafinil.) Further, in some ways, aside from the ubiquitous placebo effect, caffeine combines a mix of weak performance benefits (Lorist & Snel 2008, Nehlig 2010) with some possible decrements, anecdotally and scientifically:
Most diehard nootropic users have considered using racetams for enhancing brain function. Racetams are synthetic nootropic substances first developed in Russia. These smart drugs vary in potency, but they are not stimulants. They are unlike traditional ADHD medications (Adderall, Ritalin, Vyvanse, etc.). Instead, racetams boost cognition by enhancing the cholinergic system.
So it's no surprise that as soon as medical science develops a treatment for a disease, we often ask if it couldn't perhaps make a healthy person even healthier. Take Viagra, for example: developed to help men who couldn't get erections, it's now used by many who function perfectly well without a pill but who hope it will make them exceptionally virile.
But perhaps the biggest difference between Modafinil and other nootropics like Piracetam, according to Patel, is that Modafinil studies show more efficacy in young, healthy people, not just the elderly or those with cognitive deficits. That's why it's great for (and often prescribed to) military members who are on an intense tour, or for those who can't get enough sleep for physiological reasons. One study, by researchers at Imperial College London, and published in Annals of Surgery, even showed that Modafinil helped sleep-deprived surgeons become better at planning, redirecting their attention, and being less impulsive when making decisions.
One of the most common strategies to beat this is cycling. Users who cycle their nootropics take them for a predetermined period, (usually around five days) before taking a two-day break from using them. Once the two days are up, they resume the cycle. By taking a break, nootropic users reduce the tolerance for nootropics and lessen the risk of regression and tolerance symptoms.
I noticed what may have been an effect on my dual n-back scores; the difference is not large (▃▆▃▃▂▂▂▂▄▅▂▄▂▃▅▃▄ vs ▃▄▂▂▃▅▂▂▄▁▄▃▅▂▃▂▄▂▁▇▃▂▂▄▄▃▃▂▃▂▂▂▃▄▄▃▆▄▄▂▃▄▃▁▂▂▂▃▂▄▂▁▁▂▄▁▃▂▄) and appears mostly in the averages - Toomim's quick two-sample t-test gave p=0.23, although another analysis gives p=0.138112. One issue with this before-after quasi-experiment is that one would expect my scores to slowly rise over time, and hence the fish-oil 'after' period would show a score increase - the 3.2 point difference could be attributable to that, placebo effect, or random variation, etc. But an accidentally noticed effect (d=0.28) is a promising start. An experiment may be worth doing given that fish oil does cost a fair bit each year: randomized blocks permitting a fish-oil-then-placebo comparison would take care of the first issue, and then blinding (olive oil capsules versus fish oil capsules?) would take care of the placebo worry.
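For anyone wanting to run that kind of quick check on their own scores, a two-sample (Welch) t-test is one call in SciPy; the score lists below are placeholders, not the experiment's actual data.

```python
from scipy import stats

# Placeholder dual n-back scores; substitute your own before/after series.
before = [30, 35, 33, 32, 28, 29, 31, 34]
after = [33, 36, 35, 31, 37, 34, 32, 38]

# Welch's two-sample t-test (does not assume equal variances).
t, p = stats.ttest_ind(after, before, equal_var=False)
print(t, p)
```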
Since coffee drinking may lead to a worsening of calcium balance in humans, we studied the serial changes of serum calcium, PTH, 1,25-dihydroxyvitamin D (1,25(OH)2D) vitamin D and calcium balance in young and adult rats after daily administration of caffeine for 4 weeks. In the young rats, there was an increase in urinary calcium and endogenous fecal calcium excretion after four days of caffeine administration that persisted for the duration of the experiment. Serum calcium decreased on the fourth day of caffeine administration and then returned to control levels. In contrast, the serum PTH and 1,25(OH)2D remained unchanged initially, but increased after 2 weeks of caffeine administration…In the adult rat group, an increase in the urinary calcium and endogenous fecal calcium excretion and serum levels of PTH was found after caffeine administration. However, the serum 1,25(OH)2D levels and intestinal absorption coefficient of calcium remained the same as in the adult control group.
At small effects like d=0.07, a nontrivial chance of negative effects, and an unknown level of placebo effects (this was non-blinded, which could account for any residual effects), this strongly implies that LLLT is not doing anything for me worth bothering with. I was pretty skeptical of LLLT in the first place, and if 167 days can't turn up anything noticeable, I don't think I'll be continuing with LLLT usage and will be giving away my LED set. (Should any experimental studies of LLLT for cognitive enhancement in healthy people surface with large quantitative effects - as opposed to a handful of qualitative case studies about brain-damaged people - and I decide to give LLLT another try, I can always just buy another set of LEDs: it's only ~$15, after all.)
More photos from this reportage are featured in Quartz's new book The Objects that Power the Global Economy. You may not have seen these objects before, but they've already changed the way you live. Each chapter examines an object that is driving radical change in the global economy. This is from the chapter on the drug modafinil, which explores modifying the mind for a more productive life.
In terms of legal status, Adrafinil is legal in the United States but is unregulated. You need to purchase this supplement online, as it is not a prescription drug at this time. Modafinil on the other hand, is heavily regulated throughout the United States. It is being used as a narcolepsy drug, but isn't available over the counter. You will need to obtain a prescription from your doctor, which is why many turn to Adrafinil use instead.
Kennedy et al. (1990) administered what they termed a grammatical reasoning task to subjects, in which a sentence describing the order of two letters, A and B, is presented along with the letter pair, and subjects must determine whether or not the sentence correctly describes the letter pair. They found no effect of d-AMP on performance of this task.
"I enjoyed this book. It was full of practical information. It was easy to understand. I implemented some of the ideas in the book and they have made a positive impact for me. Not only is this book a wealth of knowledge it helps you think outside the box and piece together other ideas to research and helps you understand more about TBI and the way food might help you mitigate symptoms."
The fish oil can be considered a free sunk cost: I would take it in the absence of an experiment. The empty pill capsules could be used for something else, so we'll put the 500 at $5. Filling 500 capsules with fish and olive oil will be messy and take an hour. Taking them regularly can be added to my habitual morning routine for vitamin D and the lithium experiment, so that is close to free but we'll call it an hour over the 250 days. Recording mood/productivity is also a free sunk cost as it's necessary for the other experiments; but recording dual n-back scores is more expensive: each round is ~2 minutes and one wants >=5, so each block will cost >10 minutes, so 18 tests will be >180 minutes or >3 hours. So >5 hours. Total: $5 + (>5 hours × $7.25/hour) = >$41.
A record of nootropics I have tried, with thoughts about which ones worked and did not work for me. These anecdotes should be considered only as anecdotes, and one's efforts with nootropics a hobby to put only limited amounts of time into due to the inherent limits of drugs as a force-multiplier compared to other things like programming; for an ironic counterpoint, I suggest the reader listen to a video of Jonathan Coulton's I Feel Fantastic while reading.
Compared with those reporting no use, subjects drinking >4 cups/day of decaffeinated coffee were at increased risk of RA [rheumatoid arthritis] (RR 2.58, 95% CI 1.63-4.06). In contrast, women consuming >3 cups/day of tea displayed a decreased risk of RA (RR 0.39, 95% CI 0.16-0.97) compared with women who never drank tea. Caffeinated coffee and daily caffeine intake were not associated with the development of RA.
Nootropics, also known as 'brain boosters,' 'brain supplements' or 'cognitive enhancers' are made up of a variety of artificial and natural compounds. These compounds help in enhancing the cognitive activities of the brain by regulating or altering the production of neurochemicals and neurotransmitters in the brain. It improves blood flow, stimulates neurogenesis (the process by which neurons are produced in the body by neural stem cells), enhances nerve growth rate, modifies synapses, and improves cell membrane fluidity. Thus, positive changes are created within your body, which helps you to function optimally irrespective of your current lifestyle and individual needs.
Segmental analysis of the key components of the global smart pills market has been performed based on application, target area, disease indication, end-user, and region. Applications of smart pills are found in capsule endoscopy, drug delivery, patient monitoring, and others. Sub-division of the capsule endoscopy segment includes small bowel capsule endoscopy, controllable capsule endoscopy, colon capsule endoscopy, and others. Meanwhile, the patient monitoring segment is further divided into capsule pH monitoring and others.
Using the 21mg patches, I cut them into quarters. What I would do is I would cut out 1 quarter, and then seal the two edges with scotch tape, and put the Pac-Man back into its sleeve. Then the next time I would cut another quarter, seal the new edge, and so on. I thought that 5.25mg might be too much since I initially found 4mg gum to be too much, but it's delivered over a long time and it wound up feeling much more like 1mg gum used regularly. I don't know if the tape worked, but I did not notice any loss of potency. I didn't like them as much as the gum because I would sometimes forget to take off a patch at the end of the day and it would interfere with sleep, and because the onset is much slower and I find I need stimulants more for getting started than for ongoing stimulation so it is better to have gum which can be taken precisely when needed and start acting quickly. (One case where the patches were definitely better than the gum was long car trips where slow onset is fine, since you're most alert at the start.) When I finally ran out of patches in June 2016 (using them sparingly), I ordered gum instead.
Board-certified neuropsychologist Brian Lebowitz, PhD and associate clinical professor of neurology at Stony Brook University, explains to MensHealth.com that the term "encompasses so many things," including prescription medications. Brain enhancers fall into two different categories: naturally occurring substances like Ginkgo biloba, creatine and phenibut; and manmade prescription drugs, like Adderall, and over-the-counter supplements such as Noopept.
Common environmental toxins – pesticides, for example – cause your brain to release glutamate (a neurotransmitter). Your brain needs glutamate to function, but when you create too much of it it becomes toxic and starts killing neurons. Oxaloacetate protects rodents from glutamate-induced brain damage.[17] Of course, we need more research to determine whether or not oxaloacetate has the same effect on humans.
Zach was on his way to being a doctor when a personal health crisis changed all of that. He decided that he wanted to create wellness instead of fight illness. He lost over a 100 lbs through functional nutrition and other natural healing protocols. He has since been sharing his knowledge of nutrition and functional medicine for the last 12 years as a health coach and health educator.
Yet some researchers point out these drugs may not be enhancing cognition directly, but simply improving the user's state of mind – making work more pleasurable and enhancing focus. "I'm just not seeing the evidence that indicates these are clear cognition enhancers," says Martin Sarter, a professor at the University of Michigan, who thinks they may be achieving their effects by relieving tiredness and boredom. "What most of these are actually doing is enabling the person who's taking them to focus," says Steven Rose, emeritus professor of life sciences at the Open University. "It's peripheral to the learning process itself."
A randomized non-blind self-experiment of LLLT 2014-2015 yields a causal effect which is several times smaller than a correlative analysis and non-statistically-significant/very weak Bayesian evidence for a positive effect. This suggests that the earlier result had been driven primarily by reverse causation, and that my LLLT usage has little or no benefits.
Adrafinil is a prodrug for Modafinil, which means it can be metabolized into Modafinil to give you a similar effect. And you can buy it legally just about anywhere. But there are a few downsides. Patel explains that you have to take a lot more to achieve a similar effect as Modafinil, wait longer for it to kick in (45-60 minutes), there are more potential side effects, and there aren't any other benefits to taking it.
Another common working memory task is the n-back task, which requires the subject to view a series of items (usually letters) and decide whether the current item is identical to the one presented n items back. This task taxes working memory because the previous items must be held in working memory to be compared with the current item. The easiest version of this is a 1-back task, which is also called a double continuous performance task (CPT) because the subject is continuously monitoring for a repeat or double. Three studies examined the effects of MPH on working memory ability as measured by the 1-back task, and all found enhancement of performance in the form of reduced errors of omission (Cooper et al., 2005; Klorman et al., 1984; Strauss et al., 1984). Fleming et al. (1995) tested the effects of d-AMP on a 5-min CPT and found a decrease in reaction time, but did not specify which version of the CPT was used.
Smart pills are defined as drugs or prescription medication used to treat certain mental disorders, from milder ones such as brain fog, to some more severe like ADHD. They are often referred to as 'nootropics' but even though the two terms are often used interchangeably, smart pills and nootropics represent two different types of cognitive enhancers.
Due to the synthetic nature of racetams, you won't find them in many of the best smart pills on the market. The intentional exclusion is not because racetams are ineffective. Instead, the vast majority of users trust natural smart drugs more. The idea of using a synthetic substance to alter your brain's operating system is a big turn off for most people. With synthetic nootropics, you're a test subject until more definitive studies arise.
Fish oil (Examine.com, buyer's guide) provides benefits relating to general mood (eg. inflammation & anxiety; see later on anxiety) and anti-schizophrenia; it is one of the better supplements one can take. (The known risks are a higher rate of prostate cancer and internal bleeding, but are outweighed by the cardiac benefits - assuming those benefits exist, anyway, which may not be true.) The benefits of omega acids are well-researched.
The placebos can be the usual pills filled with olive oil. The Nature's Answer fish oil is lemon-flavored; it may be worth mixing in some lemon juice. In Kiecolt-Glaser et al 2011, anxiety was measured via the Beck Anxiety scale; the placebo mean was 1.2 on a standard deviation of 0.075, and the experimental mean was 0.93 on a standard deviation of 0.076. (These are all log-transformed covariates or something; I don't know what that means, but if I naively plug those numbers into Cohen's d, I get a very large effect: $\frac{1.2 - 0.93}{0.076} = 3.55$.)
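That naive Cohen's d is just the difference in group means over the standard deviation; as a one-line sanity check:

```python
# Plugging in the reported (log-transformed) placebo and treatment values.
placebo_mean, treatment_mean, sd = 1.2, 0.93, 0.076
d = (placebo_mean - treatment_mean) / sd
print(round(d, 2))  # -> 3.55
```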
Vinh Ngo, a San Francisco family practice doctor who specializes in hormone therapy, has become familiar with piracetam and other nootropics through a changing patient base. His office is located in the heart of the city's tech boom and he is increasingly sought out by young, male tech workers who tell him they are interested in cognitive enhancement.
Noopept is a Russian stimulant sometimes suggested for nootropics use as it may be more effective than piracetam or other -racetams, and its smaller doses make it more convenient & possibly safer. Following up on a pilot study, I ran a well-powered blind randomized self-experiment between September 2013 and August 2014 using doses of 12-60mg Noopept & pairs of 3-day blocks to investigate the impact of Noopept on self-ratings of daily functioning in addition to my existing supplementation regimen involving small-to-moderate doses of piracetam. A linear regression, which included other concurrent experiments as covariates & used multiple imputation for missing data, indicates a small benefit to the lower dose levels and harm from the highest 60mg dose level, but no dose nor Noopept as a whole was statistically-significant. It seems Noopept's effects are too subtle to easily notice if they exist, but if one uses it, one should probably avoid 60mg+.
So the chi-squared believes there is a statistically-significant difference, the two-sample test disagrees, and the binomial also disagrees. Since I regarded it as a dubious theory, can't see a difference, and the binomial seems like the most appropriate test, I conclude that several months of 1mg iodine did not change my eye color. (As a final test, when I posted the results on the Longecity forum where people were claiming the eye color change, I swapped the labels on the photos to see if anyone would claim something along the lines of 'when I look at the photos, I can see a difference!'. I thought someone might do that, which would be a damning demonstration of their biases & wishful thinking, but no one did.)
White, Becker-Blease, & Grace-Bishop (2006) 2002 Large university undergraduates and graduates (N = 1,025) 16.2% (lifetime) 68.9%: improve attention; 65.2%: partying; 54.3%: improve study habits; 20%: improve grades; 9.1%: reduce hyperactivity 15.5%: 2–3 times per week; 33.9%: 2–3 times per month; 50.6%: 2–3 times per year 58%: easy or somewhat easy to obtain; write-in comments indicated many obtaining stimulants from friends with prescriptions
In addition, while the laboratory research reviewed here is of interest concerning the effects of stimulant drugs on specific cognitive processes, it does not tell us about the effects on cognition in the real world. How do these drugs affect academic performance when used by students? How do they affect the total knowledge and understanding that students take with them from a course? How do they affect various aspects of occupational performance? Similar questions have been addressed in relation to students and workers with ADHD (Barbaresi, Katusic, Colligan, Weaver, & Jacobsen, 2007; Halmøy, Fasmer, Gillberg, & Haavik, 2009; see also Advokat, 2010) but have yet to be addressed in the context of cognitive enhancement of normal individuals.
The next cheap proposition to test is that the 2ml dose is so large that the sedation/depressive effect of nicotine has begun to kick in. This is easy to test: take much less, like half a ml. I do so two or three times over the next day, and subjectively the feeling seems to be the same - which seems to support that proposition (although perhaps I've been placebo effecting myself this whole time, in which case the exact amount doesn't matter). If this theory is true, my previous sleep results don't show anything; one would expect nicotine-as-sedative to not hurt sleep or improve it. I skip the day (no cravings or addiction noticed), and take half a ml right before bed at 11:30; I fall asleep in 12 minutes and have a ZQ of ~105. The next few days I try putting one or two drops into the tea kettle, which seems to work as well (or poorly) as before. At that point, I was warned that there were some results that nicotine withdrawal can kick in with delays as long as a week, so I shouldn't be confident that a few days off proved an absence of addiction; I immediately quit to see what the week would bring. 4 or 7 days in, I didn't notice anything. I'm still using it, but I'm definitely a little nonplussed and disgruntled - I need some independent source of nicotine to compare with!
Sulbutiamine, mentioned earlier as a cholinergic smart drug, can also be classed a dopaminergic, although its mechanism is counterintuitive: by reducing the release of dopamine in the brain's prefrontal cortex, the density of dopamine receptors actually increase after continued Sulbutiamine exposure, through a compensatory mechanism. (This provides an interesting example of how dividing smart drugs into sensible "classes" is a matter of taste as well as science, especially since many of them create their discernable neural effects through still undefined mechanisms.)
Gamma-aminobutyric acid, also known as GABA, naturally produced in the brain from glutamate, is a neurotransmitter that helps in the communication between the nervous system and brain. The primary function of this GABA Nootropic is to reduce the additional activity of the nerve cells and helps calm the mind. Thus, it helps to improve various conditions, like stress, anxiety, and depression by decreasing the beta brain waves and increasing the alpha brain waves. It is one of the best nootropic for anxiety that you can find in the market today. As a result, cognitive abilities like memory power, attention, and alertness also improve. GABA helps drug addicts recover from addiction by normalizing the brain's GABA receptors which reduce anxiety and craving levels in the absence of addictive substances.
The chemical Huperzine-A (Examine.com) is extracted from a moss. It is an acetylcholinesterase inhibitor (instead of forcing out more acetylcholine like the -racetams, it prevents acetylcholine from breaking down). My experience report: One for the null hypothesis files - Huperzine-A did nothing for me. Unlike piracetam or fish oil, after a full bottle (Source Naturals, 120 pills at 200μg each), I noticed no side-effects, no mental improvements of any kind, and no changes in DNB scores from straight Huperzine-A.
I posted a link to the survey on my Google+ account, and inserted the link at the top of all gwern.net pages; 51 people completed all 11 binary choices (most of them coming from North America & Europe), which seems adequate since the 11 questions are all asking the same question, and 561 responses to one question is quite a few. A few different statistical tests seem applicable: a chi-squared test of whether there's a difference between all the answers, a two-sample test on the averages, and most meaningfully, summing up the responses as a single pair of numbers and doing a binomial test:
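A sketch of the chi-squared and binomial tests in SciPy, with made-up tallies standing in for the 561 survey responses; only the shape of the data matters here.

```python
from scipy import stats

# Made-up per-question tallies for the 11 binary choices (option A vs option B).
option_a = [30, 28, 33, 25, 31, 29, 27, 35, 26, 30, 32]
option_b = [21, 23, 18, 26, 20, 22, 24, 16, 25, 21, 19]

# Chi-squared test: do the answer splits differ across the 11 questions?
chi2, p_chi, dof, _ = stats.chi2_contingency([option_a, option_b])

# Binomial test: pool all responses into a single pair and test against 50/50.
total_a, total_b = sum(option_a), sum(option_b)
p_binom = stats.binomtest(total_a, total_a + total_b, p=0.5).pvalue

print(p_chi, p_binom)
```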
Factor analysis. The strategy: read in the data, drop unnecessary data, impute missing variables (data is too heterogeneous and collected starting at varying intervals to be clean), estimate how many factors would fit best, factor analyze, pick the ones which look like they match best my ideas of what productive is, extract per-day estimates, and finally regress LLLT usage on the selected factors to look for increases.
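A minimal sketch of that pipeline in Python, assuming a hypothetical daily_log.csv with one row per day, an llt usage column, and assorted mood/productivity indicator columns; the original analysis may have used different tooling and more careful imputation.

```python
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.decomposition import FactorAnalysis
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# Hypothetical daily log: a date, an LLLT-usage flag, and heterogeneous indicators.
daily = pd.read_csv("daily_log.csv", parse_dates=["date"])
indicators = daily.drop(columns=["date", "llt"])

# Impute missing indicator values, then extract a few latent factors.
imputed = IterativeImputer(random_state=0).fit_transform(indicators)
scores = FactorAnalysis(n_components=4, random_state=0).fit_transform(imputed)
for i in range(scores.shape[1]):
    daily[f"factor{i + 1}"] = scores[:, i]

# Regress the factor that looks most like 'productivity' on LLLT usage.
print(smf.ols("factor1 ~ llt", data=daily).fit().summary())
```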
It is at the top of the supplement snake oil list thanks to tons of correlations; for a review, see Luchtman & Song 2013 but some specifics include Teenage Boys Who Eat Fish At Least Once A Week Achieve Higher Intelligence Scores, anti-inflammatory properties (see Fish Oil: What the Prescriber Needs to Know on arthritis), and others - Fish oil can head off first psychotic episodes (study; Seth Roberts commentary), Fish Oil May Fight Breast Cancer, Fatty Fish May Cut Prostate Cancer Risk & Walnuts slow prostate cancer, Benefits of omega-3 fatty acids tally up, Serum Phospholipid Docosahexaenonic Acid Is Associated with Cognitive Functioning during Middle Adulthood endless anecdotes.
Of course, there are drugs out there with more transformative powers. "I think it's very clear that some do work," says Andrew Huberman, a neuroscientist based at Stanford University. In fact, there's one category of smart drugs which has received more attention from scientists and biohackers – those looking to alter their own biology and abilities – than any other. These are the stimulants.
l-Theanine – A 2014 systematic review and meta-analysis found that concurrent caffeine and l-theanine use had synergistic psychoactive effects that promoted alertness, attention, and task switching;[29] these effects were most pronounced during the first hour post-dose.[29] However, the European Food Safety Authority reported that, when L-theanine is used by itself (i.e. without caffeine), there is insufficient information to determine if these effects exist.[34]
Help:Lesson 20
The following questions are intended to help you judge your preparation for this exam. Carefully work through the problems.
These questions are repeated on the preparation quiz for this lesson.
This is not designed to be a comprehensive review. There may be items on the exam that are not covered in this review. Similarly, there may be items in this review that are not tested on this exam. You are strongly encouraged to review the readings, homework exercises, and other activities from Units 1-3 as you prepare for the exam. In particular, you should go over the Review for Exam 1 and the Review for Exam 2. Use the Index to review definitions of important terms.
1 Lesson Summaries
Click on the link at right for a review of the summaries from each lesson.
Here are the summaries for each lesson in unit 3. Reviewing these key points from each lesson will help you in your preparation for the exam.
Lesson 16 Recap
Pie charts are used when you want to represent the observations as part of a whole, where each slice (sector) of the pie chart represents a proportion or percentage of the whole.
Bar charts present the same information as pie charts and are used when our data represent counts. A Pareto chart is a bar chart where the height of the bars is presented in descending order.
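As a quick illustration (the course itself works in other tools), a bar chart and its Pareto version differ only in the ordering of the bars; the category counts below are made up.

```python
import matplotlib.pyplot as plt

# Made-up counts for four categories.
counts = {"A": 12, "B": 30, "C": 7, "D": 21}

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))

# Ordinary bar chart: categories in their original order.
ax1.bar(list(counts), list(counts.values()))
ax1.set_title("Bar chart")

# Pareto chart: same bars, sorted in descending order of height.
ordered = dict(sorted(counts.items(), key=lambda kv: kv[1], reverse=True))
ax2.bar(list(ordered), list(ordered.values()))
ax2.set_title("Pareto chart")

plt.show()
```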
$\hat p$ is a point estimator for true proportion $p$. $\displaystyle{\hat p = \frac{x}{n}}$
The sampling distribution of $\hat p$ has a mean of $p$ and a standard deviation of $\displaystyle{\sqrt{\frac{p\cdot(1-p)}{n}}}$
If $np \ge 10$ and $n(1-p) \ge 10$, you can conduct probability calculations using the Normal Probability Applet. $\displaystyle {z = \frac{\textrm{value} - \textrm{mean}}{\textrm{standard deviation}} = \frac{\hat p - p}{\sqrt{\frac{p \cdot (1-p)}{n}}}}$
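For instance, a quick check of the z-score formula with values chosen purely for illustration:

```python
from math import sqrt

# Illustrative values: true proportion p, sample size n, observed sample proportion.
p, n, p_hat = 0.485, 200, 0.55

# Requirements hold here: n*p = 97 and n*(1-p) = 103 are both at least 10.
z = (p_hat - p) / sqrt(p * (1 - p) / n)
print(round(z, 2))  # about 1.84
```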
The estimator of $p$ is $\displaystyle{\hat p = \frac{x}{n}}$, which is used for both confidence intervals and hypothesis testing.
You will use the Excel spreadsheet CategoricalDataAnalysis.xls to perform hypothesis testing and calculate confidence intervals for problems involving one proportion.
The requirements for a confidence interval are $n \hat p \ge 10$ and $n(1-\hat p) \ge 10$. The requirements for hypothesis tests involving one proportion are $np\ge10$ and $n(1-p)\ge10$.
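The course does these computations in the spreadsheet; a rough Python equivalent of the one-proportion confidence interval and z-test, using statsmodels, looks like the sketch below. The counts are borrowed from review question 3 near the end of this page.

```python
from statsmodels.stats.proportion import proportion_confint, proportions_ztest

x, n = 152, 248  # males among the "heavy" lottery players (review question 3)
p_hat = x / n

# Requirement check for the confidence interval: n*p_hat >= 10 and n*(1 - p_hat) >= 10.
assert n * p_hat >= 10 and n * (1 - p_hat) >= 10

# 95% confidence interval using the normal approximation.
low, high = proportion_confint(x, n, alpha=0.05, method="normal")

# Two-tailed z-test of H0: p = 0.485, with the variance computed from the null value.
z, p_value = proportions_ztest(x, n, value=0.485, prop_var=0.485)
print((low, high), z, p_value)
```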
We can determine the sample size we need to obtain a desired margin of error using the formula $\displaystyle{ n=\left(\frac{z^*}{m}\right)^2 p^*(1-p^*)}$ where $p^*$ is a prior estimate of $p$. If no prior estimate is available, the formula $\displaystyle{ \left(\frac{z^*}{2m}\right)^2}$ is used.
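Both sample-size formulas are one-liners; here with an illustrative margin of error of 0.03 at 95% confidence.

```python
from math import ceil

z_star, m = 1.96, 0.03  # 95% confidence, desired margin of error

# With a prior estimate p* of the proportion:
p_star = 0.485
n_with_prior = ceil((z_star / m) ** 2 * p_star * (1 - p_star))

# Conservative formula when no prior estimate is available:
n_conservative = ceil((z_star / (2 * m)) ** 2)

print(n_with_prior, n_conservative)
```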
When conducting hypothesis tests using two proportions, the null hypothesis is always $p_1=p_2$, indicating that there is no difference between the two proportions. The alternative hypothesis can be left-tailed ($<$), right-tailed($>$), or two-tailed($\ne$).
For a hypothesis test and confidence interval of two proportions, we use the following symbols:
$$ \begin{array}{lcl} \text{Sample proportion for group 1:} & \hat p_1 = \displaystyle{\frac{x_1}{n_1}} \\ \text{Sample proportion for group 2:} & \hat p_2 = \displaystyle{\frac{x_2}{n_2}} \end{array} $$
For a hypothesis test only, we use the following symbols:
$$ \begin{array}{lcl} \text{Overall sample proportion:} & \hat p = \displaystyle{\frac{x_1+x_2}{n_1+n_2}} \end{array} $$
Whenever zero is contained in the confidence interval of the difference of the true proportions we conclude that there is no significant difference between the two proportions.
You will use the Excel spreadsheet CategoricalDataAnalysis.xls to perform hypothesis testing and calculate confidence intervals for problems involving two proportions.
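The lesson points you to the Excel spreadsheet for these computations; purely as an alternative illustration, the sketch below shows the same two-proportion calculations in Python with made-up counts.

```python
import math

# Made-up counts, for illustration only
x1, n1 = 60, 200      # successes and sample size for group 1
x2, n2 = 45, 180      # successes and sample size for group 2

p1_hat = x1 / n1
p2_hat = x2 / n2
p_pool = (x1 + x2) / (n1 + n2)     # overall sample proportion (hypothesis test only)

# Test statistic for H0: p1 = p2
z = (p1_hat - p2_hat) / math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))

# 95% confidence interval for p1 - p2 (unpooled standard error)
se = math.sqrt(p1_hat * (1 - p1_hat) / n1 + p2_hat * (1 - p2_hat) / n2)
ci = (p1_hat - p2_hat - 1.96 * se, p1_hat - p2_hat + 1.96 * se)

print(round(z, 3), tuple(round(v, 3) for v in ci))
```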
The $\chi^2$ hypothesis test is a test of independence between two variables. These variables are either associated or they are not. Therefore, the null and alternative hypotheses are the same for every test:
$$ \begin{array}{lcl} H_0: & \text{The (first variable) and the (second variable) are independent.} \\ H_a: & \text{The (first variable) and the (second variable) are not independent.} \end{array} $$
The degrees of freedom ($df$) for a $\chi^2$ test of independence are calculated using the formula $df=(\text{number of rows}-1)(\text{number of columns}-1)$
In our hypothesis testing for $\chi^2$ we never conclude that two variables are dependent. Instead, we say that two variables are not independent.
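As a hedged illustration of the $\chi^2$ test of independence summarized above, the following sketch uses scipy's chi2_contingency on a small made-up two-way table; the counts are not from the survey used later in the review questions.

```python
from scipy.stats import chi2_contingency

# Made-up contingency table: rows = morning/evening person, columns = three age groups
table = [[60, 50, 40],
         [30, 45, 55]]

chi2, p_value, dof, expected = chi2_contingency(table)

# df = (number of rows - 1)(number of columns - 1) = (2 - 1)(3 - 1) = 2
print(round(chi2, 3), round(p_value, 4), dof)
print(expected.round(1))   # expected counts under independence
```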
2 Review Questions
Questions 1 through 5: Decide which hypothesis test to use. Here is a list of hypothesis tests we have studied so far this semester. For each question identify the one hypothesis test that is most appropriate to the given situation. You may use a hypothesis test once, more than once, or not at all.
a. One sample z-test
b. One sample t-test
c. Paired-samples t-test
d. Independent sample t-test
e. ANOVA
f. Test of one proportion
g. Test of two proportions
h. Chi-Squared test of independence
1. In an article in the Journal of Small Business Management successful start-up businesses in the United States and Korea were compared. One set of data compared the educational level (high school, undergraduate degree, master's degree, doctoral degree) of people who managed successful start-up companies in the United States and Korea. You want to determine if education level differs between managers of successful start-up companies in these two countries. Which hypothesis test would be most appropriate for this analysis?
2. A human resources manager reported data from a recent involuntary Reduction in Force (RIF) at her company. You are an attorney and want to determine if age discrimination was a factor (it is illegal to discriminate against employees because of age). The company reported the number of employees in two groups: 40 years old or younger, and over 40 years old. They also reported the number of employees in each group who were terminated. You want to determine if both age groups were treated equally. Which hypothesis test would be most appropriate for this analysis?
3. A survey was conducted by a group of state lotteries. A random sample of 2406 adults completed the survey. A total of 248 were classified as "heavy" players. Of these, 152 were male. You want to determine if the proportion of male "heavy" lottery players is different than the proportion of males in the population, which is 48.5%. Which hypothesis test would be most appropriate for this analysis?
4. A student project compared the effectiveness of two different combination locks. One of the locks turned clockwise first and the other lock turned counterclockwise first. They asked 25 students to participate in the study. Each student was given the combination to each lock and asked to open the locks. The time it took them to open each lock was recorded. They want to determine if one of the locks is easier to open. Which hypothesis test would be most appropriate for this analysis?
5. Weight gain during pregnancy of the mother is an important indicator of infant health. A simple random sample of pregnant women in Egypt, Kenya, and Mexico was used to determine if weight gain during pregnancy differed in these three countries. Which hypothesis test would be most appropriate for this analysis?
Questions 6 through 9: Decide which confidence interval to use. Here is a list of confidence intervals we have studied so far this semester. For each question identify the one confidence interval that is most appropriate for the given situation. You may use a confidence interval once, more than once, or not at all.
a. One sample z-confidence interval
b. One sample t-confidence interval
c. Paired-samples t-confidence interval
d. Independent sample t-confidence interval
e. "+4" confidence interval for one proportion
f. "+4" confidence interval for two proportions
6. A bank employs two appraisers. When approving borrowers for mortgages, it is imperative that the appraisers value the same types of properties consistently. To make sure this is the case, the bank evaluates six properties that both appraisers have recently valued. Which confidence interval would be most appropriate for this study?
7. In a Wall Street Journal article on satisfaction with career paths, the percentage of psychology majors reporting they were "satisfied" or "very satisfied" with their career path was reported. The same data was also reported for accounting majors. You decide to construct a 95% confidence interval to see if the observed difference is significant. Which confidence interval would be most appropriate for this study?
8. O'Hare International Airport in Chicago has a reputation for having a large proportion of its flights being late. You design a study to see if this reputation is deserved. You find that the average on-time rate for all international airports in the US is 70%. You collect data and determine the on-time rate for O'Hare. You decide to construct a confidence interval to compare O'Hare's on-time rate to the national average. Which confidence interval would be most appropriate for this study?
9. DoubleStuf Oreo cookies are supposed to have twice the filling of regular Oreo cookies. You and some friends decide you want to know if that is a true assertion by the company who makes them. You take a sample of 55 DoubleStuf Oreo cookies and measure the amount of filling in each one. You need to construct a confidence interval to estimate the true mean filling amount of DoubleStuf Oreos in order to compare it to the filling amount found in regular Oreos. Which confidence interval would be most appropriate for this study?
10. Which one of the following best defines the notion of the significance level of a hypothesis test?
a. The probability of rejecting $H_o$, whether it's true or not
b. The probability of observing a sample statistic more extreme than the one actually obtained, assuming the null hypothesis is true
c. The probability of the type I error
d. The probability of the type II error
11. Which one of the following best defines the notion of the $P$-value of a hypothesis test?
12. Suppose you create a 95% confidence interval for a mean, and get (10, 20). You've been told to report this by saying something similar to, "We are 95% confident that the true mean is between 10 and 20." Exactly what does this mean?
a. 95% of the data are between 10 and 20.
b. 95% of the sample means are between 10 and 20.
c. There is a 95% chance that the true mean is between 10 and 20.
d. 95% of all 95% confidence intervals actually contain the true mean.
Questions 13 through 15: Use the following information. You take a simple random sample of 100 adults from a town in the Western United States to determine the proportion of adults in the town who invest in the stock market. Assume the unknown population proportion or percentage of people in town who invest in the stock market is $p=0.30$ (or 30%).
13. What is the mean of the distribution of the sample proportions?
c. 0.70
d. 0.30
14. What is the standard deviation of the distribution of the sample proportions?
a. 0.004
b. 0.046
c. 0.458
d. 4.583
15. What is the probability that your random sample of 100 adults will have a sample proportion less than 0.25?
Questions 16 through 20: Use the following information. Accupril is meant to control hypertension. In clinical trials of Accupril, 2142 subjects were divided into two groups. The 1563 subjects in the experimental group received Accupril. The 579 subjects in the control group received a placebo. Of the 1563 in the experimental group, 61 experienced dizziness as a side effect. Of the 579 subjects in the control group, 15 experienced dizziness as a side effect.
16. Let $p_1$ be the true proportion of people who experience dizziness while taking Accupril. Let $p_2$ be the true proportion of people who experience dizziness but do not take Accupril. Create a 95% confidence interval for $p_1 - p_2$.
a. (0.006, 0.092)
b. (-0.06, 0.92)
c. (-0.004, 0.029)
d. (-0.04, 0.29)
Perform a hypothesis test to see if the proportion of experimental group subjects who experience dizziness is different than the proportion of control group subjects who do. Let $p_1$ be the true proportion of people who experience dizziness while taking Accupril. Let $p_2$ be the true proportion of people who experience dizziness but do not take Accupril. Use a level of significance of $\alpha = 0.05$.
17. Which of the following pairs of hypotheses is the most appropriate for addressing this question?
a. $H_o:~p_1=p_2$ $H_a:~p_1<p_2$
b. $H_o:~p_1=p_2$ $H_a:~p_1\ne p_2$
c. $H_o:~p_1=p_2$ $H_a:~p_1>p_2$
d. $H_o:~p_1<p_2$ $H_a:~p_1=p_2$
e. $H_o:~p_1 \ne p_2$ $H_a:~p_1=p_2$
f. $H_o:~p_1>p_2$ $H_a:~p_1=p_2$
18. The value of your test statistic is:
a. -1.361
19. The $P$-value of your test is:
20. Is there sufficient evidence to conclude that the true proportion of people who experience dizziness while taking Accupril is different than the true proportion of people who experience dizziness while not taking Accupril?
a. Yes. I rejected $H_o$.
b. Yes. I failed to reject $H_o$.
c. Yes. I accepted $H_a$.
d. No. I rejected $H_o$.
e. No. I failed to reject $H_o$.
f. No. I failed to accept $H_a$.
Questions 21 through 24: Use the following information and table.
A survey was conducted of 1279 randomly selected adults aged 18 and older. They were asked "Are you a morning person or a night person?"
The hypotheses for this study are:
$$ \begin{array}{rl} H_o: & \text{Being a morning or evening person is independent of age} \\ H_a: & \text{Being a morning or evening person is not independent of age} \\ \end{array} $$
The results of the survey are given here:
Morning Person
Evening Person
Conduct a test of independence. Use a level of significance of $\alpha=0.05$
21. Calculate the test statistic for this hypothesis test. Assume the requirements for the test are satisfied.
22. Calculate the $P$-value for this hypothesis test. Assume the requirements for the test are satisfied.
23. Should you reject $H_o$ or not? Explain.
a. Yes. The $P$-value is less than 0.05.
b. Yes. The $P$-value is greater than 0.05.
c. Yes. Looking at the data we can see that the age is a factor in determining if you are a morning or a night person.
d. No. The $P$-value is less than 0.05.
e. No. The $P$-value is greater than 0.05.
f. No. Young people are more likely to be a night person.
24. Do you have sufficient evidence to conclude that age makes a difference in whether a person is a morning or night person? Why or why not?
a. Yes. The table makes this clear.
b. Yes. I rejected $H_o$.
c. Yes. I failed to reject $H_o$.
d. No. The difference in the data in the table is entirely due to chance.
e. No. I rejected $H_o$.
f. No. I failed to reject $H_o$.
Questions 25 through 31: Use the following information to answer each question. A recent book noted that only 20% of all investment managers outperform the Dow Jones Industrial Average over a five-year period. A random sample of 200 investment managers that had graduated from one of the top ten business programs in the country were followed over a five-year period. Fifty of these outperformed the Dow Jones Industrial Average. Let $p$ be the true proportion of investment managers who graduated from one of the top ten business programs who outperformed the Dow Jones over a five-year period.
25. Based on the results of the sample, a 95% confidence interval for $p$ is:
a. (1.95, 3.15)
b. (0.0195, 0.0315)
c. (0.195, 0.315)
d. (0.028, 0.031)
e. We can assert that $p$ = 0.20 with 100% confidence, because only 20% of investment managers outperform the standard indexes.
26. Suppose you had been in charge of designing the study. What sample size would be needed to construct a margin of error of 2% with 95% confidence? Use the prior estimate of $p^* = 0.2$ for this estimate.
a. $n=2401$
b. $n=1537$
c. $n=16$
d. $n=1801$
e. $n>30$
Suppose you wish to see if there is evidence that graduates of one of the top ten business programs perform better than other investment managers. Conduct a hypothesis test. Use a level of significance of $\alpha=0.05$.
27. Which of the following pairs of hypotheses is the most appropriate for addressing this question?
a. $H_o:~p=0.2$ $H_a:~p<0.2$
b. $H_o:~p=0.2$ $H_a:~p\ne0.2$
c. $H_o:~p=0.2$ $H_a:~p>0.2$
d. $H_o:~p<0.2$ $H_a:~p=0.2$
e. $H_o:~p\ne0.2$ $H_a:~p=0.2$
f. $H_o:~p>0.2$ $H_a:~p=0.2$
28. How many measurements must you have in order to ensure that $\hat p$ is approximately normally distributed?
a. $n\ge30$
b. $n\ge5$
c. $np\ge10$ and $n(1-p)\ge10$
d. $np\ge5$ and $n(1-p)\ge5$
31. Is there sufficient evidence to conclude that graduates from the top ten business programs perform better than other investment managers?
Study the effect of partially replacement sand by waste pistachio shells in cement mortar
Zainab Hashim Abbas Alsalami
Significant economic and environmental problems result from the burial of agricultural waste materials. The main objective of the present investigation is to assess the usefulness of agricultural waste in mortar mixes. These materials are expected to reduce the density of the mix, thus producing lightweight mortar. This study examines the effect of using pistachio shells as a partial replacement of sand on the properties of cement mortar; the effects on the density, absorption and compressive strength of the cement mortar were also determined. Ordinary Portland cement from the Kufa Cement Plant was used with a water-cement ratio of 0.48 and a mix proportion of 1:3, and six percentages of pistachio shells were used (10, 20, 30, 40, 50 and 60% by weight of fine aggregate). A total of 84 mortar cubes were cast, 12 cubes for each mix ratio. Of the 84 mortar cubes, 21 were used to determine the average water absorption and 63 were used to determine the average density and compressive strength. Compressive strength values of the mortar cubes were evaluated at 7, 14 and 28 days at the different replacement levels, giving values of 6.78, 8.92 and 14.1 MPa, respectively, at 20% replacement. The density values were reduced with increasing replacement level, reaching 1.21 and 1.00 gm/cm3 at 28 days at replacement percentages of 50 and 60%, respectively. Water absorption increased with increasing replacement level, reaching 6.04% at the 60% replacement level.
The high cost of construction materials is a major problem in the construction industry. Therefore, researchers tend to study more economical materials such as agricultural and industrial waste materials. However, if these waste materials are not disposed of safely they may be hazardous. The manufacture of traditional masonry materials also consumes a great deal of thermal and electrical power and, in turn, contaminates the air, water and land.
Other benefits of using agricultural waste materials in the construction industry instead of natural materials are the protection of natural resources, the elimination of waste materials and the release of valuable land for other purposes.
Pistacia vera is a member of the Anacardiaceae, or cashew, family. Pistachio trees are dioecious, meaning that there are separate male and female trees. The standard male cultivar is "Peters", the principal pollinator for "Kerman", the main female cultivar [1].
The pistachio is native to the Asia Minor region, from the islands of the Mediterranean in the west to India in the east, and it is widely found in Syria, Iraq and Iran. It probably developed in interior desert zones, since it demands long, warm summers for fruit development, is drought and salt tolerant, and has a high winter chilling requirement. Figure 1 shows Pistacia vera fruits.
Pistacia vera fruits
Several researchers have discussed the replacement of sand by various types of waste materials. Ganiron studied the use of recycled glass bottles as fine aggregate in concrete mixtures. He concluded that the use of recycled glass bottles as an alternative fine aggregate decreases the unit weight of concrete, the modulus of elasticity and the cost of concrete; he also concluded that the use of recycled bottles as an alternative fine aggregate is not recommended for structural members such as columns, beams and suspended slabs [2].
Sada et al. investigated the use of groundnut shells as a replacement for fine aggregate; they found that the use of groundnut shells in concrete reduces the concrete's workability due to the high absorption of water by the shells, and that the density and compressive strength of the concrete decreased with increasing groundnut shell percentage [3].
Obilade conducted an experimental study on rice husk as fine aggregate in concrete; he found that the density reduces with increasing percentage of rice husk, and he concluded that there is high potential for the use of rice husk as fine aggregate in the production of lightly reinforced concrete [4].
Mortar is a mixture of sand, cement and water. The main difference between mortar and concrete is that mortar does not contain coarse aggregate.
Mohammed stated that, as a result of the heterogeneity of mortar, its behaviour under various load effects depends on the properties of its constituents. Sand has an important effect on the characteristics of the mortar because it forms the bulk of the mortar volume; therefore the choice of suitable aggregates for mortar is very important [5].
De Schutter and Poppe demonstrated a highly significant effect of sand type on mortar properties [6].
Mortar is the material responsible for distributing stresses in building structures; therefore, studying the properties of mortar is important to ensure good performance of masonry structures [7].
Lenczner stated that the main purpose of mortar is to adhesively join the individual masonry units together. It also provides protection against the penetration of air and water through the joints in a masonry assembly. Mortar also ties the non-masonry elements of an assembly, such as joint reinforcement and ties, into the structure. Minor dimensional variations in the masonry units are also compensated for by the mortar. Finally, mortar joints have an important effect on the architectural quality of construction through colour and shadow [8].
Mortars are classified into four types, M, S, N and O, by ASTM C 270, 2006 [9]:
Type N mortar: general-purpose mortar with good bonding capability and workability.
Type S mortar: general-purpose mortar with higher flexural bond strength.
Type M mortar: high-compressive-strength mortar, but not very workable.
Type O mortar: low-strength mortar, used mostly for interior applications and restoration.
Ordinary Portland cement (OPC) conforming to ASTM C150 Type I [10], commercially available from the Kufa cement plant, was used.
Test results indicated that the adopted cement conforms to Iraqi specification IQS No. 5/1984 [11].
The chemical composition and physical properties of this OPC were given in Tables 1, 2 respectively.
Table 1 Chemical composition of cement
Table 2 Physical properties of cement
The fine aggregate used throughout this work was brought from the AL-Ekhadir region. Tests were carried out to determine the gradation, fineness modulus and sulfate content. The results showed that the fine aggregate conformed to the requirements of IQS No. 45/1984 [12], as shown in Tables 3 and 4.
Table 3 Grading of fine aggregate
Table 4 Physical and chemical properties of fine aggregate
Pistachio shells
The pistachio shells were acquired from Hila city as waste from shops that sell pistachios. The shells shown in Fig. 2 were washed and sun dried for 7 days (sun drying was important to simplify removal of the nut meat from the inner shells); afterwards the pistachio shells were crushed using the electric power grinder machine shown in Fig. 3 to reduce them to sizes similar to fine aggregate as specified in IQS No. 45/1984 [12].
Pistacia vera shells after washing and sun drying
Electric power grinder machine
The pistachio shells were sieved using a 4.75 mm sieve to remove deleterious materials and oversized particles.
Some physical tests performed on the pistachio shell sample are presented in Table 5.
Table 5 Physical properties of pistachio shells
To determine the specific gravity of the pistachio shells, a flask, a weighing balance, distilled water and a drying cloth were used. The empty, clean and dry specific gravity flask with its stopper was weighed (W1). The flask was then filled to about one-third with the pistachio shell specimen and reweighed (W2). A small quantity of distilled water was then added and the flask contents vibrated to remove trapped air. Vibration continued, and more water was added, until the flask was full; the stopper was inserted, excess water was wiped from the flask, and the flask was weighed (W3). The flask was then emptied, thoroughly cleaned and wiped dry, filled with distilled water, the stopper inserted, excess water wiped off, and the flask weighed again (W4).
The specific gravity of the pistachio shells was calculated as follows:
$$\text{S.G} = \frac{W_{2} - W_{1}}{\left(W_{4} - W_{1}\right) - \left(W_{3} - W_{2}\right)}$$
where: W1 = weight of the empty flask (85 gm), W2 = weight of the flask + pistachio shell sample (109 gm), W3 = weight of the flask + sample + water (206 gm), W4 = weight of the flask + water (383 gm).
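A minimal sketch of the specific-gravity calculation, plugging in the weights reported above; it simply evaluates the stated formula, so any inconsistency in the reported weights carries straight through to the result.

```python
# Weights reported in the text (gm)
W1 = 85    # empty flask
W2 = 109   # flask + pistachio shell sample
W3 = 206   # flask + sample + water
W4 = 383   # flask + water only

# S.G = (W2 - W1) / [(W4 - W1) - (W3 - W2)]
specific_gravity = (W2 - W1) / ((W4 - W1) - (W3 - W2))
print(round(specific_gravity, 3))
```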
Experimental set up
A mixing-by-weight approach was adopted in this experimental work. A mix proportion of 1:3 (by weight of cement to fines) with a water-cement ratio of 0.48 was kept constant for all mixes. The replacement ratios of fine aggregate with pistachio shells were 0, 10, 20, 30, 40, 50 and 60% by weight of the fine aggregate, to study the effects of different proportions of pistachio shells on several properties of the mortar. Table 6 presents the computed masses of the constituent materials for all mixes. The 0% replacement served as the control for the other mixes. Compressive strength testing according to BS 1881: Part 4: 1989 [13] was carried out on mortar samples using a 200 kN capacity testing machine; the samples were cast in steel molds of size 70.6 × 70.6 × 70.6 mm. The molds were assembled and lubricated prior to casting for easy removal of the cubes, and machine vibration was used during casting. The mortar cubes were de-molded 24 h after casting and were cured for 7, 14 and 28 days.
Table 6 Mix proportions for one cube
A similar process to ASTM C642-06 standard test method [14] for density, absorption, and voids in hardened concrete was used to measure density and absorption.
The density test was carried out on each mortar cube before compressive strength testing: each cube was weighed and its weight divided by the mold volume, and the average of three cubes was computed for each mix and each age.
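A small sketch of the density calculation described above, assuming 70.6 mm cubes; the three cube masses are hypothetical placeholders, not measured values from this study.

```python
# Cube side length: 70.6 mm = 7.06 cm, so the volume comes out in cm^3
side_cm = 7.06
volume_cm3 = side_cm ** 3

# Hypothetical masses (g) of the three cubes from one mix at one age
masses_g = [520.0, 515.0, 525.0]

densities = [m / volume_cm3 for m in masses_g]          # g/cm^3 for each cube
average_density = sum(densities) / len(densities)       # average of the three cubes

print(round(average_density, 2))
```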
Absorption test
After 28 days of curing the specimens were taken out of the curing tank and dried in an oven at 105 °C for 24 h. The dry specimens were cooled to room temperature (25 °C), weighed accurately and the value recorded as the dry weight. The dry specimens were then immersed in a water container. The weight of each specimen was taken at predetermined intervals after wiping the surface with a dry cloth. This process was continued for not less than 48 h, or until a constant weight was obtained in two successive observations.
The dry weight of each mortar cube was measured and recorded as weight (w1). The dry mortar cubes were then totally immersed in water at room temperature for 24 h. After 24 h the cubes were removed from the water, allowed to drain, and any traces of water were wiped off with a moist cloth as shown in Fig. 4. This weight was recorded as the wet weight (w2). From the increase in weight of the samples, the water absorption was obtained as a percentage of the dry weight.
Mortar cubes wiped with a moist cloth
Calculation of water absorption was as follows:
$$\text{Water absorption } (\%) = \frac{w_{2} - w_{1}}{w_{1}} \times 100$$
where: w1 = dry weight of the mortar cube (gm), w2 = weight of the mortar cube after 24 h immersion in water (gm).
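The absorption formula above in code form; the dry and saturated weights below are hypothetical values used only to show the arithmetic.

```python
# Hypothetical weights (gm) for one mortar cube
w1 = 500.0   # oven-dry weight
w2 = 527.0   # weight after 24 h immersion, surface wiped with a moist cloth

water_absorption_percent = (w2 - w1) / w1 * 100   # percentage of the dry weight
print(round(water_absorption_percent, 2))
```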
In total, 84 mortar cubes were cast, 12 cubes for each mix ratio. Of the 84 cubes, 21 were used to determine the average water absorption and 63 were used to determine the average density and compressive strength.
Each specimen was weighed to detect the density before subjecting the specimens to compression test.
The results of compressive strength, density and absorption are shown in Table 7.
Table 7 Compressive strength, density and absorption
Figures 5 and 6 show the variation of compressive strength with age and with replacement ratio, respectively. Figure 7 shows the variation of density with age at different replacement ratios. Figure 8 shows the absorption at different replacement ratios. Figures 9 and 10 show the compressive strength and density at 28 days at different replacement ratios, respectively. Figures 11 and 12 show the variation of compressive strength and of absorption with density at 28 days at different replacement ratios, respectively.
Compressive strength-age (with different replacement ratios)
Compressive strength-ratio of replacement (at different curing time)
Density—age (with different replacement ratios)
Absorption (with different replacement ratios)
Compressive strength at 28 days (with different replacement ratios)
Density at 28 days (with different replacement ratios)
Compressive strength—density at 28 days (with different replacement ratios)
Compressive strength—absorption at 28 days (with different replacement ratios)
From Figs. 5 and 6 it was concluded that compressive strength increases with age and decreases with increasing replacement ratio, in agreement with Sada et al., who investigated the use of groundnut shells as a replacement for fine aggregate [3].
Neville (2011) stated that the increase in compressive strength with age is due to the continued hydration of anhydrous cement with time, which forms new hydration products within the mortar mass [15].
The reduction in compressive strength with increasing replacement level might be attributed to the low workability of the mixture resulting from the absorption of water by the pistachio shells during mixing. Furthermore, the low density of pistachio shells compared with that of the fine aggregate may also contribute to the reduction in compressive strength, as shown in Fig. 10 and Table 7, which indicate that the density decreases with increasing replacement ratio. This also coincides with the observations of Obilade, who studied the influence of replacing fine aggregate with rice husk on concrete properties [4]. It was also clear that mortar with 30% replacement and above could be used for non-load-bearing purposes, and mortar with 20% replacement and below could be used for load-bearing purposes, according to the requirements of ASTM C270, 2014 [9].
It is also clear that the mixes at 50 and 60% replacement levels meet the lightweight mortar density requirement according to BS EN 998-2, 2010 [16].
Figures 7 and 10 illustrate that density increases with age, because hydration of the cement closes the pores and makes the mortar denser [15], but density decreases with increasing replacement because the pistachio shells have a lower density than the sand used.
Figure 11 shows that the compressive strength decreases with decreasing density at 28 days at the different replacement levels. This reduction might be attributed to the increase in pores as density decreases.
Figure 12 illustrates that the compressive strength decreases with increasing absorption at 28 days at the different replacement levels; this might be due to absorption of the water needed for hydration.
Compressive strength decreases with the increment in replacement level of pistachio shells and increases with age.
Density decreases with increasing replacement level of pistachio shells, due to the lower density of the pistachio shells compared with that of the fine aggregate used.
Density increases with age because hydration of the cement closes the pores and makes the mortar denser.
Absorption increases with increasing replacement level of pistachio shells, which might be due to the porous texture of the pistachio shells.
Compressive strength decreases with increasing absorption.
Compressive strength decreases with decreasing density.
Mortar with 30% replacement and above could be used for non-load-bearing purposes.
Mortar with 20% replacement and below could be used for load-bearing purposes.
Mortar at the 50 and 60% replacement levels achieves lightweight mortar density.
W1: weight of the empty flask
W2: weight of the flask + pistachio shell sample
W3: weight of the flask + sample + water
W4: weight of the flask + water
S.G: specific gravity of the pistachio shells
w1: dry weight of the mortar cube (gm)
w2: weight of the mortar cube after 24 h immersion in water (gm)
Nwangwu AC. Genetic variability in a representative 'Kerman' x 'Peters' population of pistachio (Pistacia vera L.) orchard. M.sc. Thesis of Biotechnology in the College of Science and Mathematics California State University, Fresno May 2015.
Ganiron TU Jr. Use of recycled glass bottles as fine aggregates in concrete mixture. Int J Adv Sci Technol. 2013;61:17–28.
Sada BH, Amartey YD, Bakoc S. An Investigation Into the Use of Groundnut as Fine Aggregate Replacement. Niger J Technol. 2013;32(1):54–60.
Obilade IO. Experimental study on rice husk as fine aggregates in concrete. Int J Eng Sci. 2014;3(8):9–14.
Mohammed A, Hughes TG, Abubakar A. Importance of sand grading on the compressive strength and stiffness of lime mortar in small scale model studies. Open J Civil Eng. 2015;5:372–8. https://doi.org/10.4236/ojce.2015.54037.
De Schutter G, Poppe AM. Quantification of the water demand of sand in mortar. Construct Build Mater. 2004;18(7):517–21.
Vladimir GH, Graça V, Paulo BL. Influence of aggregates grading and water/cement ratio in workability and hardened properties of mortars. Constr Build Mater. 2011;25:2980–7.
Lenczner D. Elements of loadbearing brickwork. Oxford: Pergamon Press; 1972.
ASTM C. 270 standard specification for mortar for unit masonry, annual book of standards. 04th ed. West Conshohocken: ASTM International; 2006.
ASTM C. 150 standard specification for Portland cement. West Conshohocken: American Society for Testing and Materials; 2005.
Iraqi Specification. No.5, Portland cement. Baghdad; 1984.
Iraqi Specification. No.45, Aggregate from natural sources for concrete and construction. Baghdad; 1984.
BS. 1881- Part 4, Method for determination of compressive strength of cement mortar. British Standard 1989.
ASTM C. 642 Standard test method for density, absorption, and voids in hardened concrete. West Conshohocken: ASTM International; 2006. https://doi.org/10.1520/C0642-06.
Neville AM. Properties of concrete. New York and Longman: Wiley; 2011. p. 844.
BS EN. 998-2 Specification for mortar for masonry-part 2: Masonry mortar, British Standards Document. 2010.
I wish to thank the College of Water Resources Engineering, Al-Qasim Green University, for providing the opportunity to carry out this research work in the laboratory.
The author declares that she has no competing interests.
The author consents to the publication process.
The author consents to the ethics.
No funding information available (no funding organization; the author will pay the publication fees).
Department of Sustainable Management, College of Water Resources Engineering, Al-Qasim Green University, Al-Qasim District, Babylon, Iraq, 51013
Zainab Hashim Abbas Alsalami
Correspondence to Zainab Hashim Abbas Alsalami.
Alsalami, Z.H.A. Study the effect of partially replacement sand by waste pistachio shells in cement mortar. Appl Adhes Sci 5, 19 (2017). https://doi.org/10.1186/s40563-017-0099-3
Density and absorption
4.E: Applications of Derivatives (ALL Chap 4 Exercises)
[ "article:topic", "calcplot:yes", "license:ccbyncsa", "showtoc:no", "transcluded:yes" ]
4.1: Related Rates
4.2: Linear Approximations and Differentials
4.3: Maxima and Minima
4.4: The Mean Value Theorem
4.5: Derivatives and the Shape of a Graph
4.6: Limits at Infinity and Asymptotes
4.7: Applied Optimization Problems
4.8: L'Hôpital's Rule
4.9: Newton's Method
4.10: Antiderivatives
Chapter Review Exercises
These are homework exercises to accompany OpenStax's "Calculus" Textmap.
For the following exercises, find the quantities for the given equation.
1) Find \(\frac{dy}{dt}\) at \(x=1\) and \(y=x^2+3\) if \(\frac{dx}{dt}=4.\)
Solution: \(8\)
2) Find \(\frac{dx}{dt}\) at \(x=−2\) and \(y=2x^2+1\) if \(\frac{dy}{dt}=−1.\)
3) Find \(\frac{dz}{dt}\) at \((x,y)=(1,3)\) and \(z^2=x^2+y^2\) if \(\frac{dx}{dt}=4\) and \(\frac{dy}{dt}=3\).
Solution: \(\frac{13}{\sqrt{10}}\)
For the following exercises, sketch the situation if necessary and use related rates to solve for the quantities.
4) [T] If two electrical resistors are connected in parallel, the total resistance (measured in ohms, denoted by the Greek capital letter omega, \(Ω\)) is given by the equation \(\frac{1}{R}=\frac{1}{R_1}+\frac{1}{R_2}.\) If \(R_1\) is increasing at a rate of \(0.5Ω/min\) and \(R_2\) decreases at a rate of \(1.1Ω/min\), at what rate does the total resistance change when \(R_1=20Ω\) and \(R_2=50Ω/min\)?
5) A 10-ft ladder is leaning against a wall. If the top of the ladder slides down the wall at a rate of 2 ft/sec, how fast is the bottom moving along the ground when the bottom of the ladder is 5 ft from the wall?
Solution: \(2\sqrt{3} ft/sec\)
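As a hedged cross-check of the solution above, exercise 5 can also be set up symbolically: differentiate the constraint \(x^2+y^2=10^2\) implicitly with respect to \(t\) and substitute the known values. The sympy sketch below is one way to do this, not the method the text prescribes.

```python
import sympy as sp

t = sp.symbols('t')
x = sp.Function('x')(t)   # distance of the ladder base from the wall
y = sp.Function('y')(t)   # height of the ladder top on the wall

# Differentiate x^2 + y^2 - 100 = 0 implicitly with respect to t
eq = sp.diff(x**2 + y**2 - 100, t)

# Solve for dx/dt in terms of the other quantities: dx/dt = -y*y'/x
dxdt = sp.solve(eq, sp.Derivative(x, t))[0]

# Exercise 5: x = 5 ft, so y = sqrt(100 - 25), and the top slides down at dy/dt = -2 ft/sec
value = dxdt.subs(sp.Derivative(y, t), -2).subs({x: 5, y: sp.sqrt(75)})
print(sp.simplify(value))   # 2*sqrt(3), matching the stated solution
```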
6) A 25-ft ladder is leaning against a wall. If we push the ladder toward the wall at a rate of 1 ft/sec, and the bottom of the ladder is initially \(20ft\) away from the wall, how fast does the ladder move up the wall \(5sec\) after we start pushing?
7) Two airplanes are flying in the air at the same height: airplane A is flying east at 250 mi/h and airplane B is flying north at \(300mi/h.\) If they are both heading to the same airport, located 30 miles east of airplane A and 40 miles north of airplane B, at what rate is the distance between the airplanes changing?
Solution: The distance is decreasing at \(390mi/h.\)
8) You and a friend are riding your bikes to a restaurant that you think is east; your friend thinks the restaurant is north. You both leave from the same point, with you riding at 16 mph east and your friend riding \(12mph\) north. After you traveled \(4mi,\) at what rate is the distance between you changing?
9) Two buses are driving along parallel freeways that are \(5mi\) apart, one heading east and the other heading west. Assuming that each bus drives a constant \(55mph\), find the rate at which the distance between the buses is changing when they are \(13mi\) part, heading toward each other.
Solution: The distance between them shrinks at a rate of \(\frac{1320}{13}≈101.5mph.\)
10) A 6-ft-tall person walks away from a 10-ft lamppost at a constant rate of \(3ft/sec.\) What is the rate that the tip of the shadow moves away from the pole when the person is 10ft away from the pole?
Using the previous problem, what is the rate at which the tip of the shadow moves away from the person when the person is 10 ft from the pole?
Solution: \(\frac{9}{2} ft/sec\)
11) A 5-ft-tall person walks toward a wall at a rate of 2 ft/sec. A spotlight is located on the ground 40 ft from the wall. How fast does the height of the person's shadow on the wall change when the person is 10 ft from the wall?
12) Using the previous problem, what is the rate at which the shadow changes when the person is 10 ft from the wall, if the person is walking away from the wall at a rate of 2 ft/sec?
Solution: It grows at a rate \(\frac{4}{9}\) ft/sec
13) A helicopter starting on the ground is rising directly into the air at a rate of 25 ft/sec. You are running on the ground starting directly under the helicopter at a rate of 10 ft/sec. Find the rate of change of the distance between the helicopter and yourself after 5 sec.
Solution: The distance is increasing at \((\frac{135\sqrt{26})}{26}\) ft/sec
For the following exercises, draw and label diagrams to help solve the related-rates problems.
14) The side of a cube increases at a rate of \(\frac{1}{2}\) m/sec. Find the rate at which the volume of the cube increases when the side of the cube is 4 m.
The volume of a cube decreases at a rate of 10 \(m^3/sec\). Find the rate at which the side of the cube changes when the side of the cube is 2 m.
Solution: \(−\frac{5}{6}\) m/sec
15) The radius of a circle increases at a rate of \(2\) m/sec. Find the rate at which the area of the circle increases when the radius is 5 m.
16) The radius of a sphere decreases at a rate of \(3\) m/sec. Find the rate at which the surface area decreases when the radius is 10 m.
Solution: \(240π m^2/sec\)
17) The radius of a sphere increases at a rate of 1 m/sec. Find the rate at which the volume increases when the radius is \(20\) m.
18) The radius of a sphere is increasing at a rate of 9 cm/sec. Find the radius of the sphere when the volume and the radius of the sphere are increasing at the same numerical rate.
Solution: \(\frac{1}{2\sqrt{π}}\) cm
19) The base of a triangle is shrinking at a rate of 1 cm/min and the height of the triangle is increasing at a rate of 5 cm/min. Find the rate at which the area of the triangle changes when the height is 22 cm and the base is 10 cm.
20) A triangle has two constant sides of length 3 ft and 5 ft. The angle between these two sides is increasing at a rate of 0.1 rad/sec. Find the rate at which the area of the triangle is changing when the angle between the two sides is \(π/6.\)
Solution: The area is increasing at a rate of \(\frac{3\sqrt{3}}{8} ft^2/sec.\)
21) A triangle has a height that is increasing at a rate of 2 cm/sec and its area is increasing at a rate of 4 \(cm^2/sec\). Find the rate at which the base of the triangle is changing when the height of the triangle is 4 cm and the area is 20 \(cm^2\).
For the following exercises, consider a right cone that is leaking water. The dimensions of the conical tank are a height of 16 ft and a radius of 5 ft.
22) How fast does the depth of the water change when the water is 10 ft high if the cone leaks water at a rate of 10 \(ft^3/min\)?
Solution: The depth of the water decreases at \(\frac{128}{125π}\) ft/min.
23) Find the rate at which the surface area of the water changes when the water is 10 ft high if the cone leaks water at a rate of 10 \(ft^3/min\).
24) If the water level is decreasing at a rate of 3 in./min when the depth of the water is 8 ft, determine the rate at which water is leaking out of the cone.
Solution: The volume is decreasing at a rate of \(\frac{(25π)}{16}ft^3/min.\)
25) A vertical cylinder is leaking water at a rate of 1 \(ft^3/sec\). If the cylinder has a height of 10 ft and a radius of 1 ft, at what rate is the height of the water changing when the height is 6 ft?
26) A cylinder is leaking water but you are unable to determine at what rate. The cylinder has a height of 2 m and a radius of 2 m. Find the rate at which the water is leaking out of the cylinder if the rate at which the height is decreasing is 10 cm/min when the height is 1 m.
Solution: The water flows out at a rate of \(\frac{2π}{5} m^3/min.\)
27) A trough has ends shaped like isosceles triangles, with width 3 m and height 4 m, and the trough is 10 m long. Water is being pumped into the trough at a rate of \(5m^3/min\). At what rate does the height of the water change when the water is 1 m deep?
28) A tank is shaped like an upside-down square pyramid, with base of 4 m by 4 m and a height of 12 m (see the following figure). How fast does the height increase when the water is 2 m deep if water is being pumped in at a rate of \(\frac{2}{3} m^3/sec\)?
Solution: \(\frac{3}{2} m/sec\)
For the following problems, consider a pool shaped like the bottom half of a sphere, that is being filled at a rate of 25 \(ft^3\)/min. The radius of the pool is 10 ft.
29) Find the rate at which the depth of the water is changing when the water has a depth of 5 ft.
Solution: \(\frac{25}{19π} ft/min\)
31) If the height is increasing at a rate of 1 in./sec when the depth of the water is 2 ft, find the rate at which water is being pumped in.
32) Gravel is being unloaded from a truck and falls into a pile shaped like a cone at a rate of 10 \(ft^3/min\). The radius of the cone base is three times the height of the cone. Find the rate at which the height of the gravel changes when the pile has a height of 5 ft.
Solution: \(\frac{2}{45π} ft/min\)
33) Using a similar setup from the preceding problem, find the rate at which the gravel is being unloaded if the pile is 5 ft high and the height is increasing at a rate of 4 in./min.
For the following exercises, draw the situations and solve the related-rate problems.
34) You are stationary on the ground and are watching a bird fly horizontally at a rate of \(10\) m/sec. The bird is located 40 m above your head. How fast does the angle of elevation change when the horizontal distance between you and the bird is 9 m?
Solution: The angle decreases at \(\frac{400}{1681}rad/sec.\)
35) You stand 40 ft from a bottle rocket on the ground and watch as it takes off vertically into the air at a rate of 20 ft/sec. Find the rate at which the angle of elevation changes when the rocket is 30 ft in the air.
36) A lighthouse, L, is on an island 4 mi away from the closest point, P, on the beach (see the following image). If the lighthouse light rotates clockwise at a constant rate of 10 revolutions/min, how fast does the beam of light move across the beach 2 mi away from the closest point on the beach?
Solution: \(100π\) mi/min
37)Using the same setup as the previous problem, determine at what rate the beam of light moves across the beach 1 mi away from the closest point on the beach.
38) You are walking to a bus stop at a right-angle corner. You move north at a rate of 2 m/sec and are 20 m south of the intersection. The bus travels west at a rate of 10 m/sec away from the intersection – you have missed the bus! What is the rate at which the angle between you and the bus is changing when you are 20 m south of the intersection and the bus is 10 m west of the intersection?
Solution: The angle is changing at a rate of \(\frac{21}{25}rad/sec\).
For the following exercises, refer to the figure of baseball diamond, which has sides of 90 ft.
39) [T] A batter hits a ball toward third base at 75 ft/sec and runs toward first base at a rate of 24 ft/sec. At what rate does the distance between the ball and the batter change when 2 sec have passed?
40) [T] A batter hits a ball toward second base at 80 ft/sec and runs toward first base at a rate of 30 ft/sec. At what rate does the distance between the ball and the batter change when the runner has covered one-third of the distance to first base? (Hint: Recall the law of cosines.)
Solution: The distance is increasing at a rate of \(62.50\) ft/sec.
41) [T] A batter hits the ball and runs toward first base at a speed of 22 ft/sec. At what rate does the distance between the runner and second base change when the runner has run 30 ft?
42) [T] Runners start at first and second base. When the baseball is hit, the runner at first base runs at a speed of 18 ft/sec toward second base and the runner at second base runs at a speed of 20 ft/sec toward third base. How fast is the distance between runners changing 1 sec after the ball is hit?
Solution: The distance is decreasing at a rate of \(11.99\) ft/sec.
1) What is the linear approximation for any generic linear function \(y=mx+b\)?
2) Determine the necessary conditions such that the linear approximation function is constant. Use a graph to prove your result.
Solution: \(f′(a)=0\)
3) Explain why the linear approximation becomes less accurate as you increase the distance between \(x\) and \(a\). Use a graph to prove your argument.
4) When is the linear approximation exact?
Solution: The linear approximation exact when \(y=f(x)\) is linear or constant.
For the following exercises, find the linear approximation \(L(x)\) to \(y=f(x)\) near \(x=a\) for the function.
5) [T] \(f(x)=x+x^4,a=0\)
6) [T] \(f(x)=\frac{1}{x},a=2\)
Solution: \(L(x)=\frac{1}{2}−\frac{1}{4}(x−2)\)
7) [T] \(f(x)=tanx,a=\frac{π}{4}\)
8) [T] \(f(x)=sinx,a=\frac{π}{2}\)
Solution: \(L(x)=1\)
9) [T] \(f(x)=xsinx,a=2π\)
10) [T] \(f(x)=sin^2x,a=0\)
For the following exercises, compute the values given within 0.01 by deciding on the appropriate \(f(x)\) and \(a\), and evaluating \(L(x)=f(a)+f′(a)(x−a).\) Check your answer using a calculator.
11) [T] \((2.001)^6\)
12) [T] \(sin(0.02)\)
Solution: \(0.02\)
13) [T] \(cos(0.03)\)
14) [T] \((15.99)^{1/4}\)
Solution: \(1.9996875\)
15) [T] \(\frac{1}{0.98}\)
Solution: \(0.001593\)
For the following exercises, determine the appropriate \(f(x)\) and \(a\), and evaluate \(L(x)=f(a)+f′(a)(x−a).\) Calculate the numerical error in the linear approximations that follow.
17) \((1.01)^3\)
18) \(cos(0.01)\)
Solution: \(1;\) error, \(~0.00005\)
19) \((sin(0.01))^2\)
20) \((1.01)^{−3}\)
Solution: \(0.97;\) error, \(~0.0006\)
21) \((1+\frac{1}{10})^{10}\)
22) \(\sqrt{8.99}\)
Solution: \(3−\frac{1}{600};\) error, \(~4.632×10^{−7}\)
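The two preceding blocks follow the same recipe: pick \(f\) and \(a\), evaluate \(L(x)=f(a)+f′(a)(x−a)\), and compare with the true value. As one hedged illustration, the sketch below applies it to exercise 17's value \((1.01)^3\) using \(f(x)=x^3\) and \(a=1\) (a reasonable choice, not the only one).

```python
def linear_approximation(f, df, a, x):
    """L(x) = f(a) + f'(a)(x - a)."""
    return f(a) + df(a) * (x - a)

f = lambda x: x ** 3
df = lambda x: 3 * x ** 2       # derivative of x^3

a, x = 1, 1.01
approx = linear_approximation(f, df, a, x)   # 1 + 3*0.01 = 1.03
exact = x ** 3

print(approx, exact, abs(exact - approx))    # the error is about 0.0003
```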
For the following exercises, find the differential of the function.
23) \(y=3x^4+x^2−2x+1\)
24) \(y=xcosx\)
Solution: \(dy=(cosx−xsinx)dx\)
25) \(y=\sqrt{1+x}\)
26) \(y=\frac{x^2+2}{x−1}\)
Solution: \(dy=(\frac{x^2−2x−2}{(x−1)^2})dx\)
For the following exercises, find the differential and evaluate for the given \(x\) and \(dx\).
27) \(y=3x^2−x+6, x=2, dx=0.1\)
28) \(y=\frac{1}{x+1},x=1, dx=0.25\)
Solution: \(dy=−\frac{1}{(x+1)^2}dx,−\frac{1}{16}\)
29) \(y=tanx,x=0, dx=\frac{π}{10}\)
30) \(y=\frac{3x^2+2}{\sqrt{x+1}}\), x=0, dx=0.1\)
Solution: \(dy=\frac{9x^2+12x−2}{2(x+1)^{3/2}}dx,−0.1\)
31) \(y=\frac{sin(2x)}{x}, x=π, dx=0.25\)
32) \(y=x^3+2x+\frac{1}{x}, x=1, dx=0.05\)
Solution: \(dy=(3x^2+2−\frac{1}{x^2})dx, 0.2\)
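For the differential exercises above, a sympy sketch like the following automates the pattern \(dy=f′(x)\,dx\); it is shown here as a cross-check against exercise 28's stated answer, not as a replacement for the by-hand work.

```python
import sympy as sp

x, dx = sp.symbols('x dx')

# Exercise 28: y = 1/(x + 1), evaluated at x = 1 with dx = 0.25
y = 1 / (x + 1)
dy = sp.diff(y, x) * dx                         # dy = -1/(x+1)^2 dx

print(dy)                                       # -dx/(x + 1)**2
print(dy.subs({x: 1, dx: sp.Rational(1, 4)}))   # -1/16, matching the stated solution
```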
For the following exercises, find the change in volume \(dV\) or in surface area \(dA.\)
33) \(dV\) if the sides of a cube change from 10 to 10.1.
34) \(dA\) if the sides of a cube change from \(x\) to \(x+dx\).
Solution: \(12xdx\)
35) \(dA\) if the radius of a sphere changes from \(r\) by \(dr.\)
36) \(dV\) if the radius of a sphere changes from \(r\) by \(dr\).
Solution: \(4πr^2dr\)
37) \(dV\) if a circular cylinder with \(r=2\) changes height from 3 cm to \(3.05cm.\)
38) \(dV\) if a circular cylinder of height 3 changes from \(r=2\) to \(r=1.9cm.\)
Solution: \(−1.2πcm^3\)
For the following exercises, use differentials to estimate the maximum and relative error when computing the surface area or volume.
39) A spherical golf ball is measured to have a radius of \(5mm,\) with a possible measurement error of \(0.1mm.\) What is the possible change in volume?
40) A pool has a rectangular base of 10 ft by 20 ft and a depth of 6 ft. What is the change in volume if you only fill it up to 5.5 ft?
Solution: \(−100 ft^3\)
41) An ice cream cone has height 4 in. and radius 1 in. If the cone is 0.1 in. thick, what is the difference between the volume of the cone, including the shell, and the volume of the ice cream you can fit inside the shell?
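The error-estimation exercises above all use the same idea: approximate the change in volume or area by the differential. As a hedged example, here is exercise 39's sphere, a 5 mm radius with a possible 0.1 mm measurement error.

```python
import math

r = 5.0     # measured radius (mm)
dr = 0.1    # possible measurement error (mm)

# dV = 4*pi*r^2 * dr approximates the possible change in volume
dV = 4 * math.pi * r**2 * dr
relative_error = dV / ((4 / 3) * math.pi * r**3)   # compare with the nominal volume

print(round(dV, 2), round(relative_error, 3))      # about 31.42 mm^3, relative error 0.06
```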
For the following exercises, confirm the approximations by using the linear approximation at \(x=0.\)
42) \(\sqrt{1−x}≈1−\frac{1}{2}x\)
43) \(\frac{1}{\sqrt{1−x^2}}≈1\)
44) \(\sqrt{c^2+x^2}≈c\)
1) In precalculus, you learned a formula for the position of the maximum or minimum of a quadratic equation \(y=ax^2+bx+c\), which was \(m=−\frac{b}{(2a)}\). Prove this formula using calculus.
2) If you are finding an absolute minimum over an interval \([a,b],\) why do you need to check the endpoints? Draw a graph that supports your hypothesis.
Solution: Answers may vary
3) If you are examining a function over an interval \((a,b),\) for \(a\) and \(b\) finite, is it possible not to have an absolute maximum or absolute minimum?
4) When you are checking for critical points, explain why you also need to determine points where \(f(x)\) is undefined. Draw a graph to support your explanation.
Solution: Answers will vary
5) Can you have a finite absolute maximum for \(y=ax^2+bx+c\) over \((−∞,∞)\)? Explain why or why not using graphical arguments.
6) Can you have a finite absolute maximum for \(y=ax^3+bx^2+cx+d\) over \((−∞,∞)\) assuming a is non-zero? Explain why or why not using graphical arguments.
Solution: No; answers will vary
7) Let \(m\) be the number of local minima and \(M\) be the number of local maxima. Can you create a function where \(M>m+2\)? Draw a graph to support your explanation.
8) Is it possible to have more than one absolute maximum? Use a graphical argument to prove your hypothesis.
Solution: Since the absolute maximum is the function (output) value rather than the x value, the answer is no; answers will vary
9) Is it possible to have no absolute minimum or maximum for a function? If so, construct such a function. If not, explain why this is not possible.
10) [T] Graph the function \(y=e^{ax}.\) For which values of \(a\), on any infinite domain, will you have an absolute minimum and absolute maximum?
Solution: When \(a=0\)
For the following exercises, determine where the local and absolute maxima and minima occur on the graph given. Assume domains are closed intervals unless otherwise specified.
Solution: Absolute minimum at 3; Absolute maximum at −2.2; local minima at −2, 1; local maxima at −1, 2
Solution: Absolute minima at −2, 2; absolute maxima at −2.5, 2.5; local minimum at 0; local maxima at −1, 1
For the following problems, draw graphs of \(f(x),\) which is continuous, over the interval \([−4,4]\) with the following properties:
15) Absolute maximum at \(x=2\) and absolute minima at \(x=±3\)
16) Absolute minimum at \(x=1\) and absolute maximum at \(x=2\)
Solution: Answers may vary.
17) Absolute maximum at \(x=4,\) absolute minimum at \(x=−1,\) local maximum at \(x=−2,\) and a critical point that is not a maximum or minimum at \(x=2\)
18) Absolute maxima at \(x=2\) and \(x=−3\), local minimum at \(x=1\), and absolute minimum at \(x=4\)
For the following exercises, find the critical points in the domains of the following functions.
19) \(y=4x^3−3x\)
20) \(y=4\sqrt{x}−x^2\)
Solution: \(x=1\)
21) \(y=\frac{1}{x−1}\)
22) \(y=ln(x−2)\)
Solution: None
23) \(y=tan(x)\)
24) \(y=\sqrt{4−x^2}\)
25) \(y=x^{3/2}−3x^{5/2}\)
26) \(y=\frac{x^2−1}{x^2+2x−3}\)
27) \(y=sin^2(x)\)
28) \(y=x+\frac{1}{x}\)
Solution: \(x=−1,1\)
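A hedged sympy sketch for locating critical points where \(f′(x)=0\), shown against exercise 28's stated answer; note that points where \(f′(x)\) is undefined must still be checked separately, since solving \(f′(x)=0\) will not find them.

```python
import sympy as sp

x = sp.symbols('x', real=True)
y = x + 1 / x                        # exercise 28

dy = sp.diff(y, x)                   # 1 - 1/x^2
critical_points = sp.solve(sp.Eq(dy, 0), x)

print(critical_points)               # [-1, 1], matching the stated solution
```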
For the following exercises, find the local and/or absolute maxima for the functions over the specified domain.
29) \(f(x)=x^2+3\) over \([−1,4]\)
30) \(y=x^2+\frac{2}{x}\) over \([1,4]\)
Solution: Absolute maximum: \(x=4, y=\frac{33}{2}\); absolute minimum: \(x=1, y=3\)
31) \(y=(x−x^2)^2\) over \([−1,1]\)
32) \(y=\frac{1}{(x−x^2)}\) over \([0,1]\)
Solution: Absolute minimum: \(x=\frac{1}{2}, y=4\)
33) \(y=\sqrt{9−x}\) over \([1,9]\)
34) \(y=x+sin(x)\) over \([0,2π]\)
Solution: Absolute maximum: \(x=2π, y=2π;\) absolute minimum: \(x=0, y=0\)
35) \(y=\frac{x}{1+x}\) over \([0,100]\)
36) \(y=|x+1|+|x−1|\) over \([−3,2]\)
Solution: Absolute maximum: \(x=−3;\) absolute minimum: \(−1≤x≤1, y=2\)
37) \(y=\sqrt{x}−\sqrt{x^3}\) over \([0,4]\)
38) \(y=sinx+cosx\) over \([0,2π]\)
Solution: Absolute maximum: \(x=\frac{π}{4}, y=\sqrt{2}\); absolute minimum: \(x=\frac{5π}{4}, y=−\sqrt{2}\)
39) \(y=4sinθ−3cosθ\) over \([0,2π]\)
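The closed-interval method behind these exercises is: find the critical points inside the interval, evaluate the function there and at both endpoints, and compare. The sketch below applies it to exercise 30 on \([1,4]\) as a cross-check of the stated answer.

```python
import sympy as sp

x = sp.symbols('x', real=True, positive=True)
f = x**2 + 2 / x                       # exercise 30
a, b = 1, 4

# Critical points strictly inside the interval, plus the two endpoints
interior_critical = [c for c in sp.solve(sp.diff(f, x), x) if a < c < b]
candidates = [sp.Integer(a), sp.Integer(b)] + interior_critical

values = {c: f.subs(x, c) for c in candidates}
print(values)   # f(1) = 3 (absolute minimum), f(4) = 33/2 (absolute maximum)
```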
For the following exercises, find the local and absolute minima and maxima for the functions over \((−∞,∞).\)
40) \(y=x^2+4x+5\)
Solution: Absolute minimum: \(x=−2, y=1\)
41) \(y=x^3−12x\)
42) \(y=3x^4+8x^3−18x^2\)
Solution: Absolute minimum: \(x=−3, y=−135;\) local maximum: \(x=0, y=0\); local minimum: \(x=1, y=−7\)
43) \(y=x^3(1−x)^6\)
44) \(y=\frac{x^2+x+6}{x−1}\)
Solution: Local maximum: \(x=1−2\sqrt{2}, y=3−4\sqrt{2}\); local minimum: \(x=1+2\sqrt{2}, y=3+4\sqrt{2}\)
45) \(y=\frac{x^2−1}{x−1}\)
For the following functions, use a calculator to graph the function and to estimate the absolute and local maxima and minima. Then, solve for them explicitly.
46) [T] \(y=3x\sqrt{1−x^2}\)
Solution: Absolute maximum: \(x=\frac{\sqrt{2}}{2}, y=\frac{3}{2};\) absolute minimum: \(x=−\frac{\sqrt{2}}{2}, y=−\frac{3}{2}\)
47) [T] \(y=x+sin(x)\)
48) [T] \(y=12x^5+45x^4+20x^3−90x^2−120x+3\)
Solution: Local maximum: \(x=−2,y=59\); local minimum: \(x=1, y=−130\)
49) [T] \(y=\frac{x^3+6x^2−x−30}{x−2}\)
50) [T] \(y=\frac{\sqrt{4−x^2}}{\sqrt{4+x^2}}\)
Solution: Absolute maximum: \(x=0, y=1;\) absolute minimum: \(x=−2,2, y=0\)
51) A company that produces cell phones has a cost function of \(C=x^2−1200x+36,400,\) where \(C\) is cost in dollars and \(x\) is number of cell phones produced (in thousands). How many units of cell phone (in thousands) minimizes this cost function?
52) A ball is thrown into the air and its position is given by \(h(t)=−4.9t^2+60t+5m.\) Find the height at which the ball stops ascending. How long after it is thrown does this happen?
Solution: \(h=\frac{9245}{49}m, t=\frac{300}{49}s\)
For the following exercises, consider the production of gold during the California gold rush (1848–1888). The production of gold can be modeled by \(G(t)=\frac{(25t)}{(t^2+16)}\), where t is the number of years since the rush began \((0≤t≤40)\) and \(G\) is ounces of gold produced (in millions). A summary of the data is shown in the following figure.
53) Find when the maximum (local and global) gold production occurred, and the amount of gold produced during that maximum.
54) Find when the minimum (local and global) gold production occurred. What was the amount of gold produced during this minimum?
Solution: The global minimum was in 1848, when no gold was produced.
Find the critical points, maxima, and minima for the following piecewise functions.
55) \(y=\begin{cases}x^2−4x & 0≤x≤1 \\ x^2−4 & 1<x≤2\end{cases}\)
56) \(y=\begin{cases}x^2+1 & x≤1 \\ x^2−4x+5 & x>1\end{cases}\)
Solution: Absolute minima: \(x=0, x=2, y=1\); local maximum at \(x=1, y=2\)
For the following exercises, find the critical points of the following generic functions. Are they maxima, minima, or neither? State the necessary conditions.
57) \(y=ax^2+bx+c,\) given that \(a>0\)
58) \(y=(x−1)^a\), given that \(a>1\)
Solution: No maxima/minima if \(a\) is odd, minimum at \(x=1\) if \(a\) is even
1) Why do you need continuity to apply the Mean Value Theorem? Construct a counterexample.
2) Why do you need differentiability to apply the Mean Value Theorem? Find a counterexample.
Solution: One example is \(f(x)=|x|+3,−2≤x≤2\)
3) When are Rolle's theorem and the Mean Value Theorem equivalent?
4) If you have a function with a discontinuity, is it still possible to have \(f′(c)(b−a)=f(b)−f(a)?\) Draw such an example or prove why not.
Solution: Yes, but the Mean Value Theorem still does not apply
For the following exercises, determine over what intervals (if any) the Mean Value Theorem applies. Justify your answer.
5) \(y=sin(πx)\)
6) \(y=\frac{1}{x^3}\)
Solution: \((−∞,0),(0,∞)\)
7) \(y=\sqrt{4−x^2}\)
8) \(y=\sqrt{x^2−4}\)
Solution: \((−∞,−2),(2,∞)\)
9) \(y=ln(3x−5)\)
For the following exercises, graph the functions on a calculator and draw the secant line that connects the endpoints. Estimate the number of points \(c\) such that \(f′(c)(b−a)=f(b)−f(a).\)
10) [T] \(y=3x^3+2x+1\) over \([−1,1]\)
Solution: 2 points
11) [T] \(y=tan(\frac{π}{4}x)\) over \([−\frac{3}{2},\frac{3}{2}]\)
12) [T] \(y=x^2cos(πx)\) over \([−2,2]\)
13) [T] \(y=x^6−\frac{3}{4}x^5−\frac{9}{8}x^4+\frac{15}{16}x^3+\frac{3}{32}x^2+\frac{3}{16}x+\frac{1}{32}\) over \([−1,1]\)
For the following exercises, use the Mean Value Theorem and find all points \(0<c<2\) such that \(f(2)−f(0)=f′(c)(2−0)\).
14) \(f(x)=x^3\)
Solution: \(c=\frac{2\sqrt{3}}{3}\)
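For example, the value above for exercise 14 follows directly: with \(f(x)=x^3\) on \([0,2]\), the secant slope is \(\frac{f(2)−f(0)}{2−0}=\frac{8}{2}=4\), and setting \(f′(c)=3c^2=4\) gives \(c=\frac{2}{\sqrt{3}}=\frac{2\sqrt{3}}{3}\), the only solution in \((0,2)\).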
15) \(f(x)=sin(πx)\)
16) \(f(x)=cos(2πx)\)
Solution: \(c=\frac{1}{2},1,\frac{3}{2}\)
17) \(f(x)=1+x+x^2\)
18) \(f(x)=(x−1)^{10}\)
Solution: \(c=1\)
19) \(f(x)=(x−1)^9\)
For the following exercises, show there is no \(c\) such that \(f(1)−f(−1)=f′(c)(2)\). Explain why the Mean Value Theorem does not apply over the interval \([−1,1].\)
20) \(f(x)=∣x−\frac{1}{2} ∣\)
Solution: Not differentiable
21) \(f(x)=\frac{1}{x^2}\)
22) \(f(x)=\sqrt{|x|}\)
23) \(f(x)=[x]\) (Hint: This is called the floor function and it is defined so that \(f(x)\) is the largest integer less than or equal to \(x\).)
For the following exercises, determine whether the Mean Value Theorem applies for the functions over the given interval \([a,b]\). Justify your answer.
24) \(y=e^x\) over \([0,1]\)
Solution: Yes
25) \(y=ln(2x+3)\) over \([−\frac{3}{2},0]\)
26) \(f(x)=tan(2πx)\) over \([0,2]\)
Solution: The Mean Value Theorem does not apply since the function is discontinuous at \(x=\frac{1}{4},\frac{3}{4},\frac{5}{4},\frac{7}{4}.\)
27) \(y=\sqrt{9−x^2}\) over \([−3,3]\)
28) \(y=\frac{1}{|x+1|}\) over \([0,3]\)
29) \(y=x^3+2x+1\) over \([0,6]\)
30) \(y=\frac{x^2+3x+2}{x}\) over \([−1,1]\)
Solution: The Mean Value Theorem does not apply; discontinuous at \(x=0.\)
31) \(y=\frac{x}{sin(πx)+1}\) over \([0,1]\)
32) \(y=ln(x+1)\) over \([0,e−1]\)
33) \(y=xsin(πx)\) over \([0,2]\)
34) \(y=5+|x|\) over \([−1,1]\)
Solution: The Mean Value Theorem does not apply; not differentiable at \(x=0\).
For the following exercises, consider the roots of the equation.
35) Show that the equation \(y=x^3+3x^2+16\) has exactly one real root. What is it?
36) Find the conditions for exactly one root (double root) for the equation \(y=x^2+bx+c\)
Solution: \(b=±2\sqrt{c}\)
37) Find the conditions for \(y=e^x−b\) to have one root. Is it possible to have more than one root?
For the following exercises, use a calculator to graph the function over the interval \([a,b]\) and graph the secant line from \(a\) to \(b\). Use the calculator to estimate all values of \(c\) as guaranteed by the Mean Value Theorem. Then, find the exact value of \(c\), if possible, or write the final equation and use a calculator to estimate to four digits.
38) [T] \(y=tan(πx)\) over \([−\frac{1}{4},\frac{1}{4}]\)
Solution: \(c=±\frac{1}{π}cos^{−1}(\frac{\sqrt{π}}{2}), c=±0.1533\)
39) [T] \(y=\frac{1}{\sqrt{x+1}}\) over \([0,3]\)
40) [T] \(y=∣x^2+2x−4∣\) over \([−4,0]\)
Solution: The Mean Value Theorem does not apply.
41) [T] \(y=x+\frac{1}{x}\) over \([\frac{1}{2},4]\)
42) [T] \(y=\sqrt{x+1}+\frac{1}{x^2}\) over \([3,8]\)
Solution: \(\frac{1}{2\sqrt{c+1}}−\frac{2}{c^3}=\frac{521}{2880}; c=3.133,5.867\)
43) At 10:17 a.m., you pass a police car at 55 mph that is stopped on the freeway. You pass a second police car at 55 mph at 10:53 a.m., which is located 39 mi from the first police car. If the speed limit is 60 mph, can the police cite you for speeding?
44) Two cars drive from one stoplight to the next, leaving at the same time and arriving at the same time. Is there ever a time when they are going the same speed? Prove or disprove.
45) Show that \(y=sec^2x\) and \(y=tan^2x\) have the same derivative. What can you say about \(y=sec^2x−tan^2x\)?
46) Show that \(y=csc^2x\) and \(y=cot^2x\) have the same derivative. What can you say about \(y=csc^2x−cot^2x\)?
Solution: It is constant.
1) If c is a critical point of \(f(x)\), when is there no local maximum or minimum at \(c\)? Explain.
2) For the function \(y=x^3\), is \(x=0\) both an inflection point and a local maximum/minimum?
Solution: It is not a local maximum/minimum because \(f′\) does not change sign
3) For the function \(y=x^3\), is \(x=0\) an inflection point?
4) Is it possible for a point \(c\) to be both an inflection point and a local extrema of a twice differentiable function?
Solution: No
6) Why do you need continuity for the first derivative test? Come up with an example.
7) Explain whether a concave-down function has to cross \(y=0\) for some value of \(x\).
Solution: False; for example, \(y=\sqrt{x}\).
8) Explain whether a polynomial of degree \(2\) can have an inflection point.
For the following exercises, analyze the graphs of \(f′\), then list all intervals where f is increasing or decreasing.
Solution: Increasing for \(−2<x<−1\) and \(x>2\); decreasing for \(x<−2\) and \(−1<x<2\)
Solution: Decreasing for \(x<1\), increasing for \(x>1\)
Solution: Decreasing for \(−2<x<−1\) and \(1<x<2\); increasing for \(−1<x<1\) and \(x<−2\) and \(x>2\)
For the following exercises, analyze the graphs of \(f′,\) then list all intervals where
a. \(f\) is increasing and decreasing and
b. the minima and maxima are located.
Solution: a. Increasing over \(−2<x<−1,0<x<1,x>2\), decreasing over \(x<−2, −1<x<0,1<x<2;\) b. maxima at \(x=−1\) and \(x=1\), minima at \(x=−2\) and \(x=0\) and \(x=2\)
Solution: a. Increasing over \(x>0\), decreasing over \(x<0;\) b. Minimum at \(x=0\)
For the following exercises, analyze the graphs of \(f′\), then list all inflection points and intervals \(f\) that are concave up and concave down.
Solution: Concave up on all \(x\), no inflection points
Solution: Concave up for \(x<0\) and \(x>1\), concave down for \(0<x<1\), inflection points at \(x=0\) and \(x=1\)
For the following exercises, draw a graph that satisfies the given specifications for the domain \(x=[−3,3].\) The function does not have to be continuous or differentiable.
24) \(f(x)>0,f′(x)>0\) over \(x>1,−3<x<0,f′(x)=0\) over \(0<x<1\)
25) \(f′(x)>0\) over \(x>2,−3<x<−1,f′(x)<0\) over \(−1<x<2,f''(x)<0\) for all \(x\)
Solution: Answer will vary
26) \(f''(x)<0\) over \(−1<x<1,f''(x)>0,−3<x<−1,1<x<3,\) local maximum at \(x=0,\) local minima at \(x=±2\)
27) There is a local maximum at \(x=2,\) local minimum at \(x=1,\) and the graph is neither concave up nor concave down.
28) There are local maxima at \(x=±1,\) the function is concave up for all \(x\), and the function remains positive for all \(x.\)
For the following exercises, determine
a. intervals where \(f\) is increasing or decreasing and
b. local minima and maxima of \(f\).
29) \(f(x)=sinx+sin^3x\) over \(−π<x<π\)
a. Increasing over \(−\frac{π}{2}<x<\frac{π}{2},\) decreasing over \(x<−\frac{π}{2}\) and \(x>\frac{π}{2}\)
b. Local maximum at \(x=\frac{π}{2}\); local minimum at \(x=−\frac{π}{2}\)
28) \(f(x)=x^2+cosx\)
For the following exercises, determine a. intervals where \(f\) is concave up or concave down, and b. the inflection points of \(f\).
29) \(f(x)=x^3−4x^2+x+2\)
a. Concave up for \(x>\frac{4}{3},\) concave down for \(x<\frac{4}{3}\)
b. Inflection point at \(x=\frac{4}{3}\)
a. intervals where \(f\) is increasing or decreasing,
b. local minima and maxima of \(f\),
c. intervals where \(f\) is concave up and concave down, and
d. the inflection points of \(f.\)
30) \(f(x)=x^2−6x\)
31) \(f(x)=x^3−6x^2\)
Solution: a. Increasing over \(x<0\) and \(x>4,\) decreasing over \(0<x<4\) b. Maximum at \(x=0\), minimum at \(x=4\) c. Concave up for \(x>2\), concave down for \(x<2\) d. Inflection point at \(x=2\)
33) \(f(x)=x^{11}−6x^{10}\)
Solution: a. Increasing over \(x<0\) and \(x>\frac{60}{11}\), decreasing over \(0<x<\frac{60}{11}\) b. Minimum at \(x=\frac{60}{11}\) c. Concave down for \(x<\frac{54}{11}\), concave up for \(x>\frac{54}{11}\) d. Inflection point at \(x=\frac{54}{11}\)
34) \(f(x)=x+x^2−x^3\)
35) \(f(x)=x^2+x+1\)
Solution: a. Increasing over \(x>−\frac{1}{2}\), decreasing over \(x<−\frac{1}{2}\) b. Minimum at \(x=−\frac{1}{2}\) c. Concave up for all \(x\) d. No inflection points
36) \(f(x)=x^3+x^4\)
b. local minima and maxima of \(f,\)
d. the inflection points of \(f.\) Sketch the curve, then use a calculator to compare your answer. If you cannot determine the exact answer analytically, use a calculator.
37) [T] \(f(x)=sin(πx)−cos(πx)\) over \(x=[−1,1]\)
Solution: a. Increases over \(−\frac{1}{4}<x<\frac{3}{4},\) decreases over \(x>\frac{3}{4}\) and \(x<−\frac{1}{4}\) b. Minimum at \(x=−\frac{1}{4}\), maximum at \(x=\frac{3}{4}\) c. Concave up for \(−\frac{3}{4}<x<\frac{1}{4}\), concave down for \(x<−\frac{3}{4}\) and \(x>\frac{1}{4}\) d. Inflection points at \(x=−\frac{3}{4},x=\frac{1}{4}\)
38) [T] \(f(x)=x+sin(2x)\) over \(x=[−\frac{π}{2},\frac{π}{2}]\)
39) [T] \(f(x)=sinx+tanx\) over \((−\frac{π}{2},\frac{π}{2})\)
Solution: a. Increasing for all \(x\) b. No local minimum or maximum c. Concave up for \(x>0\), concave down for \(x<0\) d. Inflection point at \(x=0\)
40) [T] \(f(x)=(x−2)^2(x−4)^2\)
41) [T] \(f(x)=\frac{1}{1−x},x≠1\)
Solution: a. Increasing for all \(x\) where defined b. No local minima or maxima c. Concave up for \(x<1\); concave down for \(x>1\) d. No inflection points in domain
42) [T] \(f(x)=\frac{sinx}{x}\) over \(x=[−2π,0)∪(0,2π]\)
43) \(f(x)=sin(x)e^x\) over \(x=[−π,π]\)
Solution: a. Increasing over \(−\frac{π}{4}<x<\frac{3π}{4}\), decreasing over \(x>\frac{3π}{4},x<−\frac{π}{4}\) b. Minimum at \(x=−\frac{π}{4}\), maximum at \(x=\frac{3π}{4}\) c. Concave up for \(−\frac{π}{2}<x<\frac{π}{2}\), concave down for \(x<−\frac{π}{2},x>\frac{π}{2}\) d. Inflection points at \(x=±\frac{π}{2}\)
44) \(f(x)=lnx\sqrt{x},x>0\)
45) \(f(x)=\frac{1}{4}\sqrt{x}+\frac{1}{x},x>0\)
Solution: a. Increasing over \(x>4,\) decreasing over \(0<x<4\) b. Minimum at \(x=4\) c. Concave up for \(0<x<8\sqrt[3]{2}\), concave down for \(x>8\sqrt[3]{2}\) d. Inflection point at \(x=8\sqrt[3]{2}\)
46) \(f(x)=\frac{e^x}{x},x≠0\)
For the following exercises, interpret the sentences in terms of \(f,f′,\) and \(f''.\)
47) The population is growing more slowly. Here \(f\) is the population.
Solution: \(f>0,f′>0,f''<0\)
48) A bike accelerates faster, but a car goes faster. Here \(f=\) Bike's position minus Car's position.
49) The airplane lands smoothly. Here \(f\) is the plane's altitude.
Solution: \(f>0,f′<0,f''<0\)
50) Stock prices are at their peak. Here \(f\) is the stock price.
51) The economy is picking up speed. Here \(f\) is a measure of the economy, such as GDP.
Solution: \(f>0,f′>0,f''>0\)
For the following exercises, consider a third-degree polynomial \(f(x),\) which has the properties \(f′(1)=0, f′(3)=0.\)
Determine whether the following statements are true or false. Justify your answer.
52) \(f(x)=0\) for some \(1≤x≤3\)
53) \(f''(x)=0\) for some \(1≤x≤3\)
Solution: True, by the Mean Value Theorem
54) There is no absolute maximum at \(x=3\)
55) If \(f(x)\) has three roots, then it has \(1\) inflection point.
Solution: True, examine derivative
56) If \(f(x)\) has one inflection point, then it has three real roots.
For the following exercises, examine the graphs. Identify where the vertical asymptotes are located.
Solution: \(x=−1,x=2\)
For the following functions \(f(x)\), determine whether there is an asymptote at \(x=a\). Justify your answer without graphing on a calculator.
6) \(f(x)=\frac{x+1}{x^2+5x+4},a=−1\)
7) \(f(x)=\frac{x}{x−2},a=2\)
Solution: Yes, there is a vertical asymptote
8) \(f(x)=(x+2)^{3/2},a=−2\)
9) \(f(x)=(x−1)^{−1/3},a=1\)
Solution: Yes, there is vertical asymptote
10) \(f(x)=1+x^{−2/5},a=1\)
For the following exercises, evaluate the limit.
11) \(lim_{x→∞}\frac{1}{3x+6}\)
12) \(lim_{x→∞}\frac{2x−5}{4x}\)
13) \(lim_{x→∞}\frac{x^2−2x+5}{x+2}\)
Solution: \(∞\)
14) \(lim_{x→−∞}\frac{3x^3−2x}{x^2+2x+8}\)
15) \(lim_{x→−∞}\frac{x^4−4x^3+1}{2−2x^2−7x^4}\)
Solution: \(−\frac{1}{7}\)
16) \(lim_{x→∞}\frac{3x}{\sqrt{x^2+1}}\)
17) \(lim_{x→−∞}\frac{\sqrt{4x^2−1}}{x+2}\)
Solution: \(−2\)
18) \(lim_{x→∞}\frac{4x}{\sqrt{x^2−1}}\)
19) \(lim_{x→−∞}\frac{4x}{\sqrt{x^2−1}}\)
20) \(lim_{x→∞}\frac{2\sqrt{x}}{x−\sqrt{x}+1}\)
For the following exercises, find the horizontal and vertical asymptotes.
21) \(f(x)=x−\frac{9}{x}\)
Solution: Horizontal: none, vertical: \(x=0\)
22) \(f(x)=\frac{1}{1−x^2}\)
23) \(f(x)=\frac{x^3}{4−x^2}\)
Solution: Horizontal: none, vertical: \(x=±2\)
24) \(f(x)=\frac{x^2+}{3x^2+1}\)
25) \(f(x)=sin(x)sin(2x)\)
Solution: Horizontal: none, vertical: none
26) \(f(x)=cosx+cos(3x)+cos(5x)\)
27) \(f(x)=\frac{xsin(x)}{x^2−1}\)
Solution: Horizontal: \(y=0,\) vertical: \(x=±1\)
28) \(f(x)=\frac{x}{sin(x)}\)
29) \(f(x)=\frac{1}{x^3+x^2}\)
Solution: Horizontal: \(y=0,\) vertical: \(x=0\) and \(x=−1\)
30) \(f(x)=\frac{1}{x−1}−2x\)
31) \(f(x)=\frac{x^3+1}{x^3−1}\)
Solution: Horizontal: \(y=1,\) vertical: \(x=1\)
32) \(f(x)=\frac{sinx+cosx}{sinx−cosx}\)
33) \(f(x)=x−sinx\)
34) \(f(x)=\frac{1}{x}−\sqrt{x}\)
For the following exercises, construct a function \(f(x)\) that has the given asymptotes.
35) \(x=1\) and \(y=2\)
Solution: Answers will vary, for example: \(y=\frac{2x}{x−1}\)
37) \(y=4, x=−1\)
Solution: Answers will vary, for example: \(y=\frac{4x}{x+1}\)
38) \(x=0\)
For the following exercises, graph the function on a graphing calculator on the window \(x=[−5,5]\) and estimate the horizontal asymptote or limit. Then, calculate the actual horizontal asymptote or limit.
39) [T] \(f(x)=\frac{1}{x+10}\)
Solution: \(y=0\)
40) [T] \(f(x)=\frac{x+1}{x^2+7x+6}\)
41) [T] \(lim_{x→−∞}x^2+10x+25\)
42) [T] \(lim_{x→−∞}\frac{x+2}{x^2+7x+6}\)
43) [T] \(lim_{x→∞}\frac{3x+2}{x+5}\)
For the following exercises, draw a graph of the functions without using a calculator. Be sure to notice all important features of the graph: local maxima and minima, inflection points, and asymptotic behavior.
44) \(y=3x^2+2x+4\)
45) \(y=x^3−3x^2+4\)
46) \(y=\frac{2x+1}{x^2+6x+5}\)
47) \(y=\frac{x^3+4x^2+3x}{3x+9}\)
48) \(y=\frac{x^2+x−2}{x^2−3x−4}\)
49) \(y=\sqrt{x^2−5x+4}\)
50) \(y=2x\sqrt{16−x^2}\)
51) \(y=\frac{cosx}{x}\), on \(x=[−2π,2π]\)
52) \(y=e^x−x^3\)
53) \(y=xtanx,x=[−π,π]\)
54) \(y=xln(x),x>0\)
55) \(y=x^2sin(x),x=[−2π,2π]\)
56) For \(f(x)=\frac{P(x)}{Q(x)}\) to have an asymptote at \(y=2\) then the polynomials \(P(x)\) and \(Q(x)\) must have what relation?
57) For \(f(x)=\frac{P(x)}{Q(x)}\) to have an asymptote at \(x=0\), then the polynomials \(P(x)\) and \(Q(x).\) must have what relation?
Solution: \(Q(x)\) must have \(x^{k+1}\) as a factor, where \(P(x)\) has \(x^k\) as a factor.
58) If \(f′(x)\) has asymptotes at \(y=3\) and \(x=1\), then \(f(x)\) has what asymptotes?
59) Both \(f(x)=\frac{1}{(x−1)}\) and \(g(x)=\frac{1}{(x−1)^2}\) have asymptotes at \(x=1\) and \(y=0.\) What is the most obvious difference between these two functions?
Solution: \(lim_{x→1^−}f(x)=−∞\) and \(lim_{x→1^−}g(x)=∞\)
60) True or false: Every ratio of polynomials has vertical asymptotes.
For the following exercises, answer by proof, counterexample, or explanation.
1) When you find the maximum for an optimization problem, why do you need to check the sign of the derivative around the critical points?
Solution: The critical points can be the minima, maxima, or neither.
2) Why do you need to check the endpoints for optimization problems?
3) True or False. For every continuous nonlinear function, you can find the value \(x\) that maximizes the function.
Solution: False; \(y=−x^2\) has a minimum only
4) True or False. For every continuous nonconstant function on a closed, finite domain, there exists at least one \(x\) that minimizes or maximizes the function.
For the following exercises, set up and evaluate each optimization problem.
5) To carry a suitcase on an airplane, the length \(+\) width \(+\) height of the box must be less than or equal to \(62in\). Assuming the height is fixed, show that the maximum volume is \(V=h(31−(\frac{1}{2})h)^2.\) What height allows you to have the largest volume?
Solution: \(h=\frac{62}{3}\) in.
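The stated height can be checked by differentiating the volume formula: \(\frac{dV}{dh}=(31−\frac{1}{2}h)^2−h(31−\frac{1}{2}h)=(31−\frac{1}{2}h)(31−\frac{3}{2}h)\), which is zero at \(h=62\) (zero volume) and at \(h=\frac{62}{3}\) in., the maximizing height.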
6) You are constructing a cardboard box with the dimensions \(2 m by 4 m.\) You then cut equal-size squares from each corner so you may fold the edges. What are the dimensions of the box with the largest volume?
7) Find the positive integer that minimizes the sum of the number and its reciprocal.
8) Find two positive integers such that their sum is \(10\), and minimize and maximize the sum of their squares.
For the following exercises, consider the construction of a pen to enclose an area.
9) You have \(400ft\) of fencing to construct a rectangular pen for cattle. What are the dimensions of the pen that maximize the area?
Solution: \(100ft by 100ft\)
10) You have \(800ft\) of fencing to make a pen for hogs. If you have a river on one side of your property, what is the dimension of the rectangular pen that maximizes the area?
11) You need to construct a fence around an area of \(1600ft^2.\) What are the dimensions of the rectangular pen to minimize the amount of material needed?
Solution: \(40ft by 40ft\)
12) Two poles are connected by a wire that is also connected to the ground. The first pole is \(20ft\) tall and the second pole is \(10ft\) tall. There is a distance of \(30ft\) between the two poles. Where should the wire be anchored to the ground to minimize the amount of wire needed?
13) [T] You are moving into a new apartment and notice there is a corner where the hallway narrows from \(8 ft to 6 ft\). What is the length of the longest item that can be carried horizontally around the corner?
Solution: 19.73 ft
14) A patient's pulse measures \(70 bpm, 80 bpm\), then \(120 bpm.\) To determine an accurate measurement of pulse, the doctor wants to know what value minimizes the expression \((x−70)^2+(x−80)^2+(x−120)^2\)? What value minimizes it?
15) In the previous problem, assume the patient was nervous during the third measurement, so we only weight that value half as much as the others. What is the value that minimizes \((x−70)^2+ (x−80)^2+\frac{1}{2}(x−120)^2?\)
Solution: \(84 bpm\)
16) You can run at a speed of \(6\) mph and swim at a speed of \(3\) mph and are located on the shore, \(4\) miles east of an island that is \(1\) mile north of the shoreline. How far should you run west to minimize the time needed to reach the island?
For the following problems, consider a lifeguard at a circular pool with diameter \(40m.\) He must reach someone who is drowning on the exact opposite side of the pool, at position \(C\). The lifeguard swims with a speed \(v\) and runs around the pool at speed \(w=3v.\)
17) Find a function that measures the total amount of time it takes to reach the drowning person as a function of the swim angle, \(θ\).
Solution: \(T(θ)=\frac{40θ}{3v}+\frac{40cosθ}{v}\)
18) Find at what angle \(θ\) the lifeguard should swim to reach the drowning person in the least amount of time.
19) A truck uses gas as \(g(v)=av+\frac{b}{v}\), where \(v\) represents the speed of the truck and \(g\) represents the gallons of fuel per mile. At what speed is fuel consumption minimized?
Solution: \(v=\sqrt{\frac{b}{a}}\)
For the following exercises, consider a limousine that gets \(m(v)=\frac{(120−2v)}{5}mi/gal\) at speed \(v\), the chauffeur costs \($15/h\), and gas is \($3.5/gal.\)
20) Find the cost per mile at speed \(v.\)
21) Find the cheapest driving speed.
Solution: approximately \(34.02mph\)
For the following exercises, consider a pizzeria that sells pizzas for a revenue of \(R(x)=ax\) and costs \(C(x)=b+cx+dx^2\), where \(x\) represents the number of pizzas.
22) Find the profit function for the number of pizzas. How many pizzas gives the largest profit per pizza?
23) Assume that \(R(x)=10x\) and \(C(x)=2x+x^2\). How many pizzas sold maximizes the profit?
24) Assume that \(R(x)=15x,\) and \(C(x)=60+3x+\frac{1}{2}x^2\). How many pizzas sold maximizes the profit?
For the following exercises, consider a wire \(4ft\) long cut into two pieces. One piece forms a circle with radius r and the other forms a square of side \(x\).
25) Choose \(x\) to maximize the sum of their areas.
26) Choose \(x\) to minimize the sum of their areas.
For the following exercises, consider two nonnegative numbers \(x\) and \(y\) such that \(x+y=10\). Maximize and minimize the quantities.
27) \(xy\)
Solution: Maximal: \(x=5,y=5;\) minimal: \(x=0,y=10\) and \(y=0,x=10\)
28) \(x^2y^2\)
29) \(y−\frac{1}{x}\)
Solution: Maximal: \(x=1,y=9;\) minimal: none
30) \(x^2−y\)
For the following exercises, draw the given optimization problem and solve.
31) Find the volume of the largest right circular cylinder that fits in a sphere of radius \(1\).
Solution: \(\frac{4π}{3\sqrt{3}}\)
32) Find the volume of the largest right cone that fits in a sphere of radius \(1\).
33) Find the area of the largest rectangle that fits into the triangle with sides \(x=0,y=0\) and \(\frac{x}{4}+\frac{y}{6}=1.\)
34) Find the largest volume of a cylinder that fits into a cone that has base radius \(R\) and height \(h\).
35) Find the dimensions of the closed cylinder volume \(V=16π\) that has the least amount of surface area.
Solution: \(r=2,h=4\)
36) Find the dimensions of a right cone with surface area \(S=4π\) that has the largest volume.
For the following exercises, consider the points on the given graphs. Use a calculator to graph the functions.
37) [T] Where is the line \(y=5−2x\) closest to the origin?
Solution: \((2,1)\)
38) [T] Where is the line \(y=5−2x\) closest to point \((1,1)\)?
39) [T] Where is the parabola \(y=x^2\) closest to point \((2,0)\)?
Solution: \((0.8351,0.6974)\)
For the following exercises, set up, but do not evaluate, each optimization problem.
41) A window is composed of a semicircle placed on top of a rectangle. If you have \(20ft\) of window-framing materials for the outer frame, what is the maximum size of the window you can create? Use r to represent the radius of the semicircle.
Solution: \(A=20r−2r^2−\frac{1}{2}πr^2\)
42) You have a garden row of \(20\) watermelon plants that produce an average of \(30\) watermelons apiece. For any additional watermelon plants planted, the output per watermelon plant drops by one watermelon. How many extra watermelon plants should you plant?
43) You are constructing a box for your cat to sleep in. The plush material for the square bottom of the box costs \($5/ft^2\) and the material for the sides costs \($2/ft^2\). You need a box with volume \(4ft^3\). Find the dimensions of the box that minimize cost. Use \(x\) to represent the length of the side of the box.
Solution: \(C(x)=5x^2+\frac{32}{x}\)
44) You are building five identical pens adjacent to each other with a total area of \(1000m^2\), as shown in the following figure. What dimensions should you use to minimize the amount of fencing?
optimization problems: problems that are solved by finding the maximum or minimum value of a function
45) You are the manager of an apartment complex with \(50\) units. When you set rent at \($800/month,\) all apartments are rented. As you increase rent by \($25/month\), one fewer apartment is rented. Maintenance costs run \($50/month\) for each occupied unit. What is the rent that maximizes the total amount of profit?
Solution: \(P(x)=(50−x)(800+25x−50)\)
1) Evaluate the limit \(lim_{x→∞}\frac{e^x}{x}\).
2) Evaluate the limit \(lim_{x→∞}\frac{e^x}{x^k}\).
3) Evaluate the limit \(lim_{x→∞}\frac{lnx}{x^k}\).
4) Evaluate the limit \(lim_{x→a}\frac{x−a}{x^2−a^2}\).
Solution: \(\frac{1}{2a}\)
5. Evaluate the limit \(lim_{x→a}\frac{x−a}{x^3−a^3}\).
6. Evaluate the limit \(lim_{x→a}\frac{x−a}{x^n−a^n}\).
Solution: \(\frac{1}{na^{n−1}}\)
For the following exercises, determine whether you can apply L'Hôpital's rule directly. Explain why or why not. Then, indicate if there is some way you can alter the limit so you can apply L'Hôpital's rule.
7) \(lim_{x→0^+}x^2lnx\)
8) \(lim_{x→∞}x^{1/x}\)
Solution: Cannot apply directly; use logarithms
9) \(lim_{x→0}x^{2/x}\)
10) \(lim_{x→0}\frac{x^2}{1/x}\)
Solution: Cannot apply directly; rewrite as \(lim_{x→0}x^3\)
11) \(lim_{x→∞}\frac{e^x}{x}\)
For the following exercises, evaluate the limits with either L'Hôpital's rule or previously learned methods.
21) \(lim_{x→3}\frac{x^2−9}{x−3}\)
22) \(lim_{x→3}\frac{x^2−9}{x+3}\)
23) \(lim_{x→0}\frac{(1+x)^{−2}−1}{x}\)
24) \(lim_{x→π/2}\frac{cosx}{\frac{π}{2}−x}\)
25) \(lim_{x→π}\frac{x−π}{sinx}\)
26) \(lim_{x→1}\frac{x−1}{sinx}\)
27) \(lim_{x→0}\frac{(1+x)^n−1}{x}\)
Solution: \(n\)
28) \(lim_{x→0}\frac{(1+x)^n−1−nx}{x^2}\)
29) \(lim_{x→0}\frac{sinx−tanx}{x^3}\)
30) \(lim_{x→0}\frac{\sqrt{1+x}−\sqrt{1−x}}{x}\)
31) \(lim_{x→0}\frac{e^x−x−1}{x^2}\)
Solution: \(\frac{1}{2}\)
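This value comes from applying L'Hôpital's rule twice, since each step has the indeterminate form \(\frac{0}{0}\): \(lim_{x→0}\frac{e^x−x−1}{x^2}=lim_{x→0}\frac{e^x−1}{2x}=lim_{x→0}\frac{e^x}{2}=\frac{1}{2}\).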
32) \(lim_{x→0}\frac{tanx}{\sqrt{x}}\)
33) \(lim_{x→1}\frac{x−1}{lnx}\)
34) \(lim_{x→0}(x+1)^{1/x}\)
35) \(lim_{x→1}\frac{\sqrt{x}−\sqrt[3]{x}}{x−1}\)
36) \(lim_{x→0^+}x^{2x}\)
37) \(lim_{x→∞}xsin(\frac{1}{x})\)
38) \(lim_{x→0}\frac{sinx−x}{x^2}\)
39) \(lim_{x→0^+}xln(x^4)\)
40) \(lim_{x→∞}(x−e^x)\)
41) \(lim_{x→∞}x^2e^{−x}\)
42) \(lim_{x→0}\frac{3^x−2^x}{x}\)
43) \(lim_{x→0}\frac{1+1/x}{1−1/x}\)
44) \(lim_{x→π/4}(1−tanx)cotx\)
45) \(lim_{x→∞}xe^{1/x}\)
46) \(lim_{x→0}x^{1/cosx}\)
47) \(lim_{x→0}x^{1/x}\)
48) \(lim_{x→0}(1−\frac{1}{x})^x\)
49) \(lim_{x→∞}(1−\frac{1}{x})^x\)
Solution: \(\frac{1}{e}\)
For the following exercises, use a calculator to graph the function and estimate the value of the limit, then use L'Hôpital's rule to find the limit directly.
50) [T] \(lim_{x→0}\frac{e^x−1}{x}\)
51) [T] \(lim_{x→0}xsin(\frac{1}{x})\)
52) [T] \(lim_{x→1}\frac{x−1}{1−cos(πx)}\)
53) [T] \(lim_{x→1}\frac{e^{(x−1)}−1}{x−1}\)
54) [T] \(lim_{x→1}\frac{(x−1)^2}{lnx}\)
55) [T] \(lim_{x→π}\frac{1+cosx}{sinx}\)
56) [T] \(lim_{x→0}(cscx−\frac{1}{x})\)
57) [T] \(lim_{x→0^+}tan(x^x)\)
Solution: \(tan(1)\)
58) [T] \(lim_{x→0^+}\frac{lnx}{sinx}\)
59) [T] \(lim_{x→0}\frac{e^x−e^{−x}}{x}\)
For the following exercises, write Newton's formula as \(x_{n+1}=F(x_n)\) for solving \(f(x)=0\).
1) \(f(x)=x^2+1\)
2) \(f(x)=x^3+2x+1\)
Solution: \(F(x_n)=x_n−\frac{x_n^3+2x_n+1}{3x_n^2+2}\)
3) \(f(x)=sinx\)
4) \(f(x)=e^x\)
Solution: \(F(x_n)=x_n−\frac{e^{x_n}}{e^{x_n}}\)
5) \(f(x)=x^3+3xe^x\)
For the following exercises, solve \(f(x)=0\) using the iteration \(x_{n+1}=x_n−cf(x_n)\), which differs slightly from Newton's method. Find a \(c\) that works and a \(c\) that fails to converge, with the exception of \(c=0.\)
6) \(f(x)=x^2−4,\) with \(x_0=0\)
Solution: \(|c|>0.5\) fails, \(|c|≤0.5\) works
7) \(f(x)=x^2−4x+3,\) with \(x_0=2\)
8) What is the value of \("c"\) for Newton's method?
Solution: \(c=\frac{1}{f′(x_n)}\)
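As a quick illustration of this iteration (a minimal Python sketch with illustrative names, not part of the exercises), the loop below applies \(x_{n+1}=x_n−cf(x_n)\) to \(f(x)=x^2−4\); a small \(c\) drifts toward the root \(x=2\), a large \(c\) overshoots and diverges, and choosing \(c=1/f′(x_n)\) at each step would recover Newton's method.

def fixed_c_iteration(f, x0, c, steps=20):
    # Iterate x_{n+1} = x_n - c*f(x_n) and return the successive values.
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] - c * f(xs[-1]))
    return xs

f = lambda x: x**2 - 4
print(fixed_c_iteration(f, x0=1.0, c=0.2)[-1])   # approaches the root x = 2
print(fixed_c_iteration(f, x0=1.0, c=1.0)[:4])   # [1.0, 4.0, -8.0, -68.0]: overshoots and diverges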
For the following exercises, start at
a. \(x_0=0.6\) and
b. \(x_0=2.\)
Compute \(x_1\) and \(x_2\) using the specified iterative method.
9) \(x_{n+1}=x_n^2−\frac{1}{2}\)
10) \(x_{n+1}=2x_n(1−x_n)\)
Solution: \(a. x_1=\frac{12}{25},x_2=\frac{312}{625}; b. x_1=−4, x_2=−40\)
11) \(x_{n+1}=\sqrt{x_n}\)
12) \(x_{n+1}=\frac{1}{\sqrt{x_n}}\)
Solution: \(a. x_1=1.291, x_2=0.8801; b. x_1=0.7071, x_2=1.189\)
14) \(x_{n+1}=x_n^2+x_n−2\)
Solution: \(a. x_1=−\frac{26}{25}, x_2=−\frac{1224}{625}; b. x_1=4,x_2=18\)
15) \(x_{n+1}=\frac{1}{2}x_n−1\)
16) \(x_{n+1}=|x_n|\)
Solution: \(a. x_1=\frac{6}{10},x_2=\frac{6}{10}; b. x_1=2,x_2=2\)
For the following exercises, solve to four decimal places using Newton's method and a computer or calculator. Choose any initial guess \(x_0\) that is not the exact root.
17) \(x^2−10=0\)
18) \(x^4−100=0\)
Solution: \(3.1623\) or \(−3.1623\)
19) \(x^2−x=0\)
Solution: \(0,−1 or 1\)
21) \(x+5cos(x)=0\)
22) \(x+tan(x)=0,\) choose \(x_0∈(−\frac{π}{2},\frac{π}{2})\)
23) \(\frac{1}{1−x}=2\)
24) \(1+x+x^2+x^3+x^4=2\)
Solution: \(0.5188\) or \(−1.2906\)
25) \(x^3+(x+1)^3=10^3\)
26) \(x=sin^2(x)\)
For the following exercises, use Newton's method to find the fixed points of the function where \(f(x)=x\); round to three decimals.
27) \(sinx\)
28) \(tan(x)\) on \(x=(\frac{π}{2},\frac{3π}{2})\)
Solution: \(4.493\)
29) \(e^x−2\)
30) \(ln(x)+2\)
Solution: \(0.159,3.146\)
31) Newton's method can be used to find maxima and minima of functions in addition to the roots. In this case apply Newton's method to the derivative function \(f′(x)\) to find its roots, instead of the original function. For the following exercises, consider the formulation of the method.
To find candidates for maxima and minima, we need to find the critical points \(f′(x)=0.\) Show that to solve for the critical points of a function \(f(x)\), Newton's method is given by \(x_{n+1}=x_n−\frac{f′(x_n)}{f''(x_n)}\).
What additional restrictions are necessary on the function \(f\)?
Solution: We need \(f\) to be twice continuously differentiable.
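A brief Python sketch of this formulation (the sample function here is illustrative): applying the update \(x_{n+1}=x_n−\frac{f′(x_n)}{f''(x_n)}\) drives \(f′\) to zero, locating a candidate extremum.

def newton_critical_point(fprime, fsecond, x0, tol=1e-8, max_iter=50):
    # Newton's method applied to f': x_{n+1} = x_n - f'(x_n)/f''(x_n).
    x = x0
    for _ in range(max_iter):
        step = fprime(x) / fsecond(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# For f(x) = x^2 + 2x + 4: f'(x) = 2x + 2 and f''(x) = 2, so the iteration lands on the minimum at x = -1.
print(newton_critical_point(lambda x: 2 * x + 2, lambda x: 2.0, x0=3.0))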
For the following exercises, use Newton's method to find the location of the local minima and/or maxima of the following functions; round to three decimals.
32) Minimum of \(f(x)=x^2+2x+4\)
33) Minimum of \(f(x)=3x^3+2x^2−16\)
34) Minimum of \(f(x)=x^2e^x\)
35) Maximum of \(f(x)=x+\frac{1}{x}\)
Solution: \(x=−1\)
36) Maximum of \(f(x)=x^3+10x^2+15x−2\)
37) Maximum of \(f(x)=\frac{\sqrt{x}−\sqrt[3]{x}}{x}\)
Solution: \(x=5.619\)
38) Minimum of \(f(x)=x^2sinx,\) closest non-zero minimum to \(x=0\)
39) Minimum of \(f(x)=x^4+x^3+3x^2+12x+6\)
Solution: \(x=−1.326\)
For the following exercises, use the specified method to solve the equation. If it does not work, explain why it does not work.
40) Newton's method, \(x^2+2=0\)
41) Newton's method, \(0=e^x\)
Solution: There is no solution to the equation.
42) Newton's method, \(0=1+x^2\) starting at \(x_0=0\)
43) Solving \(x_{n+1}=−x_n^3\) starting at \(x_0=−1\)
Solution: It enters a cycle.
For the following exercises, use the secant method, an alternative iterative method to Newton's method. The formula is given by
\(x_n=x_{n−1}−f(x_{n−1})\frac{x_{n−1}−x_{n−2}}{f(x_{n−1})−f(x_{n−2})}.\)
44) a root to \(0=x^2−x−3\) accurate to three decimal places.
45) Find a root to \(0=sinx+3x\) accurate to four decimal places.
46) Find a root to \(0=e^x−2\) accurate to four decimal places.
47) Find a root to \(ln(x+2)=\frac{1}{2}\) accurate to four decimal places.
Solution: \(−0.3513\)
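A compact Python sketch of the secant iteration defined above (function and variable names are illustrative); run on exercise 47 it reproduces the root \(−0.3513\) given in the solution.

import math

def secant(f, x0, x1, tol=1e-6, max_iter=100):
    # x_n = x_{n-1} - f(x_{n-1}) * (x_{n-1} - x_{n-2}) / (f(x_{n-1}) - f(x_{n-2}))
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

# Exercise 47: solve ln(x + 2) = 1/2, i.e. f(x) = ln(x + 2) - 0.5 = 0
print(round(secant(lambda x: math.log(x + 2) - 0.5, 0.0, 1.0), 4))   # -0.3513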
48) Why would you use the secant method over Newton's method? What are the necessary restrictions on \(f\)?
For the following exercises, use both Newton's method and the secant method to calculate a root for the following equations. Use a calculator or computer to calculate how many iterations of each are needed to reach within three decimal places of the exact answer. For the secant method, use the first guess from Newton's method.
49) \(f(x)=x^2+2x+1,x_0=1\)
Solution: Newton: \(11\) iterations, secant: \(16\) iterations
50) \(f(x)=x^2,x_0=1\)
51) \(f(x)=sinx,x_0=1\)
Solution: Newton: three iterations, secant: six iterations
52) \(f(x)=e^x−1,x_0=2\)
Solution: Newton: five iterations, secant: eight iterations
In the following exercises, consider Kepler's equation regarding planetary orbits, \(M=E−εsin(E)\), where \(M\) is the mean anomaly, \(E\) is the eccentric anomaly, and \(ε\) measures eccentricity.
54) Use Newton's method to solve for the eccentric anomaly \(E\) when the mean anomaly \(M=\frac{π}{3}\) and the eccentricity of the orbit \(ε=0.25;\) round to three decimals.
55) Use Newton's method to solve for the eccentric anomaly \(E\) when the mean anomaly \(M=\frac{3π}{2}\) and the eccentricity of the orbit \(ε=0.8;\) round to three decimals.
Solution: \(E=4.071\)
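One way to set this up (a minimal Python sketch; the function name is illustrative): apply Newton's method to \(g(E)=E−εsin(E)−M\), whose derivative is \(g′(E)=1−εcos(E)\), starting from \(E_0=M\).

import math

def solve_kepler(M, eps, tol=1e-10, max_iter=50):
    # Newton's method on g(E) = E - eps*sin(E) - M, with g'(E) = 1 - eps*cos(E).
    E = M   # the mean anomaly is a reasonable starting guess
    for _ in range(max_iter):
        step = (E - eps * math.sin(E) - M) / (1 - eps * math.cos(E))
        E -= step
        if abs(step) < tol:
            break
    return E

# Exercise 55: M = 3*pi/2, eps = 0.8 gives E ≈ 4.071, matching the solution above.
print(round(solve_kepler(1.5 * math.pi, 0.8), 3))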
The following two exercises consider a bank investment. The initial investment is \($10,000\). After \(25\) years, the investment has tripled to \($30,000.\)
56) Use Newton's method to determine the interest rate if the interest was compounded annually.
57) Use Newton's method to determine the interest rate if the interest was compounded continuously.
Solution: \(4.394%\)
58) The cost for printing a book can be given by the equation \(C(x)=1000+12x+(\frac{1}{2})x^{2/3}\). Use Newton's method to find the break-even point if the printer sells each book for \($20.\)
For the following exercises, show that \(F(x)\) are antiderivatives of \(f(x)\).
1) \(F(x)=5x^3+2x^2+3x+1,f(x)=15x^2+4x+3\)
Solution: \(F′(x)=15x^2+4x+3\)
2) \(F(x)=x^2+4x+1,f(x)=2x+4\)
3) \(F(x)=x^2e^x,f(x)=e^x(x^2+2x)\)
Solution: \(F′(x)=2xe^x+x^2e^x\)
4) \(F(x)=cosx,f(x)=−sinx\)
5) \(F(x)=e^x,f(x)=e^x\)
Solution: \(F′(x)=e^x\)
For the following exercises, find the antiderivative of the function.
6) \(f(x)=\frac{1}{x^2}+x\)
7) \(f(x)=e^x−3x^2+sinx\)
Solution: \(F(x)=e^x−x^3−cos(x)+C\)
8) \(f(x)=e^x+3x−x^2\)
9) \(f(x)=x−1+4sin(2x)\)
Solution: \(F(x)=\frac{x^2}{2}−x−2cos(2x)+C\)
For the following exercises, find the antiderivative \(F(x)\) of each function \(f(x).\)
10) \(f(x)=5x^4+4x^5\)
11) \(f(x)=x+12x^2\)
Solution: \(F(x)=\frac{1}{2}x^2+4x^3+C\)
12) \(f(x)=\frac{1}{\sqrt{x}}\)
13) \(f(x)=(\sqrt{x})^3\)
Solution: \(F(x)=\frac{2}{5}(\sqrt{x})^5+C\)
14) \(f(x)=x^{1/3}+(2x)^{1/3}\)
15) \(f(x)=\frac{x^{1/3}}{x^{2/3}}\)
Solution: \(F(x)=\frac{3}{2}x^{2/3}+C\)
16) \(f(x)=2sin(x)+sin(2x)\)
17) \(f(x)=sec^2(x)+1\)
Solution: \(F(x)=x+tan(x)+C\)
18) \(f(x)=sinxcosx\)
19) \(f(x)=sin^2(x)cos(x)\)
Solution: \(F(x)=\frac{1}{3}sin^3(x)+C\)
20) \(f(x)=0\)
21) \(f(x)=\frac{1}{2}csc^2(x)+\frac{1}{x^2}\)
Solution: \(F(x)=−\frac{1}{2}cot(x)−\frac{1}{x}+C\)
22) \(f(x)=cscxcotx+3x\)
23) \(f(x)=4cscxcotx−secxtanx\)
Solution: \(F(x)=−secx−4cscx+C\)
24) \(f(x)=8secx(secx−4tanx)\)
25) \(f(x)=\frac{1}{2}e^{−4x}+sinx\)
Solution: \(F(x)=−\frac{1}{8}e^{−4x}−cosx+C\)
For the following exercises, evaluate the integral.
26) \(∫(−1)dx\)
27) \(∫sinxdx\)
Solution: \(−cosx+C\)
28) \(∫(4x+\sqrt{x})dx\)
29) \(∫\frac{3x^2+2}{x^2}dx\)
Solution: \(3x−\frac{2}{x}+C\)
30) \(∫(secxtanx+4x)dx\)
31) \(∫(4\sqrt{x}+\sqrt[4]{x})dx\)
Solution: \(\frac{8}{3}x^{3/2}+\frac{4}{5}x^{5/4}+C\)
32) \(∫(x^{−1/3}−x^{2/3})dx\)
33) \(∫\frac{14x^3+2x+1}{x^3}dx\)
Solution: \(14x−\frac{2}{x}−\frac{1}{2x^2}+C\)
34) \(∫(e^x+e^{−x})dx\)
For the following exercises, solve the initial value problem.
35) \(f′(x)=x^{−3},f(1)=1\)
Solution: \(f(x)=−\frac{1}{2x^2}+\frac{3}{2}\)
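The answer follows by antidifferentiating and then using the initial condition to fix the constant: \(f(x)=∫x^{−3}dx=−\frac{1}{2}x^{−2}+C\), and \(f(1)=−\frac{1}{2}+C=1\) gives \(C=\frac{3}{2}\).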
36) \(f′(x)=\sqrt{x}+x^2,f(0)=2\)
37) \(f′(x)=cosx+sec^2(x),f(\frac{π}{4})=2+\frac{\sqrt{2}}{2}\)
Solution: \(f(x)=sinx+tanx+1\)
38) \(f′(x)=x^3−8x^2+16x+1,f(0)=0\)
39) \(f′(x)=\frac{2}{x^2}−\frac{x^2}{2},f(1)=0\)
Solution: \(f(x)=−\frac{1}{6}x^3−\frac{2}{x}+\frac{13}{6}\)
For the following exercises, find two possible functions \(f\) given the second- or third-order derivatives
40) \(f''(x)=x^2+2\)
41) \(f''(x)=e^{−x}\)
Solution: Answers may vary; one possible answer is \(f(x)=e^{−x}\)
42) \(f''(x)=1+x\)
43) \(f'''(x)=cosx\)
Solution: Answers may vary; one possible answer is \(f(x)=−sinx\)
44) \(f'''(x)=8e^{−2x}−sinx\)
45) A car is being driven at a rate of \(40\) mph when the brakes are applied. The car decelerates at a constant rate of \(10\) ft/sec\(^2\). How long before the car stops?
Solution: \(5.867\) sec
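This time comes from converting the speed to feet per second and dividing by the deceleration: \(40\) mph \(=\frac{40⋅5280}{3600}=\frac{176}{3}≈58.67\) ft/sec, so \(t=\frac{176/3}{10}=\frac{88}{15}≈5.867\) sec.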
46) In the preceding problem, calculate how far the car travels in the time it takes to stop.
47) You are merging onto the freeway, accelerating at a constant rate of \(12\) ft/sec\(^2\). How long does it take you to reach merging speed at \(60\) mph?
48) Based on the previous problem, how far does the car travel to reach merging speed?
49) A car company wants to ensure its newest model can stop in \(8\) sec when traveling at \(75\) mph. If we assume constant deceleration, find the value of deceleration that accomplishes this.
Solution: \(13.75 ft/sec^2\)
50) A car company wants to ensure its newest model can stop in less than \(450\) ft when traveling at \(60\) mph. If we assume constant deceleration, find the value of deceleration that accomplishes this.
For the following exercises, find the antiderivative of the function, assuming \(F(0)=0.\)
51) [T] \(f(x)=x^2+2\)
Solution: \(F(x)=\frac{1}{3}x^3+2x\)
52) [T] \(f(x)=4x−\sqrt{x}\)
53) [T] \(f(x)=sinx+2x\)
Solution: \(F(x)=x^2−cosx+1\)
54) \([T] f(x)=e^x\)
55) \([T] f(x)=\frac{1}{(x+1)^2}\)
Solution: \(F(x)=−\frac{1}{(x+1)}+1\)
56) [T] \(f(x)=e^{−2x}+3x^2\)
For the following exercises, determine whether the statement is true or false. Either prove it is true or find a counterexample if it is false.
57) If \(f(x)\) is the antiderivative of \(v(x)\), then \(2f(x)\) is the antiderivative of \(2v(x).\)
Solution: True
58) If \(f(x)\) is the antiderivative of \(v(x)\), then \(f(2x)\) is the antiderivative of \(v(2x).\)
59) If \(f(x)\) is the antiderivative of \(v(x),\) then \(f(x)+1\) is the antiderivative of \(v(x)+1.\)
Solution: False
60) If \(f(x)\) is the antiderivative of \(v(x)\), then \((f(x))^2\) is the antiderivative of \((v(x))^2.\)
True or False? Justify your answer with a proof or a counterexample. Assume that \(f(x)\) is continuous and differentiable unless stated otherwise.
1) If \(f(−1)=−6\) and \(f(1)=2\), then there exists at least one point \(x∈[−1,1]\) such that \(f′(x)=4.\)
Solution: True, by Mean Value Theorem
2) If \(f′(c)=0,\) there is a maximum or minimum at \(x=c.\)
3) There is a function such that \(f(x)<0,f′(x)>0,\) and \(f''(x)<0.\) (A graphical "proof" is acceptable for this answer.)
4) There is a function such that there is both an inflection point and a critical point for some value \(x=a.\)
5) Given the graph of \(f′\), determine where \(f\) is increasing or decreasing.
Solution: Increasing: \((−2,0)∪(4,∞)\), decreasing: \((−∞,−2)∪(0,4)\)
6) The graph of \(f\) is given below. Draw \(f′\).
7) Find the linear approximation \(L(x)\) to \(y=x^2+tan(πx)\) near \(x=\frac{1}{4}.\)
Solution: \(L(x)=\frac{17}{16}+\frac{1}{2}(1+4π)(x−\frac{1}{4})\)
8) Find the differential of \(y=x^2−5x−6\) and evaluate for \(x=2\) with \(dx=0.1.\)
Find the critical points and the local and absolute extrema of the following functions on the given interval.
9) \(f(x)=x+sin^2(x)\) over \([0,π]\)
Solution: Critical point: \(x=\frac{3π}{4},\) absolute minimum: \(x=0,\) absolute maximum: \(x=π\)
10) \(f(x)=3x^4−4x^3−12x^2+6\) over \([−3,3]\)
Determine over which intervals the following functions are increasing, decreasing, concave up, and concave down.
11) \(x(t)=3t^4−8t^3−18t^2\)
Solution: Increasing: \((−1,0)∪(3,∞),\) decreasing: \((−∞,−1)∪(0,3),\) concave up: \((−∞,\frac{1}{3}(2−\sqrt{13}))∪(\frac{1}{3}(2+\sqrt{13}),∞)\), concave down: \((\frac{1}{3}(2−\sqrt{13}),\frac{1}{3}(2+\sqrt{13}))\)
12) \(y=x+sin(πx)\)
13) \(g(x)=x−\sqrt{x}\)
Solution: Increasing: \((\frac{1}{4},∞),\) decreasing: \((0,\frac{1}{4})\), concave up: \((0,∞),\) concave down: nowhere
14) \(f(θ)=sin(3θ)\)
Evaluate the following limits.
15) \(lim_{x→∞}\frac{3x\sqrt{x^2+1}}{\sqrt{x^4−1}}\)
16) \(lim_{x→∞}cos(\frac{1}{x})\)
17) \(lim_{x→1}\frac{x−1}{sin(πx)}\)
Solution: \(−\frac{1}{π}\)
18) \(lim_{x→∞}(3x)^{1/x}\)
Use Newton's method to find the first two iterations, given the starting point.
19) \(y=x^3+1,x_0=0.5\)
Solution: \(x_1=−1,x_2=−1\)
20) \(\frac{1}{x+1}=\frac{1}{2},x_0=0\)
Find the antiderivatives \(F(x)\) of the following functions.
21) \(g(x)=\sqrt{x}−\frac{1}{x^2}\)
Solution: \(F(x)=\frac{2x^{3/2}}{3}+\frac{1}{x}+C\)
22) \(f(x)=2x+6cosx,F(π)=π^2+2\)
Graph the following functions by hand. Make sure to label the inflection points, critical points, zeros, and asymptotes.
23) \(y=\frac{1}{x(x+1)^2}\)
Inflection points: none; critical points: \(x=−\frac{1}{3}\); zeros: none; vertical asymptotes: \(x=−1, x=0\); horizontal asymptote: \(y=0\)
24) \(y=x−\sqrt{4−x^2}\)
25) A car is being compacted into a rectangular solid. The volume is decreasing at a rate of \(2 m^3/sec\). The length and width of the compactor are square, but the height is not the same length as the length and width. If the length and width walls move toward each other at a rate of \(0.25\) m/sec, find the rate at which the height is changing when the length and width are \(2\) m and the height is \(1.5\) m.
Solution: The height is decreasing at a rate of \(0.125\) m/sec
26) A rocket is launched into space; its kinetic energy is given by \(K(t)=(\frac{1}{2})m(t)v(t)^2\), where \(K\) is the kinetic energy in joules, \(m\) is the mass of the rocket in kilograms, and \(v\) is the velocity of the rocket in meters/second. Assume the velocity is increasing at a rate of \(15 m/sec^2\) and the mass is decreasing at a rate of \(10\) kg/sec because the fuel is being burned. At what rate is the rocket's kinetic energy changing when the mass is \(2000\) kg and the velocity is \(5000\) m/sec? Give your answer in mega-Joules (MJ), which is equivalent to \(10^6\) J.
27) The famous Regiomontanus' problem for angle maximization was proposed during the \(15\) th century. A painting hangs on a wall with the bottom of the painting a distance \(a\) feet above eye level, and the top \(b\) feet above eye level. What distance x (in feet) from the wall should the viewer stand to maximize the angle subtended by the painting, \(θ\)?
Solution: \(x=\sqrt{ab}\) feet
28) An airline sells tickets from Tokyo to Detroit for \($1200.\) There are \(500\) seats available and a typical flight books \(350\) seats. For every \($10\) decrease in price, the airline observes an additional five seats sold. What should the fare be to maximize profit? How many passengers would be onboard?
4.E: Open Stax 4.1 - 4.5 Exercises | CommonCrawl |
Marcos Martínez-Romero ORCID: orcid.org/0000-0002-9814-32581,
Clement Jonquet1,3,
Martin J. O'Connor1,
John Graybeal1,
Alejandro Pazos2 &
Mark A. Musen1
Ontologies and controlled terminologies have become increasingly important in biomedical research. Researchers use ontologies to annotate their data with ontology terms, enabling better data integration and interoperability across disparate datasets. However, the number, variety and complexity of current biomedical ontologies make it cumbersome for researchers to determine which ones to reuse for their specific needs. To overcome this problem, in 2010 the National Center for Biomedical Ontology (NCBO) released the Ontology Recommender, which is a service that receives a biomedical text corpus or a list of keywords and suggests ontologies appropriate for referencing the indicated terms.
We developed a new version of the NCBO Ontology Recommender. Called Ontology Recommender 2.0, it uses a novel recommendation approach that evaluates the relevance of an ontology to biomedical text data according to four different criteria: (1) the extent to which the ontology covers the input data; (2) the acceptance of the ontology in the biomedical community; (3) the level of detail of the ontology classes that cover the input data; and (4) the specialization of the ontology to the domain of the input data.
Our evaluation shows that the enhanced recommender provides higher quality suggestions than the original approach, providing better coverage of the input data, more detailed information about their concepts, increased specialization for the domain of the input data, and greater acceptance and use in the community. In addition, it provides users with more explanatory information, along with suggestions of not only individual ontologies but also groups of ontologies to use together. It also can be customized to fit the needs of different ontology recommendation scenarios.
Ontology Recommender 2.0 suggests relevant ontologies for annotating biomedical text data. It combines the strengths of its predecessor with a range of adjustments and new features that improve its reliability and usefulness. Ontology Recommender 2.0 recommends over 500 biomedical ontologies from the NCBO BioPortal platform, where it is openly available (both via the user interface at http://bioportal.bioontology.org/recommender, and via a Web service API).
During the last two decades, the biomedical community has grown progressively more interested in ontologies. Ontologies provide the common terminology necessary for biomedical researchers to describe their datasets, enabling better data integration and interoperability, and therefore facilitating translational discoveries [1, 2].
BioPortal [3, 4], developed by the National Center for Biomedical Ontology (NCBO) [5], is a highly used platform for hosting and sharing biomedical ontologies. BioPortal users can publish their ontologies as well as submit new versions. They can browse, search, review, and comment on ontologies, both interactively through a Web interface, and programmatically via Web services. In 2008, BioPortal contained 72 ontologies and 300,000 ontology classes. As of 2017, the number of ontologies exceeds 500, with more than 7.8 million classes, making it one of the largest public repositories of biomedical ontologies.
The great number, complexity, and variety of ontologies in the biomedical field present a challenge for researchers: how to identify those ontologies that are most relevant for annotating, mining or indexing particular datasets. To address this problem, in 2010 the NCBO released the first version of its Ontology Recommender (henceforth 'Ontology Recommender 1.0' or 'original Ontology Recommender') [6], which informed the user of the most appropriate ontologies in BioPortal to annotate textual data. It was, to the best of our knowledge, the first biomedical ontology recommendation service, and it became widely known and used by the community. However, the service has some limitations, and a significant amount of work has been done in the field of ontology recommendation since its release. This motivated us to analyze its weaknesses and to design a new recommendation approach.
The main contributions of this paper are the following:
A state-of-the-art approach for recommending biomedical ontologies. Our approach is based on evaluating the relevance of an ontology to biomedical text data according to four different criteria, namely: ontology coverage, ontology acceptance, ontology detail, and ontology specialization.
A new ontology recommendation system, the NCBO Ontology Recommender 2.0 (henceforth 'Ontology Recommender 2.0' or 'new Ontology Recommender'). This system has been implemented based on our approach, and it is openly available at BioPortal.
Our research is particularly relevant both to researchers and developers who need to identify the most appropriate ontologies for annotating textual data of biomedical nature (e.g., journal articles, clinical trial descriptions, metadata about microarray experiments, information on small molecules, electronic health records, etc.). Our ontology recommendation approach can be easily adapted to other domains, as will be illustrated in the Discussion section. Overall, this work advances prior research in the fields of ontology evaluation and recommendation, and provides the community with a useful service which is, to the best of our knowledge, the only ontology recommendation system currently available to the public.
Much theoretical work has been done over the past two decades in the fields of ontology evaluation, selection, search, and recommendation. Ontology evaluation has been defined as the problem of assessing a given ontology from the point of view of a particular criterion, typically in order to determine which of several ontologies would best suit a particular purpose [7]. As a consequence, ontology recommendation is fundamentally an ontology evaluation task because it addresses the problem of evaluating and consequently selecting the most appropriate ontologies for a specific context or goal [8, 9].
Early contributions in the field of ontology evaluation date back to the early 1990s and were motivated by the necessity of having evaluation strategies to guide and improve the ontology engineering process [10,11,12]. Some years later, with the birth of the Semantic Web [13], the need for reusing ontologies across the Web motivated the development of the first ontology search engines [14,15,16], which made it possible to retrieve all ontologies satisfying some basic requirements. These engines usually returned only the ontologies that had the query term itself in their class or property names [17]. However, the process of recommending ontologies involves more than that. It is a complex process that comprises evaluating all candidate ontologies according to a variety of criteria, such as coverage, richness of the ontology structure [18,19,20], correctness, frequency of use [21], connectivity [18], formality, user ratings [22], and their suitability for the task at hand.
In biomedicine, the great number, size, and complexity of ontologies have motivated strategies to help researchers find the best ontologies to describe their datasets. Tan and Lambrix [23] proposed a theoretical framework for selecting the best ontology for a particular text-mining application and manually applied it to a gene-normalization task. Alani et al. [17] developed an ontology-search strategy that uses query-expansion techniques to find ontologies related to a particular domain (e.g., Anatomy). Maiga and Williams [24] conceived a semi-automatic tool that makes it possible to find the ontologies that best match a list of user-defined task requirements.
The most relevant alternative to the NCBO Ontology Recommender is BiOSS [21, 25], which was released in 2011 by some of the authors of this paper. BiOSS evaluates each candidate ontology according to three criteria: (1) the input coverage; (2) the semantic richness of the ontology for the input; and (3) the acceptance of the ontology. However, this system has some weaknesses that make it insufficient to satisfy many ontology reuse needs in biomedicine. BiOSS' ontology repository is not updated regularly, so it does not take into account the most recent revisions to biomedical ontologies. Also, BiOSS evaluates ontology acceptance by counting the number of mentions of the ontology name in Web 2.0 resources, such as Twitter and Wikipedia. However, this method is not always appropriate because a large number of mentions do not always correspond to a high level of acceptance by the community (e.g., an ontology may be "popular" on Twitter because of a high number of negative comments about it). Another drawback is that the input to BiOSS is limited to comma-delimited keywords; it is not possible to suggest ontologies to annotate raw text, which is a very common use case in biomedical informatics.
In this work, we have applied our previous experience in the development of the original Ontology Recommender and the BiOSS system to conceive a new approach for biomedical ontology recommendation. The new approach has been used to design and implement the Ontology Recommender 2.0. The new system combines the strengths of previous methods with a range of enhancements, including new recommendation strategies and the ability to handle new use cases. Because it is integrated within the NCBO BioPortal, this system works with a large corpus of current biomedical ontologies and can therefore be considered the most comprehensive biomedical ontology recommendation system developed to date.
Our recommendations for the choice of appropriate ontologies center around the use of ontologies to perform annotation of textual data. We define annotation as a correspondence or relationship between a term and an ontology class that specifies the semantics of that term. For instance, an annotation might relate leucocyte in some text to a particular ontology class leucocyte in the Cell Ontology. The annotation process will also relate textual data such as white blood cell and lymphocyte to the class leucocyte in the Cell Ontology, via synonym and subsumption relationships, respectively.
Description of the original approach
The original NCBO Ontology Recommender supported two primary use cases: (1) corpus-based recommendation, and (2) keyword-based recommendation. In these scenarios, the system recommended appropriate ontologies from the BioPortal ontology repository to annotate a text corpus or a list of keywords, respectively.
The NCBO Ontology Recommender invoked the NCBO Annotator [26] to identify all annotations for the input data. The NCBO Annotator is a BioPortal service that annotates textual data with ontology classes. Then, the Ontology Recommender scored all BioPortal ontologies as a function of the number and relevance of the annotations found, and ranked the ontologies according to those scores. The first ontology in the ranking would be suggested as the most appropriate for the input data. The score for each ontology was calculated according to the following formula:
$$ score(o,t)=\frac{\sum\left(annotationScore(a)+2\cdot hierarchyLevel(a)\right)}{\log_{10}\left(|o|\right)}\quad \forall a\in annotations(o,t) $$
such that:
$$ score(o,t)\in \mathbb{R}:\; score(o,t)\ge 0 $$
$$ annotationScore(a)=\begin{cases} 10 & \text{if } annotationType=PREF \\ 8 & \text{if } annotationType=SYN \end{cases} $$
$$ hierarchyLevel(a)\in \mathbb{Z}:\; hierarchyLevel(a)\ge 0 $$
Here o is the ontology that is being evaluated; t is the input text; score(o, t) represents the relevance of the ontology o for t; annotationScore(a) is the score for the annotation a; hierarchyLevel(a) is the position of the matched class in the ontology tree, such that 0 represents the root level; |o| is the number of classes in o; and annotations(o,t) is the list of annotations (a) performed with o for t, returned by the NCBO Annotator.
The annotationScore(a) would depend on whether the annotation was achieved with a class 'preferred name' (PREF) or with a class synonym (SYN). A preferred name is the human readable label that the authors of the ontology suggested to be used when referring to the class (e.g., vertebral column), whereas synonyms are alternate names for the class (e.g., spinal column, backbone, spine). Each class in BioPortal has a single preferred name and it may have any number of synonyms. Because synonyms can be imprecise, this approach favored matches on preferred names.
The normalization by ontology size was intended to discriminate between large ontologies that offer good coverage of the input data, and small ontologies with both correct coverage and better specialization for the input data's domain. The granularities of the matched classes (i.e., hierarchyLevel(a)) were also considered, so that annotations performed with granular classes (e.g., epithelial cell proliferation) would receive higher scores than those performed with more abstract classes (e.g., biological process).
For example, Table 1 shows the top five suggestions of the original Ontology Recommender for the text Melanoma is a malignant tumor of melanocytes which are found predominantly in skin but also in the bowel and the eye. In this example, the system considered that the best ontology for the input data is the National Cancer Institute Thesaurus (NCIT).
Table 1 Ontologies suggested by the original Ontology Recommender for the sample input text Melanoma is a malignant tumor of melanocytes which are found predominantly in skin but also in the bowel and the eye
In the following sections, we summarize the most relevant shortcomings of the original approach, addressing input coverage, coverage of multi-word terms, input types and output information.
Input coverage
Input coverage refers to the fraction of input data that is annotated with ontology classes. Given that the goal is to find the best ontologies to annotate the user's data, high input coverage is the main requirement for ontology-recommendation systems. One of the shortcomings of the original approach is that it did not ensure that ontologies that provide high input coverage were ranked higher than ontologies with lower coverage. The approach was strongly based on the total number of annotations returned by the NCBO Annotator. However, a large number of annotations does not always imply high coverage. Ontologies with low input coverage can contain a great many classes that match only a few input terms, or match many repeated terms in a large text corpus.
In the previous example (see Table 1), EHDA (Human Developmental Anatomy Ontology) was ranked at the second position. However, it covers only two input terms: skin and eye. Clearly, it is not an appropriate ontology to annotate the input when compared with LOINC or EFO, which have almost three times more terms covered. The reason that EHDA was assigned a high score is that it contains 11 different eye classes (e.g., EHDA:4732, EHDA:3808, EHDA:5701) and 4 different skin classes (e.g., EHDA:6531, EHDA:6530, EHDA:7501), which provide a total of 15 annotations. Since the recommendation score computed using the original approach is directly influenced by the number of annotations, EHDA obtains a high relevance score and thus the second position in the ranking. This issue was also identified by López-García et al. in their study of the efficiency of automatic summarization techniques [27]. These authors noticed that EHDA was the most recommended ontology for a broad range of topics that the ontology actually did not cover well.
Multi-word terms
Biomedical texts frequently contain terms composed of several words, such as distinctive arrangement of microtubules, or dental disclosing preparation. Annotating a multi-word phrase or multi-word keyword with an ontological class that completely represents its semantics is a much better choice than annotating each word separately. The original recommendation approach was not designed to select the longest matches and consequently the results were affected.
As an example, Table 2 shows the top 5 ontologies suggested by the original Ontology Recommender for the phrase embryonic cardiac structure. Ideally, the first ontology in the ranking (SWEET) would contain the class embryonic cardiac structure. However, the SWEET ontology covers only the term structure. This ontology was ranked at the first position because it contains 3 classes matching the term structure and also because it is a small ontology (4549 classes).
Table 2 Top 5 ontologies suggested by Ontology Recommender 1.0 for the sample input text embryonic cardiac structure
Furthermore, SNOMEDCT, which does contain a class that provides a precise representation of the input, was ranked in the 5th position. There are 3 other ontologies in BioPortal that contain the class embryonic cardiac structure: EP, BIOMODELS and FMA. However, they were ranked 8, 11 and 32, respectively. The recommendation algorithm should assign a higher score to an annotation that covers all words in a multi-word term than it does to different annotations that cover all words separately.
Input types
Related work in ontology recommendation highlights the importance of addressing two different input types: text corpora and lists of keywords [28]. The original Ontology Recommender, while offering users the possibility of selecting between these two recommendation scenarios, treated the input data in the same manner in both cases. To satisfy users' expectations, the system should process these two input types differently, to better reflect the information about multi-word boundaries encoded in the input.
Output information
The output provided by the original Ontology Recommender consisted of a list of ontologies ranked by relevance score. For each ontology, the Web-based user interface displayed the number of classes matched and the size of each recommended ontology. In contrast, the Web service could additionally return the particular classes matched in each ontology. This information proved insufficient to assure users that a recommended ontology was appropriate and better than the alternatives. For example, it was not possible to know which specific input terms were covered by each class. The system should provide enough detail both to reassure users and to give them information about alternative ontologies.
In this section, we have described the fundamental limitations of the original Ontology Recommender and suggested ways to address them. The strategy for evaluating input coverage must be improved. Additionally, there is a variety of other recently proposed evaluation techniques [8, 19, 25] that could enhance the original approach. In particular, there are two evaluation criteria that could substantially improve the output provided by the system: (1) ontology acceptance, which represents the degree of acceptance of the ontology by the community; and (2) ontology detail, which refers to the level of detail of the classes that cover the input data.
Description of the new approach
In this section, we present our new approach to biomedical ontology recommendation. First, we describe our ontology evaluation criteria and explain how the recommendation process works. We then provide some implementation details and discuss improvements to the user interface.
The execution starts from the input data and a set of configuration settings. The NCBO Annotator [26] is then used to obtain all annotations for the input using BioPortal ontologies. Those ontologies that do not provide annotations for the input data are considered irrelevant and are ignored in further processing. The ontologies that provide annotations are evaluated one by one according to four evaluation criteria that address the following questions:
Coverage: To what extent does the ontology represent the input data?
Acceptance: How well-known and trusted is the ontology by the biomedical community?
Detail: How rich is the ontology representation for the input data?
Specialization: How specialized is the ontology to the domain of the input data?
According to our analysis of related work, these are the most relevant criteria for ontology recommendation. Note that other authors have referred to the coverage criterion as term matching [6], class match measure [19] and topic coverage [28]. Acceptance is related to popularity [21, 25, 28], because it measures the level of support provided to the ontology by the people in the community. Other criteria to measure ontology acceptance are connectivity [6], and connectedness [18], which assess the relevance of an ontology based on the number and quality of connections to an ontology by other ontologies. Detail is similar to structure measure [6], semantic richness [21, 25], structure [18], and granularity [24].
For each of these evaluation criteria, a score in the interval [0,1] is obtained. Then, all the scores for a given ontology are aggregated into a composite relevance score, also in the interval [0,1]. This score represents the appropriateness of that ontology for describing the input data. The individual scores are combined in accordance with the following expression:
$$ score(o,t) = w_c \cdot coverage(o,t) + w_a \cdot acceptance(o) + w_d \cdot detail(o,t) + w_s \cdot specialization(o,t) $$
where o is the ontology that is being evaluated, t represents the input data, and {w_c, w_a, w_d, w_s} is a set of predefined weights used to give more or less importance to each evaluation criterion, such that w_c + w_a + w_d + w_s = 1. Note that acceptance is the only criterion that is independent of the input data. Ultimately, the system returns a list of ontologies ranked according to their relevance scores.
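As an illustration of this aggregation, the following Python sketch combines four precomputed criterion scores into a relevance score. The function and variable names are ours, and the default weights shown mirror the ones discussed later in the implementation details (coverage 0.55, the remaining criteria 0.15 each).

```python
# Minimal sketch of the relevance-score aggregation defined above. The four
# criterion scores are assumed to be precomputed values in [0, 1]; the weights
# must sum to 1.

DEFAULT_WEIGHTS = {"coverage": 0.55, "acceptance": 0.15, "detail": 0.15, "specialization": 0.15}

def relevance_score(criterion_scores: dict, weights: dict = DEFAULT_WEIGHTS) -> float:
    """Combine per-criterion scores (each in [0, 1]) into a single relevance score."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[name] * criterion_scores[name] for name in weights)

# Example: an ontology with high coverage but modest detail.
print(relevance_score({"coverage": 0.9, "acceptance": 0.7, "detail": 0.4, "specialization": 0.6}))  # 0.75
```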
Ontology evaluation criteria
The relevance score of each candidate ontology is calculated based on coverage, acceptance, detail, and specialization. We now describe these criteria in more detail.
Ontology coverage
It is crucial that ontology recommendation systems suggest ontologies that provide high coverage of the input data. As with the original approach, the new recommendation process is driven by the annotations provided by the NCBO Annotator, but the method used to evaluate the candidate ontologies is different. In the new algorithm, each annotation is assigned a score computed in accordance with the following expression (see Footnote 5):
$$ annotationScore2(a) = \left( annotationTypeScore(a) + multiWordScore(a) \right) \times annotatedWords(a) $$

$$ annotationTypeScore(a) = \begin{cases} 10 & \text{if } annotationType = \mathrm{PREF} \\ 5 & \text{if } annotationType = \mathrm{SYN} \end{cases} $$

$$ multiWordScore(a) = \begin{cases} 3 & \text{if } annotatedWords(a) > 1 \\ 0 & \text{otherwise} \end{cases} $$
In this expression, annotationTypeScore(a) is a score based on the annotation type which, as in the original approach, can be either 'PREF', if the annotation has been performed with a class preferred name, or 'SYN', if it has been performed with a class synonym. Our method assigns higher relevance to annotations performed with class preferred names than to those performed with class synonyms because we have seen that many BioPortal ontologies contain synonyms that are not reliable (e.g., Other variants as a synonym of Other Variants of Basaloid Follicular Neoplasm of the Mouse Skin in the NCI Thesaurus).
The multiWordScore(a) score rewards multi-word annotations. It gives more importance to classes that annotate multi-word terms than to classes that annotate individual words separately (e.g., blood cell versus blood and cell). Such classes better reflect the input data than do classes that represent isolated words.
The annotatedWords(a) function represents the number of words matched by the annotation (e.g., 2 for the term blood cell).
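To make the scoring concrete, here is a minimal Python sketch of annotationScore2 as defined above. The representation of an annotation (just its match type and number of matched words) is a simplifying assumption for illustration, not the actual NCBO Annotator output format.

```python
# Sketch of annotationScore2. An annotation is reduced here to the two pieces of
# information the formula needs: the match type ('PREF' for a preferred-name match,
# 'SYN' for a synonym match) and the number of matched words.

def annotation_type_score(annotation_type: str) -> int:
    return 10 if annotation_type == "PREF" else 5  # 'SYN' otherwise

def multi_word_score(annotated_words: int) -> int:
    return 3 if annotated_words > 1 else 0

def annotation_score2(annotation_type: str, annotated_words: int) -> int:
    return (annotation_type_score(annotation_type) + multi_word_score(annotated_words)) * annotated_words

# 'blood cell' matched via a preferred name scores higher than 'blood' and 'cell' matched separately.
print(annotation_score2("PREF", 2))                                 # 26
print(annotation_score2("PREF", 1) + annotation_score2("PREF", 1))  # 20
```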
Sometimes, an ontology provides overlapping annotations for the same input data. For instance, the text white blood cell may be covered by two different classes, white blood cell and blood cell. In the original approach, ontologies with low input coverage were sometimes ranked among the top positions because they had multiple classes matching a few input terms, and all those annotations contributed to the final score. Our new approach addresses this issue. If an ontology provides several annotations for the same text fragment, only the annotation with the highest score is selected to contribute to the coverage score.
The coverage score for each ontology is computed as the sum of all the annotation scores, as follows:
$$ coverage(o,t) = norm\left( \sum_{a \in selectedAnnotations(A)} annotationScore2(a) \right) $$
where A is the set of annotations performed with the ontology o for the input t, selectedAnnotations(A) is the set of annotations that are left after discarding overlapping annotations, and norm is a function that normalizes the coverage score to the interval [0,1].
As an example, Table 3 shows the annotations performed with SNOMEDCT for the input A thrombocyte is a kind of blood cell. This example shows how our approach prioritizes (i.e., assigns a higher score to) annotations performed with preferred names over synonyms (e.g., cell over entire cell), and annotations performed with multi-word terms over single-word terms (e.g., blood cell over blood plus cell). The coverage score for SNOMEDCT would be calculated as 5 + 26 = 31, which would be normalized to the interval [0,1] by dividing it by the maximum coverage score. The maximum coverage score is obtained by adding the scores of all the annotations performed with all BioPortal ontologies, after discarding overlapping annotations.
Table 3 SNOMEDCT annotations for the input A thrombocyte is a kind of blood cell
It is important to note that this evaluation of ontology coverage takes into account term frequency. That is, matched terms with several occurrences are considered more relevant to the input data than terms that occur less frequently. If an ontology covers a term that appears several times in the input, its corresponding annotation score will be counted each time and the coverage score for the ontology accordingly will be higher. In addition, because we select only the matches with the highest score, the frequencies are not distorted by terms embedded in one another (e.g., white blood cell and blood cell).
Our approach accepts two input types: free text and comma-delimited keywords. For the keyword input type, only those annotations that cover all the words in a multi-word term are considered. Partial annotations are immediately discarded.
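The overlap-handling step can be sketched as follows, under simplifying assumptions: each annotation is reduced to its character span in the input and its annotationScore2 value, and overlapping annotations are resolved greedily by keeping the highest-scoring one.

```python
# Sketch of selectedAnnotations(A) and the coverage score: overlapping annotations
# for the same text fragment are resolved by keeping only the highest-scoring one,
# and the ontology's coverage is the normalized sum of the surviving scores.

from typing import List, Tuple

Annotation = Tuple[int, int, float]  # (start offset, end offset, annotationScore2)

def selected_annotations(annotations: List[Annotation]) -> List[Annotation]:
    kept: List[Annotation] = []
    for ann in sorted(annotations, key=lambda a: a[2], reverse=True):  # best scores first
        start, end, _ = ann
        if all(end <= k_start or start >= k_end for k_start, k_end, _ in kept):
            kept.append(ann)  # keep only annotations that do not overlap anything already kept
    return kept

def coverage(annotations: List[Annotation], max_coverage_score: float) -> float:
    total = sum(score for _, _, score in selected_annotations(annotations))
    return total / max_coverage_score if max_coverage_score else 0.0

# 'white blood cell' (characters 0-16) beats the embedded 'blood cell' (characters 6-16).
anns = [(0, 16, 39.0), (6, 16, 26.0)]
print(selected_annotations(anns))  # only the 'white blood cell' match survives
```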
Ontology acceptance
In biomedicine, some ontologies have been developed and maintained by widely known institutions or research projects. The content of these ontologies is periodically curated, extensively used, and accepted by the community. Examples of broadly accepted ontologies are SNOMEDCT [29] and Gene Ontology [30]. Some ontologies uploaded to BioPortal may be relatively less reliable, however. They may contain incorrect or poor quality content or simply be insufficiently up to date. It is important that an ontology recommender be able to distinguish between ontologies that are accepted as trustworthy and those that are less so.
Our approach proposes to estimate the degree of acceptance of each ontology based on information extracted from ontology repositories or terminology systems. Widely used examples of these systems in biomedicine include BioPortal, the Unified Medical Language System (UMLS) [31], the OBO Foundry [32], Ontobee [33], the Ontology Lookup Service (OLS) [34], and Aber-OWL [35]. The calculation of ontology acceptance is based on two factors: (1) the presence or absence of the ontology in ontology repositories; and (2) the number of visits (pageviews) to the ontology in ontology repositories in a recent period of time (e.g., the last 6 months). This method takes into account changes in ontology acceptance over time. The acceptance score for each ontology is calculated as follows:
$$ acceptance(o)={w}_{presence}* presenceScore(o)+{w}_{visits}* visitsScore(o) $$
presenceScore(o) is a value in the interval [0,1] that represents the presence of the ontology in a predefined list of ontology repositories. It is calculated as follows:
$$ presenceScore(o) = \sum_{i=1}^{n} w_{p_i} \cdot presence_i(o) $$

where w_{p_i} represents the weight assigned to the presence of the ontology in the repository i, with \( \sum_{i=1}^{n} w_{p_i} = 1 \), and:

$$ presence_i(o) = \begin{cases} 1 & \text{if } o \text{ is present in repository } i \\ 0 & \text{otherwise} \end{cases} $$
visitsScore(o) represents the number of visits to the ontology on a given list of ontology repositories in a recent period of time. Note that this score can typically be calculated only for those repositories that are available on the Web and that have an independent page for each provided ontology. This score is calculated as follows:
$$ visitsScore(o) = \sum_{i=1}^{n} w_{v_i} \cdot visits_i(o) $$

where w_{v_i} is the weight assigned to the ontology visits in the repository i, with \( \sum_{i=1}^{n} w_{v_i} = 1 \); visits_i(o) represents the number of visits to the ontology in the repository i, normalized to the interval [0,1].
w_presence and w_visits are weights used to give more or less importance to each factor, with w_presence + w_visits = 1.
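A minimal sketch of the acceptance computation is shown below. The repository weights, presence flags, and normalized visit counts in the example are hypothetical placeholders; in the deployed system they come from UMLS presence and BioPortal pageviews, as described next.

```python
# Sketch of the acceptance score: a weighted combination of (1) the ontology's
# presence in a list of repositories and (2) its normalized visit counts there.
# Repository weights, flags, and counts below are illustrative assumptions only.

def presence_score(presence_flags, repo_weights):
    # presence_flags[i] is 1 if the ontology is present in repository i, else 0.
    return sum(w * flag for w, flag in zip(repo_weights, presence_flags))

def visits_score(normalized_visits, repo_weights):
    # normalized_visits[i] is the ontology's visit count in repository i, scaled to [0, 1].
    return sum(w * v for w, v in zip(repo_weights, normalized_visits))

def acceptance(presence_flags, normalized_visits,
               w_presence=0.5, w_visits=0.5,
               presence_weights=(1.0,), visit_weights=(1.0,)):
    return (w_presence * presence_score(presence_flags, presence_weights)
            + w_visits * visits_score(normalized_visits, visit_weights))

# Example: an ontology present in one repository (e.g., UMLS) whose traffic in
# another (e.g., BioPortal) is 60% of the maximum observed.
print(acceptance(presence_flags=[1], normalized_visits=[0.6]))  # 0.8
```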
Figure 1 shows the top 20 accepted BioPortal ontologies according to our approach at the time of writing this paper. Estimating the acceptance of an ontology by the community is inherently subjective, but the above ranking shows that our approach provides reasonable results. All ontologies in the ranking are widely known and accepted biomedical ontologies that are used in a variety of projects and applications.
Top 20 BioPortal ontologies according to their acceptance scores. The x-axis shows the acceptance score in the interval [0, 100]. The y-axis shows the ontology acronyms. These acceptance scores were obtained by using UMLS to calculate the presenceScore(o), BioPortal to compute the visitsScore(o), and assigning the same weight to both factors (w_presence = 0.5, w_visits = 0.5)
Ontology detail
Ontologies containing a richer representation for a specific input are potentially more useful to describe the input than less detailed ontologies. As an example, the class melanoma in the Human Disease Ontology contains a definition, two synonyms, and twelve properties. However, the class melanoma from the GALEN ontology does not contain any definition, synonyms, or properties. If a user needs an ontology to represent that concept, the Human Disease Ontology would probably be more useful than the GALEN ontology because of this additional information. An ontology recommender should be able to analyze the level of detail of the classes that cover the input data and to give more or less weight to the ontology according to the degree to which its classes have been specified.
We evaluate the richness of the ontology representation for the input data based on a simplification of the "semantic richness" metric used by BiOSS [25]. For each annotation selected during the coverage evaluation step, we calculate the detail score as follows:
$$ detailScore(a)=\frac{definitionScore(a)+ synonymsScore(a)+ propertiesScore(a)}{3} $$
where detailScore(a) is a value in the interval [0,1] that represents the level of detail provided by the annotation a. This score is based on three functions that evaluate the detail of the knowledge representation according to the number of definitions, synonyms, and other properties of the matched class:
$$ definitionScore(a) = \begin{cases} 1 & \text{if } |D| \ge k_d \\ |D| / k_d & \text{otherwise} \end{cases} $$

$$ synonymsScore(a) = \begin{cases} 1 & \text{if } |S| \ge k_s \\ |S| / k_s & \text{otherwise} \end{cases} $$

$$ propertiesScore(a) = \begin{cases} 1 & \text{if } |P| \ge k_p \\ |P| / k_p & \text{otherwise} \end{cases} $$
where |D|, |S| and |P| are the number of definitions, synonyms, and other properties of the matched class, and k_d, k_s and k_p are predefined constants that represent the number of definitions, synonyms, and other properties, respectively, necessary to obtain the maximum detail score. For example, using k_s = 4 means that, if the class has 4 or more synonyms, it will be assigned the maximum synonyms score, which is 1. If it has fewer than 4 synonyms, for example 3, the synonyms score will be computed proportionally according to the expression above (i.e., 3/4). Finally, the detail score for the ontology is calculated as the sum of the detail scores of the annotations performed with the ontology, normalized to [0,1]:
$$ detail(o,t) = \frac{\sum_{a \in selectedAnnotations(A)} detailScore(a)}{|A|} $$
Example: Suppose that, for the input t = Penicillin is an antibiotic used to treat tonsillitis, there are two ontologies O1 and O2 with the classes shown in Table 4.
Table 4 Example of ontology classes for the input Penicillin is an antibiotic used to treat tonsillitis
Assuming that k_d = 1, k_s = 4 and k_p = 10, the detail score for O1 and O2 would be calculated as follows:
$$ detail(O1,t) = \frac{\left( \frac{1 + 2/4 + 7/10}{3} \right) + \left( \frac{1 + 1 + 1}{3} \right)}{2} = 0.87 $$

$$ detail(O2,t) = \frac{\left( \frac{0 + 1/4 + 3/10}{3} \right) + \left( \frac{0 + 0 + 2/10}{3} \right)}{2} = 0.13 $$
Given that O1 annotates the input with two classes that provide more detailed information than the classes from O2, the detail score for O1 is higher.
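The worked example can be reproduced with the following sketch. The per-class counts of definitions, synonyms, and properties are assumptions chosen to be consistent with the equations above (Table 4 is not reproduced here).

```python
# Sketch of the detail criterion: each matched class contributes a detailScore based
# on its numbers of definitions (|D|), synonyms (|S|) and other properties (|P|),
# capped by the constants k_d, k_s, k_p; the ontology detail is the averaged sum.

K_D, K_S, K_P = 1, 4, 10  # thresholds needed to reach the maximum sub-scores

def detail_score(n_definitions: int, n_synonyms: int, n_properties: int) -> float:
    definition = min(n_definitions / K_D, 1.0)
    synonyms = min(n_synonyms / K_S, 1.0)
    properties = min(n_properties / K_P, 1.0)
    return (definition + synonyms + properties) / 3

def detail(matched_classes) -> float:
    # matched_classes: list of (|D|, |S|, |P|) tuples, one per selected annotation.
    return sum(detail_score(*c) for c in matched_classes) / len(matched_classes)

print(detail([(1, 2, 7), (1, 4, 12)]))  # ≈ 0.87 (the O1 example)
print(detail([(0, 1, 3), (0, 0, 2)]))   # 0.125, reported as ≈ 0.13 for O2 above
```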
Ontology specialization
Some biomedical ontologies aim to represent detailed information about specific subdomains or particular tasks. Examples include the Ontology for Biomedical Investigations [36], the Human Disease Ontology [37] and the Biomedical Resource Ontology [38]. These ontologies are usually much smaller than more general ones, with only several hundred or a few thousand classes, but they provide comprehensive knowledge for their fields.
To evaluate ontology specialization, an ontology recommender needs to quantify the extent to which a candidate ontology fits the specialized nature of the input data. To do that, we reused the evaluation approach applied by the original Ontology Recommender, and adapted it to the new annotation scoring strategy. The specialization score for each candidate ontology is calculated according to the following expression:
$$ specialization(o,t) = norm\left( \frac{\sum_{a \in A} \left( annotationScore2(a) + 2 \cdot hierarchyLevel(a) \right)}{\log_{10}(|o|)} \right) $$
where o is the ontology being evaluated, t is the input text, annotationScore2(a) is the function that calculates the relevance score of an annotation (see Section Ontology coverage), hierarchyLevel(a) returns the level of the matched class in the ontology hierarchy, and A is the set of all the annotations done with the ontology o for the input t. Unlike the coverage and detail criteria, which consider only selectedAnnotations(A), the specialization criterion takes into account all the annotations returned by the Annotator (i.e., A). This is generally appropriate because an ontology that provides multiple annotations for a specific text fragment is likely to be more specialized for that text than an ontology that provides only one annotation for it. The normalization by ontology size aims to assign a higher score to smaller, more specialized ontologies. Applying a logarithmic function decreases the impact of ontologies with a very large size. Finally, the norm function normalizes the score to the interval [0,1].
Using the same hypothetical ontologies, input, and annotations from the previous example, and taking into account the size and annotation details shown in Table 5, the specialization score for O1 and O2 would be calculated as follows:
Table 5 Ontology size and annotation details for the ontologies in Table 4
$$ specialization(O1,t) = norm\left( \frac{(10 + 2 \cdot 5) + (5 + 2 \cdot 3)}{\log_{10}(120000)} \right) = norm\left( \frac{31}{5.08} \right) = norm(6.10) $$

$$ specialization(O2,t) = norm\left( \frac{(5 + 2 \cdot 6) + (10 + 2 \cdot 12)}{\log_{10}(800)} \right) = norm\left( \frac{51}{2.90} \right) = norm(17.59) $$
It is possible to see that the classes from O2 are located deeper in the hierarchy than are those from O1. Also, O2 is a much smaller ontology than O1. As a consequence, according to our ontology-specialization method, O2 would be considered more specialized for the input than O1, and would be assigned a higher specialization score.
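A compact sketch of the specialization computation, assuming each annotation is reduced to its annotationScore2 value and the hierarchy level of the matched class, reproduces the example numbers:

```python
# Sketch of the specialization criterion: annotation scores are boosted by the depth
# of the matched classes in the class hierarchy and divided by log10 of the ontology
# size, so small ontologies with many deep matches obtain the highest raw values.

import math

def specialization_raw(annotations, ontology_size: int) -> float:
    # annotations: (annotation_score2, hierarchy_level) pairs for ALL annotations of the ontology.
    total = sum(score + 2 * level for score, level in annotations)
    return total / math.log10(ontology_size)

# Reproducing the example: O1 has 120,000 classes, O2 only 800.
print(specialization_raw([(10, 5), (5, 3)], 120_000))  # ≈ 6.10
print(specialization_raw([(5, 6), (10, 12)], 800))     # ≈ 17.57 (17.59 above, with the denominator rounded to 2.90)

# The raw values are then normalized to [0, 1] across all candidate ontologies.
```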
Evaluation of ontology sets
When annotating a biomedical text corpus or a list of biomedical keywords, it is often difficult to identify a single ontology that covers all terms. In practice, it is more likely that several ontologies will jointly cover the input [8]. Suppose that a researcher needs to find the best ontologies for a list of biomedical terms. If no single ontology provides acceptable coverage, the system should evaluate different combinations of ontologies and return a ranked list of ontology sets that, together, provide higher coverage. For instance, in our previous example (Penicillin is an antibiotic used to treat tonsillitis), O1 covers the terms penicillin and antibiotic and O2 covers penicillin and tonsillitis. Neither of those ontologies provides full coverage of all the relevant input terms. However, by using O1 and O2 together, it is possible to cover penicillin, antibiotic, and tonsillitis.
Our method to evaluate ontology sets is based on the "ontology combinations" approach used by the BiOSS system [21]. The system generates all possible sets of 2 and 3 candidate ontologies (3 being the default maximum, though users may modify this limit according to their specific needs) and it evaluates them using the criteria presented previously. To improve performance, we use some heuristic optimizations to discard certain ontology sets without performing the full evaluation process for them. For example, a set containing two ontologies that cover exactly the same terms will be immediately discarded because that set's coverage will not be higher than that provided by each ontology individually.
The relevance score for each set of ontologies is calculated using the same approach as for single ontologies, in accordance with the following expression:
$$ scoreSet(O,t) = w_c \cdot coverageSet(O,t) + w_a \cdot acceptanceSet(O) + w_d \cdot detailSet(O,t) + w_s \cdot specializationSet(O,t) $$
where O = {o | o is an ontology} and |O| > 1. The scores for the different evaluation criteria are calculated as follows:
coverageSet: Computed the same way as for a single ontology, but taking into account all the annotations performed with all the ontologies in the ontology set. The system selects the best annotations, and the set's input coverage is computed based on them.
acceptanceSet, detailSet, and specializationSet: For each ontology, the system calculates its coverage contribution (as a percentage) to the set's coverage score. The recommender then uses this contribution to calculate all the other scores proportionally. With this method, the impact (in terms of acceptance, detail and specialization) of a particular ontology on the set score varies according to the coverage provided by that ontology (see the sketch below).
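The set evaluation can be sketched as follows under simplifying assumptions: each candidate ontology is summarized by the set of input terms it covers plus precomputed acceptance, detail, and specialization scores; coverage is approximated by a simple term count; and each member's contribution is approximated as its share of the summed member coverages.

```python
# Sketch of ontology-set evaluation: candidate sets are generated with itertools,
# sets whose members cover exactly the same terms are skipped, and the non-coverage
# criteria are weighted by each member's (approximate) contribution to the coverage.

from itertools import combinations

WEIGHTS = {"c": 0.55, "a": 0.15, "d": 0.15, "s": 0.15}

def evaluate_set(ontologies, total_relevant_terms):
    # ontologies: dicts with 'covered' (set of input terms), 'acceptance', 'detail', 'specialization'.
    covered_union = set().union(*(o["covered"] for o in ontologies))
    coverage_set = len(covered_union) / total_relevant_terms
    summed = sum(len(o["covered"]) for o in ontologies)
    acceptance = detail = specialization = 0.0
    for o in ontologies:
        contribution = len(o["covered"]) / summed  # approximation of the coverage contribution
        acceptance += contribution * o["acceptance"]
        detail += contribution * o["detail"]
        specialization += contribution * o["specialization"]
    return (WEIGHTS["c"] * coverage_set + WEIGHTS["a"] * acceptance
            + WEIGHTS["d"] * detail + WEIGHTS["s"] * specialization)

o1 = {"covered": {"penicillin", "antibiotic"}, "acceptance": 0.8, "detail": 0.87, "specialization": 0.3}
o2 = {"covered": {"penicillin", "tonsillitis"}, "acceptance": 0.5, "detail": 0.13, "specialization": 0.9}

for a, b in combinations([o1, o2], 2):
    if a["covered"] == b["covered"]:
        continue  # heuristic: identical coverage adds nothing over a single ontology
    print(evaluate_set([a, b], total_relevant_terms=3))  # O1 + O2 together cover all three terms
```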
Implementation
Ontology Recommender 2.0 implements the ontology recommendation approach previously described in this paper. Figure 2 shows the architecture of Ontology Recommender 2.0. Like its predecessor, it has two interfaces: a Web service API (Footnote 6), which makes it possible to invoke the recommender programmatically, and a Web-based user interface, which is included in the NCBO BioPortal (Footnote 7).
An overview of the architecture and workflow of Ontology Recommender 2.0. (1) The input data and parameter settings are received through any of the system interfaces (i.e., Web service or Web UI), and are sent to the system's backend. (2) The evaluation process starts. The NCBO Annotator is invoked to retrieve all annotations for the input data. The system uses these annotations to evaluate BioPortal ontologies, one by one, according to four criteria: coverage, acceptance, detail and specialization. Because of the system's modular design, additional evaluation criteria can be easily added. The system uses BioPortal services to retrieve any additional information required by the evaluation process. For example, evaluation of ontology acceptance requires the number of visits to the ontology in BioPortal (pageviews), and checking whether the ontology is present in the Unified Medical Language System (UMLS) or not. Four independent evaluation scores are returned for each ontology (one per evaluation criterion). (3) The scores obtained are combined into a relevance score for the ontology. (4) The relevance scores are used to generate a ranked list of ontologies or ontology sets, which (5) is returned via the corresponding system's interface
The Web-based user interface was developed using the Ruby-on-Rails Web framework and the JavaScript language. Server-side components were implemented using the Ruby language. These components interact with other BioPortal services to retrieve all the information needed to carry out the recommendation process.
The typical workflow is as follows. First, the Ontology Recommender calls the Annotator service to obtain all the annotations performed for the input data using all BioPortal ontologies. Second, for each ontology, it invokes other BioPortal services to obtain the number of classes in the ontology, the number of visits to each ontology in a recent period of time, and to check the presence of the ontology in UMLS. Third, for each annotation performed with the ontology, it makes several calls to retrieve the number of definitions, synonyms and properties of the ontology class involved in the annotation. The system has four independent evaluation modules that use all this information to assess each candidate ontology according to the four evaluation criteria proposed in our approach: coverage, acceptance, detail, and specialization. Because of the system's modular design, new ontology evaluation modules can be easily plugged in.
NCBO provides a Virtual Appliance for communities that want to use the Ontology Recommender locally. This appliance is a pre-installed copy of the NCBO software that users can run and maintain. More information about obtaining and installing the NCBO Virtual Appliance is available at the NCBO Wiki (see Footnote 8).
The system uses a set of predefined parameters to control how the different evaluation scores are calculated, weighted and aggregated. Given that high input coverage is the main requirement for ontology recommendation systems, the weight assigned by default to ontology coverage (0.55) is considerably higher than the weights assigned to ontology acceptance, detail and specialization (0.15 each). Our system uses the same coverage weight as the BiOSS system [21]. The default configuration provides appropriate results for general ontology recommendation scenarios. However, both the Web interface and the REST service allow users to adapt the system to their specific needs by modifying the weights given to coverage, acceptance, knowledge detail, and specialization. The predefined values for all default parameters used by Ontology Recommender 2.0 are provided as an additional file [see Additional file 2].
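For readers who want to call the service programmatically, the following hedged sketch sends a request to the REST endpoint referenced in Footnote 6. The weight and type parameter names (wc, wa, wd, ws, input_type, output_type) are assumptions based on that documentation page and should be verified there; the API key is a placeholder.

```python
# Hedged sketch of a programmatic call to the Ontology Recommender REST endpoint.
# Parameter names other than 'input' and 'apikey' are assumptions taken from the
# documentation referenced in Footnote 6 and should be double-checked before use.

import requests

API_KEY = "YOUR_NCBO_API_KEY"  # placeholder; a valid NCBO API key is required

response = requests.get(
    "http://data.bioontology.org/recommender",
    params={
        "input": "melanoma is a malignant tumor of melanocytes",
        "input_type": 1,   # assumed: 1 = free text, 2 = comma-separated keywords
        "output_type": 1,  # assumed: 1 = ranked ontologies, 2 = ranked ontology sets
        "wc": 0.55, "wa": 0.15, "wd": 0.15, "ws": 0.15,  # assumed weight parameter names
        "apikey": API_KEY,
    },
    timeout=60,
)
results = response.json()
print(len(results), "recommendations returned")
if results:
    print(sorted(results[0].keys()))  # inspect the structure of the first recommendation
```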
Some Ontology Recommender users may need to obtain repeatable results over time. Currently, however, any changes in the BioPortal ontology repository, such as submitting a new ontology or removing an existing one, may change the suggestions returned by the Ontology Recommender for the same inputs. BioPortal services do not provide version-based ontology access, so services such as the Ontology Recommender and the Annotator always run against the latest versions of the ontologies. A possible way of dealing with this shortcoming would be to install the NCBO Virtual Appliance with a particular set of ontologies and keep them locally unaltered.
The Ontology Recommender 2.0 was released in August 2015, as part of BioPortal 4.20 (Footnote 9). The traffic data for 2016 reflects the great interest of the community in the new system, with an average of 45.2 K calls per month to the Ontology Recommender API, and 1.2 K views per month on the Ontology Recommender webpage. These numbers represent an increase of more than 600% in the number of calls to the API over 2015, and of more than 30% in the number of pageviews over 2015. Other widely used BioPortal services are Search, with an average of 873.9 K calls per month to the API and 72.9 K pageviews per month in 2016, and the Annotator, with an average of 484.8 K calls per month to the API and 3 K pageviews per month in 2016. Detailed traffic data for the Ontology Recommender and other top used BioPortal services for the period 2014–2016 is provided as an additional file [see Additional file 1]. The source code is available on GitHub (Footnote 10) under a BSD License.
Figure 3 shows the Ontology Recommender 2.0 user interface. The system supports two input types: plain text and comma-separated keywords. It also provides two kinds of output: ranked ontologies and ranked ontology sets. The advanced options section, which is initially hidden, allows the user to customize (1) the weights applied to the evaluation criteria, (2) the maximum number of ontologies in each set (when using the ontology sets output), and (3) the list of candidate ontologies to be evaluated.
Ontology Recommender 2.0 user interface. The user interface has buttons to select the input type (i.e., text or keywords) and output type (i.e., ontologies and ontology sets). A text area enables the user to enter the input data. The "Get Recommendations" button triggers the execution. The "advanced options" button shows additional settings to customize the recommendation process
Figure 4 shows an example of the system's output when selecting "keywords" as input and "ontologies" as output. For each ontology in the output, the user interface shows its final score, the scores for the four evaluation criteria used, and the number of annotations performed with the ontology on the input. For instance, the most highly recommended ontology in Fig. 4 is the Symptom Ontology (SYMP), which covers 17 of the 21 input keywords. By clicking on the different rows of the column "highlight annotations", the user can select any of the suggested ontologies and see which specific input terms are covered. Also, clicking on a particular term in the input reveals the details of the matched class in BioPortal. All scores are translated from the interval [0, 1] to [0, 100] for better readability. A score of '0' for a given ontology and evaluation criterion means that the ontology has obtained the lowest score compared to the rest of the candidate ontologies. A score of '100' means that the ontology has obtained the highest score, in relation to all the other candidate ontologies.
Example of the "Ontologies" output. The user interface shows the top recommended ontologies. For each ontology, it shows the position of the ontology in the ranking, the ontology acronym, the final recommendation score, the scores for each evaluation criteria (i.e., coverage, acceptance, detail, and specialization), and the number of annotations performed with the ontology. The "highlight annotations" button highlights the input terms covered by the ontology
Figure 5 shows the "Ontology sets" output for the same keywords displayed in Fig. 4. The output shows that using three ontologies (SYMP, SNOMEDCT and MEDDRA) it is possible to cover all the input keywords. Different colors for the input terms and for the recommended ontologies in Fig. 5 distinguish the specific terms covered by each ontology in the selected set.
Example of the "Ontology sets" output. The user interface shows the top recommended ontology sets. For each set, it shows its position in the ranking, the acronyms of the ontologies that belong to it, the final recommendation score, the scores for each evaluation criteria (i.e., coverage, acceptance, detail, and specialization), and the number of annotations performed with all the ontologies in the ontology set. The "highlight annotations" button highlights the input terms covered by the ontology set
One of the shortcomings of the current implementation is that the acceptance score is calculated using data from only two platforms: BioPortal is used to calculate the visits score, and UMLS is used to calculate the presence score. There are other widely known ontology repositories that should be considered as well. We believe that the reliability of the current implementation would be increased by taking into account visits and presence information from additional platforms, such as the OBO Foundry and the Ontology Lookup Service (OLS). Extending our implementation to make use of additional platforms would require a consistent mechanism to check the presence of each candidate ontology in other platforms, as well as a way to access up-to-date traffic data from them.
Another limitation is related to the ability to identify different variations of a particular term. The coverage evaluation metric depends on the annotations identified by the Annotator for the input data. The Annotator deals with synonyms and term inflections (e.g., leukocyte, leukocytes, white blood cell) by using the synonyms contained in the ontology for a particular term. For example, Medical Subject Headings (MeSH) provides 11 synonyms for the term leukocytes, including leukocyte and white blood cells. As a consequence, the Annotator is able to produce an annotation between the input term white blood cells and the MESH term leukocytes. However, not all ontologies provide such a level of detail for their classes, and therefore the Annotator may not be able to perform annotations with them appropriately. The NCBO, in collaboration with the University of Montpellier, is currently investigating several NLP approaches to improve the Annotator service. Applying lemmatization to both the input terms and the dictionary used by the Annotator is one of the methods currently being tested. As soon as these new features are made available in the Annotator, they will automatically be used by the Ontology Recommender.
Results
To evaluate our approach, we compared the performance of Ontology Recommender 2.0 to that of Ontology Recommender 1.0 using data from a variety of well-known public biomedical databases. Examples of these databases are PubMed, which contains bibliographic information for the fields of biomedicine and health; the Gene Expression Omnibus (GEO), which is a repository of gene expression data; and ClinicalTrials.gov, which is a registry of clinical trials. We used the API provided by the NCBO Resource Index (Footnote 11) [39] to programmatically extract data from those databases.
Experiment 1: input coverage
We selected 12 widely known biomedical databases and extracted 600 biomedical texts from them, with 127 words on average, and 600 lists of biomedical keywords, with 17 keywords on average, producing a total of 1200 inputs (100 inputs per database). The databases used are listed in Table 6.
Table 6 Databases used for experiment 1
Given the importance of input coverage, we first executed both systems for all inputs and compared the coverage provided by the top-ranked ontology. We focused on the top-ranked ontology because the majority of users always select the first result obtained [40]. The strategy we used to calculate the ontology coverage differed depending on the input type (a short sketch follows this list):
For texts, the coverage was computed as the percentage of input words covered by the ontology with respect to the total number of words that could be covered using all BioPortal ontologies together.
For keywords, the coverage was computed as the percentage of keywords covered by the ontology divided by the total number of keywords.
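Both computations are simple ratios; the sketch below spells them out, with illustrative numbers only.

```python
# Sketch of the two coverage measures used in this experiment. For texts, the
# denominator is the number of input words coverable by ALL BioPortal ontologies
# together; for keyword lists, it is simply the total number of keywords.

def text_coverage_pct(words_covered_by_ontology: int, words_coverable_by_all: int) -> float:
    return 100.0 * words_covered_by_ontology / words_coverable_by_all

def keyword_coverage_pct(keywords_covered: int, total_keywords: int) -> float:
    return 100.0 * keywords_covered / total_keywords

# Illustrative numbers only; the SYMP example earlier covered 17 of 21 keywords.
print(text_coverage_pct(85, 100))    # 85.0
print(keyword_coverage_pct(17, 21))  # ≈ 81.0
```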
Figures 6 and 7 show a representation of the coverage provided by both systems for each database and input type. Tables 7 and 8 provide a summary of the evaluation results.
Coverage distribution for the first ontology suggested by Ontology Recommender 1.0 (dashed red line) and 2.0 (solid blue line), using the individual ontologies output, for 600 texts extracted from 6 widely known databases (100 texts each). Vertical lines represent the mean coverage provided by the first ontology returned by Ontology Recommender 1.0 (dotted red line) and 2.0 (dashed-dotted blue line). The X-axis indicates the percentage of words covered by the ontology. The Y-axis displays the number of inputs for which a particular coverage percentage was obtained. AUTDB: Autism Database; GEO: Gene Expression Omnibus; GM: ARRS GoldMiner; IDV: Integrated Disease View; PM: PubMed; PMH: PubMed Health Drugs
Coverage distribution for the first ontology suggested by Ontology Recommender 1.0 (dashed red line) and 2.0 (solid blue line), using the individual ontologies output, for 600 lists of keywords extracted from 6 widely known databases (100 lists of keywords each). Vertical lines represent the mean coverage provided by the first ontology returned by Ontology Recommender 1.0 (dotted red line) and 2.0 (dashed-dotted blue line). The X-axis indicates the percentage of input keywords covered by the ontology. The Y-axis displays the number of inputs for which a particular coverage percentage was obtained. AERS: Adverse Event Reporting System; AGDB: AgingGenesDB; CT: ClinicalTrials.gov; DBK: DrugBank; PGGE: PharmGKB-Gene; UPKB: UniProt KB
Table 7 Summary of evaluation results for text inputs
Table 8 Summary of evaluation results for keyword inputs
For some inputs, the first ontology suggested by Ontology Recommender 1.0 provides very low coverage (under 20%). This results from one of the shortcomings previously described: Ontology Recommender 1.0 occasionally assigns a high score to ontologies that provide low coverage because they contain several classes matching the input. The new recommendation approach used by Ontology Recommender 2.0 addresses this problem: Virtually none of its executions provide such low coverage.
For example, Table 9 shows the ontologies recommended if we input the following description of a disease, extracted from the Integrated Disease View (IDV) database: Chronic fatigue syndrome refers to severe, continued tiredness that is not relieved by rest and is not directly caused by other medical conditions. See also: Fatigue. The exact cause of chronic fatigue syndrome (CFS) is unknown. The following may also play a role in the development of CFS: CFS most commonly occurs in women ages 30 to 50.
Table 9 Comparison of the terms covered by Ontology Recommender 1.0 and Ontology Recommender 2.0 for the input text previously shown
Ontology Recommender 1.0 suggests the Bone Dysplasia Ontology (BDO), whereas Ontology Recommender 2.0 suggests the NCI Thesaurus (NCIT). Because BDO covers only 4 of the input terms, while NCIT covers 17, the recommendation provided by Ontology Recommender 2.0 is more appropriate than that of its predecessor.
Ontology Recommender 2.0 also provides better mean coverage for both input types (i.e., text and keywords) across all the biomedical databases included in the evaluation. Compared to Ontology Recommender 1.0, the mean coverage reached using Ontology Recommender 2.0 was 14.9% higher for texts and 19.3% higher for keywords. The increase was even greater using the "ontology sets" output type provided by Ontology Recommender 2.0, which reached a mean coverage of 92.1% for texts (31.3% higher than the Ontology Recommender 1.0 result) and 89.8% for keywords (26.9% higher).
For the selected texts, the average execution time of Ontology Recommender 2.0 for the "ontologies" output is 15.4 s, 43.9% higher than the Ontology Recommender 1.0 execution time (10.7 s). The ontology recommendation process performed by Ontology Recommender 2.0 is much more complex than the one performed by the original version, and this is reflected in the execution times. The average execution time for keywords is similar in both systems (9.5 s for Ontology Recommender 1.0 and 9.4 s for Ontology Recommender 2.0). When dealing with keywords, the more complex process performed by Ontology Recommender 2.0 is compensated for by its ability to discard unnecessary annotations before starting the ontology evaluation process. These execution times are substantially better than those reported for similar systems. For example, the BiOSS system [21] needed an average of 207 s to process 30 keywords with a repository of 200 candidate ontologies. The performance of Ontology Recommender 2.0 is reasonable for general scenarios, where the quality of the suggestions is typically more important than the execution time.
Experiment 2: refining recommendations
Our second experiment set out to examine whether Ontology Recommender 2.0 is effective at discerning how to make meaningful recommendations when ontologies exhibit similar coverage of the input text. Specifically, we were interested in analyzing how the new version uses ontology acceptance, detail and specialization to prioritize the most appropriate ontologies.
We started with the 1200 inputs (600 texts and 600 lists of keywords) from the previous experiment, and selected those inputs for which the two versions of Ontology Recommender suggested different ontologies with similar coverage. We considered two coverage values similar if the difference between them was less than 10%. This yielded a total of 284 inputs (32 input texts and 252 lists of keywords). We executed both systems for those 284 inputs and analyzed the ontologies obtained in terms of their acceptance, detail and specialization scores.
Figure 8 and Table 10 show the results obtained. The ontologies suggested by Ontology Recommender 2.0 have higher acceptance (87.1) and detail scores (72.1) than those suggested by Ontology Recommender 1.0. Importantly, the graphs show peaks of low acceptance (<30%) and detail (<20%) for Ontology Recommender 1.0 that are addressed by Ontology Recommender 2.0.
Acceptance, detail and specialization distribution for the first ontology suggested by Ontology Recommender 1.0 (dashed red line) and 2.0 (solid blue line), for the 284 inputs selected. Vertical lines represent the mean acceptance, detail and specialization scores provided by Ontology Recommender 1.0 (dotted red line) and 2.0 (dashed-dotted blue line). The X-axis indicates the acceptance, detail and specialization score provided by the top ranked ontology. The Y-axis displays the number of inputs for which a particular score was obtained
Table 10 Mean acceptance, detail and specialization scores provided by the two versions of Ontology Recommender for experiment 2
The ontologies suggested by Ontology Recommender 2.0 have, on average, lower specialization scores (65.1) than those suggested by Ontology Recommender 1.0 (95.1). This is an expected result, given that the recommendation approach used by Ontology Recommender 1.0 is based on the relation between the number of annotations provided by each ontology and its size, which is our measure for ontology specialization.
Ontology Recommender 1.0 is better than Ontology Recommender 2.0 at finding small ontologies that provide multiple annotations for the user's input. However, those ontologies are not necessarily the most appropriate to describe the input data. As we have seen (see Section 1.2.1), a large number of annotations does not always indicate a high input coverage. Ontology Recommender 1.0 sometimes suggests ontologies with high specialization scores but with very low input coverage, which makes the ontologies inappropriate for the user's input. The multi-criteria evaluation approach used by Ontology Recommender 2.0 has been designed to address this issue by evaluating ontology specialization in combination with other criteria, including ontology coverage.
Experiment 3: high coverage and specialized ontologies
We set out to evaluate how well Ontology Recommender 2.0 prioritizes recommending small ontologies that provide appropriate coverage for the input data. We created 15 inputs, each of which contained keywords from a very specific domain (e.g., adverse reactions, dermatology, units of measurement), and executed both versions of the Ontology Recommender for those inputs.
Table 11 shows the particular domain for each of the 15 inputs used, and the first ontology suggested by each version of Ontology Recommender, as well as the size of each ontology and the coverage provided.
Table 11 Experiment 3 results
Analysis of the results reveals that Ontology Recommender 2.0 is more effective than Ontology Recommender 1.0 for suggesting specialized ontologies that provide high input coverage. In 9 out of 15 inputs (60%), the first ontology suggested by Ontology Recommender 2.0 is more appropriate, in terms of its size and coverage provided, than the ontology recommended by Ontology Recommender 1.0. Ontology Recommender 2.0 considers input coverage in addition to ontology specialization, which Ontology Recommender 1.0 does not. In addition, Ontology Recommender 2.0 uses a different annotation scoring method (the function annotationScore2(a); see Section 2.1.1) that gives more weight to annotations that cover multi-word terms. There is one input (no. 13), for which the ontology suggested by Ontology Recommender 2.0 provides higher coverage (88% versus 80%), but it is bigger than the ontology recommended by Ontology Recommender 1.0 (324 K classes versus 119 K). In 5 out of 15 inputs (33%), both systems recommended the same ontology.
Discussion
Recommending biomedical ontologies is a challenging task. The great number, size, and complexity of biomedical ontologies, as well as the diversity of user requirements and expectations, make it difficult to identify the most appropriate ontologies to annotate biomedical data. The analysis of the results demonstrates that the ontologies suggested using our new recommendation approach are more appropriate than those recommended using the original method. Our acceptance evaluation method has proved successful for ranking ontologies, and it is currently used not only by the Ontology Recommender, but also by the BioPortal search engine. The classes returned when searching in BioPortal are ordered according to the general acceptance of the ontologies to which they belong.
We note that, because the system is designed in a modular way, it will be easy to add new evaluation criteria to extend its functionality. As a first priority, we intend to improve and extend the evaluation criteria currently used. In addition, we will investigate the effect of extending the Ontology Recommender to include relevant features not yet considered, such as the frequency of an ontology's updates, its levels of abstraction, formality, granularity, and the language in which the ontology is expressed.
Indeed, using metadata information is a simple but often ignored approach to selecting ontologies. Coverage-based approaches often miss relevant results because they focus on the content of ontologies and ignore more general information about the ontology. For example, applying the new Ontology Recommender to the Wikipedia definition of anatomy (Footnote 12) will return some widely known ontologies that contain the terms anatomy, structure, organism and biology, but the Foundational Model of Anatomy (FMA), which is the reference ontology for human anatomy, will not show up in the top 25 results. Our specialization criterion uses the content of the ontology and the ontology size to discriminate between large ontologies and small ontologies that have better specialization. However, ontologies that provide multiple annotations for the input data are not always specialized to deal with the input domain. Sometimes very specialized ontologies for a domain may provide low coverage for a particular text from that domain. In this scenario, metadata about the domain of the ontology (e.g., 'anatomy' in the case of FMA) could be used to enhance our ontology specialization criterion by limiting the suggestions to those ontologies whose domain matches the input data domain. We are currently refining, in collaboration with the Center for Expanded Data Annotation and Retrieval (CEDAR) [41] and the AgroPortal ontology repository [42], the way BioPortal handles metadata for ontologies in order to support even more ontology recommendation scenarios.
Our coverage evaluation approach may be further enhanced by complementing our annotation scoring method (i.e., annotationScore2) with term extraction techniques. We plan to analyze the application of a term extraction measure, called C-value [43], which is specialized for multi-word term extraction, and that has already been applied to the results of the NCBO Annotator, leading to significant improvements [44].
There are some possible avenues for enhancing our assessment of ontology acceptance. These include considering the number of projects that use a specific ontology, the number of mappings created manually that point to a particular ontology, the number of user contributions (e.g., mappings, notes, comments), the metadata available per ontology, and the number, publication date and publication frequency of ontology versions. There are other indicators external to BioPortal that could be useful for performing a more comprehensive evaluation of ontology acceptance, such as the number of Google results when searching for the ontology name or the number of PubMed publications that contain the ontology name [21].
Reusing existing ontologies instead of building new ones from scratch has many benefits, including lowering the time and cost of development, and avoiding duplicate efforts [45]. As shown by a recent study [46], reuse is fairly low in BioPortal, but there are some ontologies that are approaching complete reuse (e.g., Mental Functioning Ontology). Our approach should be able to identify these ontologies and assign them a lower score than those ontologies where the knowledge was first defined. We will study the inclusion of additional evaluation criteria to weigh the amount of original knowledge provided by a particular ontology for the input data.
The current version of Ontology Recommender uses a set of default parameters to control how the different evaluation scores are calculated, weighted and aggregated. These parameters provide acceptable results for general ontology recommendation scenarios, but some users may need to modify the default settings to match their needs. In the future, we would like the system to use an automatic weight adjustment approach. We will investigate whether it is possible to develop methods of adjusting the weights dynamically for specific scenarios.
Ontology Recommender helps to identify all the ontologies that would be suitable for semantic annotation. However, given the number of ontologies in BioPortal, it would be difficult, computationally expensive, and often useless to annotate user inputs with all the ontologies in the repository. Ontology Recommender could function within BioPortal as a means to screen ontologies for use with the NCBO Annotator. Note that the output of the Annotator is a ranked list of annotations performed with multiple ontologies, while the output of the Ontology Recommender is a ranked list of ontologies. A user might be offered the possibility to "Run the Ontology Recommender first" before actually calling the Annotator. Then only the top-ranked ontologies would be used for annotations.
A user-based evaluation would help us understand the system's utility in real-world settings. Our experience evaluating the original Ontology Recommender and BiOSS showed us that obtaining a user-based evaluation of an ontology recommender system is a challenging task. For example, the evaluators of BiOSS reported that they would need at least 50 min to perform a high-quality evaluation of the system for each test case. We plan to investigate whether crowd-sourcing methods, as an alternative, can be useful to evaluate ontology recommendation systems from a user-centered perspective.
Our approach for ontology recommendation was designed for the biomedical field, but it can be adapted to work with ontologies from other domains, so long as there is a resource equivalent to the NCBO Annotator, an API to obtain basic information about all the candidate ontologies and their classes, and alternative resources for extracting information about the acceptance of each ontology. For example, AgroPortal [42] is an ontology repository based on NCBO BioPortal technology. AgroPortal uses Ontology Recommender 2.0 in the context of plant, agronomic and environmental sciences (Footnote 13).
Conclusions
Biomedical ontologies are crucial for representing knowledge and annotating data. However, the large number, complexity, and variety of biomedical ontologies make it difficult for researchers to select the most appropriate ontologies for annotating their data. In this paper, we presented a novel approach for recommending biomedical ontologies. This approach has been implemented as release 2.0 of the NCBO Ontology Recommender, a system that is able to find the best ontologies for a biomedical text or set of keywords. Ontology Recommender 2.0 combines the strengths of its predecessor with a range of adjustments and new features that improve its reliability and usefulness.
Our evaluation shows that, on average, the new system is able to suggest ontologies that provide better input coverage, contain more detailed information, are more specialized, and are more widely accepted than those suggested by the original Ontology Recommender. In addition, the new version is able to evaluate not only individual ontologies, but also different ontology sets, in order to maximize input coverage. The new system can be customized to specific user needs and it provides more explanatory output information than its predecessor, helping users to understand the results returned. The new service, embedded into the NCBO BioPortal, will be a more valuable resource to the community of researchers, scientists, and developers working with ontologies.
Notes
The BioPortal API received 18.8 M calls/month on average in 2016. The BioPortal website received 306.9 K pageviews/month on average in 2016 (see Additional file 1 for more detailed traffic data). The two main BioPortal papers [3, 4] accumulate 923 citations at the time of writing this paper, with 145 citations received in 2016.
http://bioportal.bioontology.org/
At the time of writing this paper, there are 63 citations to the NCBO Ontology Recommender 1.0 paper [6]. The Ontology Recommender 1.0 API received 7.1 K calls/month on average in 2014. The Ontology Recommender webpage received 1.4 K pageviews/month on average in 2014. Detailed traffic data is provided in Additional file 1.
This formula is slightly different from the scoring method presented in the paper describing the original Ontology Recommender Web service [6]. It corresponds to an upgrade done in the recommendation algorithm in December 2011, when BioPortal 3.5 was released, for which description and methodology was never published. The normalization strategy was improved by applying a logarithmic transformation to the ontology size to avoid a negative effect on very large ontologies. Mappings between ontologies, used to favor reference ontologies, were discarded due to the small number of manually created and curated mappings that could be used for such a purpose. The hierarchy-based semantic expansion was replaced by the position of the matched class in the ontology hierarchy.
Footnote 5: The function is called annotationScore2 to differentiate it from the original annotationScore function.
Footnote 6: The API documentation is available at http://data.bioontology.org/documentation#nav_recommender
Footnote 7: The Web-based user interface is available at http://bioportal.bioontology.org/recommender
Footnote 8: https://www.bioontology.org/wiki/index.php/Category:NCBO_Virtual_Appliance
Footnote 9: BioPortal release notes: https://www.bioontology.org/wiki/index.php/BioPortal_Release_Notes
Footnote 10: https://github.com/ncbo/ncbo_ontology_recommender
Footnote 11: The NCBO Resource Index is an ontology-based index that provides access to over 30 million biomedical records from 48 widely-known databases. It is available at: http://bioportal.bioontology.org/.
Footnote 12: https://en.wikipedia.org/wiki/Anatomy
Footnote 13: http://agroportal.lirmm.fr/recommender
BIOMODELS:
BioModels ontology (BIOMODELS)
COSTART:
Coding Symbols for Thesaurus of Adverse Reaction Terms
CPT:
Current Procedural Terminology
CRISP:
Computer Retrieval of Information on Scientific Projects thesaurus
EHDA:
Human Developmental Anatomy Ontology, timed version
EP:
Cardiac Electrophysiology Ontology
FMA:
Foundational Model of Anatomy
HUPSON:
Human Physiology Simulation Ontology
ICD9CM:
International Classification of Diseases, version 9 - Clinical Modification
ICPC:
International Classification of Primary Care
LOINC:
Logical Observation Identifier Names and Codes
MEDDRA:
Medical Dictionary for Regulatory Activities
MEDLINEPLUS:
MedlinePlus Health Topics
MESH:
Medical Subject Headings
MP:
Mammalian Phenotype Ontology
NCIT:
National Cancer Institute Thesaurus
NDDF:
National Drug Data File
NDFRT:
National Drug File - Reference Terminology
OMIM:
Online Mendelian Inheritance in Man
PDQ:
Physician Data Query
RCD:
Read Codes, Clinical Terms version 3
RXNORM:
SNOMEDCT:
Systematized Nomenclature of Medicine - Clinical Terms
SWEET:
Semantic Web for Earth and Environment Technology Ontology
SYMP:
Symptom Ontology
VANDF:
Veterans Health Administration National Drug File
VSO:
Vital Sign Ontology
Blake J. Bio-ontologies-fast and furious. Nat Biotechnol. 2004;22:773–4.
Bodenreider O, Stevens R. Bio-ontologies: Current trends and future directions. Brief Bioinform. 2006;7:256–74.
Whetzel PL, Noy N, Shah N, Alexander P, Dorf M, Fergerson R, Storey MA, Smith B, Chute C, Musen M. BioPortal: ontologies and integrated data resources at the click of a mouse. In: CEUR workshop proceedings, vol. 833. 2011. p. 292–3.
Whetzel PL, Noy NF, Shah NH, Alexander PR, Nyulas C, Tudorache T, Musen MA. BioPortal: Enhanced functionality via new Web services from the National Center for Biomedical Ontology to access and use ontologies in software applications. Nucleic Acids Res. 2011;39(Suppl 2):W541–5.
Rubin DL, Lewis SE, Mungall CJ, Misra S, Westerfield M, Ashburner M, Sim I, Chute CG, Solbrig H, Storey M-A, Smith B, Day-Richter J, Noy NF, Musen MA. National center for biomedical ontology: advancing biomedicine through structured organization of scientific knowledge. OMICS. 2006;10:185–98.
Jonquet C, Musen MA, Shah NH. Building a biomedical ontology recommender web service. J Biomed Semantics. 2010;1(Suppl 1):S1.
Brank J, Grobelnik M, Mladenić D. A survey of ontology evaluation techniques. In: Proceedings of the conference on data mining and data warehouses (SiKDD 2005). 2005.
Sabou M, Lopez V, Motta E. Ontology selection for the real semantic Web: How to cover the Queen's birthday dinner? In: Proceedings of international conference on knowledge engineering and knowledge management. Berlin, Heidelberg: Springer; 2006. p. 96–111.
Cantador I, Fernández M, Castells P. Improving ontology recommendation and reuse in WebCORE by collaborative assessments. In: Work Soc collab constr struct knowl 16th Int world wide Web conf (WWW 2007). 2007.
Gomez-Perez A. Some ideas and examples to evaluate ontologies. In: Proceedings the 11th conference on artificial intelligence for applications (CAIA'94). San Antonio, Texas, USA. 1994.
Gómez-Pérez A. From knowledge based systems to knowledge sharing technology: evaluation and assessment. In: Technical report KSL 94–73. Stanford, CA, USA: Knowledge Systems Laboratory, Stanford University; 1994.
Gruber T. Toward principles for the design of ontologies used for knowledge sharing. In: International workshop on formal ontology. 1993.
Berners-Lee T, Hendler J, Lassila O. The Semantic Web. Scientific American. 2001:34–43.
Finin T, Reddivari P, Cost RS, Sachs J. Swoogle: a search and metadata engine for the semantic Web. In: Proceedings of the 13th ACM conference on Information and Knowledge Management (CIKM '04). 2004. p. 652–9.
Patel C, Supekar K, Lee Y, Park EK. OntoKhoj: a semantic Web portal for ontology searching, ranking, and classification. In: Proceedings of the 5th ACM CIKM international workshop on Web information and data management (WIDM 2003). 2003. p. 58–61.
Zhang Y, Vasconcelos W, Sleeman D. OntoSearch : an ontology search engine. In: Research and development in intelligent systems XXI. London, UK: Springer; 2005. p. 58–69.
Alani H, Noy N, Shah N, Shadbolt N, Musen M. Searching ontologies based on content: Experiments in the biomedical domain. In Proceedings of the 4th international conference on Knowledge capture. New York: ACM; 2007:55–62.
Buitelaar P, Eigner T, Declerck T. OntoSelect: a dynamic ontology library with support for ontology selection. In: Proceedings of the demo session at the international semantic Web conference (ISWC). 2004.
Alani H, Brewster C, Shadbolt N. Ranking ontologies with AKTiveRank. In: Proceedings of the international semantic Web conference (ISWC). Berlin, Heidelberg: Springer; 2006. p. 1–15.
Tartir S, Arpinar I, Moore M, Sheth A, Aleman-Meza B. OntoQA: metric-based ontology quality analysis. In: IEEE work knowl acquis from distrib auton semant heterog data knowl sources. 2005. p. 45–53.
Martínez-Romero M, Vázquez-Naya JM, Pereira J, Pazos A. BiOSS: a system for biomedical ontology selection. Comput Methods Programs Biomed. 2014;114:125–40.
D'Aquin M, Lewen H. Cupboard - a place to expose your ontologies to applications and the community. Lect Notes Comput Sci (including Subser Lect Notes Artif Intell Lect Notes Bioinformatics). 2009;5554 LNCS:913–8.
Tan H, Lambrix P. Selecting an ontology for biomedical text mining. In Proceedings of the Workshop on Current Trends in Biomedical Natural Language Processing (BioNLP '09). 2009;55–62.
Maiga G. A flexible biomedical ontology selection tool. Strength Role ICT Dev. 2009;5:171–89.
Martínez-Romero M, Vázquez-Naya JM, Pereira J, Pazos A. A multi-criteria approach for automatic ontology recommendation using collective knowledge. In Recommender Systems for the Social Web. Volume 32. Berlin: Springer-Verlag; 2012;89–104.
Jonquet C, Shah NH, Cherie H, Musen MA, Callendar C, Storey M-A. NCBO Annotator: semantic annotation of biomedical data. In: International semantic Web conference (ISWC), poster and demo session. 2009.
López-García P, Schulz S, Kern R. Automatic summarization for terminology recommendation : the case of the NCBO ontology recommender. In: 7th international SWAT4LS conference. 2014. p. 1–8.
Sabou M, Lopez V, Motta E, Uren V. Ontology selection: ontology evaluation on the real semantic Web. In: 15th international world wide Web conference (WWW 2006). 2006. p. 23–6.
Spackman KA, Campbell KE, Côté RA. SNOMED RT: a reference terminology for health care. In: Conf proc Am Med informatics assoc annu fall symp. 1997. p. 640–4.
Ashburner M, Ball CA, Blake JA, Botstein D, Butler H, Cherry JM, Davis AP, Dolinski K, Dwight SS, Eppig JT, Harris MA, Hill DP, Issel-Tarver L, Kasarskis A, Lewis S, Matese JC, Richardson JE, Ringwald M, Rubin GM, Sherlock G. Gene ontology: tool for the unification of biology. Nat Genet. 2000;25:25–9.
Bodenreider O. The unified medical language system (UMLS): integrating biomedical terminology. Nucleic Acids Res. 2004;32(Database issue):D267–70.
Smith B, Ashburner M, Rosse C, Bard J, Bug W, Ceusters W, Goldberg LJ, Eilbeck K, Ireland A, Mungall CJ, Leontis N, Rocca-Serra P, Ruttenberg A, Sansone S-A, Scheuermann RH, Shah N, Whetzel PL, Lewis S. The OBO foundry: coordinated evolution of ontologies to support biomedical data integration. Nat Biotechnol. 2007;25:1251–5.
Xiang Z, Mungall C, Ruttenberg A, He Y. Ontobee: a linked data server and browser for ontology terms. In: CEUR workshop proceedings, vol. 833. 2011. p. 279–81.
Côté RG, Jones P, Apweiler R, Hermjakob H. The Ontology Lookup Service, a lightweight cross-platform tool for controlled vocabulary queries. BMC Bioinformatics. 2006;7(1):97.
Hoehndorf R, Slater L, Schofield PN, Gkoutos GV. Aber-OWL: a framework for ontology-based data access in biology. BMC Bioinf. 2015;16:1–9.
Bandrowski A, Brinkman R, Brochhausen M, Brush MH, Bug B, Chibucos MC, Clancy K, Courtot M, Derom D, Dumontier M, Fan L. The ontology for biomedical investigations. PloS one. 2016;11(4):e0154556.
Schriml LM, Arze C, Nadendla S, Chang YWW, Mazaitis M, Felix V, Feng G, Kibbe WA. Disease ontology: A backbone for disease semantic integration. Nucleic Acids Res. 2012;40(D1):D940–6.
Tenenbaum JD, Whetzel PL, Anderson K, Borromeo CD, Dinov ID, Gabriel D, Kirschner B, Mirel B, Morris T, Noy N, Nyulas C, Rubenson D, Saxman PR, Singh H, Whelan N, Wright Z, Athey BD, Becich MJ, Ginsburg GS, Musen MA, Smith KA, Tarantal AF, Rubin DL, Lyster P. The biomedical resource ontology (BRO) to enable resource discovery in clinical and translational research. J Biomed Inform. 2011;44:137–45.
Jonquet C, Lependu P, Falconer S, Coulet A, Noy NF, Musen MA, Shah NH. NCBO resource index: ontology-based search and mining of biomedical resources. J Web Semant. 2011;9:316–24.
Noy NF, Alexander PR, Harpaz R, Whetzel PL, Fergerson RW, Musen MA. Getting lucky in ontology search: a data-driven evaluation framework for ontology ranking. Lect Notes Comput Sci (including Subser Lect Notes Artif Intell Lect Notes Bioinformatics). 2013;8218 LNCS(PART 1):444–59.
Musen MA, Bean CA, Cheung KH, Dumontier M, Durante KA, Gevaert O, Gonzalez-Beltran A, Khatri P, Kleinstein SH, O'Connor MJ, Pouliot Y, Rocca-Serra P, Sansone SA, Wiser JA. The center for expanded data annotation and retrieval. J Am Med Inform Assoc. 2015;22:1148–52.
Jonquet C, Dzalé-Yeumo E, Arnaud E, Larmande P. AgroPortal: a proposition for ontology-based services in the agronomic domain. In: IN-OVIVE'15: 3ème atelier INtégration de sources/masses de données hétérogènes et ontologies, dans le domaine des sciences du VIVant et de l'Environnement. 2015.
Frantzi K, Ananiadou S, Mima H. Automatic recognition of multi-word terms: the C-value/NC-value method. Int J Digit Libr. 2000;3:115–30.
Melzi S, Jonquet C. Scoring semantic annotations returned by the NCBO Annotator. In Proceedings of the 7th International Workshop on Semantic Web Applications and Tools for Life Sciences (SWAT4LS'14). CEURWS; 2014;1320:15.
Hartmann J, Palma R, Gómez-Pérez A. Ontology Repositories. In Handbook on Ontologies; 2009:551–71.
Kamdar MR, Tudorache T, Musen MA. A systematic analysis of term reuse and term overlap across biomedical ontologies.
The authors acknowledge the suggestions about the problem of recommending ontologies provided by the NCBO team, as well as their assistance and advice on integrating Ontology Recommender 2.0 into BioPortal. The authors also thank Simon Walk for his report on the BioPortal traffic data. Natasha Noy and Vanessa Aguiar offered valuable feedback.
This work was supported in part by the National Center for Biomedical Ontology as one of the National Centers for Biomedical Computing, supported by the NHGRI, the NHLBI, and the NIH Common Fund under grant U54 HG004028 from the U.S. National Institutes of Health. Additional support was provided by CEDAR, the Center for Expanded Data Annotation and Retrieval (U54 AI117925) awarded by the National Institute of Allergy and Infectious Diseases through funds provided by the trans-NIH Big Data to Knowledge (BD2K) initiative. This project has also received support from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 701771 and the French National Research Agency (grant ANR-12-JS02-01001).
Project name: The Biomedical Ontology Recommender.
Project home page: http://bioportal.bioontology.org/recommender.
Project GitHub repository: https://github.com/ncbo/ncbo_ontology_recommender.
REST service parameters: http://data.bioontology.org/documentation#nav_recommender.
Operating system(s): Platform independent.
Programming language: Ruby, Javascript, HTML.
Other requirements: none.
License: BSD (http://www.bioontology.org/BSD-license).
Datasets used in our evaluation: https://git.io/vDIXV.
MMR conceived the approach, designed and implemented the system, and drafted the initial manuscript. CJ participated in technical discussions and provided ideas to refine the approach. MAM supervised the work and gave advice and feedback at all stages. CJ, MJO, JG, and AP provided critical revision and edited the manuscript. All authors gave the final approval of the manuscript.
Stanford Center for Biomedical Informatics Research, 1265 Welch Road, Stanford University School of Medicine, Stanford, CA, 94305-5479, USA
Marcos Martínez-Romero, Clement Jonquet, Martin J. O'Connor, John Graybeal & Mark A. Musen
Department of Information and Communication Technologies, Computer Science Building, Elviña Campus, University of A Coruña, 15071, A Coruña, Spain
Alejandro Pazos
Laboratory of Informatics, Robotics and Microelectronics of Montpellier (LIRMM), University of Montpellier, 161 rue Ada, 34095, Montpellier, Cdx 5, France
Clement Jonquet
Correspondence to Marcos Martínez-Romero.
Ontology Recommender traffic summary. Summary of traffic received by the Ontology Recommender for the period 2014–2016, compared to the other most used BioPortal services. (PDF 27 kb)
Default configuration settings. Default values used by the NCBO Ontology Recommender 2.0 for the parameters that control how the different scores are calculated, weighted and aggregated. (PDF 9 kb)
Martínez-Romero, M., Jonquet, C., O'Connor, M.J. et al. NCBO Ontology Recommender 2.0: an enhanced approach for biomedical ontology recommendation. J Biomed Semant 8, 21 (2017) doi:10.1186/s13326-017-0128-y
Ontology selection
Ontology recommendation
Biomedical ontologies
NCBO BioPortal | CommonCrawl |
Faster quantum mixing for slowly evolving sequences of Markov chains
Davide Orsucci1, Hans J. Briegel1,2, and Vedran Dunjko1,3,4
1Institute for Theoretical Physics, University of Innsbruck, Technikerstraße 21a, 6020 Innsbruck, Austria
2Department of Philosophy, University of Konstanz, Fach 17, 78457 Konstanz, Germany
3Max-Planck-Institut für Quantenoptik, Hans-Kopfermann-Str. 1, 85748 Garching, Germany
4LIACS, Leiden University, Niels Bohrweg 1, 2333 CA Leiden, The Netherlands
Markov chain methods are remarkably successful in computational physics, machine learning, and combinatorial optimization. The cost of such methods often reduces to the mixing time, i.e., the time required to reach the steady state of the Markov chain, which scales as $\delta^{-1}$, the inverse of the spectral gap. It has long been conjectured that quantum computers offer nearly generic quadratic improvements for mixing problems. However, except in special cases, quantum algorithms achieve a run-time of $\mathcal{O}(\sqrt{\delta^{-1}} \sqrt{N})$, which introduces a costly dependence on the Markov chain size $N$, not present in the classical case. Here, we re-address the problem of mixing of Markov chains when these form a slowly evolving sequence. This setting is akin to the simulated annealing setting and is commonly encountered in physics, material sciences and machine learning. We provide a quantum memory-efficient algorithm with a run-time of $\mathcal{O}(\sqrt{\delta^{-1}} \sqrt[4]{N})$, neglecting logarithmic terms, which is an important improvement for large state spaces. Moreover, our algorithms output quantum encodings of distributions, which has advantages over classical outputs. Finally, we discuss the run-time bounds of mixing algorithms and show that, under certain assumptions, our algorithms are optimal.
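As a purely classical point of reference for the quantities in the abstract (this is not the quantum algorithm of the paper), the sketch below builds a small reversible Markov chain, computes its spectral gap $\delta$, and simulates mixing to show that the time to approach the stationary distribution is on the order of $\delta^{-1}$. The choice of chain (a lazy walk on a cycle) and the tolerance are arbitrary.

```python
# Classical illustration of mixing time vs. spectral gap on a toy chain.
import numpy as np

def lazy_cycle(n):
    """Transition matrix of a lazy random walk on an n-cycle (reversible, ergodic)."""
    P = np.zeros((n, n))
    for i in range(n):
        P[i, i] = 0.5
        P[i, (i - 1) % n] += 0.25
        P[i, (i + 1) % n] += 0.25
    return P

P = lazy_cycle(20)
eigenvalues = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
delta = 1.0 - eigenvalues[1]                      # spectral gap
pi = np.full(P.shape[0], 1.0 / P.shape[0])        # stationary (uniform) distribution

dist = np.zeros(P.shape[0])
dist[0] = 1.0                                     # start concentrated on one state
steps = 0
while 0.5 * np.abs(dist - pi).sum() > 1e-3:       # total-variation distance
    dist = dist @ P
    steps += 1

print(f"spectral gap = {delta:.4f}, 1/delta = {1/delta:.1f}, steps to mix = {steps}")
```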
@article{Orsucci2018fasterquantummixing,
  doi = {10.22331/q-2018-11-09-105},
  url = {https://doi.org/10.22331/q-2018-11-09-105},
  title = {Faster quantum mixing for slowly evolving sequences of {M}arkov chains},
  author = {Orsucci, Davide and Briegel, Hans J. and Dunjko, Vedran},
  journal = {{Quantum}},
  issn = {2521-327X},
  publisher = {{Verein zur F{\"{o}}rderung des Open Access Publizierens in den Quantenwissenschaften}},
  volume = {2},
  pages = {105},
  month = nov,
  year = {2018}
}
Bootstrap method
From Encyclopedia of Mathematics
A computer-intensive re-sampling method, introduced in statistics by B. Efron in 1979 ([a3]) for estimating the variability of statistical quantities and for setting confidence regions (cf. also sample). The name 'bootstrap' refers to the analogy of pulling oneself up by one's own bootstraps. Efron's bootstrap is to re-sample the data. Given observations $ X_{1},\ldots,X_{n} $, artificial bootstrap samples are drawn with replacement from $ X_{1},\ldots,X_{n} $, putting an equal probability mass of $ \dfrac{1}{n} $ on $ X_{i} $ for each $ i \in \{ 1,\ldots,n \} $. For example, with a sample size of $ n = 5 $ and distinct observations $ X_{1},X_{2},X_{3},X_{4},X_{5} $, one might obtain $ X_{3},X_{3},X_{1},X_{5},X_{4} $ as a bootstrap sample. In fact, there are $ 126 $ distinct bootstrap samples in this case.
A more formal description of Efron's non-parametric bootstrap in a simple setting is as follows. Suppose that $ (X_{1},\ldots,X_{n}) $ is a random sample of size $ n $ from a population with an unknown distribution function $ F_{\theta} $ on the real line, i.e., the $ X_{i} $'s are assumed to be independent and identically-distributed random variables with a common distribution function $ F_{\theta} $ that depends on a real-valued parameter $ \theta $. Let $ T_{n} = {T_{n}}(X_{1},\ldots,X_{n}) $ denote a statistical estimator for $ \theta $, based on the data $ X_{1},\ldots,X_{n} $ (cf. also statistical estimation). The object of interest is then the probability distribution $ G_{n} $ of $ \sqrt{n} (T_{n} - \theta) $ defined by $$ \forall x \in \mathbf{R}: \qquad {G_{n}}(x) \stackrel{\text{df}}{=} {\mathsf{P}_{\theta}}(\sqrt{n} (T_{n} - \theta) \leq x), $$ which is the exact distribution function of $ T_{n} $ properly normalized. The scaling factor $ \sqrt{n} $ is a classical one, while the centering of $ T_{n} $ is by the parameter $ \theta $. Here, $ \mathsf{P}_{\theta} $ denotes the probability measure corresponding to $ F_{\theta} $.
Efron's non-parametric bootstrap estimator of $ G_{n} $ is now defined by $$ \forall x \in \mathbf{R}: \qquad {G_{n}^{\ast}}(x) \stackrel{\text{df}}{=} {\mathsf{P}_{n}^{\ast}}(\sqrt{n} (T_{n}^{\ast} - \theta_{n}) \leq x). $$ Here, $ T_{n}^{\ast} = {T_{n}}(X_{1}^{\ast},\ldots,X_{n}^{\ast}) $, where $ (X_{1}^{\ast},\ldots,X_{n}^{\ast}) $ denotes an artificial random sample (the bootstrap sample) from $ \hat{F}_{n} $, the empirical distribution function of the original observations $ (X_{1},\ldots,X_{n}) $, and $ \theta_{n} = \theta \! \left( \hat{F}_{n} \right) $. Note that $ \hat{F}_{n} $ is the random distribution (a step function) that puts a probability mass of $ \dfrac{1}{n} $ on $ X_{i} $ for each $ i \in \{ 1,\ldots,n \} $, sometimes referred to as the re-sampling distribution; $ \mathsf{P}_{n}^{\ast} $ denotes the probability measure corresponding to $ \hat{F}_{n} $, conditionally given $ \hat{F}_{n} $, i.e., given the observations $ X_{1},\ldots,X_{n} $. Obviously, given the observed values $ X_{1},\ldots,X_{n} $ in the sample, $ \hat{F}_{n} $ is completely known and (at least in principle) $ G_{n}^{\ast} $ is also completely known. One may view $ G_{n}^{\ast} $ as the empirical counterpart in the 'bootstrap world' to $ G_{n} $ in the 'real world'. In practice, an exact computation of $ G_{n}^{\ast} $ is usually impossible (for a sample $ X_{1},\ldots,X_{n} $ of $ n $ distinct numbers, there are $ \displaystyle \binom{2 n - 1}{n} $ distinct bootstrap samples), but $ G_{n}^{\ast} $ can be approximated by means of Monte-Carlo simulation. Efficient bootstrap simulation is discussed, for example, in [a2] and [a10].
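The Monte-Carlo approximation mentioned above is straightforward to sketch for the simplest case, the sample mean (so that $ T_{n} = \bar{X}_{n} $ and $ \theta_{n} $ is the mean of $ \hat{F}_{n} $). The data-generating distribution, sample size and number of resamples below are arbitrary choices made for illustration.

```python
# Monte-Carlo approximation of the bootstrap distribution G_n* for the sample mean:
# resample with replacement from the empirical distribution and collect
# sqrt(n) * (T_n* - theta_n).
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=50)   # the observed sample X_1, ..., X_n
n = x.size
theta_n = x.mean()                        # theta(F_n): here, the mean of F_n

B = 2000                                  # number of bootstrap resamples
boot_stats = np.empty(B)
for b in range(B):
    x_star = rng.choice(x, size=n, replace=True)      # bootstrap sample from F_n
    boot_stats[b] = np.sqrt(n) * (x_star.mean() - theta_n)

# G_n*(x) is approximated by the empirical CDF of boot_stats; for example,
# a basic 95% bootstrap confidence interval for the mean:
q_lo, q_hi = np.quantile(boot_stats, [0.025, 0.975])
print("basic 95% CI:", theta_n - q_hi / np.sqrt(n), theta_n - q_lo / np.sqrt(n))
```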
When does Efron's bootstrap work? The consistency of the bootstrap approximation $ G_{n}^{\ast} $, viewed as an estimate of $ G_{n} $ — i.e., one requires $$ \sup_{x \in \mathbf{R}} \left| {G_{n}}(x) - {G_{n}^{\ast}}(x) \right| \to 0, \quad \text{as} ~ n \to \infty, $$ to hold in $ \mathsf{P} $-probability — is generally viewed as an absolute prerequisite for Efron's bootstrap to work in the problem at hand. Of course, bootstrap consistency is only a first-order asymptotic result, and the error committed when $ G_{n} $ is estimated by $ G_{n}^{\ast} $ may still be quite large in finite samples. Second-order asymptotics (cf. Edgeworth series) enables one to investigate the speed at which $ \displaystyle \sup_{x \in \mathbf{R}} \left| {G_{n}}(x) - {G_{n}^{\ast}}(x) \right| $ approaches $ 0 $, and also to identify cases where the rate of convergence is faster than $ \dfrac{1}{\sqrt{n}} $ — the classical Berry–Esseen-type rate for the normal approximation. An example in which the bootstrap possesses the beneficial property of being more accurate than the traditional normal approximation is the Student $ t $-statistic and, more generally, Studentized statistics. For this reason, the use of bootstrapped Studentized statistics for setting confidence intervals is strongly advocated in a number of important problems. A general reference is [a7].
When does the bootstrap fail? It has been proved in [a1] that in the case of the mean, Efron's bootstrap fails when $ F $ is in the domain of attraction of an $ \alpha $-stable law with $ 0 < \alpha < 2 $. However, by re-sampling from $ \hat{F}_{n} $ with a (smaller) re-sample size $ m $ that satisfies $ m = m(n) \to \infty $ and $ \dfrac{m(n)}{n} \to 0 $, it can be shown that the (modified) bootstrap works. More generally, in recent years, the importance of a proper choice of the re-sampling distribution has become clear (see [a5], [a9] and [a10]).
The bootstrap can be an effective tool in many problems of statistical inference, for example, the construction of a confidence band in non-parametric regression, testing for the number of modes of a density, or the calibration of confidence bounds (see [a2], [a4] and [a8]). Re-sampling methods for dependent data, such as the block bootstrap, is another important topic of recent research (see [a2] and [a6]).
[a1] K.B. Athreya, "Bootstrap of the mean in the infinite variance case", Ann. Statist., 15 (1987), pp. 724–731.
[a2] A.C. Davison, D.V. Hinkley, "Bootstrap methods and their application", Cambridge Univ. Press (1997).
[a3] B. Efron, "Bootstrap methods: another look at the jackknife", Ann. Statist., 7 (1979), pp. 1–26.
[a4] B. Efron, R.J. Tibshirani, "An introduction to the bootstrap", Chapman&Hall (1993).
[a5] E. Giné, "Lectures on some aspects of the bootstrap", P. Bernard (ed.), Ecole d'Eté de Probab. Saint Flour XXVI-1996, Lecture Notes Math., 1665, Springer (1997).
[a6] F. Götze, H.R. Künsch, "Second order correctness of the blockwise bootstrap for stationary observations", Ann. Statist., 24 (1996), pp. 1914–1933.
[a7] P. Hall, "The bootstrap and Edgeworth expansion", Springer (1992).
[a8] E. Mammen, "When does bootstrap work? Asymptotic results and simulations", Lecture Notes Statist., 77, Springer (1992).
[a9] H. Putter, W.R. van Zwet, "Resampling: consistency of substitution estimators", Ann. Statist., 24 (1996), pp. 2297–2318.
[a10] J. Shao, D. Tu, "The jackknife and bootstrap", Springer (1995).
Bootstrap method. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Bootstrap_method&oldid=41099
This article was adapted from an original article by Roelof Helmers (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
The U.S. Centers for Disease Control and Prevention estimates that gastrointestinal diseases affect between 60 and 70 million Americans every year. This translates into tens of millions of endoscopy procedures. Millions of colonoscopy procedures are also performed to diagnose or screen for colorectal cancers. Conventional, rigid scopes used for these procedures are uncomfortable for patients and may cause internal bruising or lead to infection because of reuse on different patients. Smart pills eliminate the need for invasive procedures: wireless communication allows the transmission of real-time information; advances in batteries and on-board memory make them useful for long-term sensing from within the body. The key application areas of smart pills are discussed below.
But he has also seen patients whose propensity for self-experimentation to improve cognition got out of hand. One chief executive he treated, Ngo said, developed an unhealthy predilection for albuterol, because he felt the asthma inhaler medicine kept him alert and productive long after others had quit working. Unfortunately, the drug ended up severely imbalancing his electrolytes, which can lead to dehydration, headaches, vision and cardiac problems, muscle contractions and, in extreme cases, seizures.
This would be a very time-consuming experiment. Any attempt to combine this with other experiments by ANOVA would probably push the end-date out by months, and one would start to be seriously concerned that changes caused by aging or environmental factors would contaminate the results. A 5-year experiment with 7-month intervals will probably eat up 5+ hours to prepare <12,000 pills (active & placebo); each switch and test of mental functioning will probably eat up another hour, for 32 hours. (And what test maintains validity with no practice effects over 5 years? Dual n-back would be unusable because of improvements to WM over that period.) Add in an hour for analysis & writeup, and that suggests >38 hours of work, and 38 × 7.25 = 275.5. 12,000 pills is roughly $12.80 per thousand or $154; 120 potassium iodide pills is ~$9, so (365.25 / 120) × 9 × 5 = 137.
Take at 10 AM; seem a bit more active but that could just be the pressure of the holiday season combined with my nice clean desk. I do the chores without too much issue and make progress on other things, but nothing major; I survive going to The Sitter without too much tiredness, so ultimately I decide to give the palm to it being active, but only with 60% confidence. I check the next day, and it was placebo. Oops.
There is a similar substance which can be purchased legally almost anywhere in the world called adrafinil. This is a prodrug for modafinil. You can take it, and then the body will metabolize it into modafinil, providing similar beneficial effects. Unfortunately, it takes longer for adrafinil to kick in—about an hour—rather than a matter of minutes. In addition, there are more potential side-effects to taking the prodrug as compared to the actual drug.
On the other metric, suppose we removed the creatine? Dropping 4 grams of material means we only need to consume 5.75 grams a day, covered by 8 pills (compared to 13 pills). We save 5,000 pills, which would have cost $45 and also don't spend the $68 for the creatine; assuming a modafinil formulation, that drops our $1761 down to $1648 or $1.65 a day. Or we could remove both the creatine and modafinil, for a grand total of $848 or $0.85 a day, which is pretty reasonable.
Neuroplasticity, or the brain's ability to change and reorganize itself in response to intrinsic and extrinsic factors, indicates great potential for us to enhance brain function by medical or other interventions. Psychotherapy has been shown to induce structural changes in the brain. Other interventions that positively influence neuroplasticity include meditation, mindfulness, and compassion.
Evidence in support of the neuroprotective effects of flavonoids has increased significantly in recent years, although to date much of this evidence has emerged from animal rather than human studies. Nonetheless, with a view to making recommendations for future good practice, we review 15 existing human dietary intervention studies that have examined the effects of particular types of flavonoid on cognitive performance. The studies employed a total of 55 different cognitive tests covering a broad range of cognitive domains. Most studies incorporated at least one measure of executive function/working memory, with nine reporting significant improvements in performance as a function of flavonoid supplementation compared to a control group. However, some domains were overlooked completely (e.g. implicit memory, prospective memory), and for the most part there was little consistency in terms of the particular cognitive tests used making across study comparisons difficult. Furthermore, there was some confusion concerning what aspects of cognitive function particular tests were actually measuring. Overall, while initial results are encouraging, future studies need to pay careful attention when selecting cognitive measures, especially in terms of ensuring that tasks are actually sensitive enough to detect treatment effects.
Not included in the list below are prescription psychostimulants such as Adderall and Ritalin. Non-medical, illicit use of these drugs for the purpose of cognitive enhancement in healthy individuals comes with a high cost, including addiction and other adverse effects. Although these drugs are prescribed for those with attention deficit hyperactivity disorder (ADHD) to help with focus, attention and other cognitive functions, they have been shown to in fact impair these same functions when used for non-medical purposes. More alarming, when taken in high doses, they have the potential to induce psychosis.
There is no official data on their usage, but nootropics as well as other smart drugs appear popular in the Silicon Valley. "I would say that most tech companies will have at least one person on something," says Noehr. It is a hotbed of interest because it is a mentally competitive environment, says Jesse Lawler, an LA-based software developer and nootropics enthusiast who produces the podcast Smart Drug Smarts. "They really see this as translating into dollars." But Silicon Valley types also do care about safely enhancing their most prized asset – their brains – which can give nootropics an added appeal, he says.
Power times prior times benefit minus cost of experimentation: (0.20 × 0.30 × 540) - 41 = -9. So the VoI is negative: because my default is that fish oil works and I am taking it, weak information that it doesn't work isn't enough. If the power calculation were giving us 40% reliable information, then the chance of learning I should drop fish oil is improved enough to make the experiment worthwhile (going from 20% to 40% switches the value from -$9 to +$23.8).
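The same arithmetic, written out as a sketch so the break-even point is explicit (the $540 benefit, $41 cost, and 30% prior are the figures used above):

```python
# Value-of-information arithmetic: the experiment is worth running only when
# power * prior * benefit exceeds its cost.
def value_of_information(power, prior, benefit, cost):
    return power * prior * benefit - cost

benefit, cost, prior = 540, 41, 0.30
print(value_of_information(0.20, prior, benefit, cost))   # -8.6  -> about -$9
print(value_of_information(0.40, prior, benefit, cost))   # 23.8  -> about +$24
print("break-even power:", cost / (prior * benefit))      # ~0.25
```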
I have also tried to get in contact with senior executives who have experience with these drugs (either themselves or in their firms), but without success. I have to wonder: Are they completely unaware of the drugs' existence? Or are they actively suppressing the issue? For now, companies can ignore the use of smart drugs. And executives can pretend as if these drugs don't exist in their workplaces. But they can't do it forever.
Swanson J, Arnold LE, Kraemer H, Hechtman L, Molina B, Hinshaw S, Wigal T. Evidence, interpretation and qualification from multiple reports of long-term outcomes in the Multimodal Treatment Study of Children With ADHD (MTA): Part II. Supporting details. Journal of Attention Disorders. 2008;12:15–43. doi: 10.1177/1087054708319525.
I decided to try out day-time usage on 2 consecutive days, taking the 100mg at noon or 1 PM. On both days, I thought I did feel more energetic but nothing extraordinary (maybe not even as strong as the nicotine), and I had trouble falling asleep on Halloween, thinking about the meta-ethics essay I had been writing diligently on both days. Not a good use compared to staying up a night.
How exactly – and if – nootropics work varies widely. Some may work, for example, by strengthening certain brain pathways for neurotransmitters like dopamine, which is involved in motivation, Barbour says. Others aim to boost blood flow – and therefore funnel nutrients – to the brain to support cell growth and regeneration. Others protect brain cells and connections from inflammation, which is believed to be a factor in conditions like Alzheimer's, Barbour explains. Still others boost metabolism or pack in vitamins that may help protect the brain and the rest of the nervous system, explains Dr. Anna Hohler, an associate professor of neurology at Boston University School of Medicine and a fellow of the American Academy of Neurology.
Powders are good for experimenting with (easy to vary doses and mix), but not so good for regular taking. I use OO gel capsules with a Capsule Machine: it's hard to beat $20, it works, it's not that messy after practice, and it's not too bad to do 100 pills. However, I once did 3kg of piracetam + my other powders, and doing that nearly burned me out on ever using capsules again. If you're going to do that much, something more automated is a serious question! (What actually wound up infuriating me the most was when capsules would stick in either the bottom or top tray - requiring you to very gingerly pull and twist them out, lest the two halves slip and spill powder - or when the two halves wouldn't lock and you had to join them by hand. In contrast: loading the gel caps could be done automatically without looking, after some experience.)
It arrived as described, a little bottle around the volume of a soda can. I had handy a plastic syringe with milliliter units which I used to measure out the nicotine-water into my tea. I began with half a ml the first day, 1ml the second day, and 2ml the third day. (My Zeo sleep scores were 85/103/86 (▁▇▁), and the latter had a feline explanation; these values are within normal variation for me, so if nicotine affects my sleep, it does so to a lesser extent than Adderall.) Subjectively, it's hard to describe. At half a ml, I didn't really notice anything; at 1 and 2ml, I thought I began to notice it - sort of a cleaner caffeine. It's nice so far. It's not as strong as I expected. I looked into whether the boiling water might be breaking it down, but the answer seems to be no - boiling tobacco is a standard way to extract nicotine, actually, and nicotine's own boiling point is much higher than water; nor do I notice a drastic difference when I take it in ordinary water. And according to various e-cigarette sources, the liquid should be good for at least a year.
Another common working memory task is the n-back task, which requires the subject to view a series of items (usually letters) and decide whether the current item is identical to the one presented n items back. This task taxes working memory because the previous items must be held in working memory to be compared with the current item. The easiest version of this is a 1-back task, which is also called a double continuous performance task (CPT) because the subject is continuously monitoring for a repeat or double. Three studies examined the effects of MPH on working memory ability as measured by the 1-back task, and all found enhancement of performance in the form of reduced errors of omission (Cooper et al., 2005; Klorman et al., 1984; Strauss et al., 1984). Fleming et al. (1995) tested the effects of d-AMP on a 5-min CPT and found a decrease in reaction time, but did not specify which version of the CPT was used.
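The n-back scoring rule described here is simple to state in code; the sketch below marks which positions in a letter stream count as targets (identical to the item presented n positions earlier), with the example stream chosen arbitrarily.

```python
# n-back rule: item i is a target exactly when it equals the item n positions
# earlier; n = 1 gives the 1-back task (the "double" continuous performance task).
def nback_targets(stream, n):
    return [i for i in range(n, len(stream)) if stream[i] == stream[i - n]]

letters = list("ABBCACCA")
print(nback_targets(letters, 1))  # [2, 6] -> the repeated B and the repeated C
print(nback_targets(letters, 2))  # positions that match the item two back
```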
These are quite abstract concepts, though. There is a large gap, a grey area in between these concepts and our knowledge of how the brain functions physiologically – and it's in this grey area that cognitive enhancer development has to operate. Amy Arnsten, Professor of Neurobiology at Yale Medical School, is investigating how the cells in the brain work together to produce our higher cognition and executive function, which she describes as "being able to think about things that aren't currently stimulating your senses, the fundamentals of abstraction. This involves mental representations of our goals for the future, even if it's the future in just a few seconds."
Smart Pill is formulated with herbs, amino acids, vitamins and co-factors to provide nourishment for the brain, which may enhance memory, cognitive function, and clarity. In a natural base containing potent standardized extract (24% flavonoid glycosides). Fast-acting, super-potent formula. A unique formulation containing a blend of essential nutrients, herbs and co-factors.
However, when I didn't stack it with Choline, I would get what users call "racetam headaches." Choline, as Patel explains, is not a true nootropic, but it's still a pro-cognitive compound that many take with other nootropics in a stack. It's an essential nutrient that humans need for functions like memory and muscle control, but we can't produce it, and many Americans don't get enough of it. The headaches I got weren't terribly painful, but they were uncomfortable enough that I stopped taking Piracetam on its own. Even without the headache, though, I didn't really like the level of focus Piracetam gave me. I didn't feel present when I used it, even when I tried to mix in caffeine and L-theanine. And while it seemed like I could focus and do my work faster, I was making more small mistakes in my writing, like skipping words. Essentially, it felt like my brain was moving faster than I could.
On the other hand, sometimes you'll feel a great cognitive boost as soon as you take a pill. That can be a good thing or a bad thing. I find, for example, that modafinil makes you more of what you already are. That means if you are already kind of a dick and you take modafinil, you might act like a really big dick and regret it. It certainly happened to me! I like to think that I've done enough hacking of my brain that I've gotten over that programming… and that when I use nootropics they help me help people. | CommonCrawl |
Communications on Pure & Applied Analysis
September 2015, Volume 14, Issue 5
Some new regularity results of pullback attractors for 2D Navier-Stokes equations with delays
Julia García-Luengo, Pedro Marín-Rubio and José Real
2015, 14(5): 1603-1621. doi: 10.3934/cpaa.2015.14.1603
In this paper we strengthen some results on the existence and properties of pullback attractors for a 2D Navier-Stokes model with finite delay formulated in [Caraballo and Real, J. Differential Equations 205 (2004), 271--297]. Actually, we prove that under suitable assumptions, pullback attractors not only of fixed bounded sets but also of a set of tempered universes do exist. Moreover, thanks to regularity results, the attraction from different phase spaces also happens in $C([-h,0];V)$. Finally, from comparison results of attractors, and under an additional hypothesis, we establish that all these families of attractors are in fact the same object.
Julia García-Luengo, Pedro Marín-Rubio, José Real. Some new regularity results of pullback attractors for 2D Navier-Stokes equations with delays. Communications on Pure & Applied Analysis, 2015, 14(5): 1603-1621. doi: 10.3934/cpaa.2015.14.1603.
Shape optimization in compressible liquid crystals
Wenya Ma, Yihang Hao and Xiangao Liu
The shape optimization problem for the profile in compressible liquid crystals is considered in this paper. We prove that the optimal shape with minimal volume is attainable in an appropriate class of admissible profiles which subjects to a constraint on the thickness of the boundary. Such consequence is mainly obtained from the well-known weak sequential compactness method (see [25]).
Wenya Ma, Yihang Hao, Xiangao Liu. Shape optimization in compressible liquid crystals. Communications on Pure & Applied Analysis, 2015, 14(5): 1623-1639. doi: 10.3934/cpaa.2015.14.1623.
Sharp threshold for scattering of a generalized Davey-Stewartson system in three dimension
Jing Lu and Yifei Wu
In this paper, we consider the Cauchy problem for the generalized Davey-Stewartson system \begin{eqnarray} &i\partial_t u + \Delta u =-a|u|^{p-1}u+b_1uv_{x_1}, (t,x)\in R \times R^3,\\ &-\Delta v=b_2(|u|^2)_{x_1}, \end{eqnarray} where $a>0,b_1b_2>0$, $\frac{4}{3}+1< p< 5$. We first use a variational approach to give a dichotomy of blow-up and scattering for the solution of mass supercritical equation with the initial data satisfying $J(u_0)
Jing Lu, Yifei Wu. Sharp threshold for scattering of a generalized Davey-Stewartson system in three dimension. Communications on Pure & Applied Analysis, 2015, 14(5): 1641-1670. doi: 10.3934/cpaa.2015.14.1641.
On global solutions in one-dimensional thermoelasticity with second sound in the half line
Yuxi Hu and Na Wang
In this paper, we investigate the initial boundary value problem for one-dimensional thermoelasticity with second sound in the half line. By using delicate energy estimates, together with a special form of Helmholtz free energy, we are able to show the global solutions exist under the Dirichlet boundary condition if the initial data are sufficient small.
Yuxi Hu, Na Wang. On global solutions in one-dimensional thermoelasticity with second sound in the half line. Communications on Pure & Applied Analysis, 2015, 14(5): 1671-1683. doi: 10.3934/cpaa.2015.14.1671.
Finite-dimensional global attractors for parabolic nonlinear equations with state-dependent delay
Igor Chueshov and Alexander V. Rezounenko
We deal with a class of parabolic nonlinear evolution equations with state-dependent delay. This class covers several important PDE models arising in biology. We first prove well-posedness in a certain space of functions which are Lipschitz in time. This allows us to show that the model considered generates an evolution operator semigroup $S_t$ on a certain space of Lipschitz type functions over delay time interval. The operators $S_t$ are closed for all $t\ge 0$ and continuous for $t$ large enough. Our main result shows that the semigroup $S_t$ possesses compact global and exponential attractors of finite fractal dimension. Our argument is based on the recently developed method of quasi-stability estimates and involves some extension of the theory of global attractors for the case of closed evolutions.
Igor Chueshov, Alexander V. Rezounenko. Finite-dimensional global attractors for parabolic nonlinear equations with state-dependent delay. Communications on Pure & Applied Analysis, 2015, 14(5): 1685-1704. doi: 10.3934/cpaa.2015.14.1685.
Optimal polynomial blow up range for critical wave maps
Can Gao and Joachim Krieger
We prove that the critical Wave Maps equation with target $S^2$ and origin $R^{2+1}$ admits energy class blow up solutions of the form \begin{eqnarray} u(t, r) = Q(\lambda(t)r) + \varepsilon(t, r) \end{eqnarray} where $Q:R^2\rightarrow S^2$ is the ground state harmonic map and $\lambda(t) = t^{-1-\nu}$ for any $\nu>0$. This extends the work [14], where such solutions were constructed under the assumption $\nu>\frac{1}{2}$. In light of a result of Struwe [23], our result is optimal for polynomial blow up rates.
Can Gao, Joachim Krieger. Optimal polynomial blow up range for critical wave maps. Communications on Pure & Applied Analysis, 2015, 14(5): 1705-1741. doi: 10.3934/cpaa.2015.14.1705.
On the uniqueness of nonnegative solutions of differential inequalities with gradient terms on Riemannian manifolds
Yuhua Sun
We investigate the uniqueness of nonnegative solutions to the following differential inequality \begin{eqnarray} div(A(x)|\nabla u|^{m-2}\nabla u)+V(x)u^{\sigma_1}|\nabla u|^{\sigma_2}\leq0, \tag{1} \end{eqnarray} on a noncompact complete Riemannian manifold, where $A, V$ are positive measurable functions, $m>1$, and $\sigma_1$, $\sigma_2\geq0$ are parameters such that $\sigma_1+\sigma_2>m-1$.
Our purpose is to establish the uniqueness of nonnegative solution to (1) via very natural geometric assumption on volume growth.
Yuhua Sun. On the uniqueness of nonnegative solutions of differential inequalities with gradient terms on Riemannian manifolds. Communications on Pure & Applied Analysis, 2015, 14(5): 1743-1757. doi: 10.3934/cpaa.2015.14.1743.
Asymptotic profiles for a strongly damped plate equation with lower order perturbation
Ryo Ikehata and Marina Soga
We consider the Cauchy problem in $ R^n$ for a strongly damped plate equation with a lower oder perturbation. We derive asymptotic profiles of solutions with weighted $L^{1,\gamma}(R^n)$ initial velocity by using a new method introduced in [7].
Ryo Ikehata, Marina Soga. Asymptotic profiles for a strongly damped plate equation with lower order perturbation. Communications on Pure & Applied Analysis, 2015, 14(5): 1759-1780. doi: 10.3934/cpaa.2015.14.1759.
Finite dimensional global attractor for a Bose-Einstein equation in a two dimensional unbounded domain
Brahim Alouini
We study the long-time behavior of the solutions to a nonlinear damped driven Schrödinger type equation with quadratic potential on a strip. We prove that this behavior is described by a regular compact global attractor with finite fractal dimension.
Brahim Alouini. Finite dimensional global attractor for a Bose-Einstein equation in a two dimensional unbounded domain. Communications on Pure & Applied Analysis, 2015, 14(5): 1781-1801. doi: 10.3934/cpaa.2015.14.1781.
Positive solution for quasilinear Schrödinger equations with a parameter
GUANGBING LI
In this paper, we study the following quasilinear Schrödinger equations of the form \begin{eqnarray} -\Delta u+V(x)u-[\Delta(1+u^2)^{\alpha/2}]\frac{\alpha u}{2(1+u^2)^{(2-\alpha)/2}}=\mathrm{g}(x,u), \end{eqnarray} where $1 \le \alpha \le 2$, $N \ge 3$, $V\in C(R^N, R)$ and $\mathrm{g}\in C(R^N\times R, R)$. By using a change of variables, we get new equations, whose associated functionals are well defined in $H^1(R^N)$ and satisfy the geometric hypotheses of the mountain pass theorem. Using special techniques, the existence of positive solutions is established.
GUANGBING LI. Positive solution for quasilinear Schrödinger equations with a parameter. Communications on Pure & Applied Analysis, 2015, 14(5): 1803-1816. doi: 10.3934/cpaa.2015.14.1803.
On the initial value problem of fractional stochastic evolution equations in Hilbert spaces
Pengyu Chen, Yongxiang Li and Xuping Zhang
In this article, we are concerned with the initial value problem of fractional stochastic evolution equations in real separable Hilbert spaces. The existence of saturated mild solutions and global mild solutions is obtained under the situation that the nonlinear term satisfies some appropriate growth conditions by using $\alpha$-order fractional resolvent operator theory, the Schauder fixed point theorem and piecewise extension method. Furthermore, the continuous dependence of mild solutions on initial values and orders as well as the asymptotical stability in $p$-th moment of mild solutions for the studied problem have also been discussed. The results obtained in this paper improve and extend some related conclusions on this topic. An example is also given to illustrate the feasibility of our abstract results.
Pengyu Chen, Yongxiang Li, Xuping Zhang. On the initial value problem of fractional stochastic evolution equations in Hilbert spaces. Communications on Pure & Applied Analysis, 2015, 14(5): 1817-1840. doi: 10.3934/cpaa.2015.14.1817.
Approximation schemes for non-linear second order equations on the Heisenberg group
In this work, we propose and analyse approximation schemes for fully non-linear second order partial differential equations defined on the Heisenberg group. We prove that a consistent, stable and monotone scheme converges to a viscosity solution of a second order PDE on the Heisenberg group provided that a comparison principle holds for the limiting equation. We also provide examples where this technique is applied.
Pablo Ochoa. Approximation schemes for non-linear second order equations on the Heisenberg group. Communications on Pure & Applied Analysis, 2015, 14(5): 1841-1863. doi: 10.3934/cpaa.2015.14.1841.
Global well-posedness for the 3-D incompressible MHD equations in the critical Besov spaces
Xiaoping Zhai, Yongsheng Li and Wei Yan
In this paper, we consider the global well-posedness of the incompressible magnetohydrodynamic equations with initial data $(u_0,b_0)$ in the critical Besov space $\dot{B}_{2,1}^{1/2}(\mathbb{R}^3)\times \dot{B}_{2,1}^{1/2}(\mathbb{R}^3)$. Compared with [30], making full use of the algebraic structure of the equations, we relax the smallness condition on the third component of the initial velocity field and magnetic field. More precisely, we prove that there exist two positive constants $\varepsilon_0$ and $C_0$ such that if \begin{eqnarray} (\|u_0^h\|_{\dot{B}_{2,1}^{1/2}} +\|b_0^h\|_{\dot{B}_{2,1}^{1/2}}) \exp\{C_0(\frac{1}{\mu}+\frac{1}{\nu})^3 (\|u_0^3\|_{\dot{B}_{2,1}^{1/2}} +\|b_0^3\|_{\dot{B}_{2,1}^{1/2}})^2\} \le \varepsilon_0\mu\nu, \end{eqnarray} then the 3-D incompressible magnetohydrodynamic system has a unique global solution $(u,b)\in C([0,+\infty);\dot{B}_{2,1}^{1/2})\cap L^1((0,+\infty);\dot{B}_{2,1}^{5/2})\times C([0,+\infty);\dot{B}_{2,1}^{1/2})\cap L^1((0,+\infty);\dot{B}_{2,1}^{5/2}).$ Finally, we analyze the long-time behavior of the solution and get some decay estimates which imply that for any $t>0$ the solution $(u(t),b(t))\in C^{\infty}(\mathbb{R}^3)\times C^{\infty}(\mathbb{R}^3)$.
Xiaoping Zhai, Yongsheng Li, Wei Yan. Global well-posedness for the 3-D incompressible MHD equations in the critical Besov spaces. Communications on Pure & Applied Analysis, 2015, 14(5): 1865-1884. doi: 10.3934/cpaa.2015.14.1865.
Maximal functions of multipliers on compact manifolds without boundary
Woocheol Choi
Let $P$ be a self-adjoint positive elliptic (-pseudo) differential operator on a smooth compact manifold $M$ without boundary. In this paper, we obtain a refined $L^p$ bound of the maximal function of the multiplier operators associated to $P$ satisfying the Hörmander-Mikhlin condition.
Woocheol Choi. Maximal functions of multipliers on compact manifolds without boundary. Communications on Pure & Applied Analysis, 2015, 14(5): 1885-1902. doi: 10.3934/cpaa.2015.14.1885.
Low regularity well-posedness for Gross-Neveu equations
Hyungjin Huh and Bora Moon
We address the problem of local and global well-posedness of Gross-Neveu (GN) equations for low regularity initial data. Combined with the standard machinery of $X_R$, $Y_R$ and $X^{s,b}$ spaces, we obtain local well-posedness of (GN) for initial data $u, v \in H^s$ with $s\geq 0$. To prove the existence of a global solution for the critical space $L^2$, we show non-concentration of the $L^2$ norm.
Hyungjin Huh, Bora Moon. Low regularity well-posedness for Gross-Neveu equations. Communications on Pure & Applied Analysis, 2015, 14(5): 1903-1913. doi: 10.3934/cpaa.2015.14.1903.
Liouville theorems for fractional Hénon equation and system on $\mathbb{R}^n$
Jingbo Dou and Huaiyu Zhou
In this paper, we establish some Liouville type theorems for positive solutions of the fractional Hénon equation and system in $\mathbb{R}^n$. First, under some regularity conditions, we show that the above equation and system are equivalent to some integral equation and system, respectively. Then, we prove Liouville type theorems via the method of moving planes in integral forms.
Jingbo Dou, Huaiyu Zhou. Liouville theorems for fractional Hénon equation and system on $\mathbb{R}^n$. Communications on Pure & Applied Analysis, 2015, 14(5): 1915-1927. doi: 10.3934/cpaa.2015.14.1915.
Homoclinic orbits for discrete Hamiltonian systems with indefinite linear part
Qinqin Zhang
Based on a generalized linking theorem for the strongly indefinite functionals, we study the existence of homoclinic orbits of the second order self-adjoint discrete Hamiltonian system \begin{eqnarray} \triangle [p(n)\triangle u(n-1)]-L(n)u(n)+\nabla W(n, u(n))=0, \end{eqnarray} where $p(n), L(n)$ and $W(n, x)$ are $N$-periodic on $n$, and $0$ lies in a gap of the spectrum $\sigma(\mathcal{A})$ of the operator $\mathcal{A}$, which is bounded self-adjoint in $l^2(\mathbb{Z}, \mathbb{R}^{\mathcal{N}})$ defined by $(\mathcal{A}u)(n)=\triangle [p(n)\triangle u(n-1)]-L(n)u(n)$. Under weak superquadratic conditions, we establish the existence of homoclinic orbits.
Qinqin Zhang. Homoclinic orbits for discrete Hamiltonian systems with indefinite linear part. Communications on Pure & Applied Analysis, 2015, 14(5): 1929-1940. doi: 10.3934/cpaa.2015.14.1929.
Derivation of the Quintic NLS from many-body quantum dynamics in $T^2$
Jianjun Yuan
In this paper, we investigate the dynamics of a boson gas with three-body interactions in $T^2$. We prove that when the particle number $N$ tends to infinity, the BBGKY hierarchy of $k$-particle marginals converges to an infinite Gross-Pitaevskii (GP) hierarchy for which we prove uniqueness of solutions, and for asymptotically factorized $N$-body initial data, we show that this $N\rightarrow\infty$ limit corresponds to the quintic nonlinear Schrödinger equation. Thus, Bose-Einstein condensation is preserved in time.
Jianjun Yuan. Derivation of the Quintic NLS from many-body quantum dynamics in $T^2$. Communications on Pure & Applied Analysis, 2015, 14(5): 1941-1960. doi: 10.3934/cpaa.2015.14.1941.
Pointwise estimate for elliptic equations in periodic perforated domains
Li-Ming Yeh
Pointwise estimates for the solutions of elliptic equations in periodic perforated domains are considered. Let $\epsilon$ denote the size ratio of the period of a periodic perforated domain to the whole domain. It is known that even if the given functions of the elliptic equations are bounded uniformly in $\epsilon$, the $C^{1,\alpha}$ norm and the $W^{2,p}$ norm of the elliptic solutions may not be bounded uniformly in $\epsilon$. It is also known that as $\epsilon$ approaches $0$, the elliptic solutions in the periodic perforated domains approach a solution of some homogenized elliptic equation. In this work, the Hölder uniform bound in $\epsilon$ and the Lipschitz uniform bound in $\epsilon$ for the elliptic solutions in perforated domains are proved. The $L^\infty$ and the Lipschitz convergence estimates for the difference between the elliptic solutions in the perforated domains and the solution of the homogenized elliptic equation are derived.
Li-Ming Yeh. Pointwise estimate for elliptic equations in periodic perforated domains. Communications on Pure & Applied Analysis, 2015, 14(5): 1961-1986. doi: 10.3934/cpaa.2015.14.1961.
An extension of a Theorem of V. Šverák to variable exponent spaces
Carla Baroncini and Julián Fernández Bonder
In 1993, V. Šverák proved that if a sequence of uniformly bounded domains $\Omega_n\subset R^2$ converges to $\Omega$ in the sense of the Hausdorff complementary topology and the numbers of connected components of the complements are bounded, then the solutions of the Dirichlet problem for the Laplacian with source $f\in L^2(R^2)$ converge to the solution on the limit domain with the same source. In this paper, we extend Šverák's result to variable exponent spaces.
Carla Baroncini, Julián Fernández Bonder. An extension of a Theorem of V. Šverák to variable exponent spaces. Communications on Pure & Applied Analysis, 2015, 14(5): 1987-2007. doi: 10.3934/cpaa.2015.14.1987.
Multiplicity of solutions for a fractional Kirchhoff type problem
Wenjing Chen
In this paper, by using the (variant) Fountain Theorem, we obtain that there are infinitely many solutions for a Kirchhoff type equation that involves a nonlocal operator.
Wenjing Chen. Multiplicity of solutions for a fractional Kirchhoff type problem. Communications on Pure & Applied Analysis, 2015, 14(5): 2009-2020. doi: 10.3934/cpaa.2015.14.2009.
Convergence rate of solutions toward stationary solutions to a viscous liquid-gas two-phase flow model in a half line
Haiyan Yin and Changjiang Zhu
In this paper we study the asymptotic behavior of solutions to the initial boundary value problem for a viscous liquid-gas two-phase flow model in a half line $R_+:=(0,\infty).$ Our idea mainly comes from [23] and [29], which describe an isothermal Navier-Stokes equation in a half line. We obtain the convergence rate of the time-global solution toward the corresponding stationary solution in Eulerian coordinates. Precisely, if an initial perturbation decays with an algebraic or an exponential rate in space, the solution converges to the corresponding stationary solution as time tends to infinity with an algebraic or an exponential rate in time. These theorems are proved by a weighted energy method.
Haiyan Yin, Changjiang Zhu. Convergence rate of solutions toward stationary solutions to a viscous liquid-gas two-phase flow model in a half line. Communications on Pure & Applied Analysis, 2015, 14(5): 2021-2042. doi: 10.3934/cpaa.2015.14.2021.
A fractional Dirichlet-to-Neumann operator on bounded Lipschitz domains
Mahamadi Warma
Let $\Omega\subset R^N$ be a bounded open set with Lipschitz continuous boundary $\partial \Omega$. We define a fractional Dirichlet-to-Neumann operator and prove that it generates a strongly continuous analytic and compact semigroup on $L^2(\partial \Omega)$ which can also be ultracontractive. We also use the fractional Dirichlet-to-Neumann operator to compare the eigenvalues of a realization in $L^2(\Omega)$ of the fractional Laplace operator with Dirichlet boundary condition and the regional fractional Laplacian with the fractional Neumann boundary conditions.
Mahamadi Warma. A fractional Dirichlet-to-Neumann operator on bounded Lipschitz domains. Communications on Pure & Applied Analysis, 2015, 14(5): 2043-2067. doi: 10.3934/cpaa.2015.14.2043.
Inertial manifolds for the 3D Cahn-Hilliard equations with periodic boundary conditions
Anna Kostianko and Sergey Zelik
The existence of an inertial manifold for the 3D Cahn-Hilliard equation with periodic boundary conditions is verified using a proper extension of the so-called spatial averaging principle introduced by G. Sell and J. Mallet-Paret. Moreover, the extra regularity of this manifold is also obtained.
Anna Kostianko, Sergey Zelik. Inertial manifolds for the 3D Cahn-Hilliard equations with periodic boundary conditions. Communications on Pure & Applied Analysis, 2015, 14(5): 2069-2094. doi: 10.3934/cpaa.2015.14.2069.
A nonlocal diffusion population model with age structure and Dirichlet boundary condition
Yueding Yuan, Zhiming Guo and Moxun Tang
In this paper, we study the global dynamics of a population model with age structure. The model is given by a nonlocal reaction-diffusion equation carrying a maturation time delay, together with the homogeneous Dirichlet boundary condition. The non-locality arises from spatial movements of the immature individuals. We are mainly concerned with the case when the birth rate decays as the mature population size becomes large. The analysis is rather subtle and it is inadequate to apply the powerful theory of monotone dynamical systems. By using the method of super-sub solutions, combined with the careful analysis of the kernel function in the nonlocal term, we prove nonexistence, existence and uniqueness of the positive steady states of the model. By establishing an appropriate comparison principle and applying the theory of dissipative systems, we obtain some sufficient conditions for the global asymptotic stability of the trivial solution and the unique positive steady state.
Yueding Yuan, Zhiming Guo, Moxun Tang. A nonlocal diffusion population model with age structure and Dirichlet boundary condition. Communications on Pure & Applied Analysis, 2015, 14(5): 2095-2115. doi: 10.3934/cpaa.2015.14.2095.
Algorithms for Molecular Biology
Estimating evolutionary distances between genomic sequences from spaced-word matches
Burkhard Morgenstern, Bingyao Zhu, Sebastian Horwege & Chris André Leimeister
Algorithms for Molecular Biology volume 10, Article number: 5 (2015)
Alignment-free methods are increasingly used to calculate evolutionary distances between DNA and protein sequences as a basis of phylogeny reconstruction. Most of these methods, however, use heuristic distance functions that are not based on any explicit model of molecular evolution. Herein, we propose a simple estimator $d_N$ of the evolutionary distance between two DNA sequences that is calculated from the number $N$ of (spaced) word matches between them. We show that this distance function is more accurate than other distance measures that are used by alignment-free methods. In addition, we calculate the variance of the normalized number $N$ of (spaced) word matches. We show that the variance of $N$ is smaller for spaced words than for contiguous words, and that the variance is further reduced if our spaced-words approach is used with multiple patterns of 'match positions' and 'don't care positions'. Our software is available online and as downloadable source code at: http://spaced.gobics.de/.
Alignment-free methods are increasingly used for DNA and protein sequence comparison since they are much faster than traditional alignment-based approaches [1]. Applications of alignment-free approaches include protein classification [2-5], read alignment [6-8], isoform quantification from RNAseq reads [9], sequence assembly [10], read-binning in metagenomics [11-16] or analysis of regulatory elements [17-20]. Most alignment-free algorithms are based on the word or k-mer composition of the sequences under study [21]. To measure pairwise distances between genomic or protein sequences, it is common practice to apply standard metrics such as the Euclidean or the Jensen-Shannon (JS) distance [22] to the relative word frequency vectors of the sequences.
Recently, we proposed an alternative approach to alignment-free sequence comparison. Instead of considering contiguous subwords of the input sequences, our approach considers spaced words, i.e. words containing wildcard or don't care characters at positions defined by a pre-defined pattern P. This is similar to the spaced-seeds approach that is used in database searching [23]. As in existing alignment-free methods, we compared the (relative) frequencies of these spaced words using standard distance measures [24]. In [25], we extended this approach by using whole sets \(\mathcal {P} = \{P_{1},\dots,P_{m}\}\) of patterns and calculating the spaced-word frequencies with respect to all patterns in $\mathcal{P}$. In this multiple-pattern approach, the distance between two sequences is defined as the average of the distances based on the individual patterns \(P_{i}\in {\mathcal {P}}\), see also [26]. 'Spaced words' have been proposed simultaneously by Onodera and Shibuya for protein classification [27] and by Ghandi et al. to study regulatory elements [28,29].
Phylogeny reconstruction is an important application of alignment-free sequence comparison. Consequently, most alignment-free methods were benchmarked by applying them to phylogeny problems [30-35]. The distance metrics used by these methods, however, are only rough measures of dissimilarity, not derived from any explicit model of molecular evolution. This may be one reason why distances calculated by alignment-free algorithms are usually not directly evaluated, but are used as input for distance-based phylogeny methods such as Neighbour-Joining [36]. The resulting tree topologies are then compared to trusted reference topologies. Obviously, this is only a very rough way of evaluating sequence-comparison methods, since the resulting tree topologies not only depend on the distance values calculated by the evaluated methods, but also on the tree-reconstruction method that is applied to them. Also, comparing topologies ignores branch lengths, so the results of these benchmark studies depend only indirectly on the distance values calculated by the alignment-free methods that are to be evaluated.
Three remarkable exceptions are the papers describing $K_r$ [37], Co-phylog [38] and andi [39]. $K_r$ estimates evolutionary distances based on shortest unique substrings, Co-phylog uses so-called microalignments defined by spaced-word matches and considers the don't care positions to estimate distances, while andi uses gap-free local alignments bounded by maximal unique matches. To our knowledge, these approaches are the only alignment-free methods so far that try to estimate phylogenetic distances in a rigorous way, based on a probabilistic model of evolution. Consequently, the authors of $K_r$, Co-phylog and andi compared the distance values calculated by their methods directly to reference distances. Haubold et al. could show that $K_r$ can correctly estimate evolutionary distances between DNA sequences up to around 0.5 mutations per site [37].
In previous papers, we have shown that our spaced-word approach is useful for phylogeny reconstruction. Tree topologies calculated with Neighbour-Joining based on spaced-word frequency vectors are usually superior to topologies calculated from the contiguous word frequency vectors that are used by traditional alignment-free methods [25]. Moreover, we could show that the 'multiple-pattern approach' leads to much better results than the 'single-pattern approach'; these results were confirmed by Noé and Martin [40]. We also showed experimentally that distance values and tree topologies produced from spaced-word frequencies are statistically more stable than those based on contiguous words. In fact, the main difference between our spaced words and the commonly used contiguous words is that spaced-word matches at neighbouring positions are statistically less dependent on each other.
Since the aim of our previous papers was to compare (multiple) spaced-word frequencies to contiguous word frequencies, we applied the same distance metrics to our spaced-word frequencies that are applied by standard methods to k-mer frequencies, namely the Jensen-Shannon and the Euclidean distance. In the present paper, we propose a new pairwise distance measure based on a probabilistic model of DNA evolution. We estimate the evolutionary distance between two nucleic-acid sequences based on the number $N$ of spaced-word matches between them. We show that this distance measure is more accurate and works for more distantly related sequences than existing alignment-free distance measures. In addition, we calculate the variance of $N$ for contiguous k-mers, as well as for spaced words using our single and multiple pattern approaches. We show that the variance of $N$ is lower for spaced words than for contiguous words and that the variance is further reduced if multiple patterns are used.
This paper is an extended version of a manuscript that was first published in the proceedings of the Workshop on Algorithms in Bioinformatics (WABI) 2013 in Wroclaw, Poland [41]. We added two extensions to our WABI paper that are crucial if our method is to be applied to real-world genomic sequences. (a) While the original version of our distance function assumed that homologies are located on the same strand of two genomes under comparison, we modified our original distance measure to account for homologies that are on different strands. (b) The number $N$ of spaced-word matches is highly sensitive to repeats in the compared sequences, and our previously defined distance function could grossly under-estimate phylogenetic distances in the presence of repeats. We therefore propose a simple modification of this distance function that is insensitive to repeats. Finally, we added more test data sets to evaluate our method.
Motifs and spaced words
As usual, for an alphabet $\Sigma$ and $\ell\in\mathbb{N}$, $\Sigma^\ell$ denotes the set of all sequences of length $\ell$ over $\Sigma$. For a sequence $S\in\Sigma^\ell$ and $0<i\le\ell$, $S[i]$ denotes the $i$-th character of $S$. A pattern of length $\ell$ is a word $P\in\{0,1\}^\ell$, i.e. a sequence over $\{0,1\}$ of length $\ell$. In the context of our work, a position $i$ with $P[i]=1$ is called a match position while a position $i$ with $P[i]=0$ is called a don't care position. The number of match positions in a pattern $P$ is called the weight of $P$. For a pattern $P$ of weight $k$, \(\hat {P}_{i}\) denotes the $i$-th match position and \(\hat {P} = \{\hat {P}_{1}, \dots, \hat {P}_{k}\}, \hat {P}_{i} < \hat {P}_{i+1},\) denotes the set of all match positions.
A spaced word $w$ of weight $k$ over an alphabet $\Sigma$ is a pair $(P,w')$ such that $P$ is a pattern of weight $k$ and $w'$ is a word of length $k$ over $\Sigma$. We say that a spaced word $(P,w')$ occurs at position $i$ in a sequence $S$ over $\Sigma$ if \(S[i+\hat {P}_{r}-1] = w'[r]\) for all $1\le r\le k$. For example, for
$$\Sigma = \{A,T,C,G\},\ \ P = 1101, \ \ w'= ACT, $$
we have \(\hat {P} = \{1,2,4\}\), and the spaced word $w=(P,w')$ occurs at position 2 in the sequence $S=CACGTCA$ since
$$ S[2] S[3] S[5] = ACT = w'. $$
A pattern is called contiguous if it consists of match positions only; a spaced word is called contiguous if the underlying pattern is contiguous. So a 'contiguous spaced word' is just a 'word' in the usual sense.
For a pattern P of weight k and two sequences S 1 and S 2 over an alphabet Σ, we say that there is a spaced-word match with respect to P – or a P-match – at (i,j) if
$$ S_{1}\left[i+\hat{P}_{r}-1\right] = S_{2}\left[j+\hat{P}_{r}-1\right] $$
holds for all 1≤r≤k. For example, for sequences
$$S_{1} = ACTACAG\ \ \text{ and }\ \ S_{2}= TATAGG $$
and $P$ as above, there is a $P$-match at $(3,1)$ since one has $S_1[3]=S_2[1]$, $S_1[4]=S_2[2]$ and $S_1[6]=S_2[4]$. For a set \({\mathcal {P}} = \{P_{1}, \dots, P_{m}\}\) of patterns, we say that there is a $\mathcal{P}$-match at $(i,j)$ if there is some \(P\in {\mathcal {P}}\) such that there is a $P$-match at $(i,j)$.
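As a small illustration of these definitions, the following Python sketch counts $P$-matches between two sequences by brute force. It is our own minimal example, not the algorithm used in the Spaced Words software (which relies on more efficient data structures); the function names and the $O(L_1\cdot L_2)$ strategy are chosen purely for clarity.

```python
def match_positions(pattern):
    # P-hat: the 1-based match positions of a 0/1 pattern string
    return [i + 1 for i, c in enumerate(pattern) if c == "1"]

def p_match(s1, s2, i, j, pattern):
    # True if there is a P-match at (i, j); i and j are 1-based, as in the text
    return all(s1[i + r - 2] == s2[j + r - 2] for r in match_positions(pattern))

def count_matches(s1, s2, patterns):
    # N(S1, S2, pattern set): total number of P-matches over all patterns and positions
    n = 0
    for pattern in patterns:
        ell = len(pattern)
        for i in range(1, len(s1) - ell + 2):
            for j in range(1, len(s2) - ell + 2):
                if p_match(s1, s2, i, j, pattern):
                    n += 1
    return n

# Example from the text: P = 1101, S1 = ACTACAG, S2 = TATAGG, P-match at (3, 1)
print(p_match("ACTACAG", "TATAGG", 3, 1, "1101"))  # True
```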
The number N of spaced-word matches for a pair of sequences with respect to a set of patterns
We consider sequences $S_1$ and $S_2$ as above and a fixed set \(\mathcal {P} = \{P_{1},\dots,P_{m}\}\) of patterns. For simplicity, we assume that all patterns in $\mathcal{P}$ have the same length $\ell$ and the same weight $k$. For now, we use a simplified model of sequence evolution without insertions and deletions, with a constant mutation rate and with different sequence positions independent of each other. Moreover, we assume that we have the same substitution rates for all substitutions $a\to b$, $a\ne b$. We therefore consider two sequences $S_1$ and $S_2$ of the same length $L$ with match probabilities
$$P(S_{1}[i] = S_{2}[j]) = \left\{ \begin{array}{ll} p & \text{for } i=j \\ q & \text{for } i\not=j \\ \end{array} \right. $$
If $q_a$ is the relative frequency of a single character \(a\in {\mathcal {A}}\), \(q = \sum _{a\in {\mathcal {A}}}{q_{a}^{2}}\) is the background match probability, and $p\ge q$ is the match probability for a pair of 'homologous' positions.
For a pattern $P$, let $N(S_1,S_2,P)$ be the number of pairs of positions $(i,j)$ where there is a $P$-match between $S_1$ and $S_2$. We then define
$$ N = N(S_{1},S_{2},\mathcal{P}) = \sum_{P\in\mathcal{P}} N(S_{1},S_{2},P) $$
to be the sum of all $P$-matches for patterns \(P\in \mathcal {P}\). Note that for two sequences, there can be $P$-matches for different patterns $P$ at the same pair of positions $(i,j)$. In the definition of $N$, we count not only the positions $(i,j)$ where there is some $P$-match, but we count all $P$-matches with respect to all patterns in $\mathcal{P}$.
$N$ can be seen as the inner product of \(m\cdot |{\mathcal {A}}|^{k}\)-dimensional count vectors for spaced words with respect to the set of patterns $\mathcal{P}$. In the special case where $\mathcal{P}$ consists of a single contiguous pattern, i.e. for $k=\ell$ and $m=1$, $N$ is also called the $D_2$ score [42]. The statistical behaviour of the $D_2$ score has been studied under the null model where $S_1$ and $S_2$ are unrelated [18,43]. In contrast to these studies, we want to investigate the number $N$ of spaced-word matches for evolutionarily related sequence pairs under a model as specified above. To this end, we define \(X_{i,j}^{P}\) to be the Bernoulli random variable that is 1 if there is a $P$-match between $S_1$ and $S_2$ at $(i,j)$, \(P\in \mathcal {P}\), and 0 otherwise, so $N$ can be written as
$$ N= \sum_{\substack{P\in\mathcal{P} \\ i,j }} X_{i,j}^{P} $$
If we want to calculate the expectation value and variance of $N$, we have to distinguish between 'homologous' spaced-word matches, that is, matches that are due to 'common ancestry', and 'background' matches due to chance. In our model where we do not consider insertions and deletions, a $P$-match at $(i,j)$ is 'homologous' if and only if $i=j$ holds. So in this special case, we can define
$$ \mathcal{X}_{Hom} = \left\{ X_{i,i}^{P} | 1 \le i \le L-\ell+1, P\in\mathcal{P} \right\}, $$
$$ \mathcal{X}_{BG} = \left\{ X_{i,j}^{P} | 1 \le i,j \le L-\ell+1, i\not= j, P \in \mathcal{P} \right\}. $$
Note that if sequences do not contain insertions and deletions, every spaced-word match is either entirely a homologous match or entirely a background match. If indels are considered, a spaced-word match may involve both homologous and background regions, and the above definitions need to be adapted. The set of all random variables \(X_{i,j}^{P}\) can be written as \( \mathcal {X} = \mathcal {X}_{\textit {Hom}} \cup \mathcal {X}_{\textit {BG}}\); the total sum $N$ of spaced-word matches with respect to the set of patterns $\mathcal{P}$ is
$$ N = \sum_{X\in\mathcal{X}} X $$
and the expected number of spaced-word matches is
$$E(N) = \sum_{X\in \mathcal{X}_{Hom}} E(X) + \sum_{X\in \mathcal{X}_{BG}} E(X),$$
where the expectation value of a single random variable \(X\in \mathcal {X}\) is
$$ E(X) = \left\{ \begin{array}{ll} p^{k} & \text{if}\ \ X \in \mathcal{X}_{Hom} \\ q^{k} & \text{if}\ \ X \in \mathcal{X}_{BG} \end{array} \right. \tag{1} $$
There are $L-\ell+1$ positions $(i,i)$ and $(L-\ell)\cdot(L-\ell+1)$ positions $(i,j)$, $i\ne j$, where spaced-word matches can occur, so we obtain
$$ E(N) = m \cdot \left[ (L-\ell+1) \cdot p^{k} + (L-\ell)\cdot (L-\ell+1) \cdot q^{k} \right] \tag{2} $$
Estimating evolutionary distances from the number N of spaced-word matches
If the weight of the patterns (i.e. the number of match positions) in the spaced-words approach is sufficiently large, random spaced-word matches can be ignored. In this case, the Jensen-Shannon distance between two DNA sequences approximates the number of (spaced) words that occur in one of the compared sequences but not in the other one. Thus, if two sequences of length $L$ are compared and $N$ is the number of (spaced) words that the two sequences have in common, their Jensen-Shannon distance can be approximated by $L-N$. Accordingly, the Euclidean distance between the two sequences can be approximated by the square root of this value if the distance is small and $k$ is large enough. For small evolutionary distances, the Jensen-Shannon distance therefore grows roughly linearly with the distance between two sequences, and this explains why it is possible to produce reasonable phylogenies based on this metric. It is clear, however, that the Jensen-Shannon distance is far from linear in the real distance for larger distances. We therefore propose an alternative estimator of the evolutionary distance between two sequences in terms of the number $N$ of spaced-word matches between them.
Again, we first consider sequences without insertions and deletions. From the expected number $E(N)$ of spaced words shared by sequences $S_1$ and $S_2$ with respect to a set of patterns $\mathcal{P}$ as given in equation (2), we obtain
$$ \hat{p} = \sqrt[k]{\frac{N}{m\cdot (L-\ell+1)}- (L-\ell) \cdot q^{k}} $$
as an estimator for the match probability p for sequences without indels, and with Jukes-Cantor [44] we obtain
$$ d_{N}= -\frac{3}{4} \cdot \ln \left[ \frac{4}{3}\cdot \sqrt[k]{\frac{N}{m\cdot(L-\ell+1)}- (L-\ell) \cdot q^{k}} - \frac{1}{3} \right] \tag{4} $$
as an estimator for the distance $d$ between the sequences $S_1$ and $S_2$. Note that for a model without insertions and deletions, it is, of course, not necessary to estimate the match probability $p$ from the number $N$ of spaced-word matches. In this case, one could simply count the mismatches between the two sequences and estimate $p$ from their relative frequency. The reason why we want to estimate $p$ based on the number of spaced-word matches is that this estimate can be easily adapted to a model with insertions and deletions.
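To make the estimator concrete, here is a minimal Python sketch of $d_N$ under the stated assumptions (all $m$ patterns of length $\ell$ and weight $k$, globally related sequences, no indels). The function name and the default $q=0.25$ (uniform nucleotide frequencies) are our own choices; $N$ is assumed to have been counted beforehand, e.g. with a routine like the one sketched above.

```python
import math

def estimate_distance(n_matches, m, L, ell, k, q=0.25):
    # estimated match probability p, obtained by solving
    # E(N) = m*[(L-ell+1)*p^k + (L-ell)*(L-ell+1)*q^k] for p
    p_hat = (n_matches / (m * (L - ell + 1)) - (L - ell) * q ** k) ** (1.0 / k)
    # Jukes-Cantor correction; only defined while the root and log arguments are positive,
    # i.e. while N exceeds the number of matches expected by chance alone
    return -0.75 * math.log((4.0 / 3.0) * p_hat - 1.0 / 3.0)
```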
Local homologies, homologies on different strands and repeats
Next, we consider the case where $S_1$ and $S_2$ may have different lengths and share one region of local homology. We assume again that there are no insertions and deletions within this homologous region. Let $L_{Hom}$ be the length of the local homology; we assume that $L_{Hom}$ is known. Equation (2) for the expected number $N$ of spaced-word matches between two sequences $S_1$ and $S_2$ can be easily generalized to the case of local homology. If $L_1$ and $L_2$ are the lengths of $S_1$ and $S_2$, respectively, we define
$$ L^{*} = (L_{1}-\ell+1)\cdot(L_{2}-\ell+1) - L_{Hom} $$
to be the (approximate) number of positions $(i,j)$ where a background match can occur ($L^*$ is only an approximation since we ignore spaced-word matches that involve both homologous and background regions of the sequences). We can then estimate the expected number of spaced-word matches as
$$ E(N) \approx m \cdot \left[ \left(L_{Hom}-\ell+1 \right) \cdot p^{k} + L^{*} \cdot q^{k} \right] $$
and we obtain
$$ d_{loc} = -\frac{3}{4} \cdot \ln \left[ \frac{4}{3} \cdot \sqrt[k]{\frac{N/m - L^{*} \cdot q^{k}}{L_{Hom}-\ell+1 }} - \frac{1}{3} \right] \tag{5} $$
as an estimator for the distance between S 1 and S 2. It is straight-forward, though somewhat cumbersome, to extend this estimator to the case where the homologous region contains insertions and deletions.
Note that, if local homologies between the input sequences are known to the user, the best thing would be to remove the non-homologous regions of the sequences and to apply the distance $d_N$ defined by equation (4) to the remaining homologous regions (which are then 'globally' related to each other). Nevertheless, the distance $d_{loc}$ might be useful in situations where the extent of homology between genomic sequences can be estimated, even though the precise location and boundaries of these homologies are unknown.
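If an estimate of $L_{Hom}$ is available, $d_{loc}$ can be sketched analogously. The snippet below is our own illustration; it simply plugs the local-homology version of the expected match count into the Jukes-Cantor correction and again assumes a uniform background match probability $q=0.25$.

```python
import math

def estimate_distance_local(n_matches, m, len1, len2, l_hom, ell, k, q=0.25):
    # L*: approximate number of positions (i, j) where a background match can occur
    l_star = (len1 - ell + 1) * (len2 - ell + 1) - l_hom
    # estimated match probability within the homologous region, then Jukes-Cantor
    p_hat = ((n_matches / m - l_star * q ** k) / (l_hom - ell + 1)) ** (1.0 / k)
    return -0.75 * math.log((4.0 / 3.0) * p_hat - 1.0 / 3.0)
```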
So far, we have considered the case where homologies between two genomic sequences $S_1$ and $S_2$ are located on the same respective strand. For realistic applications, we have to take into account that homologies can occur on both strands of the DNA double helix. More importantly, we have to consider the case where a region of homology is located on one strand of $S_1$, but on the reverse strand of $S_2$. Let $L_1$ and $L_2$ be the lengths of $S_1$ and $S_2$, respectively, with $L_1\le L_2$. For simplicity, we assume that the entire sequence $S_1$ is homologous to a contiguous segment of $S_2$ and we ignore insertions and deletions. We now assume, however, that some segments of $S_1$ may align to their homologous counterparts in $S_2$ while other segments of $S_1$ may align to the reverse complement of their counterparts in $S_2$. The more general situation involving local homology and indels can be accounted for as discussed above.
The simplest way to capture homologies between $S_1$ and $S_2$ regardless of their orientation is to concatenate one of the sequences, say $S_2$, with its reverse complement and to compare $S_1$ to this concatenated sequence. So in this case, we would consider all spaced-word matches between $S_1$ and \(\tilde {S_{2}}\), where \(\tilde {S_{2}}\) is the concatenation of $S_2$ and its reverse complement. To estimate the expected number of spaced-word matches in this situation, note that there are $L_1-\ell+1$ positions where homologous spaced-word matches can be located and approximately $2\cdot(L_1-\ell+1)\cdot(L_2-\ell)$ positions where background matches can occur. By adapting Formulae (2) to (4) accordingly and ignoring fringe effects, we obtain
$$ \begin{aligned} E(N) \approx m \cdot& \left[ \left(L_{1}-\ell+1 \right) \cdot p^{k} + 2\cdot \left(L_{1}-\ell+1 \right)\right. \\ &\quad\left.\times \left(L_{2}-\ell\right) \cdot q^{k} \right] \end{aligned} $$
$$ \hat{p} = \sqrt[k]{\frac{N}{m\cdot (L_{1}-\ell+1)} - 2 \cdot (L_{2}-\ell) \cdot q^{k}} $$
$$ d_{RC} = -\frac{3}{4} \cdot \ln \left[ \frac{4}{3} \cdot \sqrt[k]{\frac{N}{m\cdot\left(L_{1}-\ell+1\right)} - 2 \cdot \left(L_{2}-\ell\right) \cdot q^{k}} - \frac{1}{3} \right] \tag{8} $$
as an estimator for the distance $d$ between the sequences $S_1$ and $S_2$ if homologies on the reverse complement are taken into account.
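A sketch of this strand-aware variant, again under our simplifying assumptions: the only new ingredients compared with $d_N$ are the reverse complement of $S_2$ and the doubled number of background positions. The helper names are ours, the junction between $S_2$ and its reverse complement is ignored (as are the fringe effects above), and $N$ is assumed to be counted between $S_1$ and the concatenated sequence.

```python
import math

# assumes uppercase A/C/G/T input
_COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq):
    return seq.translate(_COMPLEMENT)[::-1]

def estimate_distance_rc(n_matches, m, len1, len2, ell, k, q=0.25):
    # d_RC: like d_N, but with S2 concatenated to its reverse complement,
    # so roughly 2*(L1-ell+1)*(L2-ell) background positions are possible
    p_hat = (n_matches / (m * (len1 - ell + 1)) - 2 * (len2 - ell) * q ** k) ** (1.0 / k)
    return -0.75 * math.log((4.0 / 3.0) * p_hat - 1.0 / 3.0)

# usage (with some spaced-word counting routine such as the one sketched earlier):
#   n = count_matches(s1, s2 + reverse_complement(s2), patterns)
#   d = estimate_distance_rc(n, len(patterns), len(s1), len(s2), ell, k)
```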
Finally, we consider the case where sequences contain repeats. A direct application of the distance functions discussed so far would be highly sensitive to repeats in the input sequences, since repeats can drastically increase the number $N$ of (spaced) word matches. This can even lead to negative distance values if the number $N$ of matches between two sequences with repeats exceeds the expected number of matches of a non-repetitive sequence of the same length with itself. A simple but efficient way of dealing with repeats is to use binary variables $N^{bin}(S_1,S_2,P)$ that are one if there are one or several $P$-matches between sequences $S_1$ and $S_2$, and zero if there is no such match. Instead of using the number $N$ of matches for a set of patterns, we then consider
$$ N^{bin} = N^{bin}\left(S_{1},S_{2},\mathcal{P}\right) = \sum_{P\in\mathcal{P}} N^{bin}\left(S_{1},S_{2},P\right) $$
and distances \(d_{N}^{bin}\), \(d_{\textit {loc}}^{bin}\) and \(d_{\textit {RC}}^{bin}\), respectively, can be defined as in equations (4), (5) and (8), but with $N$ replaced by $N^{bin}$.
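The repeat-insensitive count is easy to sketch: for each pattern we only record whether at least one $P$-match exists. The following self-contained Python snippet (our own illustration, brute force, 0-based internally) implements exactly this definition of $N^{bin}$.

```python
def has_p_match(s1, s2, pattern):
    # True if at least one P-match exists between s1 and s2
    positions = [i for i, c in enumerate(pattern) if c == "1"]
    ell = len(pattern)
    for i in range(len(s1) - ell + 1):
        for j in range(len(s2) - ell + 1):
            if all(s1[i + r] == s2[j + r] for r in positions):
                return True
    return False

def count_matches_binary(s1, s2, patterns):
    # N^bin: number of patterns P with at least one P-match between s1 and s2
    return sum(1 for pattern in patterns if has_p_match(s1, s2, pattern))
```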
The variance of N
Our new distance measure and other word-based distance measures depend on the number $N$ of (spaced) word matches between sequences. To study the stability of these measures, we want to calculate the variance of $N$. To do so, we adapt results on the occurrence of words in a sequence as outlined in [45]. Since $N$ can be written as the sum of all random variables \(X_{i,j}^{P}\), we need to calculate the covariances of these random variables. To simplify this, we make a further assumption on our sequence model: we assume that the four nucleotides occur with the same probability 0.25. In this case, the covariance of two random variables \(X_{i,j}^{P}\) and \(X_{i',j'}^{P'}\) can be non-zero only if $i'-i=j'-j$ holds (note that this is not true if nucleotides have different probabilities to occur). In particular, for random variables \(X\in \mathcal {X}_{\textit {Hom}}\) and \(X'\in \mathcal {X}_{\textit {BG}}\), their covariance is zero. Thus, we only need to consider covariances of pairs of random variables \(X_{i,j}^{P}\) and \(X_{i+s,j+s}^{P'}\).
For patterns $P, P'$ and $s\in\mathbb{N}$ we define $n(P,P',s)$ to be the number of integers that are match positions of $P$ or match positions of $P'$ shifted by $s$ positions to the right (or both). Formally, if
$$ \hat{P}_{s} = \left\{\hat{P}_{1} + s,\dots, \hat{P}_{k} + s\right\} $$
denotes the set of match positions of a pattern P shifted by s positions to the right, we define
$$ n(P,P',s) = |\hat{P} \cup \hat{P'}_{s}| = |\hat{P}| + |\hat{P'}_{s}| - |\hat{P} \cap \hat{P'}_{s}| $$
For example, for $P=101011$, $P'=111001$ and $s=2$, there are 6 positions that are match positions of $P$ or of $P'$ shifted by 2 positions to the right, namely positions 1, 3, 4, 5, 6, 8:
$$\begin{array}{cccccccccc} P: & & 1 & 0 & 1 & 0 & 1 & 1 & & \\ P': & & & & 1 & 1 & 1 & 0 & 0 & 1\\ \end{array} $$
so one has $n(P,P',s)=6$. In particular, one has $n(P,P,0)=k$ for all patterns $P$ of weight $k$, and
$$ n(P,P,s) = k + \min\{s,k\} $$
for all contiguous patterns P of weight (or length) k. With this notation, we can write
$$\begin{array}{@{}rcl@{}} E\left(X_{i,j}^{P} \cdot X_{i+s,j+s}^{P'} \right) & = &\left\{ \begin{array}{ll} p^{n(P,P',s)} & \text{if}\ \ i = j\\ q^{n(P,P',s)} & \text{else} \end{array} \right. \end{array} \tag{9} $$
for all \(X_{i,j}^{P}, X_{i+s,j+s}^{P'}\).
To calculate the covariance of two random variables from $\mathcal{X}$, we distinguish again between homologous and random matches. We first consider 'homologous' pairs \(X_{i,i}^{P}, X_{i+s,i+s}^{P'} \in \mathcal {X}_{\textit {Hom}}\). Here, we obtain with (9)
$$ \begin{aligned} Cov \left(X_{i,i}^{P}, X_{i+s,i+s}^{P'}\right) &= p^{n(P,P',s)} - p^{2k} \end{aligned} $$
Similarly, for a pair of 'background' variables \(X_{i,j}^{P},\) \( X_{i+s,j+s}^{P'} \in \mathcal {X}_{\textit {BG}}\), one obtains
$$ \begin{aligned} Cov \left(X_{i,j}^{P}, X_{i+s,j+s}^{P'}\right) &= q^{n(P,P',s)} - q^{2k}. \end{aligned} $$
Since 'homologous' and 'background' variables are uncorrelated, the variance of N can be written as
$$\begin{aligned} Var(N) =&\; Var \left(\sum_{X\in\mathcal{X}} X\right) = Var \left(\sum_{X\in \mathcal{X}_{Hom}} X\right)\\ &+ Var \left(\sum_{X\in \mathcal{X}_{BG}} X\right) \end{aligned} $$
We express the variance of these sums of random variables as the sum of all of their covariances, so for the 'homologous' random variables we can write
$$ Var \left(\sum_{X\in \mathcal{X}_{Hom}}X\right) = \sum_{P,P'\in\mathcal{P}} \sum_{i,i'=1}^{L-l+1} Cov \left(X_{i,i}^{P}, X_{i',i'}^{P'} \right) $$
Since the covariance of uncorrelated random variables vanishes, we can ignore the covariances of all pairs \(\left (X_{i,i}^{P}, X_{i',i'}^{P'}\right)\) with $|i-i'|\ge\ell$, so, ignoring fringe effects, we can write the above sum as
$$\begin{aligned} Var \left(\sum_{X\in \mathcal{X}_{Hom}} X\right) & \approx \sum_{i=1}^{L-\ell+1} \sum_{P,P'\in\mathcal{P}} \sum_{s= -\ell + 1}^{\ell-1} Cov \left(X_{i,i}^{P}, X_{i+s,i+s}^{P'} \right) \end{aligned} $$
and since the above covariances depend only on s but not on i, we can use (9) and (11) and obtain
$$\begin{aligned} Var &\left(\sum_{X\in \mathcal{X}_{Hom}} X\right) \approx (L-\ell+1)\\ &\times\sum_{P,P'\in\mathcal{P}} \sum_{s=-\ell+1}^{\ell-1} \left(p^{n(P,P',s)} - p^{2k}\right) \end{aligned} $$
and similarly
$$\begin{aligned} Var &\left(\sum_{X\in \mathcal{X}_{BG}}X\right) \approx (L-\ell+1) \cdot (L-\ell)\\ &\times\sum_{P,P'\in\mathcal{P}}\sum_{s=-\ell+1}^{\ell-1} \left(q^{n(P,P',s)} - q^{2k}\right) \end{aligned} $$
Together, we get
$$ \begin{aligned} Var(N) \approx & \;(L-\ell+1) \cdot \sum_{P,P'\in\mathcal{P}} \sum_{s=-\ell+1}^{\ell-1} \left(p^{n(P,P',s)} - p^{2k} \right)\\ & +\ \ (L-\ell+1) \cdot (L-\ell) \\ &\times\sum_{P,P'\in\mathcal{P}} \sum_{s=-\ell+1}^{\ell-1} \left(q^{n(P,P',s)} - q^{2k}\right) \end{aligned} \tag{12} $$
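For a given set of patterns, this formula can be evaluated directly. The Python sketch below is our own illustration of how curves such as those in Figure 6 can be reproduced in principle: it computes $n(P,P',s)$ from the match positions and sums the covariance terms, assuming that all patterns have the same length and weight and that the sequence pair is globally related and indel-free.

```python
def n_overlap(p1, p2, s):
    # n(P, P', s): number of integers that are match positions of P
    # or match positions of P' shifted by s positions (or both)
    set1 = {i for i, c in enumerate(p1) if c == "1"}
    set2 = {i + s for i, c in enumerate(p2) if c == "1"}
    return len(set1 | set2)

def variance_of_n(patterns, L, p, q=0.25):
    # Approximate Var(N) for two globally related, indel-free sequences of length L,
    # with homologous match probability p and background match probability q
    ell = len(patterns[0])        # all patterns assumed to have the same length
    k = patterns[0].count("1")    # ... and the same weight
    hom = bg = 0.0
    for p1 in patterns:
        for p2 in patterns:
            for s in range(-ell + 1, ell):
                n = n_overlap(p1, p2, s)
                hom += p ** n - p ** (2 * k)
                bg += q ** n - q ** (2 * k)
    return (L - ell + 1) * hom + (L - ell + 1) * (L - ell) * bg
```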
Simulated DNA sequences
To evaluate the distance function d N defined by equation (4), we simulated pairs of DNA sequences with an (average) length of 100,000 and with an average of d substitutions per sequence position. More precisely, we generated sequence pairs by generating a first sequence using a Bernoulli model with probability 0.25 for each nucleotide. A second sequence was then generated from the first sequence by substituting nucleotides with a probability corresponding to the substitution frequency d, as calculated with Jukes-Cantor. We varied d between 0 and 1 and compared the distances estimated by our distance measure and by various other alignment-free programs to the 'real' distance d. We performed these experiments for sequence pairs without insertions and deletions and for sequence pairs where we included insertions and deletions with a probability of 1% at every position. The length of indels was randomly chosen between 1 and 50 with uniform probability.
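The indel-free part of this simulation can be reproduced with a few lines of Python. The sketch below is our own simplified version; it converts the Jukes-Cantor distance $d$ into the corresponding probability of observing a substitution at a site before mutating the copy.

```python
import math
import random

def simulate_pair(length, d):
    # probability that a site shows a substitution, from the inverse Jukes-Cantor formula
    p_sub = 0.75 * (1.0 - math.exp(-4.0 * d / 3.0))
    alphabet = "ACGT"
    s1 = "".join(random.choice(alphabet) for _ in range(length))
    s2 = "".join(
        random.choice([b for b in alphabet if b != a]) if random.random() < p_sub else a
        for a in s1
    )
    return s1, s2
```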
Figure 1 shows the results of these experiments. Our new distance measure $d_N$ applied to spaced-word frequencies is well in accordance with the real distances $d$ for values of $d\le 0.8$ on sequence pairs without insertions and deletions if the single-pattern version of our program is used. For the multiple-pattern version, our distance function estimates the real distances correctly for all values of $d\le 1$. If indels are added as specified above, our distance function slightly overestimates the real distance $d$. By contrast, the Jensen-Shannon distance applied to the same spaced-word frequencies increased non-linearly with $d$ and flattened for values of around $d\ge 0.4$.
Distances calculated by different alignment-free methods. Distances were calculated for pairs of simulated DNA sequences and plotted against their 'real' distances d measured in substitutions per site. Plots on the left-hand side are for sequence pairs without insertions and deletions; on the right-hand side the corresponding results are shown for sequences with an indel probability of 1% for each site and an average indel length of 25. From top to bottom, the applied methods were: 1. spaced words with the single-pattern approach and the Jensen-Shannon distance (squares) and the distance $d_N$ defined in equation (4) in this paper (circles), 2. the multiple-pattern version of Spaced Words using sets of m=100 patterns with the same distance functions, 3. distances calculated with $K_r$ [37], 4. with kmacs [47] and ACS [30] and 5. with Co-phylog [38].
As mentioned, $K_r$ [46] estimates evolutionary distances on the basis of a probabilistic model of evolution. In our study, $K_r$ correctly estimated the true distance $d$ for values of around $d\le 0.6$; this precisely corresponds to the results reported by the authors of the program. For larger distances, $K_r$ grossly overestimates the distance $d$, though, and the variance strongly increases. The distances estimated by Co-phylog [38] nearly coincide with the substitution rate $d$ for values $d\le 0.7$, then the curve flattens. Moreover, it appears that the distances calculated with Co-phylog are not much affected by indels. The distance values calculated by the program k-mismatch average common substring (kmacs) that we previously developed [47] are roughly linear in the real distances $d$ for values of up to around $d=0.3$. From around $d=0.5$ on, the curve becomes flat. With $k=30$ mismatches, the performance of kmacs was better than with $k=0$, in which case kmacs corresponds to the Average Common Substring (ACS) approach [30].
Real-world genomes
Next, we applied various distance measures to a set of 27 mitochondrial genomes from different primates that were previously used by [46] as a benchmark data set for alignment-free approaches. We used our multiple spaced-words approach with the parameters that we used in [25], that is with a pattern weight (number of match positions) of k=9 and with pattern lengths ℓ between 9 and 39, i.e. with up to 30 don't-care positions in the patterns. For each value of ℓ, we randomly generated sets of m=100 patterns. For this data set, we used the distance $d_{RC}$ defined in equation (8) that takes the reverse complement of the input sequences into account. (We did not use the 'binary' version \(d_{\textit {RC}}^{bin}\) of our distance function, since these sequences do not contain major repeats). In addition, we used the competing approaches FFP [32], CVTree [48], $K_r$ [37], kmacs [47], ACS [30] and Co-phylog [38]. For some of these methods, program parameters need to be defined, e.g. a predefined word length or the number of allowed mismatches. For these methods we tested various parameters and used the best performing values.
With each method, we calculated a distance matrix for the input sequences, and we compared this matrix to a reference distance matrix that we calculated with the program Dnadist from the PHYLIP package [49], based on a reference multiple alignment. For the comparison with the reference matrix, we used a software program based on the Mantel test [50] that was also used by Didier et al. [51]. Figure 2 shows the results of this comparison. As can be seen, our new distance measure $d_{RC}$, applied to multiple spaced-word frequencies, produced distance matrices close to the reference matrix and outperformed the Jensen-Shannon distance for all pattern lengths ℓ that we tested. The distance function $d_{RC}$ also outperformed some of the existing alignment-free methods, with the exception of $K_r$ and kmacs.
Comparison of distance matrices for primate mitochondrial genomes. We applied various alignment-free methods to a set of 27 mitochondrial genomes from different primates and compared the resulting distance matrices to a trusted reference distance matrix using the Mantel test. The similarity between the calculated matrices and the reference matrix is plotted. We applied our Spaced-Words approach using sets of 100 randomly calculated patterns with weight k=9 and length ℓ between 9 and 39, i.e. with 9 match positions and up to 30 don't care positions. Yellow squares are the results for the 'binary' version of our new distance measure \(d_{N}^{bin}\). We did not use the reverse-complement option on these data, since genes in the compared genomes are known to be on the same strand. Green diamonds are the results for the Jensen-Shannon distance applied to the same spaced-word frequency vectors as explained in [25]. In addition, distances calculated by six other alignment-free methods were evaluated.
In addition to this direct distance comparison, we performed an indirect evaluation by phylogeny analysis. To do so, we applied Neighbor-Joining [36] to the distance matrices and compared the resulting trees to the corresponding reference tree, using the Robinson-Foulds (RF) metric [52]. The results for the mitochondrial genomes are shown in Figure 3. The outcome of this evaluation is partially in contradiction to the results of the direct comparison of the distances. Figure 2 shows that the distance matrices produced by Spaced Words with the Jensen-Shannon divergence are worse than the distance matrices produced by most other methods, if these matrices are directly compared to the reference distance matrix. However, Spaced Words with Jensen-Shannon led to better tree topologies than most other methods in our study, as shown in Figure 3. A similar contradictory result is observed for $K_r$. While the distance matrix produced by $K_r$ is similar to the reference matrix, the tree topology produced with these distances is further away from the reference topology than the trees computed by the other alignment-free approaches in our study.
RF distances for primate mitochondrial genomes. Performance of various alignment-free methods on the same set of 27 primate mitochondrial genomes as in Figure 2. Neighbour-Joining was applied to the calculated distance matrices, the resulting tree topologies were compared with the Robinson-Foulds metric. Parameters for Spaced Words and colour coding as in Figure 2.
To evaluate our new method on larger sequences, we used two prokaryotic data sets. The first data set consists of 26 E. coli and Shigella genomes, which are very closely related, and the second data set consists of 32 Roseobacter genomes, which are far more divergent. For these sequences, we used our 'repeat-aware' distance function \(d_{\textit {RC}}^{bin}\). As for the primate mitochondrial genomes, we calculated distance matrices using the same alignment-free methods, constructed trees with Neighbor-Joining and compared the resulting tree topologies to a benchmark tree using the RF metric. For the E. coli/Shigella genomes we used the tree proposed by [53] as a reference, which is based on concatenated alignments of the 2034 core genes. For the Roseobacter genomes we used the tree by [54] as a reference. This benchmark tree was constructed based on alignments of 70 universal single-copy genes.
The results for the E. coli/Shigella genomes are shown in Figure 4. The best result was achieved by Co-phylog with an RF distance of only 4, followed by our new distance $d_{RC}$ with an RF distance of 10, which is a huge improvement compared to the previously described version of Spaced Words where we used the Jensen-Shannon divergence. $K_r$ performed slightly worse than our new estimator with an RF distance of 12. The other alignment-free methods performed relatively poorly. For the Spaced Words approach we performed 25 runs with m=100 randomly generated patterns. For this data set, all sets of patterns led to the same tree topology. Additionally, the results are not influenced by the number of don't-care positions, which can be explained by the very small number of substitutions between these genomes.
RF distances for E.coli/Shigella genomes. Performance of various alignment-free methods on a set of 26 E.coli/Shigella genomes. Robinson-Foulds distances to the reference tree are shown. For Spaced Words, we used a weight of k=17 and applied the 'binary' distance function \(d_{\textit {RC}}^{bin}\). Colour coding as in Figure 2.
For the Roseobacter genomes we used the same evaluation procedure as for the E. coli/Shigella genomes. Here, our new evolutionary distance $d_{RC}$ outperformed the other alignment-free methods if don't-care positions are incorporated in the patterns, and the performance increased with the number of don't-care positions, as shown in Figure 5. (Without don't-care positions, i.e. if classical word matches are counted, $d_{RC}$ was slightly outperformed by Co-phylog, but was still better than all other methods in our comparison.) The RF distance to the benchmark tree varied between 24 and 28. Co-phylog ranked second in this evaluation with an RF distance of 28. All other methods achieved an RF distance of at least 30. Surprisingly, $K_r$ performed worse than the other programs on these sequences.
RF distances for Roseobacter genomes. Performance of various alignment-free methods on a set of 32 Roseobacter genomes. Robinson-Foulds distances to the reference tree are shown. Spaced Words was used with parameters as in Figure 4, colour coding is as in Figure 2.
The variance of N: experimental results
Figure 1 shows not only striking differences in the shape of the distance functions used by various alignment-free programs. There are also remarkable differences in the variance of the distances calculated with our new distance measure $d_N$ that we defined in equation (4). The distance $d_N$ is defined in terms of the number $N$ of (spaced) word matches between two sequences. As mentioned above, the established Jensen-Shannon and Euclidean distances on (spaced) word frequency vectors also depend on $N$; for small distances, they can be approximated by $L-N$ and \(\sqrt {L-N}\), respectively. Thus, the variances of these three distance measures directly depend on the variance of $N$. As can be seen in Figure 1, the variance of $d_N$ increases with the frequency of substitutions. Also, the variance is higher for the single-pattern approach than for the multiple-pattern approach. To explain this observation, we calculated the variance of the normalized number $N/m$ of spaced-word matches using equation (12). Figure 6 summarizes the results for a sequence length of L=16,000 and mismatch frequencies of 0.7 and 0.25, respectively. As can be seen, for single spaced words the variance of $N/m$ is far smaller than for contiguous words, and for multiple spaced words, the variance is further reduced.
Variance of the number of spaced-word matches. Variance of the normalized number \(\frac {N}{m}\) of spaced-word matches where \(m={|\mathcal {P}|}\) is the number of patterns in the multiple-pattern approach. Formula (12) was applied to contiguous words and to single and multiple spaced words for un-gapped sequence pairs of length 16,000 nt with a mismatch frequency of 0.7 (left) and 0.25 (right).
In this paper, we proposed a new estimator d N for the evolutionary distance between two DNA sequences that is based on the number N of spaced-word matches between them. While most alignment-free methods use ad-hoc distance measures, the distance function that we defined is based on a probabilistic model of evolution and seems to be a good estimator for the number of substitutions per site that have occurred since two sequences have evolved separately. For simplicity, we used a model of evolution without insertions and deletions. Nevertheless, our test results show that our distance function is still a reasonable estimator if the input sequences contain a moderate number of insertions and deletions although, in this case, distances between the input sequences are overestimated since the number N of spaced-word matches is smaller than it would be for sequences without indels.
The model that we used to derive our distance $d_N$ assumes that two sequences are globally related. If sequences share only local homology, the number $N$ of spaced-word matches would be smaller than for globally related sequences with the same length and rate of mismatches, so their distance would be over-estimated by our distance measure $d_N$. This is clearly a limitation of our approach. However, as indicated in the section 'Estimating evolutionary distances from the number N of spaced-word matches', our distance function can be adapted to the case of local homologies if the length of these homologies and the number of gaps in the homologous regions can be estimated. In principle, it should therefore be possible to apply our method to locally related sequences by first estimating the extent of their shared (local) homology and then using the distance $d_{loc}$ defined in equation (5) instead of $d_N$.
The distance measures introduced in this paper and other distances that we previously used for our spaced words approach depend on the number $N$ of spaced-word matches between two sequences with respect to a set of patterns of 'match' and 'don't care' positions. This is similar for more traditional alignment-free methods that calculate distances based on k-mer frequencies. While the expected number of (spaced) word matches is essentially the same for contiguous words and for spaced words of the corresponding weight, we have shown that the variance of $N$ is considerably lower for spaced words than for the traditionally used contiguous words. Moreover, with our multiple-pattern approach the variance of the normalized number of spaced-word matches is further reduced. This seems to be the main reason why our multiple spaced words approach outperforms the single-pattern approach that we previously introduced, as well as the classical k-mer approach, when used for phylogeny reconstruction.
As we have shown, the variance of N depends on the number of overlapping 'match' positions if the patterns from the underlying pattern set are shifted against each other. Consequently, in our single-pattern approach, the variance of N is higher for periodic patterns than for non-periodic patterns. For example, for the periodic pattern 101010…, the variance is equal to the variance of the contiguous pattern of the corresponding weight. In our previous benchmark studies, we could experimentally confirm that our spaced-words approach performs better with non-periodic patterns than with periodic patterns. The theoretical results of this study may be useful to find patterns or sets of patterns that minimize the variance of N, in order to further improve our spaced-words approach.
Vinga S. Editorial: Alignment-free methods in computational biology. Briefings Bioinf. 2014; 15:341–2.
Leslie C, Eskin E, Noble WSS. The spectrum kernel: a string kernel for SVM protein classification. In: Pacific Symposium on Biocomputing. Singapore: World Scientific Publishing: 2002. p. 566–75.
Lingner T, Meinicke P. Remote homology detection based on oligomer distances. Bioinformatics. 2006; 22:2224–31.
Lingner T, Meinicke P. Word correlation matrices for protein sequence analysis and remote homology detection. BMC Bioinf. 2008; 9:259.
Comin M, Verzotto D. The irredundant class method for remote homology detection of protein sequences. J Comput Biol. 2011; 18:1819–29.
Li R, Li Y, Kristiansen K, Wang J. SOAP: short oligonucleotide alignment program. Bioinformatics. 2008; 24:713–4.
Langmead B, Trapnell C, Pop M, Salzberg S. Ultrafast and memory-efficient alignment of short DNA sequences to the human genome. Genome Biol. 2009; 10:25.
Ahmadi A, Behm A, Honnalli N, Li C, Weng L, Xie X. Hobbes: optimized gram-based methods for efficient read alignment. Nucleic Acids Res. 2011; 40:1.
Patro R, Mount SM, Kingsford C. Sailfish enables alignment-free isoform quantification from RNA-seq reads using lightweight algorithms. Nat Biotechnol. 2014; 32:462–4.
Zerbino DR, Birney E. Velvet: algorithms for de novo short read assembly using de Bruijn graphs. Genome Res. 2008; 18:821–9.
Teeling H, Waldmann J, Lombardot T, Bauer M, Glockner F. Tetra: a web-service and a stand-alone program for the analysis and comparison of tetranucleotide usage patterns in DNA sequences. BMC Bioinf. 2004; 5:163.
Chatterji S, Yamazaki I, Bai Z, Eisen JA. Compostbin: A DNA composition-based algorithm for binning environmental shotgun reads. In: Research in Computational Molecular Biology, 12th Annual International Conference, RECOMB 2008, Singapore, March 30 - April 2, 2008. Proceedings. Berlin, Heidelberg: Springer: 2008. p. 17–28.
Wu Y-W, Ye Y. A novel abundance-based algorithm for binning metagenomic sequences using l-tuples. J Comput Biol. 2011; 18:523–34.
Tanaseichuk O, Borneman J, Jiang T. Separating metagenomic short reads into genomes via clustering. Algorithms Mol Biol. 2012; 7:27.
Leung HCM, Yiu SM, Yang B, Peng Y, Wang Y, Liu Z, et al. A robust and accurate binning algorithm for metagenomic sequences with arbitrary species abundance ratio. Bioinformatics. 2011; 27:1489–95.
Wang Y, Leung HCM, Yiu SM, Chin FYL. Metacluster 5.0: a two-round binning approach for metagenomic data for low-abundance species in a noisy sample. Bioinformatics. 2012; 28:356–62.
Meinicke P, Tech M, Morgenstern B, Merkl R. Oligo kernels for datamining on biological sequences: a case study on prokaryotic translation initiation sites. BMC Bioinf. 2004; 5:169.
Kantorovitz M, Robinson G, Sinha S. A statistical method for alignment-free comparison of regulatory sequences. Bioinformatics. 2007; 23:249–55.
Leung G, Eisen MB. Identifying cis-regulatory sequences by word profile similarity. PLoS ONE. 2009; 4(9):6901.
Federico M, Leoncini M, Montangero M, Valente P. Direct vs 2-stage approaches to structured motif finding. Algorithms Mol Biol. 2012; 7:20.
Blaisdell BE. A measure of the similarity of sets of sequences not requiring sequence alignment. Proc Nat Acad Sci USA. 1986; 83:5155–9.
Lin J. Divergence measures based on the shannon entropy. IEEE Trans Inf theory. 1991; 37:145–51.
Ma B, Tromp J, Li M. PatternHunter: faster and more sensitive homology search. Bioinformatics. 2002; 18:440–5.
Boden M, Schöneich M, Horwege S, Lindner S, Leimeister C-A, Morgenstern B. German Conference on Bioinformatics 2013 In: Beißbarth T, Kollmar M, Leha A, Morgenstern B, Schultz A-K, Waack S, Wingender E, editors. OpenAccess Series in Informatics (OASIcs). Dagstuhl, Germany: Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik: 2013. p. 24–34. http://drops.dagstuhl.de/opus/volltexte/2013/4233.
Leimeister C-A, Boden M, Horwege S, Lindner S, Morgenstern B. Fast alignment-free sequence comparison using spaced-word frequencies. Bioinformatics. 2014; 30:1991–9.
Horwege S, Lindner S, Boden M, Hatje K, Kollmar M, Leimeister C-A, et al. Spaced words and kmacs: fast alignment-free sequence comparison based on inexact word matches. Nucleic Acids Res. 2014; 42:W7–W11.
Onodera T, Shibuya T. The gapped spectrum kernel for support vector machines In: Perner P, editor. Machine Learning and Data Mining in Pattern Recognition, Lecture Notes in Computer Science. Berlin,Heidelberg: Springer: 2013.
Ghandi M, Mohammad-Noori M, Beer MA. Robust k-mer frequency estimation using gapped k-mers. J Math Biol. 2014; 69:469–500.
Ghandi M, Lee D, Mohammad-Noori M, Beer MA. Enhanced regulatory sequence prediction using gapped k-mer features. PLoS Comput Biol. 2014; 10(7):1003711.
Ulitsky I, Burstein D, Tuller T, Chor B. The average common substring approach to phylogenomic reconstruction. J Comput Biol. 2006; 13:336–50.
Didier G, Debomy L, Pupin M, Zhang M, Grossmann A, Devauchelle C, et al. Comparing sequences without using alignments: application to HIV/SIV subtyping. BMC Bioinf. 2007; 8:1.
Sims GE, Jun S-R, Wu GA, Kim S-H. Alignment-free genome comparison with feature frequency profiles (FFP) and optimal resolutions. Proc Nat Acad Sci. 2009; 106:2677–82.
Domazet-Loso M, Haubold B. Alignment-free detection of local similarity among viral and bacterial genomes. Bioinformatics. 2011; 27(11):1466–72.
Haubold B, Reed FA, Pfaffelhuber P. Alignment-free estimation of nucleotide diversity. Bioinformatics. 2011; 27:449–55.
Comin M, Verzotto D. Alignment-free phylogeny of whole genomes using underlying subwords. Algorithms Mol Biol. 2012; 7:34.
Saitou N, Nei M. The neighbor-joining method: a new method for reconstructing phylogenetic trees. Mol Biol Evol. 1987; 4:406–25.
Haubold B, Pierstorff N, Möller F, Wiehe T. Genome comparison without alignment using shortest unique substrings. BMC Bioinf. 2005; 6:123.
Yi H, Jin L. Co-phylog: an assembly-free phylogenomic approach for closely related organisms. Nucleic Acids Res. 2013; 41:75.
Haubold B, Klötzl F, Pfaffelhuber P. andi: Fast and accurate estimation of evolutionary distances between closely related genomes. Bioinformatics. doi:10.1093/bioinformatics/btu815.
Noé L, Martin DEK. A coverage criterion for spaced seeds and its applications to SVM string-kernels and k-mer distances. J Comput Biol. 2014; 21(12):947–63.
Morgenstern B, Zhu B, Horwege S, Leimeister C. Estimating evolutionary distances from spaced-word matches. In: Proc. Workshop on Algorithms in Bioinformatics (WABI'14). Lecture Notes in Bioinformatics. Berlin Heidelberg.: Springer: 2014. p. 161–73.
Lippert RA, Huang H, Waterman MS. Distributional regimes for the number of k-word matches between two random sequences. Proc Nat Acad Sci. 2002; 99:13980–9.
Reinert G, Chew D, Sun F, Waterman MS. Alignment-free sequence comparison (i): Statistics and power. J Comput Biol. 2009; 16:1615–34.
Jukes TH, Cantor CR. Evolution of protein molecules. New York: Academic Press; 1969.
Robin S, Rodolphe F, Schbath S. DNA, Words and Models: Statistics of Exceptional Words. Cambridge: Cambridge University Press; 2005.
Haubold B, Pfaffelhuber P, Domazet-Loso M, Wiehe T. Estimating mutation distances from unaligned genomes. J Comput Biol. 2009; 16:1487–500.
Leimeister C-A, Morgenstern B. kmacs: the k-mismatch average common substring approach to alignment-free sequence comparison. Bioinformatics. 2014; 30:2000–8.
Qi J, Luo H, Hao B. CVTree: a phylogenetic tree reconstruction tool based on whole genomes. Nucleic Acids Res. 2004; 32(suppl 2):45–7.
Felsenstein J. PHYLIP - Phylogeny Inference Package (Version 3.2). Cladistics. 1989; 5:164–6.
Bonnet E, de Peer YV. zt: A software tool for simple and partial Mantel tests. J Stat Software. 2002; 7:1–12.
Didier G, Laprevotte I, Pupin M, Hénaut A. Local decoding of sequences and alignment-free comparison. J Comput Biol. 2006; 13:1465–76.
Robinson D, Foulds L. Comparison of phylogenetic trees. Mathematical Biosciences. 1981; 53:131–47.
Zhou Z, Li X, Liu B, Beutin L, Xu J, Ren Y, et al. Derivation of Escherichia coli O157:H7 from its O55:H7 precursor. PLoS ONE. 2010; 5:8700.
Newton RJ, Griffin LE, Bowles KM, Meile C, Gifford S, Givens CE, et al. Genome characteristics of a generalist marine bacterial lineage. ISME J. 2010; 4:784–98.
We would like to thank Marcus Boden, Sebastian Lindner, Alec Guyomard and Claudine Devauchelle for help with the program evaluation and Gilles Didier for help with the software to compare distance matrices. We thank Matteo Comin, Ruth Kantorovitz, Saurabh Sinha and an unknown WABI reviewer for pointing out an error regarding the covariance of spaced-word matches in the previous version of this manuscript that was published at WABI 2014.
Author affiliations:
University of Göttingen, Department of Bioinformatics, Goldschmidtstr. 1, Göttingen, 37073, Germany: Burkhard Morgenstern, Sebastian Horwege, Chris André Leimeister
Université d'Evry Val d'Essonne, Laboratoire Statistique et Génome, UMR CNRS 8071, USC INRA, 23 Boulevard de France, Evry, 91037, France: Burkhard Morgenstern
University of Göttingen, Department of General Microbiology, Grisebachstr. 8, Göttingen, 37073, Germany: Bingyao Zhu
Correspondence to Burkhard Morgenstern.
BM conceived the new distance measures and the theoretical results on the variance of N and wrote most of the manuscript. CAL implemented Spaced Words, did most of the program evaluation and wrote parts of the manuscript. BZ and SH contributed to the program evaluation. All authors read and approved the final manuscript.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Morgenstern, B., Zhu, B., Horwege, S. et al. Estimating evolutionary distances between genomic sequences from spaced-word matches. Algorithms Mol Biol 10, 5 (2015). https://doi.org/10.1186/s13015-015-0032-x
Received: 19 November 2014
Keywords: k-mers; Spaced words; Alignment-free; Distance estimation; Genome comparison
A three-dimensional gas-kinetic flux solver for simulation of viscous flows with explicit formulations of conservative variables and numerical flux
Y. Sun, L. M. Yang (ORCID: 0000-0001-7961-4854), C. Shu & C. J. Teo
A truly three-dimensional (3D) gas-kinetic flux solver for the simulation of incompressible and compressible viscous flows is presented in this work. By local reconstruction of the continuous Boltzmann equation, the inviscid and viscous fluxes across the cell interface are evaluated simultaneously in the solver. Different from the conventional gas-kinetic scheme, in the present work the distribution function at the cell interface is computed in a straightforward way. As an extension of our previous work (Sun et al., Journal of Computational Physics, 300 (2015) 492–519), the non-equilibrium distribution function is calculated from the difference of equilibrium distribution functions between the cell interface and its surrounding points. As a result, the distribution function at the cell interface can be calculated simply, and the formulations for computing the conservative flow variables and fluxes can be given explicitly. To validate the proposed flux solver, several incompressible and compressible viscous flows are simulated. Numerical results show that the current scheme provides accurate solutions for three-dimensional incompressible and compressible viscous flows.
In the last few decades, the gas-kinetic scheme has been developed for both the continuum [1,2,3,4,5,6,7,8] and rarefied [9,10,11,12] flow regimes. Unlike the traditional Riemann solver [13,14,15], the gas-kinetic scheme reconstructs the solution of the continuous Boltzmann equation at the local cell interface. Since the continuum assumption is avoided in the continuous Boltzmann equation, the gas-kinetic scheme can be applied to both continuum and rarefied flow problems, which is one of its advantages over the traditional Riemann solver. Another advantage is that the gas-kinetic scheme computes the inviscid and viscous fluxes simultaneously from the solution of the Boltzmann equation. In contrast, the traditional Riemann solver evaluates only the inviscid flux, and an additional step is required to compute the viscous flux by smooth function approximation.
Currently, there are two common types of gas-kinetic schemes: the kinetic flux vector splitting (KFVS) scheme and the gas-kinetic Bhatnagar-Gross-Krook (BGK) scheme. In KFVS, the Boltzmann equation without the collision term, also called the collisionless Boltzmann equation, is solved. Basically, there are two stages in KFVS: free transport and collision. In the free transport stage, the collisionless Boltzmann equation is solved to calculate the flux at the interface. In the collision stage, artificial collisions are introduced in the calculation of the initial Maxwellian distribution at the beginning of the next time step. The KFVS has been demonstrated to have a good positivity property for the simulation of flows with strong shock waves [2]. However, because the numerical dissipation in KFVS is proportional to the mesh size, the KFVS usually gives more diffusive results than the Godunov or flux difference splitting (FDS) scheme [16], and is not able to give accurate Navier-Stokes solutions except for cases in which the physical viscosity is much larger than the numerical viscosity. Representative studies on KFVS include those of Pullin [17], Deshpande [18], Perthame [19], Mandal and Deshpande [20], and Chou and Baganoff [21].
One of the significant developments in gas-kinetic schemes is the gas-kinetic BGK scheme, which was first proposed by Prendergast and Xu [22] and further developed by Chae et al. [23], Xu [24] and other researchers. In this method, the BGK collision model is adopted in the solution process to obtain the numerical fluxes across the interface. As a consequence, the dissipation in the transport can be controlled by a real collision time, which is proportional to the dynamic viscosity. The gas-kinetic BGK scheme enjoys some intrinsic advantages. Firstly, it has been shown that the gas-kinetic BGK scheme is able to generate a stable and crisp shock transition in the discontinuous region with a delicate dissipative mechanism [24]. At the same time, an accurate Navier-Stokes solution can be obtained in the smooth region. Moreover, it has been demonstrated that the entropy condition is always satisfied in the gas-kinetic BGK scheme and that the "carbuncle phenomenon" is avoided in hypersonic flow simulations [25]. However, the gas-kinetic BGK scheme is not completely free from drawbacks. It is usually more complicated and less efficient than conventional computational fluid dynamics (CFD) schemes, because a number of coefficients related to the physical space must be calculated to evaluate the distribution function at each interface and each time step. Moreover, to the best of our knowledge, there is still no gas-kinetic BGK scheme that gives explicit formulations for evaluating the conservative variables and numerical fluxes.
Recently, a straightforward way to evaluate the distribution function was proposed by Sun et al. [1], named the gas-kinetic flux solver (GKFS). Different from the gas-kinetic BGK scheme [24], in GKFS the non-equilibrium distribution function at the cell interface is approximated by the difference of equilibrium distribution functions between the cell interface and its surrounding points. To be specific, the equilibrium distribution functions at the surrounding points of the cell interface are first obtained by interpolation from the conservative variables at cell centers. Then, the equilibrium distribution function at the cell interface is evaluated through a streaming process from the surrounding points. After these steps, the non-equilibrium distribution function at the cell interface can be simply calculated and explicit formulations for computing the conservative flow variables and fluxes can be derived. It has been shown that GKFS gives the same results as the conventional gas-kinetic BGK scheme while requiring only about 60% of its computational time [1]. Inspired by the previous work on GKFS [1], a 3D GKFS is developed in this work. In the scheme, the 3D Navier-Stokes equations are discretized by the finite volume method and the numerical flux across the interface is evaluated by the local solution of the 3D Boltzmann equation. Therefore, the present scheme can be viewed as a truly 3D flux solver. At the same time, a coordinate transformation is made at the local cell interface to transform the velocities in the Cartesian coordinate system to the normal and tangential directions of the interface. In this way, all the cell interfaces can be treated in the same way for the evaluation of conservative variables and numerical fluxes. As in the two-dimensional (2D) case, the non-equilibrium distribution function is approximated by the difference of the equilibrium distribution functions between the cell interface and its surrounding points. To our knowledge, the present work is the first to give explicit formulations for evaluating the conservative variables and numerical fluxes for 3D viscous flow problems. Like other gas-kinetic schemes, the present scheme can be applied to both incompressible and compressible viscous flow problems without any modification. To validate the developed scheme, both incompressible and compressible viscous test cases are solved, including the 3D driven cavity flow, incompressible flow past a stationary sphere, flow around the ONERA M6 wing and the DLR-F6 wing-body configuration. Numerical results show that the present solver can provide accurate results for both incompressible and compressible flows.
Boltzmann equation, Maxwellian distribution function and Navier-Stokes equations
Boltzmann equation and conservative forms of moments for Maxwellian function
With Bhatnagar-Gross-Krook (BGK) collision model [26], the continuous Boltzmann equation in the three-dimensional Cartesian coordinate system can be written as
$$ \frac{\partial f}{\partial t}+u\frac{\partial f}{\partial x}+v\frac{\partial f}{\partial y}+w\frac{\partial f}{\partial z}=\frac{g-f}{\tau }, $$
where f is the real particle distribution function and g is the equilibrium particle distribution function. τ is the collision time, which is determined by the dynamic viscosity and the pressure. The right-hand side of the equation is the collision term, which relaxes the distribution function from f to g within a collision time τ. Both f and g are functions of space (x, y, z), time (t) and particle velocity (u, v, w, ξ). The internal degree of freedom K in ξ is determined by the space dimension and the ratio of specific heats through the relation K + D = 2/(γ − 1), where D is the spatial dimension (D = 3 in three dimensions) and γ is the specific heat ratio. The equilibrium state g of the Maxwellian distribution is
$$ g=\rho {\left(\frac{\lambda }{\pi}\right)}^{\frac{K+3}{2}}{e}^{-\lambda \left({\left(u-U\right)}^2+{\left(v-V\right)}^2+{\left(w-W\right)}^2+{\xi}^2\right)}, $$
where ρ is the density of the mean flow; U = (U, V, W) is the macroscopic velocity vector expressed in the x-, y- and z- directions; λ = m/(2kT) = 1/(2RT), where m is the molecular mass, k is the Boltzmann constant, R is the gas constant and T is the temperature. In the equilibrium state, ξ2 is the abbreviation of \( {\xi}^2={\xi}_1^2+{\xi}_2^2+\cdots +{\xi}_K^2. \)
With the Maxwellian distribution function in Eq. (2), the following 7 conservative forms of moments will be satisfied, which are used to recover Navier-Stokes equations by Eq. (1) through Chapman-Enskog expansion analysis:
$$ \int gd\Xi =\rho, $$
$$ \int {gu}_{\alpha }d\Xi =\rho {U}_{\alpha }, $$
$$ \int g\left({u}_{\alpha }{u}_{\alpha }+{\xi}^2\right)d\Xi =\rho \left({U}_{\alpha }{U}_{\alpha }+ bRT\right), $$
$$ \int {gu}_{\alpha }{u}_{\beta }d\Xi =\rho {U}_{\alpha }{U}_{\beta }+p{\delta}_{\alpha \beta}, $$
$$ \int g\left({u}_{\alpha }{u}_{\alpha }+{\xi}^2\right){u}_{\beta }d\Xi =\rho \left[{U}_{\alpha }{U}_{\alpha }+\left(b+2\right) RT\right]{U}_{\beta }, $$
$$ \int {gu}_{\alpha }{u}_{\beta }{u}_{\chi }d\Xi =p\left({U}_{\alpha }{\delta}_{\beta \chi}+{U}_{\beta }{\delta}_{\chi \alpha}+{U}_{\chi }{\delta}_{\alpha \beta}\right)+\rho {U}_{\alpha }{U}_{\beta }{U}_{\chi }, $$
$$ {\displaystyle \begin{array}{l}\int g\left({u}_{\alpha }{u}_{\alpha }+{\xi}^2\right){u}_{\beta }{u}_{\chi }d\Xi \\ {}=\rho \left\{{U}_{\alpha }{U}_{\alpha }{U}_{\beta }{U}_{\chi }+\left[\left(b+4\right){U}_{\beta }{U}_{\chi }+{U}_{\alpha }{U}_{\alpha }{\delta}_{\beta \chi}\right] RT+\left(b+2\right){R}^2{T}^2{\delta}_{\beta \chi}\right\},\end{array}} $$
where uα, uβ, uχ and Uα, Uβ, Uχ are the particle velocities and the macroscopic flow velocities in the α-, β- and χ- directions. p is the pressure and b = K + D represents the total degree of freedoms of molecules. dΞ = duαduβduχdξ1dξ2⋯dξK is the volume element in the particle velocity space. The integral domain for uα, uβ, uχ, ξ1, ξ2, …, ξK is from −∞ to +∞. Eqs. (3)–(5) are applied to recover the fluid density, momentum and energy, respectively. Eqs. (6) and (7) are used to recover convective fluxes of the momentum equation and the energy equation. Eqs. (8) and (9) are to recover diffusive fluxes of the momentum equation and the energy equation.
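As a quick numerical sanity check of the moment relations, the sketch below integrates the Maxwellian of Eq. (2) over the particle velocities (u, v, w) on a truncated grid and compares the results with the right-hand sides of Eqs. (3)–(5); the integration over the internal variable ξ is carried out analytically. The flow state and grid resolution are arbitrary choices for this illustration, not values taken from the paper.

import numpy as np

# Check of Eqs. (3)-(5) for the Maxwellian of Eq. (2). The xi-integration is done
# analytically: it contributes a factor of one to the normalization and
# K*rho/(2*lam) to the <xi^2> moment.
rho, U, V, W = 1.2, 0.3, -0.2, 0.1      # arbitrary mean-flow state (made up)
K, lam = 2, 0.8                          # internal dofs (K = 2 gives gamma = 1.4 in 3D), lam = 1/(2RT)
b, RT = K + 3, 1.0 / (2.0 * lam)

u = np.linspace(U - 6.0, U + 6.0, 121)
v = np.linspace(V - 6.0, V + 6.0, 121)
w = np.linspace(W - 6.0, W + 6.0, 121)
uu, vv, ww = np.meshgrid(u, v, w, indexing="ij")
g = rho * (lam / np.pi) ** 1.5 * np.exp(-lam * ((uu - U) ** 2 + (vv - V) ** 2 + (ww - W) ** 2))

dV = (u[1] - u[0]) * (v[1] - v[0]) * (w[1] - w[0])
mass = g.sum() * dV                                                         # Eq. (3): rho
momentum = (g * uu).sum() * dV                                              # Eq. (4): rho*U
energy = (g * (uu ** 2 + vv ** 2 + ww ** 2)).sum() * dV + K * rho / (2.0 * lam)   # Eq. (5)

print(mass, rho)
print(momentum, rho * U)
print(energy, rho * (U ** 2 + V ** 2 + W ** 2 + b * RT))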
Macroscopic governing equations discretized by finite volume method
In this work, the 3D Navier-Stokes equations are solved using the finite volume discretization with the conservative variables defined at cell centers, which can be written as
$$ \frac{d\mathbf{W}}{dt}+\frac{1}{\Omega}\sum \limits_{i=1}^N{\mathbf{F}}_i{S}_i=0, $$
where W is the vector of conservative variables, Ω and N are the volume and number of interfaces of the control volume, respectively, Fi and Si are the flux vector and the area of interface i. It should be noted that the numerical fluxes Fi are reconstructed locally at cell interface from the conservative variables W at cell centers. In the gas-kinetic scheme, the connection between the distribution function f and the conservative variables is
$$ \mathbf{W}={\left(\rho, \kern0.5em \rho U,\kern0.5em \rho V,\kern0.5em \rho W,\kern0.5em \rho E\right)}^T=\int {\boldsymbol{\upvarphi}}_{\alpha } fd\Xi, $$
where \( E=\frac{1}{2}\left({U}^2+{V}^2+{W}^2+ bRT\right) \). φα is the moment given by
$$ {\boldsymbol{\upvarphi}}_{\alpha }={\left(1,\kern0.5em u,\kern0.5em v,\kern0.5em w,\kern0.5em \frac{1}{2}\left({u}^2+{v}^2+{w}^2+{\xi}^2\right)\right)}^T. $$
With the compatibility condition,
$$ \int {\boldsymbol{\upvarphi}}_{\alpha}\frac{g-f}{\tau }d\Xi =0, $$
Eq. (11) is equivalent to
$$ \mathbf{W}={\left(\rho, \kern0.5em \rho U,\kern0.5em \rho V,\kern0.5em \rho W,\kern0.5em \rho E\right)}^T=\int {\boldsymbol{\upvarphi}}_{\alpha } gd\Xi . $$
The above equation shows that the non-equilibrium distribution function has no contribution to the calculation of conservative variables.
After evaluation of conservative variables, the flux vector F can also be obtained from the distribution function
$$ \mathbf{F}=\int u{\boldsymbol{\upvarphi}}_{\alpha } fd\Xi . $$
It should be noted that Eq. (15) is the flux vector of x-direction in the Cartesian coordinate system. In the practical application such as curved boundary problems, we need to calculate the numerical flux in the normal direction of interface Fn
$$ {\mathbf{F}}_n={\left({F}_1,\kern0.5em {F}_2,\kern0.5em {F}_3,\kern0.5em {F}_4,\kern0.5em {F}_5\right)}^T=\int {u}^{\prime }{\boldsymbol{\upvarphi}}_{\alpha } fd\Xi, $$
where u′ is the particle velocity in the normal direction of interface. Suppose that n1 = (n1x, n1y, n1z) is the unit vector in the normal direction of interface and n2 = (n2x, n2y, n2z), n3 = (n3x, n3y, n3z) are the unit vectors in the tangential directions. Then, the relationship between the particle velocities in the normal and tangential directions (u′, v′, w′) and the particle velocities in the Cartesian coordinate system (u, v, w) are
$$ {u}^{\prime }={un}_{1x}+{vn}_{1y}+{wn}_{1z},\kern1.5em {v}^{\prime }={un}_{2x}+{vn}_{2y}+{wn}_{2z},\kern1.5em {w}^{\prime }={un}_{3x}+{vn}_{3y}+{wn}_{3z}, $$
and similarly
$$ u={u}^{\prime }{n}_{1x}+{v}^{\prime }{n}_{2x}+{w}^{\prime }{n}_{3x},\kern1.5em v={u}^{\prime }{n}_{1y}+{v}^{\prime }{n}_{2y}+{w}^{\prime }{n}_{3y},\kern1.5em w={u}^{\prime }{n}_{1z}+{v}^{\prime }{n}_{2z}+{w}^{\prime }{n}_{3z}. $$
Substituting Eq. (18) into Eq. (12), we have
$$ {\boldsymbol{\upvarphi}}_{\alpha }=\left(\begin{array}{ccccc}1& 0& 0& 0& 0\\ {}0& {n}_{1x}& {n}_{2x}& {n}_{3x}& 0\\ {}0& {n}_{1y}& {n}_{2y}& {n}_{3y}& 0\\ {}0& {n}_{1z}& {n}_{2z}& {n}_{3z}& 0\\ {}0& 0& 0& 0& 1\end{array}\right){\left(1,\kern0.5em {u}^{\prime },\kern0.5em {v}^{\prime },\kern0.5em {w}^{\prime },\kern0.5em \frac{1}{2}\left({u^{\prime}}^2+{v^{\prime}}^2+{w^{\prime}}^2+{\xi}^2\right)\right)}^T. $$
With the definition of a new moment
$$ {\boldsymbol{\upvarphi}}_{\alpha}^{\ast }={\left(1,\kern0.5em {u}^{\prime },\kern0.5em {v}^{\prime },\kern0.5em {w}^{\prime },\kern0.5em \frac{1}{2}\left({u^{\prime}}^2+{v^{\prime}}^2+{w^{\prime}}^2+{\xi}^2\right)\right)}^T, $$
and its corresponding flux vector
$$ {\mathbf{F}}_n^{\ast }={\left({F}_1^{\ast },\kern0.5em {F}_2^{\ast },\kern0.5em {F}_3^{\ast },\kern0.5em {F}_4^{\ast },\kern0.5em {F}_5^{\ast}\right)}^T=\int {u}^{\prime }{\boldsymbol{\upvarphi}}_{\alpha}^{\ast } fd\Xi, $$
the real flux vector Fn can be obtained by substituting Eq. (19) into Eq. (16) and using Eq. (21)
$$ {\mathbf{F}}_n=\int {u}^{\prime }{\boldsymbol{\upvarphi}}_{\alpha } fd\Xi =\left(\begin{array}{ccccc}1& 0& 0& 0& 0\\ {}0& {n}_{1x}& {n}_{2x}& {n}_{3x}& 0\\ {}0& {n}_{1y}& {n}_{2y}& {n}_{3y}& 0\\ {}0& {n}_{1z}& {n}_{2z}& {n}_{3z}& 0\\ {}0& 0& 0& 0& 1\end{array}\right){\mathbf{F}}_n^{\ast }. $$
The above Eq. (22) shows that the calculation of Fn is equivalent to the evaluation of \( {\mathbf{F}}_n^{\ast } \) and the key issue is to obtain the gas distribution function f. In the next subsection, a 3D GKFS will be introduced to evaluate the gas distribution function f at cell interface.
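To make the frame change concrete, the following sketch builds an orthonormal interface frame (n1, n2, n3) and applies the 5 × 5 transformation of Eq. (22) to a local flux vector. The choice of tangential vectors and all numerical values are made up for illustration; this is a minimal sketch, not the authors' implementation.

import numpy as np

def interface_frame(n1):
    """Unit normal plus two tangential unit vectors; n2 and n3 are not unique,
    any orthonormal completion of n1 works."""
    n1 = n1 / np.linalg.norm(n1)
    helper = np.array([0.0, 0.0, 1.0]) if abs(n1[2]) < 0.9 else np.array([1.0, 0.0, 0.0])
    n2 = np.cross(n1, helper)
    n2 /= np.linalg.norm(n2)
    n3 = np.cross(n1, n2)
    return n1, n2, n3

def local_to_global_flux(F_local, n1, n2, n3):
    """Apply the 5x5 matrix of Eq. (22) to the local flux (F1*, ..., F5*)."""
    T = np.eye(5)
    T[1:4, 1:4] = np.column_stack((n1, n2, n3))   # columns are n1, n2, n3
    return T @ F_local

# usage with made-up numbers
n1, n2, n3 = interface_frame(np.array([0.6, 0.8, 0.0]))
F_local = np.array([0.1, 0.5, 0.02, 0.01, 1.3])   # hypothetical (F1*, ..., F5*)
print(local_to_global_flux(F_local, n1, n2, n3))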
Three-dimensional gas-kinetic flux solver
As the flux vector \( {\mathbf{F}}_n^{\ast } \) is evaluated at the local interface, a local coordinate system is applied in the derivation of distribution function f. It is known that the distribution function f can be separated into two parts, the equilibrium part feq and the non-equilibrium part fneq with the relationship of
$$ f={f}^{eq}+{f}^{neq}. $$
Here, the equilibrium part feq is simply
$$ {f}^{eq}=g. $$
With the Chapman-Enskog expansion analysis, the non-equilibrium distribution function can be approximated as
$$ {f}^{neq}=-\tau \left(\frac{\partial }{\partial t}+\mathbf{u}\cdot \nabla \right){f}^{eq}=-\tau \left(\frac{\partial }{\partial t}+\mathbf{u}\cdot \nabla \right)g. $$
Therefore, the gas distribution function truncated to the Navier-Stokes level becomes
$$ f={f}^{eq}+{f}^{neq}=g-\tau \left(\frac{\partial g}{\partial t}+{u}^{\prime}\frac{\partial g}{\partial {n}_1}+{v}^{\prime}\frac{\partial g}{\partial {n}_2}+{w}^{\prime}\frac{\partial g}{\partial {n}_3}\right). $$
By applying the Taylor series expansion in time and physical space, the above equation can be simplified to
$$ {\displaystyle \begin{array}{l}f\left(0,0,0,t+\delta t\right)\\ {}=g\left(0,0,0,t+\delta t\right)-\frac{\tau }{\delta t}\left[g\left(0,0,0,t+\delta t\right)-g\left(-{u}^{\prime}\delta t,-{v}^{\prime}\delta t,-{w}^{\prime}\delta t,t\right)\right],\end{array}} $$
where f(0, 0, 0, t + δt) is the gas distribution function at the local interface; g(0, 0, 0, t + δt) and g(−u′δt, −v′δt, −w′δt, t) are the equilibrium distribution functions at the local interface and its surrounding points, respectively; and δt is the streaming time step. From Eq. (27), it can be seen that the non-equilibrium distribution fneq is calculated from the difference of equilibrium distribution functions between the interface and its surrounding points, which makes the current GKFS much more straightforward.
In the present work, the conservative variables in Eq. (10) are defined at cell centers. In order to solve Eq. (10) by marching in time, the numerical flux in the normal direction of each cell interface \( {\mathbf{F}}_n^{\ast } \) should be evaluated first. Suppose that the conservative variables at cell centers and their first-order derivatives are already known; then the conservative variables at the left and the right sides of an interface can easily be obtained by interpolation. Next, the equilibrium distribution functions at these two sides of the interface can be given via Eq. (2). After that, the second-order approximation of g(−u′δt, −v′δt, −w′δt, t) at the time level t can be written as
$$ g\left(-{u}^{\prime}\delta t,-{v}^{\prime}\delta t,-{w}^{\prime}\delta t,t\right)=\left\{\begin{array}{c}{g}_l-\frac{\partial {g}_l}{\partial {n}_1}{u}^{\prime}\delta t-\frac{\partial {g}_l}{\partial {n}_2}{v}^{\prime}\delta t-\frac{\partial {g}_l}{\partial {n}_3}{w}^{\prime}\delta t,\kern2em {u}^{\prime}\ge 0,\\ {}{g}_r-\frac{\partial {g}_r}{\partial {n}_1}{u}^{\prime}\delta t-\frac{\partial {g}_r}{\partial {n}_2}{v}^{\prime}\delta t-\frac{\partial {g}_r}{\partial {n}_3}{w}^{\prime}\delta t,\kern2em {u}^{\prime }<0.\end{array}\right. $$
where gl and gr are the equilibrium distribution functions at the left and the right sides of the interface, respectively. Note that in Eq. (28) the equilibrium distribution functions at the two sides of the interface are not necessarily the same, which means that a possible discontinuity is taken into account in this form. By substituting Eq. (28) into Eq. (27), we have
$$ {\displaystyle \begin{array}{l}f\left(0,0,0,t+\delta t\right)=g\left(0,0,0,t+\delta t\right)\\ {}\kern2em -\frac{\tau }{\delta t}\left[g\left(0,0,0,t+\delta t\right)-{g}_lH\left({u}^{\prime}\right)-{g}_r\left(1-H\left({u}^{\prime}\right)\right)\right]\\ {}\kern2em -\tau \left[\left(\frac{\partial {u}^{\prime }{g}_l}{\partial {n}_1}+\frac{\partial {v}^{\prime }{g}_l}{\partial {n}_2}+\frac{\partial {w}^{\prime }{g}_l}{\partial {n}_3}\right)H\left({u}^{\prime}\right)+\left(\frac{\partial {u}^{\prime }{g}_r}{\partial {n}_1}+\frac{\partial {v}^{\prime }{g}_r}{\partial {n}_2}+\frac{\partial {w}^{\prime }{g}_r}{\partial {n}_3}\right)\left(1-H\left({u}^{\prime}\right)\right)\right],\end{array}} $$
where H(u′) is the Heaviside function defined as
$$ H\left({u}^{\prime}\right)=\left\{\begin{array}{c}0,\kern2.5em {u}^{\prime }<0,\\ {}1,\kern2.5em {u}^{\prime}\ge 0.\end{array}\right. $$
Equation (29) shows that the distribution function at the interface is fully determined once the equilibrium distribution functions at the cell interface and its surrounding points are known.
Evaluation of conservative variables W∗ at cell interface
It is known that the non-equilibrium distribution has no influence on the computation of conservative variables, and thus Eq. (14) can be adopted to calculate the conservative variables W∗ at local interface
$$ {\mathbf{W}}^{\ast }={\left(\rho, \kern0.5em \rho {U}^{\prime },\kern0.5em \rho {V}^{\prime },\kern0.5em \rho {W}^{\prime },\kern0.5em \rho E\right)}^T=\int g{\boldsymbol{\upvarphi}}_{\alpha}^{\ast }d\Xi . $$
According to the compatibility condition (see Eq. (13)), by substituting Eq. (27) and Eq. (28) into Eq. (30), we have
$$ {\displaystyle \begin{array}{c}{\mathbf{W}}^{\ast }=\int {\boldsymbol{\upvarphi}}_{\alpha}^{\ast }g\left(0,0,0,t+\delta t\right)d\Xi =\int {\boldsymbol{\upvarphi}}_{\alpha}^{\ast }g\left(-{u}^{\prime}\delta t,-{v}^{\prime}\delta t,-{w}^{\prime}\delta t,t\right)d\Xi \\ {}\kern1em =\int {\int}_{u^{\prime }>0}{\boldsymbol{\upvarphi}}_{\alpha}^{\ast}\left({g}_l-\frac{\partial {g}_l}{\partial {n}_1}{u}^{\prime}\delta t-\frac{\partial {g}_l}{\partial {n}_2}{v}^{\prime}\delta t-\frac{\partial {g}_l}{\partial {n}_3}{w}^{\prime}\delta t\right)d\Xi \\ {}+\int {\int}_{u^{\prime }<0}{\boldsymbol{\upvarphi}}_{\alpha}^{\ast}\left({g}_r-\frac{\partial {g}_r}{\partial {n}_1}{u}^{\prime}\delta t-\frac{\partial {g}_r}{\partial {n}_2}{v}^{\prime}\delta t-\frac{\partial {g}_r}{\partial {n}_3}{w}^{\prime}\delta t\right)d\Xi .\end{array}} $$
The above equation shows that the conservative variables at cell interface can be obtained by equilibrium distribution function of the surrounding points. By taking the limit δt → 0 [24], the conservative variables at cell interface can be calculated by
$$ {\mathbf{W}}^{\ast }=\int {\int}_{u^{\prime }>0}{\boldsymbol{\upvarphi}}_{\alpha}^{\ast }{g}_ld\Xi +\int {\int}_{u^{\prime }<0}{\boldsymbol{\upvarphi}}_{\alpha}^{\ast }{g}_rd\Xi . $$
The above equation means that the conservative variables at cell interface are simply computed by the reconstructed variables of left and right sides. With parameters defined in the Appendix, the conservative variables W∗ at cell interface are given by
$$ \rho =\left({\rho}_l{a}_l+{\rho}_r{a}_r\right), $$
$$ \rho {U}^{\prime }=\left({\rho}_l{b}_l+{\rho}_r{b}_r\right), $$
$$ \rho {V}^{\prime }=\left({\rho}_l{V_l}^{\prime }{a}_l+{\rho}_r{V_r}^{\prime }{a}_r\right), $$
$$ \rho {W}^{\prime }=\left({\rho}_l{W_l}^{\prime }{a}_l+{\rho}_r{W_r}^{\prime }{a}_r\right), $$
$$ {\displaystyle \begin{array}{l}\rho E=\frac{1}{2}{\rho}_l\left[{c}_l+\left({V^{\prime}}_l^2+{W^{\prime}}_l^2+\left(b-1\right){RT}_l\right){a}_l\right]\\ {}\kern2.5em +\frac{1}{2}{\rho}_r\left[{c}_r+\left({V^{\prime}}_r^2+{W^{\prime}}_r^2+\left(b-1\right){RT}_r\right){a}_r\right],\end{array}} $$
where "·l" and "·r" ("·" stands for any variable) denote the variables at the left and the right sides of interface, respectively.
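The coefficients a, b and c (and, later, d and e) appearing in Eqs. (33)–(37) are defined in the paper's Appendix, which is not reproduced here. A plausible reading, consistent with the standard half-moments of the Maxwellian used in gas-kinetic schemes, is a = ⟨u′⁰⟩, b = ⟨u′¹⟩ and c = ⟨u′²⟩, taken over u′ > 0 for the left state and over u′ < 0 for the right state. Under this assumption, Eqs. (33)–(37) can be sketched as follows; the variable names are chosen for this sketch only and are not the paper's notation.

import math

def half_moments(Un, lam, side):
    """Zeroth/first/second half-moments of a unit-density Maxwellian in the
    interface-normal direction (ASSUMED definitions of a, b, c)."""
    s = 1.0 if side == "left" else -1.0          # left state uses u' > 0, right state u' < 0
    a = 0.5 * math.erfc(-s * math.sqrt(lam) * Un)
    b = Un * a + s * 0.5 * math.exp(-lam * Un * Un) / math.sqrt(math.pi * lam)
    c = Un * b + a / (2.0 * lam)
    return a, b, c

def interface_state(left, right, K=2):
    """Sketch of Eqs. (33)-(37); left/right are dicts with rho, Un, Vt, Wt, lam."""
    b_dof = K + 3
    al, bl, cl = half_moments(left["Un"], left["lam"], "left")
    ar, br, cr = half_moments(right["Un"], right["lam"], "right")
    rho = left["rho"] * al + right["rho"] * ar                                    # Eq. (33)
    rhoU = left["rho"] * bl + right["rho"] * br                                   # Eq. (34)
    rhoV = left["rho"] * left["Vt"] * al + right["rho"] * right["Vt"] * ar        # Eq. (35)
    rhoW = left["rho"] * left["Wt"] * al + right["rho"] * right["Wt"] * ar        # Eq. (36)
    RTl, RTr = 1.0 / (2.0 * left["lam"]), 1.0 / (2.0 * right["lam"])
    rhoE = 0.5 * left["rho"] * (cl + (left["Vt"] ** 2 + left["Wt"] ** 2 + (b_dof - 1) * RTl) * al) \
         + 0.5 * right["rho"] * (cr + (right["Vt"] ** 2 + right["Wt"] ** 2 + (b_dof - 1) * RTr) * ar)  # Eq. (37)
    return rho, rhoU, rhoV, rhoW, rhoE

# example with made-up left/right states
L = dict(rho=1.0, Un=0.3, Vt=0.1, Wt=0.0, lam=1.0)
R = dict(rho=0.9, Un=0.2, Vt=0.0, Wt=0.1, lam=1.2)
print(interface_state(L, R))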
Evaluation of numerical flux \( {\mathbf{F}}_n^{\ast } \) at cell interface
As soon as the conservative variables at local interface W∗ are obtained, the equilibrium distribution function g(0, 0, 0, t + δt) can be known by Eq. (2). Then the numerical flux across the cell interface can be calculated via Eq. (29)
$$ {\displaystyle \begin{array}{l}{\mathbf{F}}_n^{\ast }=\int {u}^{\prime }{\boldsymbol{\upvarphi}}_{\alpha}^{\ast }f\left(0,0,0,t+\delta t\right)d\Xi \\ {}\kern2.5em =\int {u}^{\prime }{\boldsymbol{\upvarphi}}_{\alpha}^{\ast }g\left(0,0,0,t+\delta t\right)d\Xi -\frac{\tau }{\delta t}\left[\int {u}^{\prime }{\boldsymbol{\upvarphi}}_{\alpha}^{\ast }g\left(0,0,0,t+\delta t\right)d\Xi \right.\\ {}\kern2.5em -\int {\int}_{u^{\prime }>0}{u}^{\prime }{\boldsymbol{\upvarphi}}_{\alpha}^{\ast }{g}_ld\Xi -\left.\int {\int}_{u^{\prime }<0}{u}^{\prime }{\boldsymbol{\upvarphi}}_{\alpha}^{\ast }{g}_rd\Xi \right]\\ {}\kern2.5em -\tau \left[\frac{\partial }{\partial {n}_1}\int \left({\int}_{u^{\prime }>0}{u^{\prime}}^2{\boldsymbol{\upvarphi}}_{\alpha}^{\ast }{g}_l+{\int}_{u^{\prime }<0}{u^{\prime}}^2{\boldsymbol{\upvarphi}}_{\alpha}^{\ast }{g}_r\right)d\Xi \right.\\ {}\kern4.5em +\frac{\partial }{\partial {n}_2}\int \left({\int}_{u^{\prime }>0}{u}^{\prime }{v}^{\prime }{\boldsymbol{\upvarphi}}_{\alpha}^{\ast }{g}_l+{\int}_{u<0}{u}^{\prime }{v}^{\prime }{\boldsymbol{\upvarphi}}_{\alpha}^{\ast }{g}_r\right)d\Xi \\ {}\kern4.5em +\left.\frac{\partial }{\partial {n}_3}\int \left({\int}_{u^{\prime }>0}{u}^{\prime }{w}^{\prime }{\boldsymbol{\upvarphi}}_{\alpha}^{\ast }{g}_l+{\int}_{u<0}{u}^{\prime }{w}^{\prime }{\boldsymbol{\upvarphi}}_{\alpha}^{\ast }{g}_r\right)d\Xi \right].\end{array}} $$
Note that g(0, 0, 0, t + δt) is the equilibrium distribution function at the interface and time level t + δt, and gl, gr are the distribution functions at the left and the right sides of interface and the time level t. By taking the limit δt → 0, we have
$$ {\displaystyle \begin{array}{l}-\frac{\tau }{\delta t}\left[\int {u}^{\prime }{\boldsymbol{\upvarphi}}_{\alpha}^{\ast }g\left(0,0,0,t+\delta t\right)d\Xi \right.-\int {\int}_{u^{\prime }>0}{u}^{\prime }{\boldsymbol{\upvarphi}}_{\alpha}^{\ast }{g}_ld\Xi -\left.\int {\int}_{u^{\prime }<0}{u}^{\prime }{\boldsymbol{\upvarphi}}_{\alpha}^{\ast }{g}_rd\Xi \right]\\ {}=-\tau \int {u}^{\prime }{\boldsymbol{\upvarphi}}_{\alpha}^{\ast}\frac{\partial g\left(0,0,0,t\right)}{\partial t}d\Xi .\end{array}} $$
According to the work of Xu [24], ∂g/∂t can be expanded by
$$ \frac{\partial g\left(0,0,0,t\right)}{\partial t}=g\left(0,0,0,t\right)\left({A}_1+{A}_2{u}^{\prime }+{A}_3{v}^{\prime }+{A}_4{w}^{\prime }+{A}_5\varepsilon \right), $$
where A1, A2, A3, A4 and A5 are the derivatives of macroscopic variables with respect to physical space, which will be determined from the compatibility condition, \( \varepsilon =\frac{1}{2}\left({u^{\prime}}^2+{v^{\prime}}^2+{w^{\prime}}^2+{\xi}^2\right) \). Thus, the flux expression in Eq. (38) can be written as
$$ {\displaystyle \begin{array}{l}{\mathbf{F}}_n^{\ast }=\int {u}^{\prime }{\boldsymbol{\upvarphi}}_{\alpha}^{\ast }g\left(0,0,0,t\right)d\Xi \\ {}\kern2.5em -\tau \int {u}^{\prime }{\boldsymbol{\upvarphi}}_{\alpha}^{\ast }g\left(0,0,0,t\right)\left({A}_1+{A}_2{u}^{\prime }+{A}_3{v}^{\prime }+{A}_4{w}^{\prime }+{A}_5\varepsilon \right)d\Xi \\ {}\kern2.5em -\tau \left[\frac{\partial }{\partial {n}_1}\int \left({\int}_{u^{\prime }>0}{u^{\prime}}^2{\boldsymbol{\upvarphi}}_{\alpha}^{\ast }{g}_l+{\int}_{u^{\prime }<0}{u^{\prime}}^2{\boldsymbol{\upvarphi}}_{\alpha}^{\ast }{g}_r\right)d\Xi \right.\\ {}\kern4.5em +\frac{\partial }{\partial {n}_2}\int \left({\int}_{u^{\prime }>0}{u}^{\prime }{v}^{\prime }{\boldsymbol{\upvarphi}}_{\alpha}^{\ast }{g}_l+{\int}_{u<0}{u}^{\prime }{v}^{\prime }{\boldsymbol{\upvarphi}}_{\alpha}^{\ast }{g}_r\right)d\Xi \\ {}\kern4.5em +\left.\frac{\partial }{\partial {n}_3}\int \left({\int}_{u^{\prime }>0}{u}^{\prime }{w}^{\prime }{\boldsymbol{\upvarphi}}_{\alpha}^{\ast }{g}_l+{\int}_{u<0}{u}^{\prime }{w}^{\prime }{\boldsymbol{\upvarphi}}_{\alpha}^{\ast }{g}_r\right)d\Xi \right].\end{array}} $$
Note that δt → 0 has been applied in Eq. (41). In the above equation, the only undetermined variables are the coefficients A1, A2, A3, A4 and A5.
Substituting Eq. (29) into Eq. (11) and adopting the compatibility condition, we have
$$ {\displaystyle \begin{array}{l}\frac{1}{\delta t}\left[\int {\boldsymbol{\upvarphi}}_{\alpha}^{\ast }g\left(0,0,0,t+\delta t\right)d\Xi -\int {\int}_{u^{\prime }>0}{\boldsymbol{\upvarphi}}_{\alpha}^{\ast }{g}_ld\Xi -\int {\int}_{u^{\prime }<0}{\boldsymbol{\upvarphi}}_{\alpha}^{\ast }{g}_rd\Xi \right]\\ {}=-\left[\int {\int}_{u^{\prime }>0}{\boldsymbol{\upvarphi}}_{\alpha}^{\ast}\left(\frac{\partial {g}_l}{\partial {n}_1}{u}^{\prime }+\frac{\partial {g}_l}{\partial {n}_2}{v}^{\prime }+\frac{\partial {g}_l}{\partial {n}_3}{w}^{\prime}\right)d\Xi \right.\\ {}\kern1.5em +\left.\int {\int}_{u^{\prime }<0}{\boldsymbol{\upvarphi}}_{\alpha}^{\ast}\left(\frac{\partial {g}_r}{\partial {n}_1}{u}^{\prime }+\frac{\partial {g}_r}{\partial {n}_2}{v}^{\prime }+\frac{\partial {g}_r}{\partial {n}_3}{w}^{\prime}\right)d\Xi \right].\end{array}} $$
Using Eqs. (39)–(40), the above equation can be written as
$$ {\displaystyle \begin{array}{l}\int {\boldsymbol{\upvarphi}}_{\alpha}^{\ast }g\left(0,0,0,t\right)\left({A}_1+{A}_2{u}^{\prime }+{A}_3{v}^{\prime }+{A}_4{w}^{\prime }+{A}_5\varepsilon \right)d\Xi \\ {}=-\left[\frac{\partial }{\partial {n}_1}\int \left({\int}_{u^{\prime }>0}{u}^{\prime }{\boldsymbol{\upvarphi}}_{\alpha}^{\ast }{g}_l+{\int}_{u^{\prime }<0}{u}^{\prime }{\boldsymbol{\upvarphi}}_{\alpha}^{\ast }{g}_r\right)d\Xi \right.+\frac{\partial }{\partial {n}_2}\int \left({\int}_{u^{\prime }>0}{v}^{\prime }{\boldsymbol{\upvarphi}}_{\alpha}^{\ast }{g}_l+{\int}_{u^{\prime }<0}{v}^{\prime }{\boldsymbol{\upvarphi}}_{\alpha}^{\ast }{g}_r\right)d\Xi \\ {}\kern2em +\left.\frac{\partial }{\partial {n}_3}\int \left({\int}_{u^{\prime }>0}{w}^{\prime }{\boldsymbol{\upvarphi}}_{\alpha}^{\ast }{g}_l+{\int}_{u^{\prime }<0}{w}^{\prime }{\boldsymbol{\upvarphi}}_{\alpha}^{\ast }{g}_r\right)d\Xi \right].\end{array}} $$
Defining
$$ {\displaystyle \begin{array}{c}\frac{\partial }{\partial {n}_1}\int \left({\int}_{u^{\prime }>0}{u}^{\prime }{\boldsymbol{\upvarphi}}_{\alpha}^{\ast }{g}_l+{\int}_{u^{\prime }<0}{u}^{\prime }{\boldsymbol{\upvarphi}}_{\alpha}^{\ast }{g}_r\right)d\Xi +\frac{\partial }{\partial {n}_2}\int \left({\int}_{u^{\prime }>0}{v}^{\prime }{\boldsymbol{\upvarphi}}_{\alpha}^{\ast }{g}_l+{\int}_{u^{\prime }<0}{v}^{\prime }{\boldsymbol{\upvarphi}}_{\alpha}^{\ast }{g}_r\right)d\Xi \\ {}+\frac{\partial }{\partial {n}_3}\int \left({\int}_{u^{\prime }>0}{w}^{\prime }{\boldsymbol{\upvarphi}}_{\alpha}^{\ast }{g}_l+{\int}_{u^{\prime }<0}{w}^{\prime }{\boldsymbol{\upvarphi}}_{\alpha}^{\ast }{g}_r\right)d\Xi \end{array}}=\left(\begin{array}{c}{G}_1\\ {}{G}_2\\ {}{G}_3\\ {}{G}_4\\ {}{G}_5\end{array}\right). $$
The explicit formulations of G1 to G5 are given in the Appendix. After a similar derivation process to the work of Xu [24], the coefficients A1, A2, A3, A4 and A5 can be determined by
$$ {A}_5=-\frac{8{\lambda}^2}{\left(K+3\right)\rho}\left[{G}_5-{U}^{\prime }{G}_2-{V}^{\prime }{G}_3-{W}^{\prime }{G}_4-\left({\Re}_1-{U^{\prime}}^2-{V^{\prime}}^2-{W^{\prime}}^2\right){G}_1\right], $$
$$ {A}_4=-\frac{2\lambda }{\rho}\left({G}_4-{W}^{\prime }{G}_1\right)-{W}^{\prime }{A}_5, $$
$$ {A}_3=-\frac{2\lambda }{\rho}\left({G}_3-{V}^{\prime }{G}_1\right)-{V}^{\prime }{A}_5, $$
$$ {A}_2=-\frac{2\lambda }{\rho}\left({G}_2-{U}^{\prime }{G}_1\right)-{U}^{\prime }{A}_5, $$
$$ {A}_1=-\frac{1}{\rho }{G}_1-{U}^{\prime }{A}_2-{V}^{\prime }{A}_3-{W}^{\prime }{A}_4-{\Re}_1{A}_5, $$
$$ {\Re}_1=\frac{1}{2}\left({U^{\prime}}^2+{V^{\prime}}^2+{W^{\prime}}^2+\frac{K+3}{2\lambda}\right). $$
Once the above coefficients are obtained, the numerical flux \( {\mathbf{F}}_n^{\ast } \) across the interface can be calculated via Eq. (41). Similar to the conservative variables W∗, the explicit expressions for numerical flux \( {\mathbf{F}}_n^{\ast } \) can also be given as
$$ {F}_1^{\ast }=\rho {U}^{\prime }, $$
$$ {\displaystyle \begin{array}{l}{F}_2^{\ast }=\left(\rho {U^{\prime}}^2+p\right)-\tau \rho \left[{A}_1\left\langle {u^{\prime}}^2\right\rangle +{A}_2\left\langle {u^{\prime}}^3\right\rangle +{A}_3\left\langle {u^{\prime}}^2\right\rangle \left\langle {v^{\prime}}^1\right\rangle +{A}_4\left\langle {u^{\prime}}^2\right\rangle \left\langle {w^{\prime}}^1\right\rangle \right.\\ {}\kern3.5em +\left.\frac{1}{2}{A}_5\left(\left\langle {u^{\prime}}^4\right\rangle +\left\langle {u^{\prime}}^2\right\rangle \left\langle {v^{\prime}}^2\right\rangle +\left\langle {u^{\prime}}^2\right\rangle \left\langle {w^{\prime}}^2\right\rangle +\left\langle {u^{\prime}}^2\right\rangle \left\langle {\xi}^2\right\rangle \right)\right]\\ {}\kern3.5em -\tau \left[\frac{\partial \left({\rho}_l{d}_l+{\rho}_r{d}_r\right)}{\partial {n}_1}+\frac{\partial \left({\rho}_l{V}_l^{\prime }{c}_l+{\rho}_r{V}_r^{\prime }{c}_r\right)}{\partial {n}_2}+\frac{\partial \left({\rho}_l{W}_l^{\prime }{c}_l+{\rho}_r{W}_r^{\prime }{c}_r\right)}{\partial {n}_3}\right],\end{array}} $$
$$ {\displaystyle \begin{array}{l}{F}_3^{\ast }=\rho {U}^{\prime }{V}^{\prime }-\tau \rho \left[{A}_1\left\langle {u^{\prime}}^1\right\rangle \left\langle {v^{\prime}}^1\right\rangle \right.+{A}_2\left\langle {u^{\prime}}^2\right\rangle \left\langle {v^{\prime}}^1\right\rangle +{A}_3\left\langle {u^{\prime}}^1\right\rangle \left\langle {v^{\prime}}^2\right\rangle +{A}_4\left\langle {u^{\prime}}^1\right\rangle \left\langle {v^{\prime}}^1\right\rangle \left\langle {w^{\prime}}^1\right\rangle \\ {}\kern4em +\frac{1}{2}\left.{A}_5\left(\left\langle {u^{\prime}}^3\right\rangle \left\langle {v^{\prime}}^1\right\rangle +\left\langle {u^{\prime}}^1\right\rangle \left\langle {v^{\prime}}^3\right\rangle +\left\langle {u^{\prime}}^1\right\rangle \left\langle {v^{\prime}}^1\right\rangle \left\langle {w^{\prime}}^2\right\rangle +\left\langle {u^{\prime}}^1\right\rangle \left\langle {v^{\prime}}^1\right\rangle \left\langle {\xi}^2\right\rangle \right)\right]\\ {}\kern4em -\tau \left[\frac{\partial \left({\rho}_l{V}_l^{\prime }{c}_l+{\rho}_r{V}_r^{\prime }{c}_r\right)}{\partial {n}_1}+\frac{\partial \left[\left({\rho}_l{V^{\prime}}_l^2+{p}_l\right){b}_l+\left({\rho}_r{V^{\prime}}_r^2+{p}_r\right){b}_r\right]}{\partial {n}_2}\right.\\ {}\kern6.5em \left.+\frac{\partial \left({\rho}_l{V_l}^{\prime }{W_l}^{\prime }{b}_l+{\rho}_r{V_r}^{\prime }{W_r}^{\prime }{b}_r\right)}{\partial {n}_3}\right],\end{array}} $$
$$ {\displaystyle \begin{array}{l}{F}_4^{\ast }=\rho {U}^{\prime }{W}^{\prime }-\tau \rho \left[{A}_1\left\langle {u^{\prime}}^1\right\rangle \left\langle {w^{\prime}}^1\right\rangle \right.+{A}_2\left\langle {u^{\prime}}^2\right\rangle \left\langle {w^{\prime}}^1\right\rangle +{A}_3\left\langle {u^{\prime}}^1\right\rangle \left\langle {v^{\prime}}^1\right\rangle \left\langle {w^{\prime}}^1\right\rangle +{A}_4\left\langle {u^{\prime}}^1\right\rangle \left\langle {w^{\prime}}^2\right\rangle \\ {}\kern4em +\frac{1}{2}\left.{A}_5\left(\left\langle {u^{\prime}}^3\right\rangle \left\langle {w^{\prime}}^1\right\rangle +\left\langle {u^{\prime}}^1\right\rangle \left\langle {v^{\prime}}^2\right\rangle \left\langle {w^{\prime}}^1\right\rangle +\left\langle {u^{\prime}}^1\right\rangle \left\langle {w^{\prime}}^3\right\rangle +\left\langle {u^{\prime}}^1\right\rangle \left\langle {w^{\prime}}^1\right\rangle \left\langle {\xi}^2\right\rangle \right)\right]\\ {}\kern4em -\tau \left[\frac{\partial \left({\rho}_l{W}_l^{\prime }{c}_l+{\rho}_r{W}_r^{\prime }{c}_r\right)}{\partial {n}_1}\right.+\frac{\partial \left({\rho}_l{V_l}^{\prime }{W_l}^{\prime }{b}_l+{\rho}_r{V_r}^{\prime }{W_r}^{\prime }{b}_r\right)}{\partial {n}_2}\\ {}\kern6.5em \left.+\frac{\partial \left[\left({\rho}_l{W^{\prime}}_l^2+{p}_l\right){b}_l+\left({\rho}_r{W^{\prime}}_r^2+{p}_r\right){b}_r\right]}{\partial {n}_3}\right],\end{array}} $$
$$ {\displaystyle \begin{array}{l}{F}_5^{\ast }=\left(\rho E+p\right){U}^{\prime }-\frac{1}{2}\tau \rho \left\{{A}_1\left[\left\langle {u^{\prime}}^3\right\rangle +\left\langle {u^{\prime}}^1\right\rangle \left\langle {v^{\prime}}^2\right\rangle +\left\langle {u^{\prime}}^1\right\rangle \left\langle {w^{\prime}}^2\right\rangle +\left\langle {u^{\prime}}^1\right\rangle \left\langle {\xi}^2\right\rangle \right]\right.\\ {}\kern2.5em +{A}_2\left[\left\langle {u^{\prime}}^4\right\rangle +\left\langle {u^{\prime}}^2\right\rangle \left\langle {v^{\prime}}^2\right\rangle +\left\langle {u^{\prime}}^2\right\rangle \left\langle {w^{\prime}}^2\right\rangle +\left\langle {u^{\prime}}^2\right\rangle \left\langle {\xi}^2\right\rangle \right]\\ {}\kern2.5em +{A}_3\left[\left\langle {u^{\prime}}^3\right\rangle \left\langle {v^{\prime}}^1\right\rangle +\left\langle {u^{\prime}}^1\right\rangle \left\langle {v^{\prime}}^3\right\rangle +\left\langle {u^{\prime}}^1\right\rangle \left\langle {v^{\prime}}^1\right\rangle \left\langle {w^{\prime}}^2\right\rangle +\left\langle {u^{\prime}}^1\right\rangle \left\langle {v^{\prime}}^1\right\rangle \left\langle {\xi}^2\right\rangle \right]\\ {}\kern2.5em +{A}_4\left[\left\langle {u^{\prime}}^3\right\rangle \left\langle {w^{\prime}}^1\right\rangle +\left\langle {u^{\prime}}^1\right\rangle \left\langle {v^{\prime}}^2\right\rangle \left\langle {w^{\prime}}^1\right\rangle +\left\langle {u^{\prime}}^1\right\rangle \left\langle {w^{\prime}}^3\right\rangle +\left\langle {u^{\prime}}^1\right\rangle \left\langle {w^{\prime}}^1\right\rangle \left\langle {\xi}^2\right\rangle \right]\\ {}\kern2.5em +\frac{1}{2}{A}_5\left[\left\langle {u^{\prime}}^5\right\rangle +\left\langle {u^{\prime}}^1\right\rangle \left\langle {v^{\prime}}^4\right\rangle +\left\langle {u^{\prime}}^1\right\rangle \left\langle {w^{\prime}}^4\right\rangle +\left\langle {u^{\prime}}^1\right\rangle \left\langle {\xi}^4\right\rangle +2\left\langle {u^{\prime}}^3\right\rangle \left\langle {v^{\prime}}^2\right\rangle +2\left\langle {u^{\prime}}^3\right\rangle \left\langle {w^{\prime}}^2\right\rangle \right.\\ {}\kern6.5em +\left.\left.2\left\langle {u^{\prime}}^3\right\rangle \left\langle {\xi}^2\right\rangle +2\left\langle {u^{\prime}}^1\right\rangle \left\langle {v^{\prime}}^2\right\rangle \left\langle {w^{\prime}}^2\right\rangle +2\left\langle {u^{\prime}}^1\right\rangle \left\langle {v^{\prime}}^2\right\rangle \left\langle {\xi}^2\right\rangle +2\left\langle {u^{\prime}}^1\right\rangle \left\langle {w^{\prime}}^2\right\rangle \left\langle {\xi}^2\right\rangle \right]\right\}\\ {}\kern2em -\frac{1}{2}\tau \left\{\frac{\partial }{\partial {n}_1}\left\{{\rho}_l\left[{e}_l+\left({V^{\prime}}_l^2+{W^{\prime}}_l^2+\left(b-1\right){RT}_l\right){c}_l\right]\right.\right.\\ {}\kern8.5em +\left.{\rho}_r\left[{e}_r+\left({V^{\prime}}_r^2+{W^{\prime}}_r^2+\left(b-1\right){RT}_r\right){c}_r\right]\right\}\\ {}\kern5.5em +\frac{\partial }{\partial {n}_2}\left\{{\rho}_l{V}_l^{\prime}\left[{d}_l+\left({V^{\prime}}_l^2+{W^{\prime}}_l^2+\left(b+1\right){RT}_l\right){b}_l\right]\right.\\ {}\kern9em +\left.{\rho}_r{V}_r^{\prime}\left[{d}_r+\left({V^{\prime}}_r^2+{W^{\prime}}_r^2+\left(b+1\right){RT}_r\right){b}_r\right]\right\}\\ {}\kern5.5em +\frac{\partial }{\partial {n}_3}\left\{{\rho}_l{W}_l^{\prime}\left[{d}_l+\left({V^{\prime}}_l^2+{W^{\prime}}_l^2+\left(b+1\right){RT}_l\right){b}_l\right]\right.\\ {}\kern9em 
+\left.\left.{\rho}_r{W}_r^{\prime}\left[{d}_r+\left({V^{\prime}}_r^2+{W^{\prime}}_r^2+\left(b+1\right){RT}_r\right){b}_r\right]\right\}\right\}.\end{array}} $$
In Eqs. (51)–(55), the definitions of 〈⋅〉 and all coefficients can be found in the Appendix. They are expressed explicitly as the functions of conservative variables and their derivatives. In addition, since the structured mesh is used in this work, the spatial derivatives in Eqs. (51)–(55) can be approximated directly by the finite difference scheme.
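To illustrate how explicit these formulations are, the coefficients A1–A5 of Eqs. (45)–(49), together with the definition of ℜ1, can be coded directly once the interface state and the vector (G1, …, G5) of Eq. (44) are available. A minimal Python sketch follows; since the Appendix formulas for G1–G5 are not reproduced here, they are simply passed in as inputs, and the function name is chosen for this sketch only.

def time_derivative_coefficients(rho, Un, Vn, Wn, lam, G, K=2):
    """Sketch of Eqs. (45)-(49) and the definition of R1: recover A1,...,A5 of
    Eq. (40) from the interface state (rho, U', V', W', lambda) and (G1,...,G5)."""
    G1, G2, G3, G4, G5 = G
    R1 = 0.5 * (Un ** 2 + Vn ** 2 + Wn ** 2 + (K + 3) / (2.0 * lam))
    A5 = -8.0 * lam ** 2 / ((K + 3) * rho) * (
        G5 - Un * G2 - Vn * G3 - Wn * G4 - (R1 - Un ** 2 - Vn ** 2 - Wn ** 2) * G1)
    A4 = -2.0 * lam / rho * (G4 - Wn * G1) - Wn * A5
    A3 = -2.0 * lam / rho * (G3 - Vn * G1) - Vn * A5
    A2 = -2.0 * lam / rho * (G2 - Un * G1) - Un * A5
    A1 = -G1 / rho - Un * A2 - Vn * A3 - Wn * A4 - R1 * A5
    return A1, A2, A3, A4, A5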
From the above derivations, it can be seen that there are two major differences between the present solver and the gas-kinetic BGK scheme [24]. The first difference is that the local asymptotic solution to the Boltzmann equation (see Eq. (26)) is used to calculate the distribution function at the cell interface in the GKFS, while the local integral solution to the Boltzmann equation is utilized in the gas-kinetic BGK scheme. Another difference is that in GKFS the non-equilibrium distribution function is approximated by the difference of equilibrium distribution functions between the cell interface and its surrounding streaming points (see Eq. (27)), while in the gas-kinetic BGK scheme the non-equilibrium distribution function is included in the initial distribution function around the cell interface. These differences lead to the numerical flux reconstructed by the GKFS being time-independent (see Eq. (41)), while that of the gas-kinetic BGK scheme is time-dependent. Since δt → 0 is adopted, the GKFS actually reconstructs the numerical flux at the time level t, as shown in Eq. (41). In the gas-kinetic BGK scheme, the numerical flux can be viewed as the integral average over the time interval [t, t + Δt]. From this point of view, the temporal accuracy of the flux in GKFS is O(Δt), while that of the gas-kinetic BGK scheme is O(Δt2). But in fact, most conventional CFD schemes, such as the Roe scheme, the HLL scheme and AUSM, also calculate the numerical flux at the time level t, which indicates that the temporal accuracy of the flux may not be important for solving the Euler/Navier-Stokes equations in general cases. In terms of simplicity, fewer coefficients are involved in GKFS than in the gas-kinetic BGK scheme.
Determination of collision time τ
Theoretically, the collision time τ in Eq. (1) is proportional to the physical viscosity
$$ \tau =\mu /p, $$
where μ is the dynamic viscosity and p is the pressure. However, the numerical dissipation in Eq. (56) might not be sufficient to get a stable solution in cases such as strong shock wave. Therefore, the effective viscosity should be a combination of both physical and numerical ones. Xu [24] proposed a simple and effective treatment to incorporate the numerical viscosity into the gas-kinetic BGK scheme, which is also adopted in the present work:
$$ \tau =\frac{\mu }{p}+\frac{\mid {p}_l-{p}_r\mid }{p_l+{p}_r}\Delta t, $$
where Δt is the time step in the solution of the Navier-Stokes equations, and pl and pr are the pressures at the left and the right sides of the interface, respectively. The additional term in the above equation corresponds to numerical viscosity, which accounts for the pressure jump across a discontinuity with a thickness on the order of the cell size.
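The collision-time formula above is a one-line computation; a minimal sketch, assuming the pressure p is the one obtained from the interface state W∗, is:

def collision_time(mu, p_interface, p_l, p_r, dt):
    """Physical part mu/p plus a pressure-jump term that supplies extra
    numerical dissipation near discontinuities."""
    return mu / p_interface + abs(p_l - p_r) / (p_l + p_r) * dt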
Prandtl number fix
It is well known that the Prandtl number in the gas-kinetic BGK scheme corresponds to unity [24]. Several approaches are available to make the Prandtl number consistent with the real fluid. The BGK-Shakhov model [27] is one such attempt; it adjusts the heat flux in the relaxation term. In the Shakhov model, the Shakhov equilibrium distribution function is given by
$$ {g}^s=g\left[1+\left(1-\Pr \right)\mathbf{c}\cdot \mathbf{q}\left(\frac{c^2}{RT}-5\right)/\left(5 pRT\right)\right], $$
where g is the Maxwellian distribution function in Eq. (2), Pr is the Prandtl number, c = u − U is the peculiar velocity and q is the heat flux
$$ \mathbf{q}=\frac{1}{2}\int \left(\mathbf{u}-\mathbf{U}\right)\left({\left(u-U\right)}^2+{\left(v-V\right)}^2+{\left(w-W\right)}^2+{\xi}^2\right) fd\Xi . $$
It can be seen from Eq. (58) that the Prandtl number can easily be changed to any realistic value. However, considerable work would have to be devoted to extending the current GKFS to the above Shakhov model.
An alternative approach is to correct the heat flux directly, as presented in [24].
$$ {F}_5^{new}={F}_5+\left(\frac{1}{\Pr }-1\right)\mathbf{q}\cdot {\mathbf{n}}_1, $$
where F5 is the energy flux and q is the heat flux defined in Eq. (59). Since all the moments in Eq. (60) have already been obtained in the evaluation of the energy flux F5, the above Prandtl number fix requires little additional work. Therefore, Eq. (60) is employed to adjust the Prandtl number in the present work.
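A minimal sketch of the correction in Eq. (60), with the heat flux q of Eq. (59) and the unit normal n1 supplied as three-component sequences, is:

def prandtl_fix(F5, q, n1, Pr):
    """Energy-flux correction of Eq. (60): F5_new = F5 + (1/Pr - 1) * (q . n1)."""
    q_n = q[0] * n1[0] + q[1] * n1[1] + q[2] * n1[2]   # heat flux projected onto the unit normal
    return F5 + (1.0 / Pr - 1.0) * q_n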
Computational sequence
In this section, the basic solution procedure of the current 3D GKFS is summarized as follows:
(1) Calculate the derivatives of the conservative variables and reconstruct the initial conservative variables at the two sides of each cell interface.
(2) Compute the unit vector in the normal direction n1 and in the tangential directions n2 and n3 of the cell interface. Convert the velocities in the Cartesian coordinate system into the local coordinate system via Eq. (17).
(3) Calculate the conservative variables at the cell interface W∗ by using Eqs. (33)–(37).
(4) Calculate the vector (G1, G2, G3, G4, G5)T by using Eqs. (A.19)–(A.23) and further compute the coefficients A1, A2, A3, A4, A5 by Eqs. (45)–(49).
(5) Calculate the numerical flux \( {\mathbf{F}}_n^{\ast } \) by Eqs. (51)–(55).
(6) Compute the heat flux q via Eq. (59), and correct the energy flux by using Eq. (60).
(7) Convert the numerical flux in the local coordinate system \( {\mathbf{F}}_n^{\ast } \) to the global Cartesian coordinate system Fn by using Eq. (22).
(8) Once the fluxes at all cell interfaces are obtained, solve the ordinary differential equation (Eq. (10)) by using a time marching method. This step gives the conservative variables at cell centers at the new time step.
(9) Repeat steps (1) to (8) until the convergence criterion is satisfied.
Numerical results and discussion
To validate the proposed 3D GKFS for the simulation of incompressible and compressible viscous flows, the 3D lid-driven cavity flow, incompressible flow past a stationary sphere, flow around the ONERA M6 wing and the DLR-F6 wing-body configuration are considered. For the temporal discretization of Eq. (10), the four-stage Runge-Kutta method is applied in the cases of the 3D lid-driven cavity flow and the flow past a stationary sphere. In the compressible cases, the lower-upper symmetric Gauss-Seidel (LU-SGS) scheme [28] is adopted to accelerate the convergence, and Venkatakrishnan's limiter [29] is used to calculate the conservative variables at the two sides of the interface, WL and WR, in the reconstruction process. Specifically, WL and WR are computed by
$$ {\displaystyle \begin{array}{l}{\mathbf{W}}^L={\mathbf{W}}_c^L+{\Psi}_c^L\left({\mathbf{x}}_b-{\mathbf{x}}_c^L\right)\cdot \nabla {\mathbf{W}}_c^L,\\ {}{\mathbf{W}}^R={\mathbf{W}}_c^R+{\Psi}_c^R\left({\mathbf{x}}_b-{\mathbf{x}}_c^R\right)\cdot \nabla {\mathbf{W}}_c^R,\end{array}} $$
where \( {\mathbf{W}}_c^L \) and \( {\mathbf{W}}_c^R \) are the conservative flow variables at the centers of the left and the right cells, respectively; \( \nabla {\mathbf{W}}_c^L \) and \( \nabla {\mathbf{W}}_c^R \) are their corresponding first-order derivatives. \( {\mathbf{x}}_c^L \), \( {\mathbf{x}}_c^R \) and xb are the coordinates of the left cell center, the right cell center and the midpoint of the cell interface, respectively. \( {\Psi}_c^L \) and \( {\Psi}_c^R \) are the limiter functions utilized in the left and the right cells, respectively. In addition, all the simulations were done on a PC with a 3.10 GHz CPU.
Before applying the GKFS to various fluid flow problems, its accuracy is first validated by the advection of density perturbation for three-dimensional flows [30]. The initial condition of this problem is set as
$$ {\displaystyle \begin{array}{l}\rho \left(x,y,z\right)=1+0.2\sin \left(\pi \left(x+y+z\right)\right),\\ {}u\left(x,y,z\right)=1,v\left(x,y,z\right)=1,w\left(x,y,z\right)=1,p\left(x,y,z\right)=1.\end{array}} $$
The exact solutions under periodic boundary condition are
$$ {\displaystyle \begin{array}{l}\rho \left(x,y,z,t\right)=1+0.2\sin \left(\pi \left(x+y+z-3t\right)\right),\\ {}u\left(x,y,z,t\right)=1,v\left(x,y,z,t\right)=1,w\left(x,y,z,t\right)=1,p\left(x,y,z,t\right)=1.\end{array}} $$
Since this test case is an inviscid flow, the collision time τ is taken as
$$ \tau =\varepsilon \Delta t+\frac{\mid {p}_l-{p}_r\mid }{p_l+{p}_r}\Delta t, $$
where ε = 0.01 is used. Numerical tests are conducted on the computational domain [0, 2] × [0, 2] × [0, 2]. Uniform meshes with Δx = Δy = Δz = 2/N and N = 20, 40, 60, 80 are used. The L1 error of the density field at t = 2 is extracted and shown in Fig. 1. It can be seen that the GKFS achieves approximately second-order accuracy in space.
L1 error of the density field for the advection of density perturbation
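For reference, the observed order of accuracy implied by such an error plot can be computed as sketched below; the error values in the snippet are placeholders for illustration only and are not the data of Fig. 1.

```python
import math

def observed_order(errors, resolutions):
    """Observed spatial order between successive mesh refinements:
    p = ln(E_coarse / E_fine) / ln(N_fine / N_coarse)."""
    return [math.log(errors[i] / errors[i + 1]) / math.log(resolutions[i + 1] / resolutions[i])
            for i in range(len(errors) - 1)]

# Placeholder L1 errors for N = 20, 40, 60, 80 (illustrative only, not from Fig. 1)
N = [20, 40, 60, 80]
E = [4.0e-3, 1.0e-3, 4.5e-4, 2.5e-4]
print(observed_order(E, N))   # values close to 2 indicate second-order accuracy in space
```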
Case 1: 3D lid-driven cavity flow
The 3D lid-driven cavity flows in a cube are simulated to test the capability of the proposed explicit GKFS for simulating 3D incompressible viscous flows. A non-uniform mesh of 81 × 81 × 81 is used for the cases of Re = 100 and 400. The mesh points in the x-direction are generated by
$$ {\displaystyle \begin{array}{l}{x}_i=0.5\left(1-\eta {\tan}^{-1}\left(\left(1-{\kappa}_i\right)\cdot \tan \left(1/\eta \right)\right)\right),\kern5em i\le \left(i\max +1\right)/2,\\ {}{x}_i=1.0-{x}_{i\max +1-i},\kern20.5em else.\end{array}} $$
where κi = (i − 1)/((imax − 1)/2), i and imax are the mesh point index and the total number of mesh points in the x-direction, and η is the parameter controlling the mesh stretching, selected as 1.1 in this study. The mesh points in the y- and z-directions are generated in the same way.
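The stretching function above can be sketched as follows; this is an illustrative transcription of Eq. (63), not code from the paper.

```python
import numpy as np

def stretched_points(imax, eta=1.1):
    """One-dimensional stretched mesh points of Eq. (63) on [0, 1],
    clustered toward both walls (eta controls the stretching)."""
    x = np.zeros(imax)
    half = (imax + 1) // 2                     # mid-point index (1-based)
    for i in range(1, imax + 1):
        if i <= half:
            kappa = (i - 1) / ((imax - 1) / 2.0)
            x[i - 1] = 0.5 * (1.0 - eta * np.arctan((1.0 - kappa) * np.tan(1.0 / eta)))
        else:
            x[i - 1] = 1.0 - x[imax - i]       # mirror: x_i = 1 - x_{imax+1-i}
    return x

x = stretched_points(81)                        # 81 points per direction, as in Case 1
```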
In the current simulation, the fluid density is taken as ρ = 1.0 and the lid velocity is chosen as U∞ = 0.1. Initially, the density inside the cavity is constant and the flow is at rest. The lid on the top boundary moves along the x-direction. The no-slip wall condition is imposed at all boundaries. To quantitatively examine the performance of the 3D GKFS, the velocity profiles of the x-direction component u along the vertical centerline and the y-direction component v along the horizontal centerline for Re = 100 and 400 are plotted in Fig. 2. For comparison, the results of Shu et al. [31] and Wu and Shu [32] are also included in the figure. All the velocity profiles obtained by the current 3D GKFS agree very well with those of Shu et al. [31] and Wu and Shu [32], which demonstrates the capability of the present solver for the simulation of 3D incompressible flows on non-uniform grids. To further show the flow patterns of the 3D lid-driven cavity flow, the streamlines for Re = 100 and 400 on three orthogonal mid-planes located at x = 0.5, y = 0.5 and z = 0.5 are displayed in Fig. 3. The flow patterns on the mid-plane z = 0.5 in Fig. 3 show that the primary vortices gradually shift toward the center and the secondary vortices gradually move toward the bottom wall as the Reynolds number is increased. In this process, the strength of these vortices is also enhanced, as is also evident from the flow patterns on the other two mid-planes. All these observations match well with those of Shu et al. [31].
Comparison of velocity profiles on the plane of z = 0.5 for 3D lid-driven cavity flow. Upper: Re = 100; Lower: Re = 400
Streamlines on three mid-planes for Re = 100 (left) and Re = 400 (right). a mid-plane of z = 0.5. b mid-plane of y = 0.5. c mid-plane of x = 0.5
Case 2: incompressible flow past a stationary sphere
In this section, the 3D GKFS is applied to a benchmark case of incompressible flow past a stationary sphere. In this case, the flow is characterized by the Reynolds number defined by Re = ρU∞D/μ, where ρ and μ are the fluid density and dynamic viscosity, respectively. U∞ is the free stream velocity and D is the sphere diameter. To simulate this test case with a simple Cartesian mesh, the implicit boundary condition-enforced immersed boundary method [33, 34] is coupled with the present 3D GKFS. The computational domain is selected as a rectangular box of 30D × 20D × 20D in the x-, y- and z- directions. The sphere is initially placed at (10D, 10D, 10D), which is discretized by triangular elements with 1195 vertices. As shown in Fig. 4, a non-uniform Cartesian mesh with mesh size of 137 × 122 × 122 is used, in which a uniform mesh spacing of 0.02D is applied around the sphere. The no-slip condition on the curved boundary is imposed by correcting the velocity on the Cartesian mesh through the immersed boundary method [33, 34]. Here, laminar flows at low Reynolds numbers of 50, 100, 150, 200 and 250 are considered.
Partial view of computational mesh for flow past a sphere
At first, the drag coefficients at Re = 100, 200 and 250 are computed and compared quantitatively in Table 1 to verify the accuracy of the present solver. The numerical results of Johnson and Patel [35], Wu and Shu [32], Kim et al. [36] and Wang et al. [37] are also included in the table for comparison. It can be clearly observed that the present results match well with those in the literature.
Table 1 Comparison of drag coefficient for flow past a stationary sphere
Then, for the steady axisymmetric flow, the streamlines of the flow past a sphere at Re ≤ 200 are depicted in Fig. 5. Since the flow is axisymmetric, only the streamlines on the x-y plane of symmetry are given. As shown in the figure, a recirculation region appears behind the sphere and its length Ls increases with the Reynolds number. A quantitative comparison between the present results for Ls and those of Johnson and Patel [35] and Gilmanov et al. [38] is made in Fig. 6, and good agreement can be found. When the Reynolds number is increased to 250, a steady non-axisymmetric pattern appears, as can be seen in Fig. 7. In the figure, the streamlines on the x-z plane remain symmetric. However, there are two asymmetric vortices on the x-y plane, which implies that the symmetry is lost in this plane. These results are in good agreement with previous investigations [35, 38].
Streamlines at four different Reynolds numbers of 50, 100, 150 and 200 in the steady axisymmetric regime
Comparison of recirculation length Ls
Streamlines for flow past a stationary sphere at Re = 250 in the steady non-axisymmetric regime
Case 3: flow around ONERA M6 wing
The ONERA M6 test case is chosen to validate the present solver for the simulation of compressible viscous flows with complex geometry. For the numerical simulation, the free-stream Mach number is taken as M∞ = 0.8395, the mean-chord based Reynolds number is chosen as Re = 11.72 × 106 and the angle of attack is α = 3.06∘. The computational mesh from the NASA website [39] is adopted in this work, which has 4 blocks and 316,932 grid points. The mesh spacing of the first mesh point adjacent to the wing surface is 4.5 × 10−5. To take the turbulence effect into consideration, the Spalart-Allmaras turbulence model [40] is applied. Figure 8 shows the pressure contours on the wing surface obtained from the present solver, in which the "λ"-shaped shock wave on the upper surface is clearly captured. This phenomenon matches well with the result from the sphere function-based gas-kinetic scheme [41]. To further validate the present results, the pressure coefficient distributions at selected span-wise locations obtained from the present solver are displayed in Fig. 9. The numerical results of the WIND scheme [39] and the experimental results [42] are also included for comparison. As can be seen from the figure, the present results are close to those of the WIND scheme [39] and compare well with the experimental data [42]. Moreover, the pressure coefficient distributions at 65% and 80% span show that the present results are much closer to the experimental results [42] than those from the WIND scheme [39]. This demonstrates that the present solver captures the shock wave more precisely and controls the numerical dissipation well.
Pressure contours of flow around ONERA M6 wing
Comparison of pressure coefficient distributions at selected positions for ONERA M6 Wing
To further investigate the performance of the GKFS for the simulation of high-speed flows, the Mach number is changed to M∞ = 5 in this test while keeping the other parameters the same as in the above case. Figure 10 shows the pressure contours and the pressure coefficient distribution at 65% span. It can be seen that the GKFS captures the strong shock wave without any oscillation and the pressure coefficient distribution agrees well with that of the AUSM scheme [43].
Pressure contours and pressure coefficient distribution at 65% span for ONERA M6 Wing at M∞ = 5
Case 4: DLR-F6 wing-body configuration
The DLR-F6 wing-body configuration is a generic transport aircraft model from the 3rd AIAA CFD drag prediction workshop (DPW III) [44]. First, numerical simulations are conducted at a free-stream Mach number of M∞ = 0.75, a mean-chord based Reynolds number of Re = 3 × 106 and an angle of attack α = 0.49∘. The geometry and computational mesh from the NASA website [45] are utilized in the current work. Owing to the limitation of the computer's memory, only the coarse mesh with 26 blocks and 2,298,880 cells is used. Figure 11 shows the pressure contours of the DLR-F6 wing-body obtained by the present GKFS. The separation bubble at the intersection of the wing and the body is clearly recognized in Fig. 12, which is in line with the observations of Vassberg et al. [46]. To make a quantitative comparison, the pressure coefficient distributions at selected span-wise locations obtained by the present 3D GKFS are compared with the experimental results [47] and the numerical results of Vassberg et al. [46] and Yang et al. [48] in Fig. 13. It can be observed that the current results are close to those of Vassberg et al. [46] and Yang et al. [48], and all of them basically agree well with the experimental measurements [47].
Pressure contours of DLR-F6 wing/body
Separation bubble on the intersection of wing and body (left: Vassberg et al. [46]; right: present)
Comparison of pressure coefficient distributions of DLR-F6 wing/body at different locations
To further verify the force coefficients of the current solver for the DLR-F6 wing-body, another test case is simulated with the free-stream conditions of Mach number M∞ = 0.75, Reynolds number Re = 5 × 106 and angle of attack α = 0∘. Table 2 shows the present results for the force coefficients, including the lift coefficient Cl, pressure drag coefficient Cd, p, friction drag coefficient Cd, f, total drag coefficient Cd and moment coefficient CM. The results of the present solver are close to those of the LBFS [48] and essentially match the reference data of Vassberg et al. [46].
Table 2 Comparison of force coefficients for DLR-F6 wing-body configuration
This paper presents a three-dimensional GKFS for the simulation of incompressible and compressible viscous flows. The present work is an extension of our previous work [1], in which a new gas-kinetic scheme was presented to simulate two-dimensional viscous flows. In this work, the non-equilibrium distribution function is evaluated by the difference of equilibrium distribution functions at the cell interface and its surrounding points. As a result, the distribution function at the interface can be simply derived and the formulations of the conservative variables and fluxes at the cell interface can be given explicitly. Since the solution of the 3D continuous Boltzmann equation is reconstructed locally at the cell interface, the present scheme can be viewed as a truly 3D flux solver for viscous flows. To consider general 3D cases, a local coordinate transformation is made to transform the velocities in the global Cartesian coordinate system to the local normal and tangential directions at each cell interface. In this way, all the interfaces can be treated identically. Several numerical experiments are conducted to validate the proposed scheme, including the 3D lid-driven cavity flow, the incompressible flow past a stationary sphere, the compressible flow around the ONERA M6 wing and the DLR-F6 wing-body configuration. Numerical results show that the proposed flux solver provides accurate results for three-dimensional incompressible and compressible viscous flows.
All data generated or analyzed during this study are included in this published article.
2D:
Two-dimensional
KFVS:
Kinetic flux vector scheme
BGK:
Bhatnagar-Gross-Krook
FDS:
Flux difference splitting
GKFS:
Gas-kinetic flux solver
LU-SGS:
Lower-upper symmetric-Gauss-Seidel
Sun Y, Shu C, Teo CJ et al (2015) Explicit formulations of gas-kinetic flux solver for simulation of incompressible and compressible viscous flows. J Comput Phys 300:492–519
Xu K (1998) Gas-kinetic schemes for unsteady compressible flow simulations. VKI for Fluid Dynamics Lecture Series
Su M, Xu K, Ghidaoui MS (1999) Low-speed flow simulation by the gas-kinetic scheme. J Comput Phys 150(1):17–39
Xu K, Mao M, Tang L (2005) A multidimensional gas-kinetic BGK scheme for hypersonic viscous flow. J Comput Phys 203(2):405–421
Tian CT, Xu K, Chan KL et al (2007) A three-dimensional multidimensional gas-kinetic scheme for the Navier-Stokes equations under gravitational fields. J Comput Phys 226(2):2003–2027
Jiang J, Qian Y (2012) Implicit gas-kinetic BGK scheme with multigrid for 3D stationary transonic high-Reynolds number flows. Comput Fluids 66:21–28
Yang LM, Shu C, Wu J (2014) A simple distribution function-based gas-kinetic scheme for simulation of viscous incompressible and compressible flows. J Comput Phys 274:611–632
Li W, Kaneda M, Suga K (2014) An implicit gas kinetic BGK scheme for high temperature equilibrium gas flows on unstructured meshes. Comput Fluids 93:100–106
Li ZH, Zhang HX (2004) Study on gas kinetic unified algorithm for flows from rarefied transition to continuum. J Comput Phys 193(2):708–738
Xu K, Huang JC (2010) A unified gas-kinetic scheme for continuum and rarefied flows. J Comput Phys 229(20):7747–7764
Guo Z, Xu K, Wang R (2013) Discrete unified gas kinetic scheme for all Knudsen number flows: low-speed isothermal case. Phys Rev E 88(3):033305
Liu S, Yu P, Xu K et al (2014) Unified gas-kinetic scheme for diatomic molecular simulations in all flow regimes. J Comput Phys 259:96–113
Roe PL (1981) Approximate Riemann solvers, parameter vectors, and difference schemes. J Comput Phys 43(2):357–372
Engquist B, Osher S (1981) One-sided difference approximations for nonlinear conservation laws. Math Comput 36(154):321–351
Van Leer B (1997) Flux-vector splitting for the Euler equation. In: Hussaini MY, van Leer B, Van Rosendale J (eds) Upwind and high-resolution schemes. Springer, Heidelberg
Osher S, Chakravarthy S (1983) Upwind schemes and boundary conditions with applications to Euler equations in general geometries. J Comput Phys 50(3):447–481
Pullin DI (1980) Direct simulation methods for compressible inviscid ideal-gas flow. J Comput Phys 34(2):231–244
Deshpande SM (1986) A second-order accurate kinetic-theory-based method for inviscid compressible flows. NASA STI/Recon Technical Report N 87
Perthame B (1992) Second-order Boltzmann schemes for compressible Euler equations in one and two space dimensions. SIAM J Numer Anal 29(1):1–19
Mandal JC, Deshpande SM (1994) Kinetic flux vector splitting for Euler equations. Comput Fluids 23(2):447–478
Chou SY, Baganoff D (1997) Kinetic flux-vector splitting for the Navier-Stokes equations. J Comput Phys 130(2):217–230
Prendergast KH, Xu K (1993) Numerical hydrodynamics from gas-kinetic theory. J Comput Phys 109(1):53–66
Chae D, Kim C, Rho OH (2000) Development of an improved gas-kinetic BGK scheme for inviscid and viscous flows. J Comput Phys 158(1):1–27
Xu K (2001) A gas-kinetic BGK scheme for the Navier-Stokes equations and its connection with artificial dissipation and Godunov method. J Comput Phys 171(1):289–335
Xu K, Sun Q, Yu P (2010) Valid physical processes from numerical discontinuities in computational fluid dynamics. Int J Hypersonics 1(3):157–172
Bhatnagar PL, Gross EP, Krook M (1954) A model for collision processes in gases. I Small amplitude processes in charged and neutral one-component systems. Phys Rev 94(3):511–525
Shakhov EM (1968) Generalization of the Krook kinetic relaxation equation. Fluid Dyn 3(5):95–96
Yoon S, Jameson A (1988) Lower-upper symmetric-Gauss-Seidel method for the Euler and Navier-Stokes equations. AIAA J 26(9):1025–1026
Venkatakrishnan V (1995) Convergence to steady state solutions of the Euler equations on unstructured grids with limiters. J Comput Phys 118(1):120–130
Pan L, Xu K (2020) High-order gas-kinetic scheme with three-dimensional WENO reconstruction for the Euler and Navier-Stokes solutions. Comput Fluids 198:104401
Shu C, Niu XD, Chew YT (2003) Taylor series expansion and least squares-based lattice Boltzmann method: three-dimensional formulation and its applications. Int J Mod Phys C 14(07):925–944
Wu J, Shu C (2010) An improved immersed boundary-lattice Boltzmann method for simulating three-dimensional incompressible flows. J Comput Phys 229(13):5022–5042
Wu J, Shu C (2009) Implicit velocity correction-based immersed boundary-lattice Boltzmann method and its applications. J Comput Phys 228(6):1963–1979
Wang Y, Shu C, Teo CJ et al (2015) An immersed boundary-lattice Boltzmann flux solver and its applications to fluid-structure interaction problems. J Fluids Struct 54:440–465
Johnson TA, Patel VC (1999) Flow past a sphere up to a Reynolds number of 300. J Fluid Mech 378:19–70
Kim J, Kim D, Choi H (2001) An immersed-boundary finite-volume method for simulations of flow in complex geometries. J Comput Phys 171(1):132–150
Wang XY, Yeo KS, Chew CS et al (2008) A SVD-GFD scheme for computing 3D incompressible viscous fluid flows. Comput Fluids 37(6):733–746
Gilmanov A, Sotiropoulos F, Balaras E (2003) A general reconstruction algorithm for simulating flows with complex 3D immersed boundaries on Cartesian grids. J Comput Phys 191(2):660–669
Slater JW (2002) ONERA M6 Wing. http://www.grc.nasa.gov/WWW/wind/valid/m6wing/m6wing01/m6wing01.html. Accessed 30 Aug 2002
Spalart P, Allmaras S (1992) A one-equation turbulence model for aerodynamic flows. AIAA paper 92-0439
Yang LM, Shu C, Wu J (2015) A three-dimensional explicit sphere function-based gas-kinetic flux solver for simulation of inviscid compressible flows. J Comput Phys 295:322–339
Schmitt V, Charpin F (1979) Pressure distributions on the ONERA-M6-wing at transonic Mach numbers, experimental data base for computer program assessment. Report of the Fluid Dynamics Panel Working Group 04, AGARD AR 138, May 1979
Liou MS (2006) A sequel to AUSM, part II: AUSM+-up for all speeds. J Comput Phys 214:137–170
Frink NT (2006) 3rd AIAA CFD Drag Prediction Workshop. https://aiaa-dpw.larc.nasa.gov/Workshop3/workshop3.html. Accessed 29 Nov 2006
Morrison J (2006) https://dpw.larc.nasa.gov/DPW3/multiblock_Boeing_CGNS/F6wb_struc_boeing. Accessed 18 Jan 2006
Vassberg J, Tinoco E, Mani M et al (2007) Summary of the third AIAA CFD drag prediction workshop. Paper presented at the 45th AIAA aerospace sciences meeting and exhibit, 1 Jan 2007
Rumsey CL, Rivers SM, Morrison JH (2005) Study of CFD variation on transport configurations from the second drag-prediction workshop. Comput Fluids 34(7):785–816
Yang LM, Shu C, Wu J (2014) A hybrid lattice Boltzmann flux solver for simulation of 3D compressible viscous flows. Paper presented at the eighth international conference on computational fluid dynamics, Chengdu, China, 24-29 July 2014
This work is supported by National Natural Science Foundation of China (Grant Nos. 11772157 and 11832012).
Huawei Technologies Co., Ltd, Bantian, Longgang District, Shenzhen, 518129, China
Y. Sun
Department of Mechanical Engineering, National University of Singapore, 10 Kent Ridge Crescent, Singapore, 119260, Singapore
L. M. Yang, C. Shu & C. J. Teo
The contribution of the authors to this work is equivalent. All authors read and approved the final manuscript.
Correspondence to L. M. Yang.
Moments of Maxwellian Distribution Function
In the paper, some notations are adopted to simplify the formulations. In this appendix, the notations for the moments of the Maxwellian distribution function are introduced. First, the Maxwellian distribution function for 3D flows is given as (Eq. (2))
$$ g=\rho {\left(\frac{\lambda }{\pi}\right)}^{\frac{K+3}{2}}{e}^{-\lambda \left({\left(u-U\right)}^2+{\left(v-V\right)}^2+{\left(w-W\right)}^2+{\xi}^2\right)}. $$
Following the idea of [2], the notation for the moments of g is defined as
$$ \rho \left\langle \cdot \right\rangle =\int \left(\cdot \right) gdudvdwd\xi . $$
(A.1)
Then the general moment formulation becomes
$$ \left\langle {u}^n{v}^m{w}^l{\xi}^p\right\rangle =\left\langle {u}^n\right\rangle \left\langle {v}^m\right\rangle \left\langle {w}^l\right\rangle \left\langle {\xi}^p\right\rangle . $$
It should be noted that in Section 3, the conservative variables and numerical fluxes are derived at the local interface and the transformation of velocities is made at each cell interface. Therefore, the moments of the normal and tangential velocities (u′, v′, w′) are presented to be consistent with the formulations in the paper. As the integrations of the three components of the particle velocity (u′, v′, w′) are similar to each other, only the moments of u′ are presented here. When the velocity integral is taken from −∞ to +∞, the moments of u′n and ξn are
$$ \left\langle {u^{\prime}}^0\right\rangle =1, $$
$$ \left\langle {u^{\prime}}^1\right\rangle ={U}^{\prime }, $$
$$ \left\langle {u^{\prime}}^{(n)}\right\rangle ={U}^{\prime}\left\langle {u^{\prime}}^{\left(n-1\right)}\right\rangle +\frac{n-1}{2\lambda}\left\langle {u^{\prime}}^{\left(n-2\right)}\right\rangle, $$
$$ \left\langle {\xi}^0\right\rangle =1, $$
$$ \left\langle {\xi}^2\right\rangle =\frac{K}{2\lambda }, $$
$$ \left\langle {\xi}^4\right\rangle =\frac{K^2+2K}{4{\lambda}^2}. $$
When the moments of u′n are calculated in the half space, the exponential function and the complementary error function appear in the formulation. Denoting the integral from 0 to +∞ as 〈⋅〉>0 and the integral from −∞ to 0 as 〈⋅〉<0, the moments become
$$ {a}_l={\left\langle {u^{\prime}}^0\right\rangle}_{>0}=\frac{1}{2}\mathit{\operatorname{erfc}}\left(-\sqrt{\lambda_l}{U}_l^{\prime}\right), $$
$$ {b}_l={\left\langle {u^{\prime}}^1\right\rangle}_{>0}={U}_l^{\prime }{a}_l+\frac{1}{2}\frac{e^{-{\lambda}_l{U_l^{\prime}}^2}}{\sqrt{{\pi \lambda}_l}}, $$
(A.10)
$$ {c}_l={\left\langle {u^{\prime}}^2\right\rangle}_{>0}={U}_l^{\prime }{b}_l+\frac{1}{2{\lambda}_l}{a}_l, $$
$$ {d}_l={\left\langle {u^{\prime}}^3\right\rangle}_{>0}={U}_l^{\prime }{c}_l+\frac{1}{\lambda_l}{b}_l, $$
$$ {e}_l={\left\langle {u^{\prime}}^4\right\rangle}_{>0}={U}_l^{\prime }{d}_l+\frac{3}{2{\lambda}_l}{c}_l, $$
$$ {a}_r={\left\langle {u^{\prime}}^0\right\rangle}_{<0}=\frac{1}{2}\mathit{\operatorname{erfc}}\left(\sqrt{\lambda_r}{U}_r^{\prime}\right), $$
$$ {b}_r={\left\langle {u^{\prime}}^1\right\rangle}_{<0}={U}_r^{\prime }{a}_r-\frac{1}{2}\frac{e^{-{\lambda}_r{U_r^{\prime}}^2}}{\sqrt{{\pi \lambda}_r}}, $$
$$ {c}_r={\left\langle {u^{\prime}}^2\right\rangle}_{<0}={U}_r^{\prime }{b}_r+\frac{1}{2{\lambda}_r}{a}_r, $$
$$ {d}_r={\left\langle {u^{\prime}}^3\right\rangle}_{<0}={U}_r^{\prime }{c}_r+\frac{1}{\lambda_r}{b}_r, $$
$$ {e}_r={\left\langle {u^{\prime}}^4\right\rangle}_{<0}={U}_r^{\prime }{d}_r+\frac{3}{2{\lambda}_r}{c}_r. $$
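A compact transcription of the half-space moments above is sketched below; it is given only to illustrate the recursive structure, assumes scalar inputs, and is not code from the paper.

```python
import math

def half_moments(U, lam, positive_half=True):
    """Half-space moments of u'^0 ... u'^4 for a Maxwellian.

    positive_half=True  -> integral over u' > 0 (a_l, b_l, c_l, d_l, e_l)
    positive_half=False -> integral over u' < 0 (a_r, b_r, c_r, d_r, e_r)
    """
    expo = 0.5 * math.exp(-lam * U * U) / math.sqrt(math.pi * lam)
    if positive_half:
        a = 0.5 * math.erfc(-math.sqrt(lam) * U)
        b = U * a + expo
    else:
        a = 0.5 * math.erfc(math.sqrt(lam) * U)
        b = U * a - expo
    c = U * b + a / (2.0 * lam)
    d = U * c + b / lam
    e = U * d + 3.0 * c / (2.0 * lam)
    return a, b, c, d, e
```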
In order to compute the coefficients A1 to A5 via Eqs. (45)–(49), the values of G1 to G5 in Eq. (44) should be calculated in advance. The explicit formulations of G1 to G5 can be given as
$$ {G}_1=\frac{\partial \left({\rho}_l{b}_l+{\rho}_r{b}_r\right)}{\partial {n}_1}+\frac{\partial \left({\rho}_l{V}_l^{\prime }{a}_l+{\rho}_r{V}_r^{\prime }{a}_r\right)}{\partial {n}_2}+\frac{\partial \left({\rho}_l{W}_l^{\prime }{a}_l+{\rho}_r{W}_r^{\prime }{a}_r\right)}{\partial {n}_3}, $$
$$ {G}_2=\frac{\partial \left({\rho}_l{c}_l+{\rho}_r{c}_r\right)}{\partial {n}_1}+\frac{\partial \left({\rho}_l{V}_l^{\prime }{b}_l+{\rho}_r{V}_r^{\prime }{b}_r\right)}{\partial {n}_2}+\frac{\partial \left({\rho}_l{W}_l^{\prime }{b}_l+{\rho}_r{W}_r^{\prime }{b}_r\right)}{\partial {n}_3}, $$
$$ {\displaystyle \begin{array}{l}{G}_3=\frac{\partial \left({\rho}_l{V_l}^{\prime }{b}_l+{\rho}_r{V_r}^{\prime }{b}_r\right)}{\partial {n}_1}+\frac{\partial \left[\left({\rho}_l{V^{\prime}}_l^2+{p}_l\right){a}_l+\left({\rho}_r{V^{\prime}}_r^2+{p}_r\right){a}_r\right]}{\partial {n}_2}\\ {}\kern3.5em +\frac{\partial \left({\rho}_l{V_l}^{\prime }{W_l}^{\prime }{a}_l+{\rho}_r{V_r}^{\prime }{W_r}^{\prime }{a}_r\right)}{\partial {n}_3},\end{array}} $$
$$ {\displaystyle \begin{array}{l}{G}_4=\frac{\partial \left({\rho}_l{W_l}^{\prime }{b}_l+{\rho}_r{W_r}^{\prime }{b}_r\right)}{\partial {n}_1}+\frac{\partial \left({\rho}_l{V_l}^{\prime }{W_l}^{\prime }{a}_l+{\rho}_r{V_r}^{\prime }{W_r}^{\prime }{a}_r\right)}{\partial {n}_2}\\ {}\kern3.5em +\frac{\partial \left[\left({\rho}_l{W^{\prime}}_l^2+{p}_l\right){a}_l+\left({\rho}_r{W^{\prime}}_r^2+{p}_r\right){a}_r\right]}{\partial {n}_3},\end{array}} $$
$$ {\displaystyle \begin{array}{l}{G}_5=\frac{\partial }{\partial {n}_1}\Big\{{\rho}_l\left[{d}_l+\left({V^{\prime}}_l^2+{W^{\prime}}_l^2+\left(b-1\right){RT}_l\right){b}_l\right]\\ {}\kern5.5em +{\rho}_r\left[{d}_r+\left({V^{\prime}}_r^2+{W^{\prime}}_r^2+\left(b-1\right){RT}_r\right){b}_r\right]\Big\}\\ {}\kern2.5em +\frac{\partial }{\partial {n}_2}\Big\{{\rho}_l{V}_l^{\prime}\left[{c}_l+\left({V^{\prime}}_l^2+{W^{\prime}}_l^2+\left(b+1\right){RT}_l\right){a}_l\right]\\ {}\kern5.5em +{\rho}_r{V}_r^{\prime}\left[{c}_r+\left({V^{\prime}}_r^2+{W^{\prime}}_r^2+\left(b+1\right){RT}_r\right){a}_r\right]\Big\}\\ {}\kern2.5em +\frac{\partial }{\partial {n}_3}\Big\{{\rho}_l{W}_l^{\prime}\left[{c}_l+\left({V^{\prime}}_l^2+{W^{\prime}}_l^2+\left(b+1\right){RT}_l\right){a}_l\right]\\ {}\kern5.5em +{\rho}_r{W}_r^{\prime}\left[{c}_r+\left({V^{\prime}}_r^2+{W^{\prime}}_r^2+\left(b+1\right){RT}_r\right){a}_r\right]\Big\}.\end{array}} $$
Sun, Y., Yang, L.M., Shu, C. et al. A three-dimensional gas-kinetic flux solver for simulation of viscous flows with explicit formulations of conservative variables and numerical flux. Adv. Aerodyn. 2, 13 (2020). https://doi.org/10.1186/s42774-020-00039-6
3D flux solver
Gas-kinetic scheme
Viscous flow
Navier-Stokes equations
3D tolerance stack-up analysis with examples
In this post, 3D tolerance stack-up analysis will be explained with two examples. Both traditional "+/-" dimensional tolerance and geometrical tolerance (GD&T) are considered.
Wahyudin Syam
Oct 19, 2021 • 15 min read
In the previous post, tolerance stack-up analyses in 2D were presented in detail. However, the main disadvantage is that 2D tolerance analysis only predicts 2D in-plane variations and cannot predict rotational variations in 3D.
The complete MATLAB implementations for the 3D tolerance analysis in this post can be obtained from here.
3D tolerance stack-up analysis
This tolerance analysis requires a matrix multiplication method to calculate the nominal and variation propagation on each part affecting the key characteristic (KC) of an assembled product.
With the matrix multiplication method, all linear and rotational errors of the KC of an assembly can be estimated in 3D with high accuracy (close to real-world situations).
We will briefly discuss some fundamentals related to the matrix multiplication method. Readers who are familiar with matrix concepts can skip this section and read the next section directly.
Homogenous matrix
Before discussing 3D tolerance analysis with the matrix method, a basic understanding of matrices is needed. The matrix type used for 3D tolerance analysis is a homogeneous matrix of size $4\times 4$.
This homogeneous matrix, also called a roto-translation matrix, is used to represent roto-translation operations in 3D. With a homogeneous matrix, a roto-translation operation in 3D can be represented by a single matrix.
The format of a homogeneous matrix is:
where $c_{ij}$ is the $(i,j)$-th element of the matrix and $k$ is a constant scale factor applied to the matrix elements $c_{ij}$.
In general, all $c_{ij}$ and $k$ itself are divided by $k$ so that the scale value in the matrix becomes 1.
In 3D coordinate transformation calculations of a point P using a homogeneous matrix, the point P should also be represented in homogeneous coordinates, that is:
where an additional 1 is appended as the fourth element of the 3D position coordinate.
Roto-translational matrix
Before discussing roto-translation matrix, we will have a look at translation and rotation matrices individually.
A translation matrix Tr acting on a coordinate P is represented as:
where $t_{x}$ is a translation along the x-axis, $t_{y}$ is a translation along the y-axis and $t_{z}$ is a translation along the z-axis.
The rotation matrices Rx, Ry and Rz represent rotations about the x-, y- and z-axes, respectively. These rotation matrices are represented as:
where $\theta _{i}$ is the rotation angle about the i-axis.
A roto-translation matrix is a matrix that represents both the translation and rotation transformations of a point with respect to an axis of rotation.
This roto-translation matrix is used to represent the geometrical error variations of features on a part due to given or allocated feature tolerance values.
Very often, in the context of 3D tolerance analysis, a roto-translation matrix is called an error matrix.
Error matrix
An error matrix is a roto-translation matrix Tij whose elements represent the translational or rotational variations of a feature due to the tolerances allocated to that feature.
The error matrix Tij is represented as:
where Rot is a $3\times 3$ matrix representing the rotation error and Trans is a $3\times 1$ column vector representing the translation error. $d_{i}$ is the translation error component along the i-axis in the Trans column vector, and $d\theta _{i}$ is the rotation error component about the i-axis (in radians) in the Rot matrix.
One thing to understand is that a roto-translation matrix first performs the translation of a point P and then performs the rotation of the point P.
Note that the rotation errors in matrix Tij (the elements of Rot) do not contain $\sin$ and $\cos$ terms, because the error elements in matrix Tij generally have very small values.
Hence, for a very small angle $\theta$ ($<0.3\ rad$), the value of $\sin(\theta )$ is close to the value of $\theta$ itself, and the value of $\cos(\theta )$ is close to 1.
The explanation of the $\sin$ and $\cos$ approximations for very small angles follows from the Taylor expansions $\sin(\theta )=\theta -\theta ^{3}/6+\dots \approx \theta$ and $\cos(\theta )=1-\theta ^{2}/2+\dots \approx 1$.
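To make the error matrix concrete, a minimal NumPy sketch is given below. The sign convention of the small-angle rotation block and the example numbers are assumptions made here for illustration; the implementation supplied with this post is in MATLAB, not Python.

```python
import numpy as np

def error_matrix(dx, dy, dz, dtx, dty, dtz):
    """4x4 error (roto-translation) matrix with small-angle rotations.

    dx, dy, dz    : translational variations
    dtx, dty, dtz : rotational variations in radians (assumed small)
    """
    return np.array([
        [1.0,  -dtz,  dty, dx],
        [dtz,   1.0, -dtx, dy],
        [-dty,  dtx,  1.0, dz],
        [0.0,   0.0,  0.0, 1.0],
    ])

# Pi = Tij . Pj for a point given in homogeneous coordinates (illustrative values)
P_j = np.array([10.0, 0.0, 0.0, 1.0])
P_i = error_matrix(0.05, 0.0, 0.0, 0.0, 0.0, 0.001) @ P_j
```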
Coordinate transformation with matrix multiplication
It is important to understand the concept of matrix multiplication to transform coordinates. In 3D tolerance stack-up analysis, the symbol conventions representing a point P and its transformation are as follows:
$P_{i}$ represents a point $P$ with respect to $i$ coordinate system
$P_{j}$ represents a point P with respect to $j$ coordinate system
$\mathbf{T_{ij}}$ represents a coordinate transformation from $i$ to $j$ coordinate system
The transformation is modelled as:
$P_{i}=\mathbf{T_{ij}} \cdot P_{j}$
The above equation is interpreted as follows. A point $P$, with respect to $i$ coordinate system, is the result of a transformation of the point $P$, with respect to $j$ coordinate system, by using a transformation matrix $\mathbf{T_{ij}}$.
To reverse the operation, that is obtaining the point P with respect to $j$ coordinate system, $P_{j}$, the transformation is:
$P_{j}=\mathbf{T_{ij}^{-1}} \cdot P_{i}$
Where $\mathbf{T_{ij}^{-1}} =\mathbf{T_{ji}}$.
To ease the naming convention in variation transformation chains (discussed in the next section), the index of the matrix can be written as:
$\mathbf{T_{ii'}}$
Representing a transformation from a nominal point $i$ to a deviated point $i'$ on a feature due to dimensional and geometrical variations.
Figure 1 shows a pictorial representation of a coordinate transformation. From figure 1, an intuitive understanding can be obtained of the coordinate transformation that yields a point P with respect to coordinate system 1 from the point P with respect to coordinate system 2.
Figure 1: An illustration of a coordinate transformation from coordinate system 1 to coordinate system 2.
Nominal and variation transformation chains
Before performing 3D tolerance analysis, the model of nominal and variation transformation chains of an assembly should be obtained.
Figure 2 represents the nominal and variation tolerance chains of an assembly to be analysed. In figure 2 left, a nominal tolerance chain follows the nominal dimension of all features passed by the tolerance chain of an assembly.
Meanwhile, figure 2 right shows additional transformations added to the nominal chain due to variations on the features obtained by allocated tolerances.
Figure 2: Illustration of (left) nominal and (right) variation tolerance chain.
In figure 2 right, matrix $\mathbf{T_{12}}$ transforms the coordinate system 1 (on a feature 1) to the coordinate system 2 (on a feature 2) and matrix $\mathbf{T_{23}}$ transforms the coordinate system 2 (on a feature 2) to the coordinate system 3 (on a feature 3).
Because there are variations on features 2 and 3 (represented by the type and value of the tolerances assigned to these features), additional transformations describing these variations are introduced: $\mathbf{T_{22'}}$ due to variations on feature 2 and $\mathbf{T_{33'}}$ due to variations on feature 3.
Example 1: 3D tolerance stack-up analysis of two parts with GD&T
This example uses the same two-part assembly as in the previous example of 2D tolerance analysis (including the parts and assembly detailed drawings for the nominal dimensions and tolerances in this example).
Figure 3 shows the tolerance chain and the key characteristic (KC) of the assembly. The KC is the distance and orientation variation between points A and B. In figure 3, the tolerance chain is in 3D.
The tolerance chain should pass through the assembly features of the two parts; assembly features are the features that connect the two parts.
The selected tolerance chain should represent, as closely as possible, how the assembly will function.
Nominal transformation chain
From the tolerance chain (figure 3), the nominal transformation chain of the assembly is:
where each $\mathbf{T_{ij}}$ contains the elements $d_{x}, d_{y}, d_{z}, d\theta _{x}, d\theta _{y}, d\theta _{z}$ that represent the variation due to allocated tolerances, and $\mathbf{T_{1.12}}$ is the total variation from point 1 to point 12 in the tolerance chain.
Figure 3: The tolerance chain of the two-part assembly example.
The values for all $d_{x}, d_{y}, d_{z},d\theta _{x}, d\theta _{y}, d\theta _{z}$ elements on each transformation matrix $\mathbf{T_{ij}}$ are presented in table 1. In table 1, each row represents a transformation from point $k$ to point $k+1$ in the tolerance chain.
Table 1: Detailed calculations to determine the elements in matrix Tij involved in the nominal transformation chain (the detailed drawing can be seen from the previous post on 2D tolerance analysis).
After establishing the nominal transformation chain and the elements of the transformation matrices in the chain, the nominal distance between A and B can be calculated by multiplying all the $4\times 4$ matrices in the nominal transformation chain. The result of the multiplication is the matrix $\mathbf{T_{1.12}}$.
The resulting matrix $\mathbf{T_{1.12}}$ is shown in figure 4. In figure 4, the nominal distance from A to B (the KC) is $70 mm$ in the $z$-direction.
This value is obtained from the (3,4)-element of matrix $\mathbf{T_{1.12}}$. The (1,4) and (2,4) elements are the nominal distances between points A and B in the $x$- and $y$-directions and are zero.
The (1,2), (1,3), (2,1), (2,3), (3,1) and (3,2) elements of matrix $\mathbf{T_{1.12}}$ are the rotational variations between points A and B about the $x$-, $y$- and $z$-axes. As can be observed, in the perfect condition there is only a $70 mm$ translation from A to B in the $z$-direction and there are no rotations.
Figure 4: The calculated nominal matrix resulted from the multiplication of matrices involved in the nominal transformation chain.
The calculation of the nominal transformation chain is useful to verify whether the designed KC (in this example, the distance between A and B) is correct in the perfect condition.
Variation transformation chain
The next step is to establish the variation transformation chain from the tolerance chain (figure 3). The variation transformation chain is:
An example of how to calculate the elements $d_{x}, d_{y}, d_{z}, d\theta _{x}, d\theta _{y}, d\theta _{z}$ of the variation matrices $\mathbf{T_{ij'}}$ involved in the variation chain is shown in table 2 below. Note that all elements of the nominal matrices $\mathbf{T_{ij}}$ are the same as before.
Table 2: Detailed calculations to determine the elements in matrix Tij and Tij' involved in the variation transformation chain (the detailed drawing can be seen from the previous post on 2D tolerance analysis).
The detailed calculations of each variation in the tolerance chain for example 1 can be obtained from here.
In table 2, every row of the nominal chain is followed by a row of its variation chain. All rotational variation elements $d\theta _{x}, d\theta _{y}, d\theta _{z}$ are in radians.
Note that for the variation matrices $\mathbf{T_{ij'}}$, the elements $d_{x}, d_{y}, d_{z}$ commonly come from the dimensional tolerances assigned to features and from bonus tolerances of the allocated GD&T tolerances.
Meanwhile, the elements $d\theta _{x}, d\theta _{y}, d\theta _{z}$ need to be calculated and are related to the dimensions of the features as well as to the allocated GD&T tolerances.
The explanation of how to calculate the values of the elements $d\theta _{x}, d\theta _{y}, d\theta _{z}$ is as follows.
In principle, the values of $d\theta _{x}, d\theta _{y}, d\theta _{z}$ follow the largest values of the tolerance zones of the geometric tolerances on the features (that is, a worst-case assumption on the geometric tolerances). A tolerance zone can be a cylinder or two parallel planes.
For a tolerance zone in the form of two parallel planes (figure 5 top), the value of $d\theta _{i}$ is calculated as the value of the tolerance zone divided by the length (or width) of the surface of the feature.
For a tolerance zone in the form of a cylinder (figure 5 bottom), the value of $d\theta _{i}$ is calculated as the value of the tolerance zone divided by the height of the cylindrical feature (representing the feature or the height of the cylindrical tolerance zone).
Note that, to follow the equal-bilateral format, the calculated $d\theta _{i}$ is then divided by 2.
Figure 5: Illustration on how to calculate the value of $d\theta _{x}, d\theta _{y}, d\theta _{z}$ on geometrical tolerances.
Note from figure 5 that, when calculating $d\theta _{x}, d\theta _{y}, d\theta _{z}$, the rotational errors will have relatively large values if the height of a hole feature is made short or the area of a plane feature is made small, because, mathematically, the denominators used to calculate the angular errors become small and the values of the angular errors therefore become large.
The physical interpretation is that when a hole feature, usually where a pin will be inserted, is made short, the pin-hole joint will not be stable. Similarly for a surface: if the area of a plane feature is made small, the surface becomes unstable and the manufacturing variations (of the process used to make the surface) become high.
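A hypothetical helper for this calculation is sketched below; the numbers in the example are invented for illustration and are not taken from the drawings.

```python
def dtheta_from_zone(tolerance_zone, feature_size):
    """Worst-case rotational variation (equal-bilateral) from a tolerance zone
    acting over a feature length, width or height (all in the same units)."""
    return (tolerance_zone / feature_size) / 2.0

# e.g. a 0.1 mm zone over a 50 mm face -> +/- 0.001 rad (illustrative numbers only)
dtheta = dtheta_from_zone(0.1, 50.0)
```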
The 3D tolerance stack-up analysis is based on a statistical method. A Monte Carlo (MC) simulation is used to re-calculate the total variation transformation chain (the final variation matrix) a large number of times.
On each simulation run, we sample each element $d_{x}, d_{y}, d_{z}, d\theta _{x}, d\theta _{y}, d\theta _{z}$. The samples are generated from normal distributions with mean 0 and standard deviation $\sigma$ derived from the error elements, that is $N(0,\sigma ^{2})$ where $\sigma=\frac{d_{i}}{3}$ and $\sigma=\frac{d\theta _{i}}{3}$.
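A minimal sketch of one MC run is given below. It assumes the nominal matrices and the worst-case error values are already available as lists; the original implementation supplied with this post is in MATLAB, so the NumPy version here is only illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_error_matrix(d, dtheta):
    """Sample one realization of an error matrix Tij' (small-angle form),
    using sigma = d_i/3 and sigma = dtheta_i/3 for the normal distributions."""
    dx, dy, dz = rng.normal(0.0, np.asarray(d, dtype=float) / 3.0)
    tx, ty, tz = rng.normal(0.0, np.asarray(dtheta, dtype=float) / 3.0)
    return np.array([[1.0, -tz,  ty, dx],
                     [ tz, 1.0, -tx, dy],
                     [-ty,  tx, 1.0, dz],
                     [0.0, 0.0, 0.0, 1.0]])

def run_once(nominal_chain, error_specs):
    """Multiply nominal and sampled error matrices along the tolerance chain."""
    T = np.eye(4)
    for T_nom, (d, dtheta) in zip(nominal_chain, error_specs):
        T = T @ T_nom @ sample_error_matrix(d, dtheta)
    return T

# kc = [run_once(chain, specs)[2, 3] for _ in range(10000)]  # z-distance samples of the KC
```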
Figure 6 shows the results for the total variation of the distance and orientation between points A and B after 10000 simulation runs. From the results, the distance between points A and B in the $z$-direction is $(69.99\pm 3.82) mm$, i.e. $66.18 mm$ to $73.81 mm$. Note that in figure 6 there is no rotation along the $z$-axis, as this rotation has no effect on points A and B in terms of distance and orientation.
All detailed calculations of each variation matrix and the MATLAB codes implementations to run the MC simulation can be obtained from here.
Figure 6: The MC simulation results from the statistical 3D tolerance analysis of the two-part assembly (example 1). The rotation in z-axis is negligible because the rotation will not change the position of A and B (rotation on its axis).
Example 2: 3D tolerance stack-up analysis of a rotary compressor with GD&T
In this example, not only the 3D tolerance analysis will be explained; other aspects, namely tolerance allocation, design revision and tolerance re-allocation, will also be discussed.
The rotary compressor is a type of air compressor that provides high-pressure air. The working principle of a rotary compressor is that a cylindrical rotor rotates inside a cylindrical cavity with a larger diameter than the rotor. The axes of the rotor and the cavity are not coaxial.
With this non-coaxial configuration, two asymmetric volumes are created when the rotor rotates, which causes a pumping effect. In the intake part of the cavity, the volume increases when the pump is in operation. Meanwhile, in the outtake part, the volume of the cavity decreases.
Rotary compressors are commonly found as high-pressure hydraulic pumps, such as the transmission pump in a car, power-steering pumps and vacuum pumps. Figure 7 shows the assembly of the rotary compressor. The main components of the compressor are a rotor, a shaft, a body, a base and a cover.
Figure 7: the assembly of the rotary compressor
In this example, two designs of the compressor are presented: the initial design and the revised design. Brief explanations of the tolerance analysis and allocation in both designs are given.
Meanwhile, the detailed tolerance analysis calculations, the re-design processes and the MC simulations of the analysis can be obtained from here.
Initial design and the tolerance analysis
The initial design consists of five main parts: base, body, rotor, poros (shaft) and cover. Figure 8 shows the detailed drawing of the five parts constituting the rotary compressor based on the initial design. In figure 8, both traditional "+/-" dimensional and GD&T tolerances are used.
Figure 8: The drawing of each component for the initial design of the compressor.
Figure 9 shows the initial design of the components of the compressor and the tolerance chain of the compressor assembly.
The KC of the assembly is the distance between points 1 and 2, which should be between $0 mm$ and $0.5 mm$. If there is interference between points 1 and 2, there will be friction between the rotor and the shaft (which create the asymmetric cavity), causing significant wear of the compressor during operation. Meanwhile, if the gap is too big, the compression performance will be significantly reduced.
Figure 9: The components and tolerance chain for the initial design of the compressor.
From the tolerance chain shown in figure 9, the nominal and variation transformation chains are constructed and calculated. MC simulations are performed for 10000 runs.
The tolerance allocation that determines the values of all error elements in the variation matrices has been optimised before running the simulation. The total variation matrix calculated on each run is stored and statistically analysed.
Figure 10 shows the results of the MC simulations. From the MC simulations for the 3D tolerance analysis of the initial design, the distance between points 1 and 2 is $(-0.1\pm 0.53) mm$, i.e. $-0.63 mm$ to $0.43 mm$.
From figure 10, even after the tolerance re-allocation optimisation, the desired KC of the initial design, that is a distance between points 1 and 2 of $0 mm$ to $0.5 mm$, cannot be achieved. Hence, a redesign of the compressor needs to be carried out.
Figure 10: MC simulation results from the 3D tolerance stack up analysis for the initial design.
Revised design and the tolerance analysis
The redesigned or revised compressor has only three parts: the base, the cover+body and the poros (shaft)+rotor. Figure 11 shows the parts of the redesigned compressor.
Note that two of the parts are joined parts. With fewer parts, the number of variations in the variation propagation chain (tolerance chain) is reduced, and hence the total variation stack-up is also reduced.
Figure 11: The drawing of each component for the revised design of the compressor.
Figure 12 shows the tolerance chain of the revised design. Similar to the initial design, the KC is the distance between points 1 and 2, which should show no interference so that friction during operation is avoided.
Figure 12: The components and tolerance chain for the revised design of the compressor.
The nominal and variation transformation chains can then be derived from the tolerance chain in figure 12. The MC simulations are again performed for 10000 runs, as in the tolerance analysis of the initial design.
The tolerance allocation has also been optimised before running the MC simulations. Figure 13 shows the results of the MC simulations for the revised design.
From the simulations, the estimated distance variation between points 1 and 2 is $(0.25\pm 0.25) mm$, i.e. $0.0 mm$ to $0.5 mm$.
From this result, the desired KC can be achieved by revising the initial design and performing the optimisation of the tolerance re-allocation to the parts of the revised design.
Figure 13: MC simulation results from the 3D tolerance stack up analysis for the revised design.
The detailed calculations of each variation in the tolerance chain, tolerance allocation and re-allocation, redesign processes and the MATLAB code to run MC simulation for example 2 can be obtained from here.
This post presents tolerance stack-up analysis in 3D with two examples: a two-part assembly of a prismatic product and a rotary compressor. Both "+/-" dimensional tolerances and geometrical tolerances (GD&T) are considered.
In 3D tolerance analysis, a $4\times 4$ homogeneous transformation matrix is used to calculate the propagation of variations of each part constituting an assembly.
The main advantage of 3D tolerance analysis is that assembly variations can be calculated in 3D space, including rotational variations.
With 3D tolerance stack-up analysis, the variation of an assembly can be predicted more accurately than with 2D tolerance stack-up analysis.
The detailed calculations of each variation on the tolerance chains and the MATLAB codes for the simulation in the example 1 and example 2 can be obtained from here.
ETFs, Volatility and Leverage: Towards a New Leveraged ETF Part 1
04 Oct 2019 • Posts
In part one of this three part series, we will explore the concept of levered ETFs, common misconceptions, the effect of volatility on the returns of a portfolio, and the compounded returns of the S&P 500 utilizing different leverage ratios. We will also touch on the basic mathematical underpinnings of volatility drag and ideal leverage ratio.
In part two, we will look into different ways to forecast future market regimes and their associated optimal leverage ratios.
In part three, we will construct a fully automated ETF that seeks to obtain variable leveraged exposure to the S&P 500 conditioned on the future forecasted market return and volatility regime.
The recent ascent of ETFs as one of the most popular trading vehicles for both retail and institutional investors has dramatically affected the business of all funds seeking retail flows, and even many that do not. ETFs and ETNs serve as wrappers for a diverse array of strategies that run the gamut from simple and transparent to complex and proprietary. ETFs (and ETNs, though for brevity's sake we will just refer to all securities employing this legal structure as "ETFs") can be placed on a two-dimensional spectrum: simple to complex on one axis, and transparent to proprietary on another. These products are united by several distinguishing features: placement on public equity markets with tickers alongside equities and a pricing mechanism that, also much like equities, utilizes arbitrage and a bid-ask mechanism that is used by market makers to provide liquidity. ETF shares are created and destroyed in blocks as needed in order to ensure that the value of the wrapper is in line with the value of the underlying securities or strategy that the ETF conceptually represents.
By far the most popular type of ETF in terms of total asset value and flows is the ETF that provides exposure to popular indices such as the S&P 500, the Russell 2000, and many others. These ETFs simply aim to match the relative daily returns of their respective index and occupy the bottom left spot on the aforementioned two-dimensional spectrum. A closely related (albeit far less popular) type of product that occupies the bottom middle is the leveraged ETF. A leveraged ETF seeks to obtain a daily exposure on an underlying index scaled by a constant $l$, which, at this time, is somewhere between -3 and 3 for products currently on the market. If $l$ is less than zero, the ETF provides short exposure to the index and is often called a "bear" ETF; conversely, if $l$ is greater than zero, the ETF provides long exposure to the index and is commonly referred to as a "bull" ETF. These securities are usually implemented by means of a rolling futures strategy. By rebalancing every day, the ETFs eliminate the risk of ruin (in this case, losing more money than the fund's total value) and obviate the need for margin payments.
The Popular Argument Against Leverage
At face value, a retail investor might assume that if the S&P 500 returned 10% in a given year, a 3x leveraged ETF would return 30%. Fortunately or unfortunately depending on your perspective, this is not the case as the ETF seeks to maintain a 3x multiple on the daily return of the S&P 500 instead of the annualized return. As touched on in the previous section, by rebalancing daily the ETF can easily handle the inflows and outflows of the fund while also eliminating the need for capital to be held in margin. This strategy also greatly reduces the risk of ruin, as the S&P 500 would need to lose at least 1/3 of its total value in a single day for the fund to be wiped out. Though certainly imaginable, this event is unlikely as the biggest single day loss in the history of the S&P 500 was Black Monday in 1987, in which around 20% of value evaporated from the S&P 500 in a single day.
The common wisdom about leveraged ETFs is that they don't so much fill a role as an investment vehicle, but merely serve as a day-trading instrument that allows a trader to easily obtain short-term tactical Beta exposure to the market in an efficient and simple fashion. In numerous articles scattered across the internet, the author always drives home the point again and again that leveraged ETFs are ill-suited to long-term buy-and-hold investing and should only be purchased by those who are savvy enough to lay on short-term trades. Instead of cementing this advice on a solid mathematical foundation, the author usually cherry-picks some example time window for the S&P 500 of around two months where the market is mostly sideways and volatility is high. He then usually remarks on the prescient insight that the levered fund made less or even lost money whilst the regular fund came out ahead. The conclusion to be drawn from this I suppose is that levered funds are deceptive, don't actually multiply your returns by the amount claimed, and are a bad investment. In the next section, we will discuss the mathematical foundation of this claim and reason about its validity.
A Little Bit of Math
We can see the immediate effect of volatility from a simple example. What happens to a $100 portfolio that gains and then loses 10% of its value versus a portfolio that gains and then loses 50%?
\[\$100(1.10)(0.90) = \$99\] \[\$100(1.50)(0.50) = \$75\]
As we can see, the difference between the arithmetic mean (which for both examples is 1) and the geometric mean (appropriate for compounded returns) can be quite significant. In order to calculate the effect that volatility has on a portfolio we can use the volatility drag formula:
\[r_a = r_p - \frac{\sigma^2_p}{2}\]
Where $\sigma_p$ is the standard deviation of the portfolio, $r_p$ the return of the portfolio and $r_a$ the actualized return of the portfolio after deducting for volatility.
It's important to understand that the volatility drag equation is valuable when we are changing the volatility of a known return stream. If we are using past realized returns to forecast future returns, the volatility equation isn't necessary or applicable, since the past returns are already reflective of the post-drag return. The volatility drag equation becomes useful when we are asking ourselves about the returns of a levered portfolio in terms of the unlevered returns and volatility. Alternatively, the equation also comes in handy if we would like to answer questions such as: If we reduced the volatility of our current portfolio by 25%, how would that affect the mean return?
We need to rewrite the volatility drag equation in terms of leverage. Assuming normality, we see that rescaling a normal distribution by a constant affects the mean linearly and the variance non-linearly (thus affecting the standard deviation linearly):
\[Y = lX \sim N(l\mu, l^2\sigma^2)\]
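A quick simulation confirms this scaling; the parameter values below are arbitrary and chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(0.05, 0.05, size=1_000_000)  # unlevered returns (illustrative parameters)
y = 2.0 * x                                  # 2x leverage
print(y.mean(), y.std())                     # approximately 0.10 and 0.10
```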
In other words, a portfolio with a mean return of 5% and a volatility of 5% levered 2x is the same as an unlevered fund with a mean return of 10% and a volatility of 10%. As such we can rewrite $r_p$ in terms of $r_b$ (the base unlevered return of the portfolio) and the leverage amount ($l$):
\[r_p = r_bl\]
As well as $\sigma_p$ in terms of $\sigma_b$ and $l$:
\[\sigma_p = \sigma_bl\]
Substituting into our drag equation:
\[r_a = r_bl - \frac{\sigma_b^2}{2}l^2\]
To find the leverage that maximizes the mean return, we take the derivative with respect to the leverage ratio, set it to zero, and solve:
\[\frac{\text{d}(r_a)}{\text{d}l} = r_b - \sigma^2_bl = 0\] \[l = \frac{r_b}{\sigma^2_b}\]
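A short Python sketch of this result follows; the $r_b$ and $\sigma_b$ values below are made-up inputs for illustration, not estimates for any actual index.

```python
# Hedged sketch: optimal leverage under the normal/drag approximation derived above.

def optimal_leverage(r_b: float, sigma_b: float) -> float:
    """l* = r_b / sigma_b**2 maximizes r_a = r_b*l - (sigma_b*l)**2 / 2."""
    return r_b / sigma_b ** 2

def levered_drag_return(r_b: float, sigma_b: float, l: float) -> float:
    return r_b * l - (sigma_b * l) ** 2 / 2

l_star = optimal_leverage(0.07, 0.15)                   # hypothetical 7% return, 15% vol -> ~3.1x
print(l_star, levered_drag_return(0.07, 0.15, l_star))  # ~3.11, ~0.109
```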
Also, our adjusted Sharpe ratio is:
\[S_a = \frac{r_b}{\sigma_b} - \frac{\sigma_b}{2}l - \frac{r_f}{\sigma_bl}\]
Below is a graph of the adjusted Sharpe ratio, volatility and mean return of a portfolio given $r_b= \sigma_b=10\%$. We can see that the returns form a concave quadratic, the Sharpe ratio a negative linear function, and the volatility a positive linear function. Note that in this graph we are using log returns:
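Since the figure itself is not reproduced here, the following matplotlib sketch regenerates something similar under the stated parameters ($r_b=\sigma_b=10\%$) and the simplifying assumption of a zero risk-free rate, which is what makes the Sharpe curve linear.

```python
# Rough reconstruction of the described figure: return, volatility and adjusted Sharpe
# as functions of leverage, with r_b = sigma_b = 10% and r_f assumed to be 0.
import numpy as np
import matplotlib.pyplot as plt

r_b, sigma_b, r_f = 0.10, 0.10, 0.0
l = np.linspace(0.5, 15, 200)

ret = r_b * l - (sigma_b * l) ** 2 / 2              # concave quadratic in l
vol = sigma_b * l                                   # linear in l
sharpe = r_b / sigma_b - (sigma_b / 2) * l - r_f / (sigma_b * l)

plt.plot(l, ret, label="drag-adjusted return")
plt.plot(l, vol, label="volatility")
plt.plot(l, sharpe, label="adjusted Sharpe")
plt.xlabel("leverage ratio $l$")
plt.legend()
plt.show()
```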
An interesting and important property of volatility drag is that even if two portfolios look the same given a specific leverage ratio, after the drag calculation, they often end up quite different. Below is a table of different example portfolios and leverage ratios that illustrate this point:
| Portfolio | Return | Volatility | Leverage | Pre-drag Return | Post-drag Return |
|-----------|--------|------------|----------|-----------------|------------------|
| A         | 1%     | 1%         | 5        | 5.025%          | 4.9%             |
| B         | 5%     | 5%         | 1        | 5.125%          | 5%               |
| C         | 10%    | 10%        | 0.5      | 5.25%           | 5.125%           |
We found the pre-drag return values by first calculating the natural unlevered return given no volatility and then scaling it by the leverage ratio:
\[r_{pre} = l \left( r_p + \frac{\sigma^2_p}{2} \right)\]
To calculate the post-drag return values in the table, we use the regular volatility drag equation but use the pre-drag return and levered volatility as inputs:
\[r_{post} = r_{pre} - \frac{(\sigma_pl)^2}{2}\]
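The table values can be reproduced with a few lines of Python (a sketch using the two formulas above):

```python
# Reproducing the illustrative portfolios A, B and C from the table above.

def pre_drag(r_p: float, sigma_p: float, l: float) -> float:
    return l * (r_p + sigma_p ** 2 / 2)

def post_drag(r_p: float, sigma_p: float, l: float) -> float:
    return pre_drag(r_p, sigma_p, l) - (sigma_p * l) ** 2 / 2

for name, r, s, l in [("A", 0.01, 0.01, 5), ("B", 0.05, 0.05, 1), ("C", 0.10, 0.10, 0.5)]:
    print(name, f"{pre_drag(r, s, l):.3%}", f"{post_drag(r, s, l):.3%}")
# A 5.025% 4.900%
# B 5.125% 5.000%
# C 5.250% 5.125%
```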
As we can see, if we are choosing between multiple portfolios with the same Sharpe ratio, we should always prefer the portfolio that has the highest natural return. For portfolio A, we pay a significant cost in volatility. Portfolio B pays no cost, as the historical return has already taken into account the realized volatility. Portfolio C actually outperforms the natural portfolio B, since we get some volatility "lift" from deleveraging. The takeaway is that the Sharpe ratio is not a sufficient distillation of a portfolio or strategy due to the concave quadratic nature of volatility drag. Instead, we need to also look at the unlevered mean return of the strategy, as well as the Sharpe, in order to determine the actual quality of the strategy. In some cases, it would actually be preferable to choose a strategy with a lower Sharpe ratio than one with a higher ratio if the mean unlevered return of the former is sufficiently greater than the latter.
Historical S&P 500 Returns and Leverage
While the leverage ratio formula is correct if we are sampling from a normal distribution, market returns often exhibit excess kurtosis and skew, resulting in the decreased accuracy of our formula. The greater the skew or kurtosis, the less accurate the model becomes. In periods of positive skew, the model underestimates the magnitude of mean return, while in negative skew, the model overestimates the mean return. Below is a density plot of returns between 2018-01-01 and 2019-09-01:
For this period's volatility and returns, the model equation suggests a leverage ratio of approximately 3 in order to maximize returns. Looking at the cumulative return streams of several different leverage ratios, we see that despite the non-normality of the data, the prediction holds up well:
Unfortunately, the use of variable leverage conditioned on volatility and returns does not yet constitute a viable trading strategy: during actual trading, future volatility and return information is not available. Without a model to estimate the future returns and volatility of the market, we will be unable to effectively calculate the optimal leverage ratio for our portfolio. There are several remediations of varying complexity and accuracy that we could use to work around this problem. The most basic model we could employ would be one that uses a trailing window of returns and volatility in order to predict the future returns and volatility. Other options would be to use statistical time series models such as ARIMA (Auto-Regressive Integrated Moving Average) in order to forecast returns and GARCH (Generalized Auto-Regressive Conditional Heteroskedasticity) to predict volatility. Other approaches might include looking at the VIX (Volatility Index) or even constructing RNNs (Recurrent Neural Networks) to help forecast an ideal leverage ratio.
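The most basic of these remediations might look something like the sketch below: estimate $r_b$ and $\sigma_b$ from a trailing window of daily returns and plug them into the leverage formula. The window length, annualization factor, leverage cap, and input series are all assumptions of mine for illustration, not choices made in the original notebook.

```python
# Naive trailing-window estimate of the "optimal" leverage ratio (illustrative only).
import numpy as np
import pandas as pd

def trailing_leverage(daily_returns: pd.Series, window: int = 63, cap: float = 3.0) -> pd.Series:
    """Annualize a trailing window of daily returns and return a capped leverage ratio."""
    mu = daily_returns.rolling(window).mean() * 252
    sigma = daily_returns.rolling(window).std() * np.sqrt(252)
    leverage = mu / sigma ** 2
    return leverage.clip(lower=0.0, upper=cap)

# Usage sketch (assuming a daily price series is available):
# daily_returns = prices.pct_change().dropna()
# target_leverage = trailing_leverage(daily_returns)
```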
While the Sharpe ratio of a levered index ETF will indeed get worse as leverage is applied, the use of leverage and the associated volatility drag does not constitute a separate and distinct issue apart from volatility alone. For all investors, return and volatility are intimately related through a portfolio's Sharpe ratio. Ultimately though, investors cannot eat risk-adjusted returns; instead they try to maximize the Sharpe ratio of their respective portfolios and then assume the greatest amount of risk in line with their investment objectives. For most investors, alpha generation through security selection and trading is a lofty and unattainable strategy. Instead of chasing elusive alpha, these investors adjust the lever of risk through the management of asset class and factor exposures. When market volatility is higher than personally tolerable, investors cycle into lower volatility investments such as value stocks, bonds, and metals. When volatility and the associated risk premium is too low, investors rotate into growth stocks, emerging markets, and real estate.
In this post we touched on a different and arguably simpler way to manage volatility and risk-premiums: through the conditional application of leverage. In part two of this three part series, we will look at ways to forecast the ideal leverage ratio as a function of three parameters: future returns, volatility, and personal risk limits.
Thanks for reading, I hope you enjoyed this piece! If you want to play around with the Quantopian notebook, click here! Possible things to change would be the start and end dates, reference leverage ratios, and the ticker to analyze.
What is the size of Australia's sexual minority population?
Tom Wilson (ORCID: orcid.org/0000-0001-8812-7556)1,
Jeromey Temple1,
Anthony Lyons2,3 &
Fiona Shalley4
The aim is to present updated estimates of the size of Australia's sexual minority adult population (gay, lesbian, bisexual, and other sexual minority identities). No estimate of this population is currently available from the Australian Bureau of Statistics, and very little is available from other sources. We obtained data on sexual minority identities from three data collections of two national surveys of recent years. Combining averaged prevalence rates from these surveys with official Estimated Resident Population data, we produce estimates of Australia's sexual minority population for recent years.
According to percentages averaged across the three survey datasets, 3.6% of males and 3.4% of females described themselves with a minority sexual identity. When applied to Estimated Resident Populations, this gives a sexual minority population at ages 18 + in Australia of 599,500 in 2011 and 651,800 in 2016. Population estimates were also produced by sex and broad age group, revealing larger numbers and higher sexual minority percentages in the younger age groups, and smaller numbers and percentages in the oldest age group. Separate population estimates were also prepared for lesbian, gay, bisexual, and other sexual minority identities.
How many people in Australia identify themselves as lesbian, gay, bisexual or an alternative sexual minority orientation (e.g., queer, pansexual)? The question is difficult to answer because the Australian Bureau of Statistics (ABS) does not publish population estimates which include a sexual identity breakdown, nor does it directly collect data on sexual identity in the census or its continuous large-scale surveys which would permit such estimates to be easily calculated. The availability of population statistics from other sources is extremely limited; only a handful of academic studies have attempted to estimate the size of Australia's sexual minority population [1,2,3,4].
Despite this paucity of data, the value of population statistics by sexual identity has been increasingly recognised in recent years [5, 6]. Population estimates on sexual minorities can provide visibility and voice to those communities. They may assist in combating misinformation and stereotypes [7]. Population numbers can inform the likely demand for specialised goods and services aimed at sexual minorities. They provide the denominators for demographic rates and indicators which enable the health and wellbeing of sexual minorities to be monitored [8]. Sexual identity population statistics should also be useful in light of legislative requirements. For example, the federal Sex Discrimination Act [9] prohibits discrimination on the basis of sexual orientation, and the Aged Care Act [10] mentions "lesbian, gay, bisexual, transgender and intersex people" as a special needs group.
This paper updates and extends the sexual minority population estimates for Australia calculated previously [3]. It reports proportions of the population identifying as a sexual minority from reliable national surveys. It then presents population estimates for the sexual minority adult population of Australia in 2011 and 2016, including by age group, and by sexual identity (gay, lesbian, bisexual, and other sexual minority identities).
Data on the proportions of the population with a specific sexual identity were sourced from three data collections from two representative national household surveys, namely the General Social Survey (GSS), and waves 12 and 16 of the Household, Income And Labour Dynamics in Australia (HILDA) Survey. Other large surveys also ask about sexual identity [11,12,13], but were not considered for this study because they cover only part of the Australian population. The GSS was undertaken by the ABS between March and June 2014 [14] with face to face computer-assisted interviewing. It achieved a sample of about 13,000 people aged 15 years and over in households (i.e., excluding institutional accommodation). HILDA is an ongoing national longitudinal study which began data collection in 2001 [15]. About 17,000 people in households are interviewed every year using both face-to-face computer-assisted interviewing and a self-completion questionnaire (which contains the sexual identity questions). We made use of data from waves 12 and 16, conducted in 2012 and 2016 respectively, when sexual identity questions were asked.
The questions on sexual identity from the surveys are reproduced in Additional file 1: Figure S1. It is important to note that responses to these questions refer to reported sexual identity, not sexual attraction or sexual behaviour. There can be quite large variations in population numbers depending on which aspect of sexual orientation is being considered [16]. Importantly, this is reported sexual identity; people who are uncomfortable disclosing a minority sexual identity may not respond to the question or may report a different sexual identity.
The 2011 and 2016 estimated resident populations (ERPs) of Australia by sex and age group were obtained from the ABS [17]. These two years were chosen because the ERPs for these years are based on 2011 and 2016 census counts and likely to be more accurate than those for non-census years, and they are close to the reference dates of the surveys.
Sexual identity proportions were calculated from the weighted number of adults in each sexual identity category in all three datasets. The proportions were calculated by sex for individual sexual identity categories (gay, lesbian, bisexual, and other), and for the total sexual minority population—defined as the sum of those four categories. Proportions were also calculated for the total sexual minority population by broad age group and sex. We do not present proportions by sex, age group and individual sexual identity categories as the variability around the point estimates increases significantly. The 'don't know' and 'not stated/refused' responses were included in the denominators of the proportions.
Population estimates by sexual identity were calculated by taking the proportion of the population in each sexual identity category derived from the surveys and multiplying them by the published ERPs of Australia for 2011 and 2016. They were prepared in three steps. First, an estimate of the total sexual minority population aged 18 + by sex was calculated. Given a lack of information to suggest that any one survey dataset was more reliable than the others, we weighted all proportions equally. Thus, the sexual minority (\(M\)) population (\(P\)) aged 18 + by sex (\(s\)) was calculated as:
$${P}_{s,18+}^{M} ={ P}_{s,18+}^{ERP} \frac{1}{3}\left({p}_{s,18+}^{M,GSS}+{p}_{s,18+}^{M,HILDA-12}+{p}_{s,18+}^{M,HILDA-16}\right),$$
where \(ERP\) is the official estimated resident population, \(p\) denotes the proportion of the population, and \(GSS\), \(HILDA-12\), and \(HILDA-16\) refer to the three survey datasets with the HILDA labels including the survey wave number.
Second, estimates of the total sexual minority population by age groups 18–24, 25–34, 35–44, 45–54, 55–64 and 65 + were calculated. Preliminary (\(pr\)) estimates were calculated as:
$${P}_{s,a}^{M}\left[pr\right]={ P}_{s,a}^{ERP} \frac{1}{3}\left({p}_{s,a}^{M,GSS}+{p}_{s,a}^{M,HILDA-12}+{p}_{s,a}^{M,HILDA-16}\right),$$
where \(a\) refers to age group. Then a small constraining adjustment was made to ensure these age-specific estimates summed to the overall 18 + estimate:
$${P}_{s,a}^{M} ={P}_{s,a}^{M}\left[pr\right] \frac{{P}_{s,18+}^{M}}{\sum_{a}{P}_{s,a}^{M}\left[pr\right]}$$
Third, estimates of the 18 + population by sex by individual sexual identity category were calculated:
$${P}_{s,18+}^{m}\left[pr\right]={ P}_{s,18+}^{ERP} \frac{1}{3}\left({p}_{s,18+}^{m,GSS}+{p}_{s,18+}^{m,HILDA-12}+{p}_{s,18+}^{m,HILDA-16}\right),$$
where \(m\) refers to gay/lesbian, bisexual or other. As before, a slight adjustment was required to ensure consistency with the overall sexual minority estimate:
$${P}_{s,18+}^{m} ={P}_{s,18+}^{m}\left[pr\right] \frac{{P}_{s,18+}^{M}}{\sum_{m}{P}_{s,18+}^{m}\left[pr\right]}.$$
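For readers who prefer arithmetic to notation, the three-step procedure reduces to a few lines of code. The sketch below is not the authors' code, and the ERP and proportion values in the example are entirely hypothetical; only the structure of the calculation follows the equations above.

```python
# Illustrative sketch of the estimation procedure (hypothetical inputs).

def minority_estimate(erp: float, p_gss: float, p_hilda12: float, p_hilda16: float) -> float:
    """Equally weighted average of the three survey proportions applied to the ERP."""
    return erp * (p_gss + p_hilda12 + p_hilda16) / 3

def constrain(preliminary: list, total: float) -> list:
    """Scale preliminary age-specific estimates so they sum to the 18+ total."""
    scale = total / sum(preliminary)
    return [x * scale for x in preliminary]

# Hypothetical example for one sex, ages 18+:
total_18_plus = minority_estimate(9_000_000, 0.029, 0.035, 0.042)  # ~318,000
```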
The distribution of the adult population of Australia across sexual identity categories from the three survey datasets is shown in Table 1. The total sexual minority population varies from just under 3% according to the GSS to just over 4% in HILDA wave 16, with slightly higher percentages for females than males in the GSS but not HILDA. For males, the percentage of the population identifying as gay is higher than the bisexual percentage, while for females the bisexual percentages are higher than those for lesbian in the two HILDA datasets. Interestingly, the HILDA survey results from wave 16 indicate an increase in the share of the population identifying as a sexual minority from four years earlier. Although it is not possible to determine a trend from the limited data available, an increase over time would be consistent with recent evidence from the USA and UK [18, 19]. Complicating the analysis is the fact that the percentages for heterosexual and don't know/not stated/refused differ noticeably between HILDA and the GSS. This may be related to differences in the survey mode and list of available responses in the three surveys. Relative standard errors for the data in this table are provided in the Additional file 1.
Table 1 Percentage of Australia's adult population by sexual identity and sex
Table 2 presents the percentage of the population identifying as a sexual minority by sex and age group. Relative standard errors are also provided in the Additional file 1. There is a strong relationship between sexual minority identity and age in the GSS results whereby percentages decline with increasing age, but the relationship is less distinct in the HILDA data, especially for males. Overall, percentages are highest in the younger 18–24 and 25–34 age groups, and lowest in the 65 + age group. Amongst those aged 18–24, the percentages reach as high as 7.5% for females and 5.7% for males, while in the 65 + age group all percentages are below the population averages for each gender.
Table 2 Percentage of Australia's adult population identifying as a sexual minority by age group
Population estimates for Australia's sexual minority populations in 2011 and 2016 are shown in Table 3. The total sexual minority population of Australia aged 18 + is estimated to have been 599,500 in 2011 with slightly fewer females (296,400) than males (303,100); by 2016 it is estimated to have grown to about 651,800 (323,500 females and 328,300 males). The population is young compared to the Australian population overall, with close to half (46%) aged 18–34. The numbers in the 65 + age group are relatively small—about 63,900 in 2011 and 76,600 in 2016. The population aged 18 + identifying as lesbian/gay is estimated to have been about 286,400 in 2016 (44% of the sexual minority population), with 215,600 as bisexual (33%), and 149,700 (23%) as other. For males, the gay population was larger than the bisexual population (182,100 and 77,900 respectively), while for females the opposite was the case (104,400 lesbian and 137,800 bisexual).
Table 3 The estimated sexual minority population of Australia, 2011 and 2016
This paper has presented new estimates of Australia's sexual minority population based on official Estimated Resident Populations and representative surveys which collect information on sexual identity. Our study shows that Australia's sexual minority population reached about 651,800 in 2016, representing 3.5% of the adult population, a little higher than the 3.2% estimated previously [3] due to the inclusion of HILDA wave 16 data. Equivalent percentages for other countries in recent years include 3.5% for New Zealand [20], 2.5% [21] and 2.9% for the UK [19], and 4.1% for the US [22], though these statistics are not strictly comparable due to differences in questions and survey modes. We hope that the new population estimates (Table 3) will prove useful for various policy, planning and research activities.
The sexual identity population estimates reported in this paper are probably as good as possible given the available data sources, and limitations of the population estimates are listed below. The accuracy and detail of population estimates will only be enhanced if sexual identity is included in the quinquennial census or a very large-scale national survey, such as the ABS Monthly Population Survey [23].
One of the most important findings of this study is the higher proportion of younger people reporting a minority sexual identity. This suggests that a cohort effect may be at work. Sexual identities and the willingness to disclose one's identity can be influenced by the social attitudes and legal environment of the time when each cohort passes through their formative years. Older cohorts have spent much of their lives during a time when social acceptance was lower than today [24], and this might still influence how some of them report their identity. This cohort effect may have an important role in the proportions of people reporting a sexual minority identity in future surveys. Those young cohorts with 5–8% sexual minority identities may well maintain their identities as they get older in the future, and in the current accepting environment younger cohorts replacing them are likely to report similar, or perhaps higher, percentages. If this occurs, then Australia's known sexual minority population will increase rapidly over the coming decades, and the estimates will need regular updating.
This study contains several limitations.
We assumed that the sexual minority percentages obtained from surveys undertaken between 2012 and 2016 were valid for creating 2011 and 2016 sexual minority populations. If the trend in identifying as a sexual minority is increasing, then the 2011 population might be slightly over-estimated and the 2016 population slightly under-estimated.
The survey data were collected using different survey modes and with slightly different wording in the sexual identity question, so they are not perfectly comparable.
The various residual categories of don't know, not stated, and refused need careful consideration. They vary substantially between surveys and their interpretation is not straightforward. Sexual minority percentages would be slightly higher if they were excluded from denominators.
The scope of all surveys excluded institutional accommodation which may have led to a small amount of bias.
Our population estimates only refer to those who reported a sexual minority identity. It is therefore a 'revealed' population which excludes those who do not wish to disclose their sexuality (in the survey, at least).
Finally, the sexual minority population estimates are approximate. They are based on ABS estimated resident populations, which are good quality data, but also weighted survey data based on fairly small samples of sexual minority individuals.
An Excel file of sexual minority population estimates is available from the corresponding author on request.
ABS: Australian Bureau of Statistics
ERP: Estimated resident population
GSS: General Social Survey
HILDA: Household, Income and Labour Dynamics in Australia
Madeddu D, Grulich A, Richters J, Ferris J, Grierson J, Smith A, Allan B, Prestage G. Estimating population distribution and HIV prevalence among homosexual and bisexual men. Sex Health. 2006;3:37–43.
Prestage G, Ferris J, Grierson J, Thorpe R, Zablotska I, Imrie J, Smith A, Grulich AE. Homosexual men in Australia: population, distribution and HIV prevalence. Sex Health. 2008;5:97–102.
Wilson T, Shalley F. Estimates of Australia's non-heterosexual population. Aust Popul Stud. 2018;2(1):26–38.
Callander D, Mooney-Somers J, Keen P, Guy R, Duck T, Bavinton BR, Grulich AE, Holt M, Prestage G. Australian 'gayborhoods' and 'lesborhoods': a new method for estimating the number and prevalence of adult gay men and lesbian women living in each Australian postcode. Int J Geogr Inf Sci. 2020. https://doi.org/10.1080/13658816.2019.1709973.
Office for National Statistics (ONS) Measuring sexual identity: an evaluation report. 2010. https://webarchive.nationalarchives.gov.uk/20151014015853/http://www.ons.gov.uk/ons/rel/ethnicity/measuring-sexual-identity---evaluation-report/2010/index.html. Accessed 24 Sept 2010
LGBTI Health Alliance. Joint statement in support of LGBTI inclusion in the 2021 Census (2019) https://lgbtihealth.org.au/joint-statement-in-support-of-lgbti-inclusion-in-the-2021-census/. Accessed 24 Mar 2020
Gates GJ. LGBT identity: a demographer's perspective. Loy LA L Rev. 2012;45:693–714.
Perales F. The health and wellbeing of Australian lesbian, gay and bisexual people: a systematic assessment using a longitudinal national sample. Aust NZ J Publ Heal. 2019;43:281–7.
Sex Discrimination Act 1984 (Cth). https://www.legislation.gov.au/Details/C2014C00002. Accessed 5 Feb 2020
Aged Care Act 1997 (Cth). https://www.legislation.gov.au/Details/C2020C00054. Accessed 25 Jan 2020
Australian Longitudinal Study on Women's Health. 2020. https://www.alswh.org.au/. Accessed 14 Apr 2020
SA Health. South Australia Population Health Survey Questionnaire 2020. 2020. https://www.sahealth.sa.gov.au/wps/wcm/connect/f88b47b7-ca05-4ea4-898c-1581d47bc249/SAPHS+2020_Public+Document.pdf?MOD=AJPERES&CACHEID=ROOTWORKSPACE-f88b47b7-ca05-4ea4-898c-1581d47bc249-naxHPpI. Accessed 21 Feb 2020
VicHealth. VicHealth Indicators Survey 2015—Supplementary Report: Sexuality. https://www.vichealth.vic.gov.au/media-and-resources/publications/vichealth-indicators-survey-2015-supplementary-report-sexuality. Accessed 21 Feb 2020
Australian Bureau of Statistics (ABS). General social survey: summary results, Australia, 2014. 2015. https://www.abs.gov.au/ausstats/[email protected]/mf/4159.0. Accessed 21 Feb 2020
Wilkins R, Laß I, Butterworth P, Esperanza V-T. The household, income and labour dynamics in Australia survey: selected findings from waves 1 to 17. Melbourne: Melbourne Institute; 2019.
Richters J, Altman D, Badcock PB, Smith AMA, de Visser RO, Grulich AE, Rissel C, Simpson JM. Sexual identity, sexual attraction and sexual experience: the Second Australian Study of Health and Relationships. Sex Health. 2014;11:451–60.
Australian Bureau of Statistics (ABS) ABS.Stat. 2020. https://stat.data.abs.gov.au/. Accessed 25 Apr 2020
Newport F (2018) In U.S., Estimate of LGBT Population Rises to 4.5%. https://news.gallup.com/poll/234863/estimate-lgbt-population-rises.aspx. Accessed 10 May 2020
Office for National Statistics (ONS) Sexual orientation, UK: 2018. 2020 https://www.ons.gov.uk/peoplepopulationandcommunity/culturalidentity/sexuality/bulletins/sexualidentityuk/2018. Accessed 10 May 2020
Statistics New Zealand (2019) New sexual identity wellbeing data reflects diversity of New Zealanders. https://www.stats.govt.nz/news/new-sexual-identity-wellbeing-data-reflects-diversity-of-new-zealanders. Accessed 10 May 2020
van Kampen SC, Lee W, Fornasiero M, Husk K. The proportion of the population of England that self-identifies as lesbian, gay or bisexual: producing modelled estimates based on national social surveys. BMC Res Notes. 2017;10:594.
Williams Institute (2019) Adult LGBT Population in the United States. https://williamsinstitute.law.ucla.edu/publications/adult-lgbt-pop-us/. Accessed 10 May 2020
Australian Bureau of Statistics (ABS) (2020) Labour Force, Australia, May 2020. https://www.abs.gov.au/AUSSTATS/[email protected]/Lookup/6202.0Explanatory%20Notes1May%202020?OpenDocument. Accessed 24 Jun 2020
Perales F, Campbell A. Who supports equal rights for same-sex couples? Evidence from Australia. Fam Matters. 2018;100:28–41.
This paper uses unit record data from the Household, Income and Labour Dynamics in Australia (HILDA) Survey. The HILDA Project was initiated and is funded by the Australian Government Department of Social Services (DSS) and is managed by the Melbourne Institute of Applied Economic and Social Research (Melbourne Institute). The findings and views reported in this paper, however, are those of the authors and should not be attributed to the DSS or the Melbourne Institute.
TW and JT were supported by the Australian Research Council Centre of Excellence in Population Ageing Research (Project number CE1101029).
Melbourne School of Population and Global Health, The University of Melbourne, Melbourne, VIC, Australia
Tom Wilson & Jeromey Temple
Australian Research Centre in Sex, Health and Society, La Trobe University, Melbourne, VIC, Australia
Anthony Lyons
School of Psychology and Public Health, La Trobe University, Melbourne, VIC, Australia
Northern Institute, Charles Darwin University, Darwin, NT, Australia
Fiona Shalley
Jeromey Temple
TW designed the study, undertook analysis, and wrote the first draft of the manuscript. JT did statistical analysis and contributed to the manuscript. FS participated in the acquisition of data, analysis and writing of the manuscript. AL contributed to the analysis and writing of the manuscript. All authors read and approved the final manuscript.
Correspondence to Tom Wilson.
Ethics approval for this project was granted by the Melbourne School of Population and Global Health Human Ethics Advisory Group (ID 2056346.1).
All authors read and approved the final version of the manuscript.
Additional file 1. Figure S1: Sexual identity questions in HILDA and the GSS. Table S1: Relative standard errors for sexual identity percentages in Table 1. Table S2: Relative standard errors for the sexual minority percentages in Table 2.
Béla Szőkefalvi-Nagy Medal 2002
Green's equivalences on noncommutative lattices
Graţiela Laslo, Jonathan Leech
Abstract. The equivalences ${\cal D}, {\cal L}$ and ${\cal R},$ defined initially on semigroups by J. A. Green, are used to study both noncommutative lattices and their congruence lattices, with particular attention given to the effects of assuming that some or all of these equivalences are congruences. Several specialized classes of noncommutative lattices are considered, including some that are simple algebras. Occurrences of distribution in noncommutative lattices as well as their congruence lattices are also considered.
AMS Subject Classification (1991): 06F05, 08A30, 20M10
Keyword(s): noncommutative lattices, congruences, Green's equivalences
Received February 7, 2001, and in revised form January 26, 2002. (Registered under 2854/2009.)
Boolean systems of relations and Galois connections
Ferdinand Börner, Reinhard Pöschel, Vitaly Sushchansky
Abstract. This paper contributes to the investigation of general relation algebras in connection with first order definable operations: e.g. Boolean systems of relations with projections (BSP) are algebras of relations closed with respect to set-theoretical operations definable by first order formulas without equality. As in the case of relational clones and Krasner algebras, BSP are Galois closed sets with respect to a Galois connection -- the strong invariance -- between operations (here unary operations) and relations. They can internally be described also as extensions of Krasner algebras. Variations of the first order formulas under considerations lead to several Galois connections the Galois closed elements of which are also completely characterized. In a unified setting instead of unary functions we use multifunctions as objects corresponding to relations w.r.t. the Galois connection.
AMS Subject Classification (1991): 08A02, 03G99, 06A15
Keyword(s): relation algebra, Galois connection, strongly invariant relation, first order logic without equality, multifunction
Received April 3, 1998, and in revised form March 30, 2001. (Registered under 2855/2009.)
Intervals of collapsing monoids
Miklós Dormán
Abstract. We present some new families of collapsing monoids. These monoids form large intervals in the submonoid lattices of the full transformation semigroups. Some of these intervals have cardinalities $\ge2^{2^{cn}}$ where $n$ is the size of the base set.
Received March 28, 2001, and in revised form July 25, 2001. (Registered under 2856/2009.)
Dualisability and algebraic constructions
J. G. Pitkethly
Abstract. We show that there are many natural algebraic constructions under which dualisability is not always preserved. In particular, we exhibit two dualisable unary algebras whose product is not dualisable.
AMS Subject Classification (1991): 08A60, 08C15, 18A40
Keyword(s): Natural duality, dualisability, unary algebra
Received December 8, 2000, and in final form March 16, 2001. (Registered under 2857/2009.)
A non-trivial congruence implication between identities weaker than modularity
Paolo Lipparini
Abstract. We use some commutator theory together with a recent result by K. Kearnes and A. Szendrei in order to provide a non-trivial implication between two congruence identities strictly weaker than modularity.
AMS Subject Classification (1991): 08B99, 06B20, 08A30, 08B10
Keyword(s): Congruence (lattice) identities, congruence modularity, weak difference term, commutator of congruences
Received April 29, 1999, and in revised form April 23, 2000. (Registered under 2858/2009.)
Clone segment independence in topology and algebra
J. Sichler, V. Trnková
Abstract. This is a complete characterization of all possible simultaneous relations between the clones of uniformly continuous maps of two metric spaces and the respective clones of their continuous maps, in terms of the equality, isomorphism and elementary equivalence of their initial clone segments. In conjunction with earlier results, the apparatus introduced here gives a full characterization of the equality, isomorphism and elementary equivalence of clone segments for two topological spaces and their various lower and upper modifications, and a similar characterization of the segments of centralizer clones for two algebras with at least three non-nullary operations and their respective reducts.
AMS Subject Classification (1991): 54C05, 08C05
Keyword(s): clone, clone segment, finitary algebraic theory, subtheory, functors preserving finite products, algebras and their reducts, categories of universal algebras, categories of uniform or topological spaces, topological modifications, equality of clone segments, isomorphism of clone segments, elementary equivalence of clone segments
Received May 9, 2001, and in revised form April 7, 2002. (Registered under 2859/2009.)
Characterization of CNS trinomials
Horst Brunotte
Abstract. Trinomials which define canonical number systems are characterized in terms of their coefficients.
AMS Subject Classification (1991): 11R04, 11R16, 11R21, 12D99
Received February 12, 2001, and in revised form November 8, 2001. (Registered under 2860/2009.)
On the splitting over normal subgroups with abelian Sylow 2-subgroups
Radoš Bakić
Abstract. Using the characterization of groups with abelian Sylow 2-subgroups, we deduce some splitting criteria. The main result is: if $G$ is a group with abelian Sylow 2-subgroups without non-trivial solvable factor groups and without non-trivial solvable normal subgroups, then any extension of $G$ splits over $G$. Also, we give new proofs of some known theorems about splitting over normal subgroups with abelian Sylow subgroups.
AMS Subject Classification (1991): 20D40, 20F17
Received March 25, 2001, and in revised form April 12, 2002. (Registered under 2861/2009.)
Embedding relations of classes of numerical sequences
Abstract. It is proved that the class of quasi-monotonic sequences with the additional assumption $\Sigma c_n/n < \infty $ is not comparable to the class of $\delta $-quasi-monotonic sequences with the assumption $\Sigma n^\gamma\delta _n < \infty $, $\gamma >0$; furthermore none of them is comparable to the class of sequences of rest bounded variation.
AMS Subject Classification (1991): 26D15, 40-99, 42A20
Keyword(s): Inequalities, embedding relations, sums, $\delta$-quasi monotone sequences, $R^+_0 BV$-sequences, sine and cosine series
A divergence criterion and an elementary proof of the divergence of ergodic averages along special subsequences
Minh Dzung Ha
Abstract. Consider ${\bf T}=\{z \in {\bf C}:|z|=1\}$, the unit circle with the usual normalized arc-length measure ${\cal L}$. We give a simple sufficient condition (a Divergence Criterion), with a completely self-contained and elementary proof, for the divergence of ergodic averages along subsequences in ${\bf N}$. As an application, we give a very elementary argument of the following result. Let $(n_k)_1^\infty$ be any increasing sequence in ${\bf N}$ with strictly increasing gaps, i.e., $n_{k+1}-n_{k}>n_{k}-n_{k-1}, k\geq 2$. Let $0<\rho<1$ be given. Then there exists an ergodic rotation $\tau \colon {\bf T}\to {\bf T}$ such that for any given $\epsilon >0$, there are infinitely many $f \in L^\infty({\bf T})$ satisfying $$ {\cal L}\Big( \big\{z \in {\bf T}: \overline{\lim}{1 \over l}\sum_{k=1}^{l}f \circ \tau^{10^{n_k}}(z)- \underline{\lim}{1 \over l}\sum_{k=1}^lf \circ\tau^{10^{n_k}}(z) \geq \rho \big\}\Big)\geq 1 -\epsilon.$$
Received March 27, 2001. (Registered under 2863/2009.)
On small solutions of second order linear differential equations with non-monotonous random coefficients
László Hatvani
Abstract. The equation $$x''+a^2(t)x=0, a(t):=a_k\ \hbox{ if }\ t_{k-1} \le t< t_k, \ \hbox{ for }\ k=1,2,\ldots $$ is considered, where the sequence $\{a_k\} ^\infty_{k=1}$ $(a_k>0, k=1,2,\ldots )$ is given, and $t_{k+1}-t_k$, $k=1,2,\ldots $ are totally independent random variables uniformly distributed on the interval $[0,1]$. The probabilities of the events $\gamma =0$, $\Gamma =0$, and $\Gamma >0$ are studied, where $$\gamma :=\liminf_{t\to\infty }\left(x^2(t)+{(x'(t))^2\over a(t)}\right ),\qquad \Gamma :=\limsup_{t\to\infty }\left(x^2(t)+{(x'(t))^2\over a(t)}\right ).$$
Received April 3, 2002, and in revised form October 21, 2002. (Registered under 2864/2009.)
Strongly resonant semilinear and quasilinear hemivariational inequalities
Leszek Gasiński, Nikolaos S. Papageorgiou
Abstract. In this paper we prove some abstract minimax principles for nonsmooth locally Lipschitz energy functionals and then we use those abstract results to study semilinear and quasilinear hemivariational inequalities at resonance. We permit the possibility of strong resonance at $\pm\infty $ and using a variational approach, based on the nonsmooth critical point theory of Chang, we prove the existence of nontrivial solutions and multiple solutions for semilinear and quasilinear hemivariational inequalities at resonance.
AMS Subject Classification (1991): 35J20, 35J85, 35R70
Keyword(s): hemivariational inequalities, strong resonance, locally Lipschitz functional, subdifferential, nonsmooth Cerami condition, critical point, minimax principle, nonsmooth Saddle Point Theorem, Ekeland variational principle, Rayleigh quotient, principal eigenvalue, p-Laplacian
Received March 3, 2000, and in revised form April 18, 2002. (Registered under 2865/2009.)
On Fomin and Fomin-type integrability and $L^1$-convergence classes
N. Tanović-Miller
Abstract. We show that four successive enlargements of the Sidon--Telyakovskii's class ${\cal ST}$, introduced as new integrability and $L^1$-convergence classes, are identical. For even trigonometric series, they coincide with the well-known even classes ${\cal F}_p$, $p>1$, introduced by Fomin in 1978. For general trigonometric series, they coincide with a Fomin-type integrability class introduced by F. Móricz in 1991. It is somewhat surprising that several `different' enlargements of ${\cal ST}$ should yield only equivalent and indeed more complicated descriptions of Fomin's and the Fomin-type classes. We also prove that the Fomin-type classes for general series, due to F. Móricz, are subclasses of $(dv^2)'$, one of the largest known integrability and $L^1$-convergence classes, and discuss other relationships between the known integrability classes. Furthermore, we show that the Fomin-type theorems for general series can be directly deduced from Fomin's original results for even, i.e. cosine series.
Received June 5, 2000, and in final form November 6, 2001. (Registered under 2866/2009.)
A uniqueness theorem for Rademacher series
Kaoru Yoneda
Abstract. A generalized uniqueness problem for Rademacher series has been posed and solved.
Keyword(s): Rademacher function, Uniqueness
Received May 8, 2001, and in revised form August 1, 2001. (Registered under 2867/2009.)
Pointwise Fourier inversion on rank one compact symmetric spaces using Cesàro means
Francisco Javier González Vieli
Abstract. Conditions for pointwise Fourier inversion using Cesàro means of a given order are established on rank one compact symmetric spaces.
Strongly harmonic operators
Janko Bračič
Abstract. A bounded linear operator $ T, $ respectively an $n$-tuple $ T $ of commuting bounded operators, on a complex Banach space $ {\cal X} $ is strongly harmonic if it is contained in a unital commutative strongly harmonic closed subalgebra $ {\cal A} \subset B({\cal X}). $ Every strongly harmonic operator is decomposable in the sense of Foiaş and every strongly harmonic $n$-tuple is decomposable in the sense of Frunză. On the other hand, it is proven that the class of strongly harmonic operators is quite large and that operators in this class have very nice properties. If an elementary operator is determined by two strongly harmonic $ n$-tuples, then it is strongly harmonic, and its local spectra are in a simple connection with the analytic local spectra of $2n$-tuple of the coefficients.
AMS Subject Classification (1991): 47B40, 47B47, 47B48
Received February 27, 2001, and in revised form April 23, 2001. (Registered under 2869/2009.)
Rates of merge in generalized St.Petersburg games
Sándor Csörgő
Abstract. Even though there are no asymptotic distributions in the usual sense, we show that the distribution functions of the suitably centered and normed cumulative winnings in a full sequence of generalized St.Petersburg games merge together uniformly with completely specified semistable infinitely divisible distribution functions at certain fast rates, depending upon the tail parameter of the game.
AMS Subject Classification (1991): 60F05, 60E07, 60G50
Received February 12, 2002, and in final form June 25, 2002. (Registered under 2870/2009.)
Exact Hausdorff measure of the graph of Brownian motion on the Sierpiński gasket
Jun Wu, Yimin Xiao
Abstract. Let $X=\{X(t), t\geq0, {\msbm P}^x, x \in G \} $ be the Brownian motion on the Sierpiński gasket $G$. We prove that there exist two positive constants $c$ and $C$ such that for every $x \in G$, ${\msbm P}^x$-a.s. for all $t \in[0, \infty )$, we have $ ct \leq\varphi-m({\rm Gr}(X[0,t]))\leq Ct$, where ${ \rm Gr}X([0,t])=\{(s, X(s)): 0 \leq s \leq t \} $ is the graph set of $X$, $$\varphi(s)=s^{1+ \log3/\log2 - \log3/\log5}(\log\log {1}/{s})^{ \log3/\log5}, s \in(0, {1}/{8}],$$ and $\varphi $-$m$ denotes Hausdorff $\varphi $-measure.
AMS Subject Classification (1991): 60G17, 60J60, 28A78
Keyword(s): Brownian motion on the Sierpiński gasket, Hausdorff measure, graph
Received April 23, 2001, and in revised form October 24, 2001. (Registered under 2871/2009.)
György Pollák's work on the theory of semigroup varieties: its significance and its influence so far*
M. V. Volkov
Abstract. We survey eleven papers by György Pollák published from 1973 to 1989 and devoted to various aspects of the theory of semigroup varieties: hereditarily finitely based varieties, permutation identities, covers in varietal lattices.
AMS Subject Classification (1991): 20M07, 08B05, 08B15
On certain equations in free groups*
Piroska Csörgő, Benjamin Fine, Gerhard Rosenberger
Abstract. We prove that if $\{x,y\},\{u,v\} $ are two sets of generating pairs for a free group $F$ satisfying the equation $ [x,y^n] = [u,v^m]$ then $n = m$. Further if $n = m \ge2$ then $y$ is conjugate in $F$ to $v^{\pm1}$. This theorem arose from a question concerning Schottky groups. The method of proof is used to consider certain related equations in free groups and generalizations to genus one Fuchsian groups.
AMS Subject Classification (1991): 20E05
Keyword(s): Free Groups, Equations, Test Elements, Schottky Groups
Received January 19, 2001, and in revised form July 12, 2001. (Registered under 2874/2009.)
On convergent interpolatory processes associated with periodic-basis functions*
F. J. Narcowich, N. Sivakumar, J. D. Ward
Abstract. A periodic-basis function (PBF) is a function of the form $$ \phi(u)=\sum_{k\in{\msbm Z}}\widehat{\phi }(k) e^{iku}, u\in{\msbm R}, $$ where the sequence of Fourier coefficients $\{\widehat{\phi }(k) : k\in{\msbm Z}\} $ satisfies the following conditions: $$ \widehat{\phi }(k)=\widehat{\phi }(-k), k\in{\msbm Z}, \hbox{ and } \sum_{k\in{\msbm Z}}|\widehat{\phi }(k)|< \infty. $$ A PBF $\phi $ is said to be strictly positive definite if every Fourier coefficient of $\phi $ is positive. It is known that if $\phi $ is strictly positive definite, then given any continuous $2\pi $-periodic function $f$ and any triangular array $\{\theta_{j,\mu } : 1\le j\le\mu, \mu\in {\msbm N}\} $ of distinct points in $[-\pi,\pi )$, there exists a unique PBF interpolant $ I(\theta ):= \sum_{j=1}^\mu a_j\phi(\theta -\theta_{j,\mu })$, $a_j\in{\msbm R}$, such that $ I(\theta_{k,\mu })=f(\theta_{k,\mu })$, $1\le k\le\mu $. This paper studies the uniform convergence of these PBF interpolants to the approximand $f$. Even though there is a rather well-developed theory which supplies various results of this nature, it also has the shortcoming that if $\phi $ is very smooth, then the class of functions $f$ which can be simultaneously approximated and interpolated by PBF interpolants is highly restricted. The primary objective of this paper is to suggest an oversampling strategy to overcome this problem. Specifically, it is shown that by increasing the dimension of the underlying space of approximants/interpolants judiciously, one can construct PBF interpolants (based on very smooth $\phi $) that converge to approximands which are only assumed to be continuous. The main tool in the analysis is a periodic version of a result of Szabados on algebraic polynomials, the proof of which relies on the trigonometric version of a fundamental theorem due to Erdős.
Received November 21, 2000, and in revised form July 17, 2001. (Registered under 2875/2009.)
Bernstein inequalities for polynomials with constrained roots*
Tamás Erdélyi, József Szabados
Abstract. We prove Bernstein type inequalities for algebraic polynomials on the interval $I:=[-1,1]$ and for trigonometric polynomials on {\msbm R} when the roots of the polynomials are outside of a certain domain of the complex plane. The cases of real vs. complex coefficients are handled separately. In case of trigonometric polynomials with real coefficients and root restriction, the $L_p$-situation will also be considered. In most cases, the sharpness of the estimates will be shown.
Received February 26, 2001, and in revised form August 8, 2001. (Registered under 2876/2009.)
Koliha--Drazin invertible operators and commuting Riesz perturbations*
Vladimir Rakočević
Abstract. A bounded linear operator in a Banach space is called Koliha--Drazin invertible (generalized Drazin invertible) if ${0}$ is not an accumulation point of its spectrum. In this paper the main result is the stability of the Koliha--Drazin invertible operators with finite nullity under commuting Riesz operator perturbations. We also generalize some recent results of Castro, Koliha and Wei, and characterize the perturbation of the Koliha--Drazin invertible operators with essentially equal eigenprojections at zero.
Keyword(s): generalized Drazin inverse, perturbation, Riesz operator
Received January 2, 2001, and in revised form March 26, 2001. (Registered under 2877/2009.)
Hypercyclic and supercyclic cohyponormal operators*
Nathan S. Feldman, Vivien Glass Miller, Thomas L. Miller
Abstract. We give a sufficient condition involving local spectra for an operator on a separable Banach space to be hypercyclic. Similar conditions are given for supercyclicity. These spectral conditions allow us to characterize the hyponormal operators with hypercyclic adjoints and those with supercyclic adjoints.
AMS Subject Classification (1991): 47A10, 47A11, 47A16, 47B20, 47B40
Keyword(s): Hypercyclic, supercyclic, hyponormal, properties $(\beta )$ and $(\delta )$
Received February 7, 2001, and in final form October 2, 2001. (Registered under 2878/2009.) | CommonCrawl |
Tag: Evolutionary Genetics
A Kimura Age to the Kern-Hahn Era: neutrality & selection
Posted on November 9, 2018 by Razib Khan
I'm pretty jaded about a lot of journalism, mostly due to the incentives in the industry driven by consumers and clicks. But Quanta Magazine has a really good piece out, Theorists Debate How 'Neutral' Evolution Really Is. It hits all the right notes (you can listen to one of the researchers quoted, Matt Hahn, on an episode of my podcast from last spring).
As someone who is old enough to remember reading about the 'controversy' more than 20 years ago, it's interesting to see how things have changed and how they haven't. We have so much more data today, so the arguments are really concrete and substantive, instead of shadow-boxing with strawmen. And yet still so much of the disagreement seems to hinge on semantic shadings and understandings even now.
But, as Richard McElreath suggested on Twitter, part of the issue is that ultimately Neutral Theory might not even be wrong. It simply tries to shoehorn too many different things into a simple and seductively elegant null model when real biology is probably more complicated than that. With more data (well, exponentially more data) and computational power, biologists don't need to collapse all the complexity of evolutionary process across the tree of life into one general model, so they aren't.
Let me finish with a quote from Ambrose, Bishop of Milan, commenting on the suffocation of the Classical religious rites of Late Antiquity:
It is undoubtedly true that no age is too late to learn. Let that old age blush which cannot amend itself. Not the old age of years is worthy of praise but that of character. There is no shame in passing to better things.
A historical slice of evolutionary genetics
Posted on October 12, 2018 by Razib Khan
A few friends pointed out that I likely garbled my attribution of who the guiding forces behind the "classical" and "balance" schools were in the post below (Muller & Dobzhansky, as opposed to Fisher & Wright as I said). I'll probably do some reading and update the post shortly…but it did make me reflect that in the hurry to keep up on the current literature it is easy to lose historical perspective and muddle what one had learned.
Of course on some level science is not as dependent on history as many other disciplines. The history is "baked-into-the-cake." This is clear when you read The Origin of Species. But if you are interested in a historical and sociological perspective on science, with a heavy dose of narrative biography, I highly recommend Ullica Segerstrale's Defenders of the Truth: The Battle for Science in the Sociobiology Debate and Beyond and Nature's Oracle: The Life and Work of W.D. Hamilton.
Defenders of the Truth in particular paints a broad and vivid picture of a period in the 1960s and later into the 1970s when evolutionary thinkers began to grapple with ideas such as inclusive fitness. E. O. Wilson's Sociobiology famously triggered a counter-reaction by some intellectuals (Wilson was also physically assaulted at the 1978 AAAS meeting). Characters such as Noam Chomsky make cameo appearances.
Segerstrale's Nature's Oracle focuses particularly on the life and times of W. D. Hamilton, though if you want that at high speed and max density, read Narrow Roads of Gene Land, Volume 2. Because Hamilton died before the editing phase, the biographical text is relatively unexpurgated. Hamilton also makes an appearance in The Price of Altruism: George Price and the Search for the Origins of Kindness.
The death of L. L. Cavalli-Sforza reminds us that the last of the students of the first generation of population geneticists are now passing on. With that, a great deal of history is going to be inaccessible. The same is not yet true of the acolytes of W. D. Hamilton, John Maynard Smith, or Robert Trivers.
Idle theories are the devil's workshop
Posted on February 28, 2018 by Razib Khan
In the 1970s Richard C. Lewontin wrote about how the allozyme era finally allowed for the testing of theories which had long been perfected and refined but lay unused like elegant machines without a task. Almost immediately the empirical revolution that Lewontin began in the 1960s kickstarted debates about the nature of selection and neutrality on the molecular level, now that molecular variation was something they could actually explore.
This led to further debates between "neutralists" and "selectionists." Sometimes the debates were quite acrimonious and personal. The most prominent neutralist, Motoo Kimura, took deep offense to the scientific criticisms of the theoretical population geneticist John Gillespie. The arguments around neutral theory in the 1970s eventually spilled over into other areas of evolutionary biology, and prominent public scientists such as Richard Dawkins and Stephen Jay Gould got pulled into it (neither of these two were population geneticists or molecular evolutionists, so one wonders what they truly added besides bluster and publicity).
Today we do not have these sorts of arguments from what I can tell. Why? I think it is the same reason that is the central thesis of Benjamin Friedman's The Moral Consequences of Economic Growth. In it, the author argues that liberalism, broadly construed, flourishes in an environment of economic growth and prosperity. As the pie gets bigger zero-sum conflicts are attenuated.
What's happened in empirical studies of evolutionary biology over the last decade or so is that in genetics a surfeit of genomic data has swamped the field. Some scholars have even suggested that in evolutionary genomics we have way more data than can be analyzed or understood (in contrast to medical genomics, where more data is still useful and necessary). Scientists still have disagreements, but instead of bickering or posturing, they've been trying to dig out from the under the mountain of data.
It's easy to be gracious to your peers when you're rich in data….
Synergistic epistasis as a solution for human existence
Posted on May 6, 2017 by Razib Khan
Epistasis is one of those terms in biology which has multiple meanings, to the point that even biologists can get turned around (see this 2008 review, Epistasis — the essential role of gene interactions in the structure and evolution of genetic systems, for a little background). Most generically epistasis is the interaction of genes in terms of producing an outcome. But historically its meaning is derived from the fact that early geneticists noticed that crosses between individuals segregating for a Mendelian characteristic (e.g., smooth vs. curly peas) produced results conditional on the genotype of a secondary locus.
Molecular biologists tend to focus on a classical, and often mechanistic, view, whereby epistasis can be conceptualized as biophysical interactions across loci. But population geneticists utilize a statistical or evolutionary definition, where epistasis describes the extent of deviation from additivity and linearity, with the "phenotype" often being fitness. This goes back to early debates between R. A. Fisher and Sewall Wright. Fisher believed that in the long run epistasis was not particularly important. Wright eventually put epistasis at the heart of his enigmatic shifting balance theory, though according to Will Provine in Sewall Wright and Evolutionary Biology even he had a difficult time understanding the model he was proposing (e.g., Wright couldn't remember what the different axes on his charts actually meant all the time).
These different definitions can cause problems for students. A few years ago I was a teaching assistant for a genetics course, and the professor, a molecular biologist asked a question about epistasis. The only answer on the key was predicated on a classical/mechanistic understanding. But some of the students were obviously giving the definition from an evolutionary perspective! (e.g., they were bringing up non-additivity and fitness) Luckily I noticed this early on and the professor approved the alternative answer, so that graders would not mark those using a non-molecular answer down.
My interest in epistasis was fed to a great extent in the middle 2000s by my reading of Epistasis and the Evolutionary Process. Unfortunately not too many people read this book. I believe this is so because when I just went to look at the Amazon page it told me that "Customers who viewed this item also viewed" Robert Drews' The End of the Bronze Age. As it happened I read this book at about the same time as Epistasis and the Evolutionary Process…and to my knowledge I'm the only person who has a very deep interest in statistical epistasis and Mycenaean Greece (if there is someone else out there, do tell).
In any case, when I was first focused on this topic genomics was in its infancy. Papers with 50,000 SNPs in humans were all the rage, and the HapMap paper had literally just been published. A lot has changed.
So I was interested to see this come out in Science, Negative selection in humans and fruit flies involves synergistic epistasis (preprint version). Since the authors are looking at humans and Drosophila and because it's 2017 I assumed that genomic methods would loom large, and they do.
And as always on the first read through some of the terminology got confusing (various types of statistical epistasis keep getting renamed every few years it seems to me, and it's hard to keep track of everything). So I went to Google. And because it's 2017 a citation of the paper and further elucidation popped up in Google Books in Crumbling Genome: The Impact of Deleterious Mutations on Humans. Weirdly, or not, the book has not been published yet. Since the author is the second to last author on the above paper it makes sense that it would be cited in any case.
So what's happening in this paper? Basically they are looking for reduced variance in the count of really bad mutations, because a particular type of epistasis amplifies their deleterious impact (fitness is almost always really hard to measure, so you want to look at proxy variables).
De novo mutations are rare; the authors estimate that about 7 per individual land in functional regions of the genome (I think this may be high actually), and because they are rare, roughly independent events, the distribution of their counts should be Poisson. This distribution just tells you that the mean number of mutations and the variance of the number of mutations should be the same (e.g., if the mean is 5, the variance should be 5).
Epistasis refers (usually) to interactions across loci. That is, different genes at different locations in the genome. Synergistic epistasis means that the total cumulative fitness after each successive mutation drops faster than the sum of the negative impact of each mutation. In other words, the negative impact is greater than the sum of its parts. In contrast, antagonistic epistasis produces a situation where new mutations on the tail of the distributions cause a lower decrement in fitness than you'd expect through the sum of its parts (diminishing returns on mutational load when it comes to fitness decrements).
These two dynamics have an effect on the linkage disequilibrium (LD) statistic. This measures the association of two different alleles at two different loci. When populations are recently admixed (e.g., Brazilians) you have a lot of LD, because racial ancestry results in lots of distinctive alleles being associated with each other across genomic segments in haplotypes. It takes many generations for recombination to break apart these associations, so that the allelic state at one locus can't be used to predict the odds of the state at what was an associated locus. What synergistic epistasis does is disassociate deleterious mutations. In contrast, antagonistic epistasis results in increased association of deleterious mutations.
Why? Because of selection. If a greater number of mutations means huge fitness hits, then there will be strong selection against individuals who randomly segregate out with higher mutational loads. This means that the variance of the mutational load is going to be lower than the value of the mean.
How do they figure out mutational load? They focus on the distribution of loss-of-function (LoF) mutations. These are extremely deleterious mutations which are the most likely to be a major problem for function and therefore a huge fitness hit. What they found was that the distribution of LoF mutation counts exhibited a variance which was 90–95% of that expected under a null Poisson distribution. In other words, there was stronger selection against individuals with high mutation counts, as one would predict under synergistic epistasis.
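To make the Poisson-versus-selection logic concrete, here's a minimal Python sketch (my own toy illustration, nothing from the paper itself): draw per-individual counts of deleterious de novo mutations from a Poisson distribution, apply a made-up synergistic-epistasis fitness function in which log-fitness drops faster than linearly with the mutation count, and compare the variance/mean ratio before and after selection. All the parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)

N = 200_000           # individuals
mean_muts = 7.0       # expected deleterious de novo mutations per genome
s, eps = 0.01, 0.005  # per-mutation cost and synergistic term (arbitrary)

# Null model: counts are Poisson, so variance ~= mean.
k = rng.poisson(mean_muts, size=N)
print("before selection: mean=%.2f var=%.2f ratio=%.2f"
      % (k.mean(), k.var(), k.var() / k.mean()))

# Toy synergistic epistasis: log-fitness falls faster than linearly in k,
# so individuals carrying many mutations are disproportionately removed.
w = np.exp(-(s * k + eps * k ** 2))
survivors = k[rng.random(N) < w / w.max()]
print("after selection:  mean=%.2f var=%.2f ratio=%.2f"
      % (survivors.mean(), survivors.var(), survivors.var() / survivors.mean()))
```

If you set `eps = 0.0` the selection becomes purely multiplicative, the surviving counts stay exactly Poisson, and the ratio snaps back to ~1; a sub-Poisson variance really is the fingerprint of the synergistic term, which is the signal the authors are chasing.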
They conclude:
Thus, the average human should carry at least seven de novo deleterious mutations. If natural selection acts on each mutation independently, the resulting mutation load and loss in average fitness are inconsistent with the existence of the human population (1 − e−7 > 0.99). To resolve this paradox, it is sufficient to assume that the fitness landscape is flat only outside the zone where all the genotypes actually present are contained, so that selection within the population proceeds as if epistasis were absent (20, 25). However, our findings suggest that synergistic epistasis affects even the part of the fitness landscape that corresponds to genotypes that are actually present in the population.
Overall this is fascinating, because evolutionary genetic questions which were still theoretical a little over ten years ago are now being explored with genomic methods. This is part of why I say genomics did not fundamentally revolutionize how we understand evolution. There were plenty of models and theories. Now we are testing them extremely robustly and thoroughly.
Addendum: Reading this paper reinforces to me how difficult it is to keep up with the literature, and how important it is to know the literature in a very narrow area to get the most out of a paper. Really the citations are essential reading for someone like me who just "drops" into a topic after a long time away….
Citation: Science, Negative selection in humans and fruit flies involves synergistic epistasis.
Posted in Evolution, Genetics, Genomics | Tagged Epistasis, Evolutionary Genetics
Why the rate of evolution may only depend on mutation
Posted on April 23, 2017 April 24, 2017 by Razib Khan
Sometimes people think evolution is about dinosaurs.
It is true that natural history plays an important role in inspiring and directing our understanding of evolutionary process. Charles Darwin was a natural historian, and evolutionary biologists often have strong affinities with the natural world and its history. Though many people exhibit a fascination with the flora and fauna around us during childhood, often the greatest biologists retain this wonderment well into adulthood (if you read W. D. Hamilton's collections of papers, Narrow Roads of Gene Land, which have autobiographical sketches, this is very evidently true of him).
But another aspect of evolutionary biology, which began in the early 20th century, is the emergence of formal mathematical systems of analysis. So you have fields such as phylogenetics, which have gone from intuitive and aesthetic trees of life, to inferences made using the most new-fangled Bayesian techniques. And, as told in The Origins of Theoretical Population Genetics, in the 1920s and 1930s a few mathematically oriented biologists built much of the formal scaffold upon which the Neo-Darwinian Synthesis was constructed.
The product of evolution
At the highest level of analysis evolutionary process can be described beautifully. Evolution is beautiful, in that its end product generates the diversity of life around us. But a formal mathematical framework is often needed to clearly and precisely model evolution, and so allow us to make predictions. R. A. Fisher's aim when he wrote The Genetical Theory of Natural Selection was to create for evolutionary biology something equivalent to the laws of thermodynamics. I don't really think he succeeded in that, though there are plenty of debates around something like Fisher's fundamental theorem of natural selection.
But the revolution of thought that Fisher, Sewall Wright, and J. B. S. Haldane unleashed has had real yields. As geneticists they helped us reconceptualize evolutionary process not simply as heritable morphological change, but as an analysis of the units of heritability themselves: genetic variation. That is, evolution can be imagined as the study of the forces which shape changes in allele frequencies over time. This reduces a big domain down to a much simpler one.
Genetic variation is the concrete currency with which one can track evolutionary process. Initially this was done via inferred correlations between marker traits and particular genes in breeding experiments. Ergo, the origins of "the fly room".
But with the discovery of DNA as the physical substrate of genetic inheritance in the 1950s the scene was set for the revolution in molecular biology, which also touched evolutionary studies with the explosion of more powerful assays. Lewontin & Hubby's 1966 paper triggered an order of magnitude increase in our understanding of molecular evolution through both theory and results.
The theoretical side occurred in the form of the development of the neutral theory of molecular evolution, which also gave birth to the nearly neutral theory. Both of these theories hold that most of the molecular variation within and between species is due to random processes. In particular, genetic drift. As a null hypothesis neutrality was very dominant for the past generation, though in recent years some researchers are suggesting that selection has been undervalued as a parameter for various reasons.
Setting aside the live scientific debates, which continue to this day, one of the predictions of neutral theory is that the rate of evolution will depend only on the rate of mutation. More precisely, the rate of substitution of new neutral mutations (where an allele goes from a single copy to fixation at ~100% frequency) is simply equal to the rate at which new mutations arise. Population size doesn't matter.
The algebra behind this is straightforward.
First, remember that the frequency of a new mutation within a population is $\frac{1}{2N}$, where $N$ is the population size (the $2$ is because we're assuming diploid organisms with two gene copies per individual). This is also the probability of fixation of a new mutation in a neutral scenario; its fixation probability is just equal to its initial frequency (it's a random walk between proportions of 0 and 1.0). The rate of mutation is defined by $\mu$, the expected number of mutations at a given site per generation (this is a pretty small value; for humans it's on the order of $10^{-8}$). Again, there are $2N$ gene copies, so you expect $2N\mu$ new mutations per generation.
The probability of fixation of a new mutations multiplied by the number of new mutations is:
\[
\frac{1}{2N} \times 2N\mu = \mu
\]
So there you have it. The rate of fixation of these new mutations is just a function of the rate of mutation.
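If the algebra feels too slick, a crude Wright–Fisher simulation will show the same thing numerically. This is my own quick sketch (tiny population, pure drift, no selection), so treat the numbers as illustrative: a new neutral mutation should fix with probability about 1/(2N), which is exactly the ingredient that makes the substitution rate collapse to the mutation rate.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100        # diploid population size, so 2N = 200 gene copies
reps = 20_000  # independent new mutations to track

fixed = 0
for _ in range(reps):
    p = 1.0 / (2 * N)                      # a single new neutral copy
    while 0.0 < p < 1.0:
        # binomial resampling of 2N gene copies each generation (drift)
        p = rng.binomial(2 * N, p) / (2 * N)
    if p == 1.0:
        fixed += 1

print("observed fixation probability: %.4f" % (fixed / reps))
print("expected 1/(2N):               %.4f" % (1.0 / (2 * N)))
```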
Simple formalisms like this have a lot more gnarly math that extends them and from which they derive. But they're often pretty useful for gaining a general intuition of evolutionary processes. If you are genuinely curious, I would recommend Elements of Evolutionary Genetics. It's not quite a core dump, but it is a way you can borrow the brains of two of the best evolutionary geneticists of their generation.
Also, you will be able to answer the questions on my survey better the next time!
Posted in Genetics, Uncategorized | Tagged Evolutionary Genetics, Population Genetics | 9 Comments on Why the rate of evolution may only depend on mutation
Fisherianism in the genomic era
Posted on April 12, 2017 by Razib Khan
There are many things about R. A. Fisher that one could say. Professionally he was one of the founders of evolutionary genetics and statistics, and arguably the second greatest evolutionary biologist after Charles Darwin. With his work in the first few decades of the 20th century he reconciled the quantitative evolutionary framework of the school of biometry with mechanistic genetics, and formalized evolutionary theory in The Genetical Theory of Natural Selection.
He was also an asshole. This is clear in the major biography of him, R.A. Fisher: The Life of a Scientist. It was written by his daughter. But The Lady Tasting Tea: How Statistics Revolutionized Science in the Twentieth Century also seems to indicate he was a dick. And W. D. Hamilton's Narrow Roads of Gene Land portrays Fisher as rather cold and distant, despite the fact that Hamilton idolized him.
Notwithstanding his unpleasant personality, R. A. Fisher seems to have been a veritable mentat in his early years. Much of his thinking crystallized in the first few decades of the 20th century, when genetics was a new science and mathematical methods were being brought to bear on a host of topics. It would be decades until DNA was understood to be the substrate of heredity. Instead of deriving from molecular first principles which were simply not known in that day, Fisher and his colleagues constructed a theoretical formal edifice which drew upon patterns of inheritance that were evident in lineages of organisms that they could observe around them (Fisher had a mouse colony which he utilized now and then to vent his anger by crushing mice with his bare hands). Upon that observational scaffold they placed a sturdy superstructure of mathematical formality. That edifice has been surprisingly robust down to the present day.
One of Fisher's frameworks which still gives insight is the geometric model of the fitness effects of mutations. If an organism is near its optimum of fitness, then large jumps in any direction will reduce its fitness. In contrast, small jumps have some probability of getting closer to the optimum of fitness. In plainer language, mutations of large effect are bad, and mutations of small effect are not as bad.
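Fisher's geometric argument is also easy to play with numerically. The sketch below is my own toy version with arbitrary parameters: put a phenotype at a fixed distance from the optimum in n dimensions, throw random mutations of a given size at it, and count how often they move it closer. Small mutations are beneficial about half the time, large ones almost never, and the penalty for size gets worse as the number of dimensions (a crude stand-in for organismal complexity) grows.

```python
import numpy as np

rng = np.random.default_rng(1)

def prob_beneficial(n_dims, step_size, dist_to_opt=1.0, trials=100_000):
    """Fraction of random mutations of length `step_size` that move an
    n-dimensional phenotype closer to the optimum at the origin."""
    z = np.zeros((trials, n_dims))
    z[:, 0] = dist_to_opt                      # current phenotype
    step = rng.normal(size=(trials, n_dims))   # random direction
    step *= step_size / np.linalg.norm(step, axis=1, keepdims=True)
    return (np.linalg.norm(z + step, axis=1) < dist_to_opt).mean()

for n in (2, 10, 50):
    for r in (0.05, 0.5, 1.5):
        print(f"n={n:3d}  step={r:4.2f}  P(beneficial)={prob_beneficial(n, r):.3f}")
```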
A new paper in PNAS loops back to this framework, Determining the factors driving selective effects of new nonsynonymous mutations:
Our study addresses two fundamental questions regarding the effect of random mutations on fitness: First, do fitness effects differ between species when controlling for demographic effects? Second, what are the responsible biological factors? We show that amino acid-changing mutations in humans are, on average, more deleterious than mutations in Drosophila. We demonstrate that the only theoretical model that is fully consistent with our results is Fisher's geometrical model. This result indicates that species complexity, as well as distance of the population to the fitness optimum, modulated by long-term population size, are the key drivers of the fitness effects of new amino acid mutations. Other factors, like protein stability and mutational robustness, do not play a dominant role.
In the title of the paper itself is something that would have been alien to Fisher's understanding when he formulated his geometric model: the term "nonsynonymous" to refer to mutations which change the amino acid corresponding to the triplet codon. The paper is understandably larded with terminology from the post-DNA and post-genomic era, and yet comes to the conclusion that a nearly blind statistical geneticist from about a century ago correctly adduced the nature of mutations' effects on fitness in organisms.
The authors focused on two species with different histories, but both well characterized in the evolutionary genomic literature: humans and Drosophila. The models they tested range from Fisher's geometrical model to mechanistic alternatives based on factors such as protein stability and mutational robustness.
Basically they checked the empirical distribution of the site frequency spectra (SFS) of the nonsynonymous variants against expected outcomes based on particular details of demographics, which were inferred from synonymous variation. Drosophila have effective population sizes orders of magnitude larger than humans, so if that is not taken into account, then the results will be off. There are also a bunch of simulations in the paper to check for robustness of their results, and they also caveat the conclusion with admissions that other models besides the Fisherian one may play some role in their focal species, and more in other taxa. A lot of this strikes me as accruing through the review process, and I don't have the time to replicate all the details to confirm their results, though I hope some of the reviewers did so (again, I suspect that the reviewers were demanding some of these checks, so they definitely should have in my opinion).
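For anyone who hasn't handled a site frequency spectrum before, it's just a histogram of how many variant sites have each derived-allele count in the sample. The toy Python sketch below (illustrative bookkeeping with random fake genotypes, not the inference pipeline the authors used) computes an unfolded SFS from a 0/1 genotype matrix; in the paper the synonymous SFS anchors the demographic model and the nonsynonymous SFS is then compared against the candidate models of fitness effects.

```python
import numpy as np

rng = np.random.default_rng(3)

# Fake genotype matrix: rows are variant sites, columns are 2n haploid samples,
# entries are 0 (ancestral) or 1 (derived). Real data would come from a VCF.
n_sites, n_haplotypes = 5_000, 40
genotypes = (rng.random((n_sites, n_haplotypes)) < 0.05).astype(int)

counts = genotypes.sum(axis=1)                           # derived-allele count per site
counts = counts[(counts > 0) & (counts < n_haplotypes)]  # drop monomorphic sites

# Unfolded SFS: number of sites with derived-allele count 1, 2, ..., 2n-1.
sfs = np.bincount(counts, minlength=n_haplotypes)[1:n_haplotypes]
print(sfs)
```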
In the Fisherian model more complex organisms are more fine-tuned due to pleiotropy and other such dynamics. So new mutations are more likely to deviate away from the optimum. This is the major finding that they confirmed. What does "complex" mean? The Drosophila genome is less than 10% of the human genome's size, but the migratory locust has twice as large a genome as humans, while wheat has a sequence more than five times as large. So genome size alone clearly isn't the right measure of complexity; but organism to organism, it does seem that Drosophila has less complexity than humans. And they checked with other organisms besides their two focal ones…though the genomes there are presumably not as complete.
As I indicated above, the authors believe they've checked for factors such as background selection, which may confound selection coefficients on specific mutations. The paper is interesting as much for the fact that it illustrates how powerful analytic techniques developed in a pre-DNA era were. Some of the models above are mechanistic, and require a certain understanding of the nature of molecular processes. And yet they don't seem as predictive as a more abstract framework!
Citation: Christian D. Huber, Bernard Y. Kim, Clare D. Marsden, and Kirk E. Lohmueller, Determining the factors driving selective effects of new nonsynonymous mutations PNAS 2017 ; published ahead of print April 11, 2017, doi:10.1073/pnas.1619508114
Posted in Genetics | Tagged Evolutionary Genetics | 1 Comment on Fisherianism in the genomic era
Quantifying contributions of chlorofluorocarbon banks to emissions and impacts on the ozone layer and climate
Megan Lickley, Susan Solomon, Sarah Fletcher, Guus J. M. Velders, John Daniel, Matthew Rigby, Stephen A. Montzka, Lambert J. M. Kuijpers & Kane Stone
Nature Communications volume 11, Article number: 1380 (2020)
Chlorofluorocarbon (CFC) banks from uses such as air conditioners or foams can be emitted after global production stops. Recent reports of unexpected emissions of CFC-11 raise the need to better quantify releases from these banks, and associated impacts on ozone depletion and climate change. Here we develop a Bayesian probabilistic model for CFC-11, 12, and 113 banks and their emissions, incorporating the broadest range of constraints to date. We find that bank sizes of CFC-11 and CFC-12 are larger than recent international scientific assessments suggested, and can account for much of current estimated CFC-11 and 12 emissions (with the exception of increased CFC-11 emissions after 2012). Left unrecovered, these CFC banks could delay Antarctic ozone hole recovery by about six years and contribute 9 billion metric tonnes of equivalent CO2 emission. Derived CFC-113 emissions are subject to uncertainty, but are much larger than expected, raising questions about its sources.
The Montreal Protocol to phase out production and consumption of ozone-depleting substances (ODS) has become one of the signature environmental success stories of the past century. Since entry into force in the late 1980s, the Protocol initiated global reductions and virtual cessation of new production of chlorofluorocarbons (CFCs) that dominate ozone depletion, first in developed and then developing nations, with all nations agreeing to essentially phase out production of CFC-11 and CFC-12 by 2010. Global actions have demonstrably avoided a world in which large ozone losses would have become widespread1 and there are signs that the ozone layer is beginning to recover2,3. Because CFCs have lifetimes of many decades to centuries, atmospheric chlorine loading and ozone loss from these chemicals declines only slowly even after emissions cease. Further, CFCs were produced for use in equipment, some of which have lifetimes of up to multiple decades. This implies that a bank of material could still exist, contributing to current and future CFC emissions. Recent measurements of CFC-11 indicate that emissions of this gas have increased despite global reports of near-zero production since 20104,5. This raises concerns regarding future ozone recovery3 and how much emission could still be coming from banks stored in equipment. CFCs are also effective greenhouse gases, contributing to climate change. Indeed, the Montreal Protocol, while motivated by safeguarding the ozone layer, also reduced global warming that would otherwise have occurred (with about five times the equivalent greenhouse gas emission impact that had been anticipated from the Kyoto Protocol by 20106).
A long-standing challenge in understanding the underlying causes of measured changes in ODS concentrations is in evaluating not just production and emission in a given year, but also the quantity of banked CFCs, subject to later release. In the 1970s, the majority of CFC emission was nearly immediate after production as most use was as spray can propellants, spray foam, and solvents, but as those uses were phased out, CFC use continued in applications designed to retain rather than release the material, such as refrigeration, air conditioning, and insulation foam blowing7, increasing the bank of material that can leak out later. The observation of unexpected CFC-11 emissions after the 2010 global production phase-out4 therefore highlights the need for the best possible understanding of how much CFC remains in banks worldwide and how much banks are contributing to current emissions and their changes over time. Continuing emissions from remaining banks are not prohibited under the Montreal Protocol, but recovery and destruction of unneeded CFC banks has been considered by policymakers as a means to both enhance ozone recovery and further safeguard the climate system as part of the Protocol8. The issue of additional production (potentially illegally or as an accidental by-product) is also a topic of scrutiny.
Previous work on evaluating banks focused on two primary methods, commonly referred to as top-down and bottom-up. In top-down analyses, bank magnitudes are obtained by the cumulative difference between global production (generally estimated from reported production values compiled by the United Nations Environment Programme (UNEP)) and emissions, estimated from observed mole fractions and an estimate of atmospheric destruction (a lifetime). Prior to 2006, this method had been the basis for international assessments of bank size, but it is sensitive to small biases in some variables. In bottom-up analyses, an inventory of sales of material and leakage rates in different applications such as refrigeration, industrial processes, air conditioning, closed and open cell foams is carefully tallied and considered at different stages of application use9. Extensive bottom-up inventories for banks as reported in the joint Intergovernmental Panel on Climate Change/Technology and Economic Assessment Panel special report (IPCC/TEAP, 2005)10 were much larger than top-down estimates in the World Meteorological Organization (WMO) assessment of the time7, raising important questions about why they differed and whether the benefits for ozone and climate of bank destruction policies might be greater than previously thought. A subsequent TEAP (2006) assessment11 suggested that some of the discrepancy could stem from longer lifetimes, a result supported by later stratospheric modeling analysis12. Post-2006 WMO estimates adopted the bottom-up values for 2008 and integrated forward to diminish the influence of lifetime errors on derived bank magnitudes.
By using the broadest range of constraints to date in a Bayesian framework, we estimate that banks of CFC-11 and 12 are likely to be substantially larger than recent scientific assessments suggested3, in part due to apparent underreporting of production. Current banks of these gases could delay ozone hole recovery by 6 years and contribute ~9 billion metric tonnes of equivalent CO2 emission. Further, our analysis better quantifies key discrepancies between observationally derived emissions and reported production and emission values. Namely, we find that recent increases in CFC-11 emissions as well as ongoing CFC-113 emissions are considerably larger than expectations from banks and other sources, implying added unanticipated contributions to climate change and ozone depletion.
Modeling framework
Here we introduce a new Bayesian probabilistic approach to assess bank sizes and changes in emissions for the three primary chlorofluorocarbons CFC-11, 12, and 113. Observed mole fractions of each gas, together with lifetime scenarios, are used to infer emissions. We develop a process-based model using production and equipment information to construct Bayesian prior distributions for bank and emissions estimates (representing a bottom-up approach). Observationally derived emissions are then treated as observations in Bayes' Theorem and used to develop posterior estimates for the simulated emissions and banks. Posterior distributions therefore represent bank and emissions estimates in which observationally derived emissions are used to constrain uncertainties in bottom-up methods. We call this Bayesian Parameter Estimation (BPE; see Methods). This approach aids in understanding the differences between past evaluations using top-down and bottom-up methods. We also examine how current understanding of the atmospheric lifetimes of these gases propagates into bank sizes and uncertainties. Differences between annual production and sales (e.g., stockpiling) are possible but not included here due to lack of quantitative information. Our analysis suggests a substantial amount (up to 90% in the 1990s) of CFC-11 and 12 production has gone into banks, while CFC-113 provides a useful contrast, as it is not subject to significant banking. Continuing CFC-113 production for feedstock use remains substantial under the Montreal Protocol, but Parties are urged to minimize leakage. We examine how factors such as potential unreported production, uncertainties in bank release rates, and atmospheric lifetime assumptions affect BPE bank estimates. Here we address the following questions: What are best estimates and uncertainties in emissions of banked CFC-11, 12, and 113? How much could the bank from pre-2010 production contribute to recent increased emissions of CFC-11? How have emissions from banks likely contributed to delaying ozone recovery relative to a scenario where banks were recovered, and how much could they contribute to future delays if they are left unrecovered? Finally, how will bank emissions contribute to climate change if they are not efficiently recovered?
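As a rough illustration of the flavor of this approach, the following is a deliberately simplified, single-gas, importance-sampling toy with invented placeholder numbers; it is not the authors' model, which is specified in the Methods. The idea is the same, though: draw production scaling, direct-emission and release-fraction parameters from priors, run a simple bank simulation, and weight each draw by how well its simulated emissions match observationally derived emissions, so that the weighted draws approximate a posterior for the bank.

```python
import numpy as np

rng = np.random.default_rng(7)

years = np.arange(1960, 2017)
# Invented placeholder series (Gg/yr); the real inputs are reported production
# and emissions derived from observed mole fractions and lifetimes.
reported_prod = np.interp(years, [1960, 1975, 1990, 2010, 2016],
                          [100, 350, 250, 0, 0])
obs_emiss = np.interp(years, [1960, 1975, 1990, 2010, 2016],
                      [80, 300, 260, 60, 55])
obs_sigma = 0.1 * obs_emiss + 5.0        # assumed 1-sigma uncertainty

n = 20_000
prod_scale = rng.normal(1.05, 0.05, n)   # prior: possible production under-reporting
de = rng.uniform(0.05, 0.30, n)          # prior: direct emission fraction (DE)
rf = rng.uniform(0.02, 0.15, n)          # prior: annual bank release fraction (RF)

log_w = np.empty(n)
bank_2016 = np.empty(n)
for i in range(n):
    bank = 0.0
    sim = np.empty(len(years))
    for t, p in enumerate(reported_prod * prod_scale[i]):
        direct = de[i] * p               # emitted in the year of production
        bank += p - direct               # the rest goes into the bank
        release = rf[i] * bank           # bank leaks a fixed fraction each year
        bank -= release
        sim[t] = direct + release
    log_w[i] = -0.5 * np.sum(((sim - obs_emiss) / obs_sigma) ** 2)
    bank_2016[i] = bank

w = np.exp(log_w - log_w.max())
w /= w.sum()
print("posterior mean 2016 bank (toy): %.0f Gg" % np.sum(w * bank_2016))
```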
Bayesian bank estimates and comparisons
Figure 1a, b shows how strongly top-down derived CFC-11 bank estimates depend on the assumed lifetime and production. In Fig. 1a, we compare the top-down estimate for two different lifetime assumptions: first we consider a constant atmospheric lifetime of 45 years, taken from WMO (2003) estimates, and second we assume the time-dependent Stratosphere–troposphere Processes And their Role in Climate (SPARC) multi-model mean (MMM) lifetime scenario, which averages 62.9 years over the period considered (see Supplementary Fig. 1 and Methods). With the constant, shorter lifetime, the bank would have been fully depleted by 2010, whereas with the longer and time-dependent SPARC lifetimes a bank estimate close to 1000 Gg of CFC-11 is obtained in 2010. Both of these scenarios assume the same production prior over time, illustrating how the assumed lifetime scenario impacts the inferred banks with the top-down method. A comparison of these two lifetime scenarios is further discussed below.
Fig. 1: Bank estimates and comparisons.
Comparison of banks derived from Bayesian Parameter Estimation (BPE) along with previously published values, top-down bank estimates, and alternative assumptions. a Top-down CFC-11 bank estimates assuming known lifetimes and reported production (see Eq. 2). Banks are derived using SPARC multi-model mean (MMM) time-varying atmospheric lifetimes (blue) and a constant lifetime of 45 years (red). b Top-down CFC-11 bank estimates assuming SPARC MMM time-varying lifetimes and three production scenarios: Reported production (blue), 1.05× reported production (red), and 1.1× reported production (yellow). For (a) and (b) production values are based on AFEAS and UNEP reported values (see Methods). c BPE-derived CFC-11 bank estimates assuming the SPARC MMM lifetime (blue) and constant lifetime of 45 years (red). The gray line is analogous to the blue line but production prior includes additional production to account for unexpected emissions from 2000 to 2016 (see Methods). d BPE-derived CFC-11 bank estimates assuming SPARC MMM time-varying lifetimes (average value of 62.9 years) shown in blue, and constant lifetime of 62.9 years is shown in red. Dashed lines are corresponding top-down bank estimates. e BPE-derived CFC-12 bank estimate assuming SPARC MMM lifetimes (average value of 112.9 years) shown in blue, and 100-year lifetime is shown in red. Dashed lines are corresponding top-down bank estimates. f BPE-derived CFC-113 bank estimates assuming SPARC MMM lifetimes (average value of 106.3 years) shown in blue, and 80-year lifetime is shown in red. Dashed lines are corresponding top-down bank estimates. The black line in (a–c), (d) and (f) is the WMO (2003) bank estimate. For (c)–(f), the BPE median estimates are shown using thin solid lines with the 95% confidence intervals indicated by corresponding shaded region. The markers in plots (c) and (e) indicate previously published bank estimates as follows: the green marker is from Ashford (2004)9, the red marker is from TEAP(2009)32, the black marker is from WMO(2018)3, where banks were derived beginning with TEAP(2009)32 estimates, and the pink marker is from TEAP (2019)33.
Figure 1b illustrates the effect of a consistently larger production estimate on bank size. Here we assume the SPARC MMM lifetime scenario and allow production to be as reported, 5% larger than reported, or 10% larger than reported. Due to the cumulative effect of production on bank size, a 5% increase in production results in a bank size in the top-down approach that is ~50% larger in 2011, whereas a 10% increase in production results in a bank size that is ~100% larger. The two figures underscore that the potential uncertainties in the banks are very large with the top-down approach.
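The fixed-input top-down bookkeeping that produces this sensitivity is just a cumulative sum, so the compounding effect of a small production bias can be seen in a few lines. The series below are invented placeholders; the real calculation uses AFEAS/UNEP reported production and emissions derived from observed mole fractions and lifetimes.

```python
import numpy as np

years = np.arange(1960, 2017)
production = np.interp(years, [1960, 1975, 1990, 2010, 2016],
                       [100, 350, 250, 0, 0])   # Gg/yr, placeholder
emissions = np.interp(years, [1960, 1975, 1990, 2010, 2016],
                      [80, 300, 260, 60, 55])   # Gg/yr, placeholder

def topdown_bank(prod, emiss):
    # Bank_t = cumulative sum of (production - emissions) since onset of production
    return np.cumsum(prod - emiss)

i2010 = int(np.flatnonzero(years == 2010)[0])
print("2010 bank, reported production: %.0f Gg" % topdown_bank(production, emissions)[i2010])
print("2010 bank, production +5%%:      %.0f Gg" % topdown_bank(1.05 * production, emissions)[i2010])
```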
None of the results shown in Fig. 1a, b make use of the uncertainties in observed CFC mole fractions, nor do they incorporate knowledge of the uncertainty ranges for a direct emissions factor (DE), release fractions (RF), or production, making it difficult to place any uncertainty on the results. Figure 1c, d shows the results of the BPE analysis for a range of assumed CFC-11 lifetimes (45 years, the SPARC MMM, and the time-averaged SPARC MMM of 62.9 years), and two production scenarios (one constructed with reported production, and one with additional unexpected production and emission starting in 2000; see Methods). While differences still exist between scenarios, the figure illustrates how uncertainties in the suite of inputs (including lifetime, production, observed concentrations, etc., see Eq. 4) better constrain the possible range in bank estimates compared to the fixed-input top-down approach.
Important factors in the differences in BPE bank size in the two atmospheric lifetime scenarios are the sensitivity to uncertainties in DE and RF. This is evident when comparing the posterior distributions of DE and RF for the two scenarios (shown in Supplementary Figs. 2 and 3). For the SPARC MMM lifetime scenario, both DE and RF posteriors are more noticeably skewed towards lower values from the prior distribution as compared to the constant lifetime scenario. This suggests some key differences in the behavior of the posteriors between lifetime scenarios: the constant lifetime scenario of 45 yrs is associated with higher emissions leading to relatively larger DE values during high production years when the bank is still accumulating, and then relatively larger RF during low production years when a larger proportion of emissions is coming from the bank. This relationship is also illustrated in the joint posterior distributions of the banks with DE and RF, respectively (see Supplementary Figs. 4–7). Supplementary Fig. 5 confirms that the bank size is correlated most strongly with RF towards later time periods, and with DE (albeit only slightly) in earlier time periods (Supplementary Fig. 4). This strong negative correlation between bank size and RF in recent decades is to be expected for two reasons. First, for the simulation model, a low RF would lead to a larger accumulation in the banks in earlier decades. Because RF has high autocorrelation, a low RF in earlier decades would be correlated with a low RF in recent decades as well, thus explaining the strong negative correlation between RF and Banks in the prior. And second, for the posterior, if the near-zero reported production in recent decades is accurate, then emissions must be entirely driven by the depletion of the banks, and thus controlled by RF (i.e. Emissions ≅ RF×Bank). Therefore, we would expect values on the ridge where RF×Bank are closer to the observationally derived emissions to have a higher likelihood than values further from the ridge.
The prior and posterior production distributions are shown in Supplementary Fig. 8. The most noticeable difference in posteriors between the two lifetime scenarios occurs in the 1980s where the SPARC MMM lifetime results in a lower production posterior than the constant lifetime scenario of 45 yrs. Importantly, our posterior estimates of production indicate that total production from 1955 to 2016 of CFC-11 has likely been 13% (1-sigma ≅ 3%) larger than the values used in previous scientific assessments, contributing further to the discrepancies between the BPE bank size and WMO (2003) bank estimates.
An important result illustrated in Fig. 1 is that the BPE bank for CFC-11 is broadly consistent with the bottom-up bank estimates9,10 with the BPE bank being the larger of the two. The great bulk of remaining CFC closed cell foams are thought to contain CFC-11, while remaining CFC in cooling systems is nearly all CFC-12 (this analysis and SROC13). Our analysis thus shows that the apparent contradiction between the bottom-up inventory assessment and the fixed-input top-down approach taken in scientific assessments up to the early 2000s can be reconciled when uncertainties are more extensively considered. It also implies that the total amount of material in the banks is indeed very likely to be much larger than thought by the best international WMO/UNEP scientific assessments in the late 1990s and early 2000s, both because of updated lifetimes estimates and a more extensive uncertainty analysis.
The impact of time-dependent lifetimes is shown in Fig. 1d with a comparison between the SPARC MMM and its average over the period considered of 62.9 years. The two scenarios produce similar bank sizes from 1955 to 1990, after which point the constant lifetime leads to a slightly larger bank size. This divergence is driven largely by the fact that the SPARC MMM lifetimes are decreasing throughout the time period such that prior to 1980, the SPARC MMM is larger than 62.9 years, and from 1981 onwards, it is smaller. In recent decades, when emissions are strongly correlated with RF, the constant lifetime scenario results in lower RF posteriors and thus smaller reductions in bank size relative to the time-dependent scenario. Because RF has high temporal correlation, the constant lifetime scenario used here has a consistently lower RF throughout. Prior to 1980, when the constant lifetime is lower than the SPARC MMM, differences in production compensate for lower RFs, producing similar bank sizes between the two scenarios.
For CFC-12 (Fig. 1e), we see a smaller difference in bank size from the two lifetime scenarios, with the BPE bank again being much closer to the Ashford9 and IPCC/TEAP(2005)10 estimates than to the WMO(2003)14 fixed-input top-down values of the late 1990s and early 2000s. Our values are again larger than Ashford9 and IPCC/TEAP(2005)10, and indicate a continuing bank of CFC-12 currently present. This contrasts with the most current WMO assessment's evaluation that the bank of CFC-12 has already been fully exhausted3, although this conclusion is sensitive to the actual lifetime of CFC-12. While the SPARC MMM lifetime results in a higher bank estimate throughout the time period, the two CFC-12 BPE-derived bank estimates are within uncertainty of each other throughout the entire simulation period. This similarity in bank size occurs because the SPARC MMM lifetime has an averaged lifetime of 101.5 years over the period where observations are available (i.e. 1980–2016, see Supplementary Fig. 1), which is close to the constant lifetime estimate of 100 years for CFC-12. Similarly for CFC-113, the two lifetime scenarios do not result in significantly different BPE posterior bank estimates (see Fig. 1f). This is in part due to smaller time-dependent changes in lifetime (an average lifetime of 98 years from 1980 to 2016 for the SPARC MMM scenario versus a constant lifetime of 80 years, see Supplementary Fig. 1), but also due to larger relative uncertainties in modeled and observationally derived emissions for CFC-113 (i.e. larger σ × UB values relative to emissions). See Supplementary Fig. 9 for a comparison in the posterior distribution of uncertainties and relative uncertainties for each gas.
Figure 2 shows the reported production overlaid on top of the total calculated emissions for each of the chlorofluorocarbon gases considered here. This figure shows how emissions from the bank continue after global reported production becomes negligible (~2010), becoming the sole source of additional atmospheric emissions (unless unreported production is occurring). The figure underscores the importance of knowing how large the banks are in order to estimate whether or not observationally derived emissions exceed expectations following the Protocol, as well as future CFC concentrations and ozone recovery timescales. Recent studies have found that production of CFC-11 is likely continuing despite the Montreal Protocol phase-out4. Whether or not observationally derived emissions of other CFCs are consistent with expectations from the Protocol is also assessed below.
Fig. 2: Reported production and estimated sources of emissions.
a Mean annual estimates of CFC-11 bank emissions (dark gray) and direct emissions (light gray) resulting from the BPE analysis using the SPARC multi-model mean lifetime assumption, and reported production to build the priors (i.e. we assume no large unexpected production post 2000). The red dashed line shows annual reported production values. (b) as in (a) but for CFC-12. (c) as in (a) but for CFC-113.
Emissions estimates and discrepancies
Figure 3 presents the observationally derived emissions (which depend upon the choice of lifetime, illustrated in the figure) along with posterior emissions from the Bayesian analysis (distributions of the residuals, i.e. D_emiss,t − M(θ_t)_emiss, are shown in Supplementary Fig. 10). The insets expand the results since 2010, when global production should have ceased under the Protocol. For CFC-11, under the reported production emissions scenario, observationally derived emissions are broadly consistent with the range of uncertainty in BPE banks from 2010 up to 2013. However, the simulated emission space does not encompass the increase in observationally derived emissions after 2012, consistent with findings in Montzka and colleagues4. When unexpected production is accounted for in prior production, the posterior emission space essentially encompasses observationally derived emissions (see Supplementary Fig. 11).
Fig. 3: Observationally derived and posterior CFC emissions.
Emissions estimates are shown for (a) CFC-11, (b) CFC-12, and (c) CFC-113. In each panel, an inset shows results after 2010 while the main panels cover 1955 to 2016. Red and blue lines show results for observationally derived emissions using the SPARC MMM and constant lifetimes, respectively. The gray line indicates the mean Bayesian estimate, the gray shaded region indicates the 95% confidence interval and the dashed line indicates the 99% confidence interval.
An important finding of Fig. 3 is that CFC-12 observationally derived emissions to date are broadly consistent with the analysis in this paper, suggesting that significant unexpected emissions are not needed to explain the behavior of that gas. It is interesting that the observationally derived emissions for both CFC-11 and CFC-12 lie at the lower edge of the Bayesian estimates from the mid 1990s to mid-2000s. Potential reasons for this joint behavior could include transient changes in circulation and hence lifetimes of both, or releases from stockpiles of both as phaseouts occurred, but other explanations such as larger errors in production are also possible. For CFC-113 on the other hand, there appears to be emission post-2010 that substantially exceeds this Bayesian analysis (discussed further below).
Sensitivity of bank estimates to input parameters
Note that the results of the BPE analysis are constrained by our choice of priors, which have been developed using published estimates of the input parameters. We investigate the sensitivity of our results to various input parameters. In particular we test the sensitivity of bank size to ~10% increases in the mean of the prior distributions of RF and DE for all equipment types in the bank (see Methods, Supplementary Methods 1, and Supplementary Tables 1–3 for details), as well as a ~10% increase in the mean of the prior distribution of production. We also test the sensitivity of the bank to an increase in the standard deviation of the RF prior distribution on closed cell foams, which are the largest component of the bank in recent decades. We find that BPE-derived bank estimates are moderately sensitive to production values and RF uncertainties. Production is not likely to be lower than the reported values, which were used to construct the base case scenario, and the lower bound of RF is fairly constrained, implying that our choice of priors is likely leading to conservative estimates of the size of banks (see Supplementary Fig. 12).
Understanding the trajectory of atmospheric CFC abundance in the coming years is key to understanding the timing of ozone hole recovery and future trends in radiative forcing of climate. While reported production of CFC-11 and CFC-12 has reduced to zero (or near zero), we can expect continued emissions from the current banks (Figs. 2 and 3). Accurate projections of atmospheric CFC abundances rely on knowledge of the quantity of banked CFC in existing equipment and products. Here we have provided a Bayesian uncertainty analysis of the bank size by integrating knowledge and uncertainties of CFC production quantities and equipment and product emissions functions, with observed concentrations of CFCs and atmospheric lifetime scenarios. Our analysis supports the view from bottom-up analyses that previous top-down estimates have underestimated CFC-11 and CFC-12 bank size (Fig. 1) by not accounting for uncertainties and likely biases in the parameters considered here (RF, DE, and Production), and not integrating all of these parameters into bank estimates. Another important finding is that substantial CFC-12 banks are likely still present, in contrast to recent WMO assessments3 and current CFC-12 observationally derived emissions are broadly consistent with those expected from the banks according to our Bayesian model. Further, the emissions of CFC-11 are broadly consistent through 2012 but not beyond. This demonstrates that the constraints imposed on the CFC-11 priors from the current literature lead to posteriors that cannot feasibly reproduce the data. Since the model (and/or likelihood function) do not capture the full range of uncertainty, other factors must be at play. In particular, our analysis supports the finding that additional, unreported CFC-11 production after the 2010 global phase-out date mandated by the Montreal Protocol provides a more consistent emissions trajectory with observationally derived emissions, as suggested by Montzka and colleagues4, but unexpected production is not required for consistency prior to 2012 when uncertainties are considered in detail. Further, our estimate of the unexpected total production associated with this emission would imply that the current CFC-11 bank size is approximately 140 Gg larger than it would otherwise be without the unexpected production assumption, implying ongoing additional contributions to ozone destruction in the future beyond those previously thought to be in the bank, even if further production ceases now.
Our study also underscores that emissions of CFC-113 significantly exceed expectations from banks alone after 2010 (see Supplementary Fig. 13 and Supplementary Note 1 for an analysis of uncertainties due to the lifetime of this gas). The absolute values of the total observationally derived emission averaged for 2005–2015 are relatively small for CFC-113, around 7.3 Gg yr−1 (with a 1-sigma confidence interval ranging from 3.7 to 10.1 Gg yr−1 using the uncertainty range in its lifetime estimated from the SPARC tracer-tracer correlation method). Further, uncertainties in the measured concentrations of this gas are larger than those of the other two. Nevertheless, it is notable that the emission of CFC-113 at about 7.3 Gg yr−1 is comparable to the unexpected increase of emission of around 10 Gg yr−1for CFC-11 reported by Montzka and colleagues4 after 2012. As noted earlier, CFC-113 is used as a feedstock for production of other chemicals, an allowed continuing use under the Montreal Protocol. According to the agreement, Parties are urged to keep feedstock leakage to a technically feasible minimum, which is thought to be of the order of 0.5%15. Global production of CFC-113 for feedstock use was reported to be about 131 Gg in 201415, implying emission of about 0.7 Gg yr−1 at 0.5%, or about ten times less than our estimate. Figure 3 therefore suggests the need for further analysis of CFC-113 feedstock leakage as well as any potential for unreported non-feedstock production and use.
Given our BPE mean bank size estimates using the SPARC MMM lifetime, we next consider how different policy options would affect Antarctic equivalent effective stratospheric chlorine (EESC) abundance and future CO2 equivalent emissions. Here we consider three different policy scenarios. Scenario 1 is a business as usual scenario. Under this scenario we assume a constant bank release fraction (the median RF in 2016 from the BPE simulation) and a starting bank size equal to the median BPE estimate for the last time period of the simulation. We simulate emissions forward in time and estimate the chlorofluorocarbon abundance using the resulting emissions and the SPARC MMM atmospheric lifetime from 2010. In Scenario 2 we consider an idealized best case in which there is 100% recovery and destruction of CFC banks in 2020 and no further emissions past 2020. Scenario 3 assumes that all banks are destroyed in 2000; this is an idealized "opportunity lost" emissions scenario where we consider CFC abundance with zero emissions following 2000. For each of the scenarios, we estimate the polar EESC following Newman and colleagues1 with an average 5.5 year age of polar stratospheric air to account for the typical time required for air to reach the polar stratosphere from the surface. With the exception of CFCs, EESC values use mixing ratios from the WMO 2018 Assessment3. For CFCs, EESC values are estimated using mixing ratios from the WMO 2018 Assessment leading up to the scenarios. Results are shown in Fig. 4.
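Mechanically, each scenario reduces to a simple recursion: every year the bank releases a constant fraction, that release becomes an emission, and the atmospheric burden decays with the assumed lifetime. The single-gas sketch below uses placeholder numbers rather than the calibrated values behind Fig. 4; Scenarios 2 and 3 amount to zeroing out the bank in 2020 or 2000 and letting the burden decay. Converting the burden into a mole fraction and then into Antarctic EESC additionally requires the molecular weight, the number of chlorine atoms, fractional release factors, and the 5.5-year mean age of polar stratospheric air noted above.

```python
import numpy as np

# Placeholder starting conditions for a single gas (illustrative only)
bank = 1000.0        # Gg in the bank at the end of 2016
rf = 0.03            # constant annual release fraction (Scenario 1 assumption)
lifetime = 62.9      # years, e.g. a SPARC-style mean lifetime
burden = 4500.0      # Gg currently in the atmosphere (placeholder)

years = np.arange(2017, 2101)
for _ in years:
    emission = rf * bank                                  # business as usual: bank leaks
    bank -= emission
    burden = burden * np.exp(-1.0 / lifetime) + emission  # first-order loss plus input

print("bank remaining in 2100:     %.0f Gg" % bank)
print("atmospheric burden in 2100: %.0f Gg" % burden)
```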
Fig. 4: Measured and projected chlorine abundance and ozone recovery times.
Measured and projected Antarctic equivalent effective stratospheric chlorine (EESC) for all measured and projected abundances of ozone-depleting gases where mixing ratios come from the WMO 2018 assessment3. a EESC contributions are stacked in a manner that optimizes understanding of what has dominated the recovery of EESC to date. b EESC contributions are stacked with CFCs shown on top, including three scenarios for CFC−11, CFC-12, and CFC-113 constructed using mean bank emissions estimates resulting from the BPE analysis. Scenario 1 (dotted black line) represents the business as usual scenario, where bank emissions are simulated using the median release fraction (RF) and the median BPE estimated bank size in 2016. The RF is held constant over the entire simulation period. In scenario 2 (dashed black line) the banks are destroyed in 2020 with no further emissions. Scenario 3 (dashed red line) is the same as Scenario 2 except the banks are destroyed in 2000 followed by no further emissions. The SPARC MMM 2010 atmospheric lifetime is used to estimate the projected CFC abundance for each of the scenarios. EESC values leading up to the scenario simulations use mixing ratios from WMO (2018).
Figure 4a stacks contributions to EESC in a manner that optimizes understanding of what has dominated the recovery of EESC to date. The Figure makes clear that the bulk of the ozone recovery from the peak in EESC around 2000 to present is due to the global phaseout and rapid decline of CH3CCl3 (which has a global atmospheric lifetime of only about 5 years), along with substantial decreases in CH3Br and Halon concentrations. CFCs have declined slightly over this time, however, the contributions from CFC reductions can also be viewed as being offset to some extent by increases in EESC from the HCFCs that have replaced them. Figure 4a illustrates that the fastest part of ozone recovery since peak depletion has already occurred. Future recovery is therefore increasingly dependent on reductions in CFCs, as well as other ODS reduction measures. Figure 4b stacks contributions differently to illustrate the gains in ozone recovery that could be obtained through recovery and destruction of CFC banks. These scenarios are all based on bank and emissions estimates using reported production (i.e. we do not include the unexpected emissions scenario). While Fig. 4a illustrates that CFCs have declined slightly from 2000 to present, the ongoing emission from banks (even without additional unexpected emissions) means that they have contributed less to the total reduction in EESC than they would have if the banks had been destroyed (e.g. Scenario 3 vs Scenario 2).
The year in which Antarctic EESC falls below 1980 levels is often used as a benchmark3 to describe the path to ozone recovery, neglecting potential dynamical contributions. Using current estimates of lifetimes, polar EESC returns to pre-1980 levels in 2080 (scenario 1), 2074 (scenario 2), and 2067 (scenario 3). This comparison indicates that emissions from banked CFCs delay the recovery of the ozone hole by more than a decade compared to total destruction of the banks in 2000 and by about six years compared to destruction of the banks in 2020. While 100% destruction of the banks is unrealistic, certainly some material can be recovered and destroyed (for example, via soil degradation of foams by careful burial in landfills instead of shredding16).
Our analysis demonstrates that CFC bank sizes are likely larger than what is currently assumed in the recent assessment3. Given the assumptions outlined above, we illustrate the effects of these larger bank sizes and the unexpected production scenario on projected mole fractions of CFC-11, 12, and 113 in Fig. 5 against the most recent projections. For CFC-11 in particular, the impacts on mole fraction projections can be substantial (e.g. a difference of 25 ppt), illustrating the importance of improved modeling of the banks for future international assessments. As a comparison, the WMO 2018 EESC projection results in Antarctic chlorine loading returning to 1980 levels by 2076. Our analysis indicates that scenario 1 projects a recovery by 2080; however, including the unexpected emissions scenario would result in a delay of an additional year (assuming that the source stops in 2019). An important assumption is that the unexpected emissions are only a fraction of the total production. Our analysis approximates that about 20% of total production makes up the unexpected emission, and the rest is initially banked. This would lead to long-term differences across scenarios, with bank emissions as high as 49 Gg yr−1 by 2030 if the unexpected production continues unchecked for another decade, compared to about 32 Gg yr−1 of bank emissions for a scenario with no unexpected production.
Fig. 5: Measured and projected estimates of CFC concentrations.
Concentrations are shown for (a) CFC-11, (b) CFC-12 and (c) CFC-113. In each panel, an inset shows results from 2010 to 2040, while the main panels cover 1955 to 2100. In each panel, the blue line shows the WMO 2018 concentration estimates and projections. The black lines (Scen A in each panel) show the concentration projections using the median bank size and release fraction from our analysis, starting in 2017 under the reported production scenario. The shaded gray region represents 1 s.d. of uncertainty due to uncertainties in bank estimates. In (a), Scen B is equivalent to Scen A except that it allows banks to account for the unexpected emissions scenario from 2000 to 2019, and Scen C is equivalent to Scen B except that it allows the unexpected emissions to continue to 2029. In (c), Scen B allows for an additional 7.2 Gg yr−1 of production until 2029, with the shaded region representing 1 s.d. of uncertainty in continued production (±5 Gg yr−1).
Finally, we examine the implications for global warming based upon carbon dioxide equivalents (CO2eq) for a 100-year time horizon17. Table 1 shows the 21st century mean cumulative emissions for the three scenarios described above and the corresponding mean cumulative CO2eq emissions. The estimated future emissions from current banks could lead to an additional 9 billion metric tonnes of CO2eq in global warming potential between 2020 and 2100, illustrating the importance of recovering and destroying as large a fraction of the bank as is feasible and efficient. Avoiding the emission of 7 Gg yr−1 of CFC-113 over the past decade (Supplementary Fig. 13 and Supplementary Note 1) would have represented about 0.4 billion tonnes CO2eq. As illustrative comparisons of upper limits of benefits, the European Union's cumulative projected greenhouse gas reductions under its Paris Agreement pledge by 2030 relative to 2019 are ~7 billion metric tonnes18, while the cumulative avoided CO2eq emissions of HFCs from 2020 to 2050 under the Kigali Amendment to the Montreal Protocol are ~53 billion metric tonnes (WMO, 2018)3. The opportunity already lost by not destroying the CFC banks in the year 2000 represents 25 billion metric tonnes of CO2eq emissions since 2000 and a delay in ozone hole recovery of an additional 7 years, illustrating the importance of prompt action to the extent practical and efficient. Recovery and destruction of discarded or obsolete CFC banks benefits the climate system. However, we note that to optimize net gains for climate in systems that are still in use, a full life cycle analysis, taking account of factors such as how existing foams contribute to energy efficiency, must be weighed against the CO2eq content of the banks.
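As a rough illustration of how the mass emissions quoted here translate into CO2-equivalents, the short sketch below multiplies cumulative emissions by 100-year GWPs. The GWP values are the IPCC AR5 figures as we recall them (the paper's Table 1 relies on ref. 17), so treat them as assumptions rather than the authors' exact inputs; the analysis itself was done in MATLAB, and this Python fragment is purely illustrative.

```python
# Rough CO2-equivalent arithmetic (illustrative only).
# GWP100 values are taken from IPCC AR5 as recalled here; they are an
# assumption, not necessarily the exact values used for Table 1.
GWP100 = {"CFC-11": 4660, "CFC-12": 10200, "CFC-113": 5820}

def co2eq_gt(emissions_gg, species):
    """Convert cumulative emissions in Gg of a CFC to Gt CO2-equivalent."""
    tonnes = emissions_gg * 1e3            # 1 Gg = 1000 tonnes
    return tonnes * GWP100[species] / 1e9  # tonnes CO2eq -> Gt (billion tonnes)

# Example from the text: ~7 Gg/yr of CFC-113 avoided over a decade
print(round(co2eq_gt(7 * 10, "CFC-113"), 2), "Gt CO2eq")  # ~0.41 Gt
```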
Table 1 Greenhouse gas contributions for example bank destruction options.
Background and motivation
Top-down estimates of the banks are the cumulative sum of the difference between production and emissions since the onset of CFC production7. A noted challenge with the top-down approach is that it depends on small differences between large values (cumulative emissions and production) and therefore requires both highly accurate reported production and observationally derived emissions for accurate results. Several studies suggest that uncertainties in production could be substantial, as discussed further below. Uncertainties in emissions depend on the accuracy of measurements of CFC abundances in the global atmosphere and on atmospheric lifetimes (discussed further below).
We can estimate annual global emissions, Demiss,t, as
$$D_{\mathrm{emiss},t} = A\left(\left[\mathrm{CFC}\right]_{t+1} - \left[\mathrm{CFC}\right]_t \exp\left(-\frac{\Delta t}{\mathrm{LT}_t}\right)\right),$$
where [CFC]t is the concentration of the particular CFC in year t, LTt is the atmospheric lifetime in year t, Δt is equal to 1 year, and A is a constant converting units of atmospheric concentration to units of emission. The time step is small enough that this is an accurate representation for the long-lived gases considered here.
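For illustration, a minimal sketch of Eq. (1) is given below (the authors' analysis was carried out in MATLAB; the function name and the unit-conversion constant A here are placeholders, not part of the original work).

```python
import numpy as np

def derived_emissions(conc, lifetime, A=1.0, dt=1.0):
    """Observationally derived emissions per Eq. (1).

    conc     : global mean concentrations [CFC]_t for years t = 0..N (array)
    lifetime : atmospheric lifetime LT_t for the same years (array, in years)
    A        : placeholder unit-conversion constant (concentration -> emission)
    """
    conc = np.asarray(conc, dtype=float)
    lt = np.asarray(lifetime, dtype=float)
    decay = np.exp(-dt / lt[:-1])              # fraction surviving one year
    return A * (conc[1:] - conc[:-1] * decay)  # one emission estimate per year
```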
The bank estimates using the top-down approach can then be estimated as
$$\mathrm{Bank}_t = \mathop{\sum}\limits_{y=y_1}^{t} \left(\mathrm{Prod}_y - D_{\mathrm{emiss},y}\right),$$
where y1 is the first year of CFC production and Prody is the estimated production value in year y.
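A minimal sketch of this top-down accumulation, under the same placeholder conventions as above, is:

```python
import numpy as np

def topdown_bank(production, derived_emiss):
    """Top-down bank per Eq. (2): cumulative reported production minus
    cumulative observationally derived emissions."""
    production = np.asarray(production, dtype=float)
    derived_emiss = np.asarray(derived_emiss, dtype=float)
    return np.cumsum(production - derived_emiss)   # Bank_t for each year t
```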
An alternative, bottom-up method could make use of information regarding a reference bank size starting point for a specific year as well as annual production, bank release fraction (i.e. the fraction of the existing bank that is emitted each year, composited across different applications), and direct emissions (the fraction emitted essentially immediately, in applications such as sprays, or through leakage). Using this approach, the bank size in year t could be estimated recursively as
$$\mathrm{Bank}_t = \left(1 - \mathrm{DE}_t\right) \times \mathrm{Prod}_t + \left(1 - \mathrm{RF}_t\right) \times \mathrm{Bank}_{t-1},$$
where RFt is the bank release fraction, Bankt−1 is the size of the bank in the previous year, DEt is direct emissions (i.e. the fraction of production in year t that is directly emitted that year, such as through leakage in the production process), and Prodt is the amount of CFC that is manufactured in year t. Estimates of chlorofluorocarbon bank size with this approach are therefore dependent on knowledge of production over time, the partitioning of production across different types of manufactured goods, as well as accurate assessments of the rate of release of ODSs for each type of manufactured product. Velders and Daniel19 use the estimates of bank sizes from bottom-up inventory analysis as a starting point in 2008 using the findings of Ashford and colleagues9 and IPCC/TEAP (2005)10, and show how uncertainties in the different input parameters from Eq. (3) result in significant future uncertainties in bank size.
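A minimal sketch of the bottom-up recursion of Eq. (3), with all inputs treated as given arrays (the function name and defaults are placeholders), is:

```python
import numpy as np

def bottomup_bank(prod, DE, RF, bank0=0.0):
    """Bottom-up bank per Eq. (3), iterated recursively from a reference year.

    prod  : production Prod_t (array)
    DE    : direct-emission fraction DE_t (array, values in [0, 1])
    RF    : bank release fraction RF_t (array, values in [0, 1])
    bank0 : bank size in the year before the series starts
    """
    bank = np.empty(len(prod))
    prev = bank0
    for t in range(len(prod)):
        bank[t] = (1.0 - DE[t]) * prod[t] + (1.0 - RF[t]) * prev
        prev = bank[t]
    return bank
```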
Here we adopt a Bayesian approach throughout the period considered. Our approach for discerning bank size may be thought of as a hybrid between the top-down and bottom-up that includes a wider range of constraints by making use of the information from both approaches and provides probabilistic outcomes. Using the input parameters from Eq. (3), we employ an alternative estimate to Eq. (1) by modeling emissions, M(θt)emiss, as
$$M(\boldsymbol{\theta}_t)_{\mathrm{emiss}} = \mathrm{RF}_t \times \mathrm{Bank}_{t-1} + \mathrm{DE}_t \times \mathrm{Prod}_t,$$
where θt is the vector of input parameters (RFt, DEt, Prodt and Bankt−1). With the exception of Bankt−1, prior probability density functions for these parameters are constructed using a combination of probabilistic estimates of application-specific and time-dependent release fraction estimates from Ashford and colleagues9, the distribution of production across equipment types from AFEAS (Alternative Fluorocarbons Environmental Acceptability Study) data (see for example https://unfccc.int/files/methods/other_methodological_issues/interactions_with_ozone_layer/application/pdf/cfc1100.pdf), and total production from the AFEAS (2001) and UNEP databases. The prior distributions for the Bank input parameters are not independently defined. Instead, they are simulated as a function of the prior distributions for all previous timesteps of RF, DE and production. They can be estimated by iterating Eq. (3) forward in time, or equivalently:
$$\mathrm{Bank}_t = \left(1 - \mathrm{DE}_t\right)\mathrm{Prod}_t + \mathop{\sum}\limits_{y=y_1+1}^{t-1} \left(1 - \mathrm{DE}_y\right)\mathrm{Prod}_y \mathop{\prod}\limits_{j=0}^{t-y-1}\left(1 - \mathrm{RF}_{t-j}\right) + \mathrm{Bank}_{y_1}\mathop{\prod}\limits_{j=y_1+1}^{t}\left(1 - \mathrm{RF}_j\right),$$
where y1 is the first year in the simulated time period.
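As a consistency check on the algebra, the short sketch below iterates the recursion of Eq. (3) forward from an assumed reference bank, evaluating the modeled emissions of Eq. (4) along the way, and confirms numerically that the final bank matches the closed-form expression above. All numbers are placeholders; this is a schematic of the simulation model, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6                                  # toy series: years y1 .. y1+n-1
prod = rng.uniform(50, 150, n)
DE = rng.uniform(0.1, 0.3, n)
RF = rng.uniform(0.02, 0.1, n)

# Recursive form, Eq. (3), starting from an assumed Bank_{y1}
bank = np.empty(n)
emiss = np.empty(n)
bank[0] = 10.0                         # reference bank in the first year (placeholder)
for t in range(1, n):
    emiss[t] = RF[t] * bank[t - 1] + DE[t] * prod[t]          # Eq. (4)
    bank[t] = (1 - DE[t]) * prod[t] + (1 - RF[t]) * bank[t - 1]  # Eq. (3)

# Closed-form expression quoted above, evaluated at the final year t = n-1
t = n - 1
closed = (1 - DE[t]) * prod[t]
for y in range(1, t):                  # y = y1+1 .. t-1 (index 0 is y1)
    decay = np.prod([1 - RF[t - j] for j in range(0, t - y)])
    closed += (1 - DE[y]) * prod[y] * decay
closed += bank[0] * np.prod([1 - RF[j] for j in range(1, t + 1)])

print(np.isclose(bank[t], closed))     # True: the two forms agree
```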
Prior work has estimated banks using Eq. (2) either throughout the entire production record14 or after 2008 using the inventory estimates of bank sizes for that year3,19, but has not provided a statistical framework to constrain uncertainties in manufacturing parameters using uncertainties in CFC concentrations. Here we provide a probabilistic estimate of bank size by making use of Eq. (1) to constrain the distribution of Eqs. (3) and (4)'s parameter space in a Bayesian framework referred to as BPE. This allows us to assess whether the bottom-up and top-down approaches are consistent within estimated uncertainty, or if additional factors (e.g. fugitive emissions or stockpiled production) are necessary to reconcile the two approaches.
Model framework
To estimate the distribution of the parameters in Eq. (3), we use a form of Bayesian analysis called Bayesian melding, which was designed by Poole and Raftery20 to apply inference to deterministic simulation models. It allows us to infer parameter estimates by taking advantage of the information available from both observed concentrations and the mechanistic simulation model of the bank, emissions and concentrations composed of Eqs. (1), (3) and (4), hereafter termed the simulation model. We employ a version of this method for input parameter uncertainty outlined in Bates and colleagues21 and implemented in Hong and colleagues22, which we henceforth refer to as Bayesian Parameter Estimation (BPE). Because we are interested in the effects of atmospheric lifetimes on the range of bank outcomes, we implement the BPE algorithm separately for various assumed lifetimes. In the simulation model, we simulate the bank size and emissions time series recursively, assuming an initial bank size in 1955 (t = 1) equal to that estimated in WMO (2003)14. Bank sizes in 1955 are small enough that uncertainties in this number are insignificant. Bayesian updating is then implemented simultaneously for all time periods with available observations (1981–2016); therefore, the estimate for the bank in each year is based on all available observations.
We obtain posterior distributions for the vector of input parameters, θ, by implementing Bayes' theorem as follows:
$$P\left(\boldsymbol{\theta} \mid D_{\mathrm{emiss},1}, \ldots, D_{\mathrm{emiss},N}\right) = \frac{P\left(\boldsymbol{\theta}\right)\,P\left(D_{\mathrm{emiss},1}, \ldots, D_{\mathrm{emiss},N} \mid \boldsymbol{\theta}\right)}{P\left(D_{\mathrm{emiss},1}, \ldots, D_{\mathrm{emiss},N}\right)},$$
where P(θ) describes the joint prior distribution of the input parameters (RF, DE, Production and Bank) and \(P\left(D_{\mathrm{emiss},1}, \ldots, D_{\mathrm{emiss},N} \mid \boldsymbol{\theta}\right)\) is the multivariate likelihood of all observed emissions given the input and output parameters of the simulation model. Each of the input parameters is an N×1 vector, where N is one less than the number of years of mole fraction observations used in the analysis (1980 to 2016). RF and DE are modeled jointly and assumed independent of Prod. Bank is modeled using Eq. (3) and therefore depends on all input parameters.
To solve Eq. (6), the general BPE model flow is implemented as follows. First, we specify prior distributions for the input parameters. Second, using Monte Carlo simulation, we sample from the prior distributions of the input parameters to simulate prior time series distributions for the simulation model outputs, emissions and bank size. Third, we specify the likelihood function of emissions derived from the observed mole fractions and the assumed lifetime. Finally, we estimate the posterior parameter distributions by implementing a sampling procedure. Each step of this model flow is described in more detail below.
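The overall flow can be summarized in a compact, self-contained toy example. Everything below (the simple priors, the fake observations, the diagonal-error likelihood and the sample sizes) is illustrative only; the actual analysis uses the informative priors, full covariance and 1,000,000/100,000 sample sizes described later in this section, and was implemented in MATLAB.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Toy stand-ins for the real priors and observations (illustrative only) ---
N = 5                                                   # years with observations
obs_emiss = np.array([25.0, 26.0, 27.0, 28.0, 29.0])    # fake "derived" emissions
sigma_obs = 5.0                                         # fake diagonal uncertainty

def sample_priors(n):
    prod = rng.lognormal(np.log(100.0), 0.2, size=(n, N))   # production prior
    rf = rng.uniform(0.02, 0.10, size=(n, N))               # release fraction prior
    de = rng.uniform(0.10, 0.30, size=(n, N))               # direct emission prior
    return prod, rf, de

def simulate_emissions(prod, rf, de, bank0=0.0):
    """Forward model: Eqs. (3) and (4) for one sampled parameter set."""
    bank, emiss = bank0, np.empty(N)
    for t in range(N):
        emiss[t] = rf[t] * bank + de[t] * prod[t]
        bank = (1.0 - de[t]) * prod[t] + (1.0 - rf[t]) * bank
    return emiss

# 1. Sample the input parameters from their prior distributions.
n_prior, n_post = 20_000, 2_000          # the paper uses 1,000,000 and 100,000
prod, rf, de = sample_priors(n_prior)

# 2. Run the simulation model for every prior sample.
model = np.array([simulate_emissions(prod[i], rf[i], de[i]) for i in range(n_prior)])

# 3. Evaluate the likelihood of the observationally derived emissions
#    (independent Gaussian errors here; the paper uses a full covariance S).
loglik = -0.5 * np.sum(((obs_emiss - model) / sigma_obs) ** 2, axis=1)

# 4. Sampling importance resampling: resample prior draws by importance weight.
w = np.exp(loglik - loglik.max())
idx = rng.choice(n_prior, size=n_post, replace=True, p=w / w.sum())
posterior = {"prod": prod[idx], "rf": rf[idx], "de": de[idx]}
```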
Atmospheric Lifetimes
Assumptions about atmospheric lifetimes can have substantial impacts on top-down estimates of CFC bank size (see Fig. 1 for an example). Many evaluations of CFC lifetimes employed simple steady-state models23,24. Understanding of atmospheric lifetimes has advanced through a recent assessment using three-dimensional models to better evaluate the time-dependent lags between tropospheric and stratospheric mixing ratios as emissions change (SPARC, 2013); that assessment showed that time-dependent lifetime changes are substantial. To explore the impact of lifetimes and their time dependence on bank size, we run the BPE using (i) constant lifetimes for each gas from the values in WMO 2003, (ii) time-dependent transient global lifetimes estimated by global photochemical models (taken from SPARC 201312), which have mean values between 1960 and 2010 of 62.5, 113 and 107 years for CFC-11, -12 and -113, respectively, and (iii) constant lifetimes equal to the mean time-dependent lifetimes extended over the time period of the analysis (1955 to 2016). The values from WMO (2003) represent the last scientific assessment using the top-down approach without imposed constraints from the bottom-up information provided by Ashford and colleagues9 and IPCC/TEAP (2005)10. For purposes of comparison, we therefore adopt the WMO (2003)14 atmospheric lifetimes of 45 yr, 100 yr and 85 yr for CFC-11, CFC-12 and CFC-113, respectively. For the time-dependent lifetime scenario, we adopt the SPARC multi-model mean values, shown in Supplementary Fig. 1. Note that the SPARC modeled lifetime estimates begin in 1960 and end between 1998 and 2010, depending on the model. Because we require a lifetime estimate for all years between 1955 and 2016, we extend each model's initial values from 1960 to the earlier years (i.e. 1955 to 1959) and extend their end values to all subsequent years until 2016. The time-dependent lifetime is then taken to be the mean of these extended modeled lifetimes.
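A minimal sketch of this padding step is given below; the flat extrapolation at both ends follows the description above, while the use of interpolation for interior years is our own simplification and the function name is a placeholder.

```python
import numpy as np

def extend_lifetime_series(model_years, model_lifetimes, start=1955, end=2016):
    """Pad one model's lifetime series to the full analysis period by holding
    its first value constant before the record begins and its last value
    constant after it ends."""
    years = np.arange(start, end + 1)
    # np.interp holds the end values flat outside the modeled range
    return years, np.interp(years, model_years, model_lifetimes)

# Multi-model mean (schematic): average the extended series across models, e.g.
# lt_mean = np.mean([extend_lifetime_series(y, lt)[1] for y, lt in models], axis=0)
```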
Priors for input parameters
Implementing the BPE model requires a joint prior probability distribution to reflect our initial estimate of the uncertainty space of the input parameters, including production, direct emissions and bank release fraction, based on the bottom-up methodology described above. We note that this approach, rather than developing uninformative priors, is intended to constrain the BPE results based on literature values. This allows us to assess the consistency of the top-down and bottom-up approaches within estimated uncertainty ranges. Time series of the prior and posterior distributions for each of the parameters are shown in the Supplementary Material (Supplementary Figs. 14–16). We describe the choices of input parameter prior distributions below.
Estimates for production typically rely on industry-reported values (from the AFEAS database) or country-level values (UNEP database); however, these estimates should be viewed with caution. Production from the former Soviet Union was not included in AFEAS, and accounting for it would increase these values in earlier years by as much as about 20%25. In addition, by 2000 significant production in major developing countries was also not included in AFEAS. In broad terms, we expect reported values to underestimate true production, as some of a growing number of producers may be omitted from national inventories, and some studies have probed possible black-market production of CFCs26.
We build our prior distributions of global production based on reported values from AFEAS for years prior to 1989, and from UNEP from 1989 onwards. We adopt a correction for AFEAS data following WMO (2002)14 (henceforth referred to as AFEAS/WMO), where AFEAS production values are augmented with production data from UNEP. Prior to 1989, companies reported their production of each molecule to AFEAS as part of the manufacturers' association. From 1989 onwards, countries reported national production values to the UNEP and were expected to meet the Protocol's reduction targets relative to 1986 values. Inconsistencies in accounting or reporting practices between different countries are possible, as are simple omissions depending upon the number of manufacturers and national regulatory mechanisms.
Given the potential biases discussed above, we construct our production priors under the assumption that these reported and adjusted annual production values are likely to be lower than the true total production in any given year. Our production prior follows a lognormal distribution such that:
$$\begin{array}{l}\log\left(X_1, X_2, \ldots, X_N\right) \sim N\left(\mu, \Sigma\right)\\ \mathrm{Prod}_t = B \times \mathrm{Prod}_{0,t} \times X_t + 0.95 \times \mathrm{Prod}_{0,t},\end{array}$$
where N is the number of years considered in the model, μ is equal to zero for each year, and Σ is a covariance matrix constructed with an autocorrelation parameter (ρ1) such that the diagonal elements are equal to 0.25 and off-diagonal elements d years apart are equal to \(0.25 \times \rho_1^d\). Prod0,t is the reported production value in time period t. B is a constant that controls the uncertainty range, which we set to 0.2 for production prior to 1989, when AFEAS/WMO data are adopted for reported production, and to 0.1 for production after 1989, when UNEP data are adopted. The higher uncertainty in the upper bound for the AFEAS/WMO data reflects our larger degree of uncertainty due to the unreported production noted above, especially before the Protocol entered into force in 198927. \(\log\left(X_1, X_2, \ldots, X_N\right)\) are normally distributed random variables (so that the \(X_t\) are lognormal), used to reflect our prior assumption that true production is not likely to be lower than reported and has a probability, albeit low, of being substantially higher than reported (e.g. for B = 0.1, there is a 3% probability of sampling above 1.2×Prod0,t). See Supplementary Fig. 17 for an illustration of the distribution.
Because we do not have data on the autocorrelation in the covariance matrix representing the uncertainty in reported production values, we estimate ρ1 as an additional hyperparameter. Including this hyperparameter reflects our belief that there is some degree of consistency in underreporting across time. We assume the autocorrelation parameter PDF follows a Beta distribution (shown in Supplementary Fig. 18) as follows:
$$\rho_1 \sim 0.5 + 0.5 \times \mathrm{Beta}\left(2, 2\right).$$
Note that the lower bound on the prior distribution for ρ1 is greater than zero for computational efficiency; initial tests of the model found near-zero posterior probabilities for values lower than 0.5.
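A sketch of how one prior draw of the production time series might be generated, following the lognormal construction and the Beta-distributed autocorrelation hyperparameter described above, is given below. The multivariate-normal construction of the correlated log-factors is our reading of the text, and the reported production values and function name are placeholders.

```python
import numpy as np

def sample_production_prior(prod_reported, B, rng, rho1=None):
    """Draw one production time series from the lognormal prior described above.

    prod_reported : reported production Prod_{0,t} (array of length N)
    B             : 0.2 for years using AFEAS/WMO data, 0.1 for UNEP years
                    (scalar or array of length N)
    rho1          : autocorrelation hyperparameter; if not supplied, drawn from
                    0.5 + 0.5*Beta(2, 2) as in the text
    """
    prod_reported = np.asarray(prod_reported, dtype=float)
    N = len(prod_reported)
    if rho1 is None:
        rho1 = 0.5 + 0.5 * rng.beta(2.0, 2.0)
    lags = np.abs(np.subtract.outer(np.arange(N), np.arange(N)))
    cov = 0.25 * rho1 ** lags            # diagonal 0.25, off-diagonals 0.25*rho1^d
    X = np.exp(rng.multivariate_normal(np.zeros(N), cov))
    return B * prod_reported * X + 0.95 * prod_reported

rng = np.random.default_rng(1)
prod0 = np.full(10, 300.0)               # placeholder reported production (Gg)
one_draw = sample_production_prior(prod0, B=0.2, rng=rng)
```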
In light of recent work suggesting unexpected emissions of CFC-11 after 20124, we also build an alternative production prior for CFC-11 to test how additional unreported production of this gas could impact bank size and emissions. For this unexpected emissions scenario, we assume an upper bound for added production based on the estimate from Montzka and colleagues4 of unreported emissions after 2012 as high as 13,000 tonnes yr−1. Based on our assumption of mean direct emissions (see below for details) of 21% of production for any year following 2000, this would equate to an upper bound of ~61,000 tonnes of CFC-11 produced in 2014 (i.e. 79% of that production would be banked in that year). For this emissions scenario, we assume a linear increase in the upper bound of the unexpected production from 0 in 2000 to 61,000 tonnes by the end of 2012, with the upper bound held constant thereafter. To reflect our adopted uncertainty in production, from 2000 onwards we assume a uniform distribution with a lower bound equal to reported production and an upper bound as described above. If the direct emission of this unexpected production were higher, or if the production were used in applications that release CFC-11 quickly, this total production figure would be smaller, perhaps substantially so.
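A minimal sketch of this unexpected-production prior, following the linear ramp and uniform distribution described above (function names are placeholders), is:

```python
import numpy as np

def unexpected_upper_bound(years, ramp_start=2000, ramp_end=2012, max_tonnes=61_000):
    """Upper bound on unreported CFC-11 production: zero in 2000, rising
    linearly to ~61,000 tonnes by the end of 2012, then held constant."""
    years = np.asarray(years, dtype=float)
    frac = np.clip((years - ramp_start) / (ramp_end - ramp_start), 0.0, 1.0)
    return frac * max_tonnes

def sample_unexpected_production(prod_reported, years, rng):
    """Uniform prior between reported production and reported production plus
    the unexpected upper bound, applied from 2000 onwards."""
    ub = unexpected_upper_bound(years)
    return rng.uniform(prod_reported, prod_reported + ub)
```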
Direct emissions and bank release fraction
We estimate annual direct emissions and the bank release fractions jointly using a bottom-up accounting of the various equipment types comprising the bank, their relative prevalence, and the unique loss rates at which they emit CFCs. DE and RF are assumed to be stationary and unique for each equipment type (e.g. open cell foams, closed cell foams, chillers), with their respective uncertainties and loss functions as shown in the supplement (Supplementary Tables 1–3). RF for the total bank is time dependent, as it depends on the composition of the bank, which changes over time. DE is modeled as a fraction of total production in a given year. Therefore, DE in year t depends on how production is apportioned across equipment types in year t, combined with the loss rate for each equipment type in its first year of life. RF is modeled as the fraction of the bank that is released in a given year and therefore depends on the composition of the bank. Thus, RF depends on the relative prevalence of each equipment type and their unique loss rates in all prior years.
To estimate DE and RF, we first develop priors for annual production and unique loss rates for each equipment type. The priors for production by equipment type are developed using production data from AFEAS (2001)28, which provides data on total reported production of each CFC molecule in any given year from 1930 to 2000, as well as on the breakdown of total production into types of applications. After the AFEAS data end in 2000, we use that year's values for the priors in each subsequent year. The one exception is when constructing CFC-11's unexpected emissions scenario. Because we have no knowledge of the applications for which this new production is being used, our prior assumption is that each equipment type is equally probable following 2000. For the loss rate parameters, we use chlorofluorocarbon release rates from Ashford and colleagues9 to construct priors. For each type of equipment, Ashford and colleagues9 construct estimates of loss rates over time by type of product for each molecule. For example, they estimate that 50% of the CFC-11 used in aerosols and solvents is emitted in the year it is produced, and 50% is emitted the following year. In contrast, they estimate that closed cell foam releases 3.66% of its bank each year. For more details on these priors, and on how the RF and DE sample time series are constructed, refer to Supplementary Methods 1 and Supplementary Tables 1–3.
Note that for the unexpected emission scenario, the assumption of equally probable production across equipment types leads to a wider and time-varying range of RF and DE sampled values than for all other scenarios. The result of jointly constructing RF and DE time series in this manner is that both parameters are constructed to exhibit covariance and temporal correlation for physical consistency. Also note that for total production, we use AFEAS data up until 1989, after which we use UNEP data. For estimating RF and DE, production data from AFEAS is used only to approximate relative production by equipment type over time. This, in turn, provides a prior estimate of the relative distribution of equipment type in the bank, which we use to estimate RF and DE. These RF and DE priors are constructed independently of total production priors.
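The sketch below shows one way such equipment-specific loss rates could be composited into aggregate DE and RF values for a single year. Only the 3.66% per year closed-cell foam rate and the 50%/50% aerosol-and-solvent split are taken from the text; the two-category composition, the production shares and the bank composition are placeholders.

```python
# Placeholder composition; only the closed-cell foam rate (3.66 %/yr) and the
# 50/50 aerosol-solvent split are taken from the text, the rest is illustrative.
equipment = {
    "aerosols_solvents": {"prod_share": 0.4, "first_year_loss": 0.50, "bank_loss": 1.00},
    "closed_cell_foam":  {"prod_share": 0.6, "first_year_loss": 0.00, "bank_loss": 0.0366},
}

def direct_emission_fraction(equip):
    """DE_t: production-share-weighted loss in the first year of life."""
    return sum(e["prod_share"] * e["first_year_loss"] for e in equip.values())

def bank_release_fraction(bank_by_type, equip):
    """RF_t: bank-composition-weighted annual release fraction."""
    total = sum(bank_by_type.values())
    return sum(bank_by_type[k] * equip[k]["bank_loss"] for k in bank_by_type) / total

print(direct_emission_fraction(equipment))                           # composite DE
print(bank_release_fraction({"aerosols_solvents": 5.0,
                             "closed_cell_foam": 95.0}, equipment))   # composite RF
```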
Specifying the Likelihood function
For each atmospheric lifetime scenario, emissions are inferred from observed global mole fractions using Eq. (1). We henceforth refer to these as observationally derived emissions, or data, Demiss,t, where t refers to the year. Observations come from the merged AGAGE and NOAA global surface mean mole fraction in ref. 29 and are available from 1980 to 2018. We assume that:
$$D_{\mathrm{emiss},t} = M\left(\boldsymbol{\theta}_t\right)_{\mathrm{emiss}} + \sigma_t,$$
where M(θt)emiss is the modeled emissions following Eq. (4), θt is the vector of input parameters (RFt, DEt, Prodt and Bankt−1), and σt is an error term assumed normal with mean zero and variance \(S_t^2\). The likelihood function is therefore a multivariate function of the difference between modeled and observationally derived emissions:
$$P\left(D_{\mathrm{emiss},1}, \ldots, D_{\mathrm{emiss},N} \mid \boldsymbol{\theta}\right) = \frac{1}{\left(2\pi\right)^{N/2}\sqrt{\left|S\right|}}\exp\left\{-\frac{1}{2}\boldsymbol{\Delta}^{T}S^{-1}\boldsymbol{\Delta}\right\},$$
where Δ is an N×1 vector with elements
$$\Delta_t = D_{\mathrm{emiss},t} - M\left(\boldsymbol{\theta}_t\right)_{\mathrm{emiss}},$$
and S is a covariance matrix representing the sum of the observationally derived and modeled emissions uncertainties. While there exist published estimates of observationally derived uncertainties30, we have no prior information on modeled uncertainties. We therefore estimate S as follows: all diagonal elements are equal to σ×UB, where UB is set equal to the larger of 40 Gg yr−1 and twice the mean difference in emissions inferred from observations using the maximum and minimum time-varying SPARC lifetimes. σ is a parameter estimated from a Beta prior distribution with parameters α = β = 5. The prior and posterior distributions for σ×UB are shown in Supplementary Fig. 9. Off-diagonals of S are estimated using an autocorrelation hyperparameter, ρerr, drawn from a Beta distribution with parameters α = β = 2, with a lower bound of 0.5 and an upper bound of 1.
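A sketch of how this likelihood could be evaluated for one parameter sample is given below. The construction of S (σ×UB on the diagonal with off-diagonals decaying as ρerr to the power of the lag) is our reading of the description above rather than a line-by-line port of the authors' code.

```python
import numpy as np

def log_likelihood(obs_emiss, model_emiss, sigma, UB, rho_err):
    """Multivariate normal log-likelihood of the observationally derived
    emissions, with covariance S built from sigma*UB on the diagonal and
    decaying off-diagonals rho_err**d (d = lag in years)."""
    obs_emiss = np.asarray(obs_emiss, dtype=float)
    delta = obs_emiss - np.asarray(model_emiss, dtype=float)
    N = len(delta)
    lags = np.abs(np.subtract.outer(np.arange(N), np.arange(N)))
    S = (sigma * UB) * rho_err ** lags
    _, logdet = np.linalg.slogdet(S)
    return -0.5 * (N * np.log(2.0 * np.pi) + logdet + delta @ np.linalg.solve(S, delta))
```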
Estimating posteriors
Because the analytical form of the posterior is intractable, we use the sampling importance resampling (SIR) method to approximately sample from the marginal posterior distributions21,22,31. This method involves sampling from the prior and then resampling the prior samples according to an importance ratio. For a detailed description of SIR, refer to the work of Hong and colleagues22. We implement the SIR method by drawing 1,000,000 samples from the prior and then resampling from these samples 100,000 times to obtain the posterior distribution. These sample sizes were chosen such that multiple sampling estimates produced consistent results for prior and posterior bank distributions.
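The resampling step itself is straightforward; a minimal, generic sketch (the function name and seed are placeholders) is:

```python
import numpy as np

def sir_resample(prior_samples, log_weights, n_resample, seed=0):
    """Sampling importance resampling: resample prior draws with probability
    proportional to their importance weights (here, the likelihood)."""
    rng = np.random.default_rng(seed)
    w = np.exp(log_weights - np.max(log_weights))   # stabilize before normalizing
    idx = rng.choice(len(w), size=n_resample, replace=True, p=w / w.sum())
    return prior_samples[idx]

# In the paper's setup: 1,000,000 prior draws are resampled 100,000 times.
```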
The datasets generated and/or analyzed during the current study are available at https://github.com/meglickley/CFCbanks.
All code used in this work is available at https://github.com/meglickley/CFCbanks. All analyses were done in MATLAB.
Newman, P. A. et al. What would have happened to the ozone layer if chlorofluorocarbons (CFCs) had not been regulated? Atmos. Chem. Phys. 9, 2113–2128 (2009).
Solomon, S. et al. Emergence of healing in the Antarctic ozone layer. Science 353, 269–274 (2016).
WMO. Scientific Assessment of Ozone Depletion: 2018, Global Ozone Research and Monitoring Project Report No. 58, 588 pp. (World Meteorological Organization, Geneva, 2018).
Montzka, S. A. et al. An unexpected and persistent increase in global emissions of ozone-depleting CFC-11. Nature 557, 413 (2018).
Rigby, M. et al. Increase in CFC-11 emissions from eastern China based on atmospheric observations. Nature 569, 546–550 (2019).
Velders, G., Anderson, S. O., Daniel, J. S., Fahey, D. W. & McFarland, M. The importance of the Montreal Protocol in protecting climate. Proc. Natl Acad. Sci. 104, 4814–4819 (2007).
Daniel, J. S., Velders, G. J. M., Solomon, S., McFarland, M. & Montzka, S. A. Present and future sources and emissions of halocarbons: toward new constraints. J. Geophys. Res. Atmos. 112, D02301 (2007).
InforMEA. (2008). Available at: https://www.informea.org/en/decision/decision-xx7-environmentally-sound-management-banks-ozone-depleting-substances#decision-body-field.
Ashford, P., Clodic, D., McCulloch, A. & Kuijpers, L. Emission profiles from the foam and refrigeration sectors comparison with atmospheric concentrations. Part 1: Methodology and data. Int. J. Refrig. 27, 687–700 (2004).
UNEP Technology and Economic Assessment Panel. Progress Report p 86–87 (United Nations Environment Programme, Nairobi, Kenya, 2005).
UNEP. TEAP Task Force on Emissions Discrepancies Report (United Nations Environment Programme, Nairobi, Kenya, 2006).
Chipperfield, M. P. et al. Multimodel estimates of atmospheric lifetimes of long-lived ozone-depleting substances: present and future. J. Geophys. Res. Atmos. 119, 2555–2573 (2014).
IPCC/TEAP. Special Report: Safeguarding the Ozone Layer and the Global Climate System: Issues Related to Hydrofluorocarbons and Perfluorocarbons. Prepared by Working Groups I and III of the Intergovernmental Panel on Climate Change and the Technology and Economic Assessment Panel, 478 pp. (Cambridge University Press, New York, 2005).
WMO Scientific Assessment of Ozone Depletion: 2002, Global Ozone Research and Monitoring Project Report No. 47 (World Meteorological Organization, Geneva Switzerland, 2003).
UNEP. UNEP Technology and Economic Assessment Panel. Progress Report Vol. 1 (United Nations Environment Programme, Nairobi, Kenya, 2016).
Scheutz, C. & Kjeldsen, P. Capacity for biodegradation of CFCs and HCFCs in a methane oxidative counter-gradient laboratory system simulating landfill soil covers. Environ. Sci. Technol. 37, 5143–5149 (2003).
Stocker, T. F. et al. Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change (Cambridge University Press, 2013).
Climate Action Tracker. (2019). Available at: https://climateactiontracker.org/countries/eu/.
Velders, G. J. M. & Daniel, J. S. Uncertainty analysis of projections of ozone depleting substances: mixing ratios, EESC, ODPs, and GWPs. Atmos. Chem. Phys. 14, 2757–2776 (2014).
Poole, D. & Raftery, A. E. Inference for deterministic simulation models: the Bayesian melding approach. J. Am. Stat. Assoc. 95, 1244–1255 (2000).
Bates, S. C., Cullen, A. & Raftery, A. E. Bayesian uncertainty assessment in multicompartment deterministic simulation models for environmental risk assessment. Environmetrics . J. Int. Environmetrics Soc. 14, 355–371 (2003).
Hong, B., Strawderman, R. L., Swaney, D. P. & Weinstein, D. A. Bayesian estimation of input parameters of a nitrogen cycle model applied to a forested reference watershed, Hubbard Brook Watershed Six. Water Resour. Res. 41, W03007 (2005).
Clerbaux, C. et al. Long-lived compounds, Chapter 1 in Scientific Assessment of Ozone Depletion: 2006, Global Ozone Research and Monitoring Project Report No. 50. Scientific Assessment of Ozone Depletion: 2006, Global Ozone Research and Monitoring Project Report No. 50 (World Meteorological Organization, 2007).
Montzka, S. A. et al. Ozone-Depleting Substances (ODSs) and Related Chemicals, Chapter 1 in Scientific Assessment of Ozone Depletion (World Meteorological Organization, 2011).
Gamlen, P. H., Lane, B. C., Midgley, P. M. & Steed, J. M. The production and release to the atmosphere of CCl3F and CCl2F2 (chlorofluorocarbons CFC 11 and CFC 12). Atmos. Environ. 20, 1077–1085 (1986).
Landers Jr, F. P. The black market trade in chlorofluorocarbons: the Montreal Protocol makes banned refrigerants a hot commodity. Ga. J. Int. Comp. Law 26, https://digitalcommons.law.uga.edu/gjicl/vol26/iss2/7 (1996).
McCulloch, A., Midgley, P. M. & Ashford, P. Releases of refrigerant gases (CFC-12, HCFC-22 and HFC-134a) to the atmosphere. Atmos. Environ. 37, 889–902 (2003).
AFEAS (Alternative Fluorocarbons Environmental Acceptability Study). Production, Sales and Calculated Emissions of Fluorocarbons Through 2000. (2001).
Engel, A. et al. Update on Ozone-Depleting Substances (ODSs) and other gases of interest to the Montreal Protocol, Chapter 1 in Scientific Assessment of Ozone Depletion: 2018, Global Ozone Research and Monitoring Project. Report No. 58. (World Meteorological Organization, 2019).
Rigby, M. et al. Recent and future trends in synthetic greenhouse gas radiative forcing. Geophys. Res. Lett. 41, 2623–2630 (2014).
Rubin, D. B. Using the SIR algorithm to simulate posterior distributions (with discussion). Bayesian Stat. 3, 395–402 (1988).
UNEP TEAP (Technology and Economic Assessment Panel). Task Force Decision XX/8 Report: Assessment of Alternatives to HCFCs and HFCs and Update of the TEAP 2005 Supplement Report Data. Coordinated by L. Kuijpers and D. Verdonik (2009).
UNEP. Decision XXX/3 TEAP Task Force Report on unexpected emissions of Trichlorofluoromethane (CFC-11). Final Report, Vol. 1 (2019).
M.J.L. and S.S. gratefully acknowledge support by a grant from VoLo foundation.
Department of Earth, Atmospheric, and Planetary Sciences, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
Megan Lickley, Susan Solomon & Kane Stone
Civil and Environmental Engineering, Massachusetts Institute of Technology, Cambridge, MA, 02139-4307, USA
Sarah Fletcher
National Institute for Public Health and the Environment (RIVM), 3720, Bilthoven, the Netherlands
Guus J. M. Velders
Earth System Research Laboratory, National Oceanic and Atmospheric Administrations, Boulder, CO, 80305-3328, USA
John Daniel
School of Chemistry, University of Bristol, Bristol, BS8 1QU, UK
Matthew Rigby
Global Monitoring Division, Earth System Research Laboratory, National Oceanic and Atmospheric Administration, Boulder, CO, 80305, USA
Stephen A. Montzka
A/gent b.v. Consultancy, Venlo, Netherlands
Lambert J. M. Kuijpers
M.J.L., S.S., G.J.M.V. and J.D. conceptualized the work. M.J.L., S.S., S.F. and M.R. designed the work. M.J.L. conducted the analysis. G.J.M.V., J.D., M.R. and K.S. acquired the data. All authors contributed to the interpretation of the data. M.J.L., S.S. and S.F. drafted the manuscript. All authors contributed substantial revisions to the manuscript.
Correspondence to Megan Lickley.
Peer review information Nature Communications thanks Paul Ashford and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.
Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Lickley, M., Solomon, S., Fletcher, S. et al. Quantifying contributions of chlorofluorocarbon banks to emissions and impacts on the ozone layer and climate. Nat Commun 11, 1380 (2020). https://doi.org/10.1038/s41467-020-15162-7
Godbillon-Vey helicity and magnetic helicity in magnetohydrodynamics
PLA Topological Methods in MHD, Fluids, and Plasmas
G. M. Webb, A. Prasad, S. C. Anco, Q. Hu
Journal: Journal of Plasma Physics / Volume 85 / Issue 5 / October 2019
Published online by Cambridge University Press: 10 October 2019, 775850502
The Godbillon–Vey invariant occurs in homology theory and algebraic topology when conditions for a co-dimension 1 foliation of a three-dimensional manifold are satisfied. The magnetic Godbillon–Vey helicity invariant in magnetohydrodynamics (MHD) is a higher-order helicity invariant that occurs for flows in which the magnetic helicity density $h_{m}=\boldsymbol{A}\cdot\boldsymbol{B}=\boldsymbol{A}\cdot(\nabla\times\boldsymbol{A})=0$, where $\boldsymbol{A}$ is the magnetic vector potential and $\boldsymbol{B}$ is the magnetic induction. This paper obtains evolution equations for the magnetic Godbillon–Vey field $\boldsymbol{\eta}=\boldsymbol{A}\times\boldsymbol{B}/|\boldsymbol{A}|^{2}$ and the Godbillon–Vey helicity density $h_{\text{gv}}=\boldsymbol{\eta}\cdot(\nabla\times\boldsymbol{\eta})$ in general MHD flows in which either $h_{m}=0$ or $h_{m}\neq 0$. A conservation law for $h_{\text{gv}}$ occurs in flows for which $h_{m}=0$. For $h_{m}\neq 0$ the evolution equation for $h_{\text{gv}}$ contains a source term in which $h_{m}$ is coupled to $h_{\text{gv}}$ via the shear tensor of the background flow. The transport equation for $h_{\text{gv}}$ also depends on the electric field potential $\psi$, which is related to the gauge for $\boldsymbol{A}$, and takes its simplest form for the advected $\boldsymbol{A}$ gauge in which $\psi=\boldsymbol{A}\cdot\boldsymbol{u}$, where $\boldsymbol{u}$ is the fluid velocity. An application of the Godbillon–Vey magnetic helicity to nonlinear force-free magnetic fields used in solar physics is investigated. The possible uses of the Godbillon–Vey helicity in zero-helicity flows in ideal fluid mechanics, and in zero-helicity Lagrangian kinematics of three-dimensional advection, are discussed.
Respiratory syncytial virus hospitalisations among young children: a data linkage study
Namrata Prasad, E. Claire Newbern, Adrian A. Trenholme, Tim Wood, Mark G. Thompson, Nayyereh Aminisani, Q. Sue Huang, Cameron C. Grant
Published online by Cambridge University Press: 29 July 2019, e246
We aimed to provide comprehensive estimates of laboratory-confirmed respiratory syncytial virus (RSV)-associated hospitalisations. Between 2012 and 2015, active surveillance of acute respiratory infection (ARI) hospitalisations during winter seasons was used to estimate the seasonal incidence of laboratory-confirmed RSV hospitalisations in children aged <5 years in Auckland, New Zealand (NZ). Incidence rates were estimated by fine age group, ethnicity and socio-economic status (SES) strata. Additionally, RSV disease estimates determined through active surveillance were compared to rates estimated from hospital discharge codes. There were 5309 ARI hospitalisations among children during the study period, of which 3923 (73.9%) were tested for RSV and 1597 (40.7%) were RSV-positive. The seasonal incidence of RSV-associated ARI hospitalisations, once corrected for non-testing, was 6.1 (95% confidence intervals 5.8–6.4) per 1000 children <5 years old. The highest incidence was among children aged <3 months. Being of indigenous Māori or Pacific ethnicity or living in a neighbourhood with low SES independently increased the risk of an RSV-associated hospitalisation. RSV hospital discharge codes had a sensitivity of 71% for identifying laboratory-confirmed RSV cases. RSV infection is a leading cause of hospitalisation among children in NZ, with significant disparities by ethnicity and SES. Our findings highlight the need for effective RSV vaccines and therapies.
Infantile-Onset Multisystem Neurologic, Endocrine, and Pancreatic Disease: Case and Review
Christine Le, Asuri N. Prasad, C. Anthony Rupar, Derek Debicki, Andrea Andrade, Chitra Prasad
Journal: Canadian Journal of Neurological Sciences / Volume 46 / Issue 4 / July 2019
We report three brothers born to consanguineous parents of Syrian descent, with a homozygous novel c.324G>A (p.W108*) mutation in PTRH2 that encodes peptidyl-tRNA hydrolase 2, causing infantile-onset multisystem neurologic, endocrine, and pancreatic disease (IMNEPD). We describe the core clinical features of postnatal microcephaly, motor and language delay with regression, ataxia, and hearing loss. Additional features include epileptic seizures, pancreatic insufficiency, and peripheral neuropathy. Clinical phenotyping enabled a targeted approach to the investigation and identification of a novel homozygous nonsense mutation in PTRH2, c.324G>A (p.W108*). We compare our patients with those recently described and review the current literature for IMNEPD.
Interactive effects of age and respiratory virus on severe lower respiratory infection
N. Prasad, A. A. Trenholme, Q. S. Huang, M. G. Thompson, N. Pierse, M. A. Widdowson, T. Wood, R. Seeds, S. Taylor, C.C. Grant, E. C. Newbern, SHIVERS team
Journal: Epidemiology & Infection / Volume 146 / Issue 14 / October 2018
Published online by Cambridge University Press: 26 July 2018, pp. 1861-1869
We investigated risk factors for severe acute lower respiratory infections (ALRI) among hospitalised children <2 years, with a focus on the interactions between virus and age. Statistical interactions between age and respiratory syncytial virus (RSV), influenza, adenovirus (ADV) and rhinovirus on the risk of ALRI outcomes were investigated. Of 1780 hospitalisations, 228 (12.8%) were admitted to the intensive care unit (ICU). The median (range) length of stay (LOS) in hospital was 3 (1–27) days. An increase of 1 month of age was associated with a decreased risk of ICU admission (rate ratio (RR) 0.94; 95% confidence intervals (CI) 0.91–0.98) and with a decrease in LOS (RR 0.96; 95% CI 0.95–0.97). Associations between RSV, influenza, ADV positivity and ICU admission and LOS were significantly modified by age. Children <5 months old were at the highest risk from RSV-associated severe outcomes, while children >8 months were at greater risk from influenza-associated ICU admissions and long hospital stay. Children with ADV had increased LOS across all ages. In the first 2 years of life, the effects of different viruses on ALRI severity varies with age. Our findings help to identify specific ages that would most benefit from virus-specific interventions such as vaccines and antivirals.
P.134 Infantile Onset Multisystem Neurologic, Endocrine and Pancreatic Disease: case series and review
C Le, AN Prasad, D Debicki, A Andrade, AC Rupar, C Prasad
Journal: Canadian Journal of Neurological Sciences / Volume 45 / Issue s2 / June 2018
Published online by Cambridge University Press: 27 June 2018, p. S51
Background: We report three brothers born to consanguineous parents of Syrian descent with a novel homozygous c.324G>A (p.W108*) mutation in PTRH2 that encodes mitochondrial peptidyl-tRNA hydrolase 2. Mutations in PTRH2 have recently been identified in the autosomal recessive condition, Infantile Onset Multisystem Neurologic, Endocrine and Pancreatic Disease (IMNEPD). To our knowledge, this is the first case of IMNEPD described in a Canadian centre. Methods: Clinical phenotyping enabled a targeted approach in which all exons of PTRH2 were sequenced. We identified a novel mutation and compared our patients with those recently described. Results: We identified a homozygous nonsense mutation in PTRH2, c.324G>A (p.W108*). This G to A mutation results in a premature stop at codon 108 that produces a truncated protein, removing most of the amino acids at the enzymatic active site. This mutation is not listed in the human Gene Mutation Database Cardiff, NCBI dbSNP, 1000 Genomes, Exome Variant Server or ClinVar and is a rare variant listed in gnomAD. Conclusions: In IMNEPD, nonsense mutations in PTRH2 appear to cause severe disease with postnatal microcephaly, neurodevelopmental regression, and ataxia with additional features of seizures, peripheral neuropathy, and pancreatic dysfunction, whereas missense mutations may produce a milder phenotype. The spectrum exhibited by our patients suggests variable expressivity with PTRH2 mutations.
P.017 Results of a Pilot feasibility study to develop reduce wait times strategy in the evaluation of children with new onset epilepsy
JA Mailo, M Diebold, E Mazza, P Guertjens, H Gangam, S Levin, C Campbell, AN Prasad
Background: The goal was to understand factors leading to prolonged wait times for neurological assessment of children with new-onset seizures. A second objective was to develop an innovative approach to patient flow-through and achieve a reduction in waiting times utilizing limited resources.
Methods: (1) audit of the referrals, flow-through and wait times; (2) identification of bottlenecks; (3) development of a triaging strategy (suspected febrile seizures and non-epileptic events; suspected benign and absence epilepsies; suspected other focal epilepsies, generalized epilepsies and epilepsy under 2 years); (4) initiation of early telephone contact and support; (5) development of a ketogenic diet
Results: Using a triaging strategy and focusing on timely access to investigations, wait times for clinic evaluations were shortened despite larger numbers of referrals (mean wait time reduced from 179 to 91 days). Limiting factors, such as an increase in referral numbers and attrition in support staff, interfered with the sustainability of the reduced wait times achieved in the initial phase of the program. Conclusions: This pilot study highlights the effectiveness of an innovative triaging strategy and improvements in patient flow-through in achieving the goals of reducing wait times for clinical evaluation and providing timely investigations to improve care for children with new-onset seizures. Insights into the limitations of such strategies and the factors determining sustainability are discussed.
Systematic review of infant and young child complementary feeding practices in South Asian families: the Pakistan perspective
Logan Manikam, Anika Sharmila, Abina Dharmaratnam, Emma C Alexander, Jia Ying Kuah, Ankita Prasad, Sonia Ahmed, Raghu Lingam, Monica Lakhanpaul
Journal: Public Health Nutrition / Volume 21 / Issue 4 / March 2018
Print publication: March 2018
Suboptimal nutrition among children remains a problem among South Asian (SA) families. Appropriate complementary feeding (CF) practices can greatly reduce this risk. Thus, we undertook a systematic review of studies assessing CF (timing, dietary diversity, meal frequency and influencing factors) in children aged <2 years in Pakistan.
Searches between January 2000 and June 2016 in MEDLINE, EMBASE, Global Health, Web of Science, OVID Maternity & Infant Care, CINAHL, Cochrane Library, BanglaJOL, POPLINE and WHO Global Health Library. Eligibility criteria: primary research on CF practices in SA children aged 0–2 years and/or their families. Search terms: 'children', 'feeding' and 'Asians' with their derivatives. Two researchers undertook study selection, data extraction and quality appraisal (EPPI-Centre Weight of Evidence).
From 45 712 results, seventeen studies were included. Despite adopting the WHO Infant and Young Child Feeding guidelines, suboptimal CF was found in all studies. Nine of fifteen studies assessing timing recorded CF introduced between 6 and 9 months. Five of nine observed dietary diversity across four of seven food groups; and two of four, minimum meal frequency in over 50 % of participants. Influencing factors included lack of CF knowledge, low maternal education, socio-economic status and cultural beliefs.
This is the first systematic review to evaluate CF practices in Pakistan. Campaigns to change health and nutrition behaviour are needed to meet the substantial unmet needs of these children.
Co-circulation and co-infections of all dengue virus serotypes in Hyderabad, India 2014
K. Vaddadi, C. Gandikota, P. K. Jain, V. S. V. Prasad, M. Venkataramana
Journal: Epidemiology & Infection / Volume 145 / Issue 12 / September 2017
The burden of dengue virus infections has increased globally during recent years. Though India is considered a dengue hyper-endemic country, limited data are available on disease epidemiology. The present study includes molecular characterization of dengue virus strains that occurred in Hyderabad, India, during the year 2014. A total of 120 febrile cases were recruited for this study, which included only children; 41 were serologically confirmed as dengue-positive infections using non-structural (NS1) and/or IgG/IgM ELISA tests. RT-PCR, nucleotide sequencing and evolutionary analyses were carried out to identify the circulating serotypes/genotypes. The data indicated a high percentage of severe dengue (63%) in primary infections. Simultaneous circulation of all four serotypes and co-infections were observed for the first time in Hyderabad, India. In total, 15 patients were co-infected with more than one dengue serotype, and 12 (80%) of them had severe dengue. One of the striking findings of the present study is the identification of serotype Den-1, the first report from this region; this strain showed close relatedness to the Thailand 1980 strains but not to any of the strains reported from India until now. Phylogenetically, all four strains of the present study showed close relatedness to strains that are reported to be highly virulent.
Propagation of SH-Waves Through Non Planer Interface between Visco-Elastic and Fibre-Reinforced Solid Half-Spaces
B. Prasad, P. C. Pal, S. Kundu
Journal: Journal of Mechanics / Volume 33 / Issue 4 / August 2017
In the propagation of seismic waves through layered media, the boundaries play a crucial role. The boundaries separating the different layers of the earth are irregular in nature and not perfectly plane. It is, therefore, necessary to take into account the corrugation of the boundaries when dealing with the problem of reflection and refraction of seismic waves. The present study explores the reflection and refraction phenomena of SH-waves at a corrugated interface between a visco-elastic half-space and a fibre-reinforced half-space. The method of approximation given by Rayleigh is adopted, and the expressions for the reflection and transmission coefficients are obtained in closed form for the first- and second-order approximations of the corrugation. The closed-form formulae for these coefficients are presented for a corrugated interface of periodic shape (cosine-law interface). It is found that these coefficients depend upon the amplitude of corrugation of the boundary, the angle of incidence and the frequency of the incident wave. Numerical computations for a particular type of corrugated interface are performed and a number of graphs are plotted. Some special cases are derived.
By Hamid M. Abdolmaleky, Cory Adamson, Paola Allavena, Dimitrios Anastasiou, Johanna Apfel, Surinder K. Batra, Mark E. Burkard, Amancio Carnero, Michael J. Clemens, Jeanette Gowen Cook, Isabel Dominguez, Jeremy S. Edwards, Wafik S. El-Deiry, Androulla Elia, Mohammad R. Eskandari, Aurora Esquela-Kerscher, Manel Esteller, Rob M. Ewing, Douglas V. Faller, Kristopher Frese, Xijin Ge, Giovanni Germano, Daniel A. Haber, William C. Hahn, Antoine Ho, Christine Iacobuzio-Donahue, Sergii Ivakhno, Prasad V. Jallepalli, Rosanne Jones, Sharyn Katz, Arnaud Krebs, Karl Krueger, Arthur W. Lambert, Adam Lerner, Holly Lewis, Jason W. Locasale, Giselle Y. López, Shyamala Maheswaran, Alberto Mantovani, José Ignacio Martín-Subero, Simon J. Morley, Oliver Müller, Kathleen R. Nevis, Sait Ozturk, Panagiotis Papageorgis, Jignesh R. Parikh, Steven M. Powell, Kimberly L. Raiford, Andrew M. Rankin, Patricia Reischmann, Simon Rosenfeld, Marc Samsky, Anthony Scott, Shantibhusan Senapati, Yashaswi Shrestha, Anurag Singh, Rakesh K. Singh, Gromoslaw A. Smolen, Sudhir Srivastava, Simon Tavaré, Sam Thiagalingam, László Tora, David Tuveson, Asad Umar, Matthew G. Vander Heiden, Cyrus Vaziri, Zhenghe John Wang, Kevin Webster, Chen Khuan Wong, Yu Xia, Hai Yan, Jian Yu, Lihua Yu, Min Yu, Lin Zhang, Jin-Rong Zhou
Edited by Sam Thiagalingam
Book: Systems Biology of Cancer
Print publication: 09 April 2015, pp ix-xiv
Variability in the Diagnosis and Treatment of Group A Streptococcal Pharyngitis by Primary Care Pediatricians
Julie L. Fierro, Priya A. Prasad, A. Russell Localio, Robert W. Grundmeier, Richard C. Wasserman, Theoklis E. Zaoutis, Jeffrey S. Gerber
To compare practice patterns regarding the diagnosis and management of streptococcal pharyngitis across pediatric primary care practices.
All encounters at 25 pediatric primary care practices sharing an electronic health record.
Streptococcal pharyngitis was defined by an International Classification of Diseases, Ninth Revision code for acute pharyngitis, positive laboratory test, antibiotic prescription, and absence of an alternative bacterial infection. Logistic regression models standardizing for patient-level characteristics were used to compare diagnosis, testing, and broad-spectrum antibiotic treatment for children with pharyngitis across practices. Fixed-effects models and likelihood ratio tests were conducted to analyze within-practice variation.
Of 399,793 acute encounters in 1 calendar year, there were 52,658 diagnoses of acute pharyngitis, including 12,445 diagnoses of streptococcal pharyngitis. After excluding encounters by patients with chronic conditions and standardizing for age, sex, insurance type, and race, there was significant variability across and within practices in the diagnosis and testing for streptococcal pharyngitis. Excluding patients with antibiotic allergies or prior antibiotic use, off-guideline antibiotic prescribing for confirmed group A streptococcal pharyngitis ranged from 1% to 33% across practices (P < .001). At the clinician level, 13 of 25 sites demonstrated significant within-practice variability in off-guideline antibiotic prescribing (P ≤ .05). Only 18 of the 222 clinicians in the network accounted for half of all off-guideline antibiotic prescribing.
Significant variability in the diagnosis and treatment of pharyngitis exists across and within pediatric practices, which cannot be explained by relevant clinical or demographic factors. Our data support clinician-targeted interventions to improve adherence to prescribing guidelines for this common condition.
Case of Multiple Sulfatase Deficiency and Ocular Albinism: A Diagnostic Odyssey
Chitra Prasad, C. Anthony Rupar, Craig Campbell, Melanie Napier, David Ramsay, K.Y. Tay, Sapna Sharan, Asuri N. Prasad
Journal: Canadian Journal of Neurological Sciences / Volume 41 / Issue 5 / September 2014
Multiple sulfatase deficiency (MSD) is a rare autosomal recessive inborn error of lysosomal metabolism. The clinical phenotypic spectrum encompasses overlapping features of variable severity and is suggestive of individual single sulfatase deficiencies (i.e., metachromatic leukodystrophy, mucopolysaccharidosis, and X-linked ichthyosis).
We describe a 3-year-old male with severe hypotonia, developmental regression and progressive neurodegeneration, coarse facial features, nystagmus (from ocular albinism), and dysmyelinating motor sensory neuropathy. Ethics approval was obtained from Western University, Ontario.
Extensive investigative work-up identified deficiencies of multiple sulfatases: heparan sulfate sulfamidase: 6.5 nmoles/mg/protein/17 hour (reference 25.0-75.0), iduronate-2-sulfate sulfatase: 9 nmol/mg/protein/4 hour (reference 31-110), and arylsulfatase A: 3.8 nmoles/hr/mg protein (reference 22-50). The identification of compound heterozygous pathogenic mutations in the SUMF1 gene c.836 C>T (p.A279V) and c.1045C>T (p.R349W) confirmed the diagnosis of MSD.
The complex clinical manifestations of MSD and the unrelated coexistence of ocular albinism as in our case can delay diagnosis. Genetic counselling should be provided to all affected families.
A tale of two rings
Deepa Prasad, Manish Bansal, Ravi C. Ashwath
Journal: Cardiology in the Young / Volume 24 / Issue 4 / August 2014
We describe a rare case of double vascular ring diagnosed with cardiac magnetic resonance imaging in a patient with ventricular septal defect, pulmonary stenosis, and right aortic arch.
MELAS: A Multigenerational Impact of the MTTL1 A3243G MELAS Mutation
M. Prasad, B. Narayan, A.N. Prasad, C.A. Rupar, S. Levin, J. Kronick, D. Ramsay, K.Y. Tay, C. Prasad
Journal: Canadian Journal of Neurological Sciences / Volume 41 / Issue 2 / March 2014
The maternally inherited MTTL1 A3243G mutation in the mitochondrial genome causes MELAS (mitochondrial encephalopathy, lactic acidosis, and stroke-like episodes), a condition that is multisystemic but affects primarily the nervous system. Significant intra-familial variation in phenotype and severity of disease is well recognized.
A retrospective and ongoing study of an extended family carrying the MTTL1 A3243G mutation with multiple symptomatic individuals. Tissue heteroplasmy is reviewed based on the clinical presentations, imaging studies, and laboratory findings in affected individuals and on pathological material obtained at autopsy in two of the family members.
There were seven affected individuals out of thirteen members in this three-generation family who each carried the MTTL1 A3243G mutation. The clinical presentations were varied, with symptoms ranging from hearing loss, migraines, dementia, seizures, diabetes, and visual manifestations to stroke-like episodes. Three of the family members are deceased from MELAS or from complications related to MELAS.
The results of the clinical, pathological and radiological findings in this family provide strong support to the current concepts of maternal inheritance, tissue heteroplasmy and molecular pathogenesis in MELAS. Neurologists (both adult and paediatric) are the most likely to encounter patients with MELAS in their practice. Genetic counselling is complex in view of maternal inheritance and heteroplasmy. Newer therapeutic options such as arginine are being used for acute and preventative management of stroke-like episodes.
Array CGH Analysis and Developmental Delay: A Diagnostic Tool for Neurologists
F. Cameron, J. Xu, J. Jung, C. Prasad
Journal: Canadian Journal of Neurological Sciences / Volume 40 / Issue 6 / November 2013
Print publication: November 2013
Developmental delay occurs in 1–3% of the population, with unknown etiology in approximately 50% of cases. Initial genetic work-up for developmental delay previously included chromosome analysis and subtelomeric FISH (fluorescent in situ hybridization). Array Comparative Genomic Hybridization (aCGH) has emerged as a tool to detect genetic copy number changes and uniparental disomy and is the most sensitive test in providing an etiological diagnosis in developmental delay. aCGH allows for the provision of prognosis and recurrence risks, improves access to resources, helps limit further investigations and may alter medical management in many cases. aCGH has led to the delineation of novel genetic syndromes associated with developmental delay. An illustrative case of a 31-year-old man with long-standing global developmental delay and a recently diagnosed 4q21 deletion syndrome with a deletion of a 20.8 Mb genomic interval is provided. aCGH is now recommended as a first-line test in children and adults with undiagnosed developmental delay and congenital anomalies.
Why nature really chose phosphate
Shina C. L. Kamerlin, Pankaz K. Sharma, Ram B. Prasad, Arieh Warshel
Journal: Quarterly Reviews of Biophysics / Volume 46 / Issue 1 / February 2013
Published online by Cambridge University Press: 15 January 2013, pp. 1-132
Phosphoryl transfer plays key roles in signaling, energy transduction, protein synthesis, and maintaining the integrity of the genetic material. On the surface, it would appear to be a simple nucleophile displacement reaction. However, this simplicity is deceptive, as, even in aqueous solution, the low-lying d-orbitals on the phosphorus atom allow for eight distinct mechanistic possibilities, before even introducing the complexities of the enzyme catalyzed reactions. To further complicate matters, while powerful, traditional experimental techniques such as the use of linear free-energy relationships (LFER) or measuring isotope effects cannot make unique distinctions between different potential mechanisms. A quarter of a century has passed since Westheimer wrote his seminal review, 'Why Nature Chose Phosphate' (Science 235 (1987), 1173), and a lot has changed in the field since then. The present review revisits this biologically crucial issue, exploring both relevant enzymatic systems as well as the corresponding chemistry in aqueous solution, and demonstrating that the only way key questions in this field are likely to be resolved is through careful theoretical studies (which of course should be able to reproduce all relevant experimental data). Finally, we demonstrate that the reason that nature really chose phosphate is due to interplay between two counteracting effects: on the one hand, phosphates are negatively charged and the resulting charge-charge repulsion with the attacking nucleophile contributes to the very high barrier for hydrolysis, making phosphate esters among the most inert compounds known. However, biology is not only about reducing the barrier to unfavorable chemical reactions. That is, the same charge-charge repulsion that makes phosphate ester hydrolysis so unfavorable also makes it possible to regulate, by exploiting the electrostatics. This means that phosphate ester hydrolysis can not only be turned on, but also be turned off, by fine tuning the electrostatic environment and the present review demonstrates numerous examples where this is the case. Without this capacity for regulation, it would be impossible to have for instance a signaling or metabolic cascade, where the action of each participant is determined by the fine-tuned activity of the previous piece in the production line. This makes phosphate esters the ideal compounds to facilitate life as we know it.
Recurrent Encephalopathy: NAGS (N-Acetylglutamate Synthase) Deficiency in Adults
A. Cartagena, A.N. Prasad, C.A. Rupar, M. Strong, M. Tuchman, N. Ah Mew, C. Prasad
Journal: Canadian Journal of Neurological Sciences / Volume 40 / Issue 1 / January 2013
Published online by Cambridge University Press: 23 September 2014, pp. 3-9
N-acetyl-glutamate synthase (NAGS) deficiency is a rare autosomal recessive urea cycle disorder (UCD) that uncommonly presents in adulthood. Adult presentations of UCDs include confusional episodes, neuropsychiatric symptoms and encephalopathy. To date, there have been no detailed neurological descriptions of an adult-onset presentation of NAGS deficiency. In this review we examine the clinical presentation and management of UCDs with an emphasis on NAGS deficiency. An illustrative case is provided. Plasma ammonia levels should be measured in all adult patients with unexplained encephalopathy, as treatment can be potentially life-saving. Availability of N-carbamylglutamate (NCG; carglumic acid) has made protein restriction largely unnecessary in treatment regimens currently employed. Genetic counselling remains an essential component of management of NAGS deficiency.
By Robert C. Basner, Carl Bazil, Lee J. Brooks, Sean M. Caples, Kelly A. Carden, Ronald D. Chervin, Christopher Cielo, David G. Davila, Katherine A. Dudley, Judy Fetterolf, W. Ward Flemons, Neil Freedman, Christian Guilleminault, Fauziya Hassan, Shelley Hershner, David M. Hiestand, Mithri Junna, Kristen Kelly-Pieper, Douglas Kirsch, Brian B. Koo, Carin Lamm, Raman Malhotra, Meghna P. Mansukhani, Carole L. Marcus, B. Marshall, Jean K. Matheson, Timothy I. Morgenthaler, Gökhan M. Mutlu, Irina Ok, Vidya Pai, Winnie C. Pao, Sairam Parthasarathy, Shalini Paruthi, Nimesh Patel, Sachin R. Pendharkar, Ravi K. Persaud, Bharati Prasad, Stuart F. Quan, Satish C. Rao, Patti Reed, Alcibiades Rodriguez, Dennis Rosen, Vijay Seelall, Anita Valanju Shelgikar, Jeffrey J. Stanley, Kingman Strohl, Shannon S. Sullivan, Kevin A. Thomas, Robert Thomas, John R. Wheatley, Lisa Wolfe, Peter J.-C. Wu, Motoo Yamauchi
Edited by Robert C. Basner
Book: Case Studies in Polysomnography Interpretation
Print publication: 18 October 2012, pp x-xii
Persistent infection with neurotropic herpes viruses and cognitive impairment
A. M. M. Watson, K. M. Prasad, L. Klei, J. A. Wood, R. H. Yolken, R. C. Gur, L. D. Bradford, M. E. Calkins, J. Richard, N. Edwards, R. M. Savage, T. B. Allen, J. Kwentus, J. P. McEvoy, A. B. Santos, H. W. Wiener, R. C. P. Go, R. T. Perry, H. A. Nasrallah, R. E. Gur, B. Devlin, V. L. Nimgaonkar
Journal: Psychological Medicine / Volume 43 / Issue 5 / May 2013
Published online by Cambridge University Press: 14 September 2012, pp. 1023-1031
Print publication: May 2013
Herpes virus infections can cause cognitive impairment during and after acute encephalitis. Although chronic, latent/persistent infection is considered to be relatively benign, some studies have documented cognitive impairment in exposed persons that is untraceable to encephalitis. These studies were conducted among schizophrenia (SZ) patients or older community dwellers, among whom it is difficult to control for the effects of co-morbid illness and medications. To determine whether the associations can be generalized to other groups, we examined a large sample of younger control individuals, SZ patients and their non-psychotic relatives (n=1852).
Using multivariate models, cognitive performance was evaluated in relation to exposures to herpes simplex virus type 1 (HSV-1), herpes simplex virus type 2 (HSV-2) and cytomegalovirus (CMV), controlling for familial and diagnostic status and sociodemographic variables, including occupation and educational status. Composite cognitive measures were derived from nine cognitive domains using principal components of heritability (PCH). Exposure was indexed by antibodies to viral antigens.
PCH1, the most heritable component of cognitive performance, declines with exposure to CMV or HSV-1 regardless of case/relative/control group status (p = 1.09 × 10−5 and 0.01 respectively), with stronger association with exposure to multiple herpes viruses (β = −0.25, p = 7.28 × 10−10). There were no significant interactions between exposure and group status.
Latent/persistent herpes virus infections can be associated with cognitive impairments regardless of other health status.
The Aid Triangle: Recognising the Human Dynamics of Dominance, Justice and Identity
Malcolm MacLachlan, Stuart C. Carr, Eilish McAuliffe, Biman Prasad
Journal: Journal of Pacific Rim Psychology / Volume 5 / Issue 1 / 01 August 2011
Accuracy of estimated breeding values with genomic information on males, females, or both: an example on broiler chicken
Daniela A. L. Lourenco, Breno O. Fragomeni, Shogo Tsuruta, Ignacio Aguilar, Birgit Zumbach, Rachel J. Hawken, Andres Legarra & Ignacy Misztal
As more and more genotypes become available, accuracy of genomic evaluations can potentially increase. However, the impact of genotype data on accuracy depends on the structure of the genotyped cohort. For populations such as dairy cattle, the greatest benefit has come from genotyping sires with high accuracy, whereas the benefit due to adding genotypes from cows was smaller. In broiler chicken breeding programs, males have fewer progeny than dairy bulls, females have more progeny than dairy cows, and most production traits are recorded for both sexes. Consequently, genotyping both sexes in broiler chickens may be more advantageous than in dairy cattle.
We studied the contribution of genotypes from males and females using a real dataset with genotypes on 15 723 broiler chickens. Genomic evaluations used three training sets that included only males (4648), only females (8100), and both sexes (12 748). Realized accuracies of genomic estimated breeding values (GEBV) were used to evaluate the benefit of including genotypes for different training populations on genomic predictions of young genotyped chickens.
Using genotypes on males, the average increase in accuracy of GEBV over pedigree-based EBV for males and females was 12 and 1 percentage points, respectively. Using female genotypes, this increase was 1 and 18 percentage points, respectively. Using genotypes of both sexes increased accuracies by 19 points for males and 20 points for females. For two traits with similar heritabilities and amounts of information, realized accuracies from cross-validation were lower for the trait that was under strong selection.
Overall, genotyping males and females improves predictions of all young genotyped chickens, regardless of sex. Therefore, when males and females both contribute to genetic progress of the population, genotyping both sexes may be the best option.
Large amounts of genomic information have accumulated for nearly all livestock species and its use has led to increases in the accuracy of estimated breeding values (EBV) [1]. These increases are mainly due to improved inferences on relationships between individuals and linkage disequilibrium (LD) between quantitative trait loci (QTL) and markers [2]. Higher accuracies are obtained when relationships between animals in the training population are weak and the relationship between the training and validation populations is high [3].
Questions about how the genotyped population should be structured and which animals should be used in the training population are still a matter of debate in all species. In dairy cattle, for example, phenotypes for production traits are collected on females and combined with genotypes of males for successful genomic evaluation. According to Rendel and Robertson [4], genetic progress in a population is a combination of the progress in each of the four paths of selection. In dairy cattle, selection intensities are highest for elite sires of bulls and elite dams of bulls [5] because strong selection pressure can be applied in both these pathways. With genomic selection, very young females can be chosen (e.g., even heifers) as dams of bulls, and elite cows are often genotyped [6]. Although accurate genomic breeding values for females are highly relevant, including female genotypes and phenotypes in the training population resulted in very small increases in the accuracy of evaluation of young dairy bulls [6, 7]. For instance, adding 17 000 female genotypes to 7000 male genotypes increased the accuracy of evaluation of young bulls from 0.70 to 0.72 [8]. This small increase is due to female phenotypes being largely redundant, since these phenotypes are already included in their sire's information, either explicitly in the form of pseudo-phenotypes, or implicitly, as in the single-step genomic best linear unbiased predictor (ssGBLUP). However, in dairy cattle, genotyping females is useful for intra-herd selection of females [9] and for identifying elite females to produce future sires.
In species such as broiler chickens or pigs, the number of progeny is smaller per male and larger per female than in dairy cattle. Therefore, the impact of female paths on genetic progress is potentially stronger. Also, when phenotypes are recorded on both sexes (e.g., body weight), then not only can female phenotypes contribute to male evaluations but male phenotypes can also contribute to female evaluations. For this reason, genotyping females in these species can make a substantial contribution to accuracy and genetic progress.
Realized accuracies of genetic values can be obtained from the correlation between true and estimated breeding values for the validation population [10]. There are large discrepancies between theoretical accuracy (e.g., by inversion of the coefficient matrix of the mixed model equations) and realized accuracy of EBV in populations under selection, where the latter is noticeably smaller [11]. For genetic values obtained through genomic BLUP methods (GBLUP), the accuracies that are obtained by inversion of the coefficient matrix depend on the assumed allele frequencies [12], although scaling of genomic relationships for compatibility with pedigree relationships [13, 14] reduces this dependency.
The objective of our work was to analyze a commercial broiler chicken population and determine the gains in the accuracy of genomic evaluations on males and females due to the use of genotypes and phenotypes of males, females, or both sexes.
The dataset and variance components used in this study were provided by Cobb-Vantress Inc. (Siloam Springs, AR). The dataset consisted of phenotypes recorded on purebred broiler chickens across four generations for four production traits referred to as T1, T2, T3, and T4; heritabilities for all traits ranged from 0.22 to 0.49, genetic correlations ranged from −0.02 to 0.21 and phenotypic correlations from −0.02 to 0.46 (Table 1). The first trait (T1) was recorded on 196 613 birds, whereas the three other traits (T2, T3 and T4) were recorded on 26, 5, and 26 % of the birds with records for T1, respectively. Traits T1 and T3 were measured on birds at 35 days of age, whereas traits T2 and T4 were measured within a 2-week period after 35 days of age. Multiple measurements for T2 and T4 were combined into a unique record for T2 and for T4. Thus, each trait was analyzed as a single record. The number of birds in the pedigree relationship matrix (A) was 198 915.
Table 1 Heritabilities (diagonal), genetic correlations (above the diagonal), and phenotypic correlations (below the diagonal) for the four traits
Genotypes from the 60 k SNP (single nucleotide polymorphism) panel developed by Groenen et al. [15] were available for 15 723 birds. Quality control of genomic data retained SNPs with call rates greater than 0.9, minor allele frequencies greater than 0.05, and departures from Hardy-Weinberg equilibrium (difference between expected and observed frequencies of heterozygotes) less than 0.15. Parent-progeny pairs were tested for discrepant homozygous SNPs, and progenies were eliminated when the conflict rate was greater than 1 %. Also, SNPs with an unknown position or located on sex chromosomes were excluded from the analyses. After quality editing, 39 102 autosomal SNPs for 15 723 birds remained for analysis. The genotype file was split by sex and the three genotype datasets (males, females, and both sexes) were used in different analyses. The total numbers of genotyped males and females were 6149 and 9574, respectively, and the numbers of genotyped birds with phenotypes for each trait are in Table 2.
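To make the filtering rules above concrete, here is a minimal Python sketch of that kind of SNP quality control; the thresholds (call rate > 0.9, MAF > 0.05, Hardy-Weinberg departure < 0.15) come from the text, but the 0/1/2 genotype coding, the function name, and the toy data are assumptions, and the parent-progeny conflict and map-position checks are not shown.

```python
import numpy as np

def qc_filter_snps(geno, call_rate_min=0.9, maf_min=0.05, hwe_dev_max=0.15):
    """Flag SNPs passing QC in a (birds x SNPs) matrix coded 0/1/2, np.nan = missing.

    Thresholds follow the text: call rate > 0.9, minor allele frequency > 0.05,
    and |observed - expected heterozygote frequency| < 0.15. The coding, layout,
    and return format are illustrative assumptions."""
    keep = []
    for j in range(geno.shape[1]):
        g = geno[:, j]
        called = g[~np.isnan(g)]
        if called.size == 0 or called.size / g.size <= call_rate_min:
            keep.append(False)
            continue
        p = called.mean() / 2.0                 # frequency of the allele coded "1"
        maf = min(p, 1.0 - p)
        obs_het = np.mean(called == 1)          # observed heterozygote frequency
        exp_het = 2.0 * p * (1.0 - p)           # Hardy-Weinberg expectation
        keep.append(maf > maf_min and abs(obs_het - exp_het) < hwe_dev_max)
    return np.array(keep)

# toy example: 6 birds x 4 SNPs (last SNP has a missing call, third is monomorphic)
geno = np.array([[0, 1, 2, np.nan],
                 [1, 1, 2, 0],
                 [2, 0, 2, 1],
                 [1, 2, 2, 1],
                 [0, 1, 2, 2],
                 [1, 1, 2, 0]], dtype=float)
print(qc_filter_snps(geno))  # [ True False False False]
```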
Table 2 Number of genotyped birds with phenotypes for each trait
The birds that were genotyped were chosen randomly or based on phenotypes, depending on the trait. The dataset available for this study was split into training and validation populations according to date of birth. Thus, 2975 birds born in generation 4 were chosen as validation animals and their phenotypes were removed from the analyses.
Model and analysis
For traditional pedigree-based and genomic evaluations, the following multiple-trait animal model was used:
$$ \mathbf{y}_t = \mathbf{X}_t\mathbf{b}_t + \mathbf{Z}_t\mathbf{u}_t + \mathbf{e}_t, $$
where t is for traits T1 to T4; y, b, u, and e are vectors of phenotypes, fixed effects of sex and generation-hatch interaction, random additive direct genetic effects, and random residuals, respectively; X and Z are incidence matrices for b and u, respectively. A vector of random maternal permanent environmental effects was added for T1. Although sex effect was fitted in the model, no sexual dimorphism was considered and the traits on males and females were assumed to have a genetic correlation of 1, which may not always be the case in practice [16].
Genomic evaluations were conducted using ssGBLUP. In this method, the inverse of the numerator relationship matrix ($\mathbf{A}^{-1}$) in the mixed model equations was replaced by the inverse of the realized relationship matrix ($\mathbf{H}^{-1}$) [17, 18], which was written as:
$$ \mathbf{H}^{-1} = \mathbf{A}^{-1} + \begin{bmatrix} \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \left(\alpha\left(a + b\mathbf{G}\right) + \beta\mathbf{A}_{22}\right)^{-1} - \mathbf{A}_{22}^{-1} \end{bmatrix}, $$
where $\mathbf{G}$ is the genomic relationship matrix that was constructed as in VanRaden [13], using observed allele frequencies, and $\mathbf{A}_{22}^{-1}$ is the inverse of the pedigree-based relationship matrix for genotyped animals. Weights were assigned for $\mathbf{G}$ (α = 0.95) and $\mathbf{A}_{22}$ (β = 0.05) to avoid singularity problems [13]. Coefficients $a$ and $b$ were used to match pedigree and genomic relationships [14, 19, 20]. Different $\mathbf{H}$ matrices were used based on different $\mathbf{G}$ that contained 2975 birds from the validation population plus one of the three training populations: males (n = 4648), females (n = 8100), and both sexes (n = 12 748).
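As a rough numerical illustration of how the $\mathbf{H}^{-1}$ blend above can be assembled (not the actual BLUP90IOD implementation), the sketch below adds the genomic correction to the block of $\mathbf{A}^{-1}$ corresponding to genotyped animals; the tiny matrices, the index list, and the neutral scaling coefficients $a = 0$ and $b = 1$ are invented for illustration.

```python
import numpy as np

def h_inverse(A_inv, A22, G, genotyped_idx, alpha=0.95, beta=0.05, a=0.0, b=1.0):
    """H^-1 = A^-1 + [[0, 0], [0, (alpha*(a + b*G) + beta*A22)^-1 - A22^-1]].

    alpha and beta are the 0.95/0.05 weights from the text; a and b are the
    pedigree/genomic matching coefficients, set to neutral values (0 and 1)
    here because their fitted values are not given."""
    blended = alpha * (a + b * G) + beta * A22
    correction = np.linalg.inv(blended) - np.linalg.inv(A22)
    H_inv = A_inv.copy()
    block = np.ix_(genotyped_idx, genotyped_idx)   # block for genotyped animals
    H_inv[block] += correction
    return H_inv

# toy example: 3 animals, the last two genotyped (all matrices invented)
A = np.array([[1.0, 0.5, 0.5],
              [0.5, 1.0, 0.25],
              [0.5, 0.25, 1.0]])
G = np.array([[1.02, 0.30],
              [0.30, 0.98]])
H_inv = h_inverse(np.linalg.inv(A), A[1:, 1:], G, genotyped_idx=[1, 2])
```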
Traditional and genomic evaluations were computed using the software BLUP90IOD [21, 22]. The convergence criterion was set to 10−14 for all evaluations. Variance components used in all analyses were pre-computed by Cobb-Vantress Inc. using the same data and model as presented here.
Composition of genomic estimated breeding values from ssGBLUP
We used the composition of genomic estimated breeding values (GEBV) and some general rules to better understand some of our results. In traditional BLUP evaluations, the EBV for an animal i can be expressed as [23]:
$$ u_i = w_1\mathrm{PA}_i + w_2\mathrm{YD}_i + w_3\mathrm{PC}_i, $$
where $\mathrm{PA}_i$ is the parent average EBV for animal i, $\mathrm{YD}_i$ is the yield deviation (phenotype adjusted for the model effects' solutions other than additive genetic effects and errors) for animal i, and $\mathrm{PC}_i$ is the progeny contribution for animal i. When both parents are known, the phenotype is available, and each progeny has a known mate, weights $w_1$ to $w_3$ sum to 1. The decomposition of EBV can be derived by analyzing a row of the mixed model equations for a given animal. More specifically, YD is based on own phenotypic information, PA is the average of the parental EBV, and PC is the sum of the differences between the EBV of any progeny of animal i minus one half of the EBV of each progeny's dam (or the mate of animal i).
The EBV for an animal i when genomic information is available (GEBV) is [24]:
$$ u_i = w_1\mathrm{PA}_i + w_2\mathrm{YD}_i + w_3\mathrm{PC}_i + w_4\mathrm{GI}_i, $$
where $\mathrm{GI}_i$ contains information from genotypes of animal i and all weights sum to 1. According to VanRaden and Wright [24], the weight for GI is:
$$ w_4 = \frac{g^{ii} - a_{22}^{ii}}{\mathrm{den}}, $$
where $g^{ii}$ and $a_{22}^{ii}$ are the diagonal elements of $\mathbf{G}^{-1}$ and $\mathbf{A}_{22}^{-1}$, respectively; $\mathrm{den} = 2 + n_r/\alpha + n_p/2 + g^{ii} - a_{22}^{ii}$, where $n_r$ is the number of records, α is the variance ratio (residual variance over additive genetic variance), and $n_p$ is progeny size. Aguilar et al. [17] showed that in ssGBLUP, GI consists of two components:
$$ \mathrm{GI} = w_{4_1}\mathrm{DGV} - w_{4_2}\mathrm{PP}, $$
where DGV is the portion of prediction due to the genomic information, which comes from $\mathbf{G}$, and PP is the pedigree prediction that comes from $\mathbf{A}_{22}$. The weights $w_1$, $w_2$, $w_3$, $w_{4_1}$, and $w_{4_2}$ sum to 1 and the values for DGV and PP are equal to:
$$ \mathrm{DGV}_i = \frac{-\sum_{j \ne i} g^{ij} u^j}{g^{ii}}, $$
$$ \mathrm{PP}_i = \frac{-\sum_{j \ne i} a_{22}^{ij} u^j}{a_{22}^{ii}}, $$
where $g^{ij}$ and $a_{22}^{ij}$ are the off-diagonal elements of $\mathbf{G}^{-1}$ and $\mathbf{A}_{22}^{-1}$, respectively, and $u^j$ is the EBV of animal j.
In general, PP accounts for the part of PA that is explained by DGV; when all animals are genotyped, $\mathbf{A} = \mathbf{A}_{22}$, PA and PP cancel out and DGV explains a larger fraction of the GEBV; when a genotyped animal is unrelated to the genotyped population, PP = 0 and DGV explains a smaller portion of the GEBV; when both parents are genotyped, PP will include a large part of PA. The accuracy of DGV differs between animals, depending on how many ancestors of that animal are genotyped, as reported by Mulder et al. [25]. When a genotyped animal has many progeny, $w_3 \approx 1$ and its GEBV is mainly driven by PC; however, genotyping those animals is useful since they are usually included in the training population. When an animal is not genotyped, $w_4 = 0$ and predictions can be improved due to improved PA and PC if its relatives are genotyped. When an animal is not genotyped and has no phenotypes and no progeny, the GEBV is driven by PA and, in most cases, only a slight improvement in prediction is achieved based on genotyped relatives [17, 18, 26].
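The sketch below, with entirely made-up numbers, simply evaluates the $\mathrm{DGV}_i$ and $\mathrm{PP}_i$ expressions above from rows of $\mathbf{G}^{-1}$ and $\mathbf{A}_{22}^{-1}$ and a vector of breeding values; it is meant only to show how the two terms are computed, not to reproduce the weights used in the paper.

```python
import numpy as np

def dgv_and_pp(G_inv, A22_inv, u, i):
    """DGV_i = -sum_{j != i} g^{ij} u_j / g^{ii}, and the analogous PP_i
    from the inverse pedigree relationship matrix of the genotyped animals."""
    others = np.arange(len(u)) != i
    dgv = -G_inv[i, others] @ u[others] / G_inv[i, i]
    pp = -A22_inv[i, others] @ u[others] / A22_inv[i, i]
    return dgv, pp

# invented 3-animal example
G = np.array([[1.00, 0.48, 0.10],
              [0.48, 1.02, 0.05],
              [0.10, 0.05, 0.99]])
A22 = np.array([[1.000, 0.500, 0.125],
                [0.500, 1.000, 0.125],
                [0.125, 0.125, 1.000]])
u = np.array([0.8, 0.3, -0.2])   # hypothetical breeding values
print(dgv_and_pp(np.linalg.inv(G), np.linalg.inv(A22), u, i=0))
```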
Validation of EBV was based on that proposed by Legarra et al. [10]; predictive ability of traditional and genomic evaluations was defined as the correlation between (G)EBV and trait phenotypes corrected for fixed effects (Y) for birds in the validation population:
$$ r = \mathrm{cor}\left((\mathrm{G})\mathrm{EBV}, \mathrm{Y}\right), $$
where (G)EBV can be either EBV or GEBV.
Accuracy, as determined by the correlation between true and predicted breeding values, was calculated as $r/h$, where $h$ is the square root of heritability [10]. Accuracy was obtained for young birds in the validation population, with and without splitting them into groups according to sex (Fig. 1). Accuracy of GEBV was used to assess the benefit of including genotypes for different sets of birds on predictive ability of birds with the same sex, opposite sexes, and combined; accuracy of EBV was the benchmark used to compare the gain in predictive ability due to genomic information.
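A minimal sketch of this validation step, assuming simulated numbers in place of the real bird data: predictive ability is the correlation between (G)EBV and phenotypes adjusted for fixed effects, and accuracy divides it by the square root of heritability.

```python
import numpy as np

def realized_accuracy(gebv, y_adj, h2):
    """Predictive ability r = cor((G)EBV, Y) and accuracy r / sqrt(h2),
    computed on validation animals whose phenotypes were masked in training."""
    r = np.corrcoef(gebv, y_adj)[0, 1]
    return r, r / np.sqrt(h2)

# simulated stand-in for the validation birds
rng = np.random.default_rng(1)
true_bv = rng.normal(size=500)
gebv = 0.6 * true_bv + rng.normal(scale=0.8, size=500)   # noisy predictions
y_adj = true_bv + rng.normal(scale=1.2, size=500)        # phenotypes adjusted for fixed effects
print(realized_accuracy(gebv, y_adj, h2=0.35))
```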
Cross-validation scheme representing birds in training and validation populations
Correlation between EBV and GEBV
Correlations between EBV and GEBV using genotypes for both sexes were calculated for sires with large (≥500) and small (<50) progeny groups, and for dams with large (≥50) and small (<5) progeny groups to check the importance of progeny size versus genomic information on EBV of proven parents.
A summary of the population structure is in Table 3. About half of all parents were genotyped, but in the validation population, 96 % of the parents were genotyped. According to Pszczola et al. [3], animals in the validation population should be closely related to at least some of the animals in the training population in order to obtain more accurate direct genomic values (DGV). In ssGBLUP, the accuracy of GEBV is less affected by genotype structure, because GEBV includes PA (from $\mathbf{A}$) and additional pedigree information (from $\mathbf{A}_{22}$), and the latter accounts for a different level of relationship between a given genotyped animal and the genotyped population. In general, additional information due to genomic data is approximately proportional to the square of the difference between pedigree and genomic relationships [27]; the standard deviation of such differences increases for animals that are more related [28–30], but this increase is not equal for all classes of animals since full-sib groups presented greater standard deviations than parent-offspring groups [30], for instance.
Table 3 Family structure for all birds and for genotyped birds in the dataset
For quality control, Fig. 2 contains the distribution of genomic relationships for full-sibs. The quality of genomic relationships can also be evaluated for other groups of siblings or by checking all genomic relationships against all pedigree relationships. Broiler chickens have large full-sib families and a greater gain in accuracy is expected from genomic evaluations over traditional evaluations in this case, provided genomic relationships are based on high-quality SNP genotypes. Although the expected relationship among full-sibs in the absence of inbreeding is equal to 0.50, the average (SD) genomic relationship for this dataset was 0.47 (0.05). The standard deviation of 0.05 and the skewed shape agree with theory [12, 23]. However, if the distribution of genomic relationships is not centered on the expected relationship and is long-tailed, genotyping and pedigree errors are present. For the most recent generations, for which stricter quality controls were imposed, such as checking for heritability of gene content as proposed by Forneris et al. [31], the distribution of genomic relationships among full-sibs was nearly normal and centered on 0.5 (data not provided).
Distribution of genomic relationships for full-sibs among the 15 748 genotyped birds. The expected relationship based on pedigree information is 0.5 (black vertical line)
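For readers who want to reproduce this kind of check on their own data, a minimal sketch of a VanRaden-type $\mathbf{G}$ built from observed allele frequencies is given below; the genotype matrix and the declared full-sib pair are invented, and a real check would histogram the off-diagonal $G$ values over all recorded full-sib pairs as in Fig. 2.

```python
import numpy as np

def vanraden_G(M):
    """VanRaden (2008) genomic relationship matrix from a (birds x SNPs)
    0/1/2 genotype matrix, using observed allele frequencies."""
    p = M.mean(axis=0) / 2.0
    Z = M - 2.0 * p                        # center by twice the allele frequency
    return Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))

# invented genotypes for 4 birds x 6 SNPs; birds 0 and 1 are treated as full sibs
M = np.array([[0, 1, 2, 1, 0, 1],
              [1, 1, 2, 0, 0, 1],
              [2, 0, 0, 1, 1, 2],
              [0, 2, 1, 2, 1, 0]], dtype=float)
G = vanraden_G(M)
print(G[0, 1])   # genomic relationship for the declared full-sib pair
```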
Accuracies and genomic contributions
Correlations between EBV and GEBV were equal to 0.97 and 0.93 for sires with more than 500 and fewer than 50 progeny, respectively, whereas correlations for dams with more than 50 and fewer than 5 progeny were equal to 0.89 and 0.88, respectively. Correlations for dams were lower because they have fewer progeny than males and, as a result, the weight on genotypic information is greater than the weight on PC for dams. For sires, even if there was some re-ranking between EBV and GEBV by including genomic information, the accuracy of the GEBV of sires with many progeny came mostly from PC, because the contribution from other sources was small or null. Although genomic information had a smaller impact on the GEBV of parents with large numbers of progeny, genotyping those birds was helpful to improve predictions of related birds.
Accuracies for traditional and genomic evaluations are in Fig. 3. Genomic evaluations were derived using three different sets of genotyped birds (only males, only females, and both sexes) in the training population. In all analyses, phenotypes were included for all genotyped animals, except for the youngest chickens that had hatched later in the last generation. In addition, validation sets were also created for young males, young females, and young chickens from both sexes. When the training and validation populations included both sexes, the accuracy of genomic evaluations was always greater (on average, 17 percentage points) than that of traditional evaluations. However, when genotypes of only one sex were used for the training population while both sexes were considered in the validation population, the impact on the accuracy of GEBV differed by trait. For traits T1 and T3, using only female genotypes for the training population resulted in only a slight change in accuracy, whereas using only male genotypes had a much greater impact on accuracy. The opposite was true for traits T2 and T4, for which using only female genotypes had a greater impact than using only male genotypes. These differences can be partially attributed to the number of phenotypes available for genotyped chickens and can be better explained when evaluations of males and females are considered separately.
Accuracy of evaluation for all birds, males, and females in the validation population when different sets of genotyped birds were used to construct the G matrix. BLUP did not include genotypes and T3 females had no phenotypes
Traits for which male genotypes had a greater impact (T1 and T3) had either a larger number of phenotypes compared to the other traits, or females had no phenotypes such as T3 (Table 2). For T1, the number of phenotypes on males was 57 % of the number of phenotypes on females, but for T2 and T4 the number of phenotypes on males was roughly 27 % of the number of phenotypes on females. In contrast to using a training population with only males, using genotypes for both sexes improved accuracies for all traits except for T3, for which females had no phenotypes. When males were evaluated, including only female genotypes increased the accuracy only slightly. Also, when females were evaluated, including male genotypes hardly increased accuracies. The same trend was observed by Cooper et al. [32] in a study on the US Holstein population.
Table 4 shows accuracies for pedigree and genomic PA for genotyped and non-genotyped birds. For all traits, accuracies of pedigree PA for non-genotyped birds were greater than for genotyped birds. For non-genotyped birds, the accuracy of genomic PA was very similar to that of pedigree PA for all traits, except for T3, for which the accuracy of genomic PA was greater. For T3, which was measured only on males and for which there were fewer phenotypes than for the other traits, including genomic information improved the accuracy of the GEBV of parents. When the progeny is not genotyped but parents are, realized Mendelian sampling terms from parents to offspring cannot be accurately estimated and gains in accuracy are lower [33]. The gains in accuracy are mainly due to improved accuracy of PA if only the parents are genotyped or also of PC if both parents and progeny are genotyped. Genotyping parents of non-genotyped birds may result in greater benefit for sex-limited traits or when trait recording is limited to a small number of birds. Comparisons between accuracies of genomic PA (Table 4) and genomic EBV (Fig. 3) show that genomic information on genotyped young birds contributes significantly to accuracy of evaluation. Pszczola et al. [33] showed that accuracies of GEBV increased when progenies were genotyped and parents were not, compared to the opposite situation; but still the highest accuracy was achieved when a large portion of the population was genotyped. According to Mulder et al. [25], the number of genotyped ancestor generations affects the accuracy of genomic predictions.
Table 4 Accuracy for pedigree and genomic parent average for genotyped and non-genotyped birds
For males in the validation population, accuracy improved significantly when male genotypes were added to the training population (Fig. 3). Similarly, for females, accuracy improved significantly when female genotypes were added. Consequently, genotypes for a particular sex that are linked to phenotypic information benefit the genotyped birds of that sex. Cooper et al. [32] showed that using only female genotypes in the training population, as opposed to using genotypes only on males, was advantageous for predicting the GEBV for cows, and the same was true for bulls; however, adding female genotypes to an already existing training population of bulls resulted in a very small benefit.
In our study, when genotypes of both sexes were included, as opposed to using genotypes for one sex, there was an additional increase in accuracy for each sex (Fig. 3). This may be caused by the contribution of males versus females to the population being different in broiler chickens than in dairy cattle, in which males have a much greater impact on the population due to larger progeny groups. Part of this increase is likely due to the use of the ssGBLUP method, which can model phenotypes and genotypes from both sexes when genotypes are not available for the entire population. This method weights the records of males and females and avoids double-counting of phenotypic and pedigree information. It also establishes connections among more animals with independent information (since it avoids double-counting) through genomic relationships, and combines PA and pedigree prediction.
The increase in accuracy from including genotypes of the opposite sex was greater for validation males than for validation females (Fig. 3). This could be due to several factors: (1) the number of genotypes for females was much larger than that for males and consequently more links were established through H (as G is identical by state) and estimates of DGV and PP were improved; (2) genetic correlations between phenotypes on males and females differ from 1 (our study assumes a correlation of 1); or (3) genomic imprinting is present and thus gene expression depends on the parental origin of the allele [34].
The relative increase in accuracy for females from adding male genotypes was larger for trait T1 than for T4 because T1 had a larger number of male phenotypes (4648) than trait T4 (2017 male phenotypes) (Table 2 and Fig. 3). Since accuracy was computed as the correlation between EBV or GEBV and phenotypes corrected for fixed effects, no accuracy could be computed for T3 for females because this trait was only recorded for males. Therefore, there was no improvement in accuracy of GEBV from adding female genotypes for T3. In fact, the accuracy deteriorated slightly from 0.50 to 0.46, although adding genotypes is not expected to decrease accuracy if the model is correct, the genomic information is accurate, and all selection is accounted for. Thus, the observed decrease in accuracy could be due to modeling issues, e.g., insufficient modeling of factors associated with T3, structure of the validation population, unaccounted selection, or sexual dimorphism [35].
Our study ignored sexual dimorphism [16, 35, 36] because genetic correlations between sexes were assumed to be equal to 1. If this assumption does not hold, realized accuracies could be higher with proper modeling. Follow-up research is required to evaluate the change in ranking for animals evaluated for different traits when sexual dimorphism is accounted for and genomic information is available.
Realized accuracy and accuracy from the inverse of the coefficient matrix of the mixed model equations
In spite of a large number of genotyped birds, the overall accuracies obtained for the dataset used in this study were below expectations. The maximum theoretical accuracy with PA is 0.71; however, the average accuracy was only 0.35 for BLUP and 0.54 for ssGBLUP with birds from both sexes in the training population. VanRaden et al. [1] obtained, respectively, 0.44 and 0.60 for dairy bulls. Realized accuracies in selected populations are smaller than accuracies by inversion of the coefficient matrix of the mixed model equations, if selection is not accounted for [1, 11], with lower realized accuracies under stronger direct selection [37]. In this study, traits T2 and T4 had similar numbers of phenotypes (within a gender) and genotypes, and similar heritabilities. Yet, average accuracies of EBV were up to 48 % higher for T4 than for T2, with differences being larger for females. This suggests that differential selection pressure is placed on these two traits. Indeed, T2 was strongly selected for, while genetic trends for T4 showed no selection pressure in any direction (Fig. 4). While accuracies of EBV and GEBV for a weakly selected trait such as T4 were higher for females than for males, accuracies for females were slightly lower than for males for T2 and much lower for T1. Parents of the validation population were selected in a generation in which the selection pressure for females was higher than for males for T1 and T2. The very low accuracy for females for T1, especially with BLUP, was due to strong phenotypic preselection of females based on T1; in case of extreme selection, the realized accuracy tends towards zero. When selection takes place, cross-validation accuracy differs from accuracy obtained by inversion of the coefficient matrix of the mixed model equations, and adjusting the latter is notoriously difficult since it would require selection differentials; however, selection is a multiple trait and possibly multistage process but the exact process is unknown, and selection intensity varies depending on the selection pathway [11].
Genetic trends based on traditional EBV for all traits for genotyped males and females. Trends are shown over generations and were obtained from a multi-trait model of all four traits
Accuracies in genomic selection depend on the number, distribution, and contributions of genotypes and phenotypes to the genomic evaluation. Contrary to what has been reported for dairy cattle, in this chicken population, the gain in accuracy of GEBV for young genotyped animals was higher when the training population included genotypes for both males and females. We also observed that when the training population has only animals from one sex, the greatest benefit is for young genotyped animals from the same sex. However, when both sexes are genotyped, the amount of genomic information increases greatly and accuracy of GEBV also increases. Thus, genotyping both sexes may be a suitable option in species and production systems for which not only males but also females have a high reproductive impact. For highly selected traits, realized accuracy of GEBV is smaller because it accounts for selection.
VanRaden PM, VanTassel CP, Wiggans GR, Sonstegard TS, Schnabel RD, Taylor JF, et al. Invited review: reliability of genomic predictions for North American Holstein bulls. J Dairy Sci. 2009;92:16–24.
Daetwyler HD, Kemper KE, van der Werf JH, Hayes BJ. Components of the accuracy of genomic prediction in a multi-breed sheep population. J Anim Sci. 2012;90:3375–84.
Pszczola M, Strabel T, Mulder HA, Calus MPL. Reliability of direct genomic values for animals with different relationships within and to the reference population. J Dairy Sci. 2012;95:389–400.
Rendel JM, Robertson A. Estimation of genetic gain in milk yield by selection in a closed herd of dairy cattle. J Genet. 1950;50:1–8.
Schaeffer LR. Strategy for applying genome-wide selection in dairy cattle. J Anim Breed Genet. 2006;123:218–23.
Wiggans GR, Cooper TA, VanRaden PM, Cole JB. Technical note: adjustment of traditional cow evaluations to improve accuracy of genomic predictions. J Dairy Sci. 2011;94:6188–93.
Tsuruta S, Misztal I, Lawlor TJ. Short communication: genomic evaluations of final score for US Holsteins benefit from the inclusion of genotypes on cows. J Dairy Sci. 2013;96:3332–5.
Harris BL, Winkelman AM, Johnson DL. Impact of including a large number of female genotypes on genomic selection. Interbull Bull. 2013;47:23–7.
Di Croce FA, Osterstock JB, Weigel DJ, Lormore MJ. Gains in reliability with genomic information in US commercial Holstein heifers [abstract]. J Dairy Sci. 2014;97:154.
Legarra A, Granie CR, Manfredi E, Elsen JM. Performance of genomic selection in mice. Genetics. 2008;180:611–8.
Bijma P. Accuracies of estimated breeding values from ordinary genetic evaluations do not reflect the correlation between true and estimated breeding values in selected populations. J Anim Breed Genet. 2012;129:345–58.
Stranden I, Christensen OF. Allele coding in genomic evaluation. Genet Sel Evol. 2011;43:25.
VanRaden PM. Efficient methods to compute genomic predictions. J Dairy Sci. 2008;91:4414–23.
Groenen MA, Megens HJ, Zare Y, Warren WC, Hillier LW, Crooijmans RP, et al. The development and characterization of a 60 K SNP chip for chicken. BMC Genomics. 2011;12:274.
Closter AM, van As P, Elferink MG, Crooijmanns RPMA, Groenen MAM, Vereijken ALJ, et al. Genetic correlation between heart ratio and body weight as a function of ascites frequency in broilers split up into sex and health status. Poult Sci. 2012;91:556–64.
Christensen OF, Lund MS. Genomic prediction when some animals are not genotyped. Genet Sel Evol. 2010;42:2.
Christensen OF. Compatibility of pedigree-based and marker-based relationship matrices for single-step genetic evaluation. Genet Sel Evol. 2012;44:37.
Aguilar I, Misztal I, Legarra A, Tsuruta S. Efficient computation of the genomic relationship matrix and other matrices used in single-step evaluation. J Anim Breed Genet. 2011;128:422–8.
Tsuruta S, Misztal I, Strandén I. Use of the preconditioned conjugate gradient algorithm as a generic solver for mixed-model equations in animal breeding applications. J Anim Sci. 2001;79:1166–72.
VanRaden PM, Wiggans GR. Deviation, calculation, and use of national animal model information. J Dairy Sci. 1991;74:2737–46.
VanRaden PM, Wright JR. Measuring genomic pre-selection in theory and in practice. Interbull Bull. 2013;47:147–50.
Mulder HA, Calus MPL, Druet T, Schrooten C. Imputation of genotypes with low-density chips and its effect on reliability of direct genomic values in Dutch Holstein cattle. J Dairy Sci. 2012;95:876–89.
Legarra A, Aguilar I, Misztal I. A relationship matrix including full pedigree and genomic information. J Dairy Sci. 2009;92:4656–63.
Garcia-Cortes LA, Legarra A, Chevalet C, Toro MA. Variance and covariance of actual relationships between relatives at one locus. PLoS One. 2013;8:e57003.
Hill WG, Weir BS. Variation in actual relationship as a consequence of Mendelian sampling and linkage. Genet Res (Camb). 2011;93:47–64.
Wang H, Misztal I, Legarra A. Differences between genomic-based and pedigree-based relationships in a chicken population, as a function of quality control and pedigree links among individuals. J Anim Breed Genet. 2014;131:445–51.
Forneris NS, Legarra A, Vitezica ZG, Tsuruta S, Aguilar I, Misztal I, et al. Quality control of genotypes using heritability estimates of gene content at the marker. Genetics. 2015;199:675–81.
Cooper TA, Wiggans GR, VanRaden PM. Short Communication: analysis of genomic predictor population for Holstein dairy cattle in the United States–effects of sex and age. J Dairy Sci. 2015;98:2785–8.
Pszczola M, Strabel T, van Arendonk JAM, Calus M. The impact of genotyping different groups of animals on accuracy when moving from traditional to genomic selection. J Dairy Sci. 2012;95:5412–21.
de Koning DJ, Rattink AP, Harlizius B, van Arendonk JA, Brascamp EW, Groenen MA. Genome-wide scan for body composition in pigs reveals important role of imprinting. Proc Natl Acad Sci USA. 2000;97:7947–50.
Mignon-Grasteau S, Beaumont C, Poivey JP, Rochambeau H. Estimation of the genetic parameters of sexual dimorphism of body weight in 'label' chickens and Muscovy ducks. Genet Sel Evol. 1998;30:481–91.
Maniatis G, Demiris N, Kranis A, Banos G, Kominakis A. Genetic analysis of sexual dimorphism of body weight in broilers. J Appl Genet. 2013;54:61–70.
Edel C, Neuner S, Emmerling R, Gotz KU. A note on 'forward prediction' to access precision and bias of genomic predictions. Interbull Bull. 2012;46:16–9.
This study was partially supported by USDA Agriculture and Food Research Initiative (Grant no. 2009-65205-05665 from the USDA National Institute of Food and Agriculture Animal Genome Program). We would like to thank Cobb-Vantress Inc. (Siloam Springs, AR) for providing access to the dataset, and Robyn Sapp for helping with data details. Helpful comments from the anonymous reviewers are also gratefully acknowledged.
Department of Animal and Dairy Science, University of Georgia, Athens, GA, 30602, USA
Daniela A. L. Lourenco, Breno O. Fragomeni, Shogo Tsuruta & Ignacy Misztal
Instituto Nacional de Investigacion Agropecuaria, Las Brujas, 90200, Uruguay
Ignacio Aguilar
Cobb-Vantress Inc., Siloam Springs, AR, 72761, USA
Birgit Zumbach & Rachel J. Hawken
Institut National de la Recherche Agronomique, UMR1388 GenPhySE, 31326, Castanet-Tolosan, France
Andres Legarra
Correspondence to Daniela A. L. Lourenco.
DALL was responsible for the analyses and the first draft of the manuscript. IM and AL designed the evaluation process. BOF, ST, and IA helped with the initial tests and data checks. BZ and RJH prepared the datasets. All authors read and approved the final manuscript.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Lourenco, D.A.L., Fragomeni, B.O., Tsuruta, S. et al. Accuracy of estimated breeding values with genomic information on males, females, or both: an example on broiler chicken. Genet Sel Evol 47, 56 (2015). https://doi.org/10.1186/s12711-015-0137-1
Broiler Chicken
Genomic Prediction
Genomic Evaluation
Estimate Breeding Value | CommonCrawl |
Why do I have to take "Calc-Based Physics" as a Math major?
Thread starter Integreat
Integreat
MidgetDwarf said:
I question whether you have the ability to pursue a math degree. Mathematics is not solely using an algorithm to solve trivial problems. Most likely, up till now, all your math classes consisted of plugging and chugging. You believe you were good at math, because you can get an A on a superficial test.
That's not true. I don't just get As on tests and believe I'm good at math; math is about understanding the concept and the nature of it, and becoming one with mathematics. I don't just take formulas and memorize them, I investigate "WHY." i.e. Why do we use integration by substitution when integrating a differentiated composite function--I like the analysis of mathematics. If you just merely memorize formulas, that's NOT real learning of mathematics. You're right, currently my math curriculum consists of "plug and chug" but I believe once I get into analysis--I will understand why we plug and chug--it's just a matter of time. As for physics... Mathematicians don't study math because it's "useful"; I doubt Physics will help me understand math more--as physicists view mathematics as nothing but a mere tool.
Anyhow, my main concern, once again-- is not how physics will help me understand math, but rather how much harder is calc- based physics compared to alg-base physics--and will I do fine if I start from scratch without any background knowledge about physics.
symbolipoint
Education Advisor
Integreat, none of this is correct.
...currently my math curriculum consists of "plug and chug" but I believe once I get into analysis--I will understand why we plug and chug--it's just a matter of time. As for physics... Mathematicians don't study math because it's "useful"; I doubt Physics will help me understand math more--as physicists view mathematics as nothing but a mere tool.
All of that is wrong.
You yourself really need the sequence of Physics courses for the STEM people.
micromass said:
Somebody who can't or doesn't want to do a simple calc based physics course doesn't deserve being called a mathematician
I'm afraid I have to disagree; there are a handful of elite mathematicians who don't have a physics knowledge/background. It is absurd to even say someone doesn't deserve to be called a mathematician just because of their lack of interest in Physics. With all due respect, I believe those who only care about the applications of mathematics don't deserve to be called mathematicians--considering they only view math as a mere tool. No offense, here is just my $0.02.
Your response to micromass message:
Integreat said:
Very misguided. You really, really, very much NEED the sequence of Physics courses for the S.T.E.M. major-field students.
ShayanJ
This is really absurd. You're a math student! Math isn't easy and if you're going to be scared of only a calc-based introductory mechanics course, you're going to be terrified at more advanced courses that are waiting for you on the way!
And come on...you really think taking just one such course is going deeper into physics than necessary for you?!...this is even more absurd. It's not like refusing to go inside a pet shop to buy a pet; it's like refusing to look at the pet shop even once from a far distance because you just suppose you'll never want a pet. Just look at the shop for once (take this course), then if you didn't like any of the pets you've seen, don't go inside (don't take more advanced physics courses!).
Shyan said:
Alright, I guess I'll give it a try. The reason is that the course was offered online, and I have absolutely no physics background other than General Physics at high school (which consisted of applying formulas w/o understanding why). That's why I wondered if I will have the same problem in that class.
symbolipoint said:
I suppose...
Alright, I guess I'll give it a try. The reason is that the course was offered online, and I have absolutely no physics background. That's why I wondered if i will have a problem in that class.
You said you've already had an alg-based physics course. What other background do you think may be needed?
This is basic physics, this is the background itself!
But maybe you have problem with the course being online and you like courses where you're present in the class. That's a different story!
Well, the fact is, general physics made absolutely no sense to me in high school. I barely survived with a B- by doing a lot of handouts without UNDERSTANDING why, i.e. why force is F=ma. My teacher just threw a list of formulas at me and told me to apply them. That's why I don't know if calc-based physics would be the same. It was a nightmare in gen. phys.
Mondayman
General Physics at high school (which consisted of applying formulas w/o understanding why)
Introducing calculus into the physics will alleviate this problem.
Some of the most important mathematical developments came from the demands of physics, so it seems logical to have an introductory mechanics course. This is mandatory at my school for math majors, taken alongside first year calculus.
micromass
You are very naive. That's ok. Back when I started my undergrad I hated applications of mathematics. I actively avoided them. But boy, I regret that attitude so much now.
Take functional analysis. It's a very cool field of research. But can one really understand it without knowing QM? I don't think so. How is somebody supposed to understand topology or differential geometry without the physical applications of stuff like measuring the earth or GR? How is somebody supposed to understand even calculus without seeing it in action in physics? Sure, you think you understand it, but do you really know the relevance of Stokes' theorem? I didn't until I studied more physics.
Talk about great mathematicians: you'll find that many great mathematicians knew their physics very well. Von Neumann was very aware of QM. Hilbert did research on GR. Euler, Gauss, Laplace all had applications in mind. Do you really think you can be a mathematician without knowing some physics? Perhaps you can, but I guarantee that you will regret this attitude later in your life.
Crush1986
F=ma will still be the starting point for almost everything in freshman physics. You're going to have to take it as a physical fact of life at first. If you ever take more advanced mechanics you'll see more of why F=ma is a fact of life. The good news is that most everything else you'll be able to prove using it; not much else should be "thrown" at you.
My advice: give it a chance and give it your all. Physics strengthened my mathematical skills many times over. A lot of math majors I know felt the same and decided to go for the physics minor. In fact, every math major I met in undergrad physics went for the physics minor, now that I think of it.
Akorys
Micromass, you say that physics helps in understanding the motivation for certain math fields, of which I've no doubt. However, studying mathematics already gives you a large number of courses to choose from, in addition to the recommended stat courses. If one wants to be a math major, surely they do not need to learn quantum mechanics, general relativity, etc., or they may as well do a minor/double degree? While I'm sure it is helpful, I wonder as to the extent of physics courses you're recommending for math majors?
Do you really think you can be a mathematician without knowing some physics? Perhaps you can, but I guarantee that you will regret this attitude later in your life.
I think there's a misconception here-- that I'm "afraid"of physics because of the "math" part. if anyone claim to be a mathematician but afraid the math, they might as well as NOT call themselves a mathematician. Perhaps you're right, Ill probably enjoy physics one day. although someone had already answered my true concern-- but once again, really boils down to-- is it going to be just memorizing formulas like i did in high school, because i did physics like that. but i dont understand anything at all, please allow me to use the good ol' example: F=ma. why is F=ma, how did they derived it. why is acceleration of gravity is -9.8m/s^2, where did the s^2 came from and why is it squared.
As you see, I'm not the type of person who just takes whatever the teacher says without asking "why" -- I like to understand the nature of everything, e.g. why we use integration by substitution when dealing with a "differentiated" composite function.
I guess the reason why I have this "FEAR" of physics and of applying math to it is my high school experience, where I just memorized a colossal number of formulas without truly understanding their nature. As I've said before, I barely survived Gen. Physics with a low B in high school. I'm just afraid that it'll be the same with calc-based physics.
But reading your response has made me feel better now.
When you do proper physics (which starts once calculus is introduced into it) you should learn things like this. I am from the UK, and until I got to university level (known as college in the US) we did plug-and-chug physics where we just had to use the formulas, because you needed more maths than what we were taught at high school.
But once you hit university level, all the things you want to find out in bold will come through various physics courses (what you will cover derivation-wise will depend on your course). Yes, some physics equations are still given to you at the earlier stages, but that's normally because the derivation requires more advanced maths/physics than you have currently covered, so you wouldn't understand it with your current level of knowledge -- but there is nothing stopping you going and learning it, of course.
Don't worry, it sounds like once you start doing proper physics you will actually enjoy it :D
axmls
I think there's a misconception here -- that I'm "afraid" of physics because of the "math" part. If anyone claims to be a mathematician but is afraid of the math, they might as well NOT call themselves a mathematician. Perhaps you're right, I'll probably enjoy physics one day. Someone has already answered my true concern -- but once again, it really boils down to: is it going to be just memorizing formulas like I did in high school? Because that's how I did physics, and I didn't understand anything at all. Please allow me to use the good ol' example: F=ma. Why is F=ma, and how was it derived? Why is the acceleration of gravity -9.8 m/s^2, where did the s^2 come from, and why is it squared?
##F = m a## because Newton postulated that there is something called a force that causes objects to accelerate, and that acceleration also depends on the mass of the object. That's not really proven, though there are other formulations of classical mechanics where ##F = ma## can be derived (but it all boils down to making some assumption and seeing if experiments agree with it). ##g = 9.8 m/s^2## because experimentally, the gravitational force is ##F = G \frac{M m}{r^2}##, and if ##F = m a##, then since these are equal, ##a = G \frac{M}{r^2}##. Plugging in known values of ##G##, the mass of the Earth, and the radius of the Earth gives ##a = 9.8 m/s^2##. The seconds term is squared because ##m/s^2 = m/s/s##, i.e. "meters per second, per second"--acceleration tells you how fast the velocity (meters per second) changes per second. Things like this become very clear when you've learned calculus (especially the seconds squared part) and applied it to physics. I believe it is impossible to intuitively understand differentiation until you've seen the relationships between acceleration, velocity, and displacement in classical mechanics. Almost everything you do in first year physics is a result of solving the differential equation $$F = m \frac{d^2 x}{dt^2}$$ with various forces (spring forces, no forces, gravitational forces, etc.)
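A quick numerical check of the ##a = G\frac{M}{r^2}## step above (an added sketch, not part of the original thread; the constants are standard reference values for G and the Earth's mass and radius):

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24   # mass of the Earth, kg
r_earth = 6.371e6    # mean radius of the Earth, m

g = G * M_earth / r_earth**2
print(f"g = {g:.2f} m/s^2")   # about 9.82 m/s^2, i.e. the familiar 9.8
```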
You will find physicists sometimes treat math very sloppily in the first year (and even onward). Don't be shocked when your professors are manipulating differentials. It's all grounded in proper math somehow, even if it is an abuse of notation.
I really feel that if it wasn't clear where F=ma or the s^2 came from, the course either wasn't taught well or you didn't pay attention. Even in an introductory algebra-based physics class, it's pretty intuitive.
Anyway, introductory classes are guilty of dropping equations out of nowhere with not a lot of motivation. Just "use this in this situation". When you take your first calculus-based physics class, you start deriving these equations yourself, and they begin to make a whole lot more sense.
amys299
Taking freshman physics isn't going to be just memorizing equations and plugging and chugging if your university is teaching actual physics and if you're invested in the "why" part. All of physics is "WHY." The reason algebra-based physics had you memorize formulas is that it assumes you don't know calculus, and therefore you cannot fully understand the equations you're using. If anything, algebra-based physics is the most useless thing in any curriculum, so stop basing your impression of physics on algebra-based physics alone.
Also, a graduate mentor once told me no one fully understands F=ma until graduate school. It is more mathematically involved than it looks, but all you will use in freshman physics is ##F = m\,d^2x/dt^2##. If this formula has not given you an understanding of how math and physics are related, or of what this math actually says about force and in turn what force says about math, you may want to look more closely at what you learned in math, especially calculus. When I took calculus at my university, most of it was theory (the "why"), but there was also a heavy emphasis on application.
...but once again, it really boils down to: is it going to be just memorizing formulas like I did in high school, because I did physics like that.
No. Physics in college or your university will not be like that.
MidgetDwarf
You would like Fundamental University Physics, by Alonso and Finn. It is different from most introductory physics books. Do not purchase their book titled Physics.
Everything is derived, experiments that led to discoveries are given good coverage, and many topics not introduced in the usual introductory physics sequence are discussed.
Very interesting, I'll take a look, thanks.
jtbell
Hi, why do I have to take calculus based physics, if I'm a math major?
Andy Resnick said:
Probably because it's a required course. If you are not happy with that, complain to your department.
Or go to a different university. I checked the first two that came to mind: Michigan and Ohio State. Neither requires math majors to take the calculus-based intro physics sequence, although Michigan does "strongly recommend" it.
jtbell said:
A mathematics undergraduate program might not require a set of courses in Physics. The institutions would specify a list of "cognates" to choose from, which are of high mathematical content, meaning courses outside of the Mathematics department and which rely very much on Mathematics for their understanding. These are things like Finance, Business Management, Economics, PHYSICS, Chemistry, Computer Science, Engineering.
GHZ2016
I know that a lot of people have said things of this nature, but I think you're underestimating the relationship that most physicists and physics students have with mathematics. A lot of people are interested in physics precisely because of their love for mathematics. Mathematics is not just a tool, it's the way to translate the world around you into something understandable and manipulatable. Topics like classical mechanics or electrodynamics were awe-inspiring for me exactly because of the beautiful mathematics involved. Not only that, but they completely changed my relationship with mathematics and rigor and helped me to better understand where the original ideas came from and how new ideas are developed. If you think that there is such a thing as math for math's sake that is completely divorced from the physical world and the attitudes that have driven physics, I think that's more a mark of naivete and a lack of really understanding either at this point in your education. That's completely alright and even understandable, since you seem quite young, but you will find that you will be much better served intellectually and personally if you don't go about shutting down whole fields or areas of thinking based on a very small bit of experience with them. There's a lot of wonderful things to learn about in the world, and a lot of topics will surprise you so long as you remain open to them. If you approach something with only an attitude of being bitter that you have to do it in the first place, you're setting yourself up not to like it and you might miss something really cool.
Best of luck in your studies.
Fervent Freyja
I know that Micromass is from Europe. I wonder: do Russians prefer or dislike being called European or Asian, or do they consider themselves exclusively Russian?
"Why do I have to take "Calc-Based Physics"as a Math major?" You must log in or register to reply here.
Related Threads for: Why do I have to take "Calc-Based Physics"as a Math major?
Programs Math major to take intro physics: calculus or non-calculus based?
Programs Do Chemistry majors have to take the same calculus as do Engineering and Math majors?
GirlInDoubt
Is calc based physics while taking calc crazy?
jaysquestions
Programs Which math classes should I take as a [physical] chemistry major?
djh101
Programs Do I have what it takes to be a Math major?
Mandanesss
Programs Studying Physics Courses at 50 years old
Started by jlcd
Do chemists think differently than physicists?
Started by needsomeadvicemb
Help! A university blunder has messed up my undergraduate result, and I may end up taking a gap year because of it. I just need some guidance.
Started by RisingChariot
Courses Learning the Italian Language in order to Learn Music Theory
Started by bagasme
Programs What kind of physics should I study?
Started by kelly0303 | CommonCrawl |
Decathlon: the Art of Scoring Points
Article by Professor John Barrow
Published June 2012.
The decathlon consists of ten track and field events spread over two days. It is the most physically demanding event for athletes. On day one, the 100m, long jump, shot putt, high jump and 400m are contested. On day two, the competitors face the 110m hurdles, discus, pole vault, javelin and, finally, the 1500m. In order to combine the results of these very different events - some give times and some give distances - a points system has been developed. Each performance is awarded a predetermined number of points according to a set of performance tables. These are added, event by event, and the winner is the athlete with the highest points total after ten events. The women's heptathlon works in exactly the same way but with three fewer events (100m hurdles, high jump, shot, 200m, long jump, javelin and 800m).
The most striking thing about the decathlon is that the tables giving the number of points awarded for different performances are rather free inventions. Someone first decided them back in 1912 and they have subsequently been updated on different occasions, taking into account performances in all the events by decathletes and specialist competitors. Clearly, working out the fairest points allocation for any running, jumping or throwing performance is crucial and defines the whole nature of the event very sensitively. Britain's Daley Thompson missed breaking the decathlon world record by one point when he won the 1984 Olympic Games, but a revision of the scoring tables the following year increased his score slightly and he became the new world record holder retrospectively! The current world record is 9026 points, set by Roman Šebrle of the Czech Republic in 2001. For comparison, if you broke the world record in each of the ten individual decathlon events you would score about 12,500 points! The best ten performances ever achieved by anyone during decathlon competitions sum to a total score of 10,485.
Originally, the points tables were set up so that (approximately) 1000 points would be scored by the world record for each event at the time. But records move on and now, for example, Usain Bolt's world 100m record of 9.58s would score him 1202 decathlon points whereas the fastest 100m ever run in a decathlon is 'only' 10.22s for a points score of 1042. The current world record that would score the highest of all in a decathlon is Jürgen Schult's discus record of 74.08m, which accumulates 1383 points.
All of this suggests some important questions that bring mathematics into play. What would happen if the points tables were changed? What events repay your training investment with the greatest points payoff? And what sort of athlete is going to do best in the decathlon - a runner, a thrower or a jumper?
The decathlon events fall into two categories: running events, where the aim is to record the least possible time, and throwing or jumping events, where the aim is to record the greatest possible distance. The simplest way of scoring would be to record all the throw and jump distances in metres and multiply them together, then multiply all the running times in seconds together, and divide the product of the throws and jumps by the product of the running times. The Special Total that results will have units of $(\mathrm{length})^6 \div (\mathrm{time})^4 = \mathrm{m}^6/\rm{s}^4$ and spelt out in full it looks like this:
$$\mbox{Special Total, ST} = \frac{\mathrm{LJ} \times \mathrm{HJ} \times \mathrm{PV} \times \mathrm{JT} \times \mathrm{DT} \times \mathrm{SP}}{T(100\mathrm{m}) \times T(400\mathrm{m}) \times T(110\mathrm{mH}) \times T(1500\mathrm{m})}$$
If we take the two best ever decathlon performances by Šebrle (9026 pts) and Dvořák (8994 pts) and work out the Special Totals for the 10 performances they each produced then we get
Šebrle (9026 pts): ST = 2.29
Dvořák (8994 pts): ST = 2.40
Interestingly, we see that the second best performance by Dvořák becomes the best using this new scoring system.
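To get a feel for how the Special Total behaves, here is a small sketch of the calculation. The marks below are placeholders chosen only to land in a realistic range; they are not Šebrle's or Dvořák's actual performances, which are not listed in the article:

```python
def special_total(distances_m, times_s):
    """Product of the six throw/jump distances (m) divided by the product
    of the four track times (s); units are m^6 / s^4."""
    prod_d = 1.0
    for d in distances_m:
        prod_d *= d
    prod_t = 1.0
    for t in times_s:
        prod_t *= t
    return prod_d / prod_t

# Placeholder marks: LJ, HJ, PV, JT, DT, SP in metres; 100m, 400m, 110mH, 1500m in seconds
marks = [7.8, 2.10, 5.00, 68.0, 45.0, 16.0]
times = [10.6, 47.8, 14.0, 270.0]
print(round(special_total(marks, times), 2))   # about 2.1 for these made-up marks
```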
In fact, our new scoring system is not a good one. It contains some biases. Since the distances attained and the times recorded differ in magnitude between the various events, the same effort can produce a bigger or smaller change in the ST score. An improvement in the 100m from 10.6s to 10.5s requires considerable effort, but you don't get much of a reward for it in the ST score. By contrast, reducing a slow 1500m run by 10 seconds has a big impact. The events with room for larger changes have bigger effects on the total. The actual scoring points tables that are used incorporate far more information about comparable athletic performances than the simple ST formula we have invented.
The setting of the points tables that are used in practice is a technical business that has evolved over a long period of time and pays attention to world records, the standards of the top ranked athletes, and historical decathlon performances. However, ultimately it is a human choice and if a different choice was made then different points would be received for the same athletic performances and the medallists in the Olympic Games might be different. The 2001 IAAF scoring tables have the following simple mathematical structure:
The points awarded (decimals are rounded to the nearest whole number to avoid fractional points) in each track event - where you want to give higher points for shorter times (T) - are given by the formula
$$\mbox{Track event points} = \rm{A} \times (\rm{B} - \rm{T})^\rm{C},$$
where T is the time recorded by the athlete in a track event and A, B and C are numbers chosen for each event so as to calibrate the points awarded in an equitable way. The quantity B gives the cut-off time at and above which you will score zero points and T is always less than B in practice -- unless someone falls over and crawls to the finish! For the jumps and throws - where you want to give more points for greater distances (D) - the points formula for each event is
$$\mbox{Field event points} = \rm{A} \times (\rm{D} - \rm{B})^\rm{C}$$
The three numbers A, B and C are chosen differently for each of the ten events and are shown in this table. You score zero points for a distance equal to or less than B. The times here are in seconds, the throw distances in metres, and the jump and vault distances in centimetres.
Most importantly, the points achieved for each of the 10 events are then added together to give the total score. In our experimental ST scoring scheme above they were multiplied together. You could, though, have added all the distances and all the times before dividing one total by the other.
Event A B C
100 m 25.4347 18 1.81
Long jump 0.14354 220 1.4
Shot put 51.39 1.5 1.05
High jump 0.8465 75 1.42
400 m 1.53775 82 1.81
110 m hurdles 5.74352 28.5 1.92
Discus throw 12.91 4 1.1
Pole vault 0.2797 100 1.35
Javelin throw 10.14 7 1.08
1500 m 0.03768 480 1.85
In order to get a feel for which events are 'easiest' to score in, take a look at this table which shows what you would have to do to score 900 points in each event for an Olympic-winning 9000-point total.
100m 10.83s
Long jump 7.36m
Shot put 16.79m
High jump 2.10m
110m hurdles 14.59s
Discus throw 51.4m
Pole vault 4.96m
Javelin throw 70.67m
1500m 247.42s (= 4m 07.4s)
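A short sketch of the two formulas, plugging in the A, B and C values from the table above, reproduces these 900-point benchmarks to within a point or two of rounding (remember that jump and vault distances enter in centimetres):

```python
def track_points(A, B, C, T):
    """Points for a running event with time T in seconds (T < B), rounded as in the text."""
    return round(A * (B - T) ** C)

def field_points(A, B, C, D):
    """Points for a throw (D in metres) or a jump/vault (D in centimetres)."""
    return round(A * (D - B) ** C)

print(track_points(25.4347, 18, 1.81, 10.83))   # 100 m in 10.83 s  -> about 900
print(field_points(0.14354, 220, 1.4, 736))     # long jump 7.36 m  -> about 900
print(field_points(12.91, 4, 1.1, 51.4))        # discus 51.40 m    -> about 900
```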
There is an interesting pattern in the decathlon formulae that change the distances and times achieved into points. The power index C is approximately 1.8 for the running events (1.9 for the hurdles), close to 1.4 for the jumps and pole vault and close to 1.1 for the throws. The fact that C > 1 indicates that the points scoring system is a 'progressive' one, curving upwards in a concave way; that is, it gets harder to score points as your performance gets better. This is realistic. We know that as you get more expert at your event it gets harder to make the same improvement, whereas beginners can easily make large gains. The opposite type of ('regressive') points system would have C < 1, with the curve flattening off as performances improve, while a 'neutral' one would have C = 1 and be a straight line. We can see that the IAAF tables are very progressive for the running events, fairly progressive for the jumps and vault, but almost neutral for the throws.
In order to get a feel for how the total points scored is divided across events, the Figure below shows the division between the ten events for the averages of the all-time top 100 best ever men's decathlon performances.
Figure: Average points spread achieved across the 10 decathlon events in the 100 highest points totals
It is clear that there has been a significant bias towards gathering points in the long jump, hurdles and sprints (100m and 400m). Performances in these events are all highly correlated with flat-out sprinting speed. Conversely, the 1500m and three throwing events are well behind the other disciplines in points scoring. If you want to coach a successful decathlete, start with a big strong sprint hurdler and build up strength and technical ability for the throws later. No decathletes bother much with 1500m preparation and rely on general distance running training.
Clearly, changes to the points scoring formula would change the event. The existing formulae are based largely upon (recent) historical performance data of decathletes rather than of top performances by the specialists in each event. Of course, this tends to reinforce any biases inherent in the current scoring tables, because the top decathletes are where they are because of the current edition of the scoring tables – it is not unfavourable to them. As an exercise, we could consider a simple change that is motivated by physics. In each event (with the possible exception of the 1500m), whether sprinting, throwing or jumping, it is the kinetic energy generated by the athlete that counts. This depends on the square of his or her speed (= $\frac{1}{2}\rm{MV}^2$, where M is their mass and V their speed). The height cleared by the high jumper or pole vaulter, or the horizontal distance reached by the long jumper, are all proportional to the square of their launch speed ($\propto \rm{V}^2/g$, where $g$ = 9.8 m/s$^2$ is the acceleration due to gravity). Since the kinetic energy generated when running at constant speed is proportional to (distance/time)$^2$, this suggests that we pick C = 2 for all events. If we do that and pick the best A and B values to fit the accumulated performance data as well as possible, then the sports scientist Wim Westera has calculated that we get an interesting change in the top ten decathletes. Šebrle becomes number 2 with a new score of 9318, whilst the present number 2, Dvořák, overtakes him to take first place with a new world record score of 9468 (just as he did with my ST scoring system - although notice that I multiplied the performances together, whereas all these other schemes add the points achieved in each event). Other top rankings change accordingly. The pattern of change is interesting. Picking C = 2 across all events is extremely progressive and greatly favours competitors with outstanding individual performances as opposed to those with consistently similar ones. However, it dramatically favours good throwers over the sprint hurdlers because of the big change from the value of C = 1.1 being applied to the throws at present. And this illustrates the basic difficulty with points systems of any sort – there is always a subjective element that could have been chosen differently.
Stop Press! A new world record was set for this event in the US Olympic trials during the weekend 23rd June 2012 by Ashton Eaton. The performances for each event and the points accrued by them can be found at http://www.usatf.org/events/2012/OlympicTrials-TF/Results/Summary-39.htm
It is an interesting little project to compare the pattern of points obtained across each of the events with the ideal 'average' decathlete in the graph in the article. Eaton is actually very different, doing far better in the running events and worse in the throws.
$^i$ See http://www.iaaf.org/mm/Document/Competitions/TechnicalArea/ScoringTables_CE_744.pdf
$^{ii}$ www.iaaf.org
John Barrow is the Director of the Millennium Maths project, of which NRICH is a part. He is Professor in the Department of Applied Mathematics and Theoretical Physics at the University of Cambridge, and lectures widely on the public understanding of maths and science. His most recent book '100 Essential Things You Didn't Know You Didn't Know About Sport' was published by Bodley Head in March 2012. | CommonCrawl |
Entropy Generation Analysis of Carbon Nanotubes Nanofluid 3D Flow along a Nonlinear Inclined Stretching Sheet through Porous Media
Shalini Jain* | Preeti Gupta
Dept. of Mathemathics, University of Rajasthan, Jaipur-302004, Rajasthan, India
Dept. of Mathematics & Statistics, Manipal University Jaipur, Jaipur-303007, Rajasthan, India
[email protected]
Second law analysis for three-dimensional flow of a water based CNT nanofluid over an inclined stretching sheet subject to a convective boundary condition in the presence of porous media has been carried out. For the analysis, we have taken two types of nanoparticles, namely single wall carbon nanotubes (SWCNT) and multiwall carbon nanotubes (MWCNT). A system of coupled non-linear differential equations for the flow is obtained by applying similarity transformations to the conservation laws. The resulting equations are solved by the Runge-Kutta fourth order method with a shooting technique. The effects of various physical parameters on the flow and heat transfer characteristics as well as on the entropy generation have been investigated and are displayed through graphs and tables.
Significance of the study:
The entropy generation analysis has been investigated in order to determine the optimal working condition for the given geometry under the considered boundary conditions. The mixture model with constant temperature properties was employed to simulate the nanofluid. Carbon nanotubes are hexagonally shaped arrangements of carbon atoms that have been rolled into tubes. Carbon nanotubes have high thermal conductivity; therefore, adding nanoparticles or nanotubes to the base fluid enhances the effective thermal conductivity of heat transfer fluids.
CNTs, entropy, nonlinear inclined stretching sheet, porous media
Nanofluids are used to increase the thermal conductivity and heat transfer rate of base fluids. The nanoparticles used in nanofluids are usually made of metals, carbides, oxides or carbon nanotubes. Carbon nanotubes are hexagonally shaped arrangements of carbon atoms that have been rolled into tubes. Carbon nanotubes were discovered in 1991 by Iijima. Carbon nanotubes have high thermal conductivity, exceptional mechanical strength and exceptional corrosion resistance. Their novel properties make them very useful in applications like microwave amplifiers, nanotube sensors, nanotube transistors, hand-held X-ray devices, field emission displays, solar cells, lithium ion batteries and chemical sensors. Nanofluids were first introduced by Choi [1], who showed that dispersing solid nano-sized particles in a carrier fluid gives a new type of complex fluid, known as a nanofluid. Hayat et al. [2] studied the impact of Marangoni convection in the viscous flow of a carbon-water nanofluid. Aman et al. [3] investigated free convection flow of CNT Maxwell nanofluids with four different types of molecular liquids as base fluids. Hayat et al. [4] discussed three dimensional flow with homogeneous-heterogeneous reactions for carbon nanotubes in porous media. Jain and Bohra [5] studied radiation and Hall current effects on squeezing MHD nanofluid flow with a lower permeable stretching wall in a rotating channel.
Boundary-layer flow over a stretching surface has applications in the extrusion of plastic sheets, wire drawing, glass fiber production, paper production, etc. Hayat et al. [6] studied three-dimensional magnetohydrodynamic nanofluid flow over a nonlinear stretching surface with a convective boundary condition. Mustafa et al. [7] discussed rotating flow of a water based magnetite nanofluid over a stretching surface with nonlinear thermal radiation. Gopal et al. [8] studied Joule and viscous dissipation in Casson fluid flow with an inclined magnetic field over a chemically reacting stretching sheet. Kandasamy et al. [9] discussed MHD SWCNT nanofluid flow considering both water and sea water as base fluids.
A convective boundary condition increases the temperature and the thermal conductivity of nanofluids. Mahanthesh et al. [10] studied radiative heat transfer in 3-D MHD nanofluid flow subject to a convective boundary condition over a nonlinear stretching sheet. Jain and Choudhary [11] studied the effect of MHD on boundary layer flow over an exponentially shrinking sheet in the presence of porous media and slip. Chauhan and Olkha [12] studied heat transfer and slip flow of a second-grade fluid through a porous medium past a stretching sheet with heat flux and power law surface temperature. Nayak [13] studied heat transfer effects of nanofluid flow past a shrinking surface with a convective condition. Jain and Bohra [14] studied heat and mass transfer over a 3-D inclined non-linear stretching sheet with convective boundary conditions.
Entropy generation analysis is carried out to determine the optimal working condition for a given geometry under the considered boundary conditions. The mixture model with constant temperature properties was employed to simulate the nanofluid. Results clearly showed that the inclusion of nanoparticles produces a considerable increase in heat transfer with respect to that of the base liquid, and the heat transfer enhancement increases with the particle volume concentration. In thermal engineering, entropy generation minimization is applicable in devices such as air separators, chillers, fuel cell reactors and thermal solar systems. Irreversibility phenomena associated with viscous dissipation, mass transfer and heat transfer are quantified through entropy generation. Matin et al. [15] studied entropy in mixed convection MHD nanofluid flow over a sheet. Rehman et al. [16] discussed entropy of radiative nanofluid flow with thermal slip. Das et al. [17] investigated entropy analysis of unsteady nanofluid flow past a stretching sheet with a convective condition. Chauhan and Kumar [18] studied entropy generation impacts on non-Newtonian third grade fluid flow through a partially filled annulus with temperature dependent viscosity and porous media. Shirley et al. [19] studied entropy analysis of a Nimonic 80A nanoparticle nanofluid and stagnation point flow over a convectively heated stretching sheet. Vasanthakumari and Pondy [20] studied mixed convection of titanium dioxide and silver nanofluids along an inclined sheet with MHD and heat generation/suction effects.
The aim of this study is to examine the entropy generation for 3D flow of water based CNT (SWCNT and MWCNT) nanofluids over an inclined stretching sheet subject to a convective boundary condition in the presence of porous media. The governing equations are solved using the Runge-Kutta fourth order method with a shooting technique. The effects of the pertinent parameters on the velocity, temperature and entropy generation profiles have been obtained. Results are discussed and displayed through graphs and tables.
2. Formulation of The Problem
Second law analysis for the 3D flow of a water based carbon nanotube nanofluid over an inclined nonlinear stretching sheet through porous media under a convective boundary condition has been carried out. The velocities along the x and y-directions are $U_w(x,y)=c(x+y)^n$ and $V_w(x,y)=d(x+y)^n$, where n>0 and c, d are positive constants. A schematic diagram is shown in Figure 1.
Figure 1. Schematic diagram
The governing equations are:
\(\frac{\partial u}{\partial x}+\frac{\partial v}{\partial y}+\frac{\partial w}{\partial z}=0\) (1)
\(u\frac{\partial u}{\partial x}+v\frac{\partial u}{\partial y}+w\frac{\partial u}{\partial z}={{\upsilon }_{nf}}\frac{{{\partial }^{2}}u}{\partial {{z}^{2}}}-\frac{{{\upsilon }_{nf}}}{{{k}_{p}}}u+g\left[ {{\beta }_{T}}\left( T-{{T}_{\infty }} \right) \right]\cos \alpha \)(2)
\(u\frac{\partial v}{\partial x}+v\frac{\partial v}{\partial y}+w\frac{\partial v}{\partial z}={{\upsilon }_{nf}}\frac{{{\partial }^{2}}v}{\partial {{z}^{2}}}-\frac{{{\upsilon }_{nf}}}{{{k}_{p}}}v+g\left[ {{\beta }_{T}}\left( T-{{T}_{\infty }} \right) \right]sin\alpha \) (3)
\(u\frac{\partial T}{\partial x}+v\frac{\partial T}{\partial y}+w\frac{\partial T}{\partial z}=\frac{{{k}_{nf}}}{{{\left( \rho {{c}_{p}} \right)}_{nf}}}\frac{{{\partial }^{2}}T}{\partial {{z}^{2}}}-\frac{1}{{{\left( \rho {{c}_{p}} \right)}_{nf}}}\frac{\partial {{q}_{r}}}{\partial Z}\) (4)
Here $u,v,w$ are the fluid velocities along the directions $x,y,z$ respectively; $k_p$ is the permeability of the porous medium; $β_T$ is the thermal expansion coefficient.
The boundary conditions are
\(z=0,\), \(u=c{{\left( x+y \right)}^{n}},\) \(v=d{{\left( x+y \right)}^{n}},\) \(w=0,\)\(-{{k}_{nf}}\frac{\partial T}{\partial z}=\gamma \left( {{T}_{w}}-T \right)\) (5)
at \(z\to \infty \), \(u\to 0\), \(v\to 0\), \(T\to {{T}_{∞}}\) (6)
where $μ_{nf},υ_{nf},ϕ,ρ_{nf},ρ_{CNT}$ are the viscosity of the nanofluid, the kinematic viscosity of the nanofluid, the volume fraction of nanoparticles, the density of the nanofluid and the density of the carbon nanotubes, respectively. Thermophysical properties of the fluid and nanoparticles are given in Table 1.
\({{\mu }_{nf}}=\frac{{{\mu }_{f}}}{{{\left( 1-\phi \right)}^{2.5}}},\)\({{\upsilon }_{nf}}=\frac{{{\mu }_{nf}}}{{{\rho }_{nf}}},\)\({{A}_{1}}=\frac{{{\rho }_{nf}}}{{{\rho }_{f}}}=\left( 1-\phi \right)+\phi \frac{{{\rho }_{CNT}}}{{{\rho }_{f}}}\),\({{A}_{2}}=\frac{{{\left( \rho {{c}_{p}} \right)}_{nf}}}{{{\left( \rho {{c}_{p}} \right)}_{f}}}=\left( 1-\phi \right)+\phi \frac{{{\left( \rho {{c}_{p}} \right)}_{CNT}}}{{{\left( \rho {{c}_{p}} \right)}_{f}}}\) (7)
The effective thermal conductivity of nanofluid is expressed as
\(\frac{{{k}_{nf}}}{{{k}_{f}}}=\left( \frac{1-\phi +2\phi \left( \frac{{{k}_{CNT}}}{{{k}_{CNT}}-{{k}_{f}}} \right)\ln \left( \frac{{{k}_{CNT}}+{{k}_{f}}}{2{{k}_{f}}} \right)}{1-\phi +2\phi \left( \frac{{{k}_{f}}}{{{k}_{CNT}}-{{k}_{f}}} \right)\ln \left( \frac{{{k}_{CNT}}+{{k}_{f}}}{2{{k}_{f}}} \right)} \right)\) (8)
Table 1. Thermophysical properties of the base fluid (water) and the nanoparticles (SWCNT and MWCNT): density ρ (kg/m3), specific heat $C_p$ (J/kgK) and thermal conductivity k (W/mK).
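As an illustration, Eq. (8) can be evaluated directly. The conductivity values used below are typical literature figures for water and SWCNTs, assumed here for the sketch rather than read from Table 1:

```python
import math

def k_ratio(phi, k_f, k_cnt):
    """Effective conductivity ratio k_nf / k_f of Eq. (8)."""
    log_term = math.log((k_cnt + k_f) / (2.0 * k_f))
    num = 1 - phi + 2 * phi * (k_cnt / (k_cnt - k_f)) * log_term
    den = 1 - phi + 2 * phi * (k_f / (k_cnt - k_f)) * log_term
    return num / den

# Assumed typical values: k_f ~ 0.613 W/mK (water), k_cnt ~ 6600 W/mK (SWCNT)
print(round(k_ratio(0.2, 0.613, 6600.0), 2))   # about 5.3 at phi = 0.2
```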
The Rosseland approximation is expressed as
\({{q}_{r}}=-\frac{4{{\sigma }^{*}}}{3{{k}^{*}}}\frac{\partial {{T}^{4}}}{\partial z}\) (9)
where σ* is the Stefan-Boltzmann constant and k* is the mean absorption coefficient. The temperature difference is considered very small, so that T^4 may be expressed as a linear function of temperature.
\({{T}^{4}}\approx 4{{T}^{3}}_{\infty }T-3{{T}^{4}}_{\infty }\) (10)
Similarity transformations are
\(u=c{{\left( x+y \right)}^{n}}f'(\eta ),\) \(v=c{{\left( x+y \right)}^{n}}g'\left( \eta \right),\)\(\eta ={{\left( \frac{c\left( n+1 \right)}{2{{\upsilon }_{f}}} \right)}^{1/2}}{{\left( x+y \right)}^{\frac{n-1}{2}}}z,\)\(T-{{T}_{\infty }}=\left( {{T}_{w}}-{{T}_{\infty }} \right)\theta \)
\(w=-{{\left( \frac{c{{\upsilon }_{f}}\left( n+1 \right)}{2} \right)}^{1/2}}\left\{ \left( f+g \right)+\frac{n-1}{n+1}\eta \left( f'+g' \right) \right\}{{\left( x+y \right)}^{\frac{n-1}{2}}},\) (11)
On substituting equation (11) in Equation (1)–(4), equation (1) identically satisfies and equation (2)-(4) transformed into the following form:
\(\begin{align} & f'''+\left\{ \left( f+g \right)f''-\frac{2n}{n+1}\left( f'+g' \right)f'+\frac{2}{n+1}\delta \cos {{\alpha }_{1}}\theta \right\} \\ & {{(1-\phi )}^{2.5}}{{A}_{1}}-\frac{2K}{n+1}f'=0 \\ \end{align}\) (12)
\(\begin{align} & g'''+\left\{ \left( f+g \right)g''-\frac{2n}{n+1}\left( f'+g' \right)g'+\frac{2}{n+1}\delta sin{{\alpha }_{1}}\theta \right\} \\ & {{(1-\phi )}^{2.5}}{{A}_{1}}-\frac{2K}{n+1}g'=0, \\ \end{align}\) (13)
\(\frac{1}{{{A}_{2}}\Pr }\left( {{A}_{3}}+\frac{4}{3}R \right)\theta ''+\left( f+g \right)\theta '=0\) (14)
with the transformed boundary conditions
at $η=0$, $f(0)=g(0)=0$, $f'(0)=1$, $g'(0)=α$, \(\theta \left( 0 \right)=1+\frac{{{A}_{3}}}{{{B}_{i}}}\theta '\left( 0 \right)\)
at η→∞, f'(∞)→0, g'(∞)→0, θ(∞)→0 (15)
where K is the local porosity parameter, α is the ratio parameter, R is the radiation parameter, $P_r$ is the Prandtl number. These non-dimensional variables are defined by
\(K=\frac{{{\upsilon }_{f}}}{{{k}_{p}}c{{\left( x+y \right)}^{n-1}}}\), \(\alpha =\frac{d}{c}\), \(\Pr =\frac{{{\mu }_{f}}{{\left( {{c}_{p}} \right)}_{f}}}{{{k}_{f}}}\), \(R=\frac{4{{\sigma }^{*}}{{T}_{\infty }}^{3}}{{{k}^{*}}{{k}_{f}}}\) (16)
3. Solution
Equations (12)-(14) subject to the boundary conditions (15) have been solved numerically by the fourth-order Runge-Kutta method with a shooting technique. The Runge-Kutta method needs a finite domain 0≤η≤η∞; in this study we have chosen η∞=10. The boundary value problem is converted into an initial value problem, defined as
f=f1, f'=f2, f''=f3, g=f4, g'=f5, g''=f6, θ=f7, θ'=f8,
$f_3' = \frac{2K}{n+1}f_2 - (1-\phi)^{2.5}A_1\left\{(f_1+f_4)f_3 - \frac{2n}{n+1}(f_2+f_5)f_2 + \frac{2}{n+1}\delta\cos\alpha_1\, f_7\right\}$ (17)

$f_6' = \frac{2K}{n+1}f_5 - (1-\phi)^{2.5}A_1\left\{(f_1+f_4)f_6 - \frac{2n}{n+1}(f_2+f_5)f_5 + \frac{2}{n+1}\delta\sin\alpha_1\, f_7\right\}$ (18)

$f_7' = f_8, \qquad f_8' = -\frac{A_2\Pr}{\left(A_3+\frac{4}{3}R\right)}(f_1+f_4)f_8$ (19)
\({{f}_{1}}\left( 0 \right)={{f}_{4}}\left( 0 \right)=0\), \({{f}_{2}}\left( 0 \right)=1,\) \({{f}_{5}}\left( 0 \right)=\alpha ,\) \({{f}_{7}}\left( 0 \right)=1+\frac{{{A}_{3}}}{{{B}_{i}}}{{f}_{8}}\left( 0 \right),\) \({{f}_{3}}(0)={{r}_{1}},{{f}_{6}}(0)={{r}_{2}}\,,{{f}_{8}}(0)={{r}_{3}},\) (20)
where, $r_1$, $r_2$ and $r_3$ are the initial guesses.
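A minimal sketch of this shooting procedure is given below. It is not the authors' code: SciPy's adaptive Runge-Kutta integrator stands in for a hand-written RK4 routine, and the property ratios A1, A2, A3 are placeholders that would normally be computed from Eqs. (7)-(8) and Table 1.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

# Illustrative parameter values (those quoted with the figures); A1, A2, A3 are placeholders.
n, K, alpha, alpha1, delta = 3.0, 0.1, 0.3, np.pi / 4, 1.0
phi, R, Bi, Pr = 0.2, 0.2, 0.5, 6.2
A1, A2, A3 = 1.3, 1.2, 1.5

def rhs(eta, y):
    # y = [f, f', f'', g, g', g'', theta, theta'] -- Eqs. (17)-(19)
    f, fp, fpp, g, gp, gpp, th, thp = y
    coef = (1 - phi) ** 2.5 * A1
    fppp = 2 * K / (n + 1) * fp - coef * ((f + g) * fpp
            - 2 * n / (n + 1) * (fp + gp) * fp + 2 / (n + 1) * delta * np.cos(alpha1) * th)
    gppp = 2 * K / (n + 1) * gp - coef * ((f + g) * gpp
            - 2 * n / (n + 1) * (fp + gp) * gp + 2 / (n + 1) * delta * np.sin(alpha1) * th)
    thpp = -A2 * Pr / (A3 + 4 * R / 3) * (f + g) * thp
    return [fp, fpp, fppp, gp, gpp, gppp, thp, thpp]

def residuals(guess, eta_max=10.0):
    # Shooting: integrate from eta = 0 with guessed f''(0), g''(0), theta'(0)
    # and require f'(inf) = g'(inf) = theta(inf) = 0, cf. Eq. (15).
    r1, r2, r3 = guess
    y0 = [0.0, 1.0, r1, 0.0, alpha, r2, 1.0 + A3 / Bi * r3, r3]
    sol = solve_ivp(rhs, (0.0, eta_max), y0, rtol=1e-8, atol=1e-8)
    return [sol.y[1, -1], sol.y[4, -1], sol.y[6, -1]]

r1, r2, r3 = fsolve(residuals, x0=[-1.0, -0.5, -0.5])
print("f''(0), g''(0), theta'(0) =", r1, r2, r3)
```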
4. Entropy Generation
The local volumetric entropy generation rate of a viscous incompressible fluid in a porous medium is written as
\(\begin{align} & S''{{'}_{gen}}=\left\{ {{\left( \frac{\partial T}{\partial x} \right)}^{2}}+{{\left( \frac{\partial T}{\partial y} \right)}^{2}}+{{\left( \frac{\partial T}{\partial z} \right)}^{2}}+\frac{4}{3}R{{\left( \frac{\partial T}{\partial z} \right)}^{2}} \right\}\frac{{{k}_{nf}}}{{{T}_{\infty }}^{2}} \\ & +\frac{{{\mu }_{nf}}}{{{T}_{\infty }}}\left\{ {{\left( \frac{\partial u}{\partial z} \right)}^{2}}+{{\left( \frac{\partial v}{\partial z} \right)}^{2}} \right\}+\frac{1}{{{T}_{\infty }}}\frac{{{\mu }_{nf}}}{{{k}_{p}}}{{u}^{2}} \\ \end{align}\) (21)
The non-dimensional form of characteristic entropy generation rate is
\(S''{{'}_{0}}=\frac{{{k}_{nf}}{{\left( {{T}_{w}}-{{T}_{\infty }} \right)}^{2}}}{{{T}_{\infty }}^{2}{{L}^{2}}}\) (22)
Hence the entropy generation number is
\({{N}_{G}}=\frac{S''{{'}_{gen}}}{S''{{'}_{0}}}\) (23)
Using equations (21) and (22), equation (23) reduces to the following equation
\(\begin{align} & {{N}_{G}}=\left\{ \frac{{{\left( n-1 \right)}^{2}}}{2}{{\eta }^{2}}+\left( \frac{n+1}{2} \right){{\operatorname{Re}}_{L}}\left( 1+\frac{4}{3}R \right) \right\}\theta {{'}^{2}} \\ & +\frac{{{\operatorname{Re}}_{L}}KBr}{{{\left( 1-\phi \right)}^{2.5}}\Omega {{A}_{3}}}f{{'}^{2}}+\left( \frac{n+1}{2} \right)\frac{Br{{\operatorname{Re}}_{L}}}{{{A}_{3}}{{\left( 1-\phi \right)}^{2.5}}\Omega }\left( f'{{'}^{2}}+g'{{'}^{2}} \right) \\ \end{align}\)(24)
where $Re_L$, $Br$ and Ω denote the Reynolds number, the Brinkman number and the dimensionless temperature difference, respectively. These numbers can be expressed as
\({{\operatorname{Re}}_{L}}=\frac{{{U}_{w}}L}{{{\upsilon }_{f}}}\), \(Br=\frac{{{\mu }_{f}}{{U}_{w}}^{2}}{{{k}_{f}}\left( {{T}_{w}}-{{T}_{_{\infty }}} \right)}\), \(\Omega =\frac{\left( {{T}_{w}}-{{T}_{\infty }} \right)}{{{T}_{\infty }}}\)
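For reference, Eq. (24) translates directly into a small helper that evaluates the entropy generation number once the similarity profiles are available (a sketch; the default parameter values are placeholders, and the profile values f', f'', g'', θ' would come from the shooting solution of Section 3):

```python
def entropy_number(eta, thetap, fp, fpp, gpp,
                   n=3.0, Re_L=5.0, R=0.2, K=0.1, Br_Omega=1.0, phi=0.2, A3=1.5):
    """Entropy generation number N_G of Eq. (24); Br_Omega stands for the group Br/Omega."""
    heat = ((n - 1) ** 2 / 2 * eta ** 2
            + (n + 1) / 2 * Re_L * (1 + 4 * R / 3)) * thetap ** 2
    porous = Re_L * K * Br_Omega / ((1 - phi) ** 2.5 * A3) * fp ** 2
    friction = (n + 1) / 2 * Br_Omega * Re_L / (A3 * (1 - phi) ** 2.5) * (fpp ** 2 + gpp ** 2)
    return heat + porous + friction
```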
5. Validation of The Study
Table 2 depicts the comparison of the present results with the previous results of Mahanthesh et al. [10]. This comparison shows that the present results are in very good agreement with the previous ones, which verifies the validity of the present numerical procedure.
Table 2. Comparison of the values of $f''(0)$ and $g''(0)$ with those of Mahanthesh et al. [10], taking n=1, ϕ=0, $α_1$=0 and K=0 in this study
6. Results and Discussion

The numerical results have been obtained for two different cases of water based CNTs, i.e. SWCNTs and MWCNTs, and are presented graphically. Effects of pertinent parameters like the ratio parameter α, power law index n, nanoparticle volume fraction ϕ, local porosity parameter K, radiation parameter R, Prandtl number Pr, Biot number Bi and Brinkman number Br on the velocity profile, temperature profile and entropy generation are discussed and plotted in the figures.
$α_1=\frac{π}{4}$, $ϕ=0.2$, $K=0.1$, $α=0.3$, $R=0.2$, $Bi=0.5$, $δ=1$, $n=3$
Figure 2. Velocity for different parameters
Figures 2-3 show the comparative study of different physical parameters for SWCNT and MWCNT along the axial and transverse directions, including the variation of the stretching ratio parameter on the axial velocity. It is noted that a rise in the stretching ratio parameter reduces the boundary layer thickness along the axial direction, while in the transverse direction a larger stretching ratio parameter increases the boundary layer. The stretching ratio parameter is the ratio of the horizontal to the vertical stretching.
The effect of the local porosity parameter for SWCNT and MWCNT along the axial and transverse directions is to reduce the boundary layer thickness. The influence of the nanoparticle volume fraction for SWCNT and MWCNT along the axial and transverse directions is to increase the boundary layer thickness, the reason being that an enhancement in CNT loading thickens the boundary layer.
Figure 4. Velocity for different values of n
Figure 4 shows the influence of the power law index on the velocity profile along the axial and transverse directions for both carbon nanotubes. The velocity profile decreases when the power law index increases for both CNTs. From these results we find that the velocities are higher for MWCNTs than for SWCNTs.
Figure 5 shows the influence of the Biot number Bi on the temperature distribution.
It is noted that an increase in the Biot number causes an enhancement in the temperature profile. Physically speaking, an increase in the Biot number decreases the sheet's thermal resistance and improves convective heat transfer to the fluid at the sheet. An increment in the Biot number corresponds to stronger convection, which yields a higher temperature distribution and a thicker thermal boundary layer for both CNTs. The response of the temperature profile to the nanoparticle volume fraction is shown in Figure 6. This figure shows that the temperature in the boundary layer is an increasing function of the volume fraction parameter.
Figure 5. Temperature profile of Bi
Figure 6. Temperature profile of ϕ
Figure 7. Temperature profile of R for SWCNT
Figures 7 and 8 describe the impact of the thermal radiation parameter on the dimensionless temperature profile, and compare the influence of nonlinear and linear thermal radiation on the thermal boundary layer. Figure 7 corresponds to SWCNTs and Figure 8 to MWCNTs. From these figures, the temperature in the boundary layer rises with a rise in the radiation parameter. The central reason behind this outcome is that, by strengthening the radiation parameter, the Rosseland mean absorption coefficient k* decreases. Consequently the divergence of the radiative heat flux increases, which in turn raises the rate of radiative heat transfer into the fluid. The higher radiative heat transfer is responsible for the increase in thermal boundary layer growth in the fluid. Further, the temperature is lower for linear thermal radiation compared to non-linear thermal radiation. Thus, we note that nonlinear thermal radiation is more suitable for heating processes.
Figure 8. Temperature profile of R for MWCNT
Figure 9. Temperature profile of K
In Figure 9 both the temperature profile and the related boundary layer thickness increase when the permeability parameter of the porous medium increases, for both CNTs, because an enhancement in carbon nanotubes increases the boundary layer thickness.
Figure 10 shows that the entropy generation decreases as the nanoparticle volume fraction parameter increases. The entropy generation number reduces with a rising value of the nanoparticle volume fraction parameter due to the high dissipated energy resulting from the sharper velocity gradient near the wall, while the reverse effect is obtained in the far field. It is also noted that the decrease in friction between the stretching surface and the nanofluid results in a decrease in entropy production.
Figure 10. Entropy for different values of ϕ
Figure 11. Entropy for different values of Bi
Figure 12. Entropy for different values of α
Figure 11 illustrates the effects of the Biot number Bi on the entropy generation number. Near the stretching surface the effects of the Biot number on the entropy generation number are prominent: the entropy generation number increases with an increase in the Biot number in the boundary layer region. In the region far away from the surface of the stretching sheet the entropy generation is negligible. Therefore, the entropy can be minimized by increasing the convection through the boundary. Figure 12 displays that the entropy generation decreases when the ratio parameter increases.
Figure 13. Entropy for different values of ReL
Figure 14. Entropy for different values of $BrΩ^{-1}$
The impacts of the non-dimensional Reynolds number and of $BrΩ^{-1}$ on the entropy generation are presented in Figures 13 and 14, respectively. A rise in either of these non-dimensional parameters results in a rise in the entropy generation. Figure 13 shows that an increasing Reynolds number leads to a higher heat transfer rate at the surface of the stretching sheet. When the Reynolds number increases, the entropy due to heat transfer becomes prominent and the contribution of fluid friction decreases near the stretching sheet; at larger distances from the surface these effects are negligible. The Reynolds number generates higher entropy, and the entropy function depends strongly on it: at high Reynolds number the fluid moves more chaotically, so the contributions of fluid friction and heat transfer tend to increase the entropy generation. Figure 14 shows that the entropy generation increases when $BrΩ^{-1}$ increases, because a higher $BrΩ^{-1}$ increases the nanofluid friction. This parameter determines the relative significance of viscous effects on the flow. In the figure, the entropy number is greater for a higher dimensionless group parameter, reflecting the fact that for a higher dimensionless group parameter the entropy number due to fluid friction is enhanced.
7. Conclusions

Entropy generation analysis of water based carbon nanotube nanofluid 3D flow over an inclined nonlinear stretching sheet embedded in porous media has been studied. The effects of different physical parameters on the velocity, temperature and entropy profiles have been examined. The following points are concluded:
The velocity component reduces for a higher stretching ratio parameter in the axial direction, while the reverse effect is seen in the transverse direction.
Both velocity components are higher for MWCNTs as compared with SWCNTs.
Temperature profile increases with increasing value of Biot number, porosity parameter and radiation parameter.
Non-linear radiation has a stronger effect on the flow fields than linear thermal radiation.
Entropy generation number increases with increasing value of Biot number, Reynolds number and $BrΩ^{-1}$ .
The Reynolds number generates higher entropy, so the entropy depends strongly on the Reynolds number.
Entropy generation number reduces with increasing value of stretching ratio parameter and nanoparticle volume fraction.
Nomenclature

K Local porosity parameter
R Radiation parameter
α Ratio parameter
Pr Prandtl number
$Re_L$ Reynolds number
Br Brinkman number
Ω Dimensionless temperature difference
[1] Choi SUS, Zhang ZG, Yu W, Lockwood FE, Grulke EA. (2001). Anomalous thermal conductivity enhancement in nanotube suspension. Applied Physics Lett 79(14): 2252-2254. http://dx.doi.org/10.1063/1.1408272
[2] Hayat T, Khan MI, Farooq M, Alsedi A, Yasmeen T. (2016). Impact of Margoni convection in the flow of carbon-water nanofluid with thermal radiation. Int. Journal of Heat and Mass Transfer 106: 810-815. https://doi.org/10.1016/j.ijheatmasstransfer.2016.08.115
[3] Aman S, Khan I, Ismail Z, Salleh MZ, Al-Mdallal QM. (2017). Heat transfer enhancement in free convection flow of CNTs Maxwell nanofluids with four different types of molecular liquids. Sci. Rep. 7(1): 2445. https://doi.org/10.1038/s41598-017-01358-3
[4] Hayat T, Ahmed S, Muhammad T, Alsedi A, Ayub M. (2017). Computational modelling for homogenous-heterogeneous reactions in three-dimensional flow of carbon nanotubes. Res. In Physics 7: 2651-2657. https://doi.org/10.1016/j.rinp.2017.07.040
[5] Jain S, Bohra S. (2018). Hall current and radiation effects on unsteady MHD squeezing nanofluid flow in a rotating channel with lower stretching permeable wall. Applications of Fluid Dynamics 127-141. http://dx.doi.org/10.1007/978-981-10-5329-0_9
[6] Hayat T, Aziz A, Muhammad T, Alsedi A. (2016). On magnetohydrodynamic three-dimensional flow of nanofluid over a convectively heated nonlinear stretching surface. Int. Journal of Heat and Mass Transfer 100: 566-572. http://dx.doi.org/10.1016/j.ijheatmasstransfer.2016.04.113
[7] Mustafa M, Mustaq A, Hayat T, Alsedi A. (2016). Rotating flow of magnetite-water nanofluid over a stretching surface inspired by non linear thermal radiation. PLOS ONE 11(2): e0149304. http://dx.doi.org/10.1371/journal.pone.0149304
[8] Gopal D, Kishan N, Raju CSK. (2017). Viscous and joule's dissipation on Casson fluid over a chemically reacting stretching sheet with inclined magnetic field and multiple slips. Inform. in Med. Unlocked 9: 154-160. http://dx.doi.org/10.1016/j.imu.2017.08.003
[9] Kandasamy R, Vignesh V, Kumar A, Hasan SH, Isa NM. (2018). Thermal radiation energy due to SWCNTs on MHD nanofluid flow in the presence of seawater/water: Lie group transformation. Ain Shams Eng. Journal 9(4). http://dx.doi.org/10.1016/j.asej.2016.04.022
[10] Mahanthesh B, Gireesha BJ, Gorla RSR. (2016). Nonlinear radiative heat transfer in MHD 3-D flow of water based nanofluid over a non-linearly stretching sheet with convective boundary condition. Journal of Nige. Math. Soc 35: 178-198. http://dx.doi.org/10.1016/j.jnnms.2016.02.003
[11] Jain S, Choudhary R. (2015). Effects of MHD on boundary layer flow in porous medium due to exponentially shrinking sheet with slip. Journal of Procedia Eng 127: 1203-1210. http://dx.doi.org/10.1016/j.proeng.2015.11.464
[12] Chauhan DS, Olkha A. (2011). Slip flow and heat transfer of a second grade fluid in a porous medium over a stretching sheet with power law surface temperature or heat flux. J. of Chem. Eng. Communi 198(9): 1129-1145. https://doi.org/10.1080/00986445.2011.552034
[13] Nayak MK. (2017). MHD 3D flow and heat transfer analysis of nanofluid by shrinking surface inspired by thermal radiation and viscous dissipation. International Journal of Mech. Sciences 124-125: 185-193. http://dx.doi.org/10.1016/j.ijmecsci.2017.03.014
[14] Jain S, Bohra S. (2017). Heat and mass transfer over a three-dimensional inclined non-linear stretching sheet with convective boundary conditions. Indian Journal of Pure and Applied Physics 55: 847-856. http://op.niscair.res.in/index.php/IJPAP/article/view/15706/1411
[15] Matin MH, Nobari MRH, Jahangiri P. (2012). Entropy analysis in mixed convection MHD flow of nanofluid over a non-linear stretching sheet. Journal of Thermal Science and Technology 7(1): 104-119. http://dx.doi.org/10.1299/jtst.7.104
[16] Rehman AU, Mahmood R, Nadeem S. (2017). entropy analysis of radioactive rotating nanofluid with thermal slip. Applied Ther. Engineering 112: 832-840. http://dx.doi.org/10.1016/j.applthermaleng.2016.10.150
[17] Das S, Chakraborty S, Jana RN, Makinde OD. (2015). Entropy analysis of unsteady magneto-nanofluid flow past accelerating stretching sheet with convective boundary condition. Applied Math. Mech. Engl. Ed. 36(12): 1593-1610. http://dx.doi.org/10.1007/s10483-015-2003-6
[18] Chauhan DS, Kumar V. (2013). Entropy analysis for third grade fluid flow with temperature-dependent viscosity in annulus partially filled with porous medium. Theoret. Appl. Mech. 40(3): 441-464. http://dx.doi.org/10.2298/TAM1303441C
[19] Shirley A, Aurang Z. (2017). Entropy generation of nanofluid flow over a convectively heated stretching sheet with stagnation point flow having nimonic 80A nanoparticles: Buongiorno model. Fluid Mech. and Thermo 618-624.
[20] Vasanthakumari R, Pondy P. (2018). Mixed convection of silver and titanium dioxide nanofluids along inclined stretching sheet in presence of MHD with heat generation suction effect. Math. Modell. of Eng. Pro 5(2): 123-129. http://dx.doi.org/10.18280/mmep.050210 | CommonCrawl |
Title: Moments of random matrices and hypergeometric orthogonal polynomials
Authors: Fabio Deelan Cunden, Francesco Mezzadri, Neil O'Connell, Nick Simm
(Submitted on 22 May 2018 (v1), revised 4 Jul 2018 (this version, v3), latest version 11 Jan 2019 (v5))
Abstract: We establish a new connection between moments of $n \times n$ random matrices $X_n$ and hypergeometric orthogonal polynomials. Specifically, we consider moments $\mathbb{E}\mathrm{Tr} X_n^{-s}$ as a function of the complex variable $s \in \mathbb{C}$, whose analytic structure we describe completely. We discover several remarkable features, including a reflection symmetry (or functional equation), zeros on a critical line in the complex plane, and orthogonality relations. An application of the theory resolves part of an integrality conjecture of Cunden et al. [F. D. Cunden, F. Mezzadri, N. J. Simm and P. Vivo, J. Math. Phys. 57 (2016)] on the time-delay matrix of chaotic cavities. In each of the classical ensembles of random matrix theory (Gaussian, Laguerre, Jacobi) we characterise the moments in terms of the Askey scheme of hypergeometric orthogonal polynomials. We also calculate the leading order $n\to\infty$ asymptotics of the moments and discuss their symmetries and zeroes. We discuss aspects of these phenomena beyond the random matrix setting, including the Mellin transform of products and Wronskians of pairs of classical orthogonal polynomials. When the random matrix model has orthogonal or symplectic symmetry, we obtain a new duality formula relating their moments to hypergeometric orthogonal polynomials.
Comments: 53 pages, 4 figures
Subjects: Mathematical Physics (math-ph); Classical Analysis and ODEs (math.CA); Complex Variables (math.CV)
Cite as: arXiv:1805.08760 [math-ph]
(or arXiv:1805.08760v3 [math-ph] for this version)
From: Fabio Deelan Cunden [view email]
[v1] Tue, 22 May 2018 17:45:57 UTC (185 KB)
[v2] Mon, 4 Jun 2018 16:28:04 UTC (187 KB)
[v3] Wed, 4 Jul 2018 11:12:16 UTC (187 KB)
[v4] Fri, 9 Nov 2018 16:23:40 UTC (188 KB)
[v5] Fri, 11 Jan 2019 14:37:05 UTC (188 KB)
Diffusion and Defect Data Pt.B: Solid State Phenomena
Published by Trans Tech Publications
Evaluation of Surface Passivation Layers for Bulk Lifetime Estimation of High Resistivity Silicon for Radiation Detectors
Joan Marc Rafi
L. Cardona-Safont
F.M. Zabala
Manuel Lozano
With the aim to identify an appropriate low-temperature surface passivation process that could be used for bulk lifetime estimation of high resistivity (HR) (>1 kΩ·cm) silicon for radiation detectors, different candidate passivating layers were evaluated on n-type and p-type standard Czochralski (CZ), HR magnetic Czochralski (MCZ) and HR float zone (FZ) substrates. Minority carrier lifetime measurements were performed by means of a microwave PhotoConductance Decay single point setup. The results show that SiNx PECVD layers deposited at 200 °C may be used to evaluate the impact of different processing steps and treatments on the substrate characteristics for radiation detectors.
Design of a Low-Cost Submicron Measuring Probe
Gyula Hermann
In this paper a new low-cost design is presented. The moving element of the probe head consists of the stylus and a cross-form intermediate body with a small aluminium-enhanced mirror at the two ends and at the center. The intermediate body is suspended on four springs made of beryllium-copper foils. The displacement of the probe tip is calculated from the displacement and the rotations of the mirrors, measured by modified optical pick-ups. In order to test probes, a calibration system with 20 nm measuring uncertainty was designed. A high precision three-axis translation stage, with a working range of 100 × 100 × 100 μm, moves the probe stylus, and the position of the stage is determined by three mutually orthogonal plane-mirror laser interferometer transducers having 1 nm resolution.
Non-Affine Deformations at a Concentration Transition in Cross-Linked Elastomers in the Light of the 3D XY Spin Glass Model
Liliya Elnikova
X-ray and mechanical spectroscopy on liquid-crystalline elastomers give evidence of rubber elasticity, which depends upon the crosslink concentration. After applied macroscopic deformations, mesoscale non-affine deformations in these systems might lead to long relaxation times. Based on the example of the crosslink-dependent smectic A − nematic (SmA−N) transition in polysiloxanes, we propose to use the three-dimensional Villain spin glass model and reduce it to the lattice version of the three-dimensional XY spin-glass model. Using the Monte Carlo loop algorithm in this model, we found a percolation threshold depending on the crosslink concentration.
Ab Initio Exchange Interactions and Magnetic Properties of Intermetallic Compound Gd2Fe17-XGax
E. E. Kokorina
M. V. Medvedev
Igor A. Nekrasov
Intermetallic compounds R2Fe17 are promising for applications as permanent magnets. Technologically these systems must have a Curie temperature Tc much higher than room temperature and should preferably have easy-axis anisotropy. At the moment the highest Tc among stoichiometric R2Fe17 materials is 476 K, which is not high enough. There are two possibilities to increase Tc: substitution of Fe ions with non-magnetic elements or introduction of light elements into interstitial positions. In this work we have focused our attention on the substitution scenario of Curie temperature enhancement observed experimentally in Gd(2)Fe(17-x)Ga(x) (x=0,3,6) compounds. In the framework of the LSDA approach the electronic structure and magnetic properties of the compounds were calculated. Ab initio exchange interaction parameters within the Fe sublattice for all nearest Fe ions were obtained. Employing the theoretical values of the exchange parameters, the Curie temperatures Tc of Gd(2)Fe(17-x)Ga(x) were estimated within mean-field theory. The obtained values of Tc agree well with experiment, and the LSDA-computed values of the total magnetic moment coincide with the experimental ones.
Cluster Dynamics Modeling of Materials: Advantages and Limitations
Alain Barbu
Emmanuel Clouet
The aim of this paper is to give a short review of cluster dynamics modeling in the field of atom and point-defect clustering in materials. It is shown that this method, owing to its low computational cost, can handle long-term evolution that cannot, in many cases, be reached by lattice kinetic Monte Carlo methods. This capability, however, comes at the price of an important drawback: the loss of spatial correlations between the elements of the microstructure. Some examples in the field of precipitation and irradiation of metallic materials are given. The limitations and difficulties of the method are also discussed. Unsurprisingly, it is shown that the method performs very satisfactorily when the objects are distributed homogeneously. Conversely, the source term describing the primary damage under irradiation, which is by nature heterogeneous in space and time, is tricky to introduce, especially when displacement cascades are produced.
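For readers unfamiliar with the method, cluster dynamics amounts to integrating a large set of coupled rate (master) equations for the mean concentrations of clusters of each size; a generic form, written in illustrative notation that is not taken from the paper, is
$$\frac{dC_n}{dt} = \beta_{n-1}C_1 C_{n-1} - \left(\beta_n C_1 + \alpha_n\right)C_n + \alpha_{n+1}C_{n+1} + G_n,$$
where $C_n$ is the concentration of clusters containing $n$ monomers, $\beta_n$ and $\alpha_n$ are absorption and emission coefficients, and $G_n$ is the source term (the quantity that becomes problematic for spatially heterogeneous primary damage such as displacement cascades).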
Phase Heterogeneities of Lipidic Aggregates
We propose a model to explain the "domain-wall"-type configuration states in binary lipid mixtures of cationic and neutral lipids, associated with the relaxation effects observed in their aggregates. We apply an analogy with the formation of Kibble-Zurek topological defects, which we suppose to be connected with the structural dynamics of the lipid phases. Within the proposed model, the density of kink-type defects and the energy of the configurations are calculated.
Simulations of Decomposition Kinetics of Fe-Cr Solid Solutions during Thermal Aging
Enrique Martinez
Chu-Chun Fu
Maximilien Levesque
Frederic Soisson
The decomposition of Fe-Cr solid solutions during thermal aging is modeled by atomistic kinetic Monte Carlo (AKMC) simulations, using a rigid-lattice approximation with composition-dependent pair interactions that can reproduce the change of sign of the mixing energy with the alloy composition. The interactions are fitted to ab initio mixing energies and to the experimental phase diagram, as well as to the migration barriers in the iron- and chromium-rich phases. The simulated kinetics is compared with 3D atom probe and neutron scattering experiments.
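To illustrate how pair interactions on a rigid lattice enter such simulations, a minimal Metropolis-style exchange Monte Carlo sketch is given below. It is only a toy illustration: the pair energies are made up, the lattice is simple cubic rather than bcc, and the paper itself uses a vacancy-mediated AKMC scheme with fitted migration barriers rather than the direct atom exchanges shown here.

import numpy as np

rng = np.random.default_rng(0)

L = 16                                    # lattice size (illustrative)
kT = 0.05                                 # temperature in eV (illustrative)
V_FeFe, V_CrCr, V_FeCr = 0.0, 0.0, 0.02   # made-up nearest-neighbour pair energies (eV)

# 0 = Fe, 1 = Cr on a simple cubic lattice with periodic boundaries
conf = (rng.random((L, L, L)) < 0.1).astype(int)

def pair_energy(a, b):
    # Pair energy for two atom types; unlike the paper, not composition dependent.
    if a == b:
        return V_FeFe if a == 0 else V_CrCr
    return V_FeCr

def local_energy(site):
    # Sum of pair energies between one site and its six nearest neighbours.
    x, y, z = site
    e = 0.0
    for dx, dy, dz in ((1,0,0),(-1,0,0),(0,1,0),(0,-1,0),(0,0,1),(0,0,-1)):
        nb = conf[(x+dx) % L, (y+dy) % L, (z+dz) % L]
        e += pair_energy(conf[x, y, z], nb)
    return e

def metropolis_swap():
    # Attempt to exchange two randomly chosen atoms; accept with the Metropolis rule.
    i = tuple(rng.integers(0, L, 3))
    j = tuple(rng.integers(0, L, 3))
    if conf[i] == conf[j]:
        return
    e_old = local_energy(i) + local_energy(j)
    conf[i], conf[j] = conf[j], conf[i]
    e_new = local_energy(i) + local_energy(j)
    if e_new - e_old > 0 and rng.random() >= np.exp(-(e_new - e_old) / kT):
        conf[i], conf[j] = conf[j], conf[i]   # reject the move: swap back

for _ in range(100000):
    metropolis_swap()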
Application of Powder Diffraction Methods to the Analysis of Short- and Long-Range Atomic Order in Nanocrystalline Diamond and SiC: The Concept of the Apparent Lattice Parameter (alp)
B. Palosz
E. Grzanka
Stanislaw Gierlotka
Witold Palosz
Two methods for the analysis of powder diffraction patterns of diamond and SiC nanocrystals are presented: (a) examination of changes of the lattice parameters with the diffraction vector Q ('apparent lattice parameter', alp), which refers to Bragg scattering, and (b) examination of changes of interatomic distances based on the analysis of the atomic Pair Distribution Function, PDF. The application of these methods was studied based on theoretical diffraction patterns computed for models of nanocrystals having (i) a perfect crystal lattice and (ii) a core-shell structure, i.e. constituting a two-phase system. The models are defined by the lattice parameter of the grain core, the thickness of the surface shell, and the magnitude and distribution of the strain field in the shell. Experimental X-ray and neutron diffraction data of nanocrystalline SiC and diamond powders with grain diameters from 4 nm up to micrometers were used. The effects of the internal pressure and of the strain at the grain surface on the structure are discussed based on the experimentally determined dependence of the alp values on the Q-vector and on the changes of the interatomic distances with grain size determined experimentally by atomic Pair Distribution Function (PDF) analysis. The experimental results lend strong support to the concept of a two-phase, core and surface shell structure of nanocrystalline diamond and SiC.
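As a point of reference, for a cubic structure such as diamond or 3C-SiC the apparent lattice parameter can be defined per Bragg reflection from the peak position (this is the generic definition; the exact conventions used by the authors may differ):
$$alp(Q_{hkl}) = \frac{2\pi\sqrt{h^2+k^2+l^2}}{Q_{hkl}}, \qquad Q_{hkl} = \frac{4\pi\sin\theta_{hkl}}{\lambda},$$
so that for a perfect, strain-free lattice alp is independent of Q, while a Q-dependence signals deviations such as a strained surface shell.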
Anomalous Temperature and Field Behaviors of Magnetization in Cubic Lattice Frustrated Ferromagnets
Andrey N. Ignatenko
A. A. Katanin
Valentin Irkhin
Thermodynamic properties of cubic Heisenberg ferromagnets with competing exchange interactions are considered near the frustration point where the coefficient $D$ in the spin-wave spectrum $E_{\mathbf{k}}\sim D k^{2}$ vanishes. Within the Dyson-Maleev formalism it is found that at low temperatures thermal fluctuations stabilize ferromagnetism by increasing the value of $D$. For not too strong frustration this leads to an unusual "concave" shape of the temperature dependence of magnetization, which is in agreement with experimental data on the europium chalcogenides. Anomalous temperature behavior of magnetization is confirmed by Monte Carlo simulation. Strong field dependence of magnetization (paraprocess) at finite temperature is found near the frustration point.
The Neel Temperature and Sublattice Magnetization for the Stacked Triangular-Lattice Antiferromagnet with a Weak Interlayer Coupling
The quantum Heisenberg antiferromagnet on the stacked triangular lattice with the intralayer nearest-neighbor exchange interaction J and interlayer exchange J' is considered within the non-linear $\sigma$-model with the use of the renormalization group (RG) approach. For J' << J the asymptotic formula for the Neel temperature $T_{Neel}$ and sublattice magnetization are obtained. RG turns out to be insufficient to describe experimental data since it does not take into account the $\mathcal{Z}_2$-vortices. Therefore $T_{Neel}$ is estimated using the Monte-Carlo result for the 2D correlation length [10] which has a Kosterlitz-type behavior near the temperature $T_{KT}$ where the vortices are activated.
First Principles Study of Water-Based Self-Assembled Nanobearing Effect in CrN/TiN Multilayer Coatings
David Holec
Jörg Paulitsch
Paul H Mayrhofer
Recently, we have reported on low-friction CrN/TiN coatings deposited using a hybrid sputtering technique. These multilayers exhibit friction coefficients $\mu$ below 0.1 when tested in an atmosphere with a relative humidity of $\approx 25\%$, but $\mu$ ranges between 0.6 and 0.8 upon decreasing the humidity below 5%. Here we use first-principles calculations to study O and H adatom energetics on TiN and CrN (001) surfaces. The diffusion barrier of H on TiN(001) is about half of the value on the CrN(001) surface, while both elements are more strongly bound on CrN. Based on these results we propose a mechanism for a water-based self-assembled nanobearing.
Bound States in the Vortex Core
Irena Knezevic
Zoran Radovic
The quasiparticle excitation spectrum of isolated vortices in clean layered d-wave superconductors is calculated. A large peak in the density of states in the "pancake" vortex core is found, in agreement with recent experimental data for high-temperature superconductors.
Local Investigation of the Electrical Properties of Grain Boundaries in Silicon
Jörg Palm
D. Steinbach
H. Alexander
We present the recent development of three related techniques for the local investigation of grain boundaries (GBs): grain boundary electron-beam-induced current (GB EBIC), grain boundary light-beam-induced current (GB LBIC) and local grain boundary photoconductance spectroscopy (GB PCS). Two grains which are separated by a common GB are ohmically connected to a current amplifier. In GB EBIC a focused electron beam and in GB LBIC a focused light beam of above band gap energy is scanned across the GB. At GBs with a two-dimensional coherent potential barrier a characteristic dark-bright signal is observed which is directly related to the recombination current through the boundary. By applying a small bias, the local attenuation of the potential barrier height as a function of the injection level can be determined. In GB PCS a beam of monochromatic subband gap light is used. By applying a bias, the change in the GB barrier height due to the excitation of carriers into the GB trap states can be detected by the change in the over-barrier current. By varying the light energy, a section of the local distribution of states in the gap can be determined.
Quantum Transport in Bridge Systems
Santanu K. Maiti
We study the electron transport properties of some molecular wires and an unconventional disordered thin film within the tight-binding framework using the Green's function technique. We show that electron transport is significantly affected by the quantum interference of electronic wave functions, the molecule-to-electrode coupling strengths, the length of the molecular wire and the disorder strength. Our model calculations provide physical insight into the behavior of electron conduction across a bridge system.
Study of Relaxed Si0.7Ge0.3 Buffers Grown on Patterned Silicon Substrates by Raman Spectroscopy
G. Wöhl
Erich Kasper
M. Klose
H. Kibbel
We have carried out micro-Raman spectroscopy to characterize the Ge concentration and strain in relaxed Si0.7Ge0.3 buffer layers grown on patterned silicon substrates. Different epitaxial layer stacks, annealing steps and Ge compositions were used to achieve different degrees of relaxation in the strain-relaxed buffer layers. A detailed consideration of the Raman frequencies and the relative intensities of the various phonon modes can be used to monitor composition and strain. We show that this method is also suitable for a device layer stack with a strained Si layer on top of the relaxed SiGe buffer layer, and we compare it with another proposal for determining the Ge content using the Si-Si LO Raman frequencies of the Si cap layer and the relaxed SiGe layer. The potential and the accuracy of the various methods in comparison with high-resolution X-ray diffraction measurements are discussed. Finally, we demonstrate that micro-Raman can be used as an in-line monitoring tool to determine the uniformity of Ge concentration and strain with a lateral resolution of 1-2 µm.
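As background to this procedure, the Si-Si LO phonon frequency in a Si1-xGex layer is commonly parametrized by linear shifts with Ge content and in-plane strain,
$$\omega_{\mathrm{Si\text{-}Si}}(x,\varepsilon_{\parallel}) \approx \omega_0 - \Delta_1\, x - \Delta_2\, \varepsilon_{\parallel},$$
where $\omega_0$ is the bulk Si value and the coefficients $\Delta_1$ and $\Delta_2$ must be taken from a calibration; they are placeholders here, not values from this paper. Measuring the Si-Si mode both in the strained Si cap and in the nominally relaxed SiGe buffer then gives two such relations from which composition and strain can be extracted.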
Effect of Doping on the Thermoelectric Properties of Thallium Tellurides Using First Principles Calculations
Philippe Jund
Xiaoma Tao
R. Viennois
J.-C. Tédenac
We present a study of the electronic properties of the Tl5Te3, BiTl9Te6 and SbTl9Te6 compounds by means of density functional theory based calculations. The optimized lattice constants of the compounds are in good agreement with the experimental data. The band gaps of the BiTl9Te6 and SbTl9Te6 compounds are found to be equal to 0.589 eV and 0.538 eV, respectively, and are in agreement with the available experimental data. To compare the thermoelectric properties of the different compounds we calculate their thermopower using Mott's law and show, as expected experimentally, that the substituted tellurides have much better thermoelectric properties than the pure compound.
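The thermopower estimate referred to above is based on Mott's formula for degenerate carriers, which in its usual form reads (up to the sign convention for the carrier charge):
$$S = \frac{\pi^2}{3}\,\frac{k_B^2 T}{e}\left.\frac{\partial \ln \sigma(E)}{\partial E}\right|_{E=E_F},$$
where $\sigma(E)$ is the energy-dependent conductivity evaluated from the calculated band structure at the Fermi level $E_F$.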
Canted Spiral Magnetic Order in Layered Systems
Marat A. Timirgazin
Vitaliy Gilmutdinov
A. K. Arzhnikov
Formation of a canted spiral magnetic order is studied in the framework of a mean-field approximation of the Hubbard model. It is revealed that this magnetic state can be stabilized under certain conditions in layered systems with a relatively small interplane electron hopping. Example of an experimentally observed magnetic structure of La$_{2-x}$Sr$_x$CuO$_4$ is considered. It is shown that the canting magnetic order can be described in terms of a simple non-relativistic band magnetism.
High-Strength Silicon Carbides by Hot Isostatic Pressing
Sunil Dutta
Silicon carbide has strong potential for heat engine hardware and other high-temperature applications because of its low density, good strength, high oxidation resistance, and good high-temperature creep resistance. Hot isostatic pressing (HIP) was used to produce alpha and beta silicon carbide (SiC) bodies with near-theoretical density, ultrafine grain size, and high strength at processing temperatures of 1900 to 2000 °C. The HIPed materials exhibited an ultrafine grain size. Furthermore, no phase transformation from beta to alpha was observed in HIPed beta-SiC. Both materials exhibited very high average flexural strength. It was also shown that alpha-SiC bodies without any sintering aids, when HIPed to high final density, can exhibit very high strength. Fracture toughness K_C values were determined to be 3.6 to 4.0 MPa·m^1/2 for HIPed alpha-SiC and 3.7 to 4.1 MPa·m^1/2 for HIPed beta-SiC. In the HIPed specimens the strength-controlling flaws were typically surface related. In spite of improvements in material properties such as strength and fracture toughness through elimination of the larger strength-limiting flaws and through grain-size refinement, HIPing had no effect on the Weibull modulus.
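For context, the Weibull modulus m mentioned above characterizes the scatter of the flexural strength through the two-parameter Weibull distribution (generic form, not fitted values from this work):
$$P_f(\sigma) = 1 - \exp\!\left[-\left(\frac{\sigma}{\sigma_0}\right)^{m}\right],$$
where $P_f$ is the cumulative failure probability at stress $\sigma$ and $\sigma_0$ is the characteristic strength; a larger m corresponds to a narrower strength distribution.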
Terahertz Emission from Phosphorus Centers in SiGe and SiGe/Si Semiconductors
Sergey Pavlov
Heinz-Wilhelm Hübers
N.V. Abrosimov
Valery Shastin
Terahertz-range photoluminescence from silicon-germanium crystals and superlattices doped with phosphorus has been studied at low temperature under optical excitation by radiation from a mid-infrared CO2 laser. SiGe crystals with a Ge content between 0.9 and 6.5%, doped with phosphorus at a concentration optimal for silicon laser operation, do not exhibit terahertz gain. In contrast, terahertz-range gain of ~2.3-3.2 cm^-1 has been observed for donor-related optical transitions in Si/SiGe strained superlattices at pump intensities above 100 kW/cm².
Two-Stage Hydrogen Compression Using Zr-Based Metal Hydrides
Evangelos D. Koultoukis
Sofoklis Makridis
Daniel Fruchart
Athanasios K. Stubos
Zr-based AB2 Laves-phase-type alloys containing the same type of A and B metals have been prepared from pure elements by melting and subsequent re-melting under an argon atmosphere using an HF-induction levitation furnace. The alloys were characterized by powder X-ray diffraction (XRD) measurements and SEM/EDX analyses. Systematic PCI (pressure-composition isotherm) measurements have been recorded at 20 and 90 °C using a high-pressure Sieverts-type apparatus. The purpose of this study is to find a series of alloys that promptly form metal hydrides (MH) with suitable properties in order to build an MH-based hydrogen compressor operating between 20 and ~100 °C.
Ferromagnetism in the Highly-Correlated Hubbard Model
Alexander V. Zarubin
The Hubbard model with strong correlations is treated in the many-electron representation of Hubbard's operators. The regions of stability of saturated and non-saturated ferromagnetism in the n-U plane for the square and simple cubic lattices are calculated. The role of the bare density of states singularities for the magnetic phase diagram is discussed. A comparison with the results of previous works is performed.
Spontaneous Currents in Josephson Devices
Zoran Radović
Ljiljana Dobrosavljevic
B. Vujicic
The unconventional Josephson coupling in a ferromagnetic weak link between d-wave superconductors is studied theoretically. For a strong ferromagnetic barrier influence, the unconventional coupling, with ground-state phase difference across the link $0<\phi_{\rm gs}\leq \pi$, is obtained at small crystal misorientation of the superconducting electrodes, in contrast to the case of a normal-metal barrier, where it appears at large misorientations. In both cases, with decreasing temperature there is an increasing range of misorientations where $\phi_{\rm gs}$ varies continuously between 0 and $\pi$. When the weak link is part of a superconducting ring, this is accompanied by the flow of a spontaneous supercurrent, whose intensity depends (for a given misorientation) on the reduced inductance $l=2\pi LI_c(T)/\Phi_0$ and is non-zero only for $l$ greater than a critical value. For $l\gg 1$, another consequence of the unconventional coupling is the anomalous quantization of the magnetic flux.
Persistent Current in Metallic Rings and Cylinders
We explore the behavior of the persistent current and the low-field magnetic response in mesoscopic one-channel rings and multi-channel cylinders within the tight-binding framework. We show that the characteristic properties of the persistent current strongly depend on the total number of electrons $N_e$, the chemical potential $\mu$, the randomness and the total number of channels. The study of the low-field magnetic response reveals that only for one-channel rings with fixed $N_e$ can the sign of the low-field currents be predicted exactly, even in the presence of disorder. On the other hand, for multi-channel cylinders the sign of the low-field currents cannot be determined exactly, even in perfect systems with fixed $N_e$, as it depends significantly on the choices of $N_e$, $\mu$, the number of channels, the disorder configurations, etc.
Magnetocaloric Effect and Frustrations in One-Dimensional Magnets
Felix A. Kassan-Ogly
Alexey Igorevich Proshkin
In this paper, we investigated the magnetocaloric effect (MCE) in one-dimensional magnets with different types of ordering in the Ising, Heisenberg and XY models, as well as in the standard, planar and modified Potts models. Exact analytical solutions for the MCE as functions of the exchange parameters, the temperature, and the magnitude and direction of an external magnetic field are obtained. The temperature and magnetic-field dependences of the MCE in the presence of frustrations in the system are computed numerically in detail.
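The magnetocaloric quantities studied here are conventionally expressed through the isothermal magnetic entropy change obtained from the Maxwell relation (generic expressions, independent of the particular spin model):
$$\left(\frac{\partial S}{\partial H}\right)_T = \left(\frac{\partial M}{\partial T}\right)_H, \qquad \Delta S_M(T,H) = \int_0^{H}\left(\frac{\partial M}{\partial T}\right)_{H'}\, dH',$$
with the adiabatic temperature change following from $\Delta S_M$ and the heat capacity.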
Incommensurate Spin-Density Wave in Two-Dimensional Hubbard Model
A.V. Vedyayev
We consider the magnetic phase diagram of the two-dimensional Hubbard model on a square lattice. We take into account both spiral and collinear incommensurate magnetic states. The possibility of phase separation of spiral magnetic phases is taken into consideration as well. Our study shows that all the listed phases appear to be the ground state at certain parameters of the model. Relation of the obtained results to real materials, e.g. Cu-based high-temperature superconductors, is discussed.
Electric Field Driven Magnetic Domain Wall Motion in Iron Garnet Film
Alexey V. Nikolaev
Alexander P. Pyatakov
Elena P. Nikolaeva
Anatoly Konstantinovich Zvezdin
A room-temperature magnetoelectric effect was observed in epitaxial iron garnet films, appearing as magnetic domain wall motion induced by an electric field. Films grown on gadolinium-gallium garnet substrates with various crystallographic orientations were examined. The effect was observed in (210) and (110) films and was not observed in (111) films. Dynamic observation of the domain wall motion in 800 kV/cm electric field pulses gave domain wall velocities in the range 30-50 m/s. A similar velocity was achieved with magnetic field pulses of about 50 Oe.
Silicon Doped with Lithium and Magnesium from the Melt for Terahertz Laser Application
Natalia Nötzel
Helge Riemann
Martin Dressel
Silicon crystals doped with moderate concentrations of magnesium or lithium have been grown for application as optically pumped donor silicon lasers for the terahertz spectral region. The pedestal growth technique, combined with axially loaded dopant pills, enabled the manufacture of large silicon crystals with a homogeneous donor distribution in the range from 10^14 to 10^16 cm^-3, as required for intracenter silicon lasers. Terahertz-range photoluminescence from the grown crystals has been observed.
Simultaneous Localization of Electrons in Different Δ-Valleys in Ge/Si Quantum Dot Structures
Aigul Zinovieva
N. P. Stepina
A. V. Dvurechenskii
Detlev Gruetzmacher
In the present work the possibility of simultaneous localization of two electrons in $\Delta^{100}$ and $\Delta^{001}$ valleys in ordered structures with Ge/Si(001) quantum dots (QD) was verified experimentally by the electron spin resonance (ESR) method. The ESR spectra obtained for the ordered ten-layered QD structure in the dark show the signal corresponding to electron localization in Si at the Ge QD base edges in the $\Delta^{100}$, $\Delta^{010}$ valleys ($g_{zz}$=1.9985, $g_{in-plane}$=1.999). Light illumination causes the appearance of a new ESR line ($g_{zz}$=1.999) attributed to electrons in the $\Delta^{001}$ valley localized at the QD apexes. The observed effect is explained by the enhancement of electron confinement near the QD apex by Coulomb attraction to the photogenerated hole trapped in a Ge QD.
Theoretical Investigation of the Magnetic Order in FeAs
Lyudmila Dobysheva
The magnetic structure of the iron monoarsenide FeAs is studied using first-principles calculations. We consider collinear and non-collinear (spin-spiral wave) magnetic ordering and magnetic anisotropy. It is shown analytically that a triaxial magnetic anisotropy results in a sum of two spin-spiral waves with opposite wave-vector directions and different spin amplitudes, so that the magnetic moments in two perpendicular directions do not equal each other.
Fundamental Limitations of Half-Metallicity in Spintronic Materials
Alexander Solontsov
Zero-point spin fluctuations are shown to strongly influence the ground state of ferromagnetic metals and to impose limitations on the fully spin-polarized state assumed in half-metallic ferromagnets, which may affect their applications in spintronics. This phenomenon leads to low-frequency Stoner excitations and causes the strong damping and softening of magnons observed experimentally in magnetoresistive manganites.
Current Status of Graphene Transistors
Max Christian Lemme
This paper reviews the current status of graphene transistors as a potential supplement to silicon CMOS technology. A short overview of graphene manufacturing and metrology methods is followed by an introduction to macroscopic graphene field-effect transistors (FETs). The absence of an energy band gap is shown to result in severe shortcomings for logic applications. Possibilities to engineer a band gap in graphene FETs, including quantum confinement in graphene nanoribbons (GNRs) and electrically or substrate-induced asymmetry in double- and multi-layer graphene, are discussed. Graphene FETs are shown to be of interest for analog radio-frequency applications. Finally, novel switching mechanisms in graphene transistors that could lead to future memory devices are briefly introduced.
Functional-Integral Approach to the Investigation of the Spin-Spiral Magnetic Order and Phase Separation
A. G. Groshev
We investigate a two-dimensional single-band Hubbard model with nearest-neighbor hopping. We treat a commensurate collinear order as well as incommensurate spiral magnetic phases at finite temperature using a Hubbard-Stratonovich transformation with a two-field representation, and solve the problem in a static approximation. We argue that temperature dramatically influences the collinear and spiral magnetic phases and the phase separation in the vicinity of half-filling. The results suggest a possible interpretation of the unusual behavior of the magnetic properties of single-layer cuprates.
Mono- and Polycrystalline Silicon for Terahertz Intracenter Lasers
The performance of optically pumped terahertz silicon lasers with active media made from mono- and polycrystalline silicon doped with phosphorus has been investigated. The polycrystalline silicon samples consist of grains with a characteristic size distribution in the range from 50 to 500 µm. Despite significant changes of the principal phonon spectrum and increased scattering of phonons at grain boundaries, the silicon laser made from polycrystalline material has a laser threshold and an operating temperature only slightly worse than those of monocrystalline silicon lasers.
Conditions for the Spin-Spiral State in Itinerant Magnets
The spin-spiral (SS) type of magnetization is studied with the Hubbard model. Consideration of noncollinearity of the magnetic moments results in a phase diagram which consists of regions of the SS and paramagnetic states depending on the number of electrons and the parameter U/t (U is the Hubbard repulsion, and t is an overlap integral). A possibility of stabilization of the SS state with three nonzero components of magnetic moment is considered.
Location-Dependent Textures of the Human Dental Enamel
L. Raue
Helmut Klein
Dental enamel is the most highly mineralised and hardest biological tissue in the human body [1]. Dental enamel is made of hydroxylapatite (HAP), Ca5(PO4)3(OH), which is hexagonal (6/m). The lattice parameters are a = b = 0.9418 nm and c = 0.6875 nm [1]. Although HAP is a very hard mineral, it can be dissolved easily by lactic acid produced by bacteria, in a process known as enamel demineralization. The direct consumption of acid (e.g. citric, lactic or phosphoric acid in soft drinks) can harm the dental enamel in a similar way. These processes can damage the dental enamel: it is dissolved completely and a cavity occurs. The cavity must then be cleaned and filled. Many dental filling materials exist, such as gold, amalgam, ceramics or polymeric materials. After filling, other dangers can occur: the mechanical properties of the materials used to fill cavities can differ strongly from those of the dental enamel itself. In the worst case, the filling of a tooth can damage the enamel of the opposite tooth during chewing if the interaction of enamel and filling is not equivalent, so that a harder filling can abrade the softer enamel of the healthy tooth on the opposite side. This could be avoided if the anisotropic mechanical properties of dental enamel were known in detail; another filling material could then be sought or fabricated as an equivalent opponent for the dental enamel with equal properties. To find such a material, one first has to characterise the properties of dental enamel in detail for the different types of teeth (incisor, canine, premolar and molar). This is done here, as an example, for a human incisor tooth by texture analysis with the program MAUD from 2D synchrotron transmission images [2,3,4].
Formation of Substructure and Texture in Dual-Phase Steels due to Thermal Treatment
M. Masimov
Microstructure and texture formation in DP steels obtained by thermal treatment at temperatures of 780 °C, i.e. between Ac1 and Ac3, and at 900 °C, i.e. above Ac3, followed by different cooling techniques, were studied by means of X-ray and electron diffraction techniques. The formation of the different structure constituents, as well as substructure parameters such as the block size and the misorientation between blocks induced by the thermal treatment, was analyzed in detail. Various methods for measuring the texture of the bcc phase (conventional X-ray methods, high-energy synchrotron radiation and EBSD) were applied in order to investigate their influence on the results. Besides texture heredity, a softening of the initial texture components induced by cold rolling, and of the related anisotropy of the steels, due to the thermal treatment was estimated.
Effect of Preparation-Induced Surface Morphology on the Stability of H-Terminated Si(111) and Si(100) Surfaces
H. Angermann
Henrion W
Rebien M
Röseler A
The non-destructive and surface-sensitive surface photovoltage (SPV) technique, as well as ultraviolet-visible (UV-VIS) and Fourier-transform infrared (FTIR) spectroscopic ellipsometry (SE), were employed to investigate the influence of the preparation-induced surface morphology of wet-chemically treated silicon wafers on the stability of the surface passivation against native oxidation in clean-room air. It was shown that the progression of the initial oxidation phase on wet-chemically prepared H-terminated surfaces strongly depends on the remaining surface microroughness and interface state density. The best results were obtained on atomically flat NH4F-treated Si(111) surfaces prepared in an N2 atmosphere without rinsing, characterised by a very low initial interface state density Dit,min < 2 × 10^10 cm^-2 eV^-1 and very long initial oxidation phases of up to 48 h.
High Temperature Mechanical Loss Spectrum of 3Y-TZP Zirconia Reinforced with Carbon Nanotubes or Silicon Carbide Whiskers
Claudia Ionascu
Robert Schaller
The high-temperature plasticity of fine-grained ceramics (ZrO2, Al2O3, etc.) is usually associated with a grain-boundary sliding process. The aim of the present research is therefore to improve the high-temperature mechanical strength of polycrystalline zirconia (3Y-TZP) through the insertion of multiwalled carbon nanotubes (CNTs) or silicon carbide whiskers (SiCw), which are expected to pin the grain boundaries. The effect of these nano-sized particles on grain-boundary sliding has been studied by mechanical spectroscopy.
Influence of Parameters during Induction Heating Cycle of 7075 Aluminium Alloys with RAP Process
G. Vaneetveld
Ahmed Rassili
H. V. Atkinson
This paper was presented at 10th International Conference on Semi-Solid Processing of Alloys and Composites, S2P 2008, September 16th -18th, 2008, Aachen, Germany and Liège, Belgium and published as Solid State Phenomena, 2008, 141-143, pp. 719-724. The final published version is available at www.scientific.net, Doi: 10.4028/www.scientific.net/SSP.141-143.719. Thixoforging involves shaping alloys with a globular microstructure in the semi-solid state. To reach this kind of material, the Recrystallisation and Partial Melting (RAP) process can be used to obtain a globular microstructure from extruded material with liquid penetrating the recrystallised boundaries. Induction heating is used to apply the RAP process to slugs. One of the benefits of using this method of heating is the fast heating rate (20°C/s). This paper will help to improve heating parameters by showing their influence on 7075 aluminium alloy recrystallisation. These parameters are the heating rate; heating frequencies-power; presence or not of protective gas; position of the slug in the inductor; energy stored inside the slug; oxide layer on the slug side; chamfer of the slug upper corner.
Laser welding of SSM Cast A356 aluminium alloy processed with CSIR-Rheo technology
Rehan Akhter
L Ivanchev
C Van Rooyen
Herman Burger
Copyright: 2006 Trans Tech Publications, Switzerland. Samples of aluminium alloy A356 were manufactured by semi-solid metal HPDC technology, developed recently at the CSIR, Pretoria. They were butt welded in the as-cast condition using an Nd:YAG laser. The base metal and weld microstructures are presented. The effect of different heat treatments on the microstructure and mechanical properties of the welds was investigated. It was found that the fine dendritic structure of the weld metal contributed to equalizing the mechanical properties of the joint.
The Natural and Artificial Aging Response of Semi-Solid Metal Processed Alloy A356
Heinrich Möller
Gonasagren Govender
Stumpf Waldo
10th International Conference on Semi-Solid Processing of Alloys and Composites (S2P), Aachen, Germany and Liège, Belgium, 16-18 September 2008. Copyright: 2008 Trans Tech Publications. The heat treatment cycles that are currently applied to semi-solid processed components are mostly those in use for dendritic casting alloys. These heat treatments are not necessarily the optimum treatments for non-dendritic microstructures. For rheocast alloy A356, it is shown that natural aging prior to artificial aging causes the time to peak hardness to be longer than when only artificial aging is used. Furthermore, a hardness plateau is maintained during artificial aging at 180 °C between 1 and 5 hours without any prior natural aging. A natural aging period as short as 1 hour results in a hardness peak (rather than a plateau) being reached during artificial aging after 4 hours at 180 °C.
The Influence of Heat Treatments for Laser Welded Semi Solid Metal Cast A356 Alloy on the Fracture Mode of Tensile Specimens
G Kunene
LH Ivanchev
Presented at the 10th International Conference on Semi-Solid Processing of Alloys and Composites (S2P), Aachen, Germany and Liège, Belgium, 16-18 September 2008. The CSIR rheo-process was used to prepare the aluminium A356 SSM slurries, and thereafter plates (4 × 80 × 100 mm³) were cast using a 50-ton Edgewick HPDC machine. Plates in the as-cast, T4 and T6 heat treatment conditions which had passed radiography were then butt laser welded. It was found that the pre-weld as-cast, T4 and post-weld T4 heat-treated specimens fractured in the base metal. However, the pre-weld T6 heat-treated specimens were found to have fractured in the heat-affected zone (HAZ).
Evaluation of Surface Chemical Segregation of Semi-Solid Cast Aluminium Alloy A356
Copyright: 2008 Trans Tech Publications. In order for SSM forming to produce homogeneous properties in a casting, it is important that there is a uniform distribution of the primary grains. Besides producing a sound casting free of porosity, the amount of liquid segregation must be minimized. The surface liquid segregation phenomenon was investigated on high pressure die cast (HPDC) A356 alloy. SSM slurries were prepared using the CSIR Rheocasting System, and plates of 4 mm × 80 mm × 100 mm were high pressure die cast. The chemical composition depth profile from the surface was determined using optical emission spectroscopy (OES) and glow discharge optical emission spectroscopy (GDOES). It was found that a 0.5-1.0 mm eutectic-rich layer existed on the surface of the alloy. The thickness of the segregation layer depended on the location on the casting: it was insignificant close to the gate of the casting but was relatively consistent over most of the plate. Although this segregation layer did not affect the bulk mechanical properties, hardness tests did reveal that this region had significantly higher hardness values, which may have a considerable impact on the fatigue properties.
Investigation of the Primary Phase Segregation during the Filling of an Industrial Mold with Semi-Solid A357 Aluminum
Frédéric Pineau
Geneviève Simard
Nanoscale surface physics with local probes: Electronic bandstructure of a two-dimensional self-assembled adatom superlattice
W. D. Schneider
Structure Analysis of Nanocrystalline MgO Aerogel Prepared by Sol-Gel Method
Janusz J. Malinowski
Grzegorz Dercz
Lucjan Pająk
Wojciech Jakub Pudlo
Wet gel obtained by the sol-gel technique was dried in supercritical CO2 to prepare a hydrated form of magnesium oxide. Calcination at 723 K under vacuum yielded a nanocrystalline MgO aerogel. Structural studies were performed by X-ray diffraction and by scanning and transmission electron microscopies. The electron microscopy images reveal a rough, unfolded and ramified structure of the solid skeleton. The specific surface area SBET was equal to 238 m²/g. The X-ray pattern reveals the broadened diffraction lines of periclase, the only crystalline form of magnesium oxide. The gamma crystallite size distribution was determined using the FW 1/5, 4/5 M method proposed by R. Pielaszek. The obtained particle size parameters, the mean crystallite size and σ (a measure of polydispersity), are equal to 6.5 nm and 1.8 nm, respectively, whereas the average crystallite size estimated by the Williamson-Hall procedure was equal to 6.0 nm. The Rwp and S fitting parameters obtained in the Rietveld refinement, equal to 6.62% and 1.77 respectively, seem satisfactory given the nanosize of the MgO crystallites and the presence of an amorphous phase.
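The Williamson-Hall estimate mentioned above separates size and strain broadening of the diffraction lines through a linear fit of the standard form (assuming additive broadening contributions):
$$\beta_{hkl}\cos\theta_{hkl} = \frac{K\lambda}{\langle D\rangle} + 4\,\varepsilon\,\sin\theta_{hkl},$$
where $\beta_{hkl}$ is the instrument-corrected line breadth, $K \approx 0.9$ a shape factor, $\lambda$ the wavelength, $\langle D\rangle$ the volume-averaged crystallite size and $\varepsilon$ the microstrain.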
Opportunities and Challenges for Use of SSM Forming in the Aerospace Industry
HN Chou
Copyright: 2006 Trans Tech Publications Ltd. SSM is now considered an established technology for producing high-integrity near-net-shape components, for the automotive industry in particular. Although it is used extensively in the automotive industry, very little attention has been given to aerospace applications. SSM processing does demonstrate the potential to replace certain hogout components in commercial aircraft, with the main aim of reducing costs while maintaining high strength-to-weight ratios. Achieving this will require developing processes to reliably cast components with consistent properties that meet aerospace requirements. Since SSM forming is a relatively new process, materials property databases for components produced using this technique are very limited. One of the major challenges is the generation of a database of material properties to assist design engineers in the design of components, as well as to assess life expectancy and to develop maintenance schedules.
Mechanism of Shunting of Nanocrystalline Silicon Solar Cells Deposited on Rough Ag/ZnO Substrates
H. B. T. Li
R. H. Franken
Robert L. Stolk
R.E.I. Schropp
Using a textured substrate is a basic requirement for light trapping in a thin-film solar cell. In this contribution, the structure of μc-Si:H n-i-p solar cells developed on a rough Ag/ZnO-coated glass substrate is carefully studied in order to understand how the solar cell properties, especially the yield of working cells, depend on the substrate surface morphology. From cross-sectional transmission electron microscopy (TEM) images it is clear that cells developed on substrates with large tilted Ag crystal grains contain pinholes that result in short-circuiting of the entire device. The formation of these pinholes is due to the inability to conformally cover the sub-micron-sized cavities that are created by these Ag grains. Controlling the Ag deposition temperature is found to be essential for obtaining a well-performing μc-Si:H n-i-p cell.
Incorporation, Diffusion and Agglomeration of Carbon in Silicon
P. Lavéant
Peter Werner
G. Gerth
U.M. Gösele
The incorporation of carbon into silicon has gained interest since, at high concentration, carbon can (i) reduce the stresses of Si/SiGe heterostructures and (ii) suppress the enhanced diffusion of dopants such as boron. Such properties have initiated, for example, the development of new devices such as the SiGeC hetero-bipolar transistor. Unfortunately, carbon incorporation in silicon is difficult to achieve owing to its low solubility and the lattice stresses involved. This paper demonstrates that the growth of carbon-rich silicon layers by molecular beam epitaxy (MBE) shows a strong temperature dependence and a complex structural "phase diagram". We distinguish two growth mechanisms: fully substitutional incorporation at around 450°C and, additionally, segregation above 600°C. Pseudomorphic 100 nm thick Si layers have been grown with a content of 2% C, and for a C incorporation of 5% epitaxial layers could be generated that include defects such as twins. Thermodynamically, such layers are characterized by point defect concentrations far from equilibrium. Therefore, the carbon diffusion cannot be described by a simple, self-interstitial-related mechanism alone; vacancies and interstitial oxygen also have to be taken into account. Experiments with antimony, a vacancy-diffusing dopant, prove the strong influence of vacancies in carbon-rich samples. Concerning the complex interaction of point defects, we discuss the possibility of an additional mechanism, namely the Frank-Turnbull mechanism and/or the precipitation of silicon carbide as a vacancy source. We also discuss the co-precipitation of oxygen and carbon and explain this affinity by an exchange of point defects and a volume compensation.
AlGaN/GaN based heterostructures for MEMS and NEMS applications
Volker Cimalla
Claus-Christian Röhlig
Vadim Lebedev
Matthias Hein
With the increasing requirements for microelectromechanical systems (MEMS) regarding stability, miniaturization and integration, novel materials such as wide-band-gap semiconductors are receiving more attention. The outstanding properties of group III-nitrides offer many more possibilities for the implementation of new functionalities, and a variety of technologies are available to realize group III-nitride-based MEMS. In this work we demonstrate the application of these techniques to the fabrication of full-nitride MEMS. This includes a novel actuation and sensing principle based on the piezoelectric effect, employing a two-dimensional electron gas confined in AlGaN/GaN heterostructures as an integrated back electrode. Furthermore, the actuation of flexural and longitudinal vibration modes in resonator bridges is demonstrated, as well as their sensing properties.
Vector Error Correction Model
A vector error-correction model (VECM) is, in essence, a vector autoregression (VAR) in which the variables themselves are not covariance stationary but their first differences are; equivalently, it is a VAR restricted so as to build the long-run (cointegrating) relationships among the I(1) variables into the dynamics. The VAR itself, developed by Christopher Sims in the early 1980s, treats all variables as endogenous and is written in terms of an evolving state vector and a vector of shocks, with the roots of the autoregressive lag polynomial assumed to lie outside the unit circle so that the system would be stationary in the absence of the stochastic trends.

The motivation comes from the spurious-regression problem: the classical time-series regression model requires all variables to be I(0), and an arbitrary linear combination of I(1) series will itself be I(1) unless the series are cointegrated, which is an important result. If the residuals of the long-run (levels) regression turn out to be stationary, the series are cointegrated and a short-run error-correction model (ECM) can be derived. When the variables in y_t are all I(1), the differenced terms in the model are stationary, leaving only the error-correction term to carry the long-run stochastic trends; error-correction-based cointegration tests therefore examine whether the coefficient on the error-correction term is equal to zero. With three or more variables, more than one cointegrating relation with stationary errors may exist, so the cointegrating rank has to be established, typically with the Johansen cointegration test (in standard unit-root pre-testing output, the Rho and Tau columns report the test statistics). Critical values of the limiting distributions of the cointegrating-rank statistics have also been tabulated for VECMs with weakly exogenous I(1) variables, and tests have been proposed for the cointegration rank when a known change point may shift the cointegrating vector. Horvath and Watson note that many economic models imply that ratios, simple differences or "spreads" of variables are I(0); in such models the cointegrating vectors are composed of 1's, 0's and -1's and contain no unknown parameters.

Several methodological strands recur in this literature. Hansen and Seo study a two-regime threshold-cointegration VECM with a single cointegrating vector and a threshold effect in the error-correction term; their simulations suggest that the proposed test has good power. Liao and Phillips observe that model selection and post-model-selection inference present well-known challenges in empirical econometric research, and that these issues are particularly acute in cointegrated systems, where multiple interconnected decisions can materially affect the form of the model and its interpretation; they show how the researcher can begin with a single unrestricted model and carry out model selection or model averaging automatically and in a computationally efficient manner, applying the method to a large UK macroeconomic model. Box-Tiao's (1977) canonical correlation method has been examined as an alternative to likelihood-based inference for VECMs. Related approaches include ARDL/bounds-testing equilibrium-correction models, Markov-switching error-correction models and Bayesian VARs, nonlinear regime-dependent VECMs (used, for example, in biodiesel price studies), Bayesian hierarchical specifications that pull similar coefficients towards a common distribution, and global VARs (GVARs), which offer multiple potential channels for the international transmission of macroeconomic and financial shocks.
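In compact notation, the model discussed throughout this section takes the standard textbook form
$$\Delta y_t = c + \Pi\,y_{t-1} + \sum_{i=1}^{p-1}\Gamma_i\,\Delta y_{t-i} + \varepsilon_t, \qquad \Pi = \alpha\beta',$$
where $y_t$ is the vector of I(1) variables, the columns of $\beta$ are the cointegrating vectors, $\alpha$ contains the adjustment (loading) coefficients, the $\Gamma_i$ capture the short-run dynamics, and the rank of $\Pi$ equals the number of cointegrating relations determined, for example, by the Johansen trace test.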
Structural versions of these models (SVECMs) identify shocks by imposing economic restrictions on the econometric model; both the choice of the econometric model and the choice of the set of restrictions can affect the point estimates and standard errors of the impulse responses. It has been shown that the Pagan and Pesaran method recovers the structural shocks with permanent effects identically to the Gonzalo and Ng and KPSW methods, a result illustrated in the context of Lettau and Ludvigson's consumption model and of KPSW's six-variable model.

Applications of VECMs are correspondingly broad: the impact of monetary policy on the economy of the United Kingdom; evidence of a bi-directional long-run relationship between stock prices and dividends, robust to different combinations of macroeconomic variables in six-dimensional systems and across two subperiods, with the VECM consistently outperforming the unrestricted VAR in forecasting ability; a structural VECM (SVECM) study of Malaysia; a vector error-correction forecasting model of the U.S. economy (Working Paper 1998-008C); electricity demand analysis for Mexico by Chang and Martinez-Chombo, who specify and estimate a double-log demand equation using cointegration and error-correction models with time-varying parameters; the unemployment-inflation relationship in the Philippines over 1980-2006 (the negative association known as the Phillips curve, first pointed out by William Phillips in 1958); the nexus between domestic investment, exports, imports and economic growth for Brazil over 1970-2017; the short- and long-run relationship between capital formation and economic growth in India; a vector correction model of economic growth and unemployment in major European countries related to Okun's law (Zagler); forecasting of construction material prices, where VEC models address a gap in the literature by capturing both short- and long-run price movements; structural vector error-correction models of price time series used to detect bottleneck stages within a marketing channel; the joint dynamics of capacity and passenger demand; the euro exchange rate, estimated as a VAR with cointegrating vectors; cointegration-based pairs-trading strategies; and an application to PGAS, AKRA and PTT PCL stock data over January 2010 - January 2019, motivated by the importance of oil and gas as a commodity. Related work notes that emerging economies still face the need to improve economic growth and contrasts the finance-led growth (FLG) and export-led growth (ELG) hypotheses; other studies report that the responses of Islamic banking financing to changes in Islamic banking financing, Islamic banking funding and profit per asset are positive, examine the effect of gold prices on stock market behaviour (investment in the stock market being long term in nature), or perform unit-root tests on YUN, GDP, ED, FDI, PI, LR and POP to assess stationarity before estimation. A recurring practical question is how to interpret VEC and VAR coefficients, for example when the coefficient of the lagged error-correction term is statistically significant at the 5% level and the results indicate causality running from the independent to the dependent variable; causality tests are commonly used alongside the estimation.

On the software side, in Python's statsmodels a VAR model is created from an ndarray of homogeneous or structured dtype, and when a structured or record array is passed the class uses the supplied variable names. In MATLAB's Econometrics Toolbox a VAR is represented by a varm object and can be converted to a VEC model (tasks that used the older vgx functions can be converted to the newer functionality); by default, estimate imposes the constraints of the H1 Johansen VEC model form by removing the cointegrating-trend and linear-trend terms, and parameter exclusion from estimation is equivalent to imposing equality constraints to zero. Y = filter(Mdl,Z,Name,Value) accepts name-value options, for example 'X' for exogenous predictor data and 'Scale',false to refrain from scaling the disturbances by the lower-triangular Cholesky factor of the model innovations covariance matrix, while infer assumes that the last rows of each series occur at the same time and horizontally concatenates X to Y when a regression component is present. Similar functionality is available in SAS (e.g. the LAGMAX= option, with PROC IML used to compute variances). A minimal Python example is sketched below.
Analisa Keterkaitan Pengeluaran Pemerintah dan Produk Domestik Bruto di Indonesia: Pendekatan Vector Error Correction Model (VECM) di Indonesia. etc) in the CE (cointegration equation) and the VAR. 90 Short-run influences of lagged futures -0. Modern Measurement Techniques for Testing Advanced Military Communications and Radars, 2nd Edition. for evolving (state) vector and vector of shocks. The VEC model addresses the problem of endogeneity because it assumes all the variables in the system are endogenous. In these models, cointegrating vectors are composed of l's, O's, and - l's and contain no unknown parameters. Abstract: Hansen and Seo (2002) outline procedures to test for threshold cointegration, and to estimate a bi-variate model. This study examines the connection between trade and economic growth in South Korea, where trade has been an important sector of the country's economy. model errors but also the interaction between model errors and initial errors because of the mathe-matical nature of the NFSV-tendency errors. Schneider2 1University of Exeter Business School, Department of Economics, Exeter, UK. Jamie Gascoigne. For the most accurate network measurements, vector error correction is employed, as discussed in Chapter 15. 5 dB(A) is potentially overstated. BITCOIN TIME SERIES REGRESSION MODEL. Accordingly, there are two renowned growth hypotheses in the current literature with regard to the two key drivers; one is finance-led growth (FLG) and the other is export-led growth (ELG). 3 Gibbs Sampling with Data Augmentation 17 2. Sargan; Alok Bhargava Econometrica, Vol. 1 The Model in Regression Form 20. The r1 bit is calculated by performing a parity check on the bit positions whose binary representation includes 1 in the first position. 2013, Vol 14, *o 2 31 of consumers. org/outbreaks/article/estimation-of-pneumonic-plague-transmission-in-madagascar-august-november-2017/ http://currents. Convert common tasks that use the vgx functions to the newer functionality. Digital Object Identifier (DOI) 10. , geopotential terms) can also lead to variations in B. Existing studies either overlook the internal profit dynamics of the sector for the sake of international developments or do not go beyond the application of descriptive statistics. Phillips and Moon (1999) have pointed out that nonsense7 and spurious regression phenomena apply to panel data models if the data happen to be nonstationary. Support Vector Machine Classification Support vector machines for binary or multiclass classification For greater accuracy and kernel-function choices on low- through medium-dimensional data sets, train a binary SVM model or a multiclass error-correcting output codes (ECOC) model containing SVM binary learners using the Classification Learner app. Quality of the economic model: The economic model is poor in content if §¶§¡1 z or §·1§ ¡1 y are 'large'. Before using the collected data, a façade correction is applied. Hello friends I will show how create a 3dsmax 3d class room model tutorial. These instruments have found wide application since the mid to late 1970's. 1 Spurious Regression The time series regression model discussed in Chapter 6 required all vari-ables to be I(0). Represent a vector autoregression (VAR) model using a varm object. When our model does no better than the null model then R 2 will be 0. 
Chapter 1 The Theoritical Bases of the Vector Autoregressive Approach The vector autoregressive model (VAR) is a statistical model which has been devloped by Christopher Sims in the beginning of 1980s. De Wet, Johannes H. We find that when cointegration analysis is undertaken properly, the naive random walk prediction can be out-performed for the US dollar, the British pound and the Japanese yen, but not for the Swiss franc. Time Series Analysis III. Published in Tijdschrift voor sociaalwetenschappelijk onderzoek van de landbouw, Vol. stock vector illustration 3d cigar images. Diamond Scientific Publication (Mokslinės leidybos deimantas) is a publisher of peer-reviewed, fully Open Access journals. , a = + (, an =. When using a structured or record array, the class will use the passed variable names. 3 Model Dynamics and the Unbiasedness Hypothesis. Their results ascertained that FDI in agriculture can increase the social welfare by creating employment opportunities for the unskilled labour in the host country due to the fact that agriculture is more labour intensive and requires less technical skills. 5 dB(A) is potentially overstated. etc) in the CE (cointegration equation) and the VAR. When economic restrictions are imposed, the econometric model is called a structural model. , August, 2005 Department of Economics, University of Hawaii and University of Hawaii Economic Research Organization. By default, estimate imposes the constraints of the H1 Johansen VEC model form by removing the cointegrating trend and linear trend terms from the model. The methodology relies on virtual-source responses retrieved through the application of seismic interferometry (SI). With Safari, you learn the way you learn best. The cointegrated dynamic ARDL model is estimated using ordinary least squares (OLS) and e ects of variables and their lags interpreted. Published in: 2008 International Conference on Computer Science and Software Engineering. A joint acquisition reconstruction paradigm for correcting inhomogeneity artifacts in MR echo planar imaging Joseph C. Behavior Analysts As Advocates for Autism Insurance Reform. Welcome to "Advanced Calibration Techniques for Vector Network Analyzers. ; Meulenberg, M. iii cipher the information and provide a printed or graphical presentation of the results. The results of prediction accuracy tests suggest that the general VEC model and the VEC model with dummy variables are both acceptable for forecasting construction economic indicators. 3 Model Dynamics and the Unbiasedness Hypothesis. One of the main drivers of growth in literature has been found to be electricity consumption. Jan 20, 2017 · Note that your last step is not the estimation of a single-equation ECM as in the Engle-Granger approach but of a VECM. Reported in this paper are results from a software suite developed to explore the parameters relating to façade amplification. Lag lengths can be chosen using model selection rules or by starting at a maximum lag length, say 4, and eliminating lags one-by-one until the t -ratio on the last lag becomes significant. We apply our methods to a large UK macroeconomic model. Random errors are non-repeatable measurement variations and are usually unpredictable. This was practiced a bit on each mission and would be used if communication is lost with the ground, to have an onboard method of updating the state vector. Chat dengan CS 1 Chat dengan CS 2 Jam Kerja IDTESIS 08. For verified definitions visit AcronymFinder. 
This is an important result as any arbitrary linear combination of I(1) series will be I(1) (unless the series are cointegrated). Correction: Roadway traffic crash prediction using a state-space model based support vector regression approach. The rest of the paper is structured as follows. Partly because of the expansionary monetary policies in many countries in recent decades, housing prices in many mega cities are very high and have been on a rising trend for years (although there have been times when prices have fallen). fluids Article A Correction and Discussion on Log-Normal Intermittency B-Model Christopher Locke 1,*, Laurent Seuront 2 and Hidekatsu Yamazaki 1 1 Department of Ocean Sciences, Tokyo University of Marine Science and Technology, 4-5-7 Konan, Minato-ku,. implies the existence of an error-correcting mechanism that prevents variables from deviating too far from their long-run equilibrium. Using this result, it is shown that the Pagan and Pesaran method can be used to recover the structural shocks with permanent effects identically to those from the Gonzalo and Ng and KPSW methods. the solutions were analyzed using the epa food recovery hierarchy — which prioritizes prevention first, then recovery, and finally recycling — as a starting point. The standard vector analysis method recommended by the American National Standards Institute was adopted in this study to evaluate the effect of astigmatism correction. 611522; Additional Document Info. $$ R^2 = 1 - \frac{Sum\ of\ Squared\ Errors\ Model}{Sum\ of\ Squared\ Errors\ Null\ Model} $$ R 2 has very intuitive properties. This allows for selecting a parsimonious model while still maintaining sufficient flexibility to control for sudden shifts in the parameters, if necessary. Most popular questions people look for before coming to this page. Color Error Diffusion with Generalized Optimum Noise Shaping Niranjan Damera-Venkata Brian L. 3 Model Dynamics and the Unbiasedness Hypothesis. The economic model is true if fi1 6= 0 n1£n1 ^fi2 = 0n2£n1. Aug 08, 2018 · This model characterizes the relationship between construction material prices and a set of relevant explanatory variables. I If the model is extended to 3 or more variables, more than one relation with stationary errors may exist. Convert the estimated VEC(1) model to its equivalent VAR(2) model representation. The Econometrics Toolbox should allow faculty to use MATLAB in un- dergraduate and graduate level econometrics courses with absolutely no pro-. Published in Tijdschrift voor sociaalwetenschappelijk onderzoek van de landbouw, Vol. For example, 'X',X,'Scale',false specifies X as exogenous predictor data for the regression component and refraining from scaling the disturbances by the lower triangular Cholesky factor of the model innovations covariance matrix. 21 Representations for the I(1) cointegrated model 2870 3. • Ontario, California 91764-4804. T1 - Fixed investment, household consumption, and economic growth. Asymptotic properties of estimates are derived and their features compared with the traditional likelihood ratio based approach. fluids Article A Correction and Discussion on Log-Normal Intermittency B-Model Christopher Locke 1,*, Laurent Seuront 2 and Hidekatsu Yamazaki 1 1 Department of Ocean Sciences, Tokyo University of Marine Science and Technology, 4-5-7 Konan, Minato-ku,. Learn the characteristics of vector autoregression models and how to create them. This helps us to provide you with a good user experience and also allows us to improve our website. 
So we can have single bit correction, but that's all. Specifically, we apply a vector error-correction model to assess if, and to what extent, capacity or passenger demand are fir. Welcome to "Advanced Calibration Techniques for Vector Network Analyzers. 7 Determination of trace statistic for 4d-VEC(1) model with three cointegrated relationships 138 Table 5. Stay ahead with the world's most comprehensive technology and business learning platform. Structural vector autoregressions 2898 4. Resolve the errors by making adjustments to prevent all intersecting modifiers. Error-correction is found to be strong until 2005; however, it substantially weakens in 2006 and 2007 due to market distorting policy measures such as blending obligations and norms requiring the use of rapeseed oil. Color Error Diffusion with Generalized Optimum Noise Shaping Niranjan Damera-Venkata Brian L. economic dataset 139. Working Paper 1998-008C by Richard G. Previous construction demand forecasting studies mainly focused on temporal estimating using national aggregate data. De Wet, Johannes H. Vector Error-Correction Model. 7747 (AS/WA) atau BBM 2B7D0DB6 Hari Libur & Tanggal Merah :: LIBUR. Reisman* Abstract: Global vector autoregressions (GVARs) have several attractive features: multiple potential channels for the international transmission of macroeconomic and financial shocks, a. Otherwise the model will be called good. In this case, the usual statistical results for the linear regression model hold. Luca Perregrini Vector Network Analyzer, pag. This step is done automatically by the varbasic command, but must be done explicitly after the var or svar commands. The result show that there is a bi-directional long-term relationship between stock prices and dividends, i. PHILLIPS Yale University, University of Auckland, University of Southampton, and Singapore Management University Model selection and associated issues of post-model selection inference present well known challenges in empirical econometric research. This example illustrates the use of a vector error-correction (VEC) model as a linear alternative to the Smets-Wouters Dynamic Stochastic General Equilibrium (DSGE) macroeconomic model, and applies many of the techniques of Smets-Wouters to the description of the United States economy. 2 (1) The two variables are designated as dependent ( y) and independent ( x). 4 Markov-Switching in the EC model 13 2 Estimation 15 2. 16 magdaniar hutabarat, 2017 pemodelan hubungan antara ihsg, nilai tukar dolar amerika serikat terhadap rupiah (kurs) dan inflasi dengan vector error correction model. | CommonCrawl |
npj quantum information
Towards the standardization of quantum state verification using optimal strategies
Xinhe Jiang (ORCID: orcid.org/0000-0002-1419-709X), Kun Wang, Kaiyi Qian, Zhaozhong Chen, Zhiyu Chen, Liangliang Lu, Lijun Xia, Fangmin Song, Shining Zhu & Xiaosong Ma (ORCID: orcid.org/0000-0002-0500-5690)
npj Quantum Information volume 6, Article number: 90 (2020)
Quantum devices for generating entangled states have been extensively studied and widely used. It is therefore necessary to verify that these devices truly work reliably and efficiently, as specified. Here we experimentally realize the recently proposed two-qubit entangled-state verification strategies using both local measurements (nonadaptive) and active feed-forward operations (adaptive) with a photonic platform. About 3283/536 copies (N) are required to achieve a 99% confidence to verify the target quantum state for the nonadaptive/adaptive strategies. These optimal strategies provide the Heisenberg scaling of the infidelity \({\it{\epsilon }}\) as a function of N (\({\it{\epsilon }}\sim N^{r}\)) with the parameter r = −1, exceeding the standard quantum limit with r = −0.5. We experimentally obtain the scaling parameters of r = −0.88 ± 0.03 and −0.78 ± 0.07 for the nonadaptive and adaptive strategies, respectively. Our experimental work could serve as a standardized procedure for the verification of quantum states.
Quantum states play an important role in quantum information processing1. Quantum devices for creating quantum states are building blocks for quantum technology. Being able to verify these quantum states reliably and efficiently is an essential step towards practical applications of quantum devices2. Typically, a quantum device is designed to output some desired state ρ, but imperfections in the device's construction and noise in its operations may cause the actual output state to deviate from it to some random and unknown states σi. A standard way to distinguish these two cases is quantum-state tomography3,4,5,6,7. However, this method is both time-consuming and computationally challenging8,9. Non-tomographic approaches have also been proposed to accomplish the task10,11,12,13,14,15,16,17, yet these methods make some assumptions either on the quantum states or on the available operations. It is then natural to ask whether there exists an efficient non-tomographic approach that avoids such assumptions.
The answer is affirmative. A quantum-state verification protocol checks the device's quality efficiently. Various strategies based on local measurements have been explored14,16,18,19. Some earlier works considered the verification of maximally entangled states20,21,22,23. In the context of hypothesis testing, optimal verification of the maximally entangled state is proposed in ref. 20. Under the independent and identically distributed setting, Hayashi et al.23 discussed the hypothesis testing of entangled pure states. In a recent work, Pallister et al.24 proposed an optimal strategy to verify non-maximally entangled two-qubit pure states under locally projective and nonadaptive measurements. The locality constraint induces only a constant-factor penalty over the nonlocal strategies. Since then, numerous works have been done along this line of research25,26,27,28,29,30,31, targeting different states and measurements. In particular, optimal verification strategies under local operations and classical communication have been proposed recently27,28,29, which exhibit better efficiency. We also remark related works by Dimić and Dakić32, and Saggio et al.33, in which they developed a generic protocol for efficient entanglement detection using local measurements and with an exponentially growing confidence vs. the number of copies of the quantum state.
In this work, we report an experimental two-qubit-state verification procedure using both optimal nonadaptive (local measurements) and adaptive (active feed-forward operations) strategies with an optical setup. Compared with previous works that merely minimize the number of measurement settings34,35,36, we also minimize the number of copies (i.e., coincidence counts (CCs) in our experiment) required to verify the quantum state generated by the quantum device. We perform two tasks–Task A and Task B. With Task A, we obtain a fitting infidelity and the number of copies required to achieve a 99% confidence to verify the quantum state. Task B is performed to estimate the confidence parameter δ and infidelity parameter ϵ vs. the number of copies N. We experimentally compare the scaling of δ-N and ϵ-N by applying the nonadaptive strategy24 and adaptive strategy27,28,29 to the two-qubit states. With our methods, we obtain a comprehensive judgment about the quantum state generated by a quantum device. The present experimental and data-analysis workflow may be regarded as a standard procedure for quantum-state verification.
Quantum-state verification
Consider a quantum device \({\cal{D}}\) designed to produce the two-qubit pure state
$$\left| {\Psi} \right\rangle = \sin \theta \left| {HH} \right\rangle + \cos \theta \left| {VV} \right\rangle ,$$
where θ ∈ [0, π/4]. However, it might work incorrectly and actually output independent two-qubit fake states σ1, σ2, ⋯, σN in N runs. The goal of the verifier is to determine the fidelity threshold of these fake states to the target state with a certain confidence. We remark that the state for θ = π/4 is the maximally entangled state and θ = 0 is the product state. As special cases of the general state in Eq. (1), all the analysis methods presented in the following can be applied to the verification of the maximally entangled state and the product state. The details of the verification strategies for the maximally entangled state and the product state are given in Supplementary Notes 1.C and 1.D. Previously, theoretical20,23,37 and experimental21 works have studied the verification of the maximally entangled state. Here we focus mainly on the verification of the non-maximally entangled state in the main text, which is more advantageous than the maximally entangled state in certain experiments. For instance, in the context of loophole-free Bell tests, non-maximally entangled states require lower detection efficiency than maximally entangled states38,39,40,41. The details and experimental results for the verification of the maximally entangled state and the product state are shown in Supplementary Notes 2 and 4. To realize the verification of our quantum device, we perform the following two tasks in our experiment (see Fig. 1):
Fig. 1: Illustration of quantum-state verification strategy.
a Consider a quantum device \({\cal{D}}\) designed to produce the two-qubit pure state |ψ〉. However, it might work incorrectly and actually output two-qubit fake states σ1, σ2,⋯, σN in N runs. For each copy σi, randomly chosen projective measurements {M1, M2, M3, ⋯} are performed by the verifier according to their corresponding probabilities {p1, p2, p3, ⋯}. Each measurement outputs a binary outcome, 1 for pass and 0 for fail. The verifier takes two tasks based on these measurement outcomes. b Task A gives the statistics on the number of copies required before finding the first fail event. From these statistics, the verifier obtains the confidence δA that the device outputs state |ψ〉. c Task B performs a fixed number (N) of measurements and makes statistics on the number of copies (mpass) passing the test. From these statistics, the verifier can judge with a certain confidence δB1/δB2 that the device belongs to Case 1 or Case 2.
Task A: Performing measurements on the fake states copy-by-copy according to the verification strategy and making statistics on the number of copies required before we find the first fail event. The concept of Task A is shown in Fig. 1b.
Task B: Performing a fixed number (N) of measurements according to verification strategy and making statistics on the number of copies that pass the verification tests. The concept of Task B is shown in Fig. 1c.
Task A is based on the assumption that there exists some ϵ > 0 for which the fidelity 〈Ψ|σi|Ψ〉 is either 1 or satisfies 〈Ψ|σi|Ψ〉 ≤ 1 − ϵ for all i ∈ {1, ⋯, N} (see Fig. 1b). Our task is to determine which is the case for the quantum device. To achieve Task A, we perform binary-outcome measurements from a set of available projectors to test the state. Each binary-outcome measurement {Ml,1 − Ml} (l = 1, 2, 3, ⋯) is specified by an operator Ml, corresponding to passing the test. For simplicity, we use Ml to denote the corresponding binary measurement. This measurement is performed with probability pl. We require the target state |Ψ〉 always passes the test, i.e., Ml|Ψ〉 = |Ψ〉. In the bad case (〈Ψ|σi|Ψ〉 ≤ 1 − ϵ), the maximal probability that σi can pass the test is given by24,25
$$\mathop {{\max }}\limits_{\left\langle {\Psi} \right.|\sigma _i{\mathrm{|}}\left. {\Psi} \right\rangle \le 1 - \epsilon } {\mathrm{Tr}}({\Omega} \sigma _i) = 1 - [1 - \lambda _2({\Omega} )]\epsilon : = 1 - {\Delta} _\epsilon ,$$
where Ω = \(\mathop {\sum}\nolimits_l {p_l} M_l\) is called a strategy, ∆ϵ is the probability that σi fails a test and λ2(Ω) is the second largest eigenvalue of Ω. Whenever σi fails the test, we know immediately that the device works incorrectly. After N runs, σi in the incorrect case can pass all these tests with probability at most [1 − [1 − λ2(Ω)]ϵ]^N. Hence, to achieve confidence 1 − δ, it suffices to conduct a number N of measurements satisfying24
$$N \ge \frac{{\ln \delta }}{{\ln [1 - [1 - \lambda _2({\Omega} )]\epsilon ]}} \approx \frac{1}{{[1 - \lambda _2({\Omega} )]\epsilon }}\ln \frac{1}{\delta }.$$
From Eq. (3), we can see that an optimal strategy is obtained by minimizing the second largest eigenvalue λ2(Ω) with respect to the set of available measurements. Pallister et al.24 proposed an optimal strategy for Task A, using only locally projective measurements. As no classical communication is involved, this strategy (hereafter labeled as Ωopt) is nonadaptive. Later, Wang et al.27, Yu et al.28, and Li et al.29 independently proposed the optimal strategy using one-way local operations and classical communication (hereafter labeled as \({\Omega} _{^{{\mathrm{opt}}}}^ \to\)) for two-qubit pure states. Furthermore, Wang et al.27 also give the optimal strategy for two-way classical communication. The adaptive strategy allows general local operations and classical communication measurements, and is shown to be more efficient than the strategies based on local measurements. Thus, it is important to realize the adaptive strategy in the experiment. We refer to Supplementary Notes 1 and 2 for more details on these strategies.
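To make Eq. (3) concrete, the short Python sketch below (our own illustration, not the authors' analysis code; the function name is ours) evaluates the required number of copies from the worst-case single-test failure probability ∆ϵ = [1 − λ2(Ω)]ϵ, using 1 − λ2(Ωopt) = 1/(2 + sin θ cos θ) for the nonadaptive strategy and 1 − λ2(Ω→opt) = 1/(2 − sin²θ) for the one-way adaptive strategy quoted later in the text.

```python
import numpy as np

def copies_required(theta, epsilon, delta, strategy="nonadaptive"):
    """Minimum N from Eq. (3): N >= ln(delta) / ln(1 - Delta_eps), where
    Delta_eps = [1 - lambda_2(Omega)] * epsilon is the worst-case probability
    that a 'bad' state fails a single test."""
    if strategy == "nonadaptive":
        one_minus_lambda2 = 1.0 / (2.0 + np.sin(theta) * np.cos(theta))
    elif strategy == "adaptive":  # one-way adaptive strategy
        one_minus_lambda2 = 1.0 / (2.0 - np.sin(theta) ** 2)
    else:
        raise ValueError("unknown strategy")
    delta_eps = one_minus_lambda2 * epsilon
    return int(np.ceil(np.log(delta) / np.log(1.0 - delta_eps)))

# Illustrative call with the state parameter and fitted infidelities reported below
print(copies_required(0.6419, 0.0034, 0.01, "nonadaptive"))
print(copies_required(0.6419, 0.0121, 0.01, "adaptive"))
```

These worst-case bounds are of the same order as, though not identical to, the experimentally observed 3283 and 536 copies reported below, which are obtained from the measured pass/fail statistics rather than from the bound.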
In reality, quantum devices are never perfect. Another practical scenario is to conclude with high confidence that the fidelity of the output states are above or below a certain threshold. To be specific, we want to distinguish the following two cases:
Case 1: \({\cal{D}}\) works correctly—∀i, 〈ψ|σi|ψ〉 > 1 − ϵ. In this case, we regard the device as "good".
Case 2: \({\cal{D}}\) works incorrectly—∀i, 〈ψ|σi|ψ〉 ≤ 1 − ϵ. In this case, we regard the device as "bad".
We call this Task B (see Fig. 1c), which is different from Task A, as the condition for "\({\cal{D}}\) works correctly" is less restrictive compared with that of Task A. It turns out that the verification strategies proposed for Task A are readily applicable to Task B. Concretely, we perform the nonadaptive verification strategy Ωopt sequentially in N runs and count the number of passing events mpass. Let Xi be a binary variable corresponding to the event that σi passes the test (Xi = 1) or not (Xi = 0). Thus, we have mpass = \(\mathop {\sum}\nolimits_{i = 1}^N {X_i}\). Assuming that the device is "good", then from Eq. (2) we can derive that the passing probability of the generated states is no smaller than 1 − [1 − λ2(Ωopt)]ϵ. We refer to Lemma 3 in the Supplementary Note 3.A for proof. Thus, the expectation of Xi satisfies \({\Bbb E}\)[Xi] ≥ 1 − (1 − λ2(Ωopt))ϵ ≡ µ. The independence assumption together with the law of large numbers then guarantee mpass ≥ Nµ, when N is sufficiently large. We follow the statistical analysis methods using the Chernoff bound in the context of state verification28,32,33,42, which is related to the security analysis of quantum key distributions43,44. We then upper bound the probability that the device works incorrectly as
$$\delta \equiv e^{ - N{\mathop{\rm{D}}\nolimits} \left( {\frac{{m_{{\mathrm{pass}}}}}{N}\parallel \mu } \right)},$$
where \({\mathop{\rm{D}}\nolimits} \left( {x\parallel y} \right): = x\log _2\frac{x}{y} + (1 - x)\log _2\frac{{1 - x}}{{1 - y}}\) is the Kullback–Leibler divergence. That is to say, we can conclude with confidence δB1 = 1 − δ that \({\cal{D}}\) belongs to Case 1. Conversely, if the device is "bad", then using the same argument we can conclude with confidence δB2 = 1 − δ that \({\cal{D}}\) belongs to Case 2. Please refer to the Supplementary Note 3 for rigorous proofs and arguments on how to evaluate the performance of the quantum device for these two cases.
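A minimal sketch of the confidence evaluation in Eq. (4) is given below; it is our own illustration rather than the authors' analysis code. Note that we evaluate the Kullback–Leibler divergence with the natural logarithm so that it combines directly with the exponential in Eq. (4); this choice of base is our assumption.

```python
import numpy as np

def kl_divergence(x, y):
    """D(x || y) between Bernoulli(x) and Bernoulli(y); natural logarithm assumed
    here so that delta = exp(-N * D) as in Eq. (4)."""
    if x == 1.0:
        return np.log(1.0 / y)
    if x == 0.0:
        return np.log(1.0 / (1.0 - y))
    return x * np.log(x / y) + (1.0 - x) * np.log((1.0 - x) / (1.0 - y))

def confidence_delta(m_pass, N, mu):
    """delta = exp(-N * D(m_pass/N || mu)) of Eq. (4); 1 - delta is the confidence
    once m_pass/N lies on the far side of mu from the hypothesis being rejected."""
    return float(np.exp(-N * kl_divergence(m_pass / N, mu)))

# mu for the nonadaptive strategy: mu = 1 - [1 - lambda_2(Omega_opt)] * epsilon
theta, epsilon = 0.6419, 0.006
mu = 1.0 - epsilon / (2.0 + np.sin(theta) * np.cos(theta))
```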
To perform Task B with the adaptive strategy \({\Omega} _{{\mathrm{opt}}}^ \to\), we record the number of passing events mpass = \(\mathop {\sum}\nolimits_{i = 1}^N {X_i}\). If the device is "good", the passing probability of the generated states is no smaller than µs ≡ 1 − [1 − λ4(\({\Omega} _{{\mathrm{opt}}}^ \to\))]ϵ, where λ4(\({\Omega} _{{\mathrm{opt}}}^ \to\)) = sin²θ/(1 + cos²θ) is the smallest eigenvalue of \({\Omega} _{{\mathrm{opt}}}^ \to\), as proved by Lemma 5 in Supplementary Note 3.B. The independence assumption along with the law of large numbers guarantees that mpass ≥ Nµs when N is sufficiently large. On the other hand, if the device is "bad", we can prove that the passing probability of the generated states is no larger than µl ≡ 1 − [1 − λ2(\({\Omega} _{{\mathrm{opt}}}^ \to\))]ϵ, where λ2(\({\Omega} _{{\mathrm{opt}}}^ \to\)) = cos²θ/(1 + cos²θ), by Lemma 4 in Supplementary Note 3.B. Again, the independence assumption and the law of large numbers guarantee that mpass ≤ Nµl when N is large enough. Therefore, we consider two regions regarding the value of mpass in the adaptive strategy, i.e., the region mpass ≥ Nµl and the region mpass ≤ Nµs. In these regions, we can conclude with δB1 = 1 − δl/δB2 = 1 − δs that the device belongs to Case 1/Case 2, respectively. The expressions for δl and δs and all the details for applying the adaptive strategy to Task B can be found in Supplementary Note 3.B.
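To make the adaptive Task-B thresholds concrete, the following short sketch (again our own illustration; the function name is ours) evaluates µs and µl from the eigenvalues λ4 and λ2 of \({\Omega} _{{\mathrm{opt}}}^ \to\) quoted above.

```python
import numpy as np

def adaptive_thresholds(theta, epsilon):
    """Task-B thresholds for the one-way adaptive strategy:
    mu_s = 1 - [1 - lambda_4]*eps, with lambda_4 = sin^2(theta)/(1 + cos^2(theta)),
    mu_l = 1 - [1 - lambda_2]*eps, with lambda_2 = cos^2(theta)/(1 + cos^2(theta)).
    A 'good' device passes at rate >= mu_s; a 'bad' one at rate <= mu_l."""
    c2, s2 = np.cos(theta) ** 2, np.sin(theta) ** 2
    mu_s = 1.0 - (1.0 - s2 / (1.0 + c2)) * epsilon
    mu_l = 1.0 - (1.0 - c2 / (1.0 + c2)) * epsilon
    return mu_s, mu_l
```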
Experimental setup and verification procedure
Our two-qubit entangled state is generated based on a type-II spontaneous parametric down-conversion in a 20 mm-long periodically poled potassium titanyl phosphate crystal, embedded in a Sagnac interferometer45,46 (see Fig. 2). A continuous-wave external-cavity ultraviolet diode laser at 405 nm is used as the pump light. A half-wave plate (HWP1) and quarter-wave plate (QWP1) transform the linear polarized light into the appropriate elliptically polarized light to provide the power balance and phase control of the pump field. With an input pump power of ∼30 mW, we typically obtain 120 kHz CCs.
Fig. 2: Experimental setup for optimal verification of two-qubit quantum state.
We use a photon pair source based on a Sagnac interferometer to generate various two-qubit quantum state. QWP1 and HWP1 are used for adjusting the relative amplitude of the two counter-propagating pump light. For nonadaptive strategy, the measurement is realized with QWP, HWP, and polarizing beam splitter (PBS) at both Alice's and Bob's site. The adaptive measurement is implemented by real-time feed-forward operation of electro-optic modulators (EOMs), which are triggered by the detection signals recorded with a field-programmable gate array (FPGA). The optical fiber delay is used to compensate the electronic delay from Alice's single photon detector (SPD) to the two EOMs. DM: dichroic mirror; dHWP: dual-wavelength half-wave plate; dPBS: dual-wavelength polarizing beam splitter; FPC: fiber polarization controller; HWP: half-wave plate; IF: 3 nm interference filter centered at 810 nm; PBS: polarizing beam splitter; PPKTP: periodically poled KTiOPO4; QWP: quarter-wave plate.
The target state has the following form
$$\left| \psi \right\rangle = \sin \theta \left| {HV} \right\rangle + e^{i\phi }\cos \theta \left| {VH} \right\rangle ,$$
where θ and ϕ represent amplitude and phase, respectively. This state is locally equivalent to |Ψ〉 in Eq. (1) by \({\Bbb U} = \left( {\begin{array}{*{20}{c}} 1 & 0 \\ 0 & 1 \end{array}} \right) \otimes \left( {\begin{array}{*{20}{c}} 0 & {e^{i\phi }} \\ 1 & 0 \end{array}} \right)\). By using Lemma 1 in Supplementary Note 1, the optimal strategy for verifying |ψ〉 is \({\Omega} _{{\mathrm{opt}}}^\prime = {\Bbb U}{\Omega} _{{\mathrm{opt}}}{\Bbb U}^\dagger\), where Ωopt is the optimal strategy verifying |Ψ〉 in Eq. (1). In the Supplementary Note 2, we write down explicitly the optimal nonadaptive strategy24 and adaptive strategy27,28,29 for verifying |ψ〉.
In our experiment, we implement both the nonadaptive and adaptive measurements to realize the verification strategies. There are four settings {P0, P1, P2, P3} for the nonadaptive measurements24, while only three settings {\(\tilde T_0\), \(\tilde T_1\), \(\tilde T_2\)} are required for the adaptive measurements27,28,29. The exact form of these projectors is given in Supplementary Note 2. It is noteworthy that the measurements \(P_0 = \tilde T_0 = \left| H \right\rangle \left\langle H \right| \otimes \left| V \right\rangle \left\langle V \right| + \left| V \right\rangle \left\langle V \right| \otimes \left| H \right\rangle \left\langle H \right|\) are determined by the standard σz basis for both the nonadaptive and adaptive strategies; these bases are orthogonal and can be realized with a combination of QWP, HWP, and polarizing beam splitter. For the adaptive measurements, the measurement bases \(\tilde v_ + = e^{i\phi }{\mathrm{cos}}\theta \left| H \right\rangle + {\mathrm{sin}}\theta \left| V \right\rangle {\mathrm{/}}\tilde w_ + = e^{i\phi }{\mathrm{cos}}\theta \left| H \right\rangle - i{\mathrm{sin}}\theta \left| V \right\rangle\) and \(\tilde v_ - = e^{i\phi }{\mathrm{cos}}\theta \left| H \right\rangle - {\mathrm{sin}}\theta \left| V \right\rangle {\mathrm{/}}\tilde w_ - = e^{i\phi }{\mathrm{cos}}\theta \left| H \right\rangle + i{\mathrm{sin}}\theta \left| V \right\rangle\) at Bob's site are not orthogonal. It is noteworthy that we only implement the one-way adaptive strategy in our experiment. The two-way adaptive strategy is also derived in ref. 27. Compared to the nonadaptive and one-way adaptive strategies, the two-way adaptive strategy improves the verification efficiency because it uses more classical-communication resources. The implementation of the two-way adaptive strategy requires the following: first, Alice performs her measurement and sends her results to Bob; then, Bob performs his measurement according to Alice's outcomes; finally, Alice performs another measurement conditioned on Bob's measurement outcomes. This procedure requires real-time communication both from Alice to Bob and from Bob to Alice. Besides, the two-way adaptive strategy requires a quantum nondemolition measurement at Alice's site, which is difficult to implement in the current setup. To realize the one-way adaptive strategy, we transmit the results of Alice's measurements to Bob through a classical communication channel, which is implemented by real-time feed-forward operations of the electro-optic modulators (EOMs). As shown in Fig. 2, we trigger two EOMs at Bob's site to realize the adaptive measurements based on the results of Alice's measurement. If Alice's outcome is \(\left| + \right\rangle = \left( {\left| V \right\rangle + \left| H \right\rangle } \right){\mathrm{/}}\sqrt 2\) or \(\left| R \right\rangle = \left( {\left| V \right\rangle + i\left| H \right\rangle } \right){\mathrm{/}}\sqrt 2\), EOM1 implements the required rotation and EOM2 performs the identity operation. Conversely, if Alice's outcome is \(\left| - \right\rangle = \left( {\left| V \right\rangle - \left| H \right\rangle } \right){\mathrm{/}}\sqrt 2\) or \(\left| L \right\rangle = \left( {\left| V \right\rangle - i\left| H \right\rangle } \right){\mathrm{/}}\sqrt 2\), EOM2 implements the required rotation and EOM1 performs the identity operation. Our verification procedure is the following.
Specifications of quantum device. We adjust the HWP1 and QWP1 of our Sagnac source to generate the desired quantum state.
Verification using the optimal strategy. In this stage, we generate many copies of the quantum state sequentially with our Sagnac source. These copies are termed fake states {σi, i = 1, 2,⋯, N}. Then, we apply the optimal nonadaptive verification strategy to σi. From the parameters θ and ϕ of the target state, we can compute the angles of wave plates QWP2 and HWP2, QWP3 and HWP3 for realizing the projectors {P0, P1, P2, P3} required in the nonadaptive strategy. To implement the adaptive strategy, we employ two EOMs to realize the \(\tilde v_ + {\mathrm{/}}\tilde v_ -\) and \(\tilde w_ + {\mathrm{/}}\tilde w_ -\) measurements upon receiving Alice's results (refer to Supplementary Note 2.B for the details). Finally, we obtain the timetag data of the photon detection from the field-programmable gate array and extract individual CCs, each of which is regarded as one copy of our target state. We use the timetag technique to record the channel and arrival time of each detected photon for data processing47. The time is stored as multiples of the internal time resolution (∼156 ps). The first datum in the timetag record is taken as the starting time \(t_{i0}\). As time increases, we search for the required CC between different channels within a fixed coincidence window (0.4 ns). Once a single CC is obtained, we record the time of the last timetag datum as \(t_{f0}\). Then, we move to the next time slice \(t_{i1}\)–\(t_{f1}\) to search for the next CC. This process is repeated until we find the N-th CC in time slice \(t_{i(N-1)}\)–\(t_{f(N-1)}\). This measurement can be viewed as a single-shot measurement of the bipartite state with postselection. The time interval in each slice is about 100 µs in our experiment, consistent with 1/CR, where CR is the coincidence rate. By doing so, we can precisely obtain the number of copies N satisfying the verification requirements. We believe this procedure is suitable in the context of a verification protocol, because one wants to verify the quantum state with the minimum number of copies.
Data processing. From the measured timetag data, the results for different measurement settings can be obtained. For the nonadaptive strategy, {P0, P1, P2, P3} are chosen randomly with the probabilities {µ0, µ1, µ2, µ3} (µ0 = α(θ), µi = (1 − α(θ))/3) with α(θ) = (2 − sin(2θ))/(4 + sin(2θ)). For the adaptive strategy, the {\(\tilde T_0\), \(\tilde T_1\), \(\tilde T_2\)} projectors are randomly chosen according to the probabilities {β(θ), (1 − β(θ))/2, (1 − β(θ))/2}, where β(θ) = cos²θ/(1 + cos²θ). A minimal code sketch of this random setting selection and of the pass/fail assignment is given after this procedure. For Task A, we use the CC to decide whether the outcome of each measurement is pass or fail for each σi. The passing probabilities for the nonadaptive strategy can be, respectively, expressed as,
$$\displaystyle\begin{array}{l}P_0:\frac{{CC_{HV} + CC_{VH}}}{{CC_{HH} + CC_{HV} + CC_{VH} + CC_{VV}}},\\ P_i:\frac{{CC_{\tilde u_i\tilde v_i^ \bot } + CC_{\tilde u_i^ \bot \tilde v_i} + CC_{\tilde u_i^ \bot \tilde v_i^ \bot }}}{{CC_{\tilde u_i\tilde v_i} + CC_{\tilde u_i\tilde v_i^ \bot } + CC_{\tilde u_i^ \bot \tilde v_i} + CC_{\tilde u_i^ \bot \tilde v_i^ \bot }}}.\end{array}$$
where i = 1, 2, 3, and \(\tilde u_i{\mathrm{/}}\tilde u_i^ \bot\) and \(\tilde v_i{\mathrm{/}}\tilde v_i^ \bot\) are the orthogonal bases for each photon and their expressions are given in the Supplementary Note 2.A. For P0, if the individual CC is in CCHV or CCVH, it indicates that σi passes the test and we set Xi = 1; otherwise, it fails to pass the test and we set Xi = 0. For Pi, i = 1, 2, 3, if the individual CC is in \({\mathrm{CC}}_{\tilde u_i\tilde v_i^ \bot }\), \({\mathrm{CC}}_{\tilde u_i^ \bot \tilde v_i}\), or \({\mathrm{CC}}_{\tilde u_i^ \bot \tilde v_i^ \bot }\), it indicates that σi passes the test and we set Xi = 1; otherwise, it fails to pass the test and we set Xi = 0. For the adaptive strategy, we set the value of the random variables Xi in a similar way.
We increase the number of copies (N) to determine the occurrence of the first failure for Task A and the frequency of passing events for Task B. From these data, we obtain the relationship between the confidence parameter δ, the infidelity parameter ϵ, and the number of copies N. Each test fails with a certain probability. In the worst case, the probability that σi fails a single test is ∆ϵ = ϵ/(2 + sinθ cosθ) for the nonadaptive strategy24 and ∆ϵ = ϵ/(2 − sin²θ) for the adaptive strategy27,28,29, so that a bad state passes each test with probability at most 1 − ∆ϵ.
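The sketch below (our own illustration, with hypothetical function names) shows how the random selection of a measurement setting and the pass/fail assignment of a single coincidence in the data-processing step can be written down, using the setting probabilities α(θ) and β(θ) quoted above; only the σz ⊗ σz setting P0 (= \(\tilde T_0\)) is spelled out here.

```python
import numpy as np

rng = np.random.default_rng()

def draw_setting(theta, strategy="nonadaptive"):
    """Pick one measurement setting per copy with the probabilities of the
    data-processing step: {alpha, (1-alpha)/3 x 3} for {P0..P3}, or
    {beta, (1-beta)/2, (1-beta)/2} for {T0, T1, T2}."""
    if strategy == "nonadaptive":
        alpha = (2.0 - np.sin(2.0 * theta)) / (4.0 + np.sin(2.0 * theta))
        return rng.choice(["P0", "P1", "P2", "P3"],
                          p=[alpha] + [(1.0 - alpha) / 3.0] * 3)
    beta = np.cos(theta) ** 2 / (1.0 + np.cos(theta) ** 2)
    return rng.choice(["T0", "T1", "T2"],
                      p=[beta, (1.0 - beta) / 2.0, (1.0 - beta) / 2.0])

def passes_P0(channel):
    """Pass/fail bit X_i for the P0 (= T0) setting: a coincidence in the HV or VH
    channel passes the test (X_i = 1); HH or VV fails it (X_i = 0)."""
    return 1 if channel in ("HV", "VH") else 0
```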
Results and analysis of two-qubit optimal verification
The target state to be verified is the general two-qubit state in Eq. (5), where the parameter θ = k ∗ π/10 and ϕ is optimized with the maximum-likelihood estimation method. In this section, we present the results of the k = 2 state (termed k2, see Supplementary Note 2) as an example. The verification results of other states, such as the maximally entangled state and the product state, are presented in Supplementary Note 4. Our theoretical non-maximally entangled target state is specified by θ = 0.6283 (k = 2). In experiment, we obtain \(\left| \psi \right\rangle = {\mathrm{0}}{\mathrm{.5987}}\left| {HV} \right\rangle + {\mathrm{0}}{\mathrm{.8010}}e^{{\mathrm{3}}{\mathrm{.2034}}i}\left| {VH} \right\rangle\) (θ = 0.6419, ϕ = 3.2034) as our target state to be verified. To realize the verification strategy, the projective measurements are performed sequentially by randomly choosing the projectors. We take 10,000 rounds for a fixed number of 6000 copies.
Task A: For this verification task, we make a statistical analysis of the number of measurements required until the first failure occurs. According to the geometric distribution, the probability that the n-th measurement (out of n measurements) is the first failure is
$${\mathrm{Pr}}(N_{{\mathrm{first}}} = n) = (1 - {\Delta} _\epsilon )^{n - 1} \cdot {\Delta} _\epsilon$$
where n = 1, 2, 3, · · ·. We then obtain the cumulative probability
$$\delta _{\mathrm{A}} = \mathop {\sum}\limits_{N_{{\mathrm{first}}} = 1}^{n_{{\mathrm{exp}}}} {{\mathrm{Pr}}} (N_{{\mathrm{first}}})$$
which is the confidence of the device generating the target state |ψ〉. In Fig. 3a, we show the distribution of the number Nfirst required before the first failure for the nonadaptive (Non) strategy. From the figure we can see that Nfirst obeys the geometric distribution. We fit the distribution with the function in Eq. (7) and obtain an experimental infidelity \({\it{\epsilon }}_{{\mathrm{exp}}}^{{\mathrm{Non}}}\) = 0.0034(15), which is a quantitative estimation of the infidelity for the generated state. From the experimental statistics, we obtain the number \(n_{{\mathrm{exp}}}^{{\mathrm{Non}}}\) = 3283 required to achieve the 99% confidence (i.e., 99% cumulative probability for Nfirst ≤ \(n_{{\mathrm{exp}}}^{{\mathrm{Non}}}\)) of judging the generated states to be the target state in the nonadaptive strategy.
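A simplified stand-in for the fit of Eq. (7) used in Fig. 3 is sketched below (our own illustration, not the fitting code used for the figure): for geometrically distributed first-failure counts, the maximum-likelihood estimate of the per-test failure probability ∆ϵ is one over the sample mean, and the infidelity follows from ∆ϵ = [1 − λ2]ϵ.

```python
import numpy as np

def fit_infidelity(n_first, theta, strategy="nonadaptive"):
    """Estimate epsilon from a list of 'copies until first failure' counts.
    For a geometric distribution the MLE of the per-test failure probability is
    Delta_eps = 1 / mean(N_first); epsilon then follows from Delta_eps = [1 - lambda_2]*epsilon."""
    delta_eps = 1.0 / np.mean(np.asarray(n_first, dtype=float))
    if strategy == "nonadaptive":
        return delta_eps * (2.0 + np.sin(theta) * np.cos(theta))
    return delta_eps * (2.0 - np.sin(theta) ** 2)
```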
Fig. 3: The distribution of the number required before the first failure.
a For the nonadaptive strategy. b For the adaptive strategy. From the statistics, we obtain the fitting infidelity of \({\it{\epsilon }}_{{\mathrm{exp}}}^{{\mathrm{Non}}}\) = 0.0034(15) and \({\it{\epsilon }}_{{\mathrm{exp}}}^{{\mathrm{Adp}}}\) = 0.0121(6). The numbers required to achieve a 99% confidence are \(n_{{\mathrm{exp}}}^{{\mathrm{Non}}}\) = 3283 and \(n_{{\mathrm{exp}}}^{{\mathrm{Adp}}}\) = 536, respectively.
The results for the adaptive (Adp) verification of Task A are shown in Fig. 3b. The experimental fitting infidelity for this distribution is \({\it{\epsilon }}_{{\mathrm{exp}}}^{{\mathrm{Adp}}}\) = 0.0121(6). The number required to achieve the same 99% confidence as the nonadaptive strategy is \(n_{{\mathrm{exp}}}^{{\mathrm{Adp}}}\) = 536. It is noteworthy that this nearly sixfold ratio (i.e., \(n_{{\mathrm{exp}}}^{{\mathrm{Non}}}{\mathrm{/}}n_{{\mathrm{exp}}}^{{\mathrm{Adp}}}\) ∼ 6) between the experimental numbers required to obtain the 99% confidence arises partially because the infidelity with the adaptive strategy is approximately four times larger than that with the nonadaptive strategy. However, the number of copies required to achieve the same confidence by using the adaptive strategy is still about two times fewer than with the nonadaptive strategy even if the infidelity of the generated states is the same (see the analysis presented in Supplementary Note 5). This indicates that the adaptive strategy requires a significantly lower number of copies to conclude that the device outputs state |ψ〉 with 99% confidence compared with the nonadaptive one.
Task B: We emphasize that Task B is considered under the assumption that the quantum device is either in Case 1 or in Case 2 as described above. These two cases are complementary, and the confidence to assert whether the device belongs to Case 1 or Case 2 can be obtained according to different values of mpass. We refer to Supplementary Note 3 for detailed information on judging the quantum device for these two cases. For each case, we can reduce the parameter δ by increasing the number of copies of the quantum state. Thus, the confidence δB = 1 − δ that the device belongs to Case 1/Case 2 is obtained. For the nonadaptive strategy, the passing probability mpass/N reaches a stable value of 0.9986 ± 0.0002 after about 1000 copies (see Supplementary Note 6). This value is smaller than the desired passing probability µ when we choose the infidelity ϵmin to be 0.001. In this situation, we conclude that the state belongs to Case 2. Conversely, the stable value is larger than the desired passing probability µ when we choose the infidelity ϵmax to be 0.006. In this situation, we conclude that the state belongs to Case 1. In Fig. 4, we present the results for the verification of Task B. First, we show the confidence parameter δ vs. the number of copies for the nonadaptive strategy in Fig. 4a, b. With about 6000 copies of the quantum state, the δ parameter reaches 0.01 for Case 2. This indicates that the device belongs to Case 1 with probability at most 0.01. In other words, there is at least 99% confidence that the device is in the "bad" case after about 6000 measurements. In general, more copies of the quantum state are required to reach the same level δ = 0.01 for Case 1, because a smaller portion of the possible values of the number of passing events mpass lies in the range from µN to N. From Fig. 4b, we can see that it takes about 17,905 copies of the quantum state to reduce the parameter δ below 0.01. At this stage, we can say that the device belongs to Case 2 with probability at most 0.01. That is, there is at least 99% confidence that the device is in the "good" case after about 17,905 measurements.
Fig. 4: Experimental results for the verification of Task B.
a, b Nonadaptive strategy. The confidence parameter δ decreases with the increase of the number of copies. After about 6000 copies, δ goes below 0.01 for Case 2 (see inset of a). For Case 1 (see inset of b), it takes about 17,905 copies to reduce δ below 0.01. c, d Adaptive strategy. The numbers of copies required to reduce δs and δl to 0.01 for the two cases are about 10,429 and 23,645, respectively. In general, it takes fewer copies to verify Case 2, because more room is allowed for the states to be found in the 0−µN region. The blue symbols are the experimental data with error bars (Exp.), which are obtained by 100 rounds of measurements for each coincidence. The insets show the log-scale plots, which indicate that δ can reach a value below 0.01 with a few thousand to tens of thousands of copies.
Figure 4c, d shows the results of the adaptive strategy. For the adaptive strategy, the passing probability mpass/N finally reaches a stable value of 0.9914 ± 0.0005 (see Supplementary Note 6), which is smaller than that of the nonadaptive measurement due to the limited fidelity of the EOMs' modulation. Correspondingly, the infidelity parameters for the two cases are chosen to be ϵmin = 0.008 and ϵmax = 0.017, respectively. We can see from the figure that it takes about 10,429 copies for δs to decrease to 0.01 when choosing ϵmin, which indicates that the device belongs to Case 2 with at least 99% confidence after about 10,429 measurements. On the other hand, about 23,645 copies are needed for δl to decrease to 0.01 when choosing ϵmax, which indicates that the device belongs to Case 1 with at least 99% confidence after about 23,645 measurements. It is noteworthy that the difference between the adaptive and nonadaptive results comes from the different descent speeds of δ vs. the number of copies N, which result from the differences in the passing probabilities and the infidelity parameters. See Supplementary Note 6 for detailed explanations.
From another perspective, we can fix δ and see how the parameter ϵ changes when increasing the number of copies. Figure 5 presents the variation of ϵ vs. the number of copies on a log–log scale when we set δ to 0.10. At a small number of copies, the infidelity is large and drops fast to a low level when the number of copies increases to ~100. The decline becomes slow when the number of copies exceeds 100. It should be noted that ϵ asymptotically tends to a value of 0.0036 (calculated from 1 − ∆ϵ = 0.9986) and 0.012 (calculated from 1 − ∆ϵ = 0.9914) for the nonadaptive and adaptive strategies, respectively. Therefore, we are still in the region of mpass/N ≥ µ. We can also see that the scaling of ϵ vs. N is linear in the small-number-of-copies region. We fit the data in the linear region with ϵ ∼ N^r and obtain a slope r ∼ −0.88 ± 0.03 for the nonadaptive strategy and r ∼ −0.78 ± 0.07 for the adaptive strategy. This scaling exceeds the standard quantum limit ϵ ∼ N^{−0.5} scaling42,48 for physical parameter estimation. Thus, our method is better for estimating the infidelity parameter ϵ than classical metrology. It is noteworthy that mpass/N is a good estimation of our state fidelity. If the state fidelity increases, the slope of the linear region will decrease towards the Heisenberg limit ϵ ~ N^{−1} in quantum metrology (see Supplementary Note 6).
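The scaling exponent r quoted above can be extracted by an ordinary least-squares fit on the log–log scale, as in the short sketch below (our own illustration of the fitting step).

```python
import numpy as np

def scaling_exponent(copies, infidelities):
    """Slope r of log(epsilon) vs log(N) over the linear region, i.e. epsilon ~ N**r."""
    log_n = np.log(np.asarray(copies, dtype=float))
    log_e = np.log(np.asarray(infidelities, dtype=float))
    r, _ = np.polyfit(log_n, log_e, 1)
    return r
```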
Fig. 5: The variation of infidelity parameter vs. the number of copies.
a Nonadaptive strategy and b adaptive strategy. Here, the data are plotted on a log–log scale. The confidence parameter δ is chosen to be 0.10. The parameter ϵ decays quickly to a low value that is asymptotically close to the infidelity of the generated quantum state, 0.0036 (nonadaptive) and 0.012 (adaptive), as the number of copies increases. The fitting slopes for the linear scaling region are −0.88 ± 0.03 and −0.78 ± 0.07 for the nonadaptive and adaptive strategies, respectively. The blue symbols are the experimental data with error bars (Exp.), which are obtained by 100 rounds of measurements for each coincidence.
Comparison with standard quantum-state tomography
The advantage of the optimal verification strategy lies in that it requires fewer measurement settings and, more importantly, fewer copies to estimate the quantum states generated by a quantum device. In standard quantum-state tomography49, the minimum number of settings required for a complete reconstruction of the density matrix is 3^n, where n is the number of qubits. For a two-qubit system, standard tomography costs nine settings, whereas the present verification strategy only needs four and three measurement settings for the nonadaptive and adaptive strategies, respectively. To quantitatively compare the verification strategy with standard tomography, we show the scaling of the parameters δ and ϵ vs. the number of copies N in Fig. 6. For each number of copies, the fidelity estimate F ± ∆F can be obtained by standard quantum-state tomography. The δ of standard tomography is calculated as the confidence assuming a normal distribution of the fidelity with mean F and SD ∆F. The ϵ of standard tomography is calculated by ϵ = 1 − F. The result of the verification strategy is taken from the data in Figs. 4 and 5 for the nonadaptive strategy. For δ vs. N, we fit the curve with the equation δ = e^{g·N}, where g is the scaling of log(δ) with N. We obtain gtomo = −6.84 × 10^{−5} for standard tomography and gverif = −7.35 × 10^{−4} for the verification strategy. This indicates that the present verification strategy achieves better confidence than standard quantum-state tomography given the same number of copies. For ϵ vs. N, as shown in Fig. 6b, standard tomography finally reaches a saturation value when increasing the number of copies. With the same number of copies N, the verification strategy obtains a smaller ϵ, which indicates that the verification strategy can give a better estimation of the state fidelity than standard quantum-state tomography when only a small number of quantum states is available from a quantum device.
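The comparison fit δ = e^{g·N} amounts to a linear fit of ln δ against N, as sketched below (our own illustration); a more negative g means the confidence improves faster with the number of copies.

```python
import numpy as np

def decay_rate(copies, deltas):
    """Slope g of ln(delta) vs N for the fit delta = exp(g * N), as used to compare
    tomography (g_tomo) with the verification strategy (g_verif) in Fig. 6a."""
    g, _ = np.polyfit(np.asarray(copies, dtype=float),
                      np.log(np.asarray(deltas, dtype=float)), 1)
    return g
```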
Fig. 6: Comparison of standard quantum-state tomography and the present verification strategy.
In the figure, we give the variation of a δ and b ϵ vs. the number of copies N, using standard quantum-state tomography (tomo) and the present verification strategy (verif). For standard tomography, the fidelity F ± ∆F is first obtained from the reconstructed density matrix for each number of copies N. The confidence parameter δ is then estimated by assuming a normal distribution of the fidelity with mean F and SD ∆F. The infidelity parameter ϵ is estimated by ϵ = 1 − F. It is noteworthy that the experimental data symbols shown in a look like lines due to the dense data points.
Our work, including the experiment, data processing, and analysis framework, can be used as a standardized procedure for verifying quantum states. In Task A, we estimate the infidelity parameter ϵexp of the generated states and the confidence δA of producing the target quantum state by detecting a certain number of copies. With the ϵexp obtained from Task A, we can choose ϵmax or ϵmin, which classifies our device as Case 1 or Case 2. Task B is then performed based on the chosen ϵmin and ϵmax. We can estimate the scaling of the confidence parameter δ vs. the number of copies N based on the analysis method of Task B. With a chosen δ, we can also estimate the scaling of the infidelity parameter ϵ vs. N. With these steps, we obtain a comprehensive judgment of how well our device really works.
In summary, we report experimental demonstrations of the optimal two-qubit pure-state verification strategy with and without adaptive measurements. We give a clear discrimination and comprehensive analysis of the quantum states generated by a quantum device. Two tasks are proposed for practical applications of the verification strategy. The variations of the confidence and infidelity parameters with the number of copies are presented for the generated quantum states. The obtained experimental results are in good agreement with the theoretical predictions. Furthermore, our experimental framework offers a precise estimation of the reliability and stability of quantum devices. This ability enables our framework to serve as a standard tool for analyzing quantum devices. Our experimental framework can also be extended to other platforms.
The data that support the plots within this paper and other findings of this study are available from the corresponding author upon reasonable request.
The codes that support the plots within this paper and other findings of this study are available from the corresponding author upon reasonable request.
Nielsen, M. A. & Chuang, I. L. Quantum Computation and Quantum Information (Cambridge Univ. Press, UK, 2010).
Paris, M. & Rehacek, J. Quantum State Estimation. Vol. 649 (Springer, 2004).
Sugiyama, T., Turner, P. S. & Murao, M. Precision-guaranteed quantum tomography. Phys. Rev. Lett. 111, 160406 (2013).
Gross, D., Liu, Y.-K., Flammia, S. T., Becker, S. & Eisert, J. Quantum state tomography via compressed sensing. Phys. Rev. Lett. 105, 150401 (2010).
Haah, J., Harrow, A. W., Ji, Z., Wu, X. & Yu, N. Sample-optimal tomography of quantum states. In Proc. 48th Annual ACM Symposium on Theory of Computing, STOC 2016, 913–925 (ACM, New York, 2016).
O'Donnell, R. & Wright, J. Efficient quantum tomography. In Proc. 48th Annual ACM Symposium on Theory of Computing, STOC 2016, 899–912 (ACM, New York, 2016).
O'Donnell, R. & Wright, J. Efficient quantum tomography II. In Proc. 49th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2017, 962–974 (ACM, New York, 2017).
Häffner, H. et al. Scalable multiparticle entanglement of trapped ions. Nature 438, 643–646 (2005).
Carolan, J. et al. On the experimental verification of quantum complexity in linear optics. Nat. Photonics 8, 621–626 (2014).
Tóth, G. & Gühne, O. Detecting genuine multipartite entanglement with two local measurements. Phys. Rev. Lett. 94, 060501 (2005).
Flammia, S. T. & Liu, Y.-K. Direct fidelity estimation from few Pauli measurements. Phys. Rev. Lett. 106, 230501 (2011).
da Silva, M. P., Landon-Cardinal, O. & Poulin, D. Practical characterization of quantum devices without tomography. Phys. Rev. Lett. 107, 210404 (2011).
Aolita, L., Gogolin, C., Kliesch, M. & Eisert, J. Reliable quantum certification of photonic state preparations. Nat. Commun. 6, 8498 (2015).
Hayashi, M. & Morimae, T. Verifiable measurement-only blind quantum computing with stabilizer testing. Phys. Rev. Lett. 115, 220502 (2015).
McCutcheon, W. et al. Experimental verification of multipartite entanglement in quantum networks. Nat. Commun. 7, 13251 (2016).
Takeuchi, Y. & Morimae, T. Verification of many-qubit states. Phys. Rev. X 8, 021060 (2018).
Bădescu, C., O'Donnell, R. & Wright, J. Quantum state certification. In Proc. 51st Annual ACM SIGACT Symposium on Theory of Computing, STOC 2019, 503–514 (ACM, New York, 2019).
Morimae, T., Takeuchi, Y. & Hayashi, M. Verification of hypergraph states. Phys. Rev. A 96, 062321 (2017).
Takeuchi, Y., Mantri, A., Morimae, T., Mizutani, A. & Fitzsimons, J. F. Resource-efficient verification of quantum computing using Serfling's bound. npj Quantum Inf. 5, 27 (2019).
Hayashi, M., Matsumoto, K. & Tsuda, Y. A study of LOCC-detection of a maximally entangled state using hypothesis testing. J. Phys. A Math. Gen. 39, 14427–14446 (2006).
Hayashi, M. et al. Hypothesis testing for an entangled state produced by spontaneous parametric down-conversion. Phys. Rev. A 74, 062321 (2006).
Hayashi, M., Tomita, A. & Matsumoto, K. Statistical analysis of testing of an entangled state based on the Poisson distribution framework. N. J. Phys. 10, 043029 (2008).
Hayashi, M. Group theoretical study of LOCC-detection of maximally entangled states using hypothesis testing. N. J. Phys. 11, 043028 (2009).
Pallister, S., Linden, N. & Montanaro, A. Optimal verification of entangled states with local measurements. Phys. Rev. Lett. 120, 170502 (2018).
Zhu, H. & Hayashi, M. Efficient verification of hypergraph states. Phys. Rev. Appl. 12, 054047 (2019).
Zhu, H. & Hayashi, M. Efficient verification of pure quantum states in the adversarial scenario. Phys. Rev. Lett. 123, 260504 (2019).
Wang, K. & Hayashi, M. Optimal verification of two-qubit pure states. Phys. Rev. A 100, 032315 (2019).
Yu, X.-D., Shang, J. & Gühne, O. Optimal verification of general bipartite pure states. npj Quantum Inf. 5, 112 (2019).
Li, Z., Han, Y.-G. & Zhu, H. Efficient verification of bipartite pure states. Phys. Rev. A 100, 032316 (2019).
Liu, Y.-C., Yu, X.-D., Shang, J., Zhu, H. & Zhang, X. Efficient verification of Dicke states. Phys. Rev. Appl. 12, 044020 (2019).
Li, Z., Han, Y.-G. & Zhu, H. Optimal verification of Greenberger-Horne-Zeilinger states. Phys. Rev. Applied 13, 054002 (2020).
Dimić, A. & Dakić, B. Single-copy entanglement detection. npj Quantum Inf. 4, 11 (2018).
Saggio, V. et al. Experimental few-copy multipartite entanglement detection. Nat. Phys. 15, 935–940 (2019).
Knips, L., Schwemmer, C., Klein, N., Wieśniak, M. & Weinfurter, H. Multipartite entanglement detection with minimal effort. Phys. Rev. Lett. 117, 210504 (2016).
Bavaresco, J. et al. Measurements in two bases are sufficient for certifying high-dimensional entanglement. Nat. Phys. 14, 1032–1037 (2018).
Friis, N., Vitagliano, G., Malik, M. & Huber, M. Entanglement certification from theory to experiment. Nat. Rev. Phys. 1, 72–87 (2019).
Zhu, H. & Hayashi, M. Optimal verification and fidelity estimation of maximally entangled states. Phys. Rev. A 99, 052346 (2019).
Eberhard, P. H. Background level and counter efficiencies required for a loophole-free Einstein-Podolsky-Rosen experiment. Phys. Rev. A 47, R747–R750 (1993).
Giustina, M. et al. Bell violation using entangled photons without the fair-sampling assumption. Nature 497, 227–230 (2013).
Giustina, M. et al. Significant-loophole-free test of Bell's theorem with entangled photons. Phys. Rev. Lett. 115, 250401 (2015).
Shalm, L. K. et al. Strong loophole-free test of local realism. Phys. Rev. Lett. 115, 250402 (2015).
Zhang, W.-H. et al. Experimental optimal verification of entangled states using local measurements. Phys. Rev. Lett. 125, 030506 (2020).
Scarani, V. et al. The security of practical quantum key distribution. Rev. Mod. Phys. 81, 1301–1350 (2009).
Hayashi, M. & Nakayama, R. Security analysis of the decoy method with the Bennett–Brassard 1984 protocol for finite key lengths. N. J. Phys. 16, 063009 (2014).
Kim, T., Fiorentino, M. & Wong, F. N. C. Phase-stable source of polarization-entangled photons using a polarization Sagnac interferometer. Phys. Rev. A 73, 012316 (2006).
Fedrizzi, A., Herbst, T., Poppe, A., Jennewein, T. & Zeilinger, A. A wavelength-tunable fiber-coupled source of narrowband entangled photons. Opt. Express 15, 15377–15386 (2007).
UQDevices. Time tag and logic user manual. Version 2.1. https://uqdevices.com/documentation/ (2017).
Giovannetti, V., Lloyd, S. & Maccone, L. Advances in quantum metrology. Nat. Photonics 5, 222–229 (2011).
Altepeter, J. B., Jeffrey, E. R. & Kwiat, P. In Advances In Atomic, Molecular, and Optical Physics. 52, 105–159 (Academic Press, 2005).
We thank B. Dakić for the helpful discussions. This work was supported by the National Key Research and Development Program of China (numbers 2017YFA0303704 and 2019YFA0308704), the National Natural Science Foundation of China (numbers 11674170 and 11690032), NSFC-BRICS (number 61961146001), the Natural Science Foundation of Jiangsu Province (number BK20170010), the Leading-edge technology Program of Jiangsu Natural Science Foundation (number BK20192001), the program for Innovative Talents and Entrepreneur in Jiangsu, and the Fundamental Research Funds for the Central Universities.
These authors contributed equally: Xinhe Jiang, Kun Wang, Kaiyi Qian, Zhaozhong Chen.
National Laboratory of Solid-state Microstructures, School of Physics, Collaborative Innovation Center of Advanced Microstructures, State Key Laboratory for Novel Software Technology, Department of Computer Science and Technology, Nanjing University, Nanjing, 210093, China
Xinhe Jiang, Kaiyi Qian, Zhaozhong Chen, Zhiyu Chen, Liangliang Lu, Lijun Xia, Fangmin Song, Shining Zhu & Xiaosong Ma
Shenzhen Institute for Quantum Science and Engineering, Southern University of Science and Technology, Shenzhen, 518055, China
Kun Wang
X.-H.J., K.-Y.Q., Z.-Z.C., Z.-Y.C., and X.-S.M. designed and performed the experiment. K.W. performed the theoretical analysis. X.-H.J. and K.-Y.Q. analyzed the data. X.-H.J., K.W., and X.-S.M. wrote the paper with input from all authors. All authors discussed the results and read the manuscript. F.-M.S., S.-N.Z., and X.-S.M. supervised the work. X.-H.J., K.W., K.-Y.Q. and Z.-Z.C. contributed equally to this work.
Correspondence to Xiaosong Ma.
A patent application related to this work was filed by Nanjing University on 29 May 2020 in China. The application number is 202010475173.4 (Patent in China). The application is currently pending.
Jiang, X., Wang, K., Qian, K. et al. Towards the standardization of quantum state verification using optimal strategies. npj Quantum Inf 6, 90 (2020). https://doi.org/10.1038/s41534-020-00317-7
npj Quantum Information (npj Quantum Inf) ISSN 2056-6387 (online) | CommonCrawl |
February 2015, 35(2): 757-770. doi: 10.3934/dcds.2015.35.757
An ergodic theory approach to chaos
Ryszard Rudnicki 1,
Institute of Mathematics, Polish Academy of Sciences, Bankowa 14, 40-007 Katowice
Received February 2013 Revised October 2013 Published September 2014
This paper is devoted to the ergodic-theoretical approach to chaos, which is based on the existence of invariant mixing measures supported on the whole space. As an example of an application of the general theory, we prove that there exists an invariant mixing measure with respect to the differentiation operator on the space of entire functions. From this theorem, the existence of universal entire functions and other chaotic properties of this transformation follow.
Keywords: chaos, invariant measure, differentiation operator, universal functions.
Mathematics Subject Classification: Primary: 37L40; Secondary: 28D10, 30D15, 47A1.
Citation: Ryszard Rudnicki. An ergodic theory approach to chaos. Discrete & Continuous Dynamical Systems, 2015, 35 (2) : 757-770. doi: 10.3934/dcds.2015.35.757
The Nature of Science and Physics
By the end of this section, you will be able to:
Make reasonable approximations based on given data.
On many occasions, physicists, other scientists, and engineers need to make approximations or "guesstimates" for a particular quantity. What is the distance to a certain destination? What is the approximate density of a given item? About how large a current will there be in a circuit? Many approximate numbers are based on formulae in which the input quantities are known only to a limited accuracy. As you develop problem-solving skills (that can be applied to a variety of fields through a study of physics), you will also develop skills at approximating. You will develop these skills through thinking more quantitatively, and by being willing to take risks. As with any endeavor, experience helps, as well as familiarity with units. These approximations allow us to rule out certain scenarios or unrealistic numbers. Approximations also allow us to challenge others and guide us in our approaches to our scientific world. Let us do two examples to illustrate this concept.
Example 1. Approximating the Height of a Building
Can you approximate the height of one of the buildings on your campus, or in your neighborhood? Let us make an approximation based upon the height of a person. In this example, we will calculate the height of a 39-story building.
Think about the average height of an adult male. We can approximate the height of the building by scaling up from the height of a person.
Based on information in the example, we know there are 39 stories in the building. If we use the fact that the height of one story is approximately equal to the combined height of two adults (each about 2 m tall), then we can estimate the total height of the building to be
[latex]\frac{\text{2 m}}{\text{1 person}}\times \frac{\text{2 person}}{\text{1 story}}\times \text{39 stories = 156 m}[/latex].
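The estimate can be checked with a quick calculation, a sketch that uses only the numbers given in the example:

```python
height_per_person_m = 2          # approximate height of an adult
persons_per_story = 2            # one story is roughly two person-heights
stories = 39
print(height_per_person_m * persons_per_story * stories)   # 156 m
```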
You can use known quantities to determine an approximate measurement of unknown quantities. If your hand measures 10 cm across, how many hand lengths equal the width of your desk? What other measurements can you approximate besides length?
Example 2. Approximating Vast Numbers: a Trillion Dollars
Figure 1. A bank stack contains one-hundred $100 bills, and is worth $10,000. How many bank stacks make up a trillion dollars? (credit: Andrew Magill)
The U.S. federal debt at the end of the 2008 fiscal year was a little greater than $10 trillion. Most of us have no concept of how much even one trillion actually is. Suppose that you were given a trillion dollars in $100 bills. If you made 100-bill stacks and used them to evenly cover a football field (between the end zones), make an approximation of how high the money pile would become. (We will use feet/inches rather than meters here because football fields are measured in yards.) One of your friends says 3 in., while another says 10 ft. What do you think?
When you imagine the situation, you probably envision thousands of small stacks of 100 wrapped $100 bills, such as you might see in movies or at a bank. Since this is an easy-to-approximate quantity, let us start there. We can find the volume of a stack of 100 bills, find out how many stacks make up one trillion dollars, and then set this volume equal to the area of the football field multiplied by the unknown height.
(1) Calculate the volume of a stack of 100 bills. The dimensions of a single bill are approximately 3 in. by 6 in. A stack of 100 of these is about 0.5 in. thick. So the total volume of a stack of 100 bills is:
volume of stack = length × width × height
volume of stack = 6 in. × 3 in. × 0.5 in.
volume of stack = 9 in.³
(2) Calculate the number of stacks. Note that a trillion dollars is equal to $1 × 10¹², and a stack of one-hundred $100 bills is equal to $10,000, or $1 × 10⁴. The number of stacks you will have is:
$1 × 10¹² (a trillion dollars) / $1 × 10⁴ per stack = 1 × 10⁸ stacks.
(3) Calculate the area of a football field in square inches. The area of a football field is 100 yd×50 yd, which gives 5,000 yd2. Because we are working in inches, we need to convert square yards to square inches:
[latex]\begin{array}{}\text{Area}={\text{5,000 yd}}^{2}\times \frac{3\text{ft}}{\text{1 yd}}\times \frac{3\text{ft}}{\text{1 yd}}\times \frac{\text{12}\text{in}\text{.}}{\text{1 ft}}\times \frac{\text{12}\text{in}\text{.}}{\text{1 ft}}=6,480,000\text{ in}{\text{.}}^{2},\\ \text{Area}\approx 6\times {\text{10}}^{6}\text{in}{\text{.}}^{2}\text{.}\end{array}[/latex]
This conversion gives us 6 × 10⁶ in.² for the area of the field. (Note that we are using only one significant figure in these calculations.)
(4) Calculate the total volume of the bills. The volume of all the $100-bill stacks is
[latex]9\text{in}{\text{.}}^{3}/\text{stack}\times {\text{10}}^{8}\text{ stacks}=9\times {\text{10}}^{8}\text{in}{\text{.}}^{3}[/latex]
(5) Calculate the height. To determine the height of the bills, use the equation:
[latex]\begin{array}{lll}\text{volume of bills}& =& \text{area of field}\times \text{height of money:}\\ \text{Height of money}& =& \frac{\text{volume of bills}}{\text{area of field}},\\ \text{Height of money}& =& \frac{9\times {\text{10}}^{8}\text{in}{\text{.}}^{3}}{6\times {\text{10}}^{6}{\text{in.}}^{2}}=1.33\times {\text{10}}^{2}\text{in.,}\\ \text{Height of money}& \approx & 1\times {\text{10}}^{2}\text{in.}=\text{100 in.}\end{array}[/latex]
The height of the money will be about 100 in. high. Converting this value to feet gives
[latex]\text{100 in}\text{.}\times \frac{\text{1 ft}}{\text{12 in}\text{.}}=8\text{.}\text{33 ft}\approx \text{8 ft.}[/latex]
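For reference, here is the same chain of steps as a short calculation; the inputs are the example's own figures, and the final comment notes how the one-significant-figure bookkeeping used above leads to the quoted values.

```python
stack_volume = 6 * 3 * 0.5            # in^3, a stack of 100 bills
n_stacks = 1e12 / 1e4                 # $1 trillion at $10,000 per stack -> 1e8 stacks
field_area = (100 * 36) * (50 * 36)   # 100 yd x 50 yd expressed in square inches
height_in = stack_volume * n_stacks / field_area
print(height_in, height_in / 12)      # ~139 in, ~11.6 ft with unrounded inputs; keeping only one
                                      # significant figure at each step, as above, gives ~100 in ~ 8 ft
```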
The final approximate value is much higher than the early estimate of 3 in., but the other early estimate of 10 ft (120 in.) was roughly correct. How did the approximation measure up to your first guess? What can this exercise tell you in terms of rough "guesstimates" versus carefully calculated approximations?
Check Your Understanding
Using mental math and your understanding of fundamental units, approximate the area of a regulation basketball court. Describe the process you used to arrive at your final approximation.
An average male is about two meters tall. It would take approximately 15 men laid out end to end to cover the length, and about 7 to cover the width. That gives an approximate area of 420 m².
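A quick numerical version of this estimate, using the same person-length figures as the answer above:

```python
person_length_m = 2
court_length_m = 15 * person_length_m    # ~15 people end to end -> 30 m
court_width_m = 7 * person_length_m      # ~7 people -> 14 m
print(court_length_m * court_width_m)    # 420 m^2
```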
Scientists often approximate the values of quantities to perform calculations and analyze systems.
Problems & Exercises
1. How many heartbeats are there in a lifetime?
2. A generation is about one-third of a lifetime. Approximately how many generations have passed since the year 0 AD?
3. How many times longer than the mean life of an extremely unstable atomic nucleus is the lifetime of a human? (Hint: The lifetime of an unstable atomic nucleus is on the order of 10⁻²² s.)
4. Calculate the approximate number of atoms in a bacterium. Assume that the average mass of an atom in the bacterium is ten times the mass of a hydrogen atom. (Hint: The mass of a hydrogen atom is on the order of 10⁻²⁷ kg and the mass of a bacterium is on the order of 10⁻¹⁵ kg.)
Figure 2. This color-enhanced photo shows Salmonella typhimurium (red) attacking human cells. These bacteria are commonly known for causing foodborne illness. Can you estimate the number of atoms in each bacterium? (credit: Rocky Mountain Laboratories, NIAID, NIH)
6. (a) What fraction of Earth's diameter is the greatest ocean depth? (b) The greatest mountain height?
7. (a) Calculate the number of cells in a hummingbird assuming the mass of an average cell is ten times the mass of a bacterium. (b) Making the same assumption, how many cells are there in a human?
8. Assuming one nerve impulse must end before another can begin, what is the maximum firing rate of a nerve in impulses per second?
approximation: an estimated value based on prior experience and reasoning
Selected Answers to Problems & Exercises
1. 2 × 10⁹ heartbeats
3. 2 × 10³¹ if an average human lifetime is taken to be about 70 years.
5. 50 atoms
7. (a) 10¹² cells/hummingbird (b) 10¹⁶ cells/human
Licenses and Attributions
CC licensed content, Shared previously
College Physics. Authored by: OpenStax College. Located at: http://cnx.org/contents/031da8d3-b525-429c-80cf-6c8ed997733a/College_Physics. License: CC BY: Attribution. License Terms: Located at License | CommonCrawl |
Detecting multipartite entanglement structure with minimal resources
You Zhou ORCID: orcid.org/0000-0003-0886-077X1,
Qi Zhao1,
Xiao Yuan ORCID: orcid.org/0000-0003-0205-65452 &
Xiongfeng Ma ORCID: orcid.org/0000-0002-9441-40061
npj Quantum Information volume 5, Article number: 83 (2019)
Recently, there have been tremendous developments in the number of controllable qubits in several quantum computing systems. For these implementations, it is crucial to determine the entanglement structure of the prepared multipartite quantum state as a basis for further information processing tasks. In reality, evaluation of a multipartite state is in general a very challenging task owing to the exponential increase of the Hilbert space with respect to the number of system components. In this work, we propose a systematic method using very few local measurements to detect multipartite entanglement structures based on the graph state—one of the most important classes of quantum states for quantum information processing. Thanks to the close connection between the Schmidt coefficient and quantum entropy in graph states, we develop a family of efficient witness operators to detect the entanglement between subsystems under any partition and hence the entanglement intactness. We show that the number of local measurements equals the chromatic number of the underlying graph, which is a constant number, independent of the number of qubits. In practice, the optimization problem involved in the witnesses can be challenging with large system size. For several widely used graph states, such as 1-D and 2-D cluster states and the Greenberger–Horne–Zeilinger state, by taking advantage of the area law of entanglement entropy, we derive analytical solutions for the witnesses, which only employ two local measurements. Our method offers a standard tool for entanglement-structure detection to benchmark multipartite quantum systems.
Entanglement is an essential resource for many quantum information tasks,1 such as quantum teleportation,2 quantum cryptography,3,4 nonlocality test,5 quantum computing,6 quantum simulation,7 and quantum metrology.8,9 Tremendous efforts have been devoted to the realization of multipartite entanglement in various systems,10,11,12,13,14,15,16,17,18,19,20 which provide the foundation for small- and medium-scale quantum information processing in the near future and will eventually pave the way to universal quantum computing. In order to build a quantum computing device, it is crucial to first witness multipartite entanglement. So far, genuine multipartite entanglement has been demonstrated and witnessed in experiments with a small number of qubits in different realizations, such as 14-ion-trap-qubit,10 12-superconducting-qubit,14 and 12-photon-qubit systems.17
In practical quantum hardware, the unavoidable coupling to the environment undermines the fidelity between the prepared state and the target one. Taking the Greenberger–Horne–Zeilinger (GHZ) state as an example, the state-of-the-art 10-superconducting-qubit13 and 12-photon17 preparations only achieve fidelities of 66.8% and 57.2%, respectively, which just exceed the 50% threshold for the certification of genuine entanglement. As the system size becomes larger, see for instance Google's 72-qubit chip (https://www.sciencenews.org/article/google-moves-toward-quantum-supremacy-72-qubit-computer) and IonQ's 79-qubit system (https://physicsworld.com/a/ion-based-commercial-quantum-computer-is-a-first/), it is an experimental challenge to create genuine multipartite entanglement. Nonetheless, even without the global genuine entanglement that the target state possesses, the experimentally prepared state might still have fewer-body entanglement within a subsystem and/or among distinct subsystems.21,22,23 The study of lower-order entanglement, which can be characterized by the detailed entanglement structures,24,25,26 is important for quantum hardware development, because it might reveal information on unwanted couplings to the environment and act as a benchmark of the underlying system. Moreover, the certified lower-order entanglement among several subsystems can still be useful for some quantum information tasks.
Considering an N-partite quantum system and its partition into m subsystems (m ≤ N), the entanglement structure indicates how the subsystems are entangled with each other. Each subsystem corresponds to a subset of the whole quantum system. For instance, we can choose each subsystem to be each party (i.e., m = N), and then the entanglement structure indicates the entanglement between the N parties. In some specific systems, such as distributed quantum computing,27 quantum networks28 or atoms in a lattice, the geometric configuration can naturally determine the system partition (see Fig. 1 for an illustration). In other cases, one might not need to specify the partition in the beginning. By going through all possible partitions, one can investigate higher level entanglement structures, such as entanglement intactness (non-separability),23,26 which quantifies how many pieces in the N-partite state are separated.
A distributed quantum computing scenario. Three remote (small) quantum processors, owned by Alice, Bob, and Charlie, are connected by quantum links. Each of them possesses a few qubits and performs quantum operations. In this case, the partition of the whole quantum system is determined by the locations of these processors. In order to perform global quantum operations involving multiple processors, entanglement among the processors is generally required. Thus, it is essential to benchmark the entanglement structure of this network
Multipartite entanglement-structure detection is generally a challenging task. Naively, one can perform state tomography on the system. As the system size increases, tomography becomes infeasible due to the exponential increase of the Hilbert space. Entanglement witness,29,30,31 on the other hand, provides an elegant solution to multipartite entanglement detection. In the literature, various witness operators have been proposed to detect different types of quantum states, generally requiring a polynomial number of measurements with respect to the system size.32,33 Interestingly, a constant number of local measurement settings has been shown to be sufficient to detect genuine entanglement for stabilizer states.34,35 Compared with genuine entanglement, multipartite entanglement structure still lacks a systematic exploration, due to the rich and complex structures of an N-partite system. Recently, positive results have been achieved for detecting entanglement structures of GHZ-like states with two measurement settings26 and the entanglement of a specific 1-D cluster state on the 16-qubit superconducting quantum processor ibmqx5 from the IBM cloud.36 Unfortunately, efficient entanglement-structure detection of general multipartite quantum states remains an open problem.
In this work, we propose a systematic method to witness the entanglement structure based on graph states. Note that the graph state37,38 is one of the most important classes of multipartite states for quantum information processing, such as measurement-based quantum computing,39,40 quantum routing and quantum networks,28 quantum error correction,41 and Bell nonlocality test.42 It is also related to the symmetry-protected topological order in condensed matter physics.43 Typical graph states include cluster states, GHZ state, and the states involved in the encoding process of the 5-qubit Steane code and the concatenated [7,1,3]-CSS-code.38
The main idea of our entanglement-structure detection method runs as follows. First, with the close connection between the maximal Schmidt coefficient and quantum entropy, we upper-bound the fidelity of fully- and biseparable states. These bounds are directly related to the entanglement entropy of the underlying graph state with respect to certain bipartition. Then, inspired by the genuine entanglement detection method,34 we lower-bound the fidelity between the unknown prepared state and the target graph state, with local measurements corresponding to the stabilizer operators of the graph state. Finally, by comparing theses fidelity bounds, we can witness the entanglement structures, such as the (genuine multipartite) entanglement between any subsystem partitions, and hence the entanglement intactness.
Our detection method for entanglement structures based on graph states is presented in Theorems 1 and 2, which only involves k local measurements. Here, k is the chromatic number of the corresponding graph, typically, a small constant independent of the number of qubits. For several common graph states, 1-D and 2-D cluster states and the GHZ state, we construct witnesses with only k = 2 local measurement settings, and derive analytical solutions to the optimization problem. These results are shown in Corollaries 1–4. The proofs of propositions and theorems are left in Methods, and the proofs of Corollaries 1–4 are presented in Supplementary Methods 1–4.
Definitions of multipartite entanglement structure
Let us start with the definitions of multipartite entanglement structure. Considering an N-qubit quantum system in a Hilbert space \({\cal{H}} = {\cal{H}}_{2}^{ \otimes N}\), one can partition the whole system into m nonempty disjoint subsystems Ai, i.e., \(\{ N\} \equiv \{ 1,2, \ldots ,N\} = \mathop {\bigcup}\nolimits_{i = 1}^{m} {A_i}\) with \({\cal{H}} = \mathop { \otimes }\nolimits_{i = 1}^{m} {\cal{H}}_{A_i}\). Denote this partition to be \({\cal{P}}_{m} = \{ A_{i}\}\) and omit the index m when it is clear from the context. Similar to definitions of regular separable states, here, we define fully- and biseparable states with respect to a specific partition \({\cal{P}}_{m}\) as follows.
Definition 1
An N-qubit pure state, \(\left| \mathrm{\Psi}_{f} \right\rangle \in \mathcal{H}\), is \(\mathcal{P}\)-fully separable, iff it can be written as
$$\left| {{\mathrm{\Psi }}_{f}} \right\rangle = \mathop { \otimes }\limits_{i = 1}^{m} \left| {{\mathrm{\Phi }}_{A_{i}}} \right\rangle .$$
An N-qubit mixed state ρf is \(\cal{P}\)-fully separable, iff it can be decomposed into a convex mixture of \(\cal{P}\)-fully separable pure states
$$\rho _f = \mathop {\sum}\limits_i {p_i} \left| {{\mathrm{\Psi }}_f^i} \right\rangle \left\langle {{\mathrm{\Psi }}_f^i} \right|,$$
with pi ≥ 0, ∀i and \(\mathop {\sum}\nolimits_i {p_i} = 1\).
Denote the set of \(\cal{P}\)-fully separable states to be \(S_{f}^{\cal{P}}\). Thus, if one can confirm that a state \(\rho \ \notin \ S_{f}^{\cal{P}}\), the state ρ possesses entanglement between the subsystems {Ai}. Such entanglement could nevertheless be weak, since it only requires at least two subsystems to be entangled. For instance, the state \(\left| {\mathrm{\Psi }} \right\rangle = \left| {{\mathrm{\Psi }}_{A_{1}A_{2}}} \right\rangle \otimes \mathop {\prod}\nolimits_{i = 3}^{m} {\left| {{\mathrm{\Psi }}_{A_{i}}} \right\rangle }\) is entangled in this sense, yet only the subsystems A1 and A2 are entangled. It is interesting to study the states where all the subsystems are genuinely entangled with each other. Below, we define this genuinely entangled state via \(\cal{P}\)-bi-separable states.
Definition 2

An N-qubit pure state, \(\left| {{\mathrm{\Psi }}_b} \right\rangle \in \cal{H}\), is \(\cal{P}\)-bi-separable, iff there exists a subsystem bipartition \(\{ A,\bar A\}\), where \(A = \mathop {\bigcup}\nolimits_{i \in I} {A_i}\) for some nonempty proper index subset I and \(\bar A = \{ N\} /A \ \ne \ \emptyset\), such that the state can be written as,
$$\left| {{\mathrm{\Psi }}_b} \right\rangle = \left| {{\mathrm{\Phi }}_A} \right\rangle \otimes \left| {{\mathrm{\Phi }}_{\bar A}} \right\rangle .$$
An N-qubit mixed state ρb is \(\cal{P}\)-bi-separable, iff it can be decomposed into a convex mixture of \(\cal{P}\)-bi-separable pure states,
$$\rho _b = \mathop {\sum}\limits_i {p_i} \left| {{\mathrm{\Psi }}_b^i} \right\rangle \left\langle {{\mathrm{\Psi }}_b^i} \right|,$$
with pi ≥ 0, ∀i and \(\mathop {\sum}\nolimits_i {p_i} = 1\), and each state \(\left| {{\mathrm{\Psi }}_b^i} \right\rangle\) can have different bipartitions.
Denote the set of bi-separable states to be \(S_{b}^{\cal{P}}\). It is not hard to see that \(S_{f}^{\cal{P}} \subset S_{b}^{\cal{P}}\).
Definition 3

A state ρ possesses \(\cal{P}\)-genuine entanglement iff \(\rho \ \notin \ S_{b}^{\cal{P}}\).
The three entanglement-structure definitions of \(\cal{P}\)-fully separable, \(\cal{P}\)-bi-separable, and \(\cal{P}\)-genuinely entangled states can be viewed as generalized versions of regular fully separable, bi-separable, and genuinely entangled states, respectively. In fact, when m = N, these pairs of definitions are the same.
Following the conventional definitions, a pure state |Ψm〉 is m-separable if there exists a partition \({\cal{P}}_m\) such that the state can be written in the form of Eq. (1). The m-separable state set, Sm, contains all the convex mixtures of the m-separable pure states, \(\rho _{m} = \mathop {\sum}\nolimits_{i} {p_{i}} \left| {{\mathrm{\Psi }}_{m}^{i}} \right\rangle \left\langle {{\mathrm{\Psi }}_{m}^{i}} \right|\), where the partition for each term \(\left| {{\mathrm{\Psi }}_m^i} \right\rangle\) need not be the same. It is not hard to see that Sm+1 ⊂ Sm. Meanwhile, define the entanglement intactness of a state ρ to be m, iff ρ ∉ Sm+1 and ρ ∈ Sm. Thus, if ρ ∉ Sm+1, the intactness is at most m, i.e., the non-separability can serve as an upper bound on the intactness. When the entanglement intactness is 1, the state is genuinely entangled; and when the intactness is N, the state is fully separable. See Fig. 2 for the relationships among these definitions.
Venn diagrams to illustrate relationships of several separable sets. a To illustrate the separability definitions based on a given partition, we consider a tripartition \({\cal{P}}_{3} = \{ A_{1},A_{2},A_{3}\}\) here. The \(\cal{P}\)-fully separable state set \(S_{f}^{\cal{P}}\) is at the center, contained in three bi-separable sets with different bipartitions. The \(\cal{P}\)-bi-separable state set \(S_{b}^{\cal{P}}\) is the convex hull of these three sets. A state possesses \(\cal{P}\)-genuine entanglement if it is outside of \(S_{b}^{\cal{P}}\). Note that this becomes the case of three-qubit entanglement when each party Ai contains one qubit.22 b Separability hierarchy of N-qubit state with Sm+1 ⊂ Sm and 2 ≤ m ≤ N. The m-separable state set Sm is the convex hull of separable states with different m-partitions. Thus \(S_{f}^{{\cal{P}}_{m}} \subset S_{m}\), and one can investigate Sm by considering all \(S_{f}^{{\cal{P}}_{m}}\). A state possesses genuine multipartite entanglement (GME) if it is outside of S2, and is (fully) N-separable if it is in SN
By definition, one can see that if a state is \({\cal{P}}_m\)-fully separable, it must be m-separable. Of course, an m-separable state might not be \({\cal{P}}_m\)-fully separable, for example, if the partition is not properly chosen. In experiments, it is important to identify the partition under which the system is fully separated. With the partition information, one can quickly identify the links where entanglement is broken. Moreover, for some systems, such as distributed quantum computing, multiple quantum processors, and quantum networks, a natural partition exists due to the geometric configuration of the system. Therefore, it is practically interesting to study entanglement structure under partitions.
Entanglement-structure detection method
Let us first recap the basics of graph states and the stabilizer formalism.37,38 In a graph, denoted by G = (V, E), there are a vertex set V = {N} and an edge set E ⊂ [V]². Two vertexes i, j are called neighbors if there is an edge (i, j) connecting them. The set of neighbors of the vertex i is denoted as Ni. A graph state is defined on a graph G, where the vertexes represent the qubits initialized in the state \(\left| + \right\rangle = (\left| 0 \right\rangle + \left| 1 \right\rangle )/\sqrt 2\) and the edges represent a controlled-Z (C-Z) operation, \({\mathrm{CZ}}^{\{ i,j\} } = \left| 0 \right\rangle _i\left\langle 0 \right| \otimes {\mathbb{I}}_j + \left| 1 \right\rangle _i\left\langle 1 \right| \otimes Z_j\), between two neighboring qubits. Then the graph state can be written as,
$$\left| G \right\rangle = \mathop {\prod}\limits_{(i,j) \in E} {{\mathrm{CZ}}^{\{ i,j\} }} \left| + \right\rangle ^{ \otimes N}.$$
Denote the Pauli operators on qubit i to be Xi, Yi, Zi. An N-partite graph state can also be uniquely determined by N independent stabilizers,
$$S_i = X_i\mathop { \otimes }\limits_{j \in N_i} Z_j,$$
which commute with each other and Si|G〉 = |G〉, ∀i. That is, the graph state is the unique eigenstate with eigenvalue of +1 for all the N stabilizers. Here, Si contains identity operators for all the qubits that do not appear in Eq. (6). As a result, a graph state can be written as a product of stabilizer projectors,
$$\left| G \right\rangle \left\langle G \right| = \mathop {\prod}\limits_{i = 1}^N {\frac{{S_i + {\mathbb {I}}}}{2}} .$$
The fidelity between ρ and a graph state |G〉 can be obtained by measuring all possible products of stabilizers. However, as there are exponentially many terms in Eq. (7), this process is generally inefficient for large systems. Hereafter, we consider connected graphs, since the corresponding graph states are genuinely entangled.
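To make the stabilizer formalism of Eqs. (5)–(7) concrete, the following minimal numpy sketch (my own illustration, not code from the paper) constructs the 3-qubit path-graph state 1–2–3 and checks that it is stabilized by S1 = X1Z2, S2 = Z1X2Z3, and S3 = Z2X3; the qubit ordering and helper functions are assumptions of this sketch.

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])
plus = np.ones(2) / np.sqrt(2)
kron = lambda ops: reduce(np.kron, ops)

def cz(n, i, j):
    """Controlled-Z between qubits i and j (0-indexed, qubit 0 = leftmost tensor factor)."""
    U = np.eye(2 ** n)
    for b in range(2 ** n):
        if (b >> (n - 1 - i)) & 1 and (b >> (n - 1 - j)) & 1:
            U[b, b] = -1
    return U

n = 3
psi = kron([plus] * n)                  # |+>^(x3), the input of Eq. (5)
for (i, j) in [(0, 1), (1, 2)]:         # edges of the path graph 1-2-3
    psi = cz(n, i, j) @ psi

stabilizers = [kron([X, Z, I2]),        # S_1 = X_1 Z_2
               kron([Z, X, Z]),         # S_2 = Z_1 X_2 Z_3
               kron([I2, Z, X])]        # S_3 = Z_2 X_3
print([np.allclose(S @ psi, psi) for S in stabilizers])   # [True, True, True]
```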
Now, we propose a systematic method to detect entanglement structures based on graph states. First, we give fidelity bounds between separable states and graph states as the following proposition.
Proposition 1

Given a graph state |G〉 and a partition \({\cal{P}} = \{ A_{i}\}\), the fidelity between |G〉 and any \(\cal{P}\)-fully separable state is upper bounded by
$${\mathrm{Tr}}\left( {\left| G \right\rangle \left\langle G \right|\rho _f} \right) \le \min _{\{ A,\bar A\} }2^{ - S(\rho _A)};$$
and the fidelity between |G〉 and any \(\cal{P}\)-bi-separable state is upper bounded by
$${\mathrm{Tr}}(\left| G \right\rangle \left\langle G \right|\rho _b) \le \max _{\{ A,\bar A\} }2^{ - S(\rho _A)},$$
where \(\{ A,\bar A\}\) is a bipartition of {Ai}, and S(ρA) = −Tr[ρA log2 ρA] is the von Neumann entropy of the reduced density matrix \(\rho _A = {\mathrm{Tr}}_{\bar A}(\left| G \right\rangle \left\langle G \right|)\).
The bound in Eq. (9) is tight, i.e., there always exists a \(\cal{P}\)-bi-separable state that saturates it. The bound in Eq. (8) may not be tight for some partition \(\cal{P} = \{ A_{\it{i}}\}\) and some graph state |G〉. In addition, we remark that to extend Theorem 1 from graph states to a general state |Ψ〉, one should substitute the entropy in the bounds of Eqs. (8) and (9) with the min-entropy S∞(ρA) = −log2 λ1, with λ1 the largest eigenvalue of \(\rho _A = {\mathrm{Tr}}_{\bar A}(\left| \Psi \right\rangle \left\langle \Psi \right|)\).
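As a small worked instance of Proposition 1 (my own example, not from the paper), the sketch below computes S(ρA) and the bound 2^{−S(ρA)} for the 3-qubit path-graph state, with A taken to be the first qubit; the amplitude vector is the state constructed in the sketch above.

```python
import numpy as np

# Amplitudes of the 3-qubit path-graph state in the computational basis
# (qubit 1 as the leftmost bit), as produced by the construction sketched above.
psi = np.array([1, 1, 1, -1, 1, 1, -1, 1]) / np.sqrt(8)

M = psi.reshape(2, 4)                 # rows: basis of A = {qubit 1}; columns: basis of the rest
rho_A = M @ M.conj().T                # reduced density matrix on A
evals = np.linalg.eigvalsh(rho_A)
S_A = -sum(p * np.log2(p) for p in evals if p > 1e-12)
print(S_A, 2 ** (-S_A))               # 1.0 and 0.5: any state separable across this cut
                                      # has fidelity at most 1/2 with the graph state
```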
Next, we propose an efficient method to lower-bound the fidelity between an unknown prepared state and the target graph state. A graph is k-colorable if one can divide the vertex set into k disjoint subsets \({\bigcup} {V_l} = V\) such that no two vertexes in the same subset are connected. The smallest such number k is called the chromatic number of the graph. (Note that since colorability is a property of the graph, not the state, one may reduce the number of measurement settings by local Clifford operations.38) We define the stabilizer projector of each subset Vl as
$$P_l = \mathop {\prod}\limits_{i \in V_l} {\frac{S_i + {\mathbb{I}}}{2}} ,$$
where the Si are the stabilizers of |G〉 with i in the subset Vl. The expectation value of each Pl can be obtained with one local measurement setting \(\mathop { \otimes }\nolimits_{i \in V_l} X_i\mathop { \otimes }\nolimits_{j \in V/V_l} Z_j\). Then, we can propose a fidelity evaluation scheme with k local measurement settings, as the following proposition.
Proposition 2

For a graph state \(\left| G \right\rangle \left\langle G \right|\) and the projectors Pl defined in Eq. (10), the following inequality holds,
$$\left| G \right\rangle \left\langle G \right| \ge \mathop {\sum}\limits_{l = 1}^k {P_l} - (k - 1){\mathbb{I}},$$
where A ≥ B indicates that (A − B) is positive semidefinite.
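The operator inequality in Proposition 2 can be verified numerically for small cases. The following sketch (my own check, not from the paper) confirms Eq. (11) for the 2-colorable 3-qubit path graph, with color classes V1 = {1, 3} and V2 = {2}.

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])
kron = lambda ops: reduce(np.kron, ops)

S1, S2, S3 = kron([X, Z, I2]), kron([Z, X, Z]), kron([I2, Z, X])
Id = np.eye(8)
P1 = ((S1 + Id) / 2) @ ((S3 + Id) / 2)   # projector for color class V1 = {1, 3}
P2 = (S2 + Id) / 2                       # projector for color class V2 = {2}

psi = np.array([1, 1, 1, -1, 1, 1, -1, 1]) / np.sqrt(8)   # the path-graph state
G = np.outer(psi, psi)

gap = G - (P1 + P2 - Id)                 # Eq. (11) with k = 2
print(np.linalg.eigvalsh(gap).min() >= -1e-12)            # True: the difference is positive semidefinite
```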
Note that the k = 2 case of Proposition 2 has also been studied in the literature.34 Combining Propositions 1 and 2, we propose entanglement-structure witnesses with k local measurement settings, as presented in the following theorem.
Theorem 1

Given a partition \({\cal{P}} = \{ A_{i}\}\), the operator \(W_{f}^{\cal{P}}\) can witness non-\(\cal{P}\)-full separability (entanglement),
$$W_{f}^{\cal{P}} = \left( {k - 1 + \min _{\{ A,\bar{A}\} }2^{ - S(\rho _A)}} \right){\mathbb{I}} - \mathop {\sum}\limits_{l = 1}^{k} {P_{l}} ,$$
with \(\langle W_{f}^{\cal{P}}\rangle \ge 0\) for all \(\cal{P}\)-fully separable states; and the operator \(W_{b}^{\cal{P}}\) can witness \(\cal{P}\)-genuine entanglement,
$$W_{b}^{\cal{P}} = \left( {k - 1 + \max _{\{ A,\bar A\} }2^{ - S(\rho _{A})}} \right){\Bbb I} - \mathop {\sum}\limits_{l = 1}^{k} {P_{l}} ,$$
with \(\langle W_{b}^{\cal{P}}\rangle \ge 0\) for all \(\cal{P}\)-bi-separable states, where \(\{ A,\bar A\}\) is a bipartition of {Ai}, \(\rho _{A} = {\mathrm{Tr}}_{\bar A}(\left| G \right\rangle \left\langle G \right|)\), and the projectors Pl are defined in Eq. (10).
The proposed entanglement-structure witnesses have several favorable features. First, given an underlying graph state, the implementation of the witnesses is the same for different partitions. This feature allows us to study different entanglement structures in one experiment. Note that the witness operators in Eqs. (12) and (13) can be divided into two parts: the measurement results of Pl obtained from the experiment depend on the prepared unknown state and are independent of the partition; the bounds, \(k - 1 + \min {\mkern 1mu} (\max )_{\{ A,\bar A\} }2^{ - S(\rho _A)}\), on the other hand, depend on the partition and are independent of the experiment. Hence, in the data postprocessing of the measurement results of Pl, we can study various entanglement structures for different partitions by calculating the corresponding bounds analytically or numerically.
Second, besides investigating the entanglement structure among all the subsystems, one can also employ the same experimental setting to study that of a subset of the subsystems, by performing different data postprocessing. For example, suppose the graph G is partitioned into three parts, say A1, A2, and A3, and only the entanglement between subsystems A1 and A2 is of interest. One can construct new witness operators with projectors \(P_{l}^{\prime}\) by replacing all the Pauli operators on the qubits in A3 in Eq. (10) with identities. Such measurement results can be obtained by processing the measurement results of the original Pl. Then the entanglement between A1 and A2 can be detected via Theorem 1 with the projectors \(P_{l}^{\prime}\) and the corresponding bounds of the graph state \(\left| {G_{A_{1}A_{2}}} \right\rangle\). Details are discussed in Supplementary Note 1.
Third, when each subsystem Ai contains only one qubit, that is, m = N, the witnesses in Theorem 1 become the conventional ones. It turns out that Eq. (13) is the same for all the graph states under the N-partition \({\cal{P}}_{N}\), as shown in the following corollary. Note that a special case of the corollary, namely the genuine entanglement witness for the GHZ and 1-D cluster states, has been studied in the literature.34
Corollary 1
The operator \(W_{b}^{{\cal{P}}_{N}}\) can witness genuine multipartite entanglement,
$$W_{b}^{{\cal{P}}_{N}} = \left( {k - \frac{1}{2}} \right){\mathbb {I}} - \mathop {\sum}\limits_{l = 1}^{k} {P_{l}} ,$$
with \(\langle W_{b}^{{\cal{P}}_{N}}\rangle \ge 0\) for all bi-separable states, where Pl is defined in Eq. (10) for any graph state.
Fourth, the witness in Eq. (12) can be naturally extended to identify non-m-separability, by investigating all possible partitions \({\cal{P}}_{m}\) with fixed m. In fact, according to the definition of m-separable states and Eq. (8), the fidelity between any m-separable state ρm and the graph state |G〉 can be upper bounded by \({\mathrm{max}}_{{\cal{P}}_{m}}{\mathrm{min}}_{\{ A,\bar{A}\} }2^{ - S(\rho _{A})}\), where the maximization is over all possible partitions with m subsystems. As a result, we have the following theorem on the non-m-separability.
Theorem 2
The operator Wm can witness non-m-separability,
$$W_{m} = \left( {k - 1 + \max _{{\cal{P}}_{m}}\min _{\{ A,\bar A\} }2^{ - S(\rho _{A})}} \right){\mathbb{I}} - \mathop {\sum}\limits_{l = 1}^{k} {P_{l}} ,$$
with 〈Wm〉 ≥ 0 for all m-separable states, where the maximization is taken over all possible partitions \({\cal{P}}_{m}\) with m subsystems, the minimization is taken over all bipartitions of \({\cal{P}}_{m}\), \(\rho _A = {\mathrm{Tr}}_{\bar A}(\left| G \right\rangle \left\langle G \right|)\), and the projectors Pl are defined in Eq. (10).
The robustness analysis of the witnesses proposed in Theorems 1 and 2 under white noise is presented in the Methods; it shows that our entanglement-structure witnesses are quite robust to noise. Moreover, the optimization problems in Theorems 1 and 2 are generally hard, since there are exponentially many different possible partitions. Surprisingly, for several widely used types of graph states, such as 1-D and 2-D cluster states and the GHZ state, we find analytical solutions to the optimization problem, as shown in the following section.
Applications to several typical graph states
In this section, we apply the general entanglement detection method proposed above to several typical graph states: the 1-D and 2-D cluster states and the GHZ state. Note that for these states the corresponding graphs are all 2-colorable. Thus, we can realize the witnesses with only two local measurement settings. For clarity, the vertices in the subsets V1 and V2 are associated with red and blue colors, respectively, as shown in Fig. 3. We write the stabilizer projectors defined in Eq. (10) for the two subsets as,
$$\begin{array}{l}P_1 = \mathop {\prod}\limits_{{\mathrm{red}}\,i} {\frac{{S_i + {\mathbb{I}}}}{2}} ,\\ P_2 = \mathop {\prod}\limits_{{\mathrm{blue}}\,i} {\frac{{S_i + {\mathbb{I}}}}{2}} .\end{array}$$
The more general case with k-chromatic graph states is presented in Supplementary Note 1.
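As an illustration of Eq. (16) and of the operator inequality used in Proposition 2, the following minimal numerical sketch (ours, not the authors' code; it assumes only numpy) builds the two stabilizer projectors for a 4-qubit 1-D cluster state and checks that their product is the rank-1 projector |G⟩⟨G|:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

def kron_all(ops):
    return reduce(np.kron, ops)

n = 4
edges = [(0, 1), (1, 2), (2, 3)]        # path graph: 4-qubit 1-D cluster state
neighbors = {i: [] for i in range(n)}
for a, b in edges:
    neighbors[a].append(b)
    neighbors[b].append(a)

def stabilizer(i):
    # generator S_i = X_i * prod_{j in N(i)} Z_j
    ops = [I2] * n
    ops[i] = X
    for j in neighbors[i]:
        ops[j] = Z
    return kron_all(ops)

def projector(vertices):
    # P = prod_{i in vertices} (S_i + I)/2, cf. Eq. (16)
    P = np.eye(2 ** n)
    for i in vertices:
        P = P @ (stabilizer(i) + np.eye(2 ** n)) / 2
    return P

P1 = projector([0, 2])      # "red" colour class V1
P2 = projector([1, 3])      # "blue" colour class V2
G = P1 @ P2                 # equals |G><G|, since all N stabilizers are enforced

print(np.isclose(np.trace(G), 1.0), np.allclose(G @ G, G))   # rank-1 projector
# operator inequality of Proposition 2 for k = 2: smallest eigenvalue is >= 0
w = np.linalg.eigvalsh(G + (2 - 1) * np.eye(2 ** n) - (P1 + P2))
print(w.min() > -1e-9)
```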
Fig. 3 Graphs of a the 1-D cluster state |C1〉, b the 2-D cluster state |C2〉, and c the GHZ state |GHZ〉. Note that the graph-state form of the GHZ state is equivalent to its canonical form, \((\left| 0 \right\rangle ^{ \otimes N} + \left| 1 \right\rangle ^{ \otimes N})/\sqrt 2\), up to local unitary operations.
We start with a 1-D cluster state |C1〉 with stabilizer projectors in the form of Eq. (16). Consider an example of a tripartition \({\cal{P}}_{3} = \{ A_{1},A_{2},A_{3}\}\), as shown in Fig. 3a; there are three ways to divide the three subsystems into two sets, i.e., \(\{ A,\bar A\}\) = {A1, A2A3}, {A2, A1A3}, {A3, A1A2}. It is not hard to see that the corresponding entanglement entropies are \(S(\rho _{A_{1}}) = S(\rho _{A_{3}}) = 1\) and \(S(\rho _{A_{2}}) = 2\). Note that in the calculation, each broken edge contributes 1 to the entropy, which is a manifestation of the area law of entanglement entropy.44 According to Theorem 1, the operators to witness the \({\cal{P}}_{3}\)-entanglement structure are given by,
$$\begin{array}{l}W_{f,C_1}^{{\cal{P}}_3} = \frac{5}{4}{\mathbb{I}} - (P_1 + P_2),\\ W_{b,C_1}^{{\cal{P}}_3} = \frac{3}{2}{\mathbb{I}} - (P_1 + P_2),\end{array}$$
where the two projectors P1 and P2 are defined in Eq. (16) with the graph of Fig. 3a.
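Continuing the numerical sketch above (again our own illustration, not the authors' code), one can evaluate the two tripartition witnesses numerically. The exact qubit numbers of Fig. 3a are not reproduced here; instead, a 4-qubit chain split as {1}, {2, 3}, {4} gives the same entropies (1, 2, 1) and hence the same bounds 5/4 and 3/2:

```python
import numpy as np   # P1, P2 and G are reused from the previous snippet

d = 2 ** 4
# tripartition {1}, {2,3}, {4} of the 4-qubit chain: S(rho_A) = 1, 2, 1,
# so min 2^{-S} = 1/4 and max 2^{-S} = 1/2, matching the entropies quoted above
W_f = (1 + 0.25) * np.eye(d) - (P1 + P2)     # non-full-separability witness
W_b = (1 + 0.50) * np.eye(d) - (P1 + P2)     # genuine-entanglement witness

rho_ideal = G
rho_noisy = 0.7 * G + 0.3 * np.eye(d) / d    # 30% white noise
for name, rho in [("ideal", rho_ideal), ("noisy", rho_noisy)]:
    print(name,
          float(np.trace(W_f @ rho)),        # negative: entanglement detected
          float(np.trace(W_b @ rho)))        # negative: genuine entanglement detected
# expected values: ideal -0.75 and -0.5; noisy -0.30 and -0.05 (all negative)
```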
Next, we take as an example a 2-D cluster state |C2〉 defined on a 5 × 5 lattice and consider a tripartition, as shown in Fig. 3b. Similar to the 1-D cluster state case with the area law, the corresponding entanglement entropies are \(S(\rho _{A_{1}}) = S(\rho _{A_{3}}) = 5\) and \(S(\rho _{A_{2}}) = 4\). According to Theorem 1, the operators to witness the \({\cal{P}}_{3}\)-entanglement structure are given by,
$$\begin{array}{l}W_{f,C_2}^{{\cal{P}}_3} = \frac{{33}}{{32}}{\mathbb{I}} - (P_1 + P_2),\\ W_{b,C_2}^{{\cal{P}}_3} = \frac{{17}}{{16}}{\mathbb{I}} - (P_1 + P_2),\end{array}$$
where the two projectors P1 and P2 are defined in Eq. (16) with the graph of Fig. 3b. Similar analysis works for other partitions and other graph states.
Now, we consider the case where each subsystem Ai contains exactly one qubit, \({\cal{P}}_{N}\). Then, the witness in Eq. (12) becomes the conventional one, as shown in the following corollary.
Corollary 2
The operator \(W_{f,C}^{{\cal{P}}_{N}}\) can witness non-fully separability (entanglement),
$$W_{f,C}^{\cal{P}_N} = (1 + 2^{ - \left\lfloor {\frac{N}{2}} \right\rfloor }){\mathbb{I}} - (P_1 + P_2),$$
with \(\langle W_{f,C}^{{\cal{P}}_{N}}\rangle \ge 0\) for all fully separable states, where the two projectors P1 and P2 are defined in Eq. (16) with the stabilizers of any 1-D or 2-D cluster state.
Here, we only show the cases of 1-D and 2-D cluster states. We conjecture that the witness holds for any cluster state (such as 3-D). For a general graph state, on the other hand, the corollary does not hold. In fact, we have a counterexample: the GHZ state shown in Fig. 3c. It is not hard to see that for any GHZ state, the entanglement entropy is given by,
$$S(\rho _A^{GHZ}) = 1,\;\;\;\forall \{ A,\ \bar A\} .$$
Then, Eqs. (12) and (13) yield the same witnesses. That is, the witness constructed by Theorem 1 for the GHZ state can only certify whether the state is genuinely entangled or not.
Following Theorem 2, one can fix the number of subsystems m and investigate all possible partitions to detect non-m-separability. The optimization problem can be solved analytically for the 1-D and 2-D cluster states, as shown in Corollaries 3 and 4, respectively.
Corollary 3
The operator \(W_{m,C_{\mathrm{1}}}\) can witness non-m-separability,
$$W_{m,C_1} = (1 + 2^{ - \left\lfloor {\frac{m}{2}} \right\rfloor }){\mathbb{I}} - (P_1 + P_2),$$
with \(\langle W_{m,C_1}\rangle \ge 0\) for all m-separable states, where the two projectors P1 and P2 are defined in Eq. (16) with the stabilizers of a 1-D cluster state.
In particular, when m = 2 and m = N, \(W_{m,C_{\mathrm{1}}}\) becomes the entanglement witnesses in Eqs. (14) and (19), respectively.
Corollary 4
The operator \(W_{m,C_{\mathrm{2}}}\) can witness non-m-separability for N ≥ m(m − 1)/2,
$$W_{m,C_2} = \left( {1 + 2^{ - \left\lceil {\frac{{ - 1 + \sqrt {1 + 8(m - 1)} }}{2}} \right\rceil }} \right){\mathbb{I}} - (P_1 + P_2),$$
with \(\langle W_{m,C_2}\rangle \ge 0\) for all m-separable states, where the two projectors P1 and P2 are defined in Eq. (16) with the stabilizers of a 2-D cluster state.
We remark that the witnesses constructed in Corollaries 1, 2, and 3 are tight. Take the witness \(W_{m,C_{\mathrm{1}}}\) in Corollary 3 as an example: there exists an m-separable state ρm that saturates the bound, i.e., \({\mathrm{Tr}}(\rho _mW_{m,C_1}) = 0\). In addition, for m ≤ 5, the witness \(W_{m,C_{\mathrm{2}}}\) in Corollary 4 is also tight. Detailed discussions are presented in Supplementary Methods 1–4.
In this work, we propose a systematic method to construct efficient witnesses to detect entanglement structures based on graph states. Our method offers a standard tool for entanglement-structure detection and multipartite quantum system benchmarking. The entanglement-structure definitions and the associated witness method may further help to detect novel quantum phases, by investigating the entanglement properties of the ground states of related Hamiltonians.43
The witnesses proposed in this work can be directly generalized to stabilizer states,6,45 which are equivalent to graph states up to local Clifford operations.38 It is interesting to extend the method to more general multipartite quantum states, such as the hyper-graph state46 and the tensor network state.47 Meanwhile, the generalization to the neural network state48 is also intriguing, since this kind of ansatz is able to represent broader quantum states with a volume law of entanglement entropy,49 and it is a fundamental building block for potential artificial intelligence applications. In addition, one may utilize the proposed witness method to detect other multipartite entanglement properties, such as the entanglement depth and width,50,51 in analogy to the m-separability studied in this work. Moreover, one can also consider self-testing scenarios, such as (measurement-) device-independent settings,52,53,54 which can help to manifest the entanglement structures with fewer assumptions on the devices. Furthermore, translating the proposed entanglement witnesses into a probabilistic scheme is also interesting.55,56
Proof of Proposition 1
Proof. First, let us prove the \(\cal{P}\)-bi-separable state case in Eq. (9). Since the \(\cal{P}\)-bi-separable state set \(S_{b}^{\cal{P}}\) is convex, one only needs to consider the fidelity |〈Ψb|G〉|2 of the pure state |Ψb〉 defined in Eq. (3). It is known that the maximal value of the fidelity equals the largest Schmidt coefficient of |G〉 with regard to the bipartition \(\{ A,\bar {A}\}\),57 i.e.,
$$\max _{\left| {{\mathrm{\Psi }}_b} \right\rangle }|\left\langle {{\mathrm{\Psi }}_b} \right|G\rangle |^2 = \lambda _1,$$
with the Schmidt decomposition \(\left| G \right\rangle = \mathop {\sum}\nolimits_{i = 1}^d {\sqrt {\lambda _i} } \left| {{\mathrm{\Phi }}_i} \right\rangle _A\left| {{\mathrm{\Phi }}_i^\prime } \right\rangle _{\bar A}\) and λ1 ≥ λ2 ≥ ⋯ ≥ λd. For a general graph state |G〉, the spectrum of any reduced density matrix ρA is flat, i.e., λ1 = λ2 = ⋯ = λd, with d being the rank of ρA.58 As a result, one has
$$\begin{array}{l}S(\rho _{A}) = \log _{2}d,\\ \lambda _{i} = \frac{1}{d} = 2^{ - S(\rho _{A})}.\end{array}$$
To get an upper bound, one should maximize \(2^{ - S(\rho _{A})}\) over all possible subsystem bipartitions, which yields Eq. (9).
Second, we prove the \(\cal{P}\)-fully separable state case in Eq. (8). Similarly, we only need to upper-bound the fidelity of the pure state |Ψf〉 defined in Eq. (1), due to the convexity of the \(\cal{P}\)-fully separable state set \(S_{\mathrm{f}}^{\cal{P}}\). From the proof of Eq. (9) above, we know that the fidelity of a \(\cal{P}\)-bi-separable state satisfies the bound |〈Ψb|G〉|2 ≤ \(2^{ - S\left(\rho _{A}\right)}\) for a given subsystem bipartition \(\{ A,\bar {A}\}\). It is not hard to see that these bounds all hold for |Ψf〉, since \(\mathrm{S}_{f}^{\cal{P}} \subset S_{b}^{\cal{P}}\). Thus, one can obtain the tightest bound by minimizing over all possible bipartitions, which finally gives Eq. (8).
The entanglement entropy S(ρA) equals the rank of the adjacency matrix of the underlying bipartite graph between A and \(\bar A\), which can be efficiently calculated. Details are discussed in Supplementary Note 1. While the optimization problems can be computationally hard due to the exponential number of possible bipartitions, they can be solved directly when the number of subsystems m is not too large. In addition, we can always obtain an upper bound on the minimization by considering only specific partitions. Analytical calculation of the optimization is possible for graph states with certain symmetries, such as the 1-D and 2-D cluster states and the GHZ state.
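The rank statement above can be made concrete with a small sketch (ours, not the authors' code): for a graph state, S(ρA) is read off as the rank over GF(2) of the block of the adjacency matrix connecting A to its complement, and the bound of Eq. (8) is obtained by minimizing over the bipartitions of a given partition. The 6-qubit path and the tripartition below are our own example:

```python
import numpy as np
from itertools import combinations

def gf2_rank(M):
    # rank of a 0/1 matrix over GF(2) by Gaussian elimination
    M = (M.copy() % 2).astype(int)
    rank, rows, cols = 0, M.shape[0], M.shape[1]
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]          # bring the pivot row up
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]                      # eliminate column c elsewhere
        rank += 1
    return rank

def entropy_of_cut(adj, A):
    # S(rho_A) = GF(2) rank of the adjacency block between A and its complement
    A = sorted(A)
    B = [v for v in range(adj.shape[0]) if v not in A]
    return gf2_rank(adj[np.ix_(A, B)])

# 1-D cluster state on 6 qubits (path graph), tripartition {0,1}, {2,3}, {4,5}
n = 6
adj = np.zeros((n, n), dtype=int)
for i in range(n - 1):
    adj[i, i + 1] = adj[i + 1, i] = 1

parts = [{0, 1}, {2, 3}, {4, 5}]
bounds = []
for r in range(1, len(parts)):
    for combo in combinations(range(len(parts)), r):
        A = set().union(*[parts[i] for i in combo])
        bounds.append(2.0 ** (-entropy_of_cut(adj, A)))
print(min(bounds))   # 0.25: the middle block has two broken edges, so S = 2
```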
Proof of Proposition 2
Proof. As shown in the Main Text, a graph state |G〉 can be written in the following form
$$\left| G \right\rangle \left\langle G \right| = \mathop {\prod}\limits_{i = 1}^N {\frac{{S_i + {\mathbb{I}}}}{2}} = \mathop {\prod}\limits_{l = 1}^k {P_l} .$$
Accordingly, Eq. (11) in Proposition 2 becomes,
$$\left[ {\mathop {\prod}\limits_{l = 1}^k {P_l} + (k - 1){\mathbb{I}}} \right] - \mathop {\sum}\limits_{l = 1}^k {P_l} \ge 0.$$
Note that the projectors Pl commute with each other; thus we can prove Eq. (26) on each subspace determined by the eigenvalues of all Pl. For the subspace where the eigenvalues of all Pl are 1, the inequality (1 + k − 1) − k ≥ 0 holds. For the subspace where exactly one of the Pl takes the value 0, the inequality (0 + k − 1) − (k − 1) ≥ 0 holds. Moreover, for the subspaces in which more than one Pl takes the value 0, the inequality also holds. This completes the proof.
Proofs of Theorems 1 and 2
Proof of Theorem 1
Proof. The proof combines Propositions 1 and 2. Here we only show the proof of Eq. (12); Eq. (13) can be proven in a similar way. Specifically, one needs to show that any \(\cal{P}\)-fully separable state satisfies \(\langle W_{f}^{\cal{P}}\rangle \ge 0\), that is,
$$\begin{array}{lll}{\mathrm{Tr}}\left\{ {\mathop {\sum}\limits_{l = 1}^k {P_l} \rho _f} \right\} &\le & {\mathrm{Tr}}\left\{ {\left[ {(k - 1){\mathbb{I}} + \left| G \right\rangle \left\langle G \right|} \right]\rho _f} \right\}\\ &\le & (k - 1) + \min _{\{ A,\bar A\} }2^{ - S(\rho _A)}.\end{array}$$
Here the first and second inequalities follow from Propositions 2 and 1, respectively.
Proof of Theorem 2
Proof. With Eq. (8) one can bound the fidelity between any \(\cal{P}\)-fully separable state and a graph state |G〉. The m-separable state set Sm contains all states ρm that can be written as a convex mixture of pure m-separable states, \(\rho _m = \mathop {\sum}\nolimits_i {p_i} \left| {{\mathrm{\Psi }}_m^i} \right\rangle \left\langle {{\mathrm{\Psi }}_m^i} \right|\), where the partition for each constituent \(\left| {{\mathrm{\Psi }}_m^i} \right\rangle\) need not be the same. Hence one can bound the fidelity between ρm and a graph state |G〉 by investigating all possible partitions, i.e.,
$${\mathrm{Tr}}(\left| G \right\rangle \left\langle G \right|\rho _{m}) \le \max _{{\cal{P}}_{m}}\min _{\{ A,\bar {A}\} }2^{ - S(\rho _{A})},$$
where the maximization is taken over all possible partitions \({\cal{P}}_{m}\) with m subsystems, and the minimization is taken over all bipartitions of \({\cal{P}}_{m}\). Then, as in Eq. (27), combining Eqs. (11) and (28) completes the proof.
The optimization problem in Theorem 2 over the partitions is generally hard, since there are about \(m^N/m!\) possible ways to partition N qubits into m subsystems. For example, when N is large (say, on the order of 70 qubits), the number of different partitions is exponentially large even for a small separability number m. Surprisingly, for several widely used types of graph states, such as 1-D and 2-D cluster states and the GHZ state, we find analytical solutions to the optimization problem, as shown in the Corollaries in the main text.
Robustness of entanglement-structure witnesses
In this section, we discuss the robustness of the proposed witnesses in Theorems 1 and 2. In practical experiments, the prepared state ρ deviates from the target graph state |G〉 due to some nonnegligible noise. Here we utilize the following white noise model to quantify the robustness of the witnesses.
$$\rho = (1 - p_{{\mathrm{noise}}})\left| G \right\rangle \left\langle G \right| + p_{{\mathrm{noise}}}\frac{{\mathbb{I}}}{{2^N}},$$
which is a mixture of the original state |G〉 and the maximally mixed state with coefficient pnoise. We will find the largest plimit, such that the witness can detect the corresponding entanglement structure when pnoise < plimit. Thus plimit reflects the robustness of the witness.
Let us first consider the entanglement witness \(W_{f}^{\cal{P}}\) in Eq. (12) of Theorem 1. For clarity, we denote \(C_{{\mathrm{min}}} = \min _{\{ A,\bar {A}\} }2^{ - S(\rho _{A})}\). Inserting the state of Eq. (29) into the witness, one gets,
$$\begin{array}{lll}{\mathrm{Tr}}(W_f^{\cal{P}}\rho ) &=& {\mathrm{Tr}}\left\{ {\left[ {\left( {k - 1 + C_{{\mathrm{min}}}} \right){\mathbb{I}} - \mathop {\sum}\limits_{l = 1}^k {P_l} } \right]} \right.\\ &&\left. \times{\left[ {p_{{\mathrm{noise}}}\frac{{\mathbb{I}}}{{2^N}} + (1 - p_{{\mathrm{noise}}})\left| G \right\rangle \left\langle G \right|} \right]} \right\}\\ &=& p_{{\mathrm{noise}}}\left( {k - 1 + C_{{\mathrm{min}}} - 2^{ - N}\mathop {\sum}\limits_{l = 1}^k {2^{N - n_l}} } \right)\\ &&+ (1 - p_{{\mathrm{noise}}})(k - 1 + C_{{\mathrm{min}}} - k)\\ &=& p_{{\mathrm{noise}}}\left( {k - \mathop {\sum}\limits_{l = 1}^k {2^{ - n_l}} } \right) + (C_{{\mathrm{min}}} - 1),\end{array}$$
where nl = |Vl| is the number of qubits in each vertex set of a given color, and in the second equality we employ the facts that \({\mathrm{Tr}}(P_l) = 2^{N - n_l}\) and Tr(Pl|G〉〈G|) = 1. Requiring the above expectation value to be less than zero, one has
$$p_{{\mathrm{noise}}} < \frac{{1 - C_{{\mathrm{min}}}}}{{k - \mathop {\sum}_{l = 1}^k {2^{ - n_l}} }}.$$
Similarly, for the \(\cal{P}\)-genuine entanglement witness and the non-m-separability witness in Eqs. (13) and (15), we have,
$$\begin{array}{l}p_{{\mathrm{noise}}} < \frac{{1 - C_{{\mathrm{max}}}}}{{k - \mathop {\sum}_{l = 1}^k {2^{ - n_l}} }}\\ p_{{\mathrm{noise}}} < \frac{{1 - C_m}}{{k - \mathop {\sum}_{l = 1}^k {2^{ - n_l}} }},\end{array}$$
where we denote the optimizations \(\max _{\{ A,\bar {A}\} }2^{ - S(\rho _{A})}\) and \(\max _{{\cal{P}}_{m}}\min _{\{ A,\bar {A}\} }2^{ - S(\rho _{A})}\) as Cmax and Cm, respectively.
Moreover, it is not hard to see that all the coefficients Cmin, Cmax, and Cm are not larger than 0.5. Thus, for any entanglement-structure witness, one has
$$p_{{\mathrm{limit}}} \ge \frac{{0.5}}{{k - \mathop {\sum}_{l = 1}^k {2^{ - n_l}} }} > \frac{1}{{2k}}.$$
As a result, our entanglement-structure witness is quite robust to noise, since the largest noise tolerance plimit is just related to the chromatic number of the graph.
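The noise thresholds derived above reduce to a one-line function. The following sketch (ours, not the authors' code) evaluates them for a 6-qubit 1-D cluster state with two colour classes of three qubits each, using the tripartition bound C_min = 1/4 obtained in the earlier rank sketch:

```python
def p_limit(k, n_l, C):
    # largest tolerated white-noise fraction: (1 - C) / (k - sum_l 2^{-n_l})
    return (1.0 - C) / (k - sum(2.0 ** (-n) for n in n_l))

# 6-qubit 1-D cluster state: k = 2 colours, n_1 = n_2 = 3
print(p_limit(k=2, n_l=[3, 3], C=0.25))               # about 0.43
print(p_limit(k=2, n_l=[3, 3], C=0.5), 1 / (2 * 2))   # about 0.286 > 1/(2k) = 0.25
```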
Data availability
Data sharing is not applicable to this article as no data sets were generated or analyzed during the current study.
Code availability
Code sharing is not applicable to this article as no code was generated or analyzed during the current study.
Horodecki, R., Horodecki, P., Horodecki, M. & Horodecki, K. Quantum entanglement. Rev. Mod. Phys. 81, 865 (2009).
Bennett, C. H. et al. Teleporting an unknown quantum state via dual classical and Einstein-Podolsky-Rosen channels. Phys. Rev. Lett. 70, 1895 (1993).
Bennett, C. H. & Brassard, G. in Proceedings of IEEE International Conference on Computers, Systems, and Signal Processing, 175 (India, 1984).
Ekert, A. K. Quantum cryptography based on Bell's theorem. Phys. Rev. Lett. 67, 661 (1991).
Brunner, N., Cavalcanti, D., Pironio, S., Scarani, V. & Wehner, S. Bell nonlocality. Rev. Mod. Phys. 86, 419 (2014).
Nielsen, M. A. & Chuang, I. L. Quantum Computation and Quantum Information: 10th Anniversary Edition. 10th edn (Cambridge University Press, New York, NY, 2011).
Lloyd, S. Universal quantum simulators. Science 273, 1073 (1996).
Wineland, D. J., Bollinger, J. J., Itano, W. M., Moore, F. L. & Heinzen, D. J. Spin squeezing and reduced quantum noise in spectroscopy. Phys. Rev. A 46, R6797 (1992).
Giovannetti, V., Lloyd, S. & Maccone, L. Quantum metrology. Phys. Rev. Lett. 96, 010401 (2006).
Monz, T. et al. 14-Qubit entanglement: creation and coherence. Phys. Rev. Lett. 106, 130506 (2011).
Britton, J. W. et al. Engineered two-dimensional Ising interactions in a trapped-ion quantum simulator with hundreds of spins. Nature 484, 489 EP (2012).
Friis, N. et al. Observation of entangled states of a fully controlled 20-qubit system. Phys. Rev. X 8, 021012 (2018).
Song, C. et al. 10-Qubit entanglement and parallel logic operations with a superconducting circuit. Phys. Rev. Lett. 119, 180511 (2017).
Gong, M. et al. Genuine 12-qubit entanglement on a superconducting quantum processor. Phys. Rev. Lett. 122, 110501 (2019).
Wang, X.-L. et al. Experimental ten-photon entanglement. Phys. Rev. Lett. 117, 210502 (2016).
Chen, L.-K. et al. Observation of ten-photon entanglement using thin BiB3O6 crystals. Optica 4, 77 (2017).
Zhong, H.-S. et al. 12-Photon entanglement and scalable scattershot boson sampling with optimal entangled-photon pairs from parametric down-conversion. Phys. Rev. Lett. 121, 250505 (2018).
Lücke, B. et al. Detecting multiparticle entanglement of dicke states. Phys. Rev. Lett. 112, 155304 (2014).
Luo, X.-Y. et al. Deterministic entanglement generation from driving through quantum phase transitions. Science 355, 620 (2017).
Lange, K. et al. Entanglement between two spatially separated atomic modes. Science 360, 416 (2018).
Dür, W., Vidal, G. & Cirac, J. I. Three qubits can be entangled in two inequivalent ways. Phys. Rev. A 62, 062314 (2000).
Acín, A., Bruß, D., Lewenstein, M. & Sanpera, A. Classification of mixed three-qubit states. Phys. Rev. Lett. 87, 040401 (2001).
Guhne, O., Toth, G. & Briegel, H. J. Multipartite entanglement in spin chains. New J. Phys. 7, 229 (2005).
Huber, M. & de Vicente, J. I. Structure of multidimensional entanglement in multipartite systems. Phys. Rev. Lett. 110, 030501 (2013).
Shahandeh, F., Sperling, J. & Vogel, W. Structural quantification of entanglement. Phys. Rev. Lett. 113, 260502 (2014).
Lu, H. et al. Entanglement structure: entanglement partitioning in multipartite systems and its experimental detection using optimizable witnesses. Phys. Rev. X 8, 021072 (2018).
Cirac, J. I., Ekert, A. K., Huelga, S. F. & Macchiavello, C. Distributed quantum computation over noisy channels. Phys. Rev. A 59, 4249 (1999).
Kimble, H. J. The quantum internet. Nature 453, 1023 (2008).
Terhal, B. M. A family of indecomposable positive linear maps based on entangled quantum states. Linear Algebra Appl. 323, 61 (2001).
Guhne, O. & Toth, G. Entanglement detection. Phys. Rep. 474, 1 (2009).
Friis, N., Vitagliano, G., Malik, M. & Huber, M. Entanglement certification from theory to experiment. Nat. Rev. Phys. 1, 72 (2019).
Gühne, O., Lu, C.-Y., Gao, W.-B. & Pan, J.-W. Toolbox for entanglement detection and fidelity estimation. Phys. Rev. A 76, 030305(R) (2007).
Zhou, Y., Guo, C. & Ma, X. Decomposition of a symmetric multipartite observable. Phys. Rev. A 99, 052324 (2019).
Tóth, G. & Gühne, O. Detecting genuine multipartite entanglement with two local measurements. Phys. Rev. Lett. 94, 060501 (2005).
Knips, L., Schwemmer, C., Klein, N., Wieśniak, M. & Weinfurter, H. Multipartite entanglement detection with minimal effort. Phys. Rev. Lett. 117, 210504 (2016).
Wang, Y., Li, Y., Yin, Z.-q & Zeng, B. 16-qubit IBM universal quantum computer can be fully entangled. npj Quantum Inf. 4, 46 (2018).
Briegel, H. J. & Raussendorf, R. Persistent entanglement in arrays of interacting particles. Phys. Rev. Lett. 86, 910 (2001).
Hein, M. et al. Entanglement in graph states and its applications. http://arxiv.org/abs/quant-ph/0602096 (2006).
Raussendorf, R. & Briegel, H. J. A one-way quantum computer. Phys. Rev. Lett. 86, 5188 (2001).
Raussendorf, R., Browne, D. E. & Briegel, H. J. Measurement-based quantum computation on cluster states. Phys. Rev. A 68, 022312 (2003).
Schlingemann, D. & Werner, R. F. Quantum error-correcting codes associated with graphs. Phys. Rev. A 65, 012308 (2001).
Gühne, O., Tóth, G., Hyllus, P. & Briegel, H. J. Bell inequalities for graph states. Phys. Rev. Lett. 95, 120405 (2005).
Zeng, B., Chen, X., Zhou, D.-L. & Wen, X.-G. Quantum information meets quantum matter—from quantum entanglement to topological phase in many-body systems. https://arxiv.org/abs/1508.02595 (2015).
Eisert, J., Cramer, M. & Plenio, M. B. Colloquium: area laws for the entanglement entropy. Rev. Mod. Phys. 82, 277 (2010).
Gottesman, D. Stabilizer codes and quantum error correction. https://arxiv.org/abs/quant-ph/9705052 (1997).
Rossi, M., Huber, M., Bruß, D. & Macchiavello, C. Quantum hypergraph states. New J. Phys. 15, 113022 (2013).
Orus, R. A practical introduction to tensor networks: matrix product states and projected entangled pair states. Ann. Phys. 349, 117 (2014).
Carleo, G. & Troyer, M. Solving the quantum many-body problem with artificial neural networks. Science 355, 602 (2017).
Deng, D.-L., Li, X. & Das Sarma, S. Quantum entanglement in neural network states. Phys. Rev. X 7, 021021 (2017).
Sørensen, A. S. & Mølmer, K. Entanglement and extreme spin squeezing. Phys. Rev. Lett. 86, 4431 (2001).
Wölk, S. & Gühne, O. Characterizing the width of entanglement. New J. Phys. 18, 123024 (2016).
Branciard, C., Rosset, D., Liang, Y.-C. & Gisin, N. Measurement-device-independent entanglement witnesses for all entangled quantum states. Phys. Rev. Lett. 110, 060405 (2013).
Liang, Y.-C. et al. Family of bell-like inequalities as device-independent witnesses for entanglement depth. Phys. Rev. Lett. 114, 190401 (2015).
Zhao, Q., Yuan, X. & Ma, X. Efficient measurement-device-independent detection of multipartite entanglement structure. Phys. Rev. A 94, 012343 (2016).
Dimic, A. & Dakic, B. Single-copy entanglement detection. npj Quantum Inf. 4, 11 (2018).
Saggio, V. et al. Experimental few-copy multipartite entanglement detection. Nat. Phys. https://doi.org/10.1038/s41567-019-0550-4 (2019).
Bourennane, M. et al. Experimental detection of multipartite entanglement using witness operators. Phys. Rev. Lett. 92, 087902 (2004).
Hein, M., Eisert, J. & Briegel, H. J. Multiparty entanglement in graph states. Phys. Rev. A 69, 062311 (2004).
We acknowledge Y.-C. Liang for the insightful discussions. This work was supported by the National Natural Science Foundation of China Grant Nos. 11875173 and 11674193, and the National Key R&D Program of China Grant Nos. 2017YFA0303900 and 2017YFA0304004, and the Zhongguancun Haihua Institute for Frontier Information Technology. Xiao Yuan was supported by the EPSRC National Quantum Technology Hub in Networked Quantum Information Technology (EP/M013243/1).
Center for Quantum Information, Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing, 100084, China: You Zhou, Qi Zhao & Xiongfeng Ma
Department of Materials, University of Oxford, Parks Road, Oxford, OX1 3PH, UK: Xiao Yuan
Y.Z. and X.M. initialized the project. Y.Z., Q.Z., and X.Y. developed the idea and formulated the problem as it is presented. X.M. supervised the project. All authors contributed to deriving the results and writing the paper.
Correspondence to Xiongfeng Ma.
The authors declare no competing interests.
Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Zhou, Y., Zhao, Q., Yuan, X. et al. Detecting multipartite entanglement structure with minimal resources. npj Quantum Inf 5, 83 (2019) doi:10.1038/s41534-019-0200-9
If an arrow is shot upward on the moon with a speed of 58 m/s, its height in meters t seconds later is given by \(y = 58t - 0.83t^2\). (Round your answers to two decimal places.)
(a) Find the average speed over the given time intervals.
(i) [1, 2]
(ii) [1, 1.5]
(iii) [1, 1.1]
(iv) [1, 1.01]
(v) [1, 1.001]
(b) Estimate the speed when t = 1.
Average Rate of Change
To find the average rate of change of a function, we look for the rate of change between two points. This is the same as finding the slope between these two points, since we are taking the ratio of the change in the function to the change in x.
$$\frac{f(b) - f(a)}{b-a} $$
Answer and Explanation
a) To find the average speed over the intervals given, we need to find the average rates of change. This is because the velocity of a position...
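The computation in part (a) can be sketched numerically as follows (our illustration, not part of the original answer; plain Python, no external libraries):

```python
def y(t):                      # height on the moon, y = 58t - 0.83t^2
    return 58 * t - 0.83 * t ** 2

for b in (2, 1.5, 1.1, 1.01, 1.001):
    avg = (y(b) - y(1)) / (b - 1)          # average speed over [1, b]
    print(f"[1, {b}]: {avg:.2f} m/s")

# the averages approach the instantaneous speed y'(1) = 58 - 2*0.83 = 56.34 m/s,
# which answers part (b)
print(58 - 2 * 0.83)
```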
Climate change and trend analysis of temperature: the case of Addis Ababa, Ethiopia
Zinabu Assefa Alemu (ORCID: orcid.org/0000-0003-0693-8596)1 & Michael O. Dioha2
This paper presents the trend analysis of temperature and the effect of climate variation in the city of Addis Ababa, Ethiopia. The paper seeks to provide up-to-date information for the better management of climate change in the city. The analysis is based on the temperature difference in the city over two stations—Bole and Entoto. The overall purpose of this study is to investigate the possible trend of temperature variation as well as the effect of climate change in the study area.
The Mann-Kendall (MK) trend test and Sen's slope estimate were employed to find the nature of the temperature trend and significance level in the city.
It was found that the MK2/MK3 statistic (Z) values for the minimum, maximum and average temperatures at Bole station are 6.21/5.99, 2.49/2.6, and 6.09/6.14, respectively. The positive Kendall's Z values show an upward trend and imply an increasing trend over time. This indicates a significant increase in the trend at the 5% level of significance, since the computed p-values (0.0001) are smaller than the significance level (alpha = 0.05). For Entoto station, the MK1 statistic (Z) is 1.64 for the minimum temperature, while the MK2/MK3 statistics (Z) are 0.71/0.65 for the maximum and 0.17/1.04 for the average temperature; these positive values are also an indicator of an increasing trend. However, the increase is not significant at the 5% significance level, since the computed p-values are larger than the significance level (alpha = 0.05).
There is a tendency of increasing temperature at Bole station. This could be due to the influence of climate change, which can lead to weather extremes in the capital city. Therefore, the study recommends further monitoring of temperature variability, and the increasing temperature trend needs to be taken into account in order to minimize its effects on human health.
Climate change has become one of the most pressing concerns in the field of sustainable development, and its impacts (rising sea levels, melting polar ice caps, wild bush fires, intense droughts, etc.) can be felt in different parts of the globe (Dioha and Kumar 2020; Ali et al. 2013). The warming of our planet due to the emission of greenhouse gases is now unquestionable; over the last century, the atmospheric CO2 concentration has increased significantly and has, in turn, induced the average global temperature to increase by 0.74 °C compared with the preindustrial era (UNFCCC 2007). High temperatures in urban areas mostly affect the health, economy, leisure activities, and wellbeing of urban residents. Thermal stress caused by warming strongly affects the health of vulnerable people (Tan et al. 2010; Patz et al. 2005). Developing countries are most affected by climate change, and Ethiopia is among the most vulnerable countries (Cherie and Fentaw 2015).
Climate change can alter the intensity and frequency of extremes; changes in climate extremes and their impacts on a variety of physical and biological systems have been examined by the Intergovernmental Panel on Climate Change (IPCC), and such changes can also contribute to global warming (IPCC 2007). Many factors, such as the expansion of cities and a fast population growth rate along with migration from rural to urban areas, pose a major challenge for city planners and also contribute to increasing climate change (WHO and UNICEF 2006; Alemu and Dioha 2020). Using various General Circulation Models, Feyissa et al. (2018) projected future climatic changes and argued that a rise in temperature will exacerbate urban heat island effects in warm seasons, together with an increase in precipitation. Some environmental harms, such as high temperature and extreme rainfall resulting in flooding in Addis Ababa, could be signals of climate change (Birhanu et al. 2016). The city temperature is also strongly affected by anthropogenic activities along with climate change.
The Mann–Kendall (MK) non-parametric test is usually used to detect upward or downward (i.e. monotonic) trends in series of hydrological (climate) and environmental data. The null hypothesis for this test indicates no trend, whereas the alternative hypothesis indicates a trend, either two-sided or one-sided (upward or downward) (Pohlert 2020). Sen's estimator is another non-parametric method used for the trend analysis of hydroclimatic data sets; it is also used to identify the trend magnitude, computing the linear rate of change (slope) and the intercept according to Sen's method (Sen 1968). The MK test is widely documented in the literature as a powerful trend test for the effective analysis of seasonal and annual trends in environmental and hydrological (climate) data, and it is preferred over other tests because of its applicability to time-series data that do not follow a statistical distribution.
There are numerous examples of MK trend test applications. Asfaw et al. (2018) used the MK test for the detection of trends in time-series data and found that the inter-annual and intra-annual variability of rainfall, as well as the Palmer drought severity index values, showed an increasing trend in the number of drought years. Another study employed the non-parametric MK test and Sen's slope estimates to test the trends of extreme temperature and rainfall indices, as well as their statistical significance, in Western Tigray, Ethiopia (Berhane et al. 2020). Similarly, the trend of temperature in Gombe state, Nigeria was analyzed using the MK trend test and Sen's estimator to determine the nature of the temperature trend and its significance level; the study found that average and maximum temperatures showed positive Kendall's statistics (Z) (Alhaji et al. 2018). In a different study, Yadav et al. (2014) used the MK test and Sen's slope for the analysis of both trends and slope magnitudes. The results indicated that in all thirteen areas of Uttarakhand (India), temperature and precipitation trends were increasing in some months and decreasing in others. Getachew (2018) used the MK trend test for the analysis of rainfall and temperature trends in the south Gonder zone (Ethiopia) and found a statistically significant increase in mean annual temperature at Nefas Mocha and Addis Zemen. Kuriqi et al. (2020) applied the MK methodology to validate findings from Sen's slope trend analysis in a study on the seasonality shift and streamflow variability trends in India.
Furthermore, the MK test and Sen's estimator test have been applied to examine the significance of rainfall, temperature, and runoff trends in the Rangoon watershed in the Dadeldhura district of Nepal; the results revealed warming trends in the study area (Pal et al. 2017). In contrast, Machida et al. (2013) studied whether the MK test is an effective methodology for detecting software aging from traces of computer system metrics and found that it is not a powerful trend test in that setting: their experimental study showed that using the MK trend test to detect software aging is highly prone to creating false positives. Despite the varied applications of the MK trend test in different parts of the world, the non-parametric MK test is commonly employed to detect monotonic trends in series of environmental, climate or hydrological data. A few studies, such as Machida et al. (2013), showed that the MK test is not a powerful trend test for software aging; the difference between this result and other studies arises from differences in the study variables and materials.
However, the MK test is a non-parametric (distribution-free) test used to analyze time-series data for consistent monotonic trends. Non-parametric methods have several benefits, such as the handling of missing data, the requirement of few assumptions, and independence from the data distribution (Öztopal and Sen 2016; Wu and Qian 2017; Kisi 2015). Nevertheless, the major disadvantage of the method is the influence of autocorrelation in the data on the test significance. Several modifications of the MK test have been proposed by different authors to remove the influence of autocorrelation using various techniques; one of the most common corrects the lag-1 serial correlation coefficient for bias before pre-whitening (Malik et al. 2019; Sanikhani et al. 2018; Su et al. 2006). The MK test is often chosen for the analysis of climatic data since such measurements do not follow the normal distribution. Thus, the present study employed the MK trend test and Sen's slope estimate to understand the nature of the temperature trend and its significance level in the study area. The current study is based on the temperature variation in the city of Addis Ababa at two stations (Bole and Entoto). The historical temperature records used cover 1983 to 2016 for Bole station and 1989 to 2016 for Entoto station. In addition, the two stations were selected to reflect geographical variation and altitude differences.
The overall objective of this study is to investigate the trend of temperature in Addis Ababa City using the Mann–Kendall trend test and Sen's slope estimate, as well as to examine the effect of climate change in the study area. The results of this study (i.e. temperature trends and their descriptive statistics) will help city planners in anticipating weather variations. They will also support universal health coverage by anticipating seasonal conditions in order to control seasonal disease outbreaks. In terms of contribution to the existing literature, this study presents one of the earliest case studies on this subject for Ethiopia, and the findings will be useful in mitigating the adverse impacts of climate change in the country. The analytical framework presented here can also be employed by other researchers to study temperature variations in other regions of the world. While this paper is built around a local case study, the results are also relevant to the international literature.
The rest of this paper is structured as follows: Sect. 2 explains the research methods used in the study, covering the study area, data quality control, the different MK tests, the Sen's slope estimator, ITA analysis, data collection and processing, and the data analysis tools. Section 3 presents the results and a brief discussion, while the general conclusions of the study are presented in Sect. 4.
We employed the Mann–Kendall (MK) trend test and Sen's slope estimate to examine the nature of the temperature trend and significance level in the study area. Figure 1 shows the general study methodological framework.
General methodological approach
Description of the study area
Addis Ababa is the capital city of Ethiopia. It is found in the heart of the country, surrounded by Oromia, and is geographically located at a longitude of 38° 44′ E and a latitude of 9° 1′ N. According to the 2007 census, the city has a total population of 2,739,551 inhabitants. Addis Ababa comprises 6 zones and 28 woredas. The city covers an area of about 540 km2 and lies between 2,200 and 2,500 m above sea level. It sits at the foot of the 3,000 m high Entoto Mountains, and Mount Entoto is located in Gullele Sub City (within the Addis Ababa City Administration). Furthermore, the lowest and highest annual average temperatures of the city are 9.89 and 24.64 °C, respectively (FDRE 2018; CSA 2007). Figure 2 shows a map of the study area.
Map of the study area
Data quality control
The quality of the data was assessed visually and statistically. Visually, the temperature data were checked for outliers and missing values to avoid erroneous entries (e.g. typing errors) that could affect the final results. Statistically, the MK test was checked using the trend-free pre-whitening process and the variance correction approach before applying the test. The trend-free pre-whitening process was proposed to remove serial correlation from the data before applying the trend test (Yue et al. 2002; Hamed 2009). Likewise, to overcome the limitation posed by serial autocorrelation in time series, the variance correction procedure proposed by Hamed and Rao (1998) was applied.
Mann–Kendall test (MK1)
The MK trend test is a non-parametric test used to identify a trend in a series; it determines whether a time series has a monotonic upward or downward trend. The non-parametric MK test is commonly employed to detect monotonic trends in series of environmental, hydrological or climate data. The null hypothesis (H0) states that there is no trend in the series and that the data come from an independent, identically distributed population. The alternative hypothesis, Ha, indicates that the data follow a monotonic trend (i.e. a negative, non-null, or positive trend). There are two benefits of using this test. First, it does not require the data to be normally distributed, since the test is non-parametric (distribution-free); second, the test has low sensitivity to abrupt breaks due to inhomogeneous time series. The data values are evaluated as an ordered time series, and each data value is compared with all subsequent data values. The time series x1, x2, x3… xn represents n data points.
The MK test statistic (S) is calculated as follows:
$$\mathrm{S} = \sum_{\mathrm{i}=1}^{\mathrm{n}-1}\sum_{\mathrm{j}=\mathrm{i}+1}^{\mathrm{n}}\mathrm{sgn}(\mathrm{X_{j}}-\mathrm{X_{i}})$$
$$\mathrm{sgn}(\mathrm{x})=\left\{\begin{array}{c} 1 \quad{\text{if}\quad{\text{ x}>0}}\\ 0\quad{\text{if}\quad{\text{ x}=0}}\\ -1 \quad{\text{if}\quad{\text{ x}<0}}\end{array}\right.$$
Note that if S > 0, then later observations in the time series tend to be larger than those that appear earlier in the time series and it is an indicator of an increasing trend, while the reverse is true if S < 0 and this indicates a decreasing trend.
The mean of S is E[S] = 0 and the variance \(({\upsigma }^{2}\)) of S is given by
$${\upsigma }^{2} =\frac{1}{18}\left\{\mathrm{ n}\left(\mathrm{n}-1\right)\left(2\mathrm{n}+5\right)-{\sum }_{\mathrm{j}=1}^{\mathrm{p}}\mathrm{tj}(\mathrm{tj}-1)(2\mathrm{tj}+5)\right\}$$
where p is the number of the tied groups in the data set and tj is the number of data points in the jth tied group. The statistic S is approximately normally distributed provided that the following Z-transformation is employed:
$$\mathrm{Z}=\left\{\begin{array}{c} \frac{\mathrm{S}-1}{\sqrt{{\upsigma }^{2} }} \quad{{\text {if} \,\text{s}>0}}\\ 0 \quad{{\text {if}\, \text{s}=0}}\\ \frac{\mathrm{S}+1}{\sqrt{{\upsigma }^{2} }} \quad{{\text{if}\, \text{s}<0}}\end{array}\right.$$
A normal approximation test that could be used for datasets with more than 10 values was described, provided there are not many tied values within the data set. If there is no monotonic trend (the null hypothesis), then for time series with more than ten elements, z ∼ N (0, 1), i.e. z has a standard normal distribution. The probability density function for a normal distribution with a mean of 0 and a standard deviation of 1 is given by the following equation:
$$\mathrm{f }(\mathrm{z})\hspace{0.17em}=\hspace{0.17em}\frac{1}{\sqrt{2\uppi }}{\mathrm{e}}^{\frac{{-\mathrm{z}}^{2}}{2}}$$
The statistic S is closely related to Kendall's τ, as given by:
$$\uptau = \frac{\mathrm{S}}{\mathrm{D}}$$
$$\mathrm{D}= \left[ \frac{1}{2}\mathrm{n}(\mathrm{n}-1)-\frac{1}{2}{\sum }_{\mathrm{j}=1}^{\mathrm{p}}\mathrm{t_{j}}(\mathrm{t_{j}}-1)\right]^{1/2}\left[\frac{1}{2}\mathrm{n}(\mathrm{n}-1)\right]^{1/2}$$
where p is the number of the tied groups in the data set and tj is the number of data points in the jth tied group.
All the above procedures used to compute the Mann–Kendall Trend test were collected and referenced from (Zaiontz 2020; Kendall 1975; Pohlert 2020).
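For illustration, a compact sketch of the MK1 quantities described above (the S statistic, the tie-corrected variance, the Z transformation and Kendall's tau) is given below. This is our own minimal implementation, assuming numpy and scipy are available; it is not the software used by the authors, and the toy series is hypothetical:

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    x = np.asarray(x, dtype=float)
    n = len(x)
    # S statistic: sum of signs over all ordered pairs
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    # tie-corrected variance of S
    _, counts = np.unique(x, return_counts=True)
    ties = sum(t * (t - 1) * (2 * t + 5) for t in counts if t > 1)
    var = (n * (n - 1) * (2 * n + 5) - ties) / 18.0
    # Z transformation and two-sided p-value from the standard normal distribution
    z = (s - np.sign(s)) / np.sqrt(var) if s != 0 else 0.0
    p = 2 * (1 - norm.cdf(abs(z)))
    # Kendall's tau = S / D, with the tie-corrected denominator D
    d = (np.sqrt(0.5 * n * (n - 1) - 0.5 * sum(t * (t - 1) for t in counts))
         * np.sqrt(0.5 * n * (n - 1)))
    tau = s / d
    return s, var, z, p, tau

annual_temp = [16.1, 16.3, 16.0, 16.4, 16.6, 16.5, 16.8, 17.0]  # toy series
print(mann_kendall(annual_temp))
```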
Mann–Kendall test with trend-free pre-whitening (MK2)
Hamed (2009) noted that when the data are positively or negatively autocorrelated, the variance of S is respectively underestimated or overestimated by the original variance V(S). Therefore, when trend analysis is conducted on such data using MK1, it may indicate a positive or negative trend where none exists. Hence, the trend-free pre-whitening process (TFPW) was proposed by Hamed (2009); in this technique, the slope and the lag-1 serial correlation coefficient are estimated simultaneously, the lag-1 serial correlation coefficient is corrected for bias before pre-whitening, and finally the lag-1 serial correlation component is removed from the series before applying the trend test. The following steps are used for trend analysis with the MK2 test. First, the lag-1 (k = 1) autocorrelation coefficient (r1) is calculated using:
$$\mathrm{r}_{1} = \frac{\frac{1}{\mathrm{n}-\mathrm{k}}\sum_{i=1}^{n-k}\left(\mathrm{X}_{i} - \bar{\mathrm{X}}\right)\left(\mathrm{X}_{i+k} - \bar{\mathrm{X}}\right)}{\frac{1}{\mathrm{n}}\sum_{i=1}^{n}\left(\mathrm{X}_{i} - \bar{\mathrm{X}}\right)^{2}}$$
If the condition \(\frac{-1-1.96 \sqrt{n-2}}{\mathrm{n}-1}\) ≤ r1 ≤ \(\frac{-1+1.96 \sqrt{n-2}}{\mathrm{n}-1}\) is satisfied, then the series is assumed to be independent at a 5% significance level and there is no need for pre-whitening. Otherwise, pre-whitening is required for the series before applying the MK1 test.
If pre-whitening is required, the following steps are applied. The trend is first removed from the time series to obtain the detrended series \(X_{i}^{\prime} = X_{i} - \upbeta i\), where the slope β is estimated as
$$\upbeta ={\text{median}}\left[ \frac{\mathrm{Xj}-\mathrm{Xi}}{\mathrm{j}-\mathrm{i}}\right]\quad {{\text {for all}}\quad{{\text{i < j}}}}.$$
Equation (8) is then used to calculate the lag-1 autocorrelation of the detrended series \(X_{i}^{\prime}\), and the lag-1 autoregressive component (AR(1)) is removed from the detrended series to obtain the residual series \(Y_{i}^{\prime} = X_{i}^{\prime} - r_{1}X_{i-1}^{\prime}\).
The value (β · i) is then added back to the residual series to give the blended series \(Y_{i} = Y_{i}^{\prime} + \upbeta i\).
Thus, the MK test is applied to the blended series Yi to determine the significance of the trend.
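The pre-whitening steps can be sketched as follows (our illustration; the bias correction of r1 mentioned above is omitted here, and sens_slope and the mann_kendall helper sketched earlier are our own placeholder names):

```python
import numpy as np

def sens_slope(x):
    x = np.asarray(x, dtype=float)
    n = len(x)
    slopes = [(x[j] - x[i]) / (j - i) for i in range(n - 1) for j in range(i + 1, n)]
    return float(np.median(slopes))

def lag1_autocorr(x):
    # matches the r1 definition above: 1/(n-1) in the numerator, 1/n in the denominator
    x = np.asarray(x, dtype=float)
    n, xm = len(x), x.mean()
    num = ((x[:-1] - xm) * (x[1:] - xm)).sum() / (n - 1)
    den = ((x - xm) ** 2).sum() / n
    return num / den

def tfpw(x):
    x = np.asarray(x, dtype=float)
    t = np.arange(len(x), dtype=float)
    beta = sens_slope(x)
    detrended = x - beta * t                           # remove the monotonic trend
    r1 = lag1_autocorr(detrended)
    residual = detrended[1:] - r1 * detrended[:-1]     # strip the AR(1) component
    return residual + beta * t[1:]                     # add the trend back (blended series)

# the MK1 test is then applied to tfpw(series) instead of the raw series
```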
Mann-Kendall test with variance correction (MK3)
For many hydrological time-series datasets, removing only the lag-1 autocorrelation is not enough. To overcome the limitation posed by serial autocorrelation in time series, a variance correction procedure was proposed by Hamed and Rao (1998). First, the corrected variance of S is calculated by Eq. (13), where V(S) is the variance of the MK1 test and CF is the correction factor accounting for the serial correlation in the data.
$${\text{Corrected}}\, {\text{variance}}\, {\text{S}} ({\text{V}}*\left( {\text{S}} \right)) = {\text{CF}} \times {\text{V}} \left( {\text{S}} \right)$$
$$\mathrm{CF }=1+\frac{2}{\mathrm{n}(\mathrm{n}-1)(\mathrm{n}-2)}{\sum }_{\mathrm{k}=1}^{\mathrm{n}-1}(\mathrm{n}-\mathrm{k})(\mathrm{n}-\mathrm{k}-1)(\mathrm{n}-\mathrm{k}-2){r}_{k}^{R}$$
where \(r_{k}^{R}\) is the lag-k serial correlation coefficient of the ranks of the data, and n is the total number of observations.
The advantage of the MK3 test over the MK2 test is that it includes all possible serial correlations (lag-k) in the time series, while MK2 only considers the lag-1 serial correlation (Yue et al. 2002).
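The correction factor can be sketched as follows (our illustration; in practice only statistically significant rank autocorrelations are usually retained, whereas this sketch sums all lags with non-zero weight, as in the displayed CF expression):

```python
import numpy as np
from scipy.stats import rankdata

def correction_factor(x):
    x = np.asarray(x, dtype=float)
    n = len(x)
    r = rankdata(x)                       # ranks of the data
    rm = r.mean()
    denom = ((r - rm) ** 2).sum()
    cf = 1.0
    for k in range(1, n - 2):             # lags with a non-zero weight (n - k - 2 > 0)
        rk = ((r[:-k] - rm) * (r[k:] - rm)).sum() / denom   # lag-k rank autocorrelation
        cf += 2.0 / (n * (n - 1) * (n - 2)) * (n - k) * (n - k - 1) * (n - k - 2) * rk
    return cf

# corrected variance: V*(S) = correction_factor(x) * V(S), cf. Eq. (13)
```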
Sen's Slope estimator
Sen's estimator is another non-parametric test used to identify a trend in a series and to quantify its magnitude. The Sen's slope estimate requires at least 10 values in a time series. This test computes both the slope (i.e. the linear rate of change) and the intercept according to Sen's method (Sen 1968). Likewise, as described by Drápela and Drápelová (2011), the linear model can be written as follows:
$$\mathrm{f}\left(\mathrm{x}\right)=\text{Qx}+\text{B}$$
where Q is the slope, B is constant. According to Pohlert (2020), initially, a set of linear slopes is calculated as follows (Eq. 16):
$$\mathrm{Qi }= \frac{\mathrm{Xj}-\mathrm{Xk}}{\mathrm{j}-\mathrm{k}}\quad\quad{{\text{for j }} = {\text{ 1}},{\text{ 2}},{\text{ 3}} \ldots {\text{ N}}}$$
where $Q_i$ is a pairwise slope, X denotes the variable, n is the number of data points, and j, k are indices with j > k. A slope is computed for every pair of observations, and the intercept corresponding to each slope is obtained; the median of all intercepts gives the intercept estimate. The Sen's slope estimator is the median of the N pairwise slopes (Eq. 17):
$$Q = \begin{cases} Q_{(N+1)/2} & \text{if } N \text{ is odd} \\[4pt] \tfrac{1}{2}\left(Q_{N/2} + Q_{(N+2)/2}\right) & \text{if } N \text{ is even} \end{cases}$$
$$\text{N}=\frac{\mathrm{n}(\mathrm{n}-1)}{2}$$
where N is the number of pairwise slope estimates and n is the number of values $X_k$ in the time series.
According to Mondal et al. (2012), when the number of slope observations N is odd, the Sen's estimator is the [(N + 1)/2]th ranked slope, and when N is even it is the average of the (N/2)th and [(N + 2)/2]th ranked slopes. The two-sided test is then carried out at the 100(1 − α)% confidence interval to obtain the true slope for the non-parametric test on the series.
A positive slope Q indicates an increasing (upward) trend, a negative slope a decreasing (downward) trend, and a zero slope no trend at all. To obtain an estimate of the constant B in Eq. (8), the n differences $x_i - Q t_i$ are calculated; their median gives the estimate of B. The constants B of the 99% and 95% confidence-interval lines are obtained by a similar procedure (Pohlert 2020; MAKESENS 2002).
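For illustration, Sen's slope and its confidence limits can be obtained directly from SciPy's `theilslopes`; the temperature values below are hypothetical, not the station data used in this study, and SciPy's intercept convention may differ slightly from the median-of-differences estimate described above.

```python
import numpy as np
from scipy.stats import theilslopes

# Hypothetical annual mean temperatures (deg C) -- not the Bole/Entoto records.
temp = np.array([16.1, 16.3, 16.0, 16.5, 16.7, 16.6, 16.9, 17.1, 17.0, 17.3])
years = np.arange(1983, 1983 + len(temp))

slope, intercept, lo95, hi95 = theilslopes(temp, years)  # Sen's slope Q and intercept B
print(f"Q = {slope:.3f} deg C/year, B = {intercept:.2f}")
print(f"95% confidence limits for Q: [{lo95:.3f}, {hi95:.3f}]")
```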
Innovative trend analysis (ITA) method
The innovative trend analysis method was proposed by Şen (2011) for the detection of trends in time series. The data are divided into two equal segments from the first date to the last, both segments are arranged in ascending order, and the first segment is plotted on the horizontal (x) axis against the second segment on the vertical (y) axis of a Cartesian coordinate system. A bisector line at 1:1 (45°) divides the diagram into two equal triangles. If the data points lie on the 1:1 line, there is no trend in the data; points in the upper triangle indicate a positive (increasing) trend, and points in the lower triangle indicate a negative (decreasing) trend (Zhang et al. 2008; Şen 2011). The ITA plots of the different temperature series for both stations were produced in RStudio with the 'trendchange' package (function 'innovtrend(X)'), which implements the method of Şen (2011).
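A minimal sketch of the ITA split-and-sort idea is shown below, written in Python rather than the 'trendchange' R package used in the study; the function name and the use of the share of points above the 1:1 line are our own illustrative choices.

```python
import numpy as np

def innovative_trend(series):
    """Şen-style split: sorted first half vs. sorted second half of the series."""
    x = np.asarray(series, dtype=float)
    half = len(x) // 2
    first = np.sort(x[:half])               # plotted on the x-axis
    second = np.sort(x[-half:])             # plotted on the y-axis
    share_above = np.mean(second > first)   # fraction of points above the 1:1 line
    return first, second, share_above

# Plotting `second` against `first` together with the 1:1 line reproduces the ITA
# diagram: a cloud above the line suggests an increasing trend, below it a decreasing one.
```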
The descriptive statistics table provides summary information on the input variables. The descriptive procedure displays univariate summary statistics for several variables in a single table, including the sample size (number of observations), mean, minimum, maximum, variance, and standard deviation. The minimum and maximum correspond to the lowest and highest observed values of the variable, while the mean is computed as the sum of all data values Xi divided by the sample size n:
$$\text{Mean}\;(\bar{X}) = \frac{1}{n}\sum_{i=1}^{n} X_i$$
The sample variance and standard deviation are classical measures of spread; like the mean, they are strongly influenced by outlying values. Both measure the variability in a population: the variance is the average squared deviation from the arithmetic mean, and the standard deviation is the square root of the variance, i.e.,
$$\text{Variance}\;(\sigma^2) = \frac{\sum_{i=1}^{n}(X_i - \bar{X})^2}{n-1}$$
$$\text{Standard Deviation}\left(\upsigma \right)=\sqrt{{\upsigma }^{2}}$$
All the above procedures used to compute the descriptive statistics were collected and referenced from (Gupta 2007; Helsel and Hirsch 2002).
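As a small illustration with hypothetical values (not the station data), these summary statistics can be reproduced with NumPy, using the (n − 1) denominator for the sample variance as in the formula above.

```python
import numpy as np

x = np.array([15.2, 15.6, 15.4, 16.0, 16.3, 16.1, 16.8])  # hypothetical temperatures
print("observations:", x.size)
print("mean        :", x.mean())
print("variance    :", x.var(ddof=1))   # sample variance with the (n - 1) denominator
print("std. dev.   :", x.std(ddof=1))
print("min, max    :", x.min(), x.max())
```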
Data collection and processing
The daily climatic data for minimum and maximum temperatures were obtained from the National Meteorological Agency (NMA). Historical temperatures were collected from two stations: Bole station from 1983 to 2016 (NY = 34) and Entoto station from 1989 to 2016 (NY = 28). The research data for this study thus come from secondary data sources and were used to analyze the temperature trend of Addis Ababa city.
The MK trend test and Sen's slope estimator were used for the trend analysis of temperature. The Mann–Kendall trend test results, such as the MK statistic (S), Kendall's tau, the test statistic (Z), and the p-value, as well as Sen's slope Q, were computed using XRealStats, XLSTAT 2020, and RStudio (package 'modifiedmk', version 1.5.0). MAKESENS version 1.0 was used for the Sen's estimate graphs, whereas the descriptive statistics (minimum, maximum, mean, standard deviation, variance) and the average annual temperature graphs were computed in Microsoft Excel. The analyzed data were used to detect the trend of climate change.
Mann-Kendall test result
Trend analysis of temperature for Addis Ababa city was carried out with 34 years of temperature data from Bole station (1983–2016) and 28 years from Entoto station (1989–2016). The MK test and Sen's slope estimator have been used to determine the temperature trend. Figure 3a–d shows the plots of minimum, maximum, and average temperature, together with a comparison of the temperatures, for Bole station, whereas Fig. 4a–d shows the corresponding plots for Entoto station.
(a) Plot of minimum temperature, average temperature, and maximum temperature from 1983 to 2016 for Bole, (b) Plot of minimum temperature from 1983 to 2016 for Bole, (c) Plot of maximum temperature from 1983 to 2016 for Bole, (d) Plot of average temperature from 1983 to 2016 for Bole
(a) Plot of minimum temperature, average temperature, and maximum temperature from 1989—2016 for Entoto, (b) Plot of minimum temperature from 1989–2016 for Entoto, (c) Plot of maximum temperature from 1989 to 2016 for Entoto, (d) Plot of average temperature from 1989 to 2016 for Entoto station
From the MK test results, the Z values of MK2/MK3 for the minimum, maximum, and average temperatures at Bole station are 6.21/5.99, 2.49/2.6, and 6.09/6.14 respectively (Table 1). The positive Kendall's Z values indicate an upward trend over time, and the increase is significant at the 5% level since the p-values are less than the significance level alpha = 0.05 (Table 1). For the Entoto station, the test statistic (Z) of MK1 for minimum temperature is 1.64, and the Z values of MK2/MK3 for maximum and average temperatures are 0.71/0.65 and 0.17/1.04 respectively (Table 1); the positive values indicate an increasing trend, but it is not significant at the 5% level since the p-values are greater than alpha = 0.05 (Table 1). For the minimum temperature at Entoto, the original MK test was used without modification, since the independence criterion based on Eq. (8) is satisfied and no pre-whitening is needed before applying the MK test; we therefore simply use the MK1 result without applying the serial-correlation correction.
Table 1 Trend analysis of temperature using MK1/MK2/MK3 test for Bole and Entoto stations
The result obtained in this study agrees with the findings of an earlier study by Getachew (2018), which revealed a statistically significant increasing trend in maximum temperature for the Addis Zemen station in the south Gonder zone, as the computed p-value (0.03) is lower than the significance level (alpha = 0.05), so the null hypothesis is rejected and the alternative hypothesis accepted. Similarly, a study conducted in Ethiopia by Johannes and Mebratu (2009) shows that over the past five decades the temperature has been increasing annually at a rate of 0.2 °C. Conversely, the increasing trend in minimum temperature for the Addis Zemen station is statistically insignificant, as the computed p-value (0.284) is greater than the significance level (alpha = 0.05), so the null hypothesis cannot be rejected (Getachew 2018). Table 1 shows the MK trend test results. The annual minimum, maximum, and average temperatures for Bole station show positive, statistically significant trends, because the computed p-values are lower than the significance level alpha = 0.05, so the null hypothesis is rejected and the alternative hypothesis accepted. On the other hand, the annual minimum, maximum, and average temperatures for Entoto station show increasing but statistically insignificant trends, so the null hypothesis cannot be rejected, as the computed p-values are greater than the significance level (alpha = 0.05).
Sen's estimate and computed data
The simple non-parametric procedure implemented by Zaiontz (2020) was used to estimate the slopes (change per unit time) of the trends, and the Sen's estimate figures were produced with MAKESENS (2002). A positive sign indicates an increasing slope, a negative sign a decreasing slope, and a zero slope no trend in the data over the study period. The Sen's slope estimates shown in Table 1 and Fig. 5a–c for the minimum, maximum, and average temperatures at Bole station from 1983 to 2016 all indicate an increasing trend, in agreement with the positive MK statistics (Z): the Z values of MK2/MK3 are 6.21/5.99, 2.49/2.6, and 6.09/6.14 respectively (Table 1). These positive values imply an increasing trend over time that is significant at the 5% level, since the computed p-values are less than the significance level alpha = 0.05 (Table 1).
(a) Sen's slope of minimum temperature from 1983–2016 for Bole, (b) Sen's slope of maximum temperature from 1983 to 2016 for Bole, (c) Sen's slope of average temperature from 1983–2016 for Bole
The Sen's slope estimates shown in Table 1 and Fig. 6a–c for the minimum, maximum, and average temperatures at Entoto station from 1989 to 2016 also depict an increasing trend, in agreement with the positive MK statistics (Z): 1.64 (MK1) for the minimum temperature, and 0.71/0.65 and 0.17/1.04 (MK2/MK3) for the maximum and average temperatures. However, the increasing trend is not significant at the 5% level, since the computed p-values are greater than the significance level (Table 1).
(a) Sen's slope of minimum temperature from 1989–2016 for Entoto, (b) Sen's slope of maximum temperature from 1989–2016 for Entoto, (c) Sen's slope of average temperature from 1989–2016 for Entoto station
Furthermore, the results of the present study for the minimum, maximum, and average temperatures differ between the Entoto and Bole stations even though both lie in the capital, and this dissimilarity arises from geographical variation. Bole station is located inside the built-up city, where extensive construction and transport activity contribute to a temperature increase, whereas Entoto station lies near the national park and the mountains, where the temperature is almost stable. The Sen's slope estimator shows a tendency of temperature increase at Bole station. Thus, the increasing temperature trend due to climate change and other factors can lead to weather extremes in the capital city (FDRE 2018; Figs. 5a–c, 6a–c).
Innovative trend analysis (ITA) method and computed figures
The ITA diagrams were produced in RStudio with the 'trendchange' package (function 'innovtrend(X)'), following Şen (2011). The 1:1 bisector line divides each diagram into two equal triangles: points lying on the 1:1 line indicate no trend, points in the upper triangle indicate an increasing trend, and points in the lower triangle indicate a decreasing trend.
The innovative trend analysis for the minimum, maximum, and average temperatures at Bole station from 1983 to 2016 is shown in Fig. 7a–c. The data points lie in the upper triangle, indicating an increasing trend, which strongly agrees with the positive MK2/MK3 Z values of 6.21/5.99, 2.49/2.6, and 6.09/6.14 respectively. The increase is significant at the 5% level, since the computed p-values are less than the significance level (alpha = 0.05) (Table 1).
(a) Plot of ITA for minimum temperature from 1983–2016 for Bole Station, (b) Plot of ITA for maximum temperature from 1983–2016 for Bole Station, (c) Plot of ITA for average temperature from 1983–2016 for Bole Station, (d) Plot of ITA for minimum temperature from 1989–2016 for Entoto Station, (e) Plot of ITA for maximum temperature from 1989–2016 for Entoto Station, (f) Plot of ITA for average temperature from 1989–2016 for Entoto Station
The innovative trend analysis for the minimum, maximum, and average temperatures at Entoto station from 1989 to 2016 is shown in Fig. 7d–f. The data points lie on the 1:1 line, indicating no trend in the data, which is consistent with the small positive MK statistics: a MK1 Z value of 1.64 for the minimum temperature and MK2/MK3 Z values of 0.71/0.65 and 0.17/1.04 for the maximum and average temperatures respectively. The positive values hint at an increasing trend, but it is not significant at the 5% level since the computed p-values are greater than the significance level alpha = 0.05. The ITA method was thus used as a further check against the MK1/MK2/MK3 and Sen's slope results for the trend and significance tests (Table 1; Fig. 7a–c, d–f).
Descriptive statistics of annual average temperature
Table 2 shows the minimum, maximum, and average temperatures for the two stations. The average annual minimum temperature ranges from 7.66 °C to 11.61 °C and from 4.14 °C to 10.02 °C for the Bole and Entoto stations respectively. The average annual maximum temperature ranges from 22.65 °C to 24.52 °C for Bole station, whereas for Entoto station it ranges from 16.18 °C to 19.67 °C. The annual average temperature ranges from 15.20 °C to 17.87 °C and from 10.7 °C to 14.64 °C for the Bole and Entoto stations respectively.
Table 2 Descriptive statistics of annual average temperature
The result obtained in this study agrees with the findings of an earlier study, which revealed that the mean annual maximum temperature ranges from 18.3 °C to 26.3 °C at Nefas Mewcha and Mekane Eyesus, while the mean annual minimum temperature ranges from 7.82 °C to 11.57 °C at the Nefas Mewcha and Addis Zemen stations in the south Gonder zone (Getachew 2018). Likewise, in the current study the mean annual minimum temperature ranges from 8.56 °C to 9.82 °C for Entoto and Bole, the mean annual maximum temperature from 18.25 °C to 23.52 °C, and the mean annual average temperature from 13.40 °C to 16.67 °C (Table 2). Conversely, the mean annual maximum temperature of 26.9 °C to 32.2 °C reported for the Addis Zemen station in the south Gonder zone (Getachew 2018) disagrees with the mean annual maximum temperatures found here for Entoto and Bole; this dissimilarity results from topographic variation and the geographical locations of the stations.
From the study it can be concluded that the trend analysis of annual temperature for Bole station shows a positive, statistically significant trend: as the computed p-value is lower than alpha (the significance level), the null hypothesis is rejected and the alternative hypothesis accepted. On the other hand, the trend analysis of annual temperature for Entoto station shows an increasing but statistically insignificant trend, so the null hypothesis H0 cannot be rejected, as the computed p-value is greater than the significance level alpha (0.05). Furthermore, both the Mann–Kendall trend test and Sen's slope estimator reveal a tendency of temperature increase in the study area. Thus, the increasing temperature trend due to climate change and other factors can lead to weather extremes in the capital city.
CSA: Central Statistical Agency
CF: Correction Factor
FDRE: Federal Democratic Republic of Ethiopia
H0: Null hypothesis
ITA: Innovative Trend Analysis
MK: Mann–Kendall
NMA: National Meteorological Agency
NY: Number of years
TFPW: Trend-Free Pre-Whitening process
UNFCCC: United Nations Framework Convention on Climate Change
UNICEF: United Nations International Children's Emergency Fund
Z: Normalized test statistic
Addinsoft (2020) XLSTAT statistical and data analysis solution. New York, USA. https://www.xlstat.com
Alemu ZA, Dioha MO (2020) Modelling scenarios for sustainable water supply and demand in Addis Ababa city. Ethiopia Environ Syst Res 9:7. https://doi.org/10.1186/s40068-020-00168-3
Alhaji UU, Yusuf AS, Edet CO, Oche CO, Agbo EP (2018) Trend analysis of temperature in Gombe State using mann-kendall trend test. J Sci Res Rep 20(3):1–9
Ali MA, Hoque MA, Kim PJ (2013) Mitigating global warming potentials of methane and nitrous oxide gases from rice paddies under different irrigation regimes. Ambio 42:357–368. https://doi.org/10.1007/s13280-012-0349-3
Anandhi A, Perumal S, Gowda PH et al (2013) Long-term spatial and temporal trends in frost indices in Kansas, USA. Climatic Change 120:169–181. https://doi.org/10.1007/s10584-013-0794-4
Asfaw A, Simane B, Hassen A, Bantider A (2018) Variability and time series trend analysis of rainfall and temperature in northcentral Ethiopia: a case study in Woleka sub-basin. Weather Climate Extremes 19:29–41. https://doi.org/10.1016/j.wace.2017.12.002
Berhane A, Hadgu G, Worku W, Abrha B (2020) Trends in extreme temperature and rainfall indices in the semi-arid areas of Western Tigray. Ethiopia Environ Syst Res 9:3. https://doi.org/10.1186/s40068-020-00165-6
Biazar SM, Ferdosi FB (2020) An investigation on spatial and temporal trends in frost indices in Northern Iran. Theor Appl Climatol 141:907–920. https://doi.org/10.1007/s00704-020-03248-7
Birhanu D, Kima H, Jangb C, Park P (2016) Flood risk and vulnerability of Addis Ababa city due to climate Change and urbanization. Proc Eng 154:696–702
Cherie GG, Fentaw A (2015) Climate change impact assessment of dire dam water supply. AAUCED HES, Ethiopia
CSA (2007) The 2007 population and housing census of Ethiopia: statistical Report for Addis Ababa City Administration, third Population and Housing Census, Ethiopia.
Dioha MO, Kumar A (2020) Exploring greenhouse gas mitigation strategies for agriculture in Africa: the case of Nigeria. Ambio. https://doi.org/10.1007/s13280-019-01293-9
Drápela K, Drápelová I (2011) Application of Mann-Kendall test and the Sen's slope estimates for trend detection in deposition data from Bílý Kříž (Beskydy Mts., the Czech Republic) 1997–2010, Beskydy, Mendelova univerzita v Brně., 4(2): 133–146. 1803–2451.
FDRE (2018) Ethiopian Government Portal. https://www.ethiopia.gov.et/addis-ababa-city-administration. Accessed 4 June 2020.
Feyissa G, Zeleke G, Bewket W, Gebremariam E (2018) Downscaling of future temperature and precipitation extremes in addis ababa under climate change. Nature, MDPI 6:58
Getachew B (2018) Trend analysis of temperature and rainfall in south Gonder zone, Ethiopia. J Degraded Mining Lands Manag 5(2):1111–1125. ISSN: 2339-076X (p); 2502-2458 (e). https://doi.org/10.15243/jdmlm.2018.052.1111
Gupta SP (2007) Statistical Methods. Seventh Revised and Enlarged Edition ed. Sultan Chand and Sons, Educational Publisher. New Delhi.
Hamed KH, Rao AR (1998) A modified mann-kendall trend test for autocorrelated data. J Hydrol 204(1–4):182–196. https://doi.org/10.1016/S0022-1694(97)00125-X
Hamed KH (2009) Enhancing the effectiveness of prewhitening in trend analysis of hydrologic data. J Hydrol 368:143–155
Helsel DR, Hirsch RM (2002) Statistical methods in water resources. Techniques of water-resources investigations of the United States geological survey, book 4, hydrologic analysis and interpretation. U. S. Geological survey.
IPCC (2007) Climate Change 2007: Impacts, Adaptation and Vulnerability. Contribution of Working Group II to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. Edited by Parry M, Canziani O, Palutikof J, Linden Pvd, Hanson C, Cambridge University Press 32 Avenue of the Americas, New York. pp, 10013–2473
Johannes GM, Mebratu K (2009) Local innovation in climate change adaptation by Ethiopian pastoralists. PROLINNOVA-report, Addis Ababa, Ethiopia
Kendall MG (1975) Rank correlation methods, 4th edn. Charles Griffin, London
Kisi O (2015) An innovative method for trend analysis of monthly pan evaporations. J Hydrol 527:1123–1129. https://doi.org/10.1016/j.jhydrol.2015.06.009
Kuriqi A, Ali R, Pham QB et al (2020) Seasonality shift and streamflow flow variability trends in central India. Acta Geophys. https://doi.org/10.1007/s11600-020-00475-4
Machida F, Andrzejak A, Matias R (2013) On the effectiveness of Mann-Kendall Test for detection of software aging. Conference Paper. https://doi.org/10.1109/ISSREW.2013.6688905
MAKESENS (2002) Mann-Kendall Test and Sen's Slope Estimates for the Trend of Annual Data, MSExcel template. Version 1.0 Freeware. Finnish Meteorological Institute, Finland.
Malik A, Kumar A, Guhathakurta P, Kisi O (2019) Spatial-temporal trend analysis of seasonal and annual rainfall (1966–2015) using innovative trend analysis method with significance test. Arab J Geosci 12:328. https://doi.org/10.1007/s12517-019-4454-5
Mondal A, Kundu S, Mukhopadhyay A (2012) Rainfall trend analysis by Mann-Kendall test: a case study of North-Eastern part of Cuttack district, Orissa. Int J Geol Earth Environ Sci 2(1):70–78
Öztopal A, Sen Z (2016) Innovative trend methodology applications to precipitation records in Turkey. Water Resour Manage. https://doi.org/10.1007/s11269-016-1343-5
Pal AB, Khare D, Mishra PK, Singh L (2017) Trend analysis of rainfall, temperature and runoff data: a case study of Rangoon watershed in Nepal. Int J Students' Res Technol Manag 5(3):21–38. https://doi.org/10.18510/ijsrtm.2017.535
Patz JA, Campbell-Lendrum D, Holloway T, Foley JA (2005) Impact of regional climate change on human health. Nature 438:310–317
Pohlert T (2020) Non-Parametric Trend Tests and Change-Point Detection. https://creativecommons.org/licenses/by-nd/4.0/
Sanikhani H, Kisi O, Mirabbasi R, Meshram SG (2018) Trend analysis of rainfall pattern over the Central India during 1901–2010. Arab J Geosci 11:437. https://doi.org/10.1007/s12517-018-3800-3
Sen PK (1968) Estimates of the regression coefficient based on Kendall's tau. J Am Stat Assoc 63:1379–1389
Şen Z (2011) Innovative trend analysis methodology. J Hydrol Eng 17:1042–1046. https://doi.org/10.1061/(ASCE)HE.1943-5584.0000556
Su BD, Jiang T, Jin WB (2006) Recent trends in observed temperature and precipitation extremes in the Yangtze River basin. China Appl Climatol 83:139–151. https://doi.org/10.1007/s00704-005-0139-y
Tan JG, Zheng YF, Tang X, Guo CY, Li LP, Song GX et al (2010) The urban heat island and its impact on heat waves and human health in Shanghai. Int J Biometeorol 54:75–84
UNFCCC (2007) Climate change: impacts. vulnerabilities and adaptation in developing countries, United Nations Framework Convention on Climate Change (UNFCCC), Bonn
WHO & UNICEF (2006) Meeting the MDG drinking water and sanitation target: the urban and rural challenge of the decade. Switzerland, Geneva
Wu H, Qian H (2017) Innovative trend analysis of annual and seasonal rainfall and extreme values in Shaanxi, China, since the 1950s. Int J Climatol 37:2582–2592. https://doi.org/10.1002/joc.4866
Yadav R, Tripathi SK, Pranuthi G, Dubey SK (2014) Trend analysis by Mann-Kendall test for precipitation and temperature for thirteen districts of Uttarakhand. J Agrometeorol 16(2):164–171
Yue S, Pilon P, Phinney B, Cavadias G (2002) The influence of autocorrelation on the ability to detect trend in hydrological series. Hydrol Process 16(9):1807–1829. https://doi.org/10.1002/hyp.1095
Zaiontz C (2020) Mann-Kendall test. Real Statistics Using Excel, © 2012–2019.
Zhang Q, Xu C-Y, Zhang Z, Chen YD, Liu C-l, Lin H (2008) Spatial and temporal variability of precipitation maxima during 1960–2005 in the Yangtze River basin and possible association with large-scale circulation. J Hydrol 353:215–227
We want to express our greatest appreciation to the National Meteorological Agency (NMA) for providing the necessary data. The opinions expressed herein are the authors' own and do not necessarily express the views of the NMA.
No funding has been received for this study.
Ethiopian Public Health Institute, P.O.Box: 1242, Addis Ababa, Ethiopia
Zinabu Assefa Alemu
Department of Energy and Environment, TERI School of Advanced Studies, 10 Institutional Area, Vasant Kunj, New Delhi, 110 070, India
Michael O. Dioha
ZA performed the study design, statistical analysis of results, data interpretation, and writing the manuscript. MO Conceptualization, draft review and edit of the manuscript. All authors read and approved the final manuscript.
Correspondence to Zinabu Assefa Alemu.
Ethics approval and consent to participate
Consent to publication
The authors have no competing interest to declare.
Alemu, Z.A., Dioha, M.O. Climate change and trend analysis of temperature: the case of Addis Ababa, Ethiopia. Environ Syst Res 9, 27 (2020). https://doi.org/10.1186/s40068-020-00190-5
Mann-kendall test
Sen's slope | CommonCrawl |
Measures of central tendency describe a set of data by identifying the central position in the data set as a single representative value. There are three measures of central tendency commonly used in statistics: mean, median, and mode. The mean is the most common measure of central tendency used to describe a data set.
We come across new data every day. We find them in newspapers, articles, in our bank statements, mobile and electricity bills. Now the question arises whether we can figure out some important features of the data by considering only certain representatives of the data. This is possible by using measures of central tendency. In the following sections, we will look at the different measures of central tendency and the methods to calculate them.
1. What are Measures of Central Tendency?
2. Mean
3. Median
4. Mode
5. Empirical Relationship Between the Three Measures of Central Tendency
6. Measures of Central Tendency and Type of Distribution
7. FAQs on Measures of Central Tendency
What are Measures of Central Tendency?
Measures of central tendency are the values that describe a data set by identifying the central position of the data. There are 3 main measures of central tendency - Mean, Median and Mode.
Mean- Sum of all observations divided by the total number of observations.
Median- The middle or central value in an ordered set.
Mode- The most frequently occurring value in a data set.
Measures of Central Tendency Definition
The central tendency is defined as the statistical measure that can be used to represent the entire distribution or a dataset using a single value called a measure of central tendency. Any of the measures of central tendency provides an accurate description of the entire data in the distribution.
Measures of Central Tendency Example
Let us understand the concept of the measures of central tendency using an example. The monthly salary of an employee for the 5 months is given in the table below,
Month Salary
January $105
February $95
March $105
April $105
May $100
Suppose, we want to express the salary of the employee using a single value and not 5 different values for 5 months. This value that can be used to represent the data for salaries for 5 months here can be referred to as the measure of central tendency. The three possible ways to find the central measure of the tendency for the above data are,
Mean: The mean salary can be used as one of the measures of central tendency, i.e., x̄ = (105 + 95 + 105 + 105 + 100)/5 = $102.
Mode: If we use the most frequently occurring value to represent the above data, i.e., $105, the measure of central tendency would be mode.
Median: If we use the central value of the ordered set of salaries, given as $95, $100, $105, $105, $105, i.e., $105, then the measure of central tendency here would be the median.
We can use the following table for reference to check the best measure of central tendency suitable for a particular type of variable:
Type of Variable Best Suitable Measure of Central Tendency
Nominal Mode
Ordinal Median
Interval/Ratio (not skewed) Mean
Interval/Ratio (skewed) Median
Let us study the following measures of central tendency, their formulas, usage, and types in detail below.
Mean as a Measure of Central Tendency
The mean (or arithmetic mean), often called the average, is probably the measure of central tendency you are most familiar with. It is simply the sum of all the components in a group or collection, divided by the number of components.
We generally denote the mean of a given data-set by x̄, pronounced "x bar". The formula to calculate the mean for ungrouped data to represent it as the measure is given as,
For a set of observations: Mean = Sum of the terms/Number of terms
For a set of grouped data: Mean, x̄ = Σfx/Σf
x̄ = the mean value of the set of given data.
f = frequency of each class
x = mid-interval value of each class
Example: The weights of 8 boys in kilograms: 45, 39, 53, 45, 43, 48, 50, 45. Find the mean weight for the given set of data.
Therefore, the mean weight of the group:
Mean = Sum of the weights/Number of boys
= (45 + 39 + 53 + 45 + 43 + 48 + 50 + 45)/8
= 368/8
Thus, the mean weight of the group is 46 kilograms.
When Not to Use the Mean as the Measure of Central Tendency?
Using the mean as the measure of central tendency has one major disadvantage: the mean is particularly sensitive to outliers, that is, to values that are unusually large or small compared with the rest of the data.
Median as a Measure of Central Tendency
The median, one of the measures of central tendency, is the middle-most observation of a data set after the data have been arranged in ascending order. The major advantage of using the median as a measure of central tendency is that it is less affected by outliers and skewed data. We can calculate the median for grouped or ungrouped data using the median formulas below.
For ungrouped data: For odd number of observations, Median = [(n + 1)/2]th term. For even number of observations, Median = [(n/2)th term + ((n/2) + 1)th term]/2
For grouped data: Median = l + [((n/2) - c)/f] × h
l = Lower limit of the median class
c = Cumulative frequency of the class preceding the median class
f = Frequency of the median class
h = Class size
n = Number of observations
Median class = Class where n/2 lies
Let us use the same example given above to find the median now.
Example: The weights of 8 boys in kilograms: 45, 39, 53, 45, 43, 48, 50, 45. Find the median.
Arranging the given data set in ascending order: 39, 43, 45, 45, 45, 48, 50, 53
Total number of observations = 8
For even number of observation, Median = [(n/2)th term + ((n/2) + 1)th term]/2
⇒ Median = (4th term + 5th term)/2 = (45 + 45)/2 = 45
Mode as a Measure of Central Tendency
The mode is the measure of central tendency defined as the value that appears most often in the given data, i.e., the observation with the highest frequency. The mode for grouped or ungrouped data can be calculated using the mode formulas given below,
Mode for ungrouped data: Most recurring observation in the data set.
Mode for grouped data: L + h \(\frac{\left(f_{m}-f_{1}\right)}{\left(f_{m}-f_{1}\right)+\left(f_{m}-f_{2}\right)}\)
L is the lower limit of the modal class
h is the size of the class interval
f\(_m\) is the frequency of the modal class
f\(_1\) is the frequency of the class preceding the modal class
f\(_2\) is the frequency of the class succeeding the modal class
Example: The weights of 8 boys in kilograms: 45, 39, 53, 45, 43, 48, 50, 45. Find the mode.
Since the mode is the most occurring observation in the given set.
Mode = 45
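As a quick check, the three ungrouped measures for this data set can be reproduced with Python's standard `statistics` module (shown here purely as an illustration):

```python
from statistics import mean, median, mode

weights = [45, 39, 53, 45, 43, 48, 50, 45]   # the 8 boys' weights (kg) from the examples
print(mean(weights))    # 46
print(median(weights))  # 45.0 -- average of the 4th and 5th ordered values
print(mode(weights))    # 45
```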
Empirical Relation Between Measures of Central Tendency
The three measures of central tendency i.e. mean, median, and mode are closely connected by the following relations (called an empirical relationship).
2Mean + Mode = 3Median
For instance, if we are asked to calculate the mean, median, and mode of continuous grouped data, then we can calculate mean and median using the formulae as discussed in the previous sections and then find mode using the empirical relation.
Example: The median and mode for a given data set are 56 and 54 respectively. Find the approximate value of the mean for this data set.
2Mean = 3Median - Mode
2Mean = 3 × 56 - 54
2Mean = 168 - 54 = 114
Mean = 57
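The same arithmetic can be checked in a couple of lines of Python (purely illustrative):

```python
median_val, mode_val = 56, 54
mean_val = (3 * median_val - mode_val) / 2   # from 2*Mean = 3*Median - Mode
print(mean_val)                              # 57.0
```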
Measures of Central Tendency and Type of Distribution
Any data set is a distribution of 'n' number of observations. The best measure of the central tendency of any given data depends on this type of distribution. Some types of distributions in statistics are given as,
Normal (symmetrical) distribution
Skewed distribution
Let us understand how the type of distribution can affect the values of different measures of central tendency.
Measures of Central Tendency for Normal Distribution
Here is the frequency distribution table for a set of data:
Observation 6 9 12 15 18 21
Frequency 5 10 15 10 5 0
We can observe the histogram for the above-given symmetrical distribution as shown below,
The above histogram displays a symmetrical distribution of data. Finding the mean, median, and mode for this data-set, we observe that the three measures of central tendency mean, median, and mode are all located in the center of the distribution graph. Thus, we can infer that in a perfectly symmetrical distribution, the mean and the median are the same. The above-given example had one mode, i.e, it is a unimodal set, and therefore the mode is the same as the mean and median. In a symmetrical distribution that has two modes, i.e. the given set is bimodal, the two modes would be different from the mean and median.
Measures of Central Tendency for Skewed Distribution
For skewed distributions, if the distribution of data is skewed to the left, the mean is less than the median, which is often less than the mode. If the distribution of data is skewed to the right, then the mode is often less than the median, which is less than the mean. Let us understand each case using different examples.
Measures of Central Tendency for Right-Skewed Distribution
Consider the following data-set and plot the histogram for the same to check the type of distribution.
Observation 6 9 12 15 18 21
Frequency 17 19 8 5 3 2
We observe that the given data set is an example of a right (positively) skewed distribution. Calculating the three measures of central tendency, we find mean = 10, median = 9, and mode = 9. We therefore infer that if the distribution of data is skewed to the right, the mode is less than the mean, and the median generally lies between the mode and the mean.
Measures of Central Tendency for Left-Skewed Distribution
Observation 6 9 12 15 18 21
Frequency 2 13 5 10 15 19
We observe that the given data set is an example of a left (negatively) skewed distribution. Calculating the three measures of central tendency, we find mean = 15.75, median = 18, and mode = 21. We therefore infer that if the distribution of data is skewed to the left, the mode is greater than the median, which is greater than the mean.
Let us summarize the above observations using the graphs given below.
Important Notes on Measures of Central Tendency:
The three most common measures of central tendency are mean, median, and mode.
Mean is simply the sum of all the components in a group or collection, divided by the number of components.
The value of the middle-most observation obtained after arranging the data in ascending order is called the median of the data.
The value which appears most often in the given data i.e. the observation with the highest frequency is called the mode of data.
The three measures of central tendency i.e. mean, median and mode are closely connected by the following relations (called an empirical relationship): 2Mean + Mode = 3Median
Examples on Measures of Central Tendency
Example 1: The mean monthly salary of 10 workers of a group is $1445. One more worker whose monthly salary is $1500 has joined the group. Find the mean monthly salary of 11 workers of the group using the measures of central tendency formula.
Here, n=10, x̅ =1445
Using the formula,
x̅ = ∑f\(_i\)x\(_i\)/n
Therefore ∑x\(_i\) = x̅ × n
∑x\(_i\) =1445 ×10
=14450
10 workers salary = $14450
11 workers salary = $14450 + 1500 = $15950
Average salary = 15950/11
=1450
Answer: Average salary of 11 workers = $1450
Example 2: The following table indicates the data on the number of patients visiting a hospital in a month. Find the average number of patients visiting the hospital in a day using the measures of central tendency formula.
Number of days visiting hospital
In this case, we find the class-mark (also called as mid-point of a class) for each class.
Note: Class-mark = (lower limit + upper limit) / 2
Let x\(_1\), x\(_2\), x\(_3\) . . . x\(_n\) be the class marks of the respective classes.
Hence, we get the following table
Classmark (x\(_i\)) frequency (f\(_i\)) x\(_i\)f\(_i\)
Total ∑f\(_i\) = 36 ∑f\(_i\)x\(_i\) = 1040
∴ Mean = x = ∑x\(_i\)f\(_i\) / ∑f\(_i\) = 1040/36 = 28.89
Answer: Mean of patients visiting the hospital in a day = 28.89
Example 3: A survey on the heights (in cm) of 50 girls of a class was conducted at a school and the data obtained is given in the form of:
Height (in cm) 120-130 130-140 140-150 150-160 160-170 Total
Number of girls 4 7 12 20 8 50
Find the mode of the above data using the measures of central tendency formula.
Modal class = 150 - 160 [as it has maximum frequency]
l = 150, h = 10, f\(_m\) = 20, f\(_1\) = 12, f\(_2\) = 8
Mode = l + [(f\(_m\) - f\(_1\))/(2f\(_m\) - f\(_1\) - f\(_2\))] × h
= 150 + [(20 - 12)/(2 × 20 - 12 - 8)] × 10
= 150 + 4
Answer: Mode = 154.
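A small illustrative Python helper for the grouped-data mode formula, using f2 = 8 from the frequency table above (the function name is ours):

```python
def grouped_mode(l, h, fm, f1, f2):
    """Mode = l + (fm - f1) / ((fm - f1) + (fm - f2)) * h for grouped data."""
    return l + (fm - f1) / ((fm - f1) + (fm - f2)) * h

# Example 3: modal class 150-160, f1 = 12 (preceding class), f2 = 8 (succeeding class).
print(grouped_mode(l=150, h=10, fm=20, f1=12, f2=8))   # 154.0
```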
FAQs on Measures of Central Tendency
What are the Measures of Central Tendency?
Measures of central tendency are those single entities or values that describe a set of data by identifying the central position in the data set. The most common measures of central tendency are the arithmetic mean, the median, and the mode.
What are Examples of Measures of Central Tendency?
Central tendency is a statistic that represents the single value of the entire population or a dataset. Some of the important examples of measures of central tendency include mode, median, arithmetic mean and geometric mean, etc.
What is the Definition of Measures of Central Tendency?
A measure of central tendency is a single value that attempts to describe a set of data by identifying the central position within that set of data. As such, measures of central tendency are sometimes called measures of central location.
What are Good Measures of Central Tendency?
The mean is the most frequently used measure of central tendency because it uses all values in the data set to give you an average. For data from skewed distributions, the median is better than the mean because it isn't influenced by extremely large values.
Where Can We Use Measures of Central Tendency in Our Daily Affairs?
Central tendency is very useful in psychology. It lets us know what is normal or 'average' for a set of data. It also condenses the data set down to one representative value, which is useful when you are working with large amounts of data.
What is the Best Measure of Central Tendency?
The best measure of central tendency depends on the type of variables.
Nominal type of variable distribution- mode
Ordinal type of variable distribution- median
Skewed type of variable distribution- mean, median
What is the Difference Between Mean and Median as Measures of Central Tendency?
The mean is the average (or arithmetic mean) of the values of a data set, whereas the median is the middlemost value of the data.
How Do you Find the Measures of Central Tendency?
The measures of central tendency can be found using the formulas of mean, median, or mode in most cases. As we know, the mean is the average of a given data set, the median is the middlemost data value and the mode represents the most frequently occurring data value in the set.
Capacity analysis in different systems exploiting mobility of VANETs
Wang, Miao
Improving road safety and traffic efficiency has been a long-term endeavor not only for governments but also for the automobile industry and academia. After the U.S. Federal Communications Commission (FCC) allocated a 75 MHz spectrum at 5.9 GHz for vehicular communications, the vehicular ad hoc network (VANET), as an instantiation of the mobile ad hoc network (MANET) with much higher node mobility, opened a new door to combating road fatalities. In VANETs, a variety of applications ranging from safety-related (e.g. emergency report, collision warning) to non-safety-related (e.g. infotainment and entertainment) can be enabled by vehicle-to-vehicle (V2V) and vehicle-to-roadside (V2R) communications. However, the flourishing of VANETs still hinges on fully understanding and managing the challenges that concern the public, for example, capacity and connectivity issues due to the high mobility of vehicles. In this thesis, we investigate how vehicle mobility can impact performance in three important VANET-involved systems, i.e., pure VANETs, VANET-enhanced intelligent transportation systems (ITS), and fast electric vehicle (EV) charging systems. First, in pure VANETs, our work shows that the network data-traffic can be balanced and the network throughput can be improved with the help of vehicle mobility differentiation. Furthermore, leveraging the vehicular communications of VANETs, mobility-aware real-time path planning can be designed to smooth the vehicle traffic in an ITS, through which traffic congestion in urban scenarios can be effectively relieved. In addition, with consideration of the range anxiety caused by mobility, coordinated charging can provide efficient charging plans for electric vehicles (EVs) to improve the overall energy utilization while preventing an electric power system from overloading. To this end, we try to answer the following questions: Q1) How to utilize mobility characteristics of vehicles to derive the achievable asymptotic throughput capacity in pure VANETs? Q2) How to design path planning for mobile vehicles to maximize spatial utility based on mobility differentiation, in order to approach vehicle-traffic capacity in a VANET-enhanced ITS? Q3) How to develop charging strategies based on the mobility of electric vehicles to improve the electricity utility, in order to approach load capacities of charging stations in a VANET-enhanced smart grid? To achieve the first objective, we consider the unique features of VANETs and derive the scaling law of VANET throughput capacity in the data uploading scenario. We show that in both free-space propagation and non-free-space propagation environments, the achievable throughput capacity of an individual vehicle scales as $\Theta\left(\frac{1}{\log n}\right)$, with $n$ denoting the population of a set of homogeneous vehicles in the network. To achieve the second objective, we first establish a VANET-enhanced ITS, which incorporates VANETs to enable real-time communications among vehicles, road side units (RSUs), and a vehicle-traffic server in an efficient way. Then, we propose a real-time path planning algorithm, which not only improves the overall spatial utilization of a road network but also reduces the average vehicle travel cost by keeping vehicles from getting stuck in congestion. To achieve the third objective, we investigate a smart-grid-involved EV fast charging system with enhanced communication capabilities, i.e., a VANET-enhanced smart grid.
It exploits VANETs to support real-time communications among RSUs and highly mobile EVs for real-time vehicle mobility information collection or charging decision dispatch. Then, we propose a mobility-aware coordinated charging strategy for EVs, which not only improves the overall energy utilization while avoiding power system overloading, but also addresses the range anxieties of individual EVs by reducing the average travel cost. In summary, the analysis developed and the scaling law derived in $Q1$ of this thesis is practical and fundamental to reveal the relationship between the mobility of vehicles and the network performance in VANETs. And the strategies proposed in $Q2$ and $Q3$ of the thesis are meaningful in exploiting/leveraging the vehicle mobility differentiation to improve the system performance in order to approach the corresponding capacities.
Miao Wang (2015). Capacity analysis in different systems exploiting mobility of VANETs. UWSpace. http://hdl.handle.net/10012/9249 | CommonCrawl |
Why valuations when defining FOL?
Why does one need valuations in order to define the semantics of first-order logic? Why not just define it for sentences and also define formula substitutions (in the expected way)? That should be enough:
$$M \models \forall x. \phi \iff \text{for all }d\in \mathrm{dom}(M),\ M \models \phi[x\mapsto d]$$
$$M,v \models \forall x. \phi \iff \text{for all }d\in \mathrm{dom}(M),\ M, v[x\mapsto d] \models \phi$$
lo.logic
Emil Jeřábek
It is perfectly possible to define satisfaction using just sentences as you suggest, and in fact, it used to be the standard approach for quite some time.
The drawback of this method is that it requires to mix semantic objects into syntax: in order to make an inductive definition of satisfaction of sentences in a model $M$, it is not sufficient to define it for sentences of the original language of $M$. You need to first expand the language with individual constants for all elements of the domain of $M$, and then you can define satisfaction for sentences in the expanded language. This is, I believe, the main reason why this approach went into disuse; if you use valuations, you can maintain a clear conceptual distinction between syntactic formulas of the original language and semantic entities that are used to model them.
Emil Jeřábek
I think it depends somewhat on whether the author is approaching things from a proof theory side or a model theory side. In the case of proof theory, the original language is of interest for studying provability of sentences, but in the case of model theory the expanded language is more useful for studying definability. So for example Marker's model theory book defines satisfaction via the extended language, but Enderton's intro logic book uses valuations. – Carl Mummert May 3 '12 at 21:50
The meaning of a closed formula is a truth value $\bot$ or $\top$. The meaning of a formula containing a free variable $x$ ranging over a set $A$ is a function from $A$ to truth values. Functions $A \to \lbrace \bot, \top \rbrace$ form a complete Boolean algebra, so we can interpret first-order logic in it.
Similarly, a closed term $t$ denotes an element of some domain $D$, while a term with a free variable denotes a function $D \to D$ because the element depends on the value of the variable.
It is therefore natural to interpret a formula $\phi(x_1, \ldots, x_n)$ with free variables $x_1, \ldots, x_n$ in the complete Boolean algebra $D^n \to \lbrace{\bot, \top\rbrace}$ where $D$ is the domain of range of the variables. Whether you phrase the interpretation in this complete Boolean algebra in terms of valuations or otherwise is a technical matter.
Mathematicians seem to be generally confused about free variables. They think they are implicitly universally quantified or some such. The cause of this is a meta-theorem stating that $\phi(x)$ is provable if and only if its universal closure $\forall x . \phi(x)$ is provable. But there is more to formulas than their provability. For example, $\phi(x)$ is not generally equivalent to $\forall x . \phi(x)$, so we certainly cannot pretend that these two formulas are interchangeable.
formulas with free variables are unavoidable, at least in the usual first-order logic,
the meaning of a formula with a free variable is a truth function,
therefore in semantics we are forced to consider complete Boolean algebras $D^n \to \lbrace\bot, \top\rbrace$, which is where valuations come from,
the universal closure of a formula is not equivalent to the original formula,
it is a mistake to equate the meaning of a formula with the meaning of its universal closure, just as it is a mistake to equate a function with its codomain.
Andrej Bauer
Cool. Clear and simple answer! I wonder what the logicians have to say about this? – Uday Reddy May 6 '12 at 12:29
I am one of "the logicians", it's written on my certificate of PhD. – Andrej Bauer May 6 '12 at 16:39
Simply because it's more natural to say "$x > 2$ is true when $x$ is $\pi$" (that is, on a valuation which sends $x$ to $\pi$) than "$x > 2$ is true when we substitute $\pi$ (the number itself, not the Greek letter) for $x$". Technically the approaches are equivalent.
Alexey Romanov
I want to strengthen Alexey's answer, and claim that the reason is that the first definition suffers from technical difficulties, and not just that the second (standard) way is more natural.
Alexey's point is that the first approach, i.e.:
$M \models \forall x . \phi \iff$ for all $d \in M$: $M \models \phi[x\mapsto d]$
mixes syntax and semantics.
For example, let's take Alexey's example:
${(0,\infty)} \models x > 2$
Then in order to show that, one of the things we have to show is: $(0,\infty) \models \pi > 2$
The entity $\pi > 2$ is not a formula, unless our language includes the symbol $\pi$, that is interpreted in the model $M$ as the mathematical constant $\pi \approx 3.141\ldots$.
A more extreme case would be to show that $M\models\sqrt[15]{15,000,000} > 2$, and again, the right hand side is a valid formula only if our language contains a binary radical symbol $\sqrt{}$, that is interpreted as the radical, and number constants $15$ and $15,000,000$.
To ram the point home, consider what happens when the model we present has a more complicated structure. For example, instead of taking real numbers, take Dedekind cuts (a particular implementation of the real numbers).
Then the elements of your model are not just "numbers". They are pairs of sets of rational numbers $(A,B)$ that form a Dedekind cut.
Now, look at the object $(\{q \in \mathbb Q \mid q < 0 \vee q^2 < 5\}, \{q \in \mathbb Q \mid 0 \leq q \wedge q^2 > 5\}) > 2$, which is what we get when we "substitute" the Dedekind cut describing $\sqrt{5}$ into the formula $x > 2$. What is this object? It's not a formula --- it has sets, and pairs, and who knows what in it. It's potentially infinite.
So in order for this approach to work well, you need to extend your notion of "formula" to include such mixed entities of semantic and syntactic objects. Then you need to define operations such as substitutions on them. But now substitutions would no longer be syntactic functions: $[ x \mapsto t]: Terms \to Terms$. They would be operations on very very large collections of these generalised, semantically mixed terms.
It's possible you will be able to overcome these technicalities, but I guess you will have to work very hard.
The standard approach keeps the distinction between syntax and semantics. What we change is the valuation, a semantic entity, and keep formulae syntactic.
Ohad Kammar
The key point to the first approach is that given a model $M$ in a language $L$ you first expand to a language $L(M)$ in which there is a new constant symbol for every element in $M$. Then you can just substitute these constant symbols into formulas in the usual way. There are no actual technical difficulties. – Carl Mummert May 3 '12 at 21:45
Banach space
''B-space'' {{MSC|46B|46E15}} {{TEX|done}} $ \newcommand{\abs}[1]{\left|#1\right|} \newcommand{\norm}[1]{\left\|#1\right\|} \newcommand{\set}[1]{\left\{#1\right\}} $ A complete normed [[vector space]]. The problems involved in Banach spaces are of different types: the geometry of the unit ball, the geometry of subspaces, the linear topological classification, series and sequences in Banach spaces, best approximations in Banach spaces, functions with values in a Banach space, etc. Regarding the theory of operators in Banach spaces it should be pointed out that many theorems are directly related to the geometry and the topology of Banach spaces. == History == The function spaces introduced by D. Hilbert, M. Fréchet and F. Riesz between 1904 and 1918 served as the starting point for the theory of Banach spaces. It is in these spaces that the fundamental concepts of strong and weak convergence, compactness, linear functional, linear operator, etc., were originally studied. Banach spaces were named after S. Banach who in 1922 began a systematic study of these spaces {{cite|Ba}}, based on axioms introduced by himself, and who obtained highly advanced results. The theory of Banach spaces developed in parallel with the general theory of [[Linear topological space|linear topological spaces]]. These theories mutually enriched one another with new ideas and facts. Thus, the idea of semi-norms, taken from the theory of normed spaces, became an indispensable tool in constructing the theory of locally convex linear topological spaces. The ideas of weak convergence of elements and linear functionals in Banach spaces ultimately evolved to the concept of weak topology. The theory of Banach spaces is a thoroughly studied branch of functional analysis, with numerous applications in various branches of mathematics — directly or by way of the theory of operators. == Generalities == A Banach space $X$ is a [[vector space]] over $\R$ or $\C$ with a [[norm]] $\norm{\cdot}$ which is [[Completeness (in topology)|complete]] with respect to this norm, i.e., every [[Cauchy sequence]] in $X$ converges. For two Banach spaces $X$, $Y$, denote by $B(X,Y)$ the space of linear continuous maps from $X$ to $Y$. It is in itself a Banach space with respect to the norm $$\norm{T} = \sup_{x \neq 0} \frac{\norm{Tx}}{\norm{x}}.$$ == Examples == The Banach spaces encountered in analysis are mostly sets of functions or sequences of numbers which are subject to certain conditions. # $\ell_p$, $p \geq 1$, is the space of numerical sequences $\set{\xi_n}$ for which $$ \sum_{n=1}^\infty \abs{\xi_n}^p < \infty$$ with the norm $$ \norm{x} = \left( \sum_{n=1}^\infty \abs{\xi_n}^p \right)^{1/p}. $$ # $m$ is the space of bounded numerical sequences with the norm $$ \norm{x} = \sup_n\abs{\xi_n}.$$ # $c$ is the space of convergent numerical sequences with the norm $$\norm{x} = \sup_n\abs{\xi_n}.$$ # $c_0$ is the space of numerical sequences which converge to zero with the norm $$ \norm{x} = \max_n\abs{\xi_n}.$$ # $C[a,b]$ is the space of continuous functions $x=x(t)$ on $[a,b]$ with the norm $$\norm{x} = \max_{a \leq t \leq b}\abs{x(t)}.$$ # $C[K]$ is the space of continuous functions on a compactum $K$ with the norm $$\norm{x} = \max_{t \in K}\abs{x(t)}$$. # $C^n[a,b]$ is the space of functions with continuous derivatives up to and including the order $n$, with the norm $$\norm{x} = \sum_{k=0}^n \max_{a \leq t \leq b}\abs{x^{(k)}(t)}. 
$$ # $C^n[I^m]$ is the space of all functions defined in an $m$-dimensional cube that are continuously differentiable up to and including the order $n$, with the norm of uniform boundedness in all derivatives of order at most $n$. (Cf. [[Hölder space]].) # $M[a,b]$ is the space of bounded measurable functions with the norm $$\norm{x} = \mathop{\mathrm{ess\;max}}_{a \leq t \leq b} \abs{x(t)}.$$ # $A(D)$ is the space of functions which are [[analytic function|analytic]] in the open unit disc $D$ and are continuous in the closed disc $\bar{D}$, with the norm $$\norm{x} = \max_{z \in \bar{D}}\abs{x(z)}. $$ # $L_p(S ; \Sigma, \mu)$, $p \geq 1$, is the space of functions $x(s)$ defined on a set $S$ provided with a countably-additive [[measure]] $\mu$, with the norm $$\norm{x} = \left( \int_S \abs{x(s)}^p \,\mu(\mathrm{d}s) \right)^{1/p}.$$ (Cf. [[Lp spaces|$L^p$ spaces]].) # $L_p[a,b]$, $p \geq 1$, is a special case of the space $L_p(S ; \Sigma, \mu)$. It is the space of [[Lebesgue measure|Lebesgue-measurable]] functions, summable of degree $p$, with the norm $$\norm{x} = \left( \int_a^b \abs{x(s)}^p \,\mathrm{d}s \right)^{1/p}.$$ # $AP$ is the Bohr space of almost-periodic functions, with the norm $$\norm{x} = \sup_{-\infty < t < \infty} \abs{x(t)}. $$ The spaces $C[a,b]$, $C^n[a,b]$, $L_p[a,b]$, $c$, $\ell_p$ are separable; the spaces $M[a,b]$, $m$, $AP$ are non-separable; $C[K]$ is separable if and only if $K$ is a compact metric space. Other examples include [[Sobolev space]]s and the [[Hardy spaces|Hardy space]] $\mathcal{H}^1$. All [[Hilbert space]]s are a forteriori Banach spaces. == Quotients == A (closed linear) subspace $Y$ of a Banach space, considered apart from the enveloping space $X$, is a Banach space. The [[quotient space]] $X/Y$ of a normed space by a subspace $Y$ is a normed space if the norm is defined as follows. Let $Y_1 = x_1 + Y$ be a coset. Then $$ \norm{Y_1} = \inf_{y \in Y} \norm{x_1 + y}. $$ If $X$ is a Banach space, then $X/Y$ is a Banach space as well. In this case, if $Z$ is another normed space and $T\in B(X,Z)$ fulfills $T(Y)=\{0\}$, then there exists $\hat T \in B(X/Y,Z)$ such that $T = \hat T \circ Q$ and $\norm{T}=\norm{\hat T}$, where $Q:X \to X/Y$ is the quotient mapping. {{cite|KR1|Theorem 1.5.8}} == Linear functionals, dual space == The set of all continuous [[linear functional]]s defined on the normed space $X$, with the norm $$ \norm{f} = \sup_{x \in X} \frac{\abs{f(x)}}{\norm{x}}, \quad x \neq 0 $$ is said to be the dual space of $X$, and is denoted by $X^*$. It is a Banach space. === Hahn-Banach theorem === Banach spaces satisfy the [[Hahn–Banach theorem]] on the extension of linear functionals: If a linear functional is defined on a subspace $Y$ of a normed space $X$, it can be extended, while preserving its linearity and continuity, onto the whole space $X$. Moreover, the extension can be made to have the same norm: $$ \norm{f}_X = \sup_{x \in X} \frac{\abs{f(x)}}{\norm{x}} = \norm{f}_Y = \sup_{y \in Y} \frac{\abs{f(y)}}{\norm{y}}. $$ Even a more general theorem is valid: Let a real-valued function $p(x)$ defined on a linear space satisfy the conditions: $$ p(x+y) \leq p(x) + p(y), \quad p(\lambda x) = \lambda p(x), \quad \lambda \geq 0, \quad x,y \in X, $$ and let $f(x)$ be a real-valued linear functional defined on a subspace $Y \subset X$ and such that $$ f(x) \leq p(x), \quad x \in Y. $$ Then there exists a linear functional $F(x)$ defined on the whole of $X$ such that $$ F(x) = f(x), \quad x \in Y; \quad F(x) \leq p(x), \quad x \in X. 
$$ A consequence of the Hahn–Banach theorem is the "inverse" formula which relates the norms of $X$ and $X^*$: $$ \norm{x} = \max_{f \in X^*} \frac{\abs{f(x)}}{\norm{f}},\quad f \neq 0, \quad x \in X. $$ The maximum in this formula is attained for some $f=f_X\in X^*$. Another important consequence is the existence of a separating set of continuous linear functionals, meaning that for any $x_1 \neq x_2 \in X$ there exists a linear functional $f$ on $X$ such that $f(x_1) \neq f(x_2)$ (cf. [[Complete set of functionals]]). === General structure of linear functionals === The general form of a linear functional is known for many specific Banach spaces. Thus, on $L_p[a,b]$, $p>1$, all linear functionals are given by a formula $$ f(x) = \int_a^b x(t)y(t) \,\mathrm{d}t, $$ where $y \in L_q[a,b]$, $1/p + 1/q = 1$, and any function $y(t) \in L_q$ defines a linear functional $f$ by this formula, moreover $$ \norm{f} = \left( \int_a^b \abs{y(t)}^q \,\mathrm{d}t \right)^{1/q}. $$ Thus, the dual space of $L_p$ is $L_q$: $L_p^* = L_q$. Linear functionals on $L_1[a,b]$ are defined by the same formula, but in this case $y \in M$, so that $L_1^* = M$. === Biduals, reflexivity === The space $X^{**}$, dual to $X^*$, is said to be the second dual or bidual. Third, fourth, etc., dual spaces are defined in a similar manner. Each element in $X$ may be identified with some linear functional defined on $X^*$: $$ \text{$F(f) = f(x)$ for all $f \in X^*$ ($F \in X^{**}$, $x \in X$),} $$ where $\norm{F} = \norm{x}$. One may then regard $X$ as a subspace of the space $X^{**}$ and $X \subset X^{**} \subset X^\text{IV} \subset \cdots$, $X^* \subset X^{***} \subset \cdots$. If, as a result of these inclusions, the Banach space coincides with its second dual, it is called [[Reflexive space|reflexive]]. In such a case all inclusions are equalities. If $X$ is not reflexive, all inclusions are strict. If the quotient space $X^{**}/X$ has finite dimension $n$, $X$ is said to be quasi-reflexive of order $n$. Quasi-reflexive spaces exist for all $n$. ;Reflexivity criteria for Banach spaces # $X$ is reflexive if and only if for each $f \in X^*$ it is possible to find an $x \in X$ on which the "sup" in the formula $$ \norm{f} = \sup_{x \in X} \frac{\abs{f(x)}}{\norm{x}}, \quad x \neq 0, $$ is attained. # In reflexive Banach spaces and only in such spaces each bounded set is relatively compact with respect to weak convergence: Any one of its infinite parts contains a weakly convergent sequence (the Eberlein–Shmul'yan theorem). The spaces $L_p$ and $\ell_p$, $p>1$, are reflexive. The spaces $L_1$, $\ell_1$, $C$, $M$, $c$, $m$, $AP$ are non-reflexive. == Special cases == === Weakly complete spaces === A Banach space is said to be weakly complete if each weak Cauchy sequence in it weakly converges to an element of the space. Every reflexive space is weakly complete. Moreover, the Banach spaces $L_1$ and $\ell_1$ are weakly complete. The Banach spaces not containing a subspace isomorphic to $c_0$ form an even wider class. These spaces resemble weakly-complete spaces in several respects. === Strictly convex spaces === A Banach space is said to be strictly convex if its unit sphere $S$ contains no segments. 
Convexity moduli are introduced for a quantitative estimation of the convexity of the unit sphere; these are the local convexity modulus $$ \delta(x,\epsilon) = \inf\set{ 1 - \norm{\frac{x+y}{2}} : y \in S,\, \norm{x-y} \geq \epsilon}, \quad x \in S, \quad 0 < \epsilon \leq 2, $$ and the uniform convexity modulus $$ \delta(\epsilon) = \inf_{x \in S} \delta(x,\epsilon). $$ If $\delta(x,\epsilon) > 0$ for all $x \in S$ and all $\epsilon > 0$, the Banach space is said to be locally uniformly convex. If $\delta(x) > 0$, the space is said to be uniformly convex. All uniformly convex Banach spaces are locally uniformly convex; all locally uniformly convex Banach spaces are strictly convex. In finite-dimensional Banach spaces the converses are also true. If a Banach space is uniformly convex, it is reflexive. === Smooth spaces === A Banach space is said to be smooth if for any linearly independent elements $x$ and $y$ the function $\psi(t)=\norm{x+ty}$ is differentiable for all values of $t$. A Banach space is said to be uniformly smooth if its modulus of smoothness $$ \rho(t) = \sup_{x,y \in S} \set{\frac{\norm{x + \tau y} + \norm{x - \tau y}}{2} -1}, \quad \tau > 0, $$ satisfies the condition $$ \lim_{\tau \rightarrow 0}\frac{\rho(\tau)}{\tau} = 0. $$ In uniformly smooth spaces, and only in such spaces, the norm is uniformly [[Frechet derivative|Fréchet differentiable]]. A uniformly smooth Banach space is smooth. The converse is true if the Banach space is finite-dimensional. A Banach space $X$ is uniformly convex (uniformly smooth) if and only if $X^*$ is uniformly smooth (uniformly convex). The following relationship relates the convexity modulus of a Banach space $X$ and the smoothness modulus of $X^*$: $$ \rho_{X^*}(\tau) = \sup_{0 < \epsilon \leq 2} \set{\frac{\epsilon\tau}{2} - \delta_X(\epsilon)}. $$ If a Banach space is uniformly convex (uniformly smooth), so are all its subspaces and quotient spaces. The Banach spaces $L_p$ and $\ell_p$, $p>1$, are uniformly convex and uniformly smooth, and $$ \delta(\epsilon) \simeq \begin{cases} \epsilon^2 & (1 < p \leq 2) \\ \epsilon^p & (2 \leq p < \infty); \end{cases} $$ $$ \rho(\tau) \simeq \begin{cases} \tau^p & (1 < p \leq 2) \\ \tau^2 & (2 \leq p < \infty); \end{cases} $$ $$ \left( f(\epsilon) \simeq \phi(\epsilon) \Leftrightarrow a < \frac{f(\epsilon)}{\phi(\epsilon)} < b \right). $$ The Banach spaces $M$, $C$, $A$, $L_1$, $AP$, $m$, $c$, $\ell_1$ are not strictly convex and are not smooth. == Linear operators == The following important theorems for linear operators are valid in Banach spaces: ;The [[Banach–Steinhaus theorem]]. If a family of linear operators $T=\set{T_\alpha}$ is bounded at each point, $$ \sup_\alpha \norm{T_\alpha x} < \infty, \quad x \in X, $$ then it is norm-bounded: $$ \sup_\alpha \norm{T_\alpha} < \infty. $$ ;The Banach [[open-mapping theorem]]. If a linear continuous operator maps a Banach space $X$ onto a Banach space $Y$ in a one-to-one correspondence, the inverse operator $T^{-1}$ is also continuous. ;The [[closed-graph theorem]]. If a closed linear operator maps a Banach space $X$ into a Banach space $Y$, then it is continuous. == Isometries and isomorphisms == [[Isometric mapping|Isometries]] between Banach spaces occur rarely. The classical example is given by the Banach spaces $L_1$ and $\ell_2$. The Banach spaces $C[K_1]$ and $C[K_2]$ are isometric if and only if $K_1$ and $K_2$ are homeomorphic (the [[Banach–Stone theorem]]). 
A measure of proximity of isomorphic Banach spaces is the number $$ d(X,Y) = \ln\inf\bigl\|T\bigr\|\bigl\|T^{-1}\bigr\|, $$ where $T$ runs through all possible operators which realize a (linear topological) isomorphism between $X$ and $Y$. If $X$ is isometric to $Y$, then $d(X,Y)=0$. However, non-isometric spaces for which $d(X,Y)=0$ also exist; they are said to be almost-isometric. The properties of Banach spaces preserved under an isomorphism are said to be linear topological. They include separability, reflexivity and weak completeness. The isomorphic classification of Banach spaces contains, in particular, the following theorems: $$ L_r \neq L_s; \quad \ell_r \neq \ell_s, \quad r \neq s $$ $$ L_r \neq \ell_s, \quad r \neq s; \quad L_r = \ell_s, \quad r = s = 2; $$ $$ M=m; \quad C[0,1] \neq A(D); $$ $C[K] = C[0,1]$ if $K$ is a metric compactum with the cardinality of the continuum; $$ C^n[I^m] \neq C[0,1]. $$ Each separable Banach space is isomorphic to a locally uniformly convex Banach space. It is not known (1985) if there are Banach spaces which are isomorphic to none of their hyperplanes. There exist Banach spaces which are not isomorphic to strictly convex spaces. Irrespective of the linear nature of normed spaces, it is possible to consider their topological classification. Two spaces are homeomorphic if a one-to-one continuous correspondence, such that its inverse is also continuous, can be established between their elements. An incomplete normed space is not homeomorphic to any Banach space. All infinite-dimensional separable Banach spaces are homeomorphic. In the class of separable Banach spaces, $C[0,1]$ and $A(D)$ are universal (cf. [[Universal space|Universal space]]). The class of reflexive separable Banach spaces contains even no isomorphic universal spaces. The Banach space $\ell_1$ is universal in a somewhat different sense: All separable Banach spaces are isometric to one of its quotient spaces. == Non-complementable subspaces == Each of the Banach spaces mentioned above, except $L_2$ and $\ell_2$, contains subspaces without a complement. In particular, in $m$ and $M$ every infinite-dimensional separable subspace is non-complementable, while in $C[0,1]$ all infinite-dimensional reflexive subspaces are non-complementable. If all subspaces in a Banach space are complementable, the space is isomorphic to a Hilbert space. It is not known (1985) whether or not all Banach spaces are direct sums of some two infinite-dimensional subspaces. A subspace $Y$ is complementable if and only if there exists a projection which maps $X$ onto $Y$. The lower bound of the norms of the projections on $Y$ is called the relative projection constant $\lambda(Y,X) $ of the subspace $Y$ in $X$. Each $n$-dimensional subspace of a Banach space is complementable and $\lambda(Y_n,X) \leq \sqrt{n}$. The absolute projection constant $\lambda(Y)$ of a Banach space $Y$ is $$ \lambda(Y) = \sup_X \lambda(Y,X), $$ where $X$ runs through all Banach spaces which contain $Y$ as a subspace. For any infinite-dimensional separable Banach space $Y$ one has $\lambda(Y) = \infty$. Banach spaces for which $\lambda(Y) \leq Y < \infty$ form the class $\mathcal{P}_\lambda$ ($\lambda \geq 1$). The class $\mathcal{P}_1$ coincides with the class of spaces $C(Q)$ where $Q$ are extremally-disconnected compacta (cf. [[Extremally-disconnected space]]). == Finite-dimensional case == Fundamental theorems on finite-dimensional Banach spaces: # A finite-dimensional space is complete, i.e. is a Banach space. 
# All linear operators in a finite-dimensional Banach space are continuous. # A finite-dimensional Banach space is reflexive (the dimension of $X^*$ is equal to the dimension of $X$). # A Banach space is finite-dimensional if and only if its unit ball is compact. # All $n$-dimensional Banach spaces are pairwise isomorphic; their set becomes compact if one introduces the distance $$ d(X,Y) = \ln\inf_T\bigl\|T\bigr\|\bigl\|T^{-1}\bigr\|. $$ == Convergence of series == A [[series]] \begin{equation} \sum_{k=1}^\infty x_k, \quad x_k \in X \label{eq:series} \end{equation} is said to be convergent if there exists a limit $S$ of the sequence of partial sums: $$ \lim_{n \rightarrow \infty} \norm{S - \sum_{k=1}^n x_k} = 0. $$ If $$ \sum_{k=1}^\infty \norm{x_k} < \infty, $$ the series $\eqref{eq:series}$ is convergent, and is said in such a case to be absolutely convergent. A series is said to be unconditionally convergent if it converges when its terms are arbitrarily rearranged. The sum of an absolutely convergent series is independent of the arrangement of its terms. In the case of series in a finite-dimensional space (and, in particular, for series of numbers) unconditional and absolute convergence are equivalent. In infinite-dimensional Banach spaces unconditional convergence follows from absolute convergence but the converse is not true in any infinite-dimensional Banach space. This is a consequence of the Dvoretskii–Rogers theorem: For all numbers $\alpha_k \geq 0$, subject to the condition $\sum\alpha_k^2 < \infty$, there exists in each infinite-dimensional Banach space an unconditionally convergent series $\sum x_k$ such that $\norm{x_k} = \alpha_k$, $k=1,2,\ldots$. In the space $c_0$ (and hence also in any Banach space containing a subspace isomorphic to $c_0$), for any sequence $\alpha_k \geq 0$ that converges to zero, there exists an unconditionally convergent series $\sum x_k$, $\norm{x_k} = \alpha_k$. In $L_p(S ; \Sigma, \mu)$ the unconditional convergence of the series $\sum x_k$ implies that $$ \sum_{k=1}^\infty \norm{x_k}^s < \infty, $$ where $$ s = \begin{cases} 2 & (1 \leq p \leq 2), \\ p & (p \geq 2). \end{cases} $$ In a uniformly convex Banach space with convexity modulus $\delta(\epsilon)$ the unconditional convergence of the series $\sum x_k$ implies that $$ \sum_{k=1}^\infty\delta(\norm{x_k}) < \infty. $$ A series $\sum x_k$ is said to be weakly unconditionally Cauchy if the series of numbers $\sum\abs{f(x_k)}$ converges for each $f \in X^*$. Each weakly unconditionally Cauchy series in $X$ converges if and only if $X$ contains no subspace isomorphic to $c_0$. A sequence of elements $\set{e_k}_1^\infty$ of a Banach space is said to be minimal if each one of its terms lies outside the closure of $X^{(n)} = [e_k]_{k \neq n}$, the linear hull of the remaining elements. A sequence is said to be uniformly minimal if $$ \rho(e_n ; X^{(n)}) \geq \gamma\norm{e_n}, \quad 0 < \gamma \leq 1, \quad n = 1, 2, \ldots. $$ If $\gamma=1$, the series is said to be an Auerbach system. In each $n$-dimensional Banach space there exists a complete Auerbach system $\set{e_k}_1^n$. It is not known (1985) whether or not a complete Auerbach system exists in each separable Banach space. For each minimal system there exists an adjoint system of linear functionals $\set{f_n}$, which is connected with $\set{e_k}$ by the biorthogonality relations: $f_i(e_j) = \delta_{ij}$. In such a case the system $\set{e_k,f_k}$ is said to be biorthogonal. 
A set of linear functionals is said to be total if it annihilates only the zero element of the space. In each separable Banach space there exists a complete, minimal system with a total adjoint. Each element $x \in X$ can formally be developed in a series by the biorthogonal system: $$ x \sim \sum_{k=1}^\infty f_k(x)e_k, $$ but in the general case this series is divergent. == Bases == A system of elements $\set{e_k}_1^\infty$ is said to be a basis in $X$ if each element $x \in X$ can be uniquely represented as a convergent series $$ x = \sum_{k=1}^\infty \alpha_k e_k, \quad \alpha_k = \alpha_k(x). $$ Each basis in a Banach space is a complete uniform minimal system with a total adjoint. The converse is not true, as can be seen from the example of the system $\set{e^{int}}_{-\infty}^\infty$ in $C[0,2\pi]$ and $L_1[0,2\pi]$. A basis is said to be unconditional if all its rearrangements are also bases; otherwise it is said to be conditional. The system $\set{e^{int}}_{-\infty}^\infty$ in $L_p[0,2\pi]$, $p>1$, $p \neq 2$, is a conditional basis. The Haar system is an unconditional basis in $L_p$, $p > 1$. There is no unconditional basis in the spaces $C$ and $L_1$. It is not known (1985) whether or not each Banach space contains an infinite-dimensional subspace with an unconditional basis. Any non-reflexive Banach space with an unconditional basis contains a subspace isomorphic to $\ell_1$ or $c_0$. Two normalized bases $\set{e_k^\prime}$ and $\set{e_k^{\prime\prime}} $ in two Banach spaces $X_1$ and $X_2$ are said to be equivalent if the correspondence $e_k^\prime \leftrightarrow e_k^{\prime\prime}$, $k=1,2,\ldots$, may be extended to an isomorphism between $X_1$ and $X_2$. In each of the spaces $\ell_2$, $\ell_1$, $c_0 $ all normalized unconditional bases are equivalent to the natural basis. Bases constructed in Banach spaces which have important applications are not always suitable for solving problems, e.g. in the theory of operators. $T$-bases, or summation bases, have been introduced in this context. Let $\set{t_{i,j}}_1^\infty$ be the matrix of a [[Regular summation methods|regular summation method]]. The system of elements $\set{e_n} \subset X$ is said to be a $T$-basis corresponding to the given summation method if each $x \in X$ can be uniquely represented by a series $$ x \sim \sum_{k=1}^\infty \alpha_k e_k, $$ which is summable to $x$ by this method. The trigonometric system $\set{e^{int}}_{-\infty}^\infty$ in $C[0,2\pi]$ is a summation basis for the methods of Cesàro and Abel. Each $T$-basis is a complete minimal (not necessarily uniformly minimal) system with a total adjoint. The converse is not true. Until the 1970s, one of the principal problems of the theory of Banach spaces was the basis problem dealt with by Banach himself: Does a basis exist in each separable Banach space? The question of existence of a basis in specifically defined Banach spaces remained open as well. The first example of a separable Banach space without a basis was constructed in 1972; bases in the spaces $C^n(I^m)$ and $A(D)$ have been constructed. ==References== {| |- |valign="top"|{{Ref|Ba}}||valign="top"| S. Banach, "Sur les opérations dans les ensembles abstraits et leur application aux équations intégrales" ''Fund. Math.'', '''3''' (1922) pp. 133–181 JFM {{ZBL|48.0201.01}} |- |valign="top"|{{Ref|Ba2}}||valign="top"| S.S. Banach, "A course of functional analysis", Kiev (1948) (In Ukrainian) |- |valign="top"|{{Ref|Be}}||valign="top"| B. 
Beauzamy, "Introduction to Banach spaces and their geometry", North-Holland (1985) {{MR|0889253}} {{ZBL|0585.46009}} |- |valign="top"|{{Ref|Bo}}||valign="top"| N. Bourbaki, "Elements of mathematics. Topological vector spaces", Addison-Wesley (1977) (Translated from French) {{MR|0583191}} {{ZBL|1106.46003}} {{ZBL|1115.46002}} {{ZBL|0622.46001}} {{ZBL|0482.46001}} |- |valign="top"|{{Ref|Da}}||valign="top"| M.M. Day, "Normed linear spaces", Springer (1958) {{MR|0094675}} {{ZBL|0082.10603}} |- |valign="top"|{{Ref|Di}}||valign="top"| J.J. Diestel, "Geometry of Banach spaces. Selected topics", Springer (1975) {{MR|0461094}} {{ZBL|0307.46009}} |- |valign="top"|{{Ref|DuSc}}||valign="top"| N. Dunford, J.T. Schwartz, "Linear operators. General theory", '''1''', Interscience (1958) {{MR|0117523}} |- |valign="top"|{{Ref|LiTz}}||valign="top"| J. Lindenstrauss, L. Tzafriri, "Classical Banach spaces", '''1–2''', Springer (1977–1979) {{MR|0500056}} {{ZBL|0362.46013}} |- |valign="top"|{{Ref|KR1}}||valign="top"| R.V. Kadison, J.R. Ringrose, "Fundamentals of the Theory of Operator Algebras", Volume I: Elementary Theory, AMS (1997) {{MR|1468229}} |- |valign="top"|{{Ref|Se}}||valign="top"| Z. Semanedi, "Banach spaces of continuous functions", Polish Sci. Publ. (1971) |- |valign="top"|{{Ref|Si}}||valign="top"| I.M. Singer, "Bases in Banach spaces", '''1–2''', Springer (1970–1981) {{MR|0298399}} {{MR|0268648}} {{ZBL|0198.16601}} {{ZBL|0189.42901}} |- |}
Banach space. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Banach_space&oldid=43786
This article was adapted from an original article by M.I. Kadets and B.M. Levitan (originators), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.
| CommonCrawl
BioMedical Engineering OnLine
Causes of altered ventricular mechanics in hypertrophic cardiomyopathy: an in-silico study
Ekaterina Kovacheva ORCID: orcid.org/0000-0002-1574-370X1,
Tobias Gerach1,
Steffen Schuler1,
Marco Ochs2,
Olaf Dössel1 &
Axel Loewe1
BioMedical Engineering OnLine volume 20, Article number: 69 (2021)
Hypertrophic cardiomyopathy (HCM) is typically caused by mutations in sarcomeric genes leading to cardiomyocyte disarray, replacement fibrosis, impaired contractility, and elevated filling pressures. These varying tissue properties are associated with certain strain patterns that may allow a diagnosis to be established by means of non-invasive imaging, without the necessity of harmful myocardial biopsies or contrast agent application. With a numerical study, we aim to answer two questions: how the variability in each of these mechanisms contributes to altered mechanics of the left ventricle (LV), and whether the deformation obtained in in-silico experiments is comparable to values reported from clinical measurements.
We conducted an in-silico sensitivity study on physiological and pathological mechanisms potentially underlying the clinical HCM phenotype. The deformation of the four-chamber heart models was simulated using a finite-element mechanical solver with a sliding boundary condition to mimic the tissue surrounding the heart. Furthermore, a closed-loop circulatory model delivered the pressure values acting on the endocardium. Deformation measures and mechanical behavior of the heart models were evaluated globally and regionally.
Hypertrophy of the LV affected the course of strain, strain rate, and wall thickening—the root-mean-squared difference of the wall thickening between control (mean thickness 10 mm) and hypertrophic geometries (17 mm) was >10%. A reduction of active force development by 40% led to less overall deformation: maximal radial strain reduced from 26 to 21%. A fivefold increase in tissue stiffness caused a more homogeneous distribution of the strain values among 17 heart segments. Fiber disarray led to minor changes in the circumferential and radial strain. A combination of pathological mechanisms led to reduced and slower deformation of the LV and halved the longitudinal shortening of the LA.
This study uses a computer model to determine the changes in LV deformation caused by pathological mechanisms that are presumed to underlay HCM. This knowledge can complement imaging-derived information to obtain a more accurate diagnosis of HCM.
Hypertrophic cardiomyopathy (HCM) is a relatively common inherited disorder, with a prevalence of 1:500, which develops in the absence of an identifiable cause [1, 2]. There are several phenotypes of HCM, depending on the localization and distribution of hypertrophy in the heart: asymmetric, symmetric/concentric, apical, or mid-ventricular obstruction [3]. HCM results in an increased ratio of wall to lumen volume, which can be diagnosed by echocardiographic or magnetic resonance assessment of left-ventricular anatomy [1]. Besides this morphological modification, further abnormalities are underlying the HCM phenotype.
In HCM hearts, fibrosis and myocardial cell disarray can be present and might have evolved for years before the onset of symptoms [1]. The disarray of the cells can be quantified by fractional anisotropy (FA)—a measure obtained by diffusion tensor magnetic resonance imaging (DT-MRI) or shear wave imaging (SWI). Ariga et al. [4] reported reduced FA in HCM patients compared to control subjects, measured by DT-MRI. Villemain et al. [5] reported similar findings in pediatric HCM patients using SWI.
Further structural abnormalities in HCM were detected by SWI on an organ level: passive ventricular stiffness was significantly higher in HCM compared to the control group [5]. In HCM, increased stiffness on the organ level could not be explained by an alteration in the viscoelastic properties of the cardiac myocytes, since the passive stiffness of prepared HCM cardiac myocytes was measured to be the same as that of healthy donor myocytes [6]. The stiffer tissue behavior might be due to further factors such as cell disarray or tissue fibrosis. Furthermore, the maximal active force was markedly lower in HCM myocytes than in donor myocytes [6]. In clinical routine, it is not possible to measure active force development of the myocytes in-vivo. Furthermore, the application of SWI for stiffness measurement is limited to the entire ventricle and might not be applicable for all patients [7]. A reconstruction of the myocardial cell orientation with DT-MRI is very time-consuming and delivers limited anatomical coverage of the ventricle [8].
These limitations of the available imaging modalities make it impossible or at least cumbersome to identify abnormalities underlying the HCM phenotype in clinical routine. Nevertheless, the consequences of these structural changes can be measured and quantified to provide a basis to diagnose HCM. This diagnosis is often based on echocardiographic assessment of the systolic and diastolic function of the left ventricle (LV) [1] and parameters derived from tissue imaging (strain and strain rate) [9]. Furthermore, MRI can quantify heart motion and function by cine imaging, which enables LV wall thickness calculation. Tissue phase mapping and feature tracking provide LV radial, circumferential, and longitudinal myocardial velocity time courses, as well as global and segmental systolic and diastolic peak velocities [10]. The longitudinal strain in the left atrium (LA) is measured as well to inform HCM diagnosis [11]. Such precise assessment of the cardiac function enables the quantification of altered mechanics in HCM patients compared to healthy volunteers.
Concurrently to these advancements in imaging modalities in the past decades, the field of computational cardiac modeling has progressed to provide an accurate and robust in-silico representation of the human heart beat [12,13,14,15,16]. In a numerical study, Usyk et al. [17] investigated the influence of different structural properties of disarrayed myocardium using a three-dimensional finite-element model of systolic contraction. They created an ellipsoid model of the normal and the hypertrophied ventricle and altered the passive material parameters to examine their sensitivity on the systolic strains. In our work, we used a four-chamber model to represent the motion of the whole heart. Ubbink et al. [18] employed a finite-element model of cardiac mechanics to investigate the influence of the myofiber orientation on the circumferential and circumferential-radial shear strain. Further numerical studies were performed to quantify the impact of structural changes of the myocardium on the model prediction. Campos et al. [19] performed a sensitivity analysis considering uncertainties in wall thickness, in the material properties, and fiber orientation based on a 17-American Heart Association (AHA) segments diagram. In [20], Campos et al. additionally incorporated uncertainties in active stress and the circulatory model to quantify their impact on the stress, strain, and global deformation parameters of the LV. The variations in the fiber orientation were achieved by changing the angles used as an input for the fiber generation algorithms. In contrast, we included fiber disarray in the mid-wall as measured in HCM hearts. The effect of uncertainties in material input parameters on cavity volume, the elongation and radius of the ventricle, wall thickness, and the rotation was studied by Osnes et al. [21] on an ellipsoid geometry. Such in silico studies can help to understand the relationships between the structural changes of the tissue and the ventricular mechanics.
In our study, we are particularly interested in modeling HCM hearts and establishing cause–effect relationships between previously described pathological mechanisms in HCM hearts and their effect on ventricular mechanics. The identification of distinct underlying abnormalities leading to the altered mechanical behavior of HCM hearts complementary to imaging could be valuable information for clinicians on the way to clearer and faster diagnoses. It can provide directions to differentiate HCM from other cardiac conditions, in which thickened walls are present. Moreover, it could help to separate healthy hearts from HCM genotype-positive but phenotype-negative hearts.
In this work, we conduct in-silico experiments to identify potential underlying causes of altered ventricular mechanics observed in HCM patients. The numerical heart simulator includes models of active force development, passive stiffness, the circulatory system, and appropriate boundary conditions to conduct a sensitivity study. We alter model parameters capturing different pathological mechanisms in a virtual heart: increased wall thickness (WT) of the LV to represent concentric hypertrophy, increased tissue stiffness by a factor of 5, decreased active force development by 40%, and disarray of the fiber orientation (FO) in the LV mid-wall with reduced FA. We explore different combinations of these mechanisms to analyze their effect on ventricular mechanics (Table 1). Furthermore, we compared a healthy control heart simulation and a simulation comprising all potential HCM mechanisms in terms of several evaluation metrics defined in detail in "Methods" section.
In the following, we describe the observed alteration of the mechanical behavior of the in-silico heart due to variations in the input parameters of the computational model. Table 1 provides an overview of the cases covered in the sensitivity analysis. Each case is defined by a combination of the four model variants described in detail in "Methods" section.
Table 1 Overview of the cases of the sensitivity analysis and the corresponding variations of model components
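For orientation, the case definitions referenced throughout the Results can be encoded as follows. This is a minimal sketch for post-processing purposes, not part of the study's simulation code; the field names and geometry labels are illustrative, and only the combinations that are explicitly mentioned in the text are listed (the remaining cases are defined in Table 1).

from dataclasses import dataclass

@dataclass(frozen=True)
class CaseConfig:
    geometry: str              # "control" (10 mm), "HCM1" (15 mm), or "HCM2" (17 mm)
    stiffness_scale: float     # 1.0 = control passive stiffness, 5.0 = fivefold increase
    active_force_scale: float  # 1.0 = control, 0.6 = 40% reduction of active force
    fiber_disarray: bool       # True = disarrayed mid-wall fiber orientation

# Cases explicitly referenced in the Results text; Cases 4, 9, and 12 follow Table 1.
CASES = {
    1:  CaseConfig("control", 1.0, 1.0, False),
    2:  CaseConfig("control", 5.0, 1.0, False),
    3:  CaseConfig("control", 1.0, 0.6, False),
    5:  CaseConfig("HCM1",    1.0, 1.0, False),
    6:  CaseConfig("HCM2",    1.0, 1.0, False),
    7:  CaseConfig("HCM2",    5.0, 1.0, False),
    8:  CaseConfig("HCM2",    1.0, 0.6, False),
    10: CaseConfig("HCM2",    1.0, 1.0, True),
    11: CaseConfig("HCM2",    5.0, 1.0, True),
    13: CaseConfig("HCM2",    5.0, 0.6, True),
}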
We evaluated wall thickening of the LV and the strain, strain rate, and velocity in the radial, longitudinal, and circumferential directions of the LV. We calculated these measures regionally (in each of the 17 AHA segments) and globally (one value for the entire LV). Furthermore, we provide the LA longitudinal strain. For each global measure, we calculated the root-mean-squared deviation (RMSD). The RMSD values for all cases and all metrics are provided in Additional file 1: Figures S1 and S2.
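As a minimal sketch of how such an RMSD between two global time courses could be computed (assuming uniformly sampled curves; the synthetic example curves are illustrative only, with the ED/ES timings of 0.17 s and 0.5 s taken from the figure annotations):

import numpy as np

def rmsd(curve_a, curve_b):
    """Root-mean-squared deviation between two equally sampled time courses,
    e.g. the global radial strain of two simulation cases (in %)."""
    a = np.asarray(curve_a, dtype=float)
    b = np.asarray(curve_b, dtype=float)
    return float(np.sqrt(np.mean((a - b) ** 2)))

# Illustrative usage with synthetic global radial strain curves:
t = np.linspace(0.0, 0.8, 161)                 # one heart beat, 5 ms sampling
systole = (t >= 0.17) & (t <= 0.5)             # ED at 0.17 s, ES at 0.5 s
ramp = np.clip((t - 0.17) / 0.33, 0.0, 1.0)
strain_control = 26.0 * np.sin(0.5 * np.pi * ramp)  # synthetic control curve
strain_hcm = 15.0 * np.sin(0.5 * np.pi * ramp)      # synthetic curve with less deformation
print(rmsd(strain_control[systole], strain_hcm[systole]))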
Altered mechanics due to the wall thickness of LV
We quantified isolated changes of the LV WT by comparing the deformation for different geometries—control geometry (Case 1, WT \(10\,\pm \,2.3\) mm), HCM 1 (Case 5, WT \(15\,\pm \,3.3\) mm), and HCM 2 (Case 6, WT \(17\,\pm \,4.1\) mm).
The regional WT of the hypertrophic geometries (Case 5 and Case 6) increased faster during systole, in all 17 segments, than that of the control geometry (Case 1). Between Case 5 and Case 6, no marked difference was observed (Fig. 1). Wall thickening at end-systole (ES) was in the same range in all three cases (Case 1: between 18.1 and 50.0%, Case 5: between 18.4 and 48.1%, and Case 6: between 18.1 and 51.3%). Nevertheless, the distribution among the segments changed—the thickening of the basal segments (1–6) increased as the initial WT increased. Figure 2 shows the ES distribution of the wall thickening.
The time courses of the regional wall thickening (in each of the 17 AHA segments) for Case 1, 5, and 6 (left, middle, and right, respectively). In each plot, the first vertical line (at 0.17 s) indicates ED and the second line (at 0.5 s) ES. Case 1: control case (initial WT = 10 mm); Case 5: hypertrophic geometry (15 mm); Case 6: hypertrophic geometry (17 mm)
Bull's-eye displays for Case 1, 2, 5, 6, 8, and 13 showing the wall thickening at ES. Case 1: control case; Case 2: increased stiffness; Case 5: hypertrophic geometry (15 mm); Case 6: hypertrophic geometry (17 mm); Case 8: hypertrophic geometry (17 mm), decreased active force; Case 13: virtual HCM case (all pathological changes included)
The circumferential and radial strain and the strain rate (both global and segmental) for the control geometry differed from those of the hypertrophic geometries. An increase of radial strains and a decrease of circumferential strains for Case 1 were observed during the entire systole, while those for the hypertrophic geometries occurred during the first half of the systolic period (Figs. 3 and 4).
The time courses of the regional circumferential strain (in each of the 17 AHA segments) for Case 1, 5, and 6 (left, middle, and right, respectively). In each plot, the first vertical line (at 0.17 s) indicates ED and the second line (at 0.5 s) ES. Case 1: control case (initial WT = 10 mm); Case 5: hypertrophic geometry (15 mm); Case 6: hypertrophic geometry (17 mm)
The time courses of the regional radial strain (in each of the 17 AHA segments) for Case 1, 5, and 6 (left, middle, and right, respectively). In each plot, the first vertical line (at 0.17 s) indicates ED and the second line (at 0.5 s) ES. Case 1: control case (initial WT = 10 mm); Case 5: hypertrophic geometry (15 mm); Case 6: hypertrophic geometry (17 mm)
This can also be derived from the strain rates—they are higher for the radial direction and lower for the circumferential direction in mid-systole for the hypertrophic geometries compared to those in the control case. At the onset of systole, the strain rates are similar in all three cases, with an amplitude of 200%/s for the radial strain rate and around 90%/s for the circumferential strain rate (Fig. 5, left). The regional strains at end-diastole (ED) in all three directions were comparable in all three cases.
The time courses of the global strain rates (longitudinal, circumferential, and radial) are on the left and the global velocities (longitudinal, circumferential, and radial) are on the right, for Case 1 (solid lines) and Case 6 (dotted lines). In each plot, the first vertical line (at 0.17 s) indicates ED and the second line (at 0.5 s) ES. Case 1: control case (initial WT = 10 mm); Case 6: hypertrophic geometry (17 mm)
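The strain rates discussed here are the time derivatives of the corresponding strain curves. A minimal numerical sketch, assuming a uniformly sampled strain curve in % and a sampling interval dt in seconds (the example curve is synthetic, not simulation output):

import numpy as np

def strain_rate(strain_percent, dt):
    """Numerical time derivative of a strain time course (%), returned in %/s."""
    return np.gradient(np.asarray(strain_percent, dtype=float), dt)

# Illustrative usage: peak systolic radial strain rate of a synthetic curve.
dt = 0.005                                      # 5 ms sampling
t = np.arange(0.0, 0.8, dt)
ramp = np.clip((t - 0.17) / 0.33, 0.0, 1.0)
radial_strain = 26.0 * np.sin(0.5 * np.pi * ramp) ** 2
print(strain_rate(radial_strain, dt).max())     # peak value in %/s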
The ES longitudinal strain indicated less shortening of the tissue as WT increased, since its minimum over the segments rose from −11.0% for Case 1 to around −6.5% for Case 5 and Case 6. Furthermore, the circumferential and radial strain at ES showed only minor differences when WT increased (Figs. 3 and 4). In Fig. 6, the distributions of the strains at ES are visualized for each local direction (longitudinal, circumferential, and radial).
Bull's-eye displays for Case 1, 2, 6, 8, 10, and 13 showing the longitudinal, circumferential, and radial strain at ES (first, second, and third columns, respectively). Each row corresponds to one case. Case 1: control case; Case 2: increased stiffness; Case 6: hypertrophic geometry (17 mm); Case 8: hypertrophic geometry (17 mm), decreased active force; Case 10: hypertrophic geometry, fiber disarray; Case 13: virtual HCM case (all pathological changes included)
The velocities in all local directions for the control case differed from those of the hypertrophic geometries during the entire systole. The velocities for the control case were between 0.04 m/s and 0.05 m/s during the beginning and the middle of the systole, while the velocities for the hypertrophic geometries were high at the beginning of the systole (around 0.06 m/s) and decreased quickly to 0.02 m/s by mid-systole. The ES velocities were comparable in all three cases. The time courses of the velocities in all three local directions for Case 1 and Case 6 are visualized in Fig. 5, on the right. The visualization of Case 5 was omitted, since it was comparable to Case 6.
The longitudinal strain of the LA increased slower during the systolic period for Case 1 compared to Case 5 and Case 6. The maximal longitudinal strain of the LA occurred at ES and it was 20% for all three cases.
We did not observe any major differences in the measures between both hypertrophic geometries and therefore, in the following, we omit the less hypertrophic geometry (HCM 1) for further comparisons.
Altered mechanics due to the stiffness of LV
We quantified isolated changes in the passive force by comparing the values of the deformation measures in the simulation with control stiffness (Case 1) to the one with increased stiffness (Case 2), both with the control geometry.
At ED, the regional wall thickening was comparable between Case 1 and Case 2. At ES, it differed, since the range of the thickening values was reduced: the maximum decreased more than the minimum did when the stiffness of the tissue was increased (Case 1: 18.7%–50.0% and Case 2: 16.9%–35.8%). In particular, the wall thickening of the segments in the free wall of the LV (5, 6, 11, and 12) was diminished compared to that in the septal segments (2, 3, 8, and 9). This led to a more homogeneous distribution of the wall thickening among all segments when the stiffness of the tissue was increased (Fig. 2, Case 1 vs. Case 2).
Similar to the wall thickening, the regional strain in all directions (longitudinal, circumferential, and radial) at ES showed a reduced range of values when the stiffness was increased. Therefore, a more homogeneous distribution of the strain among all segments at ES was present for Case 2 compared to Case 1 (Fig. 6). The global radial strain rate at the beginning of the systolic period was halved when the stiffness was increased (Fig. 7, left).
During the entire systolic period, the global velocities in all directions were reduced when the stiffness was increased. During the diastolic period, the maximal velocities in all local directions slightly increased as the stiffness increased (Fig. 7, right). Furthermore, the velocities decayed more quickly as the stiffness was increased.
The time courses of the global strain rates (longitudinal, circumferential, and radial) are on the left and the global velocities (longitudinal, circumferential, and radial) are on the right, for Case 1 (solid lines) and Case 2 (dotted lines). In each plot, the first vertical line (at 0.17 s) indicates ED and the second line (at 0.5 s) ES. Case 1: control case; Case 2: increased stiffness
The longitudinal strain of the LA strongly decreased for the case of increased stiffness: 20% for Case 1 to 10% for Case 2 (Fig. 8, left and middle, respectively).
The time courses of the LA longitudinal strain for Case 1, 2, and 13 (left, middle, and right, respectively). In each plot, the first vertical line (at 0.17 s) indicates ED and the second line (at 0.5 s) ES. Case 1: control case; Case 2: increased stiffness; Case 13: virtual HCM case (all pathological changes included)
A comparison of the values of the deformation measures in the simulation with control stiffness (Case 6) and the one with increased stiffness (Case 7), both with the hypertrophic geometry (HCM 2), confirmed these results.
Altered mechanics due to the active force development of LV
We quantified isolated changes in the maximal active force developed in the tissue by comparing the values of the deformation measures of the simulation with control active force (Case 6) to the one with decreased active force (Case 8), both with the hypertrophic geometry (HCM 2).
At ED, the regional wall thickening was comparable between Case 6 and Case 8. At ES, the regional wall thickening differed—the range of the thickening values remained similar, while the maximum decreased when the force was decreased. At the same time, the distribution of the thickening values among the AHA segments was retained when the active force was decreased (Fig. 2).
The regional circumferential and radial strain at ES had a similar range of values (\(\approx\)13.5%) for Case 6 and Case 8. The ranges at ES were shifted—the circumferential strain indicated less shortening of the tissue (values are higher, since they are negative) and the radial strain indicated less elongation of the tissue (values are lower, since they are positive) (Fig. 6).
The maximal global circumferential and radial strain rates during the entire heart cycle differed between Case 6 and Case 8. The circumferential strain rate indicated a slower decrease of the strain, while the radial strain rate indicated a slower increase of the strain when the active force was reduced (Fig. 9, on the left).
The maximal velocities in all directions (longitudinal, circumferential, and radial) were reduced during the entire heart cycle when the active force was reduced (Fig. 9, on the right).
The time courses of the global strain rates (longitudinal, circumferential, and radial) are on the left and the global velocities (longitudinal, circumferential, and radial) are on the right, for Case 6 (solid lines) and Case 8 (dotted lines). In each plot, the first vertical line (at 0.17 s) indicates ED and the second line (at 0.5 s) ES. Case 6: hypertrophic geometry (17 mm); Case 8: hypertrophic geometry (17 mm), decreased active force
The longitudinal strain of the LA was lower for the case of reduced active force—20% for Case 6 to 15% for Case 8.
A comparison of the values of the deformation measures in the simulation with control active force (Case 1) and the one with decreased active force (Case 3), both with the control geometry, confirmed these results.
Altered mechanics due to fiber disarray of LV
We quantified isolated changes in the FO of the LV by comparing the values of the deformation measures in the simulation with control FO (Case 6) to the one with disarrayed FO (Case 10).
The RMSD of the global wall thickening was less than 3.2% for the entire heart cycle (Case 6 vs. Case 10: 1.1% (ED) and 3.2% (ES)). Similarly, the regional wall thickening showed minor differences. At ES, the range of the values for Case 6 was 18.1%–51.3%, and for Case 10, it was 19.7%–49.4%, while differences occurred mainly in the basal segments (1–6).
The regional longitudinal strain at ES was not influenced by the disarrayed FO. The regional circumferential and radial strain at ES had a smaller range of values for the case with disarrayed FO. The difference was more pronounced for the circumferential strain—the range of the values for Case 6 was from −15.5% to −3.4%, and for Case 10, it was from −12.9% to −2.7%. Therefore, the circumferential strain indicated less shortening of the tissue (values are higher, since they are negative) and the radial strain indicated slightly less elongation of the tissue (values are lower, since they are positive) (Fig. 6).
The regional circumferential and radial strain rate indicated a slower change in the strains at the beginning of the systolic period. The values of the longitudinal strain rate were similar—the systolic RMSD was 2.2%.
The velocities in all three directions were comparable—the RMSD was 0.003 m/s during the systole and 0.001 m/s during the diastole.
The longitudinal strain of the LA slightly decreased when the FO was disarrayed—from 20% to around 18%.
A comparison of the values of the deformation measures in the simulation with control FO (Case 7) to the one with disarrayed FO (Case 11), both cases with increased stiffness, confirmed these results.
Altered mechanics due to combination of pathological model components
We compared the deformation of the control case (Case 1) to the virtual HCM heart (Case 13), which was the combination of hypertrophic geometry, stiffened passive behavior, decreased active force development, and disarrayed FO.
The RMSD of the global wall thickening was 7.6% for the systole and 7.7% for the diastole. The maximum of the regional wall thickening decreased—from 50.0% for the control case to 27.7% for virtual HCM heart. As previously described, a decrease in the maximum was observed when the active force was decreased (e.g., Case 6 vs. Case 8). Furthermore, the extent of the wall thickening values reduced, since the maximum decreased more than the minimum (at ES, the wall thickening ranged from 18.7% to 50.0% for Case 1 and from 10.4% to 27.7% for Case 13). As previously described, a decrease in the range of the wall thickening, which led to a more homogeneous distribution of the wall thickening, was observed when the stiffness of the tissue was increased (e.g., Case 1 vs. Case 2).
The time courses of the regional strains (in each of the 17 AHA segments) for Case 1 and Case 13 on the left and right, respectively. The longitudinal, circumferential, and radial strain are in shown in the first, second, and third rows, respectively. In each plot, the first vertical line (at 0.17 s) indicates ED and the second line (at 0.5 s) ES. Case 1: control case; Case 13: virtual HCM case (all pathological changes included)
Similar to the wall thickening, the regional strain values in all directions (longitudinal, circumferential, and radial) were closer to zero for the HCM case compared to the control case (Fig. 10). The range of the strain values was reduced as well—the ES longitudinal strain was between −11.0% and 7.8% for Case 1 and between −3.5% and 5.5% for Case 13. At ES, the circumferential strain ranged from −17.8% to −4.5% for Case 1 and from −8.0% to −3.5% for Case 13. The radial strain ranged from 10.6% to 25.7% for Case 1 and from 4.8% to 14.8% for Case 13 (Fig. 6).
The global strain rates were reduced for the HCM heart compared to the control case for the entire heart cycle (Fig. 11, left). For each direction, the RMSD of the strain rates was similar for the diastolic and systolic period—around 12% in the longitudinal direction, 23% in the circumferential direction, and 41% in the radial direction. During the systolic period, the major difference was observed during the first half of the systole. The global velocity in all three directions also differed during the systolic and diastolic period—the RMSD was between 9.0% and 10.5% during the diastole and between 11.3% and 14.4% during the systole (Fig. 11, right).
The time courses of the global strain rates (longitudinal, circumferential, and radial) are on the left and the global velocities (longitudinal, circumferential, and radial) are on the right, for Case 1 (solid lines) and Case 13 (dotted lines). In each plot, the first vertical line (at 0.17 s) indicates ED and the second line (at 0.5 s) ES. Case 1: control case; Case 13: virtual HCM case (all pathological changes included)
The longitudinal strain of the LA strongly decreased—from 20% for Case 1 to 8% for Case 13 (Fig. 8, left and right, respectively).
Overview of the results
Table 2 provides an overview of the results. Note that for the measures with negative values, an increase (i.e., a value closer to zero) indicates less or slower deformation—e.g., an increase in the diastolic radial strain rate indicates slower relaxation.
Table 2 Effect of model changes reflecting different pathological mechanisms (columns) on phenotypic mechanical markers (rows). WT = wall thickness, FA = fractional anisotropy, and Comb. = combination
During the systolic period, the time course of the wall thickening and strain was altered between the control geometry and the hypertrophic geometries in all AHA segments. At ES, the circumferential and radial strain had only minor differences between the control geometry and the hypertrophic geometries, while the longitudinal strain indicated less shortening as the WT increased. The myocardial velocities for both hypertrophic geometries were increased during the systolic period compared to the control case.
An increased stiffness of the tissue of the LV led to more homogeneous wall thickening among the AHA segments as well as equalized the strains at ES. The strain rates and velocities were reduced during the systole, especially visible at the beginning of the systole. In contrast, the maximal velocities during the diastole increased when the stiffness was increased and reduced quicker in the stiffer tissue. The longitudinal strain of the LA was halved when the stiffness of the tissue was increased.
A reduced maximal active force development in the tissue of the LV results in reduced wall thickening as well as reduced circumferential and radial strain at ES in all AHA segments. It also results in reduced strain rates and velocities in the entire heart cycle. In total, less deformation is available in the entire LV.
Disarrayed FO in the mid-wall of the LV led to less deformation in circumferential direction—the strain indicated less circumferential shortening of the tissue. Furthermore, it led to slightly less deformation in radial direction. The strain rates and the velocities were not considerably changed.
For the combined HCM heart, a decreased and more homogeneous wall thickening was observed at ES compared to the control case. Furthermore, the strain in all three directions indicated less deformation for the HCM case compared to the control case. The strain rate revealed slower shortening and elongation of the tissue during the entire heart beat, most visibly at the beginning of the systole. The longitudinal strain of the LA was more than halved for the HCM case compared to the control case. In total, the deformation of the LV and its rate were diminished for the virtual HCM heart compared to the control case.
Altered mechanics in HCM patients reported in the literature
In HCM patients, global strains and strain rates are reported to be significantly lower [1, 11, 22], while Ito et al. [9] measured preserved (or even increased) circumferential shortening. Furthermore, altered myocardial velocities are detected in HCM patients compared to controls—global and segmental diastolic velocities are decreased and systolic longitudinal velocities were reduced in HCM [10]. In the LA, a higher minimum volume and a lower peak atrial longitudinal strain were measured in HCM compared to controls [11].
Causes of altered mechanics in HCM patients reported in the literature
Previous studies have examined potential origins of altered cardiac mechanics in HCM patients in clinical studies. To the best of our knowledge, no computational study was conducted on this topic. In a cohort of 59 HCM patients, Urbano-Moral et al. [2] demonstrated the relation of a reduction in longitudinal shortening of the LV and the extent of hypertrophy. Furthermore, the reduction of the global strain and strain rate was correlated with the mean WT [23]. In a clinical study, Villemain et al. [5] suggested that altered LV relaxation might result from increased myocardial stiffness. Hoskins et al. [6] hypothesized that reduced active force might contribute to systolic dysfunction in HCM patients.
In an HCM patient's heart, distinct underlying phenomena are present simultaneously, but to different degrees. Thus, the effects of these phenomena cannot be clearly separated from measurements obtained in clinical studies. In contrast, in the presented numerical study, we could relate the observed alterations of mechanics to their underlying causes.
Identification of underlying pathophysiology
An alteration of the wall thickening and the strain time course during the systole can be related to hypertrophic LV walls. A longitudinal strain that indicates less shortening is also related to hypertrophic LV walls. Reduced ES wall thickening and strain values (circumferential and radial) in all segments can be related to less active force development in the LV. A homogeneous distribution of the ES wall thickening and strains among all AHA segments can be related to stiffer tissue. A reduction of the circumferential strain can be attributed to the fiber disarray in the mid-wall of the LV, but also to less active force development or increased stiffness of the tissue.
Strain rates that are reduced during the entire systole and strongly reduced during the beginning of the systole can be traced back to increased tissue stiffness. Strain rates that are reduced during the entire heart cycle are caused by reduced active tension developed in the tissue.
An increase in the myocardial velocities in all directions during the systolic period can be related to hypertrophic walls of the LV. In contrast, these velocities decreased during the systolic period when the stiffness of the tissue was increased. Additionally, an increase in the velocities during the diastolic period combined with a rapid decay of the velocities can also be related to an increased stiffness of LV tissue. A reduction of the active tension development did not affect the velocities during the diastolic period.
A reduced longitudinal strain of the LA at ES was present in case of increased stiffness of the tissue or reduced active tension development.
Comparison between simulated and clinically measured deformation
LV volume
A comparison between the LV volume of the control case and the volume curve extracted from short-axis Cine MRI data is shown in Fig. 12. The absolute volume curves show that the initial volume of the virtual control heart is lower than the one measured in the clinical data, the ES volume is higher in the virtual control heart, and the atrial kick contributes less to the diastolic filling of the LV in the simulation compared to reality. However, the morphology of the normalized LV volumes and the gradient of the volume curves is comparable between simulated and MRI data. Still, the maximal and the minimal values of the volume curves and their gradients differ between the simulated and MRI data. We do not have any clinically measured strain or pressure values of the control heart, and therefore, a comparison against literature values was conducted.
LV volume from the simulation of the control case (red curve) vs. LV volume extracted from Cine MRI data (dotted black line) Left: absolute volumes in ml. Middle: normalized volumes. Right: the gradient of the volume curves. In each plot, the first vertical line (at 0.17 s) indicates ED and the second line (at 0.5 s) ES
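A sketch of how the normalized volume curves and their gradients in Fig. 12 can be obtained from a sampled LV volume curve. The min-max normalization shown here is an assumption; the study may normalize differently (e.g., to the end-diastolic volume).

import numpy as np

def normalize_min_max(volume_ml):
    """Map a volume curve to [0, 1] so that curves with different absolute
    volumes (simulation vs. Cine MRI) can be compared by morphology."""
    v = np.asarray(volume_ml, dtype=float)
    return (v - v.min()) / (v.max() - v.min())

def volume_gradient(volume_ml, dt):
    """dV/dt in ml/s: positive during filling, negative during ejection."""
    return np.gradient(np.asarray(volume_ml, dtype=float), dt)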
Wall thickening
A segmentation of the endocardial and epicardial surfaces of the LV for the entire heart cycle is time-consuming and user dependent. Therefore, in clinical routine, the wall thickening is not commonly evaluated, but instead, the radial strain is used. On the virtual hearts, we applied two distinct methods to calculate the radial strain and the wall thickening. For both measures, we obtained comparable time course morphology and ES distribution for each case, but different maximal values [compare for Case 1 the wall thickening (max 50%) in Fig. 1, left and the radial strain (max 26%) in Fig. 10, left].
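The two measures are defined differently, which is why their maxima need not agree even when their time courses look alike. The following generic definitions are a sketch only; the study's exact computation is given in its Methods section and is not reproduced here, and the function names are illustrative.

import numpy as np

def wall_thickening_percent(thickness_t, thickness_ed):
    """Relative change of the endocardium-to-epicardium distance w.r.t. end-diastole (%)."""
    return 100.0 * (thickness_t - thickness_ed) / thickness_ed

def radial_strain_percent(F, r_dir):
    """Stretch-based radial strain (%) from a local deformation gradient F (3x3)
    and the radial unit direction r_dir in the reference configuration."""
    stretch = np.linalg.norm(np.asarray(F, dtype=float) @ np.asarray(r_dir, dtype=float))
    return 100.0 * (stretch - 1.0)

# Example: a purely radial stretch of 1.26 gives 26% radial strain,
# while a wall that thickens from 10 mm to 15 mm shows 50% wall thickening.
print(radial_strain_percent(np.diag([1.26, 1.0, 1.0]), np.array([1.0, 0.0, 0.0])))
print(wall_thickening_percent(15.0, 10.0))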
Kato et al. [24] showed that the radial thickening of the tissue arises due to the circumferential fiber shortening. This implies that fiber disarray in the mid-wall (fiber oriented in circumferential direction) will change the radial thickening and strain. In contrast, we observed a reduction of the deformation in circumferential direction, which was more pronounced than the reduction of the deformation in radial direction.
Strain and strain rate
The strain values obtained for the virtual control case (Case 1) indicated less shortening compared to healthy volunteers in all three directions [25]. Nevertheless, the regional values of the circumferential and radial strain of the virtual control heart are inside the ranges (mean ± SD) provided in Table 4 in [25] but deviate from the mean values (e.g., the mean values of the regional circumferential strain were between −26% and −17% and the mean values of the regional radial strain were between 12% and 39% in [25]). Additionally, we obtained heterogeneous distributions among the AHA segments, and heterogeneous values were also measured—e.g., radial strain was 39 ± 21% in the anterior basal segment and 12 ± 8% in the septal basal segment [25]. The heterogeneity in the circumferential strain was significant (p < 0.05) [26].
The regional longitudinal strain in the virtual control heart strongly deviated from the ranges provided for the healthy volunteers (from −24 ± 11% to −13 ± 7% [25]). For the virtual HCM heart, the longitudinal strain also deviated from literature values, which were around −11% for HCM hearts [11]. For each virtual heart, the values in the free wall were negative as well, but closer to zero (Fig. 6), while the positive values of the longitudinal strain were mainly in the septal segments. The torsion of the ventricle, which is pronounced in the apical region, leads to an elongation of the septal segments—the apex pulls the septal segments as it rotates. At the same time, the free wall shortened. The simultaneous occurrences of positive and negative values of the longitudinal strain in different segments cancel each other out when the global measures of strain and strain rate are calculated. Therefore, the value of the longitudinal strain rate was around 0 (Fig. 5).
For the virtual HCM heart, the strain values indicated less (and slower) shortening in the circumferential direction and less (and slower) thickening in the radial direction of the tissue compared to the virtual control case, in agreement with [1, 11, 22]. In contrast to Ito et al. [9], the circumferential strain of the virtual HCM heart indicated clearly reduced shortening.
Stiff heart syndrome (cardiac amyloidosis) can be identified by the longitudinal strain, which shows a strong gradient between the apex and base regions (apical sparing) [27, 28]. In the stiffened virtual hearts (e.g., Case 2 and Case 7), we observed the opposite effect: with increased stiffness, the strain became more homogeneous.
Li et al. [10] measured reduced global and segmental diastolic radial and longitudinal peak velocities in patients with HCM vs. controls. In contrast, we did not observe a reduction of any peak velocity during the diastolic period in our virtual HCM heart compared to the control heart. Nevertheless, we observed a decrease in these velocities when the active force was reduced (Case 3) and an increase when the stiffness was increased (Case 2). Therefore, we showed that the effects of these two pathologies cancel out in our virtual HCM heart to obtain the same peak diastolic velocity as in the control case (Fig. 11, right). If we either further reduced the active force in the LV or deviated less from the control stiffness (or both), we would obtain reduced diastolic velocities, in line with [10].
When the stiffness of the tissue was increased, the amplitude of the diastolic velocity increased and also the morphology of the diastolic velocity course changed compared to the control case. Therefore, we confirm the suggestion of Villemain et al. [5] that modified LV relaxation might result from increased myocardial stiffness.
Furthermore, Li et al. [10] measured reduced peak velocities during the systole in patients with HCM in the longitudinal direction (radial peak velocities were comparable). We observed a reduction of the velocities during the systolic period as well but for all directions (longitudinal, circumferential, and radial). We did not compare the absolute values of velocities, since different imaging modalities provide different values for the velocities—feature tracking peak velocities are lower than directly measured tissue phase mapping velocities [10]. Hoskins et al. [6] hypothesized that reduced active force might contribute to systolic dysfunction in HCM patients. In agreement with [6], we observed reduced systolic function when the force was decreased—the velocities during the systole were decreased and the strain at ES was diminished.
In a cohort of healthy volunteers, similar global longitudinal and radial velocities during the systole were measured (longitudinal was 2.6 ± 0.55 cm/s and radial was 2.5 ± 0.36 cm/s) [25]. Likewise, we obtained similar global longitudinal and radial velocities. Nevertheless, the circumferential velocity was also in the same range, while Lin et al. [29] measured negative circumferential velocities, since the direction of deformation along the circumference was considered. We calculated the absolute values of these velocities which equal the speed of deformation in the circumferential direction. However, the direction of the deformation can be deduced from the strain values.
Left atrial strain
The LV of the virtual HCM heart deformed less compared to the control case, which led to a reduced maximal longitudinal strain of the LA. Similar behavior was reported by Aly et al. [11]. Additionally, they discovered that LA dysfunction is present in HCM patients before global LV dysfunction can be measured. We could not reproduce this behavior in our model—in each case in which the LA longitudinal strain was reduced, the deformation of the LV was reduced as well. This reported finding demonstrates the importance of using a whole-heart model, in which the deformation of all chambers is related.
The morphology of the clinically measured and the simulated LV volume curve was comparable. ED and ES volumes differed, and the strain values indicated less shortening compared to measurements for healthy volunteers, but were inside the ranges reported in the literature. Furthermore, we obtained less longitudinal shortening of the LV compared to literature values. We showed that the effects of reduced active force and increased stiffness cancel each other out in our virtual HCM heart, thus yielding the same peak diastolic velocity as in the control case. We could confirm that modified LV relaxation results from increased tissue stiffness, as suggested in [5].
In the following, we describe limitations of our study and provide directions for potential improvements.
For the control geometry, the finite-element mesh of the ventricle was coarse—up to two elements in the transmural direction. The linear course of the FO is represented by two or three fibers in the transmural direction. Thus, we could not create a combination of control geometry and disarrayed FO. However, for the hypertrophic geometries, the spatial transmural discretization was sufficient to obtain a mid-wall disarrayed fiber orientation (Additional file 1: Figure S4).
Furthermore, the linearity of the solution to Laplace's equation depends on the width of the domain between the boundaries [30]. Therefore, the course of the transmural coordinates used to define the LV mid-wall region close to the apex is not linear. In the future, a linear course could be obtained by solving a trajectory distance equation in the LV [30].
The measure of FA was calculated on the fine geometry (also used to create the FO, Fig. 14) and delivered values close to one (0.95 ± 0.11) in the mid-wall ring of the control FO and 0.81 ± 0.25 for the disarrayed FO. Ariga et al. [4] measured an FA of 0.52 ± 0.03 for a control case and 0.49 ± 0.05 for HCM patients, which are considerably lower than the values in our virtual hearts. This is a consequence of the rule-based algorithm, which creates an idealized FO. Nevertheless, the difference between the mean FA of the control FO and that of the disarrayed FO in the virtual heart is 0.14, which is much larger than the difference measured by Ariga et al. [4] (0.03). Therefore, we consider the cases with disarrayed FO to be representative of severely disarrayed FO.
For all numerical simulations, an identical input parameter set for the circulatory model was applied. It led to a healthy systolic pressure (120 mmHg) for the virtual control heart, but to lower systolic pressures in the virtual HCM heart (70 mmHg). The pressure, applied on the endocardial surface, influences the deformation and, therefore, the evaluated measures. In both control and HCM cases, the ejection fraction (EF) was lower compared to literature values—44% vs. 72% [31]. This indicates that the contraction force is lower in both atria and ventricles in the simulation. However, an increase of the active force would increase the EF but also the systolic pressure in the control case. A diminished deformation and reduced contraction result in a reduced EF and therefore in reduced strains, as discussed previously.
The active force model provided the force based on a predefined curve (force over time) and a maximum value of the force. A more complex model would adjust the intervals of the increase and the decrease of the force based on the velocities (e.g., Land et al. [32]) and, therefore, the duration of the contraction and relaxation of the LV. Ito et al. [9] reported that regional LV filling for HCM hearts was prolonged compared to control hearts and that the impairment of the diastolic relaxation is a major sign of HCM. We observed the major changes in the measures during the systole rather than the diastole, with the exception of the strain rates—the RMSD values are higher during the systole for the strains, velocities, and wall thickening. Additionally, we did not evaluate the extent of the diastolic relaxation time.
In general, the values of the metrics in the apex segment might be misleading, since the local longitudinal directions in the apex strongly deviate from the global longitudinal direction (Fig. 17). This is a result of the algorithm which creates the local directions. Instead, we could use the global longitudinal direction as a local one, but then the three local directions (longitudinal, circumferential, and radial) would not form an orthogonal system in each volume element and could not provide linearly independent information. We calculated the strain based on the deformation tensor F. Werys et al. [33] used the Green–Lagrangian strain tensor \(E = 0.5(F^TF-I)\) to derive the strain directly from the motion of the myocardium in cine MRI images. Santiago et al. [15] projected the components of the Green–Lagrangian strain tensor on a global longitudinal direction to obtain the strain. There is no agreement in the literature on how to apply the deformation tensor to obtain the strain; therefore, we would obtain different strain values depending on the definition of strain.
We conducted an in-silico study on virtual human whole hearts to identify causes of altered mechanics in hypertrophic cardiomyopathy (HCM) hearts. We simulated the deformation resulting from combinations of physiological and pathological models and evaluated the mechanical behavior by local and global measures (wall thickening, strain, strain rate, and deformation velocities).
The presented study shows which pathological mechanisms need to be present in the LV to obtain altered mechanics and how they affect the deformation measures. An increased wall thickness leads to altered deformation during the systole, while the ES values remain comparable to the control case. Stiffer tissue equalizes the strains at ES, while reduced active force development reduces the deformation of the LV. Disarrayed FO in the mid-wall did not influence the deformation of the LV. Inverting these arguments allows identifying the pathological mechanisms present in the tissue that cause an altered mechanical behavior.
In clinical routine, it is cumbersome to directly measure the underlying pathological mechanisms; therefore, mechanisms derived from a numerical simulation might be valuable information for clinicians and can contribute to a more accurate diagnosis in HCM patients.
In the following, we describe the heart geometry and the numerical solver used to obtain the deformation of the entire heart for three heart beats. Then, we describe how we modeled fiber disarray as measured in HCM patients and introduce the physiological and pathological mechanisms included in the sensitivity analysis. Finally, we present the metrics which measure alteration of left ventricular mechanics.
The geometrical model of the heart
The control geometry was based on MRI data of the whole heart, acquired from a healthy volunteer at University Hospital Heidelberg with a 1.5 T MR tomograph (Philips Medical Systems). Voxel spacing was 0.7 × 0.7 × 1.8 mm. The volunteer gave informed consent and the study was approved by the institutional review board. Images were segmented to obtain the endocardial and epicardial surfaces of the four chambers which provide the boundaries for the volume mesh of the myocardium. Additionally, the convex hull of the four chambers was calculated to serve as an inner surface of the pericardium. The volume between the myocardium and the pericardium was defined as fat tissue. The veins and arteries connected to the myocardium were represented in the model as trunks. The entire volume mesh consisted of 48,780 nodes and 90,801 cells. In the LV, there were two elements transmurally, which suffices to obtain correct deformation according to the convergence analysis conducted by Gerach et al. [34]. We used quadratic tetrahedral elements to discretize the volume and linear triangular elements for the surfaces. The nodes on the free ends of all trunks and the outside surface of the pericardial sac were fixated in all three directions to serve as a boundary condition for the model (Fig. 13A ,B).
Geometrical model of the heart in mid-diastolic state. A Anterior view of the four chambers and the visible trunks (aorta, pulmonary artery, superior pulmonary veins, and superior vena cava); B long axis cut of the four chambers and the pericardium with fixated nodes shown in red; C fiber orientation in anterior and D posterior view
The numerical solver
To calculate cardiac deformation, we used the mechanical solver CardioMechanics [35], which was previously verified [36]. To describe the cardiac mechanics, the equation of balance of the linear momentum is solved by the Finite-Element Method. The governing equation ensures that all forces are in balance at all times during the heart beat. External forces arise outside the myocardium; internal forces arise inside the myocardium. To calculate the external forces, we included a closed-loop circulatory model and a pericardial model. The circulatory model provides a pressure–volume relation in the four chambers and delivers the pressure values, which are acting on the endocardial surfaces [34]. The closed-loop model ensures that the total blood volume in the circulatory system is preserved over several heart beats. The model is strongly coupled to the finite-element model as described by Gerach et al. [34]. The input parameters and the initial conditions are provided in Additional file 1: Tables S1 and S2). The pericardial model represents the pericardial sac, in which the heart is embedded, and the surrounding tissue [35]. A sliding boundary condition is imposed between the inner surface of the pericardial model and the outer surface of the heart model. The pericardial model limits the motion of the heart by reducing the myocardial radial contraction and increasing the atrioventricular plane displacement. It delivers the forces acting on the epicardial surface of the entire heart. The internal forces are calculated by the combination of passive and active force models. The passive force model delivers the force arising from the intrinsic material properties of the myocardial tissue and is described by a constitutive relation. In this study, we applied the model of Guccione et al. [37] describing a hyperelastic, transversely isotropic material by the following strain energy function:
$$\begin{aligned} W&= \frac{C}{2} \left( e^Q - 1\right) + \frac{K}{2} \left( \text {det}(F) - 1\right) ^2, \nonumber \\ Q&= b_f E^2_{11} + b_t \left( E^2_{22} + E^2_{33} + E^2_{23} + E^2_{32}\right) + b_{ft} \left( E^2_{12} + E^2_{21} + E^2_{13} + E^2_{31}\right) , \end{aligned}$$
where C, \(b_f\), \(b_t\), and \(b_{ft}\) are the parameters of the Guccione model, \(E_{ij}\) \((i,j \in \left[ 1,2,3\right] )\) are elements of the Green strain tensor, \(\text {det}(F)\) is the determinant of the deformation tensor, and K scales the incompressibility term. For the contractile tissue, \(K = 10^6\) Pa was chosen and the parameters for the pericardium were chosen as in [35]. The fat tissue had the same passive properties as the pericardium. For the trunks, we applied the hyperelastic model of Mooney–Rivlin [38] with \(C_1 = 14900\) Pa and \(C_2 = 0\) Pa.
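To make the constitutive relation above concrete, the following minimal Python sketch evaluates the strain energy density for a single deformation gradient. It assumes that F is expressed in the local fiber coordinate system (axis 1 = fiber, axes 2 and 3 = transverse), so that \(E_{11}\) is the fiber component of the Green strain tensor; the parameter values in the example are the control values reported in "Passive forces".

```python
import numpy as np

def guccione_energy(F, C, b_f, b_t, b_ft, K):
    """Strain energy density W of the transversely isotropic Guccione model.

    F is the deformation gradient in the local fiber coordinate system
    (axis 1 = fiber direction), so E[0, 0] corresponds to E_11 above.
    """
    E = 0.5 * (F.T @ F - np.eye(3))  # Green strain tensor
    Q = (b_f * E[0, 0] ** 2
         + b_t * (E[1, 1] ** 2 + E[2, 2] ** 2 + E[1, 2] ** 2 + E[2, 1] ** 2)
         + b_ft * (E[0, 1] ** 2 + E[1, 0] ** 2 + E[0, 2] ** 2 + E[2, 0] ** 2))
    return 0.5 * C * (np.exp(Q) - 1.0) + 0.5 * K * (np.linalg.det(F) - 1.0) ** 2

# Example: isochoric 5% stretch along the fiber with the control parameters
lam = 1.05
F = np.diag([lam, 1.0 / np.sqrt(lam), 1.0 / np.sqrt(lam)])
print(guccione_energy(F, C=309.0, b_f=17.8, b_t=7.1, b_ft=12.4, K=1e6))  # in Pa
```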
The active force model delivers the force acting along the fiber direction, which leads to fiber shortening and, therefore, to the contraction of the tissue. In this study, active force was described by a predefined curve as described by Stergiopulos et al. [39]. The normalized curve was scaled by a parameter, which determines the maximal active force. The ventricles were simultaneously activated 150 ms after the atria were activated (also simultaneously at 0 ms).
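Since the shape of the predefined curve from [39] is not reproduced here, the following sketch uses a hypothetical raised-cosine twitch of assumed duration merely to illustrate the scaling by the maximal active force and the 150 ms ventricular activation delay; it is not the curve used in the simulations.

```python
import numpy as np

def active_tension(t, T_max, t_act, duration=0.3):
    """Placeholder active tension course (raised-cosine twitch, assumed shape).

    T_max scales the normalized curve; t_act is the activation time
    (0 s for the atria, 0.15 s for the ventricles in this study).
    """
    tau = t - t_act
    shape = np.where((tau > 0.0) & (tau < duration),
                     0.5 * (1.0 - np.cos(2.0 * np.pi * tau / duration)),
                     0.0)
    return T_max * shape

t = np.linspace(0.0, 0.8, 801)
tension_ventricles = active_tension(t, T_max=100e3, t_act=0.15)  # control value, Pa
tension_atria = active_tension(t, T_max=35e3, t_act=0.0)
```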
Modeling of fiber disarray
The myocardial cells tend to align along their long axis to form bundles that are represented by fibers in the geometrical model. The FO determines the deformation of the tissue [40]. In our model, the FO in the atria was determined by the rule-based algorithm of Wachter et al. [41] (Fig. 13C, D). Fiber directions were assigned for each of the four quadrature points and for the centroid of each element.
Ariga et al. [4] visualized the myocardial microstructure of HCM hearts with DT-MRI. It allows quantifying the directional diffusion of water molecules by measuring the FA. A diffusion-weighted signal intensity is measured to construct the diffusion tensor (DT) [42]. The DT is a 3\(\times\)3 matrix obtained for each voxel and can be transformed to a diagonal matrix with its eigenvalues \(\lambda _1\), \(\lambda _2\), and \(\lambda _3\) as diagonal elements [42]. The eigenvector belonging to \(\lambda _1\) indicates the orientation of the long axis of the myocytes and \(\lambda _1\), the magnitude of the diffusion in this direction. The other two eigenvectors are orthogonal to the primary one and define a transverse orthogonal plane. FA is calculated from the eigenvalues of the DT as follows [42]:
$$\begin{aligned} FA = \sqrt{\frac{3}{2}}\sqrt{ \frac{(\lambda _1 - D_{av})^2 + (\lambda _2 - D_{av})^2 + (\lambda _3 - D_{av})^2}{\lambda _1 ^2 + \lambda _2 ^2 + \lambda _3 ^2} }, \end{aligned}$$
where \(D_{av}\) is the mean diffusivity; \(D_{av} = (\lambda _1 + \lambda _2 + \lambda _3) / 3\). An FA value close to 0 corresponds to isotropic diffusion and therefore indicates tissue with variable FO. An FA value close to 1 corresponds to anisotropic diffusion and therefore indicates coherently aligned tissue [4].
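As a quick numerical check of this definition, the snippet below computes the FA from the three eigenvalues and reproduces the two limiting cases described above.

```python
import numpy as np

def fractional_anisotropy(l1, l2, l3):
    """FA from the eigenvalues of the diffusion tensor (Eq. 2)."""
    lam = np.array([l1, l2, l3], dtype=float)
    d_av = lam.mean()  # mean diffusivity D_av
    return np.sqrt(1.5) * np.sqrt(((lam - d_av) ** 2).sum() / (lam ** 2).sum())

print(fractional_anisotropy(1.0, 0.0, 0.0))  # coherently aligned tissue -> 1.0
print(fractional_anisotropy(1/3, 1/3, 1/3))  # isotropic diffusion -> 0.0
```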
Ariga et al. [4] measured reduced FA in the mid-wall ring (circumferentially aligned fibers) in the hearts of HCM patients compared to controls. We constructed a virtual DT to measure the FA in our computational heart model.
In the geometrical model, the FO is known for each element. Therefore, we estimated the diffusivity of the fibers \(\lambda _1\) in finite-element regions to construct the virtual DT. We subdivided the LV in N regions (\(v_i\), \(i = 1, \dots , N\), N = 1500) of similar size (around \(6 \times 20 \times 20\) mm) and calculated in each region the mean FO \(f^{mean}_i\). For each element in the region (\(e^k_i\), \(k = 1, \dots , M\) with M the number of elements in the current region), we calculated the length of the projection of the fiber on the mean FO (\(l^k_i\)). Then, we set \(\lambda _1\) of the region \(v_i\) to the mean of these lengths across all elements in the region
$$\begin{aligned} \lambda _1 (v_i) = \frac{1}{M}\sum _{k=1} ^M l^k_i. \end{aligned}$$
The diffusivity in the other two directions was set to \(\lambda _{2,3} (v_i) = 0.5( 1 - \lambda _1 (v_i))\). Finally, the values obtained for \(\lambda _1\), \(\lambda _2\), and \(\lambda _3\) were used in Eq. 2 to obtain FA for the provided fiber configuration. Here again, for a coherent fiber arrangement in a specific region, we obtain \(\lambda _1 = 1\), \(\lambda _{2,3} = 0\) and therefore FA \(= 1\). In a region of strongly disarrayed FO, we obtain \(\lambda _{1,2,3} = 1/3\), and therefore, FA \(=0\).
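A compact sketch of this regional estimate is given below: \(\lambda _1\) is the mean projection length of the element fibers onto the region's mean orientation, and \(\lambda _{2,3} = 0.5(1 - \lambda _1)\). Fiber vectors are assumed to be of unit length; the subdivision of the LV into regions is not shown.

```python
import numpy as np

def regional_fa(fibers):
    """FA of one region v_i from its element fiber vectors (unit-length rows)."""
    f_mean = fibers.mean(axis=0)
    f_mean /= np.linalg.norm(f_mean)              # mean fiber orientation of the region
    l1 = np.abs(fibers @ f_mean).mean()           # mean projection length -> lambda_1
    lam = np.array([l1, 0.5 * (1.0 - l1), 0.5 * (1.0 - l1)])
    d_av = lam.mean()
    return np.sqrt(1.5) * np.sqrt(((lam - d_av) ** 2).sum() / (lam ** 2).sum())

rng = np.random.default_rng(0)
coherent = np.tile([1.0, 0.0, 0.0], (50, 1))      # coherent region -> FA = 1
random_f = rng.normal(size=(50, 3))
random_f /= np.linalg.norm(random_f, axis=1, keepdims=True)
print(regional_fa(coherent), regional_fa(random_f))  # disarray gives a much lower FA
```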
We adapted the fiber assignment algorithm to generate disarrayed FO in the mid-wall ring of the LV (Fig. 14). The mid-wall ring was defined to enclose all elements with transmural coordinates between 0.34 and 0.66. The transmural coordinate ranged from 1 on the endocardial surface to 0 on the epicardial surface, and was obtained by solving the Laplace's equation in the volume. In this ring, the gradient value in each element was multiplied by a random number from a uniform distribution on the interval [0, 1] to obtain the distorted FO in this element. The sheet and sheet-normal directions were calculated to yield an orthonormal system together with the distorted fiber. Outside the mid-wall ring, the FO was assigned as in the control case. The FO were generated on a fine mesh (around 1 million elements). A nearest-neighbor interpolation transferred the FO to the coarse geometry used for the simulations (Fig. 15, right).
Fractional anisotropy and fiber orientation in a slice of the LV wall. FA is color-coded in short-axis slices. Bottom: close-up of fiber orientation in a part of the LV-free wall. Left: control fiber orientation; right: fiber disarray
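The exact adaptation of the rule-based algorithm is not reproduced here; the sketch below is only a simplified stand-in that illustrates the two ingredients stated above: selecting the mid-wall ring via the transmural coordinate (0.34–0.66) and drawing one uniform random factor per element to distort the fiber. The way the scaled gradient enters the fiber construction is an assumption of this sketch.

```python
import numpy as np

def distort_midwall_fibers(fibers, transmural_grad, t_coord, seed=42):
    """Simplified stand-in for the mid-wall disarray step (assumed blending).

    fibers: (n, 3) control fiber orientations; transmural_grad: (n, 3)
    per-element gradient of the transmural Laplace solution; t_coord: (n,)
    transmural coordinate (1 = endocardium, 0 = epicardium).
    """
    rng = np.random.default_rng(seed)
    out = fibers.copy()
    ring = (t_coord >= 0.34) & (t_coord <= 0.66)          # mid-wall ring
    r = rng.uniform(0.0, 1.0, size=(fibers.shape[0], 1))  # one factor per element
    tilted = fibers + r * transmural_grad                 # assumed way of applying the factor
    tilted /= np.linalg.norm(tilted, axis=1, keepdims=True)
    out[ring] = tilted[ring]                              # outside the ring: control FO
    return out
```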
We created geometries with increased WT of the LV, varied internal forces (passive and active), and distorted FO in the LV. For all cases, the external forces were defined using the same parameterization of circulatory and pericardial models.
The WT (mean ± std) of the LV of the initial geometrical model (described in "The geometrical model of the heart"), was \(10\,\pm \,2.3\) mm and its cavity volume was 193 ml. We added tissue on the endocardial surface to increase the thickness of the LV to 1) \(15\,\pm \,3.3\) mm and 2) \(17\,\pm \,4.1\) mm with a concomitant decrease of the LV volume (118 ml and 94 ml, respectively). In both cases, the added tissue was distributed concentrically. Figure 15 shows the three geometries with distinct WT. WT of the right ventricle and both atria were not modified compared to the initial geometrical model.
Three LV geometries clipped through their long axis. Left: initial geometry (mean wall thickness 10 mm). Middle and right: hypertrophic geometries (15 mm and 17 mm, respectively)
Passive forces
We varied the input parameters of the passive force model (described in "The numerical solver") which determines the tissue stiffness. To identify the parameters of the passive force model for the control case, we used a method based on the pressure–volume relation of LV as described previously [43] and obtained the following parameters for the Guccione model: \(C =\) 309 Pa, \(b_f =\) 17.8, \(b_t =\) 7.1, and \(b_{ft} =\) 12.4. In HCM myocardium, Villemain et al. [5] measured a fivefold increase of the stiffness compared to controls. Thus, we increased the parameter C which determines the global stiffness to 1545 Pa for the entire myocardium of all four chambers to capture increased tissue stiffness.
Active forces
We varied the input parameters of the active force model (described in "The numerical solver"). For the control case, the scaling parameter of the active force, \(T^{\text {V}}_{\text {max}}\), was set to 100 kPa in both ventricles and \(T^{\text {A}}_{\text {max}}\) = 35 kPa in both atria. These values were chosen to obtain a control systolic LV pressure of 120 mmHg in the control geometry. Hoskins et al. [6] measured a 40% decrease of the active force in HCM donor cells compared to controls. Therefore, we reduced the maximal active force to \(T^{\text {V}}_{\text {max}} = 60\) kPa in both ventricles and \(T^{\text {A}}_{\text {max}} = 21\) kPa in both atria.
Fiber orientation
We defined two configurations of FO in the LV: one control case and one representing fiber disarray. The control FO was determined by a rule-based algorithm based on Bayer et al. [44] with angles changing transmurally from 60\(^\circ\) on the endocardium to \(-60^\circ\) on the epicardium [45]. The algorithm (Bayer et al. [44]) was adapted to eliminate a discontinuity of fibers in the free walls and to yield a fiber rotation that is approximately linear across the wall (code available at https://github.com/KIT-IBT/LDRB_Fibers).
Ariga et al. [4] measured fiber disarray in HCM hearts. We modified the rule-based algorithm to yield disarrayed FO in the mid-wall ring of the LV (described in "Modeling of fiber disarray"). On the epicardium and endocardium, the same angles were used as in the control case. We quantified the disarray by calculating the FA in the mid-wall ring, which was 0.95 ± 0.11 for control and 0.81 ± 0.25 for the disarrayed FO (Fig. 14). The minimum FA (0.3) for the control fiber was observed at the junction of LV and RV.
We introduced metrics to evaluate the deformation of the LV and one metric for the left atrium (LA) based on common imaging-derived features [46, 47].
The following measures were evaluated globally (one value for the entire ventricle per time point) and regionally (one value per one of the 17 AHA segments [48] per time point, Fig. 16): strain, strain rate, velocity, and wall thickening.
The division of the LV in 17 AHA segments. Each segment is numbered in both the anterior and posterior views. On the left of the gray separation line is the control geometry and on the right, the hypertrophic geometry (HCM 2). The apex segment (17) includes the endocardial apex
The strain, strain rate, and velocities were calculated in a local heart coordinate system (R-Lo-C), spanned by radial, longitudinal, and circumferential directions [46] (Fig. 17). For every finite element in each geometry, these axes were calculated at the initial time point of the simulation and preserved over the heart beat. Regional and global measures were derived as the mean over all elements in the respective regions.
Local coordinate system spanned by longitudinal (blue, left), circumferential (red, middle), and radial (yellow, right) directions
All measures were calculated during the systolic and during the diastolic period. The regional measures at end-systole and end-diastole are visualized in bull's-eye displays [48]. Time of end-systole and end-diastole was determined based on the pressure–volume relation. For each global measure, we calculated the RMSD (root-mean-squared deviation) between each pair-wise combination of cases (Table 1) during the systolic and diastolic period.
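For the pair-wise comparison of cases, a minimal helper along the following lines computes the RMSD of a global measure within one period; the window boundaries (ED and ES times from the pressure–volume relation) are assumed to be given.

```python
import numpy as np

def rmsd(curve_a, curve_b, t, t_start, t_end):
    """RMSD between two global measure time courses within [t_start, t_end]."""
    mask = (t >= t_start) & (t <= t_end)
    return np.sqrt(np.mean((curve_a[mask] - curve_b[mask]) ** 2))

# Example: systolic RMSD between two cases (ED at 0.17 s, ES at 0.5 s)
# rmsd(strain_case1, strain_case2, t, 0.17, 0.5)
```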
Strain \(\varepsilon\) (%) describes the change in length relative to the initial length (in one dimension). Positive strain values correspond to lengthening and negative to shortening [46].
In the numerical simulation, the deformation of the heart in each element of the mesh is characterized by a deformation gradient tensor F calculated for each time step of the heart beat. The FO \(f_{d}\) in the deformed element is \(F f_{init}\), where \(f_{init}\) is the initial FO. The stretch of the deformed fiber is \(\lambda (f_{d}) = \sqrt{f^T_{init} F^T F f_{init}}\) and the strain is \(\varepsilon (f_{d}) = \lambda (f_{d}) -1\). The strain in the deformed sheet \(\varepsilon (s_{d})\) and sheet-normal \(\varepsilon (sn_{d})\) directions is calculated likewise.
The strain in the R-Lo-C system is then obtained by a coordinate transformation with the matrix T. The matrix T transforms a vector with Cartesian coordinates (with respect to the standard basis of \(\mathbb {R}^3\)) into the local R-Lo-C coordinates. The rows of T are the radial, longitudinal, and circumferential vectors in the current element. By multiplication with the matrix T from the left, the normalized vectors pointing in the fiber (\(f_{d}\)), sheet (\(s_{d}\)), and sheet-normal (\(sn_{d}\)) directions of a deformed element are projected on each local direction vector, pointing in longitudinal, circumferential, and radial directions. Then, the projections are scaled by the strains obtained in the deformed element, \(\varepsilon (f_{d})\), \(\varepsilon (s_{d})\), and \(\varepsilon (sn_{d})\). The following equation provides the strain in the R-Lo-C system:
$$\begin{aligned} \varepsilon _{RLoC} = \left| T(f_{d}/\Vert f_{d}\Vert _2) \right| \varepsilon (f_{d}) + \left| T(s_{d}/\Vert s_{d}\Vert _2) \right| \varepsilon (s_{d}) + \left| T(sn_{d}/\Vert sn_{d}\Vert _2) \right| \varepsilon (sn_{d}). \end{aligned}$$
The strain in radial direction is the first entry of the transformed strain: \(\varepsilon _{RLoC}(1)\), the strain in longitudinal direction is \(\varepsilon _{RLoC}(2)\), and the strain in circumferential direction is \(\varepsilon _{RLoC}(3)\).
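The transformation above can be written compactly as follows for a single element; the deformation gradient F, the initial fiber, sheet, and sheet-normal directions, and the local radial, longitudinal, and circumferential unit vectors (the rows of T) are assumed to be given.

```python
import numpy as np

def strain_rloc(F, f_init, s_init, sn_init, e_r, e_l, e_c):
    """Strain of one element in the R-Lo-C system (Eq. above).

    Returns eps with eps[0] = radial, eps[1] = longitudinal,
    eps[2] = circumferential strain.
    """
    T = np.vstack([e_r, e_l, e_c])             # rows: radial, longitudinal, circumferential
    eps = np.zeros(3)
    for v0 in (f_init, s_init, sn_init):
        v0 = np.asarray(v0, dtype=float)
        stretch = np.sqrt(v0 @ F.T @ F @ v0)   # lambda of the deformed direction
        v_def = F @ v0                         # deformed direction
        eps += np.abs(T @ (v_def / np.linalg.norm(v_def))) * (stretch - 1.0)
    return eps
```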
Strain rate (%/s) is the speed at which the strain changes [46]. We calculated the strain rate \(\dot{\varepsilon }_t\) at time t with \(\Delta t =\) 0.01 s
$$\begin{aligned} \dot{\varepsilon }_t = (\varepsilon _t - \varepsilon _{t - \Delta t}) / \Delta t, \end{aligned}$$
where \(\varepsilon _t\) is the strain at time t.
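In code, this backward difference is a one-liner; the strain values at the current and the previous time step are assumed to be available per segment or globally.

```python
def strain_rate(eps_t, eps_prev, dt=0.01):
    """Backward-difference strain rate with dt = 0.01 s (units follow the strain input)."""
    return (eps_t - eps_prev) / dt
```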
Velocity (m/s) is the temporal change of the displacement. By solving the governing equation of the heart mechanics, we obtain the displacement and, thus, the velocity for every node and each time step [49].
To obtain the velocity of each finite element, the mean of the velocity over its four nodes was calculated for each of the three directions in the Cartesian coordinate system: \(v = (v_x, v_y, v_z)\). To convert the Cartesian velocity into the local R-Lo-C system, the coordinate transformation with the matrix T was conducted analogous to the transformation of the strain: \(v_{RLoC} = Tv^T\). Then, for the observed finite element at the current time point, the absolute value of the velocity in radial direction is \(\left| v_{RLoC}(1)\right|\), in longitudinal direction \(\left| v_{RLoC}(2)\right|\), and in circumferential direction \(\left| v_{RLoC}(3)\right|\). In the following, the absolute value of the velocity will be referred to as velocity.
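Analogously to the strain transformation, the element velocities can be converted to the R-Lo-C system as sketched below; the four nodal velocities of the element and the local unit vectors are assumed to be given.

```python
import numpy as np

def element_velocity_rloc(node_velocities, e_r, e_l, e_c):
    """Absolute element velocity in radial, longitudinal, and circumferential direction.

    node_velocities: (4, 3) Cartesian velocities of the element's four nodes.
    """
    T = np.vstack([e_r, e_l, e_c])
    v = node_velocities.mean(axis=0)   # element velocity = mean over the four nodes
    v_rloc = T @ v
    return np.abs(v_rloc)              # [|v_radial|, |v_longitudinal|, |v_circumferential|]
```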
The wall thickness (WT, \(\omega\) in mm) of the LV was calculated as proposed by Yezzi et al. [50]. WT was obtained for every time step of the heart beat and each surface node. The wall thickening (in percent) for time t is then: \((\omega ^t - \omega ^{0})/\omega ^{0}\), where the upper index corresponds to the time with 0 denoting the initial WT.
Mechanics of left atrium
We defined a main longitudinal axis of the LA by calculating the mean of the local longitudinal directions over all elements in the LV. For every time point t, all nodes of the LA were orthogonally projected on the main axis. With the maximal Euclidean distance between any two projected points \(l^t_{LA}\), the longitudinal strain for time t is then: \((l^t_{LA} - l^0_{LA}) / l^0_{LA}\), where the upper index 0 corresponds to the initial geometry configuration at time 0.
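A sketch of this computation is given below; lv_long_dirs holds the local longitudinal directions of all LV elements, and la_nodes_0 and la_nodes_t are the LA node coordinates at time 0 and at time t. Since all nodes are projected onto one axis, the maximal distance between projected points equals the range of the scalar projections.

```python
import numpy as np

def la_longitudinal_strain(la_nodes_t, la_nodes_0, lv_long_dirs):
    """Longitudinal strain of the LA at time t relative to the initial configuration."""
    axis = lv_long_dirs.mean(axis=0)
    axis /= np.linalg.norm(axis)            # main longitudinal axis of the LA

    def extent(nodes):
        proj = nodes @ axis                 # orthogonal projection onto the axis
        return proj.max() - proj.min()      # maximal distance of projected points

    l_t, l_0 = extent(la_nodes_t), extent(la_nodes_0)
    return (l_t - l_0) / l_0
```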
Hensley N, Dietrich J, Nyhan D, Mitter N, Yee M-S, Brady M. Hypertrophic cardiomyopathy: a review. Anesthesia Analgesia. 2015;120(3):554–69. https://doi.org/10.1213/ANE.0000000000000538.
Urbano-Moral JA, Rowin EJ, Maron MS, Crean A, Pandian NG. Investigation of global and regional myocardial mechanics with 3-dimensional speckle tracking echocardiography and relations to hypertrophy and fibrosis in hypertrophic cardiomyopathy. Circul Cardiovasc Imag. 2014;7(1):11–9. https://doi.org/10.1161/CIRCIMAGING.113.000842.
Oliveira DCL, Assunção FB, Santos AAS, Nacif MS. Cardiac magnetic resonance and computed tomography in hypertrophic cardiomyopathy: an update. Arquivos Brasileiros de Cardiologia. 2016;107(2):163–72. https://doi.org/10.5935/abc.20160081.
Ariga R, Tunnicliffe EM, Manohar SG, Mahmod M, Raman B, Piechnik SK, Francis JM, Robson MD, Neubauer S, Watkins H. Identification of myocardial disarray in patients with hypertrophic cardiomyopathy and ventricular arrhythmias. J Am Coll Cardiol. 2019;73(20):2493–502. https://doi.org/10.1016/j.jacc.2019.02.065.
Villemain O, Correia M, Khraiche D, Podetti I, Meot M, Legendre A, Tanter M, Bonnet D, Pernot M. Myocardial stiffness assessment using shear wave imaging in pediatric hypertrophic cardiomyopathy. JACC Cardiovasc Imag. 2018;11(5):779–81. https://doi.org/10.1016/j.jcmg.2017.08.018.
Hoskins AC, Jacques A, Bardswell SC, McKenna WJ, Tsang V, dos Remedios CG, Ehler E, Adams K, Jalilzadeh S, Avkiran M, Watkins H, Redwood C, Marston SB, Kentish JC. Normal passive viscoelasticity but abnormal myofibrillar force generation in human hypertrophic cardiomyopathy. J Mol Cell Cardiol. 2010;49(5):737–45. https://doi.org/10.1016/j.yjmcc.2010.06.006.
Song P, Bi X, Mellema DC, Manduca A, Urban MW, Greenleaf JF, Chen S. Quantitative assessment of left ventricular diastolic stiffness using cardiac shear wave elastography. J Ultrasound Med. 2016;35(7):1419–27. https://doi.org/10.7863/ultra.15.08053.
Mekkaoui C, Reese TG, Jackowski MP, Bhat H, Sosnovik DE. Diffusion MRI in the heart. NMR Biomed. 2017;30:3. https://doi.org/10.1002/nbm.3426.
Ito T, Suwa M. Echocardiographic tissue imaging evaluation of myocardial characteristics and function in cardiomyopathies. Heart Failure Rev. 2020. https://doi.org/10.1007/s10741-020-09918-y.
Li A, Ruh A, Berhane H, Robinson JD, Markl M, Rigsby CK. Altered regional myocardial velocities by tissue phase mapping and feature tracking in pediatric patients with hypertrophic cardiomyopathy. Pediatric Radiol. 2020;50(2):168–79. https://doi.org/10.1007/s00247-019-04549-4.
Aly MFA, Brouwer WP, Kleijn SA, van Rossum AC, Kamp O. Three-dimensional speckle tracking echocardiography for the preclinical diagnosis of hypertrophic cardiomyopathy. Int J Cardiovasc Imag. 2014;30(3):523–33. https://doi.org/10.1007/s10554-014-0364-5.
Niederer SA, Lumens J, Trayanova NA. Computational models in cardiology. Nat Rev Cardiol. 2018. https://doi.org/10.1038/s41569-018-0104-y.
...Corral-Acero J, Margara F, Marciniak M, Rodero C, Loncaric F, Feng Y, Gilbert A, Fernandes JF, Bukhari HA, Wajdan A, Martinez MV, Santos MS, Shamohammdi M, Luo H, Westphal P, Leeson P, DiAchille P, Gurev V, Mayr M, Geris L, Pathmanathan P, Morrison T, Cornelussen R, Prinzen F, Delhaas T, Doltra A, Sitges M, Vigmond EJ, Zacur E, Grau V, Rodriguez B, Remme EW, Niederer S, Mortier P, McLeod K, Potse M, Pueyo E, Bueno-Orovio A, Lamata P. The digital twin to enable the vision of precision cardiology. Eur Heart J. 2020. https://doi.org/10.1093/eurheartj/ehaa159.
Quarteroni A, Vergara C, Landajuela M. Mathematical and numerical description of the heart function. In: Emmer M, Abate M, editors. Imagine Math 6. Cham: Springer; 2018. https://doi.org/10.1007/978-3-319-93949-0_15.
Santiago A, Zavala-Aké M, Aguado-Sierra J, Doste R, Gómez S, Arís R, Cajas JC, Casoni E, Vázquez M. Fully coupled fluid-electro-mechanical model of the human heart for supercomputers. Int J Num Methods Biomed Eng. 2018. https://doi.org/10.1002/cnm.3140.
Nordsletten DA, Niederer SA, Nash MP, Hunter PJ, Smith NP. Coupling multi-physics models to cardiac mechanics. Progr Biophys Mol Biol. 2011;104(1–3):77–88. https://doi.org/10.1016/j.pbiomolbio.2009.11.001.
Usyk TP, Omens JH, McCulloch AD. Regional septal dysfunction in a three-dimensional computational model of focal myofiber disarray. Am J Physiol. 2001;281(2):506–14. https://doi.org/10.1152/ajpheart.2001.281.2.H506.
Ubbink SWJ, Bovendeerd PHM, Delhaas T, Arts T, van de Vosse FN. Towards model-based analysis of cardiac MR tagging data: relation between left ventricular shear strain and myofiber orientation. Med Image Analys. 2006;10(4):632–41. https://doi.org/10.1016/j.media.2006.04.001.
Campos JO, Sundnes J, Dos Santos RW, Rocha BM. Effects of left ventricle wall thickness uncertainties on cardiac mechanics. Biomech Model Mechanobiol. 2019;18(5):1415–27. https://doi.org/10.1007/s10237-019-01153-1.
Campos JO, Sundnes J, Dos Santos RW, Rocha BM. Uncertainty quantification and sensitivity analysis of left ventricular function during the full cardiac cycle. Philosoph Trans. 2020;378(2173):20190381. https://doi.org/10.1098/rsta.2019.0381.
Osnes H, Sundnes J. Uncertainty analysis of ventricular mechanics using the probabilistic collocation method. IEEE Trans Bio-med Eng. 2012;59(8):2171–9. https://doi.org/10.1109/TBME.2012.2198473.
Pozios I, Pinheiro A, Corona-Villalobos C, Sorensen LL, Dardari Z, Liu H-Y, Cresswell K, Phillip S, Bluemke DA, Zimmerman SL, Abraham MR, Abraham TP. Rest and stress longitudinal systolic left ventricular mechanics in hypertrophic cardiomyopathy: Implications for prognostication. J Am Soc Echocardiogr. 2018;31(5):578–86. https://doi.org/10.1016/j.echo.2017.11.002.
Satriano A, Heydari B, Guron N, Fenwick K, Cheung M, Mikami Y, Merchant N, Lydell CP, Howarth AG, Fine NM, White JA. 3-dimensional regional and global strain abnormalities in hypertrophic cardiomyopathy. Int J Cardiovasc Imag. 2019;35(10):1913–24. https://doi.org/10.1007/s10554-019-01631-8.
Kato T, Ohte N, Wakami K, Goto T, Fukuta H, Narita H, Kimura G. Myocardial fiber shortening in the circumferential direction produces left ventricular wall thickening during contraction. Tohoku J Exp Med. 2010;222(3):175–81. https://doi.org/10.1620/tjem.222.175.
Augustine D, Lewandowski AJ, Lazdam M, Rai A, Francis J, Myerson S, Noble A, Becher H, Neubauer S, Petersen SE, Leeson P. Global and regional left ventricular myocardial deformation measures by magnetic resonance feature tracking in healthy volunteers: comparison with tagging and relevance of gender. J Cardiovasc Magn Reson. 2013;15:8. https://doi.org/10.1186/1532-429X-15-8.
Hurlburt HM, Aurigemma GP, Hill JC, Narayanan A, Gaasch WH, Vinch CS, Meyer TE, Tighe DA. Direct ultrasound measurement of longitudinal, circumferential, and radial strain using 2-dimensional strain imaging in normal adults. Echocardiography. 2007;24(7):723–31. https://doi.org/10.1111/j.1540-8175.2007.00460.x.
Bhupathi SS, Chalasani S, Rokey R. Stiff heart syndrome. Clin Med Res. 2011;9(2):92–9. https://doi.org/10.3121/cmr.2010.899.
Phelan D, Collier P, Thavendiranathan P, Popović ZB, Hanna M, Plana JC, Marwick TH, Thomas JD. Relative apical sparing of longitudinal strain using two-dimensional speckle-tracking echocardiography is both sensitive and specific for the diagnosis of cardiac amyloidosis. Heart (British Cardiac Society). 2012;98(19):1442–8. https://doi.org/10.1136/heartjnl-2012-302353.
Lin K, Collins JD, Chowdhary V, Markl M, Carr JC. Heart deformation analysis: measuring regional myocardial velocity with MR imaging. Int J Cardiovasc Imag. 2016;32(7):1103–11. https://doi.org/10.1007/s10554-016-0879-z.
Schuler S, Pilia N, Potyagaylo D, Loewe A. Cobiveco: Consistent biventricular coordinates for precise and intuitive description of position in the heart—with MATLAB implementation, 2021. arXiv:2102.02898
Maurer MS, Burkhoff D, Fried LP, Gottdiener J, King DL, Kitzman DW. Ventricular structure and function in hypertensive participants with heart failure and a normal ejection fraction: the cardiovascular health study. J Am Coll Cardiol. 2007;49(9):972–81. https://doi.org/10.1016/j.jacc.2006.10.061.
Land S, Park-Holohan S-J, Smith NP, Dos Remedios CG, Kentish JC, Niederer SA. A model of cardiac contraction based on novel measurements of tension development in human cardiomyocytes. J Mol Cell Cardiol. 2017;106:68–83. https://doi.org/10.1016/j.yjmcc.2017.03.008.
Werys K, Blaszczyk L, Kubik A, Marczak M, Bogorodzki P. Displacement field calculation from CINE MRI using non-rigid image registration. IEEE 2015;672–675. https://doi.org/10.1109/IDAACS.2015.7341388.
Gerach T, Schuler S, Fröhlich J, Lindner L, Kovacheva E, Moss R, Wülfers EM, Seemann G, Wieners C, Loewe A. Electro-mechanical whole-heart digital twins: a fully coupled multi-physics approach. Mathematics. 2021;9:11. https://doi.org/10.3390/math9111247.
Fritz T, Wieners C, Seemann G, Steen H, Dössel O. Simulation of the contraction of the ventricles in a human heart model including atria and pericardium : Finite element analysis of a frictionless contact problem. Biomecha Model Mechanobiol. 2014;13(3):627–41. https://doi.org/10.1007/s10237-013-0523-y.
Land S, Gurev V, Arens S, Augustin CM, Baron L, Blake R, Bradley C, Castro S, Crozier A, Favino M, Fastl TE, Fritz T, Gao H, Gizzi A, Griffith BE, Hurtado DE, Krause R, Luo X, Nash MP, Pezzuto S, Plank G, Rossi S, Ruprecht D, Seemann G, Smith NP, Sundnes J, Rice JJ, Trayanova N, Wang D, Jenny Wang Z, Niederer SA. Verification of cardiac mechanics software: benchmark problems and solutions for testing active and passive material behaviour. Proc Math Phys Eng Sci Royal Soc. 2015;471(2184):2015–0641. https://doi.org/10.1098/rspa.2015.0641.
Guccione JM, McCulloch AD, Waldman LK. Passive material properties of intact ventricular myocardium determined from a cylindrical model. J Biomech Eng. 1991;113(1):42–55.
Kim B, Lee SB, Lee J, Cho S, Park H, Yeom S, Park SH. A comparison among neo-hookean model, mooney-rivlin model, and ogden model for chloroprene rubber. Int J Prec Eng Manuf. 2012;13(5):759–64. https://doi.org/10.1007/s12541-012-0099-y.
Stergiopulos N, Meister JJ, Westerhof N. Determinants of stroke volume and systolic and diastolic aortic pressure. Am J Physiol. 1996;270(6 Pt 2):2050–9. https://doi.org/10.1152/ajpheart.1996.270.6.H2050.
Eriksson T, Prassl A, Plank G, Holzapfel G. Influence of myocardial fiber/sheet orientations on left ventricular mechanical contraction. Math Mech Solids. 2013;18(6):592–606. https://doi.org/10.1177/1081286513485779.
Wachter A, Loewe A, Krueger MW, Dössel O, Seemann G. Mesh structure-independent modeling of patient-specific atrial fiber orientation. Curr Dir Biomed Eng. 2015;1:409–12. https://doi.org/10.1515/cdbme-2015-0099.
Mukherjee P, Berman JI, Chung SW, Hess CP, Henry RG. Diffusion tensor MR imaging and fiber tractography: theoretic underpinnings. Am J Neuroradiol. 2008;29(4):632–41. https://doi.org/10.3174/ajnr.A1051.
Kovacheva E, Baron L, Schuler S, Gerach T, Dössel O, Loewe A. Optimization framework to identify constitutive law parameters of the human heart. Curr Dir Biomed Eng. 2020;6:95–8. https://doi.org/10.1515/cdbme-2020-3025.
Bayer JD, Blake RC, Plank G, Trayanova NA. A novel rule-based algorithm for assigning myocardial fiber orientation to computational heart models. Ann Biomed Eng. 2012;40(10):2243–54. https://doi.org/10.1007/s10439-012-0593-5.
Streeter DD, Spotnitz HM, Patel DP, Sonnenblick EH. Fiber orientation in the canine left ventricle during diastole and systole. Circ Res. 1969;24(3):339–47.
Dhooge J, Heimdal A, Jamal F, Kukulski T, Bijnens B, Rademakers F, Hatle L, Suetens P, Sutherland GR. Regional strain and strain rate measurements by cardiac ultrasound: principles, implementation and limitations. Eur J Echocardiogr. 2000;1(3):154–70. https://doi.org/10.1053/euje.2000.0031.
Scatteia A, Baritussio A, Bucciarelli-Ducci C. Strain imaging using cardiac magnetic resonance. Heart Failure Rev. 2017;22(4):465–76. https://doi.org/10.1007/s10741-017-9621-8.
Cerqueira MD, Weissman NJ, Dilsizian V, Jacobs AK, Kaul S, Laskey WK, Pennell DJ, Rumberger JA, Ryan T, Verani MS. Standardized myocardial segmentation and nomenclature for tomographic imaging of the heart. Circulation. 2002;105(4):539–42. https://doi.org/10.1161/hc0402.102975.
Belytschko T, Kam Liu W, Moran B. Nonlinear finite elements for continua and structures. New York: Wiley; 2000.
Yezzi AJ, Prince JL. An eulerian PDE approach for computing tissue thickness. IEEE Trans Med Imag. 2003;22(10):1332–9. https://doi.org/10.1109/TMI.2003.817775.
We acknowledge support by the KIT-Publication Fund of the Karlsruhe Institute of Technology.
Open Access funding enabled and organized by Projekt DEAL. We acknowledge funding by HEiKA–Heidelberg Karlsruhe Research Partnership, Heidelberg University, Karlsruhe Institute of Technology (KIT), Germany, and the Federal Ministry of Education and Research, Germany, Grant Number: 05M2016 and the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – Project-ID 258734477 – SFB 1173.
Institute of Biomedical Engineering, Karlsruhe Institute of Technology (KIT), Kaiserstr. 12, 76131, Karlsruhe, Germany
Ekaterina Kovacheva, Tobias Gerach, Steffen Schuler, Olaf Dössel & Axel Loewe
Department of Cardiology, Theresienkrankenhaus, Academic Teaching Hospital of Heidelberg University, Bassermannstr.1, 68165, Mannheim, Germany
Marco Ochs
EK, TG, and SS carried out the implementation and performed the calculations. EK wrote the manuscript with input from all authors. MO contributed to the interpretation of the results. EK, OD, and AL conceived the study and were in charge of overall direction and planning. All authors read and approved the final manuscript.
Correspondence to Axel Loewe.
The volunteer gave informed consent and the study was approved by the institutional review board.
Additional tables and figures.
Kovacheva, E., Gerach, T., Schuler, S. et al. Causes of altered ventricular mechanics in hypertrophic cardiomyopathy: an in-silico study. BioMed Eng OnLine 20, 69 (2021). https://doi.org/10.1186/s12938-021-00900-9
In-silico study
Altered mechanics
Active and passive forces
Fiber disarray
November 2014, 34(11): 4617-4645. doi: 10.3934/dcds.2014.34.4617
Blow-up set for a superlinear heat equation and pointedness of the initial data
Yohei Fujishima 1,
Division of Mathematical Science, Department of Systems Innovation, Graduate School of Engineering Science, Osaka University, 1-3 Machikaneyama-cho, Toyonaka 560-8531, Japan
Received October 2013 Revised March 2014 Published May 2014
We study the blow-up problem for a superlinear heat equation \begin{equation} \label{eq:P} \tag{P} \left\{ \begin{array}{ll} \partial_t u = \epsilon \Delta u + f(u), x\in\Omega, \,\,\, t>0, \\ u(x,t)=0, x\in\partial\Omega, \,\,\, t>0, \\ u(x,0)=\varphi(x)\ge 0\, (\not\equiv 0), x\in\Omega, \end{array} \right. \end{equation} where $\partial_t=\partial/\partial t$, $\epsilon>0$ is a sufficiently small constant, $N\ge 1$, $\Omega\subset {\bf R}^N$ is a domain, $\varphi\in C^2(\Omega)\cap C(\overline{\Omega})$ is a nonnegative bounded function, and $f$ is a positive convex function in $(0,\infty)$. In [10], the author of this paper and Ishige characterized the location of the blow-up set for problem (P) with $f(u)=u^p$ ($p>1$) with the aid of the invariance of the equation under some scale transformation for the solution, which played an important role in their argument. However, due to the lack of such scale invariance for problem (P) with general $f$, we cannot apply their argument directly to problem (P). In this paper we introduce a new transformation for the solution of problem (P), which is a generalization of the scale transformation introduced in [10], and generalize the argument of [10]. In particular, we show the relationship between the blow-up set for problem (P) and pointedness of the initial function under suitable assumptions on $f$.
Keywords: Superlinear heat equation, blow-up set, blow-up problem, comparison principle, small diffusion.
Mathematics Subject Classification: Primary: 35K91; Secondary: 35B4.
Citation: Yohei Fujishima. Blow-up set for a superlinear heat equation and pointedness of the initial data. Discrete & Continuous Dynamical Systems, 2014, 34 (11) : 4617-4645. doi: 10.3934/dcds.2014.34.4617
X. Y. Chen and H. Matano, Convergence, asymptotic periodicity, and finite-point blow-up in one-dimensional semilinear heat equations, J. Differential Equations, 78 (1989), 160-190. doi: 10.1016/0022-0396(89)90081-8.
T. Cheng and G. F. Zheng, Some blow-up problems for a semilinear parabolic equation with a potential, J. Differential Equations, 244 (2008), 766-802. doi: 10.1016/j.jde.2007.11.004.
C. Cortazar, M. Elgueta and J. D. Rossi, The blow-up problem for a semilinear parabolic equation with a potential, J. Math. Anal. Appl., 335 (2007), 418-427. doi: 10.1016/j.jmaa.2007.01.079.
A. Friedman and A. A. Lacey, The blow-up time for solutions of nonlinear heat equations with small diffusion, SIAM J. Math. Anal., 18 (1987), 711-721. doi: 10.1137/0518054.
A. Friedman and B. McLeod, Blow-up of positive solutions of semilinear heat equations, Indiana Univ. Math. J., 34 (1985), 425-447. doi: 10.1512/iumj.1985.34.34025.
Y. Fujishima, Location of the blow-up set for a superlinear heat equation with small diffusion, Differential Integral Equations, 25 (2012), 759-786.
Y. Fujishima and K. Ishige, Blow-up set for a semilinear heat equation with small diffusion, J. Differential Equations, 249 (2010), 1056-1077. doi: 10.1016/j.jde.2010.03.028.
Y. Fujishima and K. Ishige, Blow-up for a semilinear parabolic equation with large diffusion on $R^N$, J. Differential Equations, 250 (2011), 2508-2543. doi: 10.1016/j.jde.2010.12.008.
Y. Fujishima and K. Ishige, Blow-up for a semilinear parabolic equation with large diffusion on $R^N$. II, J. Differential Equations, 252 (2012), 1835-1861. doi: 10.1016/j.jde.2011.08.040.
Y. Fujishima and K. Ishige, Blow-up set for a semilinear heat equation and pointedness of the initial data, Indiana Univ. Math. J., 61 (2012), 627-663. doi: 10.1512/iumj.2012.61.4596.
Y. Fujishima and K. Ishige, Blow-up set for type I blowing up solutions for a semilinear heat equation, Ann. Inst. H. Poincaré Anal., 31 (2014), 231-247. doi: 10.1016/j.anihpc.2013.03.001.
Y. Giga and R. V. Kohn, Nondegeneracy of blowup for semilinear heat equations, Comm. Pure Appl. Math., 42 (1989), 845-884. doi: 10.1002/cpa.3160420607.
K. Ishige, Blow-up time and blow-up set of the solutions for semilinear heat equations with large diffusion, Adv. Differential Equations, 7 (2002), 1003-1024.
K. Ishige and N. Mizoguchi, Location of blow-up set for a semilinear parabolic equation with large diffusion, Math. Ann., 327 (2003), 487-511. doi: 10.1007/s00208-003-0463-4.
K. Ishige and H. Yagisita, Blow-up problems for a semilinear heat equation with large diffusion, J. Differential Equations, 212 (2005), 114-128. doi: 10.1016/j.jde.2004.10.021.
N. Mizoguchi and E. Yanagida, Life span of solutions with large initial data in a semilinear parabolic equation, Indiana Univ. Math. J., 50 (2001), 591-610. doi: 10.1512/iumj.2001.50.1905.
N. Mizoguchi and E. Yanagida, Life span of solutions for a semilinear parabolic problem with small diffusion, J. Math. Anal. Appl., 261 (2001), 350-368. doi: 10.1006/jmaa.2001.7530.
P. Quittner and P. Souplet, Superlinear Parabolic Problems, Blow-up, Global Existence and Steady States, Birkhäuser Advanced Texts: Basler Lehrbücher, Birkhäuser Verlag, Basel, 2007. doi: 10.1007/978-3-7643-8442-5.
S. Sato, Life span of solutions with large initial data for a superlinear heat equation, J. Math. Anal. Appl., 343 (2008), 1061-1074. doi: 10.1016/j.jmaa.2008.02.018.
J. J. L. Velázquez, Higher-dimensional blow-up for semilinear parabolic equations, Comm. Partial Differential Equations, 17 (1992), 1567-1596. doi: 10.1080/03605309208820896.
J. J. L. Velázquez, Estimates on the $(n-1)$-dimensional Hausdorff measure of the blow-up set for a semilinear heat equation, Indiana Univ. Math. J., 42 (1993), 445-476. doi: 10.1512/iumj.1993.42.42021.
F. B. Weissler, Single point blow-up for a semilinear initial value problem, J. Differential Equations, 55 (1984), 204-224. doi: 10.1016/0022-0396(84)90081-0.
H. Yagisita, Blow-up profile of a solution for a nonlinear heat equation with small diffusion, J. Math. Soc. Japan, 56 (2004), 993-1005. doi: 10.2969/jmsj/1190905445.
H. Yagisita, Variable instability of a constant blow-up solution in a nonlinear heat equation, J. Math. Soc. Japan, 56 (2004), 1007-1017. doi: 10.2969/jmsj/1190905446.
H. Zaag, On the regularity of the blow-up set for semilinear heat equations, Ann. Inst. H. Poincaré Anal. Non Linéaire, 19 (2002), 505-542. doi: 10.1016/S0294-1449(01)00088-9.
H. Zaag, One-dimensional behavior of singular $N$-dimensional solutions of semilinear heat equations, Comm. Math. Phys., 225 (2002), 523-549. doi: 10.1007/s002200100589.
H. Zaag, Regularity of the blow-up set and singular behavior for semilinear heat equations, Mathematics and mathematics education (Bethlehem, 2000), 337-347, World Sci. Publ., River Edge, NJ, 2002.
H. Zaag, Determination of the curvature of the blow-up set and refined singular behavior for a semilinear heat equation, Duke Math. J., 133 (2006), 499-525. doi: 10.1215/S0012-7094-06-13333-1.
July 2016, 36(7): 3961-3991. doi: 10.3934/dcds.2016.36.3961
A new method for the boundedness of semilinear Duffing equations at resonance
Zhiguo Wang 1, Yiqian Wang 2, and Daxiong Piao 3
School of Mathematical Sciences, Soochow University, Suzhou 215006, China
Department of Mathematics, Nanjing University, Nanjing 210093, China
School of Mathematical Sciences, Ocean University of China, Qingdao 266100, China
Received February 2015 Revised November 2015 Published March 2016
We introduce a new method for the boundedness problem of semilinear Duffing equations at resonance. In particular, it can be used to study a class of semilinear equations at resonance without the polynomial-like growth condition. As an application, we prove the boundedness of all the solutions for the equation $\ddot{x}+n^2x+g(x)+\psi(x)=p(t)$ under the Lazer-Leach condition on $g$ and $p$, where $n\in \mathbb{N^+}$, $p(t)$ and $\psi(x)$ are periodic and $g(x)$ is bounded.
Keywords: periodic nonlinearity, Moser's theorem, boundedness, Hamiltonian system, at resonance.
Mathematics Subject Classification: Primary: 34C15; Secondary: 70H0.
Citation: Zhiguo Wang, Yiqian Wang, Daxiong Piao. A new method for the boundedness of semilinear Duffing equations at resonance. Discrete & Continuous Dynamical Systems, 2016, 36 (7) : 3961-3991. doi: 10.3934/dcds.2016.36.3961
J. M. Alonso and R. Ortega, Unbounded solutions of semilinear equations at resonance, Nonlinearity, 9 (1996), 1099-1111. doi: 10.1088/0951-7715/9/5/003. Google Scholar
J. M. Alonso and R. Ortega, Roots of unity and unbounded motions of an asymmetric oscillator, J. Differential Equations, 143 (1998), 201-220. doi: 10.1006/jdeq.1997.3367. Google Scholar
V. I. Arnold, On the behavior of an adiabatic invariant under slow periodic variation of the Hamiltonian, Sov. Math. Dokl., 3 (1962), 136-140. Google Scholar
R. Dieckerhoff and E. Zehnder, Boundedness of solutions via the twist theorem, Ann. Scuola Norm. Sup. Pisa, 14 (1987), 79-95. Google Scholar
T. Ding, Nonlinear oscillations at a point of resonance, Sci. Sin., 25 (1982), 918-931. Google Scholar
R. E. Gaines and J. Mawhin, Coincidence Degree, and Nonlinear Differential Equations, Lecture Notes in Math 568, Springer-Verlag, Berlin, 1977. Google Scholar
L. Jiao, D. Piao and Y. Wang, Boundedness for general semilinear Duffing equations via the twist theorem, J. Differential Equations, 252 (2012), 91-113. doi: 10.1016/j.jde.2011.09.019. Google Scholar
A. M. Krasnosel'skii and J. Mawhin, Periodic solutions of equations with oscillating nonlinearities, Math. Comput. Model., 32 (2000), 1445-1455. doi: 10.1016/S0895-7177(00)00216-8. Google Scholar
A. C. Lazer and D. E. Leach, Bounded perturbations of forced harmonic oscillators at resonance, Ann. Mat. Pura Appl., 82 (1969), 49-68. doi: 10.1007/BF02410787. Google Scholar
M. Levi, Quasiperiodic motions in superquadratic time-periodic potentials, Commun. Math. Phys., 143 (1991), 43-83. doi: 10.1007/BF02100285. Google Scholar
B. Liu, Boundedness in nonlinear oscillations at resonance, J. Differential Equations, 153 (1999), 142-174. doi: 10.1006/jdeq.1998.3553. Google Scholar
B. Liu, Boundedness in asymmetric oscillations, J. Math. Anal. Appl., 231 (1999), 355-373. doi: 10.1006/jmaa.1998.6219. Google Scholar
B. Liu, Quasi-periodic solutions of a semilinear Liénard equation at resonance, Sci. China Ser. A: Mathematics, 48 (2005), 1234-1244. doi: 10.1360/04ys0019. Google Scholar
B. Liu, Quasi-periodic solutions of forced isochronous oscillators at resonance, J. Differential Equations, 246 (2009), 3471-3495. doi: 10.1016/j.jde.2009.02.015. Google Scholar
J. Mawhin, Resonance and nonlinearity: A survey, Ukrainian Math. J., 59 (2007), 197-214. doi: 10.1007/s11253-007-0016-1. Google Scholar
J. Mawhin and M. Willem, Critical Point Theory and Hamiltonian Systems, Applied Mathematical Sciences 74, Springer-Verlag, New York, 1989. doi: 10.1007/978-1-4757-2061-7. Google Scholar
J. Moser, On invariant curves of area preserving mappings of an annulus, Nachr. Akad. Wiss. Göttingen Math.-Phys. Kl., 1962 (1962), 1-20. Google Scholar
R. Ortega, Asymmetric oscillators and twist mappings, J. London Math. Soc., 53 (1996), 325-342. doi: 10.1112/jlms/53.2.325. Google Scholar
R. Ortega, Boundedness in a piecewise linear oscillator and a variant of the small twist theorem, Proc. London Math. Soc., 79 (1999), 381-413. doi: 10.1112/S0024611599012034. Google Scholar
C. Pan and X. Yu, Magnitude Estimates, Shandong Science and Technology Press, Jinan, 1983 (Chinese version). Google Scholar
H. Rüssmann, On the existence of invariant curves of twist mappings of an annulus, Lecture Notes in Math., Springer-Verlag, Berlin, 1007 (1983), 677-718. doi: 10.1007/BFb0061441. Google Scholar
Y. Wang, Boundedness of solutions in a class of Duffing equations with oscillating potentials, Nonlinear Anal. TMA, 71 (2009), 2906-2917. doi: 10.1016/j.na.2009.01.172. Google Scholar
X. Wang, Invariant tori and boundedness in asymmetric oscillations, Acta Math. Sinica (Engl. Ser.), 19 (2003), 765-782. doi: 10.1007/s10114-003-0249-3. Google Scholar
X. Xing and Y. Wang, Boundedness for semilinear Duffing equations at resonance, Taiwanese J. Math., 16 (2012), 1923-1949. Google Scholar
X. Xing, The Lagrangian Stability of Solution for Nonlinear Equations, Ph.D. thesis, Nanjing University, Nanjing, 2012. Google Scholar
J. Xu and J. You, Persistence of lower-dimensional tori under the first Melnikov's nonresonance condition, J. Math. Pures. Appl., 80 (2001), 1045-1067. doi: 10.1016/S0021-7824(01)01221-1. Google Scholar
Describe the discovery that galaxies are getting farther apart as the universe evolves
Explain how to use Hubble's law to determine distances to remote galaxies
Describe models for the nature of an expanding universe
Explain the variation in Hubble's constant
We now come to one of the most important discoveries ever made in astronomy—the fact that the universe is expanding. Before we describe how the discovery was made, we should point out that the first steps in the study of galaxies came at a time when the techniques of spectroscopy were also making great strides. Astronomers using large telescopes could record the spectrum of a faint star or galaxy on photographic plates, guiding their telescopes so they remained pointed to the same object for many hours and collected more light. The resulting spectra of galaxies contained a wealth of information about the composition of the galaxy and the velocities of these great star systems.
Slipher's Pioneering Observations
Curiously, the discovery of the expansion of the universe began with the search for Martians and other solar systems. In 1894, the controversial (and wealthy) astronomer Percival Lowell established an observatory in Flagstaff, Arizona, to study the planets and search for life in the universe. Lowell thought that the spiral nebulae might be solar systems in the process of formation. He therefore asked one of the observatory's young astronomers, Vesto M. Slipher (Figure 1), to photograph the spectra of some of the spiral nebulae to see if their spectral lines might show chemical compositions like those expected for newly forming planets.
Figure 1: Vesto M. Slipher (1875–1969). Slipher spent his entire career at the Lowell Observatory, where he discovered the large radial velocities of galaxies. (credit: Lowell Observatory)
The Lowell Observatory's major instrument was a 24-inch refracting telescope, which was not at all well suited to observations of faint spiral nebulae. With the technology available in those days, photographic plates had to be exposed for 20 to 40 hours to produce a good spectrum (in which the positions of the lines could reveal a galaxy's motion). This often meant continuing to expose the same photograph over several nights. Beginning in 1912, and making heroic efforts over a period of about 20 years, Slipher managed to photograph the spectra of more than 40 of the spiral nebulae (which would all turn out to be galaxies).
To his surprise, the spectral lines of most galaxies showed an astounding redshift. By "redshift" we mean that the lines in the spectra are displaced toward longer wavelengths (toward the red end of the visible spectrum). Recall from the chapter on Radiation and Spectra that a redshift is seen when the source of the waves is moving away from us. Slipher's observations showed that most spirals are racing away at huge speeds; the highest velocity he measured was 1800 kilometers per second.
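To make the connection between a measured shift and a speed concrete, here is a minimal sketch using the low-speed Doppler approximation v ≈ c × z, where z is the fractional shift in wavelength. The wavelength values in the example are our own illustration, not Slipher's actual measurements.

```python
# A sketch of turning a measured redshift into a recession speed using the
# low-speed Doppler approximation v ≈ c * z, where z = (observed - rest) / rest.
# The example wavelengths below are illustrative, not Slipher's actual data.

C_KM_S = 299_792.458  # speed of light in km/s

def recession_speed(rest_wavelength_nm, observed_wavelength_nm):
    """Recession speed in km/s from a shifted spectral line (valid for v << c)."""
    z = (observed_wavelength_nm - rest_wavelength_nm) / rest_wavelength_nm
    return C_KM_S * z

# A line with a rest wavelength of 500.0 nm observed at 503.0 nm:
print(f"{recession_speed(500.0, 503.0):.0f} km/s")  # ~1800 km/s, like Slipher's fastest spirals
```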
Only a few spirals—such as the Andromeda and Triangulum Galaxies and M81—all of which are now known to be our close neighbors, turned out to be approaching us. All the other galaxies were moving away. Slipher first announced this discovery in 1914, years before Hubble showed that these objects were other galaxies and before anyone knew how far away they were. No one at the time quite knew what to make of this discovery.
Figure 2: Milton Humason (1891–1972). Humason was Hubble's collaborator on the great task of observing, measuring, and classifying the characteristics of many galaxies. (credit: Caltech Archives)
Hubble's Law
The profound implications of Slipher's work became apparent only during the 1920s. Georges Lemaître was a Belgian priest and a trained astronomer. In 1927, he published a paper in French in an obscure Belgian journal in which he suggested that we live in an expanding universe. The title of the paper (translated into English) is "A Homogenous Universe of Constant Mass and Growing Radius Accounting for the Radial Velocity of Extragalactic Nebulae." Lemaître had discovered that Einstein's equations of relativity were consistent with an expanding universe (as had the Russian scientist Alexander Friedmann independently in 1922). Lemaître then went on to use Slipher's data to support the hypothesis that the universe actually is expanding and to estimate the rate of expansion. Initially, scientists paid little attention to this paper, perhaps because the Belgian journal was not widely available.
In the meantime, Hubble was making observations of galaxies with the 2.5-meter telescope on Mt. Wilson, which was then the world's largest. Hubble carried out the key observations in collaboration with a remarkable man, Milton Humason, who dropped out of school in the eighth grade and began his astronomical career by driving a mule train up the trail on Mount Wilson to the observatory (Figure 2). In those early days, supplies had to be brought up that way; even astronomers hiked up to the mountaintop for their turns at the telescope. Humason became interested in the work of the astronomers and, after marrying the daughter of the observatory's electrician, took a job as janitor there. After a time, he became a night assistant, helping the astronomers run the telescope and record data. Eventually, he made such a mark that he became a full astronomer at the observatory.
By the late 1920s, Humason was collaborating with Hubble by photographing the spectra of faint galaxies with the 2.5-meter telescope. (By then, there was no question that the spiral nebulae were in fact galaxies.) Hubble had found ways to improve the accuracy of the estimates of distances to spiral galaxies, and he was able to measure much fainter and more distant galaxies than Slipher could observe with his much-smaller telescope. When Hubble laid his own distance estimates next to measurements of the recession velocities (the speed with which the galaxies were moving away), he found something stunning: there was a relationship between distance and velocity for galaxies. The more distant the galaxy, the faster it was receding from us.
In 1931, Hubble and Humason jointly published the seminal paper where they compared distances and velocities of remote galaxies moving away from us at speeds as high as 20,000 kilometers per second and were able to show that the recession velocities of galaxies are directly proportional to their distances from us (Figure 3), just as Lemaître had suggested.
Figure 3: Hubble's Law. (a) These data show Hubble's original velocity-distance relation, adapted from his 1929 paper in the Proceedings of the National Academy of Sciences. (b) These data show Hubble and Humason's velocity-distance relation, adapted from their 1931 paper in The Astrophysical Journal. The red dots at the lower left are the points in the diagram in the 1929 paper. Comparison of the two graphs shows how rapidly the determination of galactic distances and redshifts progressed in the 2 years between these publications.
We now know that this relationship holds for every galaxy except a few of the nearest ones. Nearly all of the galaxies that are approaching us turn out to be part of the Milky Way's own group of galaxies, which have their own individual motions, just as birds flying in a group may fly in slightly different directions at slightly different speeds even though the entire flock travels through space together.
Written as a formula, the relationship between velocity and distance is
[latex]v=H\times d[/latex]
where v is the recession speed, d is the distance, and H is a number called the Hubble constant. This equation is now known as Hubble's law.
Constants of Proportionality
Mathematical relationships such as Hubble's law are pretty common in life. To take a simple example, suppose your college or university hires you to call rich alumni and ask for donations. You are paid $2.50 for each call; the more calls you can squeeze in between studying astronomy and other courses, the more money you take home. We can set up a formula that connects p, your pay, and n, the number of calls
[latex]p=A\times n[/latex]
where A is the alumni constant, with a value of $2.50. If you make 20 calls, you will earn $2.50 times 20, or $50.
Suppose your boss forgets to tell you what you will get paid for each call. You can calculate the alumni constant that governs your pay by keeping track of how many calls you make and noting your gross pay each week. If you make 100 calls the first week and are paid $250, you can deduce that the constant is $2.50 (in units of dollars per call). Hubble, of course, had no "boss" to tell him what his constant would be—he had to calculate its value from the measurements of distance and velocity.
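As a small illustration of recovering a constant of proportionality from data, here is a sketch in Python. The weekly figures and variable names are our own, chosen to match the pay example above.

```python
# Recovering the constant of proportionality A in p = A * n from weekly records,
# the same way Hubble had to recover H from measured distances and velocities.
# The weekly figures below are illustrative.

calls_per_week = [20, 100, 60]          # n: calls made each week
pay_per_week = [50.00, 250.00, 150.00]  # p: dollars received each week

# Each week gives an estimate A = p / n; averaging them smooths out bookkeeping noise.
estimates = [p / n for p, n in zip(pay_per_week, calls_per_week)]
A = sum(estimates) / len(estimates)

print(f"Estimated constant: ${A:.2f} per call")  # -> $2.50 per call
```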
Astronomers express the value of Hubble's constant in units that relate to how they measure speed and velocity for galaxies. In this book, we will use kilometers per second per million light-years as that unit. For many years, estimates of the value of the Hubble constant have been in the range of 15 to 30 kilometers per second per million light-years. The most recent work appears to be converging on a value near 22 kilometers per second per million light-years. If H is 22 kilometers per second per million light-years, a galaxy moves away from us at a speed of 22 kilometers per second for every million light-years of its distance. As an example, a galaxy 100 million light-years away is moving away from us at a speed of 2200 kilometers per second.
Hubble's law tells us something fundamental about the universe. Since all but the nearest galaxies appear to be in motion away from us, with the most distant ones moving the fastest, we must be living in an expanding universe. We will explore the implications of this idea shortly, as well as in the final chapters of this text. For now, we will just say that Hubble's observation underlies all our theories about the origin and evolution of the universe.
Hubble's Law and Distances
The regularity expressed in Hubble's law has a built-in bonus: it gives us a new way to determine the distances to remote galaxies. First, we must reliably establish Hubble's constant by measuring both the distance and the velocity of many galaxies in many directions to be sure Hubble's law is truly a universal property of galaxies. But once we have calculated the value of this constant and are satisfied that it applies everywhere, much more of the universe opens up for distance determination. Basically, if we can obtain a spectrum of a galaxy, we can immediately tell how far away it is.
The procedure works like this. We use the spectrum to measure the speed with which the galaxy is moving away from us. If we then put this speed and the Hubble constant into Hubble's law equation, we can solve for the distance.
Example 1: Hubble's law
Hubble's law (v = H × d) allows us to calculate the distance to any galaxy. Here is how we use it in practice.
We have measured Hubble's constant to be 22 km/s per million light-years. This means that if a galaxy is 1 million light-years farther away, it will move away 22 km/s faster. So, if we find a galaxy that is moving away at 18,000 km/s, what does Hubble's law tell us about the distance to the galaxy?
[latex]d=\frac{v}{H}=\frac{18,000\text{ km/s}}{\frac{22\text{ km/s}}{1\text{ million light-years}}}=\frac{18,000}{22}\times \frac{1\text{ million light-years}}{1}=818\text{ million light-years}[/latex]
Note how we handled the units here: the km/s in the numerator and denominator cancel, and the million light-years in the denominator of the constant moves up into the answer, giving our distance of 818 million light-years.
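The same calculation is easy to script. Below is a minimal sketch of Example 1 in Python; the value H = 22 km/s per million light-years comes from the example above, while the function and variable names are our own.

```python
# Hubble's law solved for distance: d = v / H.
# Units: v in km/s, H in (km/s) per million light-years, so d comes out
# in millions of light-years.

H = 22.0  # km/s per million light-years (value adopted in this text)

def distance_from_velocity(v_km_per_s, hubble_constant=H):
    """Distance in millions of light-years for a given recession speed."""
    return v_km_per_s / hubble_constant

v = 18_000.0  # km/s, the galaxy in Example 1
print(f"{distance_from_velocity(v):.0f} million light-years")  # -> 818
```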
Check Your Learning
Using 22 km/s/million light-years for Hubble's constant, what recessional velocity do we expect to find if we observe a galaxy at 500 million light-years?
[latex]v=d\times H=500\text{ million light-years}\times \frac{22\text{ km/s}}{1\text{ million light-years}}=11,000\text{ km/s}[/latex]
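Run in the other direction, the same relation gives the expected recession speed. This short sketch checks the Check Your Learning result, again assuming H = 22 km/s per million light-years; the names are our own.

```python
# Hubble's law run forward: v = H * d.

H = 22.0  # km/s per million light-years

def velocity_from_distance(d_million_ly, hubble_constant=H):
    """Recession speed in km/s for a distance in millions of light-years."""
    return hubble_constant * d_million_ly

print(f"{velocity_from_distance(500):.0f} km/s")  # -> 11000
```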
Variation of Hubble's Constant
The use of redshift is potentially a very important technique for determining distances because as we have seen, most of our methods for determining galaxy distances are limited to approximately the nearest few hundred million light-years (and they have large uncertainties at these distances). The use of Hubble's law as a distance indicator requires only a spectrum of a galaxy and a measurement of the Doppler shift, and with large telescopes and modern spectrographs, spectra can be taken of extremely faint galaxies.
But, as is often the case in science, things are not so simple. This technique works if, and only if, the Hubble constant has been truly constant throughout the entire life of the universe. When we observe galaxies billions of light-years away, we are seeing them as they were billions of years ago. What if the Hubble "constant" was different billions of years ago? Before 1998, astronomers thought that, although the universe is expanding, the expansion should be slowing down, or decelerating, because the overall gravitational pull of all matter in the universe would have a dominant, measurable effect. If the expansion is decelerating, then the Hubble constant should be decreasing over time.
The discovery that type Ia supernovae are standard bulbs gave astronomers the tool they needed to observe extremely distant galaxies and measure the rate of expansion billions of years ago. The results were completely unexpected. It turns out that the expansion of the universe is accelerating over time! What makes this result so astounding is that there is no way that existing physical theories can account for this observation. While a decelerating universe could easily be explained by gravity, there was no force or property in the universe known to astronomers that could account for the acceleration. In The Big Bang chapter, we will look in more detail at the observations that led to this totally unexpected result and explore its implications for the ultimate fate of the universe.
In any case, if the Hubble constant is not really a constant when we look over large spans of space and time, then the calculation of galaxy distances using the Hubble constant won't be accurate. As we shall see in the chapter on The Big Bang, the accurate calculation of distances requires a model for how the Hubble constant has changed over time. The farther away a galaxy is (and the longer ago we are seeing it), the more important it is to include the effects of the change in the Hubble constant. For galaxies within a few billion light-years, however, the assumption that the Hubble constant is indeed constant gives good estimates of distance.
Models for an Expanding Universe
At first, thinking about Hubble's law and being a fan of the work of Copernicus and Harlow Shapley, you might be shocked. Are all the galaxies really moving away from us? Is there, after all, something special about our position in the universe? Worry not; the fact that galaxies are receding from us and that more distant galaxies are moving away more rapidly than nearby ones shows only that the universe is expanding uniformly.
A uniformly expanding universe is one that is expanding at the same rate everywhere. In such a universe, we and all other observers, no matter where they are located, must observe a proportionality between the velocities and distances of equivalently remote galaxies. (Here, we are ignoring the fact that the Hubble constant is not constant over all time, but if at any given time in the evolution of the universe the Hubble constant has the same value everywhere, this argument still works.)
To see why, first imagine a ruler made of stretchable rubber, with the usual lines marked off at each centimeter. Now suppose someone with strong arms grabs each end of the ruler and slowly stretches it so that, say, it doubles in length in 1 minute (Figure 4). Consider an intelligent ant sitting on the mark at 2 centimeters—a point that is not at either end nor in the middle of the ruler. He measures how fast other ants, sitting at the 4-, 7-, and 12-centimeter marks, move away from him as the ruler stretches.
Figure 4: Stretching a Ruler. Ants on a stretching ruler see other ants move away from them. The speed with which another ant moves away is proportional to its distance.
The ant at 4 centimeters, originally 2 centimeters away from our ant, has doubled its distance in 1 minute; it therefore moved away at a speed of 2 centimeters per minute. The ant at the 7-centimeter mark, which was originally 5 centimeters away from our ant, is now 10 centimeters away; it thus had to move at 5 centimeters per minute. The one that started at the 12-centimeter mark, which was 10 centimeters away from the ant doing the counting, is now 20 centimeters away, meaning it must have raced away at a speed of 10 centimeters per minute. Ants at different distances move away at different speeds, and their speeds are proportional to their distances (just as Hubble's law indicates for galaxies). Yet, notice in our example that all the ruler was doing was stretching uniformly. Also, notice that none of the ants were actually moving of their own accord; it was the stretching of the ruler that moved them apart.
Now let's repeat the analysis, but put the intelligent ant on some other mark—say, on 7 or 12 centimeters. We discover that, as long as the ruler stretches uniformly, this ant also finds every other ant moving away at a speed proportional to its distance. In other words, the kind of relationship expressed by Hubble's law can be explained by a uniform stretching of the "world" of the ants. And all the ants in our simple diagram will see the other ants moving away from them as the ruler stretches.
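The ruler arithmetic above is simple enough to replay in a few lines of code. The sketch below puts the observer at each centimeter mark in turn and confirms that every observer sees every other marker recede at a speed proportional to its distance; the marks and the doubling time are the ones used in the text, while the code itself is our own illustration.

```python
# Uniform stretching of the ruler: every coordinate is scaled by the same factor,
# so every observer measures recession speeds proportional to distance.

marks_cm = [2.0, 4.0, 7.0, 12.0]  # ant positions before stretching
scale = 2.0                        # ruler doubles in length...
minutes = 1.0                      # ...over one minute

for observer in marks_cm:
    print(f"Ant at the {observer:g}-cm mark sees:")
    for other in marks_cm:
        if other == observer:
            continue
        separation_before = abs(other - observer)
        separation_after = scale * separation_before  # separations stretch with the ruler
        speed = (separation_after - separation_before) / minutes
        ratio = speed / separation_before  # the analogue of the Hubble constant
        print(f"  ant at {other:g} cm: {speed:g} cm/min (speed/distance = {ratio:g} per minute)")
```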
For a three-dimensional analogy, let's look at the loaf of raisin bread in Figure 5. The chef has accidentally put too much yeast in the dough, and when she sets the bread out to rise, it doubles in size during the next hour, causing all the raisins to move farther apart. On the figure, we again pick a representative raisin (that is not at the edge or the center of the loaf) and show the distances from it to several others in the figure (before and after the loaf expands).
Figure 5: Expanding Raisin Bread. As the raisin bread rises, the raisins "see" other raisins moving away. More distant raisins move away faster in a uniformly expanding bread.
Measure the increases in distance and calculate the speeds for yourself on the raisin bread, just like we did for the ruler. You will see that, since each distance doubles during the hour, each raisin moves away from our selected raisin at a speed proportional to its distance. The same is true no matter which raisin you start with.
Our two analogies are useful for clarifying our thinking, but you must not take them literally. On both the ruler and the raisin bread, there are points that are at the end or edge. You can use these to pinpoint the middle of the ruler and the loaf. While our models of the universe have some resemblance to the properties of the ruler and the loaf, the universe has no boundaries, no edges, and no center (all mind-boggling ideas that we will discuss in a later chapter).
What is useful to notice about both the ants and the raisins is that they themselves did not "cause" their motion. It isn't as if the raisins decided to take a trip away from each other and then hopped on a hoverboard to get away. No, in both our analogies, it was the stretching of the medium (the ruler or the bread) that moved the ants or the raisins farther apart. In the same way, we will see in The Big Bang chapter that the galaxies don't have rocket motors propelling them away from each other. Instead, they are passive participants in the expansion of space. As space stretches, the galaxies are carried farther and farther apart much as the ants and the raisins were. (If this notion of the "stretching" of space surprises or bothers you, now would be a good time to review the information about spacetime in Black Holes and Curved Spacetime. We will discuss these ideas further as our discussion broadens from galaxies to the whole universe.)
The expansion of the universe, by the way, does not imply that the individual galaxies and clusters of galaxies themselves are expanding. Neither the raisins nor the ants in our analogy grow in size as the loaf expands. Similarly, gravity holds galaxies and clusters of galaxies together, and they get farther away from each other—without themselves changing in size—as the universe expands.
The universe is expanding. Observations show that the spectral lines of distant galaxies are redshifted, and that their recession velocities are proportional to their distances from us, a relationship known as Hubble's law. The rate of recession, called the Hubble constant, is approximately 22 kilometers per second per million light-years. We are not at the center of this expansion: an observer in any other galaxy would see the same pattern of expansion that we do. The expansion described by Hubble's law is best understood as a stretching of space.
Hubble constant: a constant of proportionality in the law relating the velocities of remote galaxies to their distances
Hubble's law: a rule that the radial velocities of remote galaxies are proportional to their distances from us
redshift: when lines in the spectra are displaced toward longer wavelengths (toward the red end of the visible spectrum) | CommonCrawl |
Project acronym 5D-NanoTrack
Project Five-Dimensional Localization Microscopy for Sub-Cellular Dynamics
Researcher (PI) Yoav SHECHTMAN
Host Institution (HI) TECHNION - ISRAEL INSTITUTE OF TECHNOLOGY
Summary The sub-cellular processes that control the most critical aspects of life occur in three-dimensions (3D), and are intrinsically dynamic. While super-resolution microscopy has revolutionized cellular imaging in recent years, our current capability to observe the dynamics of life on the nanoscale is still extremely limited, due to inherent trade-offs between spatial, temporal and spectral resolution using existing approaches. We propose to develop and demonstrate an optical microscopy methodology that would enable live sub-cellular observation in unprecedented detail. Making use of multicolor 3D point-spread-function (PSF) engineering, a technique I have recently developed, we will be able to simultaneously track multiple markers inside live cells, at high speed and in five-dimensions (3D, time, and color). Multicolor 3D PSF engineering holds the potential of being a uniquely powerful method for 5D tracking. However, it is not yet applicable to live-cell imaging, due to significant bottlenecks in optical engineering and signal processing, which we plan to overcome in this project. Importantly, we will also demonstrate the efficacy of our method using a challenging biological application: real-time visualization of chromatin dynamics - the spatiotemporal organization of DNA. This is a highly suitable problem due to its fundamental importance, its role in a variety of cellular processes, and the lack of appropriate tools for studying it. The project is divided into 3 aims: 1. Technology development: diffractive-element design for multicolor 3D PSFs. 2. System design: volumetric tracking of dense emitters. 3. Live-cell measurements: chromatin dynamics. Looking ahead, here we create the imaging tools that pave the way towards the holy grail of chromatin visualization: dynamic observation of the 3D positions of the ~3 billion DNA base-pairs in a live human cell. Beyond that, our results will be applicable to numerous 3D micro/nanoscale tracking applications.
Project acronym ANYONIC
Project Statistics of Exotic Fractional Hall States
Researcher (PI) Mordehai HEIBLUM
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE
Call Details Advanced Grant (AdG), PE3, ERC-2018-ADG
Summary Since their discovery, Quantum Hall Effects have unfolded intriguing avenues of research, exhibiting a multitude of unexpected exotic states: accurate quantized conductance states; particle-like and hole-conjugate fractional states; counter-propagating charge and neutral edge modes; and fractionally charged quasiparticles - abelian and (predicted) non-abelian. Since the sought-after anyonic statistics of fractional states is yet to be verified, I propose to launch a thorough search for it employing new means. I believe that our studies will serve the expanding field of the emerging family of topological materials. Our on-going attempts to observe quasiparticles (qp's) interference, in order to uncover their exchange statistics (under ERC), taught us that spontaneous, non-topological, 'neutral edge modes' are the main culprit responsible for qp's dephasing. In an effort to quench the neutral modes, we plan to develop a new class of micro-size interferometers, based on synthetically engineered fractional modes. Flowing away from the fixed physical edge, their local environment can be controlled, making it less hospitable for the neutral modes. Having at hand our synthesized helical-type fractional modes, it is highly tempting to employ them to form localized para-fermions, which will extend the family of exotic states. This can be done by proximitizing them to a superconductor, or gapping them via inter-mode coupling. The less familiar thermal conductance measurements, which we recently developed (under ERC), will be applied throughout our work to identify 'topological orders' of exotic states; namely, distinguishing between abelian and non-abelian fractional states. The proposal is based on an intensive and continuous MBE effort, aimed at developing extremely high purity, GaAs based, structures. Among them, structures that support our new synthetic modes that are amenable to manipulation, and others that host rare exotic states, such as v=5/2, 12/5, 19/8, and 35/16.
Project acronym AutoCAb
Project Automated computational design of site-targeted repertoires of camelid antibodies
Researcher (PI) Sarel-Jacob FLEISHMAN
Call Details Consolidator Grant (CoG), LS9, ERC-2018-COG
Summary We propose to develop the first high-throughput strategy to design, synthesize, and screen repertoires comprising millions of single-domain camelid antibodies (VHH) that target desired protein surfaces. Each VHH will be individually designed for high stability and target-site affinity. We will leverage recent methods developed by our lab for designing stable, specific, and accurate backbones at interfaces, the advent of massive and affordable custom-DNA oligo synthesis, and machine learning methods to accomplish the following aims: Aim 1: Establish a completely automated computational pipeline that uses Rosetta to design millions of VHHs targeting desired protein surfaces. The variable regions in each design will be encoded in DNA oligo pools, which will be assembled to generate the entire site-targeted repertoire. We will then use high-throughput binding screens followed by deep sequencing to characterize the designs' target-site affinity and isolate high-affinity binders. Aim 2: Develop an epitope-focusing strategy that designs several variants of a target antigen, each of which encodes dozens of radical surface mutations outside the target site to disrupt potential off-target site binding. The designs will be used to isolate site-targeting binders from repertoires of Aim 1. Each high-throughput screen will provide unprecedented experimental data on target-site affinity in millions of individually designed VHHs. Aim 3: Use machine learning methods to infer combinations of molecular features that distinguish high-affinity binders from non binders. These will be encoded in subsequent designed repertoires, leading to a continuous "learning loop" of methods for high-affinity, site-targeted binding. AutoCAb's interdisciplinary strategy will thus lead to deeper understanding of and new general methods for designing stable, high-affinity, site-targeted antibodies, potentially revolutionizing binder and inhibitor discovery in basic and applied biomedical research.
Project acronym BeyondA1
Project Set theory beyond the first uncountable cardinal
Researcher (PI) Assaf Shmuel Rinot
Host Institution (HI) BAR ILAN UNIVERSITY
Summary We propose to establish a research group that will unveil the combinatorial nature of the second uncountable cardinal. This includes its Ramsey-theoretic, order-theoretic, graph-theoretic and topological features. Among others, we will be directly addressing fundamental problems due to Erdos, Rado, Galvin, and Shelah. While some of these problems are old and well-known, an unexpected series of breakthroughs from the last three years suggest that now is a promising point in time to carry out such a project. Indeed, through a short period, four previously unattainable problems concerning the second uncountable cardinal were successfully tackled: Aspero on a club-guessing problem of Shelah, Krueger on the club-isomorphism problem for Aronszajn trees, Neeman on the isomorphism problem for dense sets of reals, and the PI on the Souslin problem. Each of these results was obtained through the development of a completely new technical framework, and these frameworks could now pave the way for the solution of some major open questions. A goal of the highest risk in this project is the discovery of a consistent (possibly, parameterized) forcing axiom that will (preferably, simultaneously) provide structure theorems for stationary sets, linearly ordered sets, trees, graphs, and partition relations, as well as the refutation of various forms of club-guessing principles, all at the level of the second uncountable cardinal. In comparison, at the level of the first uncountable cardinal, a forcing axiom due to Foreman, Magidor and Shelah achieves exactly that. To approach our goals, the proposed project is divided into four core areas: Uncountable trees, Ramsey theory on ordinals, Club-guessing principles, and Forcing Axioms. There is a rich bilateral interaction between any pair of the four different cores, but the proposed division will allow an efficient allocation of manpower, and will increase the chances of parallel success.
Project acronym DELPHI
Project Computing Answers to Complex Questions in Broad Domains
Researcher (PI) Jonathan Berant
Host Institution (HI) TEL AVIV UNIVERSITY
Summary The explosion of information around us has democratized knowledge and transformed its availability for people around the world. Still, since information is mediated through automated systems, access is bounded by their ability to understand language. Consider an economist asking "What fraction of the top-5 growing countries last year raised their co2 emission?". While the required information is available, answering such complex questions automatically is not possible. Current question answering systems can answer simple questions in broad domains, or complex questions in narrow domains. However, broad and complex questions are beyond the reach of state-of-the-art. This is because systems are unable to decompose questions into their parts, and find the relevant information in multiple sources. Further, as answering such questions is hard for people, collecting large datasets to train such models is prohibitive. In this proposal I ask: Can computers answer broad and complex questions that require reasoning over multiple modalities? I argue that by synthesizing the advantages of symbolic and distributed representations the answer will be "yes". My thesis is that symbolic representations are suitable for meaning composition, as they provide interpretability, coverage, and modularity. Complementarily, distributed representations (learned by neural nets) excel at capturing the fuzziness of language. I propose a framework where complex questions are symbolically decomposed into sub-questions, each is answered with a neural network, and the final answer is computed from all gathered information. This research tackles foundational questions in language understanding. What is the right representation for reasoning in language? Can models learn to perform complex actions in the face of paucity of data? Moreover, my research, if successful, will transform how we interact with machines, and define a role for them as research assistants in science, education, and our daily life.
Project acronym DIFFOP
Project Nonlinear Data and Signal Analysis with Diffusion Operators
Researcher (PI) Ronen TALMON
Summary Nowadays, extensive collection and storage of massive data sets have become a routine in multiple disciplines and in everyday life. These large amounts of intricate data often make data samples arithmetic and basic comparisons problematic, raising new challenges to traditional data analysis objectives such as filtering and prediction. Furthermore, the availability of such data constantly pushes the boundaries of data analysis to new emerging domains, ranging from neuronal and social network analysis to multimodal sensor fusion. The combination of evolved data and new domains drives a fundamental change in the field of data analysis. Indeed, many classical model-based techniques have become obsolete since their models do not embody the richness of the collected data. Today, one notable avenue of research is the development of nonlinear techniques that transition from data to creating representations, without deriving models in closed-form. The vast majority of such existing data-driven methods operate directly on the data, a hard task by itself when the data are large and elaborated. The goal of this research is to develop a fundamentally new methodology for high dimensional data analysis with diffusion operators, making use of recent transformative results in manifold and geometry learning. More concretely, shifting the focus from processing the data samples themselves and considering instead structured data through the lens of diffusion operators will introduce new powerful "handles" to data, capturing their complexity efficiently. We will study the basic theory behind this nonlinear analysis, develop new operators for this purpose, and devise efficient data-driven algorithms. In addition, we will explore how our approach can be leveraged for devising efficient solutions to a broad range of open real-world data analysis problems, involving intrinsic representations, sensor fusion, time-series analysis, network connectivity inference, and domain adaptation.
Project acronym EffectiveTG
Project Effective Methods in Tame Geometry and Applications in Arithmetic and Dynamics
Researcher (PI) Gal BINYAMINI
Summary Tame geometry studies structures in which every definable set has a finite geometric complexity. The study of tame geometry spans several interrelated mathematical fields, including semialgebraic, subanalytic, and o-minimal geometry. The past decade has seen the emergence of a spectacular link between tame geometry and arithmetic following the discovery of the fundamental Pila-Wilkie counting theorem and its applications in unlikely diophantine intersections. The P-W theorem itself relies crucially on the Yomdin-Gromov theorem, a classical result of tame geometry with fundamental applications in smooth dynamics. It is natural to ask whether the complexity of a tame set can be estimated effectively in terms of the defining formulas. While a large body of work is devoted to answering such questions in the semialgebraic case, surprisingly little is known concerning more general tame structures - specifically those needed in recent applications to arithmetic. The nature of the link between tame geometry and arithmetic is such that any progress toward effectivizing the theory of tame structures will likely lead to effective results in the domain of unlikely intersections. Similarly, a more effective version of the Yomdin-Gromov theorem is known to imply important consequences in smooth dynamics. The proposed research will approach effectivity in tame geometry from a fundamentally new direction, bringing to bear methods from the theory of differential equations which have until recently never been used in this context. Toward this end, our key goals will be to gain insight into the differential algebraic and complex analytic structure of tame sets; and to apply this insight in combination with results from the theory of differential equations to effectivize key results in tame geometry and its applications to arithmetic and dynamics. I believe that my preliminary work in this direction amply demonstrates the feasibility and potential of this approach.
Project acronym EMERGE
Project Reconstructing the emergence of the Milky Way's stellar population with Gaia, SDSS-V and JWST
Researcher (PI) Dan Maoz
Summary Understanding how the Milky Way arrived at its present state requires a large volume of precision measurements of our Galaxy's current makeup, as well as an empirically based understanding of the main processes involved in the Galaxy's evolution. Such data are now about to arrive in the flood of quality information from Gaia and SDSS-V. The demography of the stars and of the compact stellar remnants in our Galaxy, in terms of phase-space location, mass, age, metallicity, and multiplicity are data products that will come directly from these surveys. I propose to integrate this information into a comprehensive picture of the Milky Way's present state. In parallel, I will build a Galactic chemical evolution model, with input parameters that are as empirically based as possible, that will reproduce and explain the observations. To get those input parameters, I will measure the rates of supernovae (SNe) in nearby galaxies (using data from past and ongoing surveys) and in high-redshift proto-clusters (by conducting a SN search with JWST), to bring into sharp focus the element yields of SNe and the distribution of delay times (the DTD) between star formation and SN explosion. These empirically determined SN metal-production parameters will be used to find the observationally based reconstruction of the Galaxy's stellar formation history and chemical evolution that reproduces the observed present-day Milky Way stellar population. The population census of stellar multiplicity with Gaia+SDSS-V, and particularly of short-orbit compact-object binaries, will hark back to the rates and the element yields of the various types of SNe, revealing the connections between various progenitor systems, their explosions, and their rates. The plan, while ambitious, is feasible, thanks to the data from these truly game-changing observational projects. My team will perform all steps of the analysis and will combine the results to obtain the clearest picture of how our Galaxy came to be.
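The delay-time distribution (DTD) mentioned above enters the chemical-evolution modelling through a convolution: the supernova rate at a given time is the star-formation history convolved with the DTD. The sketch below illustrates this relation with a hypothetical power-law DTD and a toy star-formation history; none of the numbers are taken from the proposal.

```python
import numpy as np

def sn_rate(sfh, dtd, dt):
    """Convolve a star-formation history with a delay-time distribution.

    sfh : star formation rate per time bin (Msun/yr), length n
    dtd : SN rate per unit formed mass at each delay (SNe/yr/Msun), length n
    dt  : width of each time bin (yr)
    Returns the SN rate (SNe/yr) in each time bin.
    """
    return np.convolve(sfh, dtd)[:len(sfh)] * dt

# Toy example: constant star formation and a ~1/t DTD with a 40 Myr onset
dt = 2.5e7                                 # 25 Myr bins
t = np.arange(1, 549) * dt                 # ~13.7 Gyr of cosmic time
sfh = np.full_like(t, 3.0)                 # 3 Msun/yr, illustrative
dtd = np.where(t > 4e7, 4e-13 * (t / 1e9) ** -1.0, 0.0)  # SNe/yr/Msun, illustrative
print(sn_rate(sfh, dtd, dt)[-1])           # present-day SN rate (SNe/yr)
```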
Project acronym FTHPC
Project Fault Tolerant High Performance Computing
Researcher (PI) Oded Schwartz
Host Institution (HI) THE HEBREW UNIVERSITY OF JERUSALEM
Summary Supercomputers are strategically crucial for facilitating advances in science and technology: in climate change research, accelerated genome sequencing towards cancer treatments, cutting-edge physics, innovative engineering solutions, and many other compute-intensive problems. However, the future of super-computing depends on our ability to cope with the ever-increasing rate of faults (bit flips and component failure), resulting from the steadily increasing machine size and decreasing operating voltage. Indeed, hardware trends predict at least two faults per minute for next-generation (exascale) supercomputers. The challenge of ascertaining fault tolerance for high-performance computing is not new, and has been the focus of extensive research for over two decades. However, most solutions are either (i) general purpose, requiring little to no algorithmic effort, but severely degrading performance (e.g., checkpoint-restart), or (ii) tailored to specific applications and very efficient, but requiring high expertise and significantly increasing programmers' workload. We seek the best of both worlds: high performance and general-purpose fault resilience. Efficient general-purpose solutions (e.g., via error correcting codes) revolutionized memory and communication devices more than two decades ago, enabling programmers to effectively disregard the very likely memory and communication errors. The time has come for a similar paradigm shift in the computing regime. I argue that exciting recent advances in error correcting codes, and in short probabilistically checkable proofs, make this goal feasible. Success along these lines will eliminate the bottleneck of required fault-tolerance expertise, and open exascale computing to all algorithm designers and programmers, for the benefit of the scientific, engineering, and industrial communities.
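As one concrete illustration of the trade-off described above, application-tailored schemes such as algorithm-based fault tolerance protect a specific kernel (here matrix multiplication) with checksums so that a corrupted entry of the product can be detected from the residuals. This is a generic, textbook-style sketch included only to illustrate that class of techniques; it is not the project's method.

```python
import numpy as np

def checksum_matmul(A, B):
    """Matrix multiply with row/column checksums (classic ABFT idea):
    append a checksum row to A and a checksum column to B; a corrupted
    entry of the product can then be detected from the checksum residuals."""
    Ac = np.vstack([A, A.sum(axis=0)])                  # extra checksum row
    Bc = np.hstack([B, B.sum(axis=1, keepdims=True)])   # extra checksum column
    Cc = Ac @ Bc
    C = Cc[:-1, :-1]
    row_ok = np.allclose(Cc[-1, :-1], C.sum(axis=0))    # column sums match?
    col_ok = np.allclose(Cc[:-1, -1], C.sum(axis=1))    # row sums match?
    return C, row_ok and col_ok

rng = np.random.default_rng(0)
A, B = rng.normal(size=(4, 3)), rng.normal(size=(3, 5))
C, ok = checksum_matmul(A, B)
print(ok)  # True when no fault occurred during the multiplication
```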
Project acronym GeoArchMag
Project Beyond the Holocene Geomagnetic field resolution
Researcher (PI) Ron Shaar
Call Details Starting Grant (StG), PE10, ERC-2018-STG
Summary For decades the Holocene has been considered a flat and "boring" epoch from the standpoint of paleomagnetism, mainly due to insufficient resolution of the available paleomagnetic data. However, recent archaeomagnetic data have revealed that the Holocene geomagnetic field is anything but stable – presenting puzzling intervals of extreme decadal-scale fluctuations and unexpected departures from a simple dipolar field structure. This new information introduced an entirely new paradigm to the study of the geomagnetic field and to a wide range of research areas relying on paleomagnetic data, such as geochronology, climate research, and geodynamo exploration. This proposal aims at breaking the resolution limits in paleomagnetism, and providing a continuous time series of the geomagnetic field vector throughout the Holocene at decadal resolution and unprecedented accuracy. To this end I will use an innovative assemblage of data sources, jointly unique to the Levant, including rare archaeological finds, annual laminated stalagmites, varved sediments, and arid playa deposits. Together, these sources can provide unprecedented yearly resolution, whereby the "absolute" archaeomagnetic data can calibrate "relative" terrestrial data. The geomagnetic data will define an innovative absolute geomagnetic chronology that will be used to synchronize cosmogenic 10Be data and an extensive body of paleo-climatic indicators. With these in hand, I will address four ground-breaking problems: I) Chronology: Developing a dating technique for resolving critical controversies in Levantine archaeology and Quaternary geology. II) Geophysics: Exploring fine-scale geodynamo features in Earth's core from new generations of global geomagnetic models. III) Cosmogenics: Correlating fast geomagnetic variations with cosmogenic isotope production rates. IV) Climate: Testing one of the most challenging and controversial questions in geomagnetism: "Does the Earth's magnetic field play a role in climate changes?"
Project acronym HARMONIC
Project Discrete harmonic analysis for computer science
Researcher (PI) Yuval FILMUS
Summary Boolean function analysis is a topic of research at the heart of theoretical computer science. It studies functions on n input bits (for example, functions computed by Boolean circuits) from a spectral perspective, by treating them as real-valued functions on the group Z_2^n, and using techniques from Fourier and functional analysis. Boolean function analysis has been applied to a wide variety of areas within theoretical computer science, including hardness of approximation, learning theory, coding theory, and quantum complexity theory. Despite its immense usefulness, Boolean function analysis has limited scope, since it is only appropriate for studying functions on {0,1}^n (a domain known as the Boolean hypercube). Discrete harmonic analysis is the study of functions on domains possessing richer algebraic structure such as the symmetric group (the group of all permutations), using techniques from representation theory and Sperner theory. The considerable success of Boolean function analysis suggests that discrete harmonic analysis could likewise play a central role in theoretical computer science. The goal of this proposal is to systematically develop discrete harmonic analysis on a broad variety of domains, with an eye toward applications in several areas of theoretical computer science. We will generalize classical results of Boolean function analysis beyond the Boolean hypercube, to domains such as finite groups, association schemes (a generalization of finite groups), the quantum analog of the Boolean hypercube, and high-dimensional expanders (high-dimensional analogs of expander graphs). Potential applications include a quantum PCP theorem and two outstanding open questions in hardness of approximation: the Unique Games Conjecture and the Sliding Scale Conjecture. Beyond these concrete applications, we expect that the fundamental results we prove will have many other applications that are hard to predict in advance.
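As a concrete illustration of the spectral viewpoint described above, the short sketch below computes the Fourier (Walsh–Hadamard) coefficients of a Boolean function over the group Z_2^n by brute force; the example function and encoding are a generic illustration, not material from the proposal.

```python
import itertools
import numpy as np

def fourier_coefficients(f, n):
    """Fourier coefficients of f: {0,1}^n -> R over the group Z_2^n.

    hat_f(S) = E_x[ f(x) * (-1)^{sum of x_i for i in S} ]
    """
    points = list(itertools.product([0, 1], repeat=n))
    coeffs = {}
    for S in itertools.chain.from_iterable(
            itertools.combinations(range(n), k) for k in range(n + 1)):
        chi = [(-1) ** sum(x[i] for i in S) for x in points]
        coeffs[S] = np.mean([f(x) * c for x, c in zip(points, chi)])
    return coeffs

# Example: the 3-bit majority function, with outputs in {-1, +1}
maj = lambda x: 1 if sum(x) >= 2 else -1
for S, c in fourier_coefficients(maj, 3).items():
    if abs(c) > 1e-12:
        print(S, round(c, 3))
# With this 0/1-to-character encoding, majority is supported on the odd-size sets:
# each singleton has coefficient -1/2 and the full set {0, 1, 2} has +1/2.
```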
Project acronym HOLI
Project Deep Learning for Holistic Inference
Researcher (PI) Amir Globerson
Summary Machine learning has rapidly evolved in the last decade, significantly improving accuracy on tasks such as image classification. Much of this success can be attributed to the re-emergence of neural nets. However, learning algorithms are still far from achieving the capabilities of human cognition. In particular, humans can rapidly organize an input stream (e.g., textual or visual) into a set of entities, and understand the complex relations between those. In this project I aim to create a general methodology for semantic interpretation of input streams. Such problems fall under the structured-prediction framework, to which I have made numerous contributions. The proposal identifies and addresses three key components required for a comprehensive and empirically effective approach to the problem. First, we consider the holistic nature of semantic interpretations, where a top-down process chooses a coherent interpretation among the vast number of options. We argue that deep-learning architectures are ideally suited for modeling such coherence scores, and propose to develop the corresponding theory and algorithms. Second, we address the complexity of the semantic representation, where a stream is mapped into a variable number of entities, each having multiple attributes and relations to other entities. We characterize the properties a model should satisfy in order to produce such interpretations, and propose novel models that achieve this. Third, we develop a theory for understanding when such models can be learned efficiently, and how well they can generalize. To achieve this, we address key questions of non-convex optimization, inductive bias and generalization. We expect these contributions to have a dramatic impact on AI systems, from machine reading of text to image analysis. More broadly, they will help bridge the gap between machine learning as an engineering field, and the study of human cognition.
Project acronym HomDyn
Project Homogenous dynamics, arithmetic and equidistribution
Researcher (PI) Elon Lindenstrauss
Summary We consider the dynamics of actions on homogeneous spaces of algebraic groups, and propose to tackle a wide range of problems in the area, including the central open problems. One main focus in our proposal is the study of the intriguing and somewhat subtle rigidity properties of higher rank diagonal actions. We plan to develop new tools to study invariant measures for such actions, including the zero entropy case, and in particular Furstenberg's Conjecture about $\times 2,\times 3$-invariant measures on $\R / \Z$. A second main focus is on obtaining quantitative and effective equidistribution and density results for unipotent flows, with emphasis on obtaining results with a polynomial error term. One important ingredient in our study of both diagonalizable and unipotent actions is arithmetic combinatorics. Interconnections between these subjects and arithmetic equidistribution properties, Diophantine approximations and automorphic forms will be pursued.
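For reference, the conjecture of Furstenberg referred to above is usually stated as follows (the standard formulation from the literature, reproduced here only for context):

```latex
% Furstenberg's times-2, times-3 conjecture (standard formulation)
Let $T_2, T_3 \colon \mathbb{R}/\mathbb{Z} \to \mathbb{R}/\mathbb{Z}$ be given by
$T_2(x) = 2x \bmod 1$ and $T_3(x) = 3x \bmod 1$. If $\mu$ is a Borel probability
measure on $\mathbb{R}/\mathbb{Z}$ that is ergodic and invariant under both $T_2$
and $T_3$, then $\mu$ is either Lebesgue measure or is supported on a finite set
of rational points.
```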
Project acronym HQMAT
Project New Horizons in Quantum Matter: From Critical Fluids to High Temperature Superconductivity
Researcher (PI) Erez BERG
Summary Understanding the low-temperature behavior of quantum correlated materials has long been one of the central challenges in condensed matter physics. Such materials exhibit a number of interesting phenomena, such as anomalous transport behavior, complex phase diagrams, and high-temperature superconductivity. However, their understanding has been hindered by the lack of suitable theoretical tools to handle such strongly interacting quantum ``liquids.'' Recent years have witnessed a wave of renewed interest in this long-standing, deep problem, both from condensed matter, high energy, and quantum information physicists. The goal of this research program is to exploit the recent progress on these problems to open new ways of understanding strongly-coupled unconventional quantum fluids. We will perform large-scale, sign problem-free QMC simulations of metals close to quantum critical points, focusing on new regimes beyond the traditional paradigms. New ways to diagnose transport from QMC data will be developed. Exotic phase transitions between an ordinary and a topologically-ordered, fractionalized metal will be studied. In addition, insights will be gained from analytical studies of strongly coupled lattice models, starting from the tractable limit of a large number of degrees of freedom per unit cell. The thermodynamic and transport properties of these models will be studied. These solvable examples will be used to provide a new window into the properties of strongly coupled quantum matter. We will seek ``organizing principles'' to describe such matter, such as emergent local quantum critical behavior and a hydrodynamic description of electron flow. Connections will be made with the ideas of universal bounds on transport and on the rate of spread of quantum information, as well as with insights from other techniques. While our study will mostly focus on generic, universal features of quantum fluids, implications for specific materials will also be studied.
Project acronym HydraMechanics
Project Mechanical Aspects of Hydra Morphogenesis
Researcher (PI) Kinneret Magda KEREN
Summary Morphogenesis is one of the most remarkable examples of biological pattern formation. Despite substantial progress in the field, we still do not understand the organizational principles responsible for the robust convergence of the morphogenesis process, across scales, to form viable organisms under variable conditions. We focus here on the less-studied mechanical aspects of this problem, and aim to uncover how mechanical forces and feedback contribute to the formation and stabilization of the body plan. Regenerating Hydra offer a powerful platform to explore this direction, thanks to their simple body plan, extraordinary regeneration capabilities, and the accessibility and flexibility of their tissues. We propose to follow the regeneration of excised tissue segments, which inherit an aligned supra-cellular cytoskeletal organization from the parent Hydra, as well as cell aggregates, which lack any prior organization. We will employ advanced microscopy techniques and develop elaborate image analysis tools to track cytoskeletal organization and collective cell migration and correlate them with global tissue morphology, from the onset of regeneration all the way to the formation of complete animals. Furthermore, to directly probe the influence of mechanics on Hydra morphogenesis, we propose to apply various mechanical perturbations, and intervene with the axis formation process using external forces and mechanical constraints. Overall, the proposed work seeks to develop an effective phenomenological description of morphogenesis during Hydra regeneration, at the level of cells and tissues, and reveal the mechanical basis of this process. More generally, our research will shed light on the role of mechanics in animal morphogenesis, and inspire new approaches for using external forces to direct tissue engineering and advance regenerative medicine.
Project acronym iEXTRACT
Project Information Extraction for Everyone
Researcher (PI) Yoav Goldberg
Summary Staggering amounts of information are stored in natural language documents, rendering them unavailable to data-science techniques. Information Extraction (IE), a subfield of Natural Language Processing (NLP), aims to automate the extraction of structured information from text, yielding datasets that can be queried, analyzed and combined to provide new insights and drive research forward. Despite tremendous progress in NLP, IE systems remain mostly inaccessible to non-NLP-experts who can greatly benefit from them. This stems from the current methods for creating IE systems: the dominant machine-learning (ML) approach requires technical expertise and large amounts of annotated data, and does not provide the user with control over the extraction process. The previously dominant rule-based approach unrealistically requires the user to anticipate and deal with the nuances of natural language. I aim to remedy this situation by revisiting rule-based IE in light of advances in NLP and ML. The key idea is to cast IE as a collaborative human-computer effort, in which the user provides domain-specific knowledge, and the system is in charge of solving various domain-independent linguistic complexities, ultimately allowing the user to query unstructured texts via easily structured forms. More specifically, I aim to develop: (a) a novel structured representation that abstracts much of the complexity of natural language; (b) algorithms that derive these representations from texts; (c) an accessible rule language to query this representation; (d) AI components that infer the user's extraction intents and, based on them, promote relevant examples and highlight extraction cases that require special attention. The ultimate goal of this project is to democratize NLP and bring advanced IE capabilities directly to the hands of domain-experts: doctors, lawyers, researchers and scientists, empowering them to process large volumes of data and advance their profession.
Project acronym In Motion
Project Investigation and Monitoring of Time-varying Environments on Macro and Nano Scales
Researcher (PI) Pavel Ginzburg
Summary The ultimate goal of my research is to develop novel approaches to detect dynamical changes in cluttered time-dependent electromagnetic environments. Theoretical and experimental methods will be applied to a range of highly important problems, including radar tracking and optical imaging of complex processes on micro and nano scales. Today's demands, set by the increasing complexity of the systems under study, challenge the applicability of existing solutions, opening a range of opportunities for rewarding multidisciplinary research. The scalability of Maxwell's equations with respect to frequency, together with classical-quantum correspondence principles, suggests that a broad range of dynamical phenomena can be addressed by applying multidisciplinary concepts, as my team has recently demonstrated. Radio detection of macroscopic objects (e.g. airborne targets) and optical imaging of conformational changes in colloids (e.g. bio-chemical activities), being representative examples on very different size scales, share similar underlying physics and engineering principles for their analysis. This multidisciplinary research considers the phenomena on macro, micro and nano scales, utilizing classical and quantum properties of electromagnetic radiation and light to achieve detection performance beyond existing capabilities. Radio detection will be performed by mapping the internal mechanical properties of a target, enabling a unique signature to be attributed to it within clutter. The novel concept of 'swimming antennas', driven by holographic optical tweezers, will be developed for optical mapping of micro- and nano-scale motion. Slowly decaying luminescent tags, conjugated with antennas, will allow motion to be monitored beyond the diffraction limit by considering quantum properties of light. The fundamental study and exploration of the impact of mechanical motion on photonic and electromagnetic applications, including tracking in clutter and classical and quantum imaging and sensing, is the core objective of this proposal.
Project acronym JetNS
Project Relativistic Jets in Astrophysics -Compact binary mergers, Gamma-Ray Bursts, and Beyond
Researcher (PI) Ehud Nakar
Summary What is the origin of the electromagnetic (EM) counterparts of gravitational waves observed from compact binary mergers? What makes short gamma ray bursts (GRBs)? What are the sources of IceCube's high-energy neutrinos? Are all core-collapse supernovae exploding via the same mechanism? These are some of the puzzles that have emerged with the rapid progress of time domain astronomy. Relativistic jets in compact binary mergers and GRBs, and their interaction with the surrounding media, hold the key to these, and other, seemingly unrelated broad-impact questions. Here I propose a new forefront study of how relativistic jets interact with their surrounding media and of its numerous implications, focusing on compact binary mergers and GRBs. The goal of this project is to study, first, the jet-media interaction, and the microphysics of the radiation-mediated shocks that it drives. I will then use the results, together with available observations, to learn about compact binary mergers, GRBs and SNe, shedding light on the questions listed above, and probing the nature of relativistic jets in general. Important goals will include: (i) General models for the propagation of relativistic jets in various media types. (ii) Modeling of the EM signal generated by jet-media interaction following compact binary mergers. (iii) Estimates of the neutrino signal from jet-media interaction in GRBs and SNe. (iv) Constraints on the role of jets in SN explosions. This project is timely as it comes at the beginning of a new multi-messenger era, in which the EM counterparts of GW sources are going to be detected on a regular basis and the face of transient astrophysics is going to be changed by a range of large-scale surveys such as LSST, the SKA, and more. This project will set the theoretical basis for understanding the numerous known and yet-to-be-discovered transients that will be detected in the next decade.
Project acronym JEWTACT
Project Jewish Translation and Cultural Transfer in Early Modern Europe
Researcher (PI) Iris IDELSON-SHEIN
Host Institution (HI) BEN-GURION UNIVERSITY OF THE NEGEV
Call Details Starting Grant (StG), SH5, ERC-2018-STG
Summary Contemporary scholarship has often envisioned modernity as a kind of immense cultural earthquake, originating somewhere in western or central Europe, and then gradually propagating throughout the continent. This massive upheaval is said to have shaken the very foundations of every culture it frequented, subsequently eliminating the world which once was, to make way for a new age. This project offers a new understanding of modernization, not as a radical break with tradition, but as the careful importation of new ideas by often timid, almost inadvertent innovators. The project focuses on the rich corpus of translations of non-Jewish texts into Jewish languages, which developed during the early modern period. Largely neglected by modern scholars, these translations played a pivotal role in fashioning Jewish culture from the sixteenth century into modern times. Jewish translators were never merely passive recipients of their non-Jewish sources; they mistranslated both deliberately and accidentally, added and omitted, and harnessed their sources to meet their own unique agendas. Throughout the process of translation then, a new corpus was created, one that was distinctly Jewish in character, but closely corresponded with the surrounding majority culture. JEWTACT offers the first comprehensive study of the entire gamut of these early modern Jewish translations, exposing a hitherto unexplored terrain of surprising intercultural encounters which took place upon the advent of modernity—between East and West, tradition and innovation, Christians and Jews. The project posits translation as the primary and most ubiquitous mechanism of Christian-Jewish cultural transfer in early modern Europe. In so doing, I wish to revolutionize our understanding of the so-called early modern "Jewish book," revealing its intensely porous, collaborative and innovative nature, and to offer a new paradigm of Jewish modernization and cultural exchange.
Project acronym LifeLikeMat
Project Dissipative self-assembly in synthetic systems: Towards life-like materials
Researcher (PI) Rafal KLAJN
Summary "Living organisms are sophisticated self-assembled structures that exist and operate far from thermodynamic equilibrium and, as such, represent the ultimate example of dissipative self-assembly. They remain stable at highly organized (low-entropy) states owing to the continuous consumption of energy stored in ""chemical fuels"", which they convert into low-energy waste. Dissipative self-assembly is ubiquitous in nature, where it gives rise to complex structures and properties such as self-healing, homeostasis, and camouflage. In sharp contrast, nearly all man-made materials are static: they are designed to serve a given purpose rather than to exhibit different properties dependent on external conditions. Developing the means to rationally design dissipative self-assembly constructs will greatly impact a range of industries, including the pharmaceutical and energy sectors. The goal of the proposed research program is to develop novel principles for designing dissipative self-assembly systems and to fabricate a range of dissipative materials based on these principles. To achieve this goal, we will employ novel, unconventional approaches based predominantly on integrating organic and colloidal-inorganic building blocks. Specifically, we will (WP1) drive dissipative self-assembly using chemical reactions such as polymerization, oxidation of sugars, and CO2-to-methanol conversion, (WP2) develop new modes of intrinsically dissipative self-assembly, whereby the activated building blocks are inherently unstable, and (WP3&4) conceive systems whereby self-assembly is spontaneously followed by disassembly. The proposed studies will lead to new classes of ""driven"" materials with features such as tunable lifetimes, time-dependent electrical conductivity, and dynamic exchange of building blocks. Overall, this project will lay the foundations for developing new synthetic dissipative materials, bringing us closer to the rich and varied functionality of materials found in nature."
"Living organisms are sophisticated self-assembled structures that exist and operate far from thermodynamic equilibrium and, as such, represent the ultimate example of dissipative self-assembly. They remain stable at highly organized (low-entropy) states owing to the continuous consumption of energy stored in ""chemical fuels"", which they convert into low-energy waste. Dissipative self-assembly is ubiquitous in nature, where it gives rise to complex structures and properties such as self-healing, homeostasis, and camouflage. In sharp contrast, nearly all man-made materials are static: they are designed to serve a given purpose rather than to exhibit different properties dependent on external conditions. Developing the means to rationally design dissipative self-assembly constructs will greatly impact a range of industries, including the pharmaceutical and energy sectors. The goal of the proposed research program is to develop novel principles for designing dissipative self-assembly systems and to fabricate a range of dissipative materials based on these principles. To achieve this goal, we will employ novel, unconventional approaches based predominantly on integrating organic and colloidal-inorganic building blocks. Specifically, we will (WP1) drive dissipative self-assembly using chemical reactions such as polymerization, oxidation of sugars, and CO2-to-methanol conversion, (WP2) develop new modes of intrinsically dissipative self-assembly, whereby the activated building blocks are inherently unstable, and (WP3&4) conceive systems whereby self-assembly is spontaneously followed by disassembly. The proposed studies will lead to new classes of ""driven"" materials with features such as tunable lifetimes, time-dependent electrical conductivity, and dynamic exchange of building blocks. Overall, this project will lay the foundations for developing new synthetic dissipative materials, bringing us closer to the rich and varied functionality of materials found in nature." | CommonCrawl |
Evolution of the threshold temperature definition of a heat wave vs. evolution of the minimum mortality temperature: a case study in Spain during the 1983–2018 period
J. A. López-Bueno 1, J. Díaz 1 (ORCID: orcid.org/0000-0003-4282-4959), F. Follos 2, J. M. Vellón 2, M. A. Navas 1, D. Culqui 1, M. Y. Luna 3, G. Sánchez-Martínez 4 & C. Linares 1
Environmental Sciences Europe volume 33, Article number: 101 (2021)
An area of current study concerns the analysis of the possible adaptation of the population to heat, based on the temporal evolution of the minimum mortality temperature (MMT). It is also important to know how the threshold temperatures (Tthreshold) evolve, because these temperatures provide the basis for the activation of public health prevention plans against high temperatures. The objective of this study was to analyze the temporal evolution of the threshold temperatures (Tthreshold) in different Spanish regions during the 1983–2018 period and to compare this evolution with that of the MMT. The dependent variable used was the raw rate of daily mortality due to natural causes (ICD-10: A00–R99) for the period considered. The independent variable was the maximum daily temperature (Tmax) during the summer months, registered at the reference observatory of each region. Threshold values were determined using annual scatter plots of the prewhitened mortality and Tmax series. Linear models were then fitted to the Tthreshold values across the study period, which made it possible to estimate the annual rate of change in Tthreshold.
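A minimal sketch of the final step (fitting a linear trend to the annual Tthreshold values and expressing the slope per decade) is given below; the year and threshold values are invented placeholders, not data from the study.

```python
import numpy as np

# Hypothetical annual heat-wave threshold temperatures (degC) for one region.
# In the study these values come from annual scatter plots of the prewhitened
# daily mortality series against maximum daily temperature (Tmax).
years = np.arange(1983, 2019)
rng = np.random.default_rng(1)
tthreshold = 34.0 + 0.05 * (years - 1983) + rng.normal(0.0, 0.4, years.size)

# Ordinary least-squares linear fit: the slope is in degC per year
slope, intercept = np.polyfit(years, tthreshold, 1)
print(f"Estimated trend: {slope * 10:.2f} degC/decade")
```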
The results show that, on average, Tthreshold has increased at a rate of 0.57 ºC/decade in Spain, while summer Tmax has increased at a rate of 0.41 ºC/decade, suggesting adaptation to heat. This rate of change shows considerable geographic heterogeneity. The rate of change of Tthreshold was also similar to that detected for MMT.
The temporal evolution of the series of both temperature measures can be used as indicators of population adaptation to heat. The temporal evolution of Tthreshold has important geographic variation, probably related to sociodemographic and economic factors, that should be studied at the local level.
In recent years, studies in different countries have observed a decrease in the mortality attributable to heat waves [2, 3, 9, 27, 28]. This could be interpreted as a progressive process of population adaptation to high temperatures, due to a variety of factors [19, 31], among which the efficiency of heat prevention plans in different countries is worth mentioning [12].
The decrease in the impact of heat is generally measured in terms of the decrease in the relative risks of daily mortality associated with extremely hot temperatures. This process can also be visualized as an evolution over time towards higher values of the temperature thresholds for heat waves (Tthreshold) [18, 30] (Díaz et al. 2019). The threshold temperature for a heat wave can generally be defined as the epidemiological threshold at which heat begins to provoke excess mortality attributable to heat. These thresholds also mark the activation of public health prevention plans against high temperatures. Tthreshold values are dynamic: they vary over time together with the climate and with sociodemographic and economic conditions. They can therefore be used as an indicator of adaptation to extremely high temperatures [18, 30] (Díaz et al. 2019). From the point of view of population adaptation to heat waves, adaptation is complete when the rate of increase in maximum daily temperature as a consequence of global warming is lower than the rate of increase in Tthreshold (Díaz et al. 2019) [15], so that no excess summer mortality occurs.
In order to analyze whether a process of population adaptation is in fact occurring, some research has investigated the evolution of another epidemiological indicator, derived from the traditional "V"-shaped functional relationship between daily mortality and temperature. This indicator is known as the minimum mortality temperature (MMT) [2, 8, 33].
The evolution of MMT has also been used as an indicator of possible population adaptation to heat [15, 16]. From a conceptual point of view, MMT and Tthreshold represent two different indicators. In a graphic representation (Fig. 1) of the temperature–mortality relationship, MMT represents the temperature at which mortality reaches its minimum value. Thus, mortality attributable to heat is represented to the right of MMT, while mortality attributable to cold is represented to the left [1]. Tthreshold, in contrast, represents the temperature at which mortality begins to increase due to heat waves. Mortality due to heat evidently includes mortality due to heat waves [29]; however, the behavior and temporal evolution of the two indicators are not necessarily similar.
Temperature–mortality relationship in Madrid, 1983–2018 period. Minimum mortality temperature (MMT) and temperature threshold for heat waves (Tthreshold)
In the report "Heat Health in the WHO European Region: Updated Evidence for Effective Prevention" [34], the WHO established that the activation of prevention plans to address high temperatures should have an epidemiological basis. That is to say, they should be based on a determination of Tthreshold for each geographic and sub-climatic area of study, based on the increase in mortality with high temperatures. Also, these plans should be revised periodically, given that Tthreshold varies across time. Despite the important role of Tthreshold in the process of population adaptation to high temperatures, there are few studies that analyze its temporal evolution and that also establish variation in time as an indicator of the process of population adaptation to heat waves.
The first objective of this study was to analyze the temporal evolution of Tthreshold temperatures across a period of 36 years (1983–2018) in Spanish regions that are representative of the different impacts of heat waves, and to evaluate whether Tthreshold constitutes a good indicator of population adaptation to high temperatures. Second, this study aimed to compare the rate of evolution of Tthreshold with the rate of evolution observed in MMT during the same time period studied, to analyze the possible relationship and possible implications for future adaptation.
From among all Spanish provinces, 10 were selected as representative of the behavior of Spanish regions in terms of thermal extremes, according to previous studies [9, 11, 32].
The dependent variable was made up of the rate of daily mortality due to natural causes (ICD X: A00-R99) in municipalities with over 10,000 inhabitants in selected Spanish regions during the 1983–2018 period. These data were provided by the National Statistics Institute (INE). Based on daily mortality data, and using population data also supplied by INE, the rate of daily mortality per 100,000 inhabitants was calculated.
Temperature data
The data were provided by the State Meteorological Agency (AEMET). Maximum daily temperature in the summer months (Tmax) was the independent variable, registered in the meteorological observatory of reference in each region during the analyzed period corresponding to 1983–2018.
Tmax was used, because it is the variable that presents the best statistical association with daily mortality during heat waves [11, 18].
In addition, we used the rate of evolution of maximum daily temperature (Tmax) in the summer months for the 1983–2018 period and for future Tmax foreseen for the 2051–2100 time horizon under an RCP8.5 emissions scenario. Data were taken from previous papers: [16] and Díaz et al. 2019, respectively.
Determination of threshold temperatures (Tthreshold)
In order to eliminate the analogous components of trend, seasonality and autoregressive character in the series of temperature and mortality, we used a pre-whitening procedure with the Box–Jenkins' methodology [4].
These prewhitened series constitute the residuals obtained through ARIMA modeling and represent the anomalies that correspond to the mortality rate. The series was modeled for the entire 1983–2018 period.
The general form of the ARIMA regression model is given below:
$$Y_{t} = b + \beta_{1p} \varphi_{pt} + \beta_{2q} \theta_{qt} + \beta_{3P} s\varphi_{Pt} + \beta_{4Q} s\theta_{Qt} + \beta_{5} n1_{t} + \beta_{6\alpha} \cos\left( \alpha t \right) + \beta_{7\alpha} \sin\left( \alpha t \right) + \varepsilon_{t}, \qquad \varepsilon_{t} \sim N\left( 0,\sigma \right),$$
where \(Y_{t}\) is mortality on day t; \(b\) is the intercept; the \(\beta\) are the coefficients of the corresponding variables; \(\varphi\) is the non-seasonal autoregressive parameter of order p on day t; \(\theta\) is the non-seasonal moving average of order q on day t; \(s\varphi\) is the seasonal autoregressive parameter of order P on day t; \(s\theta_{Qt}\) is the seasonal moving average of order Q on day t; n1 is the trend on day t; \(\cos\left( \alpha t \right)\) and \(\sin\left( \alpha t \right)\) are seasonal functions with \(\alpha\) corresponding to periods of {365, 180, 120, 90, 60, 30} days; and \(\varepsilon_{t}\) is the residual, which follows a normal distribution with mean 0 and standard deviation \(\sigma\). Since the trend was included as an independent variable, the integrated parameter was I = 0. Lastly, a period of 7 days was fixed for the seasonal part of the regression model.
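As an illustration only, this pre-whitening step could be reproduced along the following lines in R (the software later used for the mixed models); the data frame and column names, and the ARIMA orders chosen, are assumptions for the sketch rather than the authors' actual settings.

# Hypothetical pre-whitening sketch: 'd' is assumed to hold the daily mort_rate and tmax series.
d$t  <- seq_len(nrow(d))                                   # day index, used as the trend term n1
seas <- do.call(cbind, lapply(c(365, 180, 120, 90, 60, 30),
                function(p) cbind(cos(2 * pi * d$t / p), sin(2 * pi * d$t / p))))
fit_m <- arima(d$mort_rate, order = c(1, 0, 1),            # non-seasonal p and q; I = 0 (trend enters as a regressor)
               seasonal = list(order = c(1, 0, 1), period = 7),
               xreg = cbind(trend = d$t, seas))
d$mort_resid <- as.numeric(residuals(fit_m))               # pre-whitened mortality anomalies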
Later, for each year, a dispersion diagram (scatter plot) was constructed in which the X-axis represents maximum daily temperature in 2 ºC intervals and the Y-axis represents the corresponding residuals, averaged over these intervals, with their confidence intervals. Using this methodology, it was possible to detect statistically significant mortality anomalies associated with a given temperature. The value of Tmax at which mortality begins to increase anomalously was denominated Tthreshold. This methodology has been used in multiple other studies [6, 7, 11, 23, 30].
By way of example, Fig. 2 shows the process by which residuals were obtained and the later determination of Tthreshold in the case of Barcelona for the 1983–2018 period.
a Temporal evolution of the daily mortality rate for Barcelona during the 1983–2018 period; b temporal evolution of the daily mortality rate prewhitened series for this period, and c graphic illustration of the threshold temperature for the 1983–1988 period
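The binning step described above can be sketched as follows, assuming the pre-whitened mortality anomalies (mort_resid) and the summer Tmax of one year are available in the data frame d; the temperature range of the bins and the normal 95% confidence bound are illustrative assumptions.

# Group anomalies into 2 ºC intervals of Tmax and report the lowest interval whose
# mean anomaly is significantly above zero (a candidate Tthreshold).
d$bin  <- cut(d$tmax, breaks = seq(18, 46, by = 2))        # 2 ºC intervals; range assumed
n_bin  <- tapply(d$mort_resid, d$bin, length)
mu_bin <- tapply(d$mort_resid, d$bin, mean)
se_bin <- tapply(d$mort_resid, d$bin, sd) / sqrt(n_bin)
lower  <- mu_bin - 1.96 * se_bin                           # lower 95% confidence bound per interval
names(which(lower > 0))[1]                                 # first interval with a significant positive anomaly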
Calculation of the rate of temporal evolution of Tthreshold
Once Tthreshold was calculated for each year and region, a linear fit process was carried out for the results obtained. The values on the X-axis represent the years between 1983 and 2018, and the Y-axis show the values of Tthreshold for each year, when it was possible to calculate this value. The slope of the line obtained in the linear fit model represents the rate of evolution of Tthreshold during the period of analysis.
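For a single region, this fit can be written as a short sketch; tthr_values is an assumed vector of the annual Tthreshold values, with NA for years in which no threshold could be determined.

ann <- data.frame(year = 1983:2018, tthr = tthr_values)    # one Tthreshold value per year (or NA)
fit <- lm(tthr ~ year, data = ann)                         # years with NA are dropped automatically
10 * coef(fit)["year"]                                     # slope of the fit expressed in ºC/decade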
Comparison with the evolution of MMT
In other recent studies in Spain for the same period (1983–2018), the rate of evolution of MMT was calculated [16]. If both rates are compared and bivariate correlations are established between the annual series of Tthreshold and MMT during the study period, it is possible to describe a potential association between them.
Also, cross-correlation functions (CCF) were calculated between the series, which allowed for the analysis of a possible time lag between the values of MMT and Tthreshold.
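Both comparisons can be sketched in R as follows, assuming mmt and tthr are numeric vectors holding the annual MMT and Tthreshold values of one region in the same year order.

ok <- complete.cases(mmt, tthr)                            # keep years in which both indicators exist
cor.test(mmt[ok], tthr[ok])                                # Pearson correlation, as reported in Table 1
ccf(mmt[ok], tthr[ok], lag.max = 5)                        # cross-correlation; significant peaks were found at lag 0 only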
Determination of the increase in Tthreshold
Given that we were working with spatial data, the time evolution of the results was analyzed using a linear mixed model (link = identity). In this model, the calculated Tthreshold values were used as the dependent variable, the fixed-effects independent variable was the year, and region was included as a random-effects factor, by way of the following equation:
geeglm(formula = d$Tumbral ~ d$year, data = data, id = d$Provincia)
This analysis was carried out using the statistical software package SPSS 27. The linear mixed models used the geeglm() function of the geepack package of free R software.
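A runnable version of the call quoted above might look like the sketch below; the column names (Tumbral, year, Provincia) follow the call shown in the text, while the Gaussian family with identity link is an assumption consistent with the model description.

library(geepack)                                           # provides geeglm()
d <- d[order(d$Provincia), ]                               # observations grouped by region
m <- geeglm(Tumbral ~ year, data = d, id = factor(Provincia),
            family = gaussian(link = "identity"))
summary(m)                                                 # the 'year' coefficient is the annual increase in Tthreshold (Table 2)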
Figure 3 shows the graphs that correspond to the linear fit models for Tthreshold for the 10 regions analyzed. As can be observed, in all cases except Badajoz the slopes of the fit lines show an increasing temporal evolution.
Linear fit based on the threshold temperature in the years of the study period in each of the regions considered
Table 1 shows the average values that correspond to the daily mortality rate and the maximum daily temperature (Tmax) for the summer months in different Spanish regions for the 1983–2018 period. It also shows the average values of the rate of change in the minimum mortality temperatures (MMT) obtained previously [16] and the average values of threshold temperatures (Tthreshold) corresponding to the linear fit models shown in Fig. 3. The values of the slopes are expressed in terms of ºC/decade, both in the case of Tthreshold as well as for the values of MMT. Table 1 also shows the Pearson correlation coefficients of the bivariate correlations obtained between the series of the annual values of Tthreshold and MMT. In general, no correlation exists between the series, except in three regions (Alicante, Barcelona and Zaragoza), and in three of the regions (Badajoz, Orense and Valladolid) the correlations have a negative sign.
Table 1 Average values of daily mortality, daily maximum temperature (Tmax) of the summer months; rate of change in minimum mortality temperatures (MMT) and threshold temperatures (Tthreshold) by region during the 1983–2018 period
The CCF calculated between the series of MMT and Tthreshold values did not show statistically significant lags, except in Barcelona, Alicante and Zaragoza, in which case the significant associations were established in lag zero, as shown in Fig. 4.
Cross-correlation functions (CCF) between the annual series of minimum mortality temperatures (MMT) and the threshold temperatures (Tthreshold) for the 1983–2018 period for the regions of Barcelona, Alicante and Zaragoza, respectively
On the other hand, Table 2 shows the results obtained in the linear mixed model, where for all of the regions analyzed, there was a statistically significant, increasing trend in Tthreshold values.
Table 2 Results of the mixed model to determine the annual increase (d$year) in Tthreshold for all of the regions considered
Table 3 shows the rate of increase in Tmax in the summer months in each of the regions analyzed during the 1983–2018 period and the future rate of increase in Tmax values foreseen for the 2051–2100 time horizon under an RCP8.5 emissions scenario.
Table 3 Rate of evolution of maximum daily temperature (Tmax) in the summer months for the 1983–2018 period and for future Tmax foreseen for the 2051–2100 time horizon under an RCP8.5 emissions scenario
The primary result of this study is that, at the global level, Tthreshold has increased in Spain over the 36-year period of analysis (1983–2018), which indicates a gradual process of population adaptation to heat waves. These results agree with those of studies of relative attributable risks that analyze the impact of heat waves in Spain [9] and with the results obtained from studies in other locations both in Europe and in the United States [2, 3, 27, 28].
The rate of evolution of Tthreshold observed here is around 0.57 ºC/decade and is similar to the rate of evolution of MMT for all of Spain, established at 0.64 ºC/decade [16]. Despite this, the rate of increase in Tthreshold is greater than that of MMT in 8 of the 10 regions considered, which indicates that the population adapts more rapidly to the more extreme values of Tthreshold than to the lower temperature values that correspond to MMT. This could be linked to measures put into place specifically to address heat waves (prevention plans for high temperatures, air conditioning, health alerts) [31].
Similar to the evolution of MMT [16], there are large geographic differences in the evolution of Tthreshold values. Table 1 shows a contrast between rates of increase of up to 1.32 ºC/decade, as in Bizkaia, and regions that even show a decline, as in the case of Badajoz (−0.25 ºC/decade). Diverse factors, such as the mean age of the population in each region, could help to explain these variations between regions with different climatic and demographic contexts. Some of them could potentially be influenced, such as health spending [20] or income level [22]; others, such as demographic structure [25] or rural/urban character, also influence the impacts of heat [23] but would be difficult to modify.
However, there are other factors that operate at a sub-regional level that are probably important in explaining the different behavior of heat with respect to mortality; for example, the age of built structures [21], their quality and insulation [24] and even the access to air conditioning [14]. The existence of green roofs and walls [5] and the accessibility of green zones could also influence mortality due to heat [26] and, therefore, could change the relationship between temperature and mortality. Explanations of the differences in the evolution of MMT should also take place at the sub-provincial level considered here.
The average maximum temperatures in the summer months in Spain have increased at a rate of 0.41 ºC/decade [16]. Therefore, at the global level it can be said that a process of adaptation to heat waves exists in Spain, in accordance with the hypothesis that adaptation to heat waves exists when the rate of increase in Tthreshold is greater than the rate of growth in Tmax [10, 17]. However, there are regional differences, so this does not hold for every region individually. Table 3 shows a comparison of the rate of increase in Tthreshold (Table 1) with the increase in Tmax for the summer months during the 1983–2018 period. These findings show that adaptation can be said to be taking place in all of the regions except Badajoz.
Table 3 also shows the potential future increase in Tmax for the summer months for the 2051–2100 time horizon, considering a high emissions scenario RCP8.5 (Díaz et al. 2019). A similar process to that described here would suggest that if the rate of increase in Tthreshold is sustained, there will be a process of adaptation in the future to temperatures in all regions, with the exception of Alicante and Badajoz.
Despite the similar behavior of the MMT and Tthreshold series, they represent different concepts. This is underlined by the fact that a statistically significant correlation exists between the two annual series in only three of the regions analyzed. In the cases in which this association exists, the two series are in sync; that is, MMT changes in the same year as Tthreshold.
One of the limitations of this study is that it considered, at most, a 36-year series. Given that there was only one Tthreshold value per year, only 36 values of Tthreshold were included. This precluded carrying out the sensitivity analyses that are typical of time series methodologies, such as the jackknife [13]. The use of a relatively short data series (36 years, 36 values) introduces uncertainty into the estimated slopes of the linear fits; this uncertainty is inherent to estimations based on so few data points. In addition, in some of the regions considered there were years without a heat wave, so it was not possible to determine a Tthreshold value, which further reduced the number of data in the series of Tthreshold values.
A representative observatory was used as a reference for an entire region, which could give rise to bias in the assignment of exposure temperatures of the population [7]. The possible bias due to not controlling for air pollution variables was minimized through the use of prewhitened series of mortality rates and through directly relating mortality anomalies with temperature anomalies to determine Tthreshold values.
The temporal evolution of both the MMT and Tthreshold series can be used as an indicator of population adaptation to heat. The temporal evolution of Tthreshold shows important geographic variation, probably related to sociodemographic and economic factors, that should be studied at the local level. It is important to keep in mind that the activation of heat prevention plans should be based on these heat wave definition threshold temperatures and should be implemented at the local level [34]. An analysis of the temporal evolution of Tthreshold is key not only for updating these threshold levels periodically, as suggested by the WHO [34], but also as an indicator of population adaptation to heat. Knowing which variables influence changes in Tthreshold levels, and modifying them to favor adaptation processes, could be a key tool in adaptation to climate change. If this population adaptation to heat is achieved, attributable mortality could be dramatically reduced [18, 30] (Díaz et al. 2019).
It is an ecological analysis so the study does not involve human subjects.
MMT:
Minimum mortality temperature
Tthreshold:
Threshold temperature
Tmax:
Maximum daily temperature
ICD:
International Classification of Diseases
CCF:
Cross-correlation functions
Alberdi JC, Díaz J, Montero JC, Mirón IJ (1998) Daily mortality in Madrid Community (Spain) 1986–1991: relationship with atmospheric variables. Eur J Epidemiol 14:571–578
Åström DO, Tornevi A, Ebi KL, Rocklöv J, Forsberg B (2016) Evolution of minimum mortality temperature in Stockholm, Sweden, 1901–2009. Environ Health Perspect 124(6):740–744
Barreca A, Clay K, Deschenes O, Greenstone M, Shapiro JS (2016) Adapting to climate change: the remarkable decline in the US temperature-mortality relationship over the twentieth century. J Politic Economic 124(1):105–109
Box GE, Jenkins GM, Reinsel C (1994) Time series analysis. Forecasting and control. Prentice Hall, Englewood
Buchin O, Hoelscher MT, Meier F, Nehls T, Ziegler F (2016) Evaluation of the health-risk reduction potential of countermeasures to urban heat islands. Energy Buildings 114:27–37
Carmona R, Díaz J, Mirón IJ, Ortíz C, León I, Linares C (2016) Geographical variation in relative risks associated with cold waves in Spain: the need for a cold wave prevention plan. Environ Int 88:103–111
Carmona R, Linares C, Ortiz C, Mirón IJ, Luna MY, Díaz J (2017) Spatial variability in threshold temperatures during extreme heat days: Impact assessment on prevention plans. Int J Environ Health Res 27:463–475
Chung Y, Noh H, Honda Y, Hashizume M, Bell ML, Guo YL, Kim H (2017) Temporal changes in mortality related to extreme temperatures for 15 cities in Northeast Asia: adaptation to heat and mal adaptation to cold. Am J Epidemiol 185(10):907–913
Díaz J, Carmona R, Mirón IJ, Luna MY, Linares C (2018) Time trend in the impact of heat waves on daily mortality in Spain for a period of over thirty years (1983–2013). Environ Int 116:10–17
Díaz J, Sáez M, Carmona R, Mirón IJ, Barceló MA, Luna MY, Linares C (2019) Mortality attributable to high temperatures over the 2021–2050 and 2051–2100 time horizons in Spain: adaptation and economic estimate. Environ Res 172:475–485
Díaz J, Carmona R, Mirón IJ, Ortiz C, León I, Linares C (2015) Geographical variation in relative risks associated with heat: update of Spain's Heat Wave Prevention Plan. Environ Int 85:273–283
de Donato F, Scortichini M, De Sario M, De Martino A, Michelozzi P (2018) Temporal variation in the effect of heat and the role of the Italian heat prevention plan. Public Health 161:154–162
Efron B (1979) Bootstrap methods: another look at the jackknife. Ann Stat 7(1):1–26
Flouris AD, McGinn R, Poirie MP, Louie JC et al (2018) Screening criteria for increased susceptibility to heat stress during work or leisure in hot environments in healthy individuals aged 31–70 years. Temperature (Austin) 5(1):86–99. https://doi.org/10.1080/23328940.2017.1381800.eCollection
Follos F, Linares C, Vellón JM, López-Bueno JA, Luna MY, Martínez GS, Díaz J (2020) The evolution of minimum mortality temperatures as an indicator of heat adaptation: the cases of Madrid and Seville (Spain). Sci Tot Environ 747:141259
Follos F, Linares C, López-Bueno JA, Navas MA, Vellón JM, Luna MY, Sánchez-Martínez G, Díaz J (2021) Evolution of the minimum mortality temperature (1983–2018): is Spain adapting to heat? Sci Total Environ 784:147233
Guo Y, Gasparrini A, Armstrong BG, Tawatsupa B, Tobias A, Lavigne E et al (2017) Heat wave and mortality: a multicountry, multicommunity study. Environ Health Perspect 125(8):087006
Guo Y, Gasparrini A, Li S, Sera F, Vicedo-Cabrera AM, de Coelho SZSM, Saldiva PHN et al (2018) Quantifying excess deaths related to heatwaves under climate change scenarios: a multicountry time series modelling study. PLoS Med 15(7):e1002629
Kazmierczak A, Bittner S, Breil M, Coninx I, Johnson K, Kleinenkuhnen L, Zandersen M (2020) Urban adaptation in Europe: how cities and towns respond to climate change
Leone M, D'Ippoliti D, De Sario M, Analitis A, Menne B, Katsouyanni K, Dörtbudak Z (2013) A time series study on the effects of heat on mortality and evaluation of heterogeneity into European and Eastern-Southern Mediterranean cities: results of EU CIRCE project. Environ Health 12(1):55
López-Bueno JA, Díaz J, Linares C (2019) Differences in the impact of heat waves according to urban and peri-urban factor in Madrid. Int J Biometeorol 63:371–380
López-Bueno JA, Díaz J, Sánchez-Guevara C, Sánchez-Martínez G, Franco M, Gullón P, Linares C (2020) The impact of heat waves on daily mortality in districts in Madrid: the effect of sociodemographic factors. Environ Res 190:109993
López-Bueno JA, Navas-Martín MA, Linares C, Mirón IJ, Luna MY, Sánchez-Martínez G, Culqui D, Díaz J (2021) Analysis of the impact of heat waves on daily mortality in urban and rural areas in Madrid. Environ Res 195:110892
Matthies, F, Bickler, G, Cardeñosa N, Hales S, Editors., World Health Organization. Regional Office for Europe. et al. (2008). Heat-health action plans: guidance. In: Franziska Matthies, ed. et al. Copenhagen: WHO Regional Office for Europe. https://apps.who.int/iris/handle/10665/107888
Montero JC, Miron IJ, Criado-Alvarez JJ, Linares C, Diaz J (2012) Influence of local factors in the relationship between mortality and heat waves: castile-La Mancha (1975–2003). Sci Total Environ 414:73–80
Murage P, Kovats S, Sarran C, Taylor J, McInnes R (2020) What individual and neighbourhood-level factors increase the risk of heat-related mortality? A case-crossover study of over 185,000 deaths in London using high-resolution climate datasets. Environ Int 134:105292
Petkova EP, Gasparrini A, Kinney PL (2014) Heat and mortality in New York City since the beginning of the 20th century. Epidemiology 25(4):554–560
Ragettli MS, Vicedo-Cabrera AM, Schindler C, Röösli M (2017) Exploring the association between heat and mortality in Switzerland between 1995 and 2013. Environ Res 158:703–709
Sánchez-Guevara C, Gayoso M, Núñez-Peiró M, Sanz A, Neila FJ Alesanco P, et al. Feminización de la pobreza energética en Madrid. Exposición a extremos térmicos. Fundación General de la UPM. ISBN: 978-84-09-20538-7. Madrid, 2020
Sánchez-Martínez G, Díaz J, Linares C, Nieuwenhuyse A, Hooyberghs H, Lauwaet D, De Ridder K, Carmona R, Ortiz C, Kendrovski V, Aerts R, Dunbar M (2018) Heat and health under climate change in Antwerp: projected impacts and implications for prevention. Environ Int 111:135–143
Sánchez-Martínez G, Linares C, Ayuso A, Kendrovski V, Boeckmann M, Díaz J (2019) Heat-Health Action Plans in Europe: challenges ahead and how to tackle them. Environ Res 176:108548
Tobías A, Armstrong B, Gasparrini A, Díaz J (2014) Effects of high summer temperatures on mortality in 50 Spanish cities. Environ Health 13:48
Todd N, Valleron A-J (2015) Space-time covariation of mortality with temperature: a systematic study of deaths in France, 1968–2009. Environ Health Perspect 123(7):659–664
WHO Regional Office for Europe (2021) Heat and health in the WHO European Region: updated evidence for effective prevention. Copenhagen
This research project was funded by the Carlos III Health Institute (ISCIII) under file number ENPY 470/19 and is supported by the Biodiversity Foundation of the Ministry for Ecological Transition and Demographic Challenge, in addition to the research projects ISCIII: ENPY107/18 and ENPY 376/18.
This paper reports independent results and research. The views expressed are those of the authors and not necessarily those of the Carlos III Institute of Health (Instituto de Salud Carlos III).
National School of Public Health, Carlos III Institute of Health, Escuela Nacional de Sanidad, Avda. Monforte de Lemos 5, 28029, Madrid, Spain
J. A. López-Bueno, J. Díaz, M. A. Navas, D. Culqui & C. Linares
Tdot Soluciones Sostenibles, SL, Ferrol, A Coruña, Spain
F. Follos & J. M. Vellón
State Meteorological Agency, Madrid, Spain
M. Y. Luna
The UNEP DTU Partnership, Copenhagen, Denmark
G. Sánchez-Martínez
J. A. López-Bueno
J. Díaz
F. Follos
J. M. Vellón
M. A. Navas
D. Culqui
C. Linares
LJA: providing and analysis of data; elaboration and revision of the manuscript. DJ: original idea of the study. Study design: elaboration and revision of the manuscript. FF: providing and analysis of data; elaboration and revision of the manuscript. VJM: providing and analysis of data; elaboration and revision of the manuscript. NMÁ: providing and analysis of data; elaboration and revision of the manuscript. CD: providing and analysis of data; elaboration and revision of the manuscript. LMY: providing and analysis of data; elaboration and revision of the manuscript. SMG: epidemiological study design. Elaboration and revision of the manuscript. LC: original idea of the study. Study design; elaboration and revision of the manuscript. All authors read and approved the final manuscript.
Correspondence to J. Díaz.
This study works with aggregate data; there are no individual data. Therefore, consent to participate is not applicable.
This study works with aggregate data; there are no individual data. Therefore, consent to publish is not applicable.
The researchers declare that they have no conflicts of interest that would compromise the independence of this research work. The views expressed by the authors do not necessarily coincide with those of the institutions whose affiliation is indicated at the beginning of this article.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
López-Bueno, J.A., Díaz, J., Follos, F. et al. Evolution of the threshold temperature definition of a heat wave vs. evolution of the minimum mortality temperature: a case study in Spain during the 1983–2018 period. Environ Sci Eur 33, 101 (2021). https://doi.org/10.1186/s12302-021-00542-7
Temperature threshold
Mortality attributable
Thermal decomposition of the amino acids glycine, cysteine, aspartic acid, asparagine, glutamic acid, glutamine, arginine and histidine
Ingrid M. Weiss1,2,
Christina Muth2,
Robert Drumm2 &
Helmut O. K. Kirchner2
BMC Biophysics volume 11, Article number: 2 (2018)
The pathways of thermal instability of amino acids have been unknown. New mass spectrometric data allow unequivocal quantitative identification of the decomposition products.
Calorimetry, thermogravimetry and mass spectrometry were used to follow the thermal decomposition of the eight amino acids G, C, D, N, E, Q, R and H between 185 °C and 280 °C. Endothermic heats of decomposition between 72 and 151 kJ/mol are needed to form 12 to 70% volatile products. This process is neither melting nor sublimation. With exception of cysteine they emit mainly H2O, some NH3 and no CO2. Cysteine produces CO2 and little else. The reactions are described by polynomials, AA→a NH3+b H2O+c CO2+d H2S+e residue, with integer or half integer coefficients. The solid monomolecular residues are rich in peptide bonds.
Eight of the 20 standard amino acids decompose at well-defined, characteristic temperatures, in contrast to commonly accepted knowledge. Products of decomposition are simple. The novel quantitative results emphasize the impact of water and cyclic condensates with peptide bonds and put constraints on hypotheses of the origin, state and stability of amino acids in the range between 200 °C and 300 °C.
The so-called 20 standard amino acids are fundamental building blocks of living systems [1]. They are usually obtained in solid form from aqueous solution by evaporation of the solvent [2]. Most of today's knowledge about amino acids is therefore limited to the temperature and pressure range of liquid water. A huge number of physical and chemical data have been unequivocally established − at least for the 20 standard amino acids [3]. Thermal stability or instability of amino acids, however, is one of the few fields that remains speculative to this day, at least to some extent. One major reason for deficiencies in this respect could be that data acquisition is usually performed without analyzing the entire system, in which liquid amino acids and decomposition products, as well as their respective gas phases, must be taken into account. We used a commercial thermal analysis system with a direct transfer line to a mass spectrometer to characterize the melting or decomposition of amino acids under inert atmosphere in the temperature range between 323 and 593 K, with detection of masses between 1 and 199 Da in the vapour phase. Mass analysis was calibrated with respect to NH3, H2O and CO2 by searching for suitable reference substances, with the goal of identifying whether or not there is a common underlying principle of melting − solidification and/or sublimation − desublimation and/or irreversible decomposition for amino acids. Previous reports missed the quantitative identification of gaseous products. The broader implication of this general relationship between amino acids and their condensation products is that amino acids might have been synthesized under prebiological conditions on earth or deposited on earth from interstellar space, where they have been found [4]. Robustness of amino acids against extreme conditions is required for early occurrence, but little is known about their nonbiological thermal destruction. There is hope that one might learn something about the molecules needed in synthesis from the products found in decomposition. Our experimental approach is not biochemical; it is merely thermochemical.
DSC, TGA, QMS
Altogether 200 samples of amino acids of at least 99.99% purity from Sigma-Aldrich were tested in a Simultaneous Thermal Analysis apparatus STA 449 Jupiter (Netzsch, Selb, Germany) coupled with a QMS 403C Aëolos mass spectrometer (Netzsch). Specimens of typically 10 mg weight in Al cans were evacuated and then heated at 5 K/min in argon flow. Differential scanning calorimetry (DSC), thermal gravimetric analysis/thermogravimetry (TGA, TG) and quantitative mass spectrometry (QMS) outputs were smoothed to obtain the data of the "Raw data" section. The mass spectrometer scanned 290 times between 30 °C and 320 °C, i.e. at every single degree, in 1 Da steps between 1 Da and 100 Da. Altogether, 290×100×200 = 5.8 million data points were analyzed.
A MPA120 EZ-Melt Automated Melting Point Apparatus (Stanford Research Systems, Sunnyvale, CA, U.S.A.) equipped with a CAMCORDER GZ-EX210 (JVC, Bad Vilbel, Germany) was used for the optical observations. The same heating rate of 5 K/min was employed, but without inert gas protection. Screen shot images were extracted from continuous videos registered from 160 to 320 °C, for all amino acids significant moments are shown in "Results".
Although we examined all 20 amino acids, we report results for the eight of them for which the sum of the volatile gases NH3, H2O, CO2 and H2S matched the mass loss registered by thermogravimetry (TG). Only if both the mass and the enthalpy balance match precisely, in our case within ±5 Da (see Table 1 for details), is it possible to take these data as proof of the correctness of the proposed reaction. This is the reason why only 8 of the 20 amino acids are reported here; only for these do we know for certain how they decompose. For each amino acid we show the skeleton structure, the optical observations, the DSC signal in red and the TG signal in black, as well as the ion currents for the important channels; quantitatively significant are only the 17 Da (NH3, green lines), 18 Da (H2O, blue lines) and 44 Da (CO2, grey lines) signals. The logarithmic scale overemphasizes the molecular weights. The DSC data are given in W/g, the TG data in %. The QMS data are ion currents [A] per sample. All data are summarized in Fig. 1 for glycine, Fig. 2 for cysteine, Fig. 3 for aspartic acid, Fig. 4 for asparagine, Fig. 5 for glutamic acid, Fig. 6 for glutamine, Fig. 7 for arginine, and Fig. 8 for histidine.
Glycine data. C2H5NO2, 75 Da, H f =−528 kJ/mol
Cysteine data. C3H7NO2S, 121 Da, H f =−534 kJ/mol
Aspartate data. C4H7NO4, 133 Da, H f =−973 kJ/ mol
Asparagine data. C4H8N2O3, 132 Da, H f =−789 kJ/mol
Glutamate data. C5H9NO4, 147 Da, H f =−1097 kJ/mol
Glutamine data. C5H10N2O3, 146 Da, H f =−826 kJ/mol
Arginine data. C6H14N4O2, 174 Da, H f =−623 kJ/mol
Histidine data. C6H9N3O2, 155 Da, H f =−467 kJ/mol
Table 1 Data overview
Glycine, Gly, G
C2H5NO2, 75 Da, H f = −528 kJ/mol
Cysteine, Cys, C
C3H7NO2S, 121 Da, H f = −534 kJ/mol
Aspartic acid, Asp, D
C4H7NO4, 133 Da, H f = −973 kJ/mol
Asparagine, Asn, N
C4H8N2O3, 132 Da, H f = −789 kJ/mol
Glutamic acid, Glu, E
C5H9NO4, 147 Da, H f = −1097 kJ/mol
Glutamine, Gln, Q
C5H10N2O3, 146 Da, H f = −826 kJ/mol
Arginine, Arg, R
C6H14N4O2, 174 Da, H f = −623 kJ/mol
Histidine, His, H
C6H9N3O2, 155 Da, H f = −466 kJ/mol
The DSC, TGA and QMS curves share one essential feature: in DSC there is a peak at a certain temperature Tpeak for each amino acid, accompanied at the same temperature by a drop in TGA and by QMS peaks. The simple fact that the DSC and QMS signals coincide in bell-shaped peaks with the TGA drop proves that essentially one simple decomposition process takes place; there is not a spectrum of decomposition temperatures, as there would be for proteins. Qualitatively this proves that the process observed is neither melting nor sublimation (as claimed in the literature [5]). The observed process is decomposition; none of the eight amino acids exists in liquid form. The optical observations, obtained not under vacuum but with some air access, are informative nevertheless. Solid/liquid transitions, with the liquid boiling heavily, coincide with the peak temperatures for Gly, Cys, Gln, Glu, Arg and His. Only for Asn and Asp are there solid/solid transformations at the peak temperatures. For Asn there is liquefaction at 280 °C; Asp stays solid up to 320 °C.
Calibration and quantitative mass spectrometry
The DSC signals have the dimension of specific power [W/g]; the QMS signals are ion currents of the order of pA. Integration over time or, equivalently, temperature gives the peak areas, which are specific energies [J/g] and ionic charges of the order of pC. Reduction from specimen weights, typically 10 mg, to mol values is trivial. In absolute terms the ion currents and ionic charges are meaningless, because they are equipment dependent; calibration is needed. Only one reliable calibration substance was available, sodium bicarbonate (NaHCO3) = X1. It decomposes upon heating, 2 NaHCO3 → Na2CO3 + CO2 + H2O. The \(\frac{1}{2}\) mol CO2/mol NaHCO3 and \(\frac{1}{2}\) mol H2O/mol NaHCO3 lines were quantitatively repeatable over months, in terms of pC/mol CO2 and pC/mol H2O. They served to identify 1 mol CO2/mol Cys and \(\frac{1}{2}\) mol H2O/mol Q beyond any doubt. In the absence of a primary NH3 calibration we had to resort to secondary substances, glutamine, aspartic acid and asparagine, which retained stable NH3 and H2O signals over months. The \(\frac{1}{2}\) mol NH3/mol Q can only come from the glutamine dimer, which implies that the H2O signal from glutamine also corresponds to \(\frac{1}{2}\) mol H2O/mol Q. For the other two, the correspondence between 1 mol H2O and 1 mol NH3 is convincing. Thus we had four consistent reference points: \(\frac{1}{2}\) mol H2O and \(\frac{1}{2}\) mol CO2 from NaHCO3, \(\frac{1}{2}\) mol NH3 from glutamine, and 1 mol NH3 from asparagine. For each amino acid sample, the ion current is measured individually in each mass channel between 1 and 100 Da in 1 Da intervals. Integration over time (and temperature) gives for each mass the ion charge per mol AA [C/mol AA], and with the four calibrations the final values of mol/mol AA. In Figs. 9, 10 and 11 the ion charges are plotted on the left, the mol amounts on the right. In the graph for 17 Da (Fig. 9) a 20 μC/mol signal appeared for the reference substance X1. Since this definitely cannot contain NH3, a systematic error of 20 μC/mol must be present, though the statistical errors are smaller.
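The conversion from integrated ion charge to mol of gas per mol of amino acid can be illustrated with the short R sketch below; the q_... inputs are placeholders standing for measured peak charges (in C/mol), not actual data.

# Calibration factors (charge per mol of gas) from the reference measurements described above:
cal <- c(H2O = q_H2O_NaHCO3 / 0.5,      # NaHCO3 releases 1/2 mol H2O per mol
         CO2 = q_CO2_NaHCO3 / 0.5,      # and 1/2 mol CO2 per mol
         NH3 = q_NH3_Gln / 0.5)         # glutamine releases 1/2 mol NH3 per mol (dimer formation)
mol_per_molAA <- function(q_sample, gas) q_sample / cal[gas]
mol_per_molAA(q_18Da_sample, "H2O")     # e.g. mol H2O released per mol of a given amino acid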
QMS data for the 17 Da channel. Signals in the 17 Da, the NH3 channel, for each of the amino acids. Ionic charges in the peaks on the left, mol NH3/mol amino acid on the right. The clustering of G, C, D, Q around \(\frac {1}{2}\) mol NH3 per mol AA and of N and R around 1 mol NH3 per mol AA is striking
QMS data for the 18 Da channel. Signals in the 18 Da, the H2O channel, for each of the amino acids. Ionic charges in the peaks on the left, mol H2O/mol amino acid on the right. The clustering of C and Q around the \(\frac {1}{2}\) mol H2O level, of N, E, R, H around the 1 mol H2O level, and the 2 mol point for D are striking
QMS data for the 44 Da channel. Signals in the 44 Da, the CO2 channel, for each of the amino acids. Ionic charges in the peaks on the left, mol CO2/mol amino acid on the right. Only C produces 1 mol CO2, the level of the others is negligible
The absolute ion currents of Figs. 9, 10 and 11 are equipment dependent and not significant, but the relative values are encouraging. One mol NH3 produces 12% less and CO2 54% more ions than one mol H2O. Indeed the ionization cross sections of NH3, H2O and CO2 are reported to be in that order [6].
Figures 9, 10, 11 and 12 and Table 1 summarize the experimental data: with the exception of cysteine, thermal decomposition results in three gases, mainly H2O, less NH3 and hardly any CO2. The weight of these three gases adds up to the weight loss registered by TGA; therefore no other gases evolve in appreciable amount − they are not seen in QMS either. The proximity of the mol fractions to integer or half-integer values indicates simple decomposition chains. The process causing the peaks cannot be melting (because of the mass loss), nor sublimation (because of the QMS signals). One concludes that these amino acids do not exist in liquid or gaseous form. They decompose endothermically, with heats of decomposition between −72 and −151 kJ/mol, at well-defined temperatures between 185 °C and 280 °C.
Comparison of mass balances registered by TGA and QMS experiments. The difference between the mass loss registered by TGA (ΔM) and the volatile mass found as NH3, H2O, CO2 and H2S (Mgas), i.e. ΔM − Mgas, remains below 9 Da. This is confirmation that no other gases are produced
Data analysis, amino acid by amino acid
These amino acids consist of different side chains attached to the C α of the same backbone, NH2−Cα−(C∗OOH), but their decomposition chains are quite different. The pyrolytic process is controlled by three balance laws: In terms of Da the masses must add up, chemically the atomic species must balance, and the enthalpy of formation must equal the enthalpies of formation of the products plus the endothermic heat of reaction. The amounts of volatile products are experimental values (TGA and QMS). For the residues only the mass is experimental, their composition is inferred. In this section we analyze possible pathways. Although the choices, restricted by compositional, mass and enthalpy considerations, are convincing, they cannot be unique beyond doubt. Alternatives to our proposals, but indistinguishable by us, are possible. Analyses of the decomposition chains are, therefore, tentative or speculative. Nevertheless, they are less speculative than those of Rodante et al. [7], who had only TGA and DSC, but no QMS at their disposal. What Acree and Chickos [5] call "sublimation enthalpies" agree more or less with our decomposition enthalpies. One concludes that they must refer to decomposition, not merely sublimation without composition change.
We made use of the enthalpy values listed for standard conditions [8], without minor corrections for specific heats and entropies up to the actual reaction temperatures. Moreover, hydrogen gas escapes our attention: it is too light (2 Da) to be registered in QMS and TGA, nor does it appear in the enthalpy sum, its heat of formation being zero by definition. With the exception of hydrogen, the mass balance, controlled by TGA, confirms that beyond the residue and the three gases nothing else is formed. The real constraint is the enthalpy balance, and for the enthalpy balance the production of water is necessary. The formation enthalpies of the 20 amino acids CaHbNcOdSe have the least-squares fit Hf(CaHbNcOdSe) = 30.3a − 37.8b + 16.5c − 182.4d − 71.3e [kJ/mol]. The oxygens counterbalance the others with −182 kJ/mol. The obvious way of efficiently transferring enthalpy from the reactants to the products is the formation of water, with Hf(H2O) = −242 kJ/mol.
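The least-squares expression above can be wrapped into a small helper for rough checks; the coefficients are those quoted in the text, and the glycine example shows that the fit is only approximate.

hf_fit <- function(a, b, c, d, e = 0) 30.3*a - 37.8*b + 16.5*c - 182.4*d - 71.3*e   # kJ/mol
hf_fit(2, 5, 1, 2)      # glycine C2H5NO2: about -477 kJ/mol, versus the tabulated -528 kJ/mol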
Detailed analysis for each amino acid is helped by preliminary reference to a few reactions that are possible in principle. CO2 production in Cys is obviously a special case. In principle one expects the N-termini to be stable, making deamination to produce NH3 unlikely. Nitrogen in the side chains is another matter; indeed the NH3-producing Asn and Arg have nitrogen in their side chains. The predominance of H2O production indicates instability of the C-terminus beyond the C ∗ atom, where dehydration can occur by n-oligomerization, which yields (n−1)/n mol H2O/mol AA, from dimerization for n=2 to 1 mol H2O/mol for n → ∞ in polymerization. A special case of dimerization is external cyclization in the diketopiperazine reaction, which yields 1 mol H2O/mol AA. These reactions involve joining N- and C-termini in a dehydration reaction. For long side chains, internal cyclization, where the end of the side chain connects to the C-terminus, can also be envisaged. Integer and half-integer mol values restrict the choice for the residues, but not unequivocally.
All DSC peaks are endothermic, their areas are given negative signs. With this convention endothermic evaporation and exothermic production of water are written as
$$\begin{array}{@{}rcl@{}} {\mathrm{H_{2}O(l)}} &\quad \longrightarrow &\quad {\mathrm{H_{2}O(g)}}\\ \mathbf{-285.5} &&\quad \mathbf{-241.8} \quad \mathbf{-44}~\boldsymbol{kJ/mol} \end{array} $$
$$\begin{array}{ccrcl} {2} (\mathrm{H}_{2})& + & \mathrm{O}_{2} & \longrightarrow & 2 (\mathrm{H}_{2}\mathrm{O})(\mathrm{g})\\ \mathbf{0} & &\mathbf{0}& & 2 \mathbf{(-241.8)} + 2 \mathbf{(241.8)}~\boldsymbol{kJ/mol} \end{array} $$
Glycine, Gly, G, C2H5NO2, 75 Da, Hf =−528 kJ/mol.
Simple endothermic peak at 250 °C, Hpeak =−72.1 kJ/mol.
The QMS signal of \(\frac {3}{2}\) mol H2O/mol Gly plus \(\frac {1}{2}\) mol NH3/mol Gly is beyond doubt, it is confirmed by the mass loss of 35 Da /mol Gly. This leaves only 10% of the original hydrogen for the residue. The triple and double bonds in carbon rich C4HNO, C3HNO and C2HNO preclude them enthalpy wise and make deposition of carbon likely,
$$\begin{array}{lclcccrcl} {4\,(\mathrm{C}_{2}\mathrm{H}_{5}\text{NO}_{2})} & \longrightarrow & {6\,(\mathrm{H}_{2}\mathrm{O})} & + & 2\,{(\text{NH}_{3})} & + & {6\,\mathrm{C}} & + & 2\,{(\text{CHNO})} \\ \mathbf{-2112} & & \mathbf{-1452} & & \mathbf{-92} & & \mathbf{0} & & 2x\;\;\mathbf{(-288)}~\boldsymbol{kJ/mol}, \end{array}$$
leaving -280 kJ/mol = x for the moiety CHNO, which is the composition of the peptide bond. The database ChemSpider [9] lists two symmetric molecules consisting entirely of peptide bonds, 1,3-Diazetine-2,4-dione, C2H2N2O2, chemspider 11593418, 86 Da, (Fig. 13a) or its isomer 1,2-Diazetine-3,4-dione, C2H2N2O2, chemspider 11383421, 86 Da, Fig. 13b. The scarcity of hydrogen is such that not even the smallest lactam, 2-Aziridinone, C2H3NO, 57 Da, chemspider 10574050, bp 57 °C (Fig. 13c) can serve as residue.
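The bookkeeping behind the −280 kJ/mol figure can be verified with the short balance below, using the values and sign convention of the text (H2O taken as the gas-phase value of −242 kJ/mol, and the measured endothermic heat of −72.1 kJ/mol Gly).

left  <- 4 * (-528)                              # 4 mol glycine
known <- 6 * (-242) + 2 * (-46) + 4 * (-72.1)    # 6 H2O(g) + 2 NH3 + measured decomposition heat
left - known                                     # about -280 kJ left for the 2 (CHNO) units, i.e. one C2H2N2O2 residue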
Interpretation of Glycine data. a, Residue of Gly, C2H2N2O2, 1,3-Diazetine-2,4,dione, 86 Da. b, Isomer of Fig. 13a, 1,2-Diazetine-3,4-dione, 86 Da. c, 2-Aziridinone, C2H3NO, 57 Da. d, Intermediate dimer, glygly, C4H8N2O3, 132 Da
The simplest pathway seems the formation of linear Glycylglycine, chemspider 10690 (Fig. 13d), m.p. 255 °C (which is T peak!) C4H8N2O3, H f (s) = − 748 kJ/mol [8] from which the central peptide bond −(C =O)-NH − is detached by cutting off the NH2-C α-H2 − group on one, and the C α-H2-C ∗OOH group on the other side. The former makes NH3 plus C, the latter makes 2 C plus 2 H2O. This process is specific for the glycine dimer, in which the C α atoms are not protected by proper sidechains, they are just −Cα-H2 − units. This pathway to shear peptide bonds is of interest in the context of possible peptide nucleic acid (PNA) synthesis [10] via N-2-aminoethylglycine (AEG), C4H10N2O2, chemspider 379422, which is deoxidized diglycine, 2Gly→O2+AEG.
Cysteine, Cys, C, C3H7NO2S: 121 Da, Hf =−534 kJ/mol.
Tpeak = 221 °C with a mass loss of 98 Da, Hpeak=−96 kJ/mol.
The clear 1 mol CO2 signal leaves no oxygen to form H2O, therefore the spurious 18 Da line must stem from a systematic error. There is also \(\frac {1}{2}\) mol NH3/mol Cys. For H2S there is indeed a signal at 34 Da. It corresponds to 1 mol, because the ionization cross sections of H2S and H2O are nearly identical, so that the calibration of Fig. 10 applies. The mass loss of 44+34+8.5=70% of 121 Da agrees with TGA. Chemical analysis found no sulfur in the residue. No possibility for forming disulfide bridges between molecules is left. Neither COS nor CS2 was found. The total reaction is
$$\begin{array}{lclclclclll} 2\,\text{Cys} = \\ \mathrm{C}_{6}\mathrm{H}_{14}\mathrm{N}_{2}\mathrm{O}_{4}\mathrm{S}_{2} & \longrightarrow & 2\,\text{CO}_{2} & + & 2\,\mathrm{H}_{2}\mathrm{S} & + & \text{NH}_{3} & + & \mathrm{C}_{4}\mathrm{H}_{7}\mathrm{N}, &\\ \ \ \ 2\,\mathbf{(-534)} & &\! \mathbf{-788} & & \mathbf{-40} & & \mathbf{-46} & & \mathbf{-47} &\!\!\! + & 2\,\mathbf{(-96)}~\boldsymbol{kJ/mol}. \end{array}$$
On the left −1068 kJ, on the right −1113 kJ. A pathway to the formation of C4H7N might be ejection of the carboxyl group −C∗OOH and the −SH group from Cys; the remaining chain NH2-C α-C ∗ is too short for internal, but suitable for external, cyclization. Two of these form the asymmetric 5-ring (3-pyrrolidinamine, chemspider 144134), Fig. 14a, from which the −NH2 is cut off. Indeed the \(\frac {1}{2}\) NH3 ejected confirms such dimerization. That leaves the molecule C4H7N: 2,5-Dihydro-1H-pyrrole, chemspider 13870958, b.p. 90 °C, 69 Da, H f (s) =−46.6 kJ/mol (Fig. 14b), or another pyrroline, with the double bond elsewhere in the ring. Indeed there is heavy boiling beyond the peak. In view of the richness in hydrogen, several small hydrocarbon lines are not surprising.
Interpretation of Cysteine data. a, Intermediate compound: 3-pyrrolidinamine, chemspider 144134, 86 Da. b, Residue of Cys, C4H7N, 2,5-Dihydro-1H-pyrrole, chemspider 13870958, 69 Da
Aspartic acid, Asp, D, C4H7NO4:133 Da, Hf =−973 kJ/mol.
DSC shows two distinct peaks, at 230 °C and at 250 °C, in each of which 1 mol H2O/mol Asp is ejected. The endothermic heats are -64 and -61 kJ/mol, respectively. The substance stays a powder up to 294 °C, i.e. solid/solid transformation in the peak. The reaction
$$\begin{array}{llllllcl} \mathrm{C}_{4}\mathrm{H}_{7}\text{NO}_{4} & \longrightarrow & \mathrm{H}_{2}\mathrm{O} & + & \mathrm{H}_{2}\mathrm{O} & + & \mathrm{C}_{4}\mathrm{H}_{3}\text{NO}_{2}\\ \mathbf{-973} & & \mathbf{-242} & &\mathbf{-242} & & y &\!\! \mathbf{(-125)}~\boldsymbol{kJ/mol}, \end{array} $$
with calculated y=−364 kJ/mol, which is reasonable for the formation enthalpy of the polysuccinimide unit (PSI). The molecular weight of C4H3NO2 is 97 Da.
The compound (C4H3NO2) n is polysuccinimide. The two peaks prove that the reaction occurs in two steps, in the first at 230 °C the condensation reaction produces polyaspartic acid, n Asp→H2O+(Asp) n , in the second at 250 °C the poly-Asp degrades to polysuccinimide (PSI) by ejection of another 1 mol H2O/mol Asp. Such a reaction was reported by Schiff [11]. The molecule drawn in Fig. 15 is β-poly-Asp, there is an isomer, α-poly-Asp, where the next C in the ring forms a bond to its neighbour. We have no possibility to decide between the two.
Interpretation of Aspartate data. The pathway from aspartic acid (D) to polysuccinimide (PSI). Compared with succinimide, the N-C bond in polysuccinimide economizes two hydrogen atoms
Asparagine, Asn, N, C4H8N2O3: 132 Da, Hf =−789 kJ/mol.
In the broad peak at 232 °C, 1 mol H2O /mol Asn and 1 mol NH3 /mol Asn are ejected. Hpeak=−122 kJ/mol. The product stays a white powder up to 265 °C, i.e. there is a solid/solid transformation in the reaction
$$\begin{array}{llllllcl} \mathrm{C}_{4}\mathrm{H}_{8}\mathrm{N}_{2}\mathrm{O}_{3} & \longrightarrow & \mathrm{H}_{2}\mathrm{O} & + & \text{NH}_{3} & + & \mathrm{C}_{4}\mathrm{H}_{3}\text{NO}_{2} & \\ \mathbf{-789} & & \mathbf{-242} & & \mathbf{-46} & & x &\!\! \mathbf{(-122)}~\boldsymbol{kJ/mol}, \end{array}$$
with calculated x=−379 kJ/mol. In the Asp decomposition, H f (C4H3NO2) was calculated as y=−364 kJ/mol. The two values agree, although, because of their histories, the two PSI are not identical. If Asn followed the example of Asp, it would eject 1 mol H2O/mol Asn in the condensation reaction n Asn→H2O+(Asn) n , poly-N, followed by degradation of poly-N to polysuccinimide (PSI) by ejection of 1 mol NH3/mol Asn. If, however, the H2O of the condensation reaction is not ejected but retained, it can replace the −NH2 in poly-N by −OH. According to Asn → NH3 + poly-D, this amounts to the formation of polyaspartic acid from asparagine by ejection of NH3. The poly-D then degrades to polysuccinimide (PSI) by ejection of 1 mol H2O/mol Asn. Apparently both alternatives shown in Fig. 16 occur, and there is one broad peak containing both NH3 and H2O.
Interpretation of Asparagine data. Two pathways from asparagine (N) to polysuccinimide (PSI): either through polyasparagine (poly-N) or polyaspartic acid (poly-D). Compared with succinimide, the N-C bond in polysuccinimide economizes two hydrogen atoms
Though the formulae for PSI formed from Asp and from Asn are the same, (C4H3NO2) n , these two residues need not be identical. For kinetic reasons the oligomerization or polymerization might have proceeded to different lengths in poly-Asp and poly-Asn; therefore the degraded products PSI might have different lengths, with different stabilities and melting points. Moreover, the telomers are different, −OH for PSI from Asp and −NH2 for PSI from Asn. Indeed PSI from Asp remains a white powder up to 289 °C, while PSI from Asn starts melting at 289 °C.
Glutamic acid, Glu, E, C5H9NO4:147 Da, H f =−1097 kJ/mol; Tpeak=200°C, Hpeak=−88 kJ/mol.
At 200 °C, 1 mol H2O /mol Glu is seen in QMS, the DSC area is −121 kJ/mol, mass loss in the peak is 12% (17 Da). The dehydration of Glu has been known for a long time [12].
$$\begin{array}{llllcl} \mathrm{C}_{5}\mathrm{H}_{9}\text{NO}_{4} & \longrightarrow & \mathrm{H}_{2}\mathrm{O} & + & \mathrm{C}_{5}\mathrm{H}_{7}\text{NO}_{3}\mathrm{(l)} &\\ \mathbf{-1097} & & \mathbf{-242} & & x & \mathbf{(-121)}~\boldsymbol{kJ/mol}, \end{array}$$
with calculated x=−734 kJ/mol for Hf(C5H7NO3), pyroglutamic acid, chemspider 485, 129 Da, T m=184 °C, b.p. = 433 °C (Fig. 17a). This lactam is biologically important, but its enthalpy of formation is apparently not known. Known is the H f (s) =−459 kJ/mol and H f (g) =−375 kJ/mol for C4H5NO2, succinimide, 99 Da, the five ring with O= and =O as wings (the structure is like pyroglutamic acid, but with the carboxyl group −COOH replaced by =O, shown in Fig. 17b). The additional O should add about −200 kJ/mol, which makes the −734 kJ/mol for pyroglutamic acid plausible. The TGA weight loss beyond the peak is evaporation. Pyroglutamic acid is formed by inner cyclization of E: after the −OH hanging on C δ is ejected, the C δ joins the −NH2 hanging on C α. This was suggested by Mosqueira et al. [13], ours is the first experimental evidence for this process. Since QMS does not show any CO2, the reaction to C4H7NO, the lactam pyrrolidone (Fig. 17c), 85 Da, Tm=25 °C, b.p. = 245 °C, H f (l) =−286 kJ/mol, yellow liquid, can be ruled out, although cutting off the CO2 is sterically tempting.
Interpretation of Glutamate data. a, The final residue of Glu, pyroglutamic acid, C5H7NO3, 129 Da. b, Succinimide, C4H5NO2, 99 Da. c, Pyrrolidone, C4H7NO, 85 Da
Glutamine, Gln, Q, C5H10N2O3: 146 Da, Hf =−826 kJ/mol.
The precise \(\frac {1}{2}\) mol fractions of H2O and NH3 in the peak at Tpeak=185 °C, Hpeak=−77 kJ/mol, indicate that a dimer serves as intermediate step, γ-glutamylglutamine (Fig. 18a), C10H17N3O6, chemspider 133013, b.p. 596 °C:
$$\mathrm{2\ Q\ = {C_{10}H_{20}N_{4}O_{6}\ \longrightarrow [NH_{3} + C_{10}H_{17}N_{3}O_{6}]}.} $$
Interpretation of Glutamine data. a, Intermediate step, gamma-glutamylglutamine, C10H17N3O6, 275 Da. b, The residue of Gln: 5-Oxo-L-prolyl-L-glutamine, C10H15N3O5, 257 Da
After further ejection of H2O the total reaction is
$$\mathrm{2\ Q = 2\,(C_{5}H_{10}N_{2}O_{3}) \longrightarrow NH_{3} + H_{2}O + C_{10}H_{15}N_{3}O_{5}.} $$
The database ChemSpider [9] lists for the residue a suitable molecule, 9185807, 5-Oxo-L-prolyl-L-glutamine (Fig. 18b), C10H15N3O5, 257 Da, b.p. 817 °C, Hvap=129 kJ/mol. Above the peak at 185 °C optical observations show indeed a nonboiling liquid, agreeing with the high boiling point quoted.
Arginine, Arg, R, C6H14N4O2: 174 Da, Hf=−623 kJ/mol.
A small peak without mass loss appears at 220 °C (−14 kJ/mol), and a main peak at 230 °C (−52 kJ/mol) produces 1 mol NH3 plus 1 mol H2O in QMS, confirmed by the weight loss of 20% of 174 Da in TGA. The precursor peak without mass loss at 220 °C, −14 kJ/mol, probably comes from a rearrangement in the guanidine star. In the large peak a double internal cyclization occurs: the loss of the amino group −NH2 in the backbone, and internal cyclization joining the N next to the Cδ in the side chain to Cα,
$$\mathrm{C_{6}H_{14}N_{4}O_{2}\ \longrightarrow NH_{3} + H_{2}O + C_{6}H_{11}N_{3}O_{2}.} $$
forms an intermediate, 1-Carbamimidoylproline, 157 Da, chemspider 478133 (Fig. 19a). It is called "...proline" because the ring is spanned between an N and Cα, though the N is not from the backbone. By losing the −OH and through a second inner cyclization joining the =NH or the −NH2 to C∗, one or the other tautomer of the final residue is formed. The total reaction is C6H14N4O2 → NH3 + H2O + C6H9N3O, drawn in Fig. 19b, not quoted in the databases [9, 14].
Interpretation of Arginine data. a, 1-Carbamimidoylproline, 157 Da, representing the intermediate step after ejection of NH3 from Arg. b, The final residue of Arg, C6H9N3O, 139 Da, "creatine-proline". The creatine ring on top joins the proline ring
This final residue is remarkable. It contains the proline ring, the guanidine star and a peptide bond in the ring of creatinine, which is the 5-ring with the =O and =OH double bonds. Creatinine, Hf=−240 kJ/mol, m.p. 300 °C, C4H7N3O, chemspider 568, has several tautomeric forms. The end product in question might contain either of those rings. We have no way to decide between the alternatives, but a double ring structure seems likely.
Histidine, His, H, C6H9N3O2: 155 Da, Hf=−466 kJ/mol.
The QMS results are clear, His ejects 1 mol H2O in the reaction
$$\mathrm{His = C_{6}H_{9}N_{3}O_{2} \ \longrightarrow 1\,H_{2}O + C_{6}H_{7}N_{3}O}. $$
The observed 1 mol H2O/mol His, confirmed by the weight loss of 13% of 155 Da, could stem from the condensation reaction of polymerization, but the volatility seen optically contradicts this option. Inner cyclization seems likely. If the C∗ of the backbone joins the C of the imidazole ring, with =O and −NH2 attached outside, the 5-ring formed joins the 5-ring of the imidazole. The proposed structure is shown in Fig. 20.
Interpretation of Histidine data. Final residue of His, C6H7N3O, 137 Da, consisting of two 5-rings: 2-amino-2,4-cyclopentadien-1-one (C5H5NO, chemspider 28719770) and imidazole
The MolPort database [14] quotes this structure, but with the pyrazole ring (where the two N are nearest neighbours) instead of the imidazole ring (where the two N are next-nearest neighbours): 5-amino-4H,5H,6H-pyrrolo[1,2-b]pyrazol-4-one, molport 022-469-240. Parting the nitrogens is energetically favorable: for pyrazole Hf(s) = +105 kJ/mol, Hf(g) = +179 kJ/mol; for imidazole Hf(s) = +49 kJ/mol, Hf(g) = +132.9 kJ/mol [8]. Moreover, the original His has an imidazole and not a pyrazole ring, and so does the residue.
Entropy of decomposition
In the tables of Domalski [15], Chickos and Acree [16] and Acree and Chickos [17], at temperatures coinciding with our peak temperatures, "heats of sublimation" of the order of our endothermic peak areas are reported. Our QMS signals prove that chemical decomposition is involved, but that should have been obvious from the DSC data alone. The average of the entropies of transformation, Speak=Hpeak/Tpeak, is 215 J/(K mol), well above the usual entropies of melting (22 J/(K mol) for H2O, 28 for NaCl, 36 for C6H6) and higher than typical entropies of evaporation (41 J/(K mol) for H2O, 29 for CS2, 23 for CO2, 21 for NH3). The endothermic heats in the peaks are therefore neither enthalpies of fusion nor enthalpies of sublimation; they are heats of reaction accompanied by phase changes. There is transformation and decomposition, but no reversible melting. Amino acids are stable in solid form, but not as liquids or gases.
Peptide bond formation
Five of the eight amino acids have residues containing peptide bonds, −C(=O)−NH−; only Asp and Asn leave polysuccinimide (PSI), and Cys leaves cyclic pyrrolines. The preponderance of water in thermal decomposition is not surprising. In natural protein formation, each participating amino acid suffers damage. In the condensation reaction, where the N-terminus of one molecule reacts with the C-terminus of its neighbour, the planar peptide bond −Cα−CO−N−Cα− is formed. The N-atoms on, and the keto-bound O-atoms off, the backbones retain their positions. H2O is ejected, but neither NH3 nor CO2 is produced in protein formation. Thermal decomposition of amino acids is analogous. In protein formation, the endothermic heat is provided by ATP; in amino acid decomposition it is thermal energy. Figure 21 summarizes the results.
Overview of the residues with respect to H2O or NH3 contents. All residues are obtained by ejection of 0, \(\frac {1}{2}\), 1, \(1\frac {1}{2}\) or 2 mols of H2O or NH3, and all residues contain either 0, \(\frac {1}{2}\) or 1 mol NH3 or H2O, placing them on two axes. Most of them contain peptide bonds. The polysuccinimide of D and N is one exception; cysteine, for lack of oxygen, is the other
Peak areas
Quantitatively, the parallel between protein formation and pyrolysis is confirmed on the enthalpy level. In the formation of a dipeptide, X+Y → H2O+(X−Y), the difference between the enthalpies of the reactants and the products goes into the formation of the peptide bond (PB): Hf(X)+Hf(Y) = −242 kJ + Hf(X−Y) + HPB. With the tabulated values [8, 15] for Hf(X), Hf(Y) and Hf(X−Y) one calculates HPB=−67 kJ in glycylglycine, −70 kJ in alanylglycine, −43 kJ in serylserine, −78 kJ in glycylvaline, −65 kJ in leucylglycine, −91/2 kJ in triglycylglycine, −86/2 kJ in leucylglycylglycine, and −58 kJ in glycylphenylalanine. The average value is −59 ± 13 kJ per peptide bond. The narrow standard deviation indicates that the enthalpy of forming a peptide bond is insensitive to its environment; therefore the endothermic values of oligomerization or polymerization should be close to this. One concludes that the formation of a peptide bond in a linear dimer is endothermic with an enthalpy of 59 ± 13 kJ. It is tempting to compare this with the areas of the DSC peaks, the observed endothermic heats of the decomposition reactions. The average over the eight amino acids is −105 ± 27 kJ/mol. One concludes that essentially the endothermic heat of decomposition, the peak area, goes into peptide bond formation.
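The quoted average is easy to re-check numerically; the short sketch below (Python) simply averages the per-dipeptide values listed above, taking the two "/2" entries literally as halved values, which is our reading of the notation.

# Average peptide-bond enthalpy from the per-dipeptide values quoted above (kJ).
import statistics

h_pb = {
    "glycylglycine": -67, "alanylglycine": -70, "serylserine": -43,
    "glycylvaline": -78, "leucylglycine": -65, "triglycylglycine": -91 / 2,
    "leucylglycylglycine": -86 / 2, "glycylphenylalanine": -58,
}

vals = list(h_pb.values())
mean = statistics.mean(vals)       # about -58.7
spread = statistics.pstdev(vals)   # about 12.6
print(f"H_PB = {mean:.0f} +/- {spread:.0f} kJ per peptide bond")  # -59 +/- 13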
Production of NH3
In cases where the N-terminus, untouched by the condensation, remains attached to a cyclic product, it could be cut off as NH3, contributing up to \(\frac {1}{2}\) mol NH3/mol AA. Remarkable is the absence of methane (CH4, 16 Da), hydrogen cyanide (HCN, 27 Da) and formamide (CH3NO, 45 Da), all in mass channels where we would have seen them. These species, suspected in the prebiotic synthesis of amino acids, do not appear in their decomposition. Although at most three molecules are involved, two gases and one monomolecular residue, identification of the structure of the latter is not unequivocal; other possibilities remain, more or less probable than our choices. Clearly, without QMS, data from DSC and TGA could not possibly suffice to identify decomposition chains.
Water, cyclic compounds and peptide bonds
The novel quantitative results emphasize the importance of water and of cyclic condensates containing peptide bonds. All postulated residues are cyclic compounds; five of the eight contain peptide bonds. The residues are stable at temperatures >180 °C and beyond the respective peak temperatures. These facts put constraints on the hypothetical origin, state and stability of amino acids in the range between 200 °C and 300 °C in the absence and presence of water, but the literature is sparse in that respect. The history of diketopiperazine and derivatives is extensively reviewed by Prasad [18] back to 1888 and the synthesis of cyclo-Gly-Gly by Curtius and Goebel [19], but the relevance of these compounds as a class of natural products was not emphasized earlier than 1922 [20–22]. Today, CDPs are recognized as "transkingdom signaling molecules" [23], indicating highly conserved mechanisms from the earliest stages of life on earth. The potential of CDPs and related substances as novel drugs for biomedical applications is comprehensively reviewed by Borthwick [24], though he does not cover aqueous regimes above 150 °C. Thermal formation of cyclo-Leu-Leu from the Leu-Leu dipeptide in the solid state was reported to occur at 177 °C [25], actually several months after the release of QMS data for the amino acids Gly, Cys, Asp, Asn, Glu, Gln, Arg, and His in the solid state [26]. There might be differences in thermal cyclization by dehydration, depending on whether the origin is an amino acid crystal or a dipeptide. Controlled biosynthesis of CDPs by highly conserved enzymes is found in all domains of life [27]. However, the biochemistry of cyclo-dipeptides and related enzymatic pathways is a comparatively unexplored interdisciplinary field, usually based on genome analyses. Just recently, their presence in extremophilic organisms has been highlighted in more detail [28, 29]. Previously reported evidence, along with the first conclusive demonstration of thermal cyclization of Gly, Cys, Asp, Asn, Glu, Gln, Arg, and His by QMS, DSC and TGA as reported here, emphasizes that cyclic dipeptides or cyclic compounds could represent thermally more stable precursors of prebiotic life.
Our comparative analysis allowed us to identify the eight of the twenty standard amino acids for which the thermochemical equations unequivocally agree with stoichiometric release of NH3, H2O and CO2. The predominance of the release of H2O during decomposition, instead of melting, indicates a common principle of condensation and, depending on the individual properties of the respective intermediate products, subsequent decomposition of the condensation products. Comparative data for all 20 standard amino acids, obtained by complementing DSC and TGA with quantitative mass spectrometry (QMS), have never been reported. For the eight with data closure we can say: amino acids decompose thermally; they do not sublimate, nor do they melt. Only three gases are formed, mostly H2O, less NH3 and hardly any CO2. Cys forms H2S, but not CS2. In all amino acids investigated, Gly, Cys, Asn, Asp, Gln, Glu, Arg, His, the liquid or solid residues are lactams and heterocyclic compounds with 5- or 6-membered non- (or only partially) aromatic rings containing one or two nitrogen atoms (pyrrolidines, piperidines, pyrazolidines, piperazines), most of them with peptide bonds present.
In summary, this work addresses an important question of amino acid thermal stability. Several processes may occur upon heating: chemical decomposition, or sublimation/evaporation without decomposition. The aim of this work was to determine these processes accurately. For 8 out of the 20 standard amino acids, we demonstrated that they have a well-defined temperature of decomposition. The simultaneous detection of products of 17 Da, 18 Da, and 44 Da in the gas phase is the proof of decomposition; concise mass and enthalpy balances do not leave any room for speculation. Analysis and interpretation rule out the existence of any hydrocarbon byproducts that could not have been validated. The analysis for glycine, cysteine, aspartic acid, asparagine, glutamic acid, glutamine, arginine and histidine is beyond any doubt. At a heating rate of 5 K/min, neither melting nor sublimation takes place. At least 8 of 20 standard amino acids do not exist in liquid form.
Nelson DL, Cox MM. Lehninger Principles of Biochemistry. New York: Macmillan; 2017.
Boldyreva E. Crystalline amino acids In: Boeyens JCA, Ogilvie JF, editors. Models, Mysteries and Magic of Molecules. Dordrecht: Springer: 2013. p. 167–92.
Barret G. Chemistry and Biochemistry of the Amino Acids. Heidelberg: Springer; 1985.
Follmann H, Brownson C. Darwin's warm little pond revisited: from molecules to the origin of life. Naturwissenschaften. 2009; 96:1265–92.
Acree W, Chickos JS. Phase transition enthalpy measurements of organic and organometallic compounds. sublimation, vaporization and fusion enthalpies from 1880 to 2010. J Phys Chem Ref Data. 2010; 39:043101.
Electron-Impact Cross Section Database. Gaithersburg: NIST; 2017. http://physics.nist.gov/PhysRefData/ASD/ionEnergy.html. Accessed 18 Jan 2018.
Rodante F, Marrosu G, Catalani G. Thermal-analysis of some alpha-amino-acids with similar structures. Thermochim Acta. 1992; 194:197–213.
Haynes W. Standard thermodynamic properties of chemical substances In: Lewin RA, editor. CRC Handbook of Chemistry and Physics. Boca Raton: CRC Press, Taylor and Francis Group: 2013. p. 5.
ChemSpider Free Chemical Structure Database. Cambridge: Royal Society of Chemistry; 2017. http://www.chemspider.com/. Accessed 18 Jan 2018.
Banack SA, Metcalf JS, Jiang LY, Craighead D, Ilag LL, Cox PA. Cyanobacteria produce n-(2-aminoethyl)glycine, a backbone for peptide nucleic acids which may have been the first genetic molecules for life on earth. PLoS ONE. 2012; 7(11):49043.
Schiff H. Über polyaspartsäuren. Berichte der deutschen chemischen Gesellschaft. 1897; 30:2449–59.
Haitinger L. Vorläufige mittheilung über glutaminsäure und pyrrol. Monatshefte für Chemie und verwandte Teile anderer Wissenschaften. 1882; 3:228–9.
Mosqueira FG, Ramos-Bernal S, Negron-Mendoza A. Prebiotic thermal polymerization of crystals of amino acids via the diketopiperazine reaction. Biosystems. 2008; 91:195–200.
MolPort Chemical Compound Database. Riga: MolPort; 2017. https://www.molport.com/. Accessed 18 Jan 2018.
Domalski ES. Selected values of heats of combustion and heats of formation of organic compounds containing the elements c, h, n, o, p, and s. J Phys Chem Ref Data. 1972; 1:221–77.
Chickos JS, Acree WE. Enthalpies of sublimation of organic and organometallic compounds. 1910-2001. J Phys Chem Ref Data. 2002; 31:537–698.
Acree WE, Chickos JS. Phase transition enthalpy measurements of organic and organometallic compounds. Sublimation, vaporization and fusion enthalpies from 1880 to 2015. Part 1. C1-C10. J Phys Chem Ref Data. 2016; 45. doi: 10.1063/1.4948363.
Prasad C. Bioactive cyclic dipeptides. Peptides. 1995; 16:151–64.
Curtius T, Goebel F. Ueber glycocollaether. J F Praktische Chemie. 1888; 37:150–81.
Fischer E, Raske K. Beitrag zur stereochemie der 2, 5-diketopiperazine In: Bergmann M, editor. Untersuchungen Über Aminosäuren, Polypeptide und Proteine II (1907-1919). Berlin Heidelberg: Springer: 1923. p. 279–94.
Abderhalden E, Komm E. The formation of diketopiperazines from polypeptides under various conditions. Z Physiol Chem. 1924; 139:147–52.
Abderhalden E, Haas R. Further studies on the structure of proteins: Studies on the physical and chemical properties of 2,5-di-ketopiperazines. Z Physiol Chem. 1926; 151:114–9.
Ortiz-Castro R, Díaz-Pérez C, Martínez-Trujillo M, del Río RE, Campos-García J, López-Bucio J. Transkingdom signaling based on bacterial cyclodipeptides with auxin activity in plants. Proc Nat Acad Sci U S A. 2011; 108:7253–8.
Borthwick AD. 2,5-diketopiperazines: Synthesis, reactions, medicinal chemistry, and bioactive natural products. Chem Rev. 2012; 112:3641–716.
Ziganshin MA, Safiullina AS, Gerasimov AV, Ziganshina SA, Klimovitskii AE, Khayarov KR, Gorbatchuk VV. Thermally induced self-assembly and cyclization of l-leucyl-l-leucine in solid state. J Phys Chem B. 2017; 121:8603–10.
bioRxiv. NY, USA: Cold Spring Harbor Laboratory; 2017. http://dx.doi.org/10.1101/119123. Accessed 18 Jan 2018.
Belin P, Moutiez M, Lautru S, Seguin J, Pernodet JL, Gondry M. The nonribosomal synthesis of diketopiperazines in trna-dependent cyclodipeptide synthase pathways. Nat Prod Rep. 2012; 29:961–79.
Tommonaro G, Abbamondi GR, Iodice C, Tait K, De Rosa S. Diketopiperazines produced by the halophilic archaeon, haloterrigena hispanica, activate ahl bioreporters. Microbial Ecol. 2012; 63:490–5.
Charlesworth JC, Burns BP. Untapped resources: Biotechnological potential of peptides and secondary metabolites in archaea. Archaea. 2015;282035.
The authors thank Angela Rutz and Frederik Schweiger for technical assistance. This work would have been impossible without the continuous support of Eduard Arzt.
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
Authors: Ingrid M. Weiss, Christina Muth, Robert Drumm & Helmut O. K. Kirchner
Affiliations: Institute of Biomaterials and Biomolecular Systems, University of Stuttgart, Pfaffenwaldring 57, D-70569 Stuttgart, Germany; INM-Leibniz Institute for New Materials, Campus D2 2, D-66123 Saarbruecken, Germany
IMW and HOKK designed the study, analyzed and interpreted the data and wrote the manuscript. CM prepared the samples and performed the visual examination of the amino acids. RD prepared the samples and performed the thermal analyses and quantitative mass spectrometry experiments. All authors read and approved the final manuscript.
Correspondence to Ingrid M. Weiss.
Weiss, I.M., Muth, C., Drumm, R. et al. Thermal decomposition of the amino acids glycine, cysteine, aspartic acid, asparagine, glutamic acid, glutamine, arginine and histidine. BMC Biophys 11, 2 (2018) doi:10.1186/s13628-018-0042-4
Quantitative mass spectrometry
Structural stability and dynamics | CommonCrawl |
Calculus Volume 3
Differentiation of Functions of Several Variables
24 Tangent Planes and Linear Approximations
Determine the equation of a plane tangent to a given surface at a point.
Use the tangent plane to approximate a function of two variables at a point.
Explain when a function of two variables is differentiable.
Use the total differential to approximate the change in a function of two variables.
In this section, we consider the problem of finding the tangent plane to a surface, which is analogous to finding the equation of a tangent line to a curve when the curve is defined by the graph of a function of one variable, y = f(x). The slope of the tangent line at the point x = a is given by m = f'(a); what is the slope of a tangent plane? We learned about the equation of a plane in Equations of Lines and Planes in Space; in this section, we see how it can be applied to the problem at hand.
Tangent Planes
Intuitively, it seems clear that, in a plane, only one line can be tangent to a curve at a point. However, in three-dimensional space, many lines can be tangent to a given point. If these lines lie in the same plane, they determine the tangent plane at that point. A tangent plane at a regular point contains all of the lines tangent to that point. A more intuitive way to think of a tangent plane is to assume the surface is smooth at that point (no corners). Then, a tangent line to the surface at that point in any direction does not have any abrupt changes in slope because the direction changes smoothly.
Let P0 = (x0, y0, z0) be a point on a surface S, and let C be any curve passing through P0 and lying entirely in S. If the tangent lines to all such curves C at P0 lie in the same plane, then this plane is called the tangent plane to S at P0 ((Figure)).
The tangent plane to a surface S at a point P0 contains all the tangent lines to curves in S that pass through P0
For a tangent plane to a surface to exist at a point on that surface, it is sufficient for the function that defines the surface to be differentiable at that point. We define the term tangent plane here and then explore the idea intuitively.
Let S be a surface defined by a differentiable function z = f(x, y), and let P0 = (x0, y0) be a point in the domain of f. Then, the equation of the tangent plane to S at P0 is given by z = f(x0, y0) + fx(x0, y0)(x − x0) + fy(x0, y0)(y − y0).
To see why this formula is correct, let's first find two tangent lines to the surface The equation of the tangent line to the curve that is represented by the intersection of with the vertical trace given by is Similarly, the equation of the tangent line to the curve that is represented by the intersection of with the vertical trace given by is A parallel vector to the first tangent line is a parallel vector to the second tangent line is We can take the cross product of these two vectors:
This vector is perpendicular to both lines and is therefore perpendicular to the tangent plane. We can use this vector as a normal vector to the tangent plane, along with the point in the equation for a plane:
Solving this equation for gives (Figure).
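As a side note, the tangent-plane formula is easy to evaluate symbolically; the sketch below (Python with SymPy) does so for an illustrative surface z = x² + y² at the point (1, 1), which is our own choice and not the example treated in the text.

# Tangent plane z = f(x0,y0) + fx(x0,y0)*(x-x0) + fy(x0,y0)*(y-y0),
# evaluated symbolically for an illustrative surface.
import sympy as sp

x, y = sp.symbols("x y")
f = x**2 + y**2          # illustrative surface (not the text's example)
x0, y0 = 1, 1            # point of tangency

fx = sp.diff(f, x).subs({x: x0, y: y0})
fy = sp.diff(f, y).subs({x: x0, y: y0})
f0 = f.subs({x: x0, y: y0})

tangent_plane = f0 + fx * (x - x0) + fy * (y - y0)
print(sp.expand(tangent_plane))   # 2*x + 2*y - 2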
Finding a Tangent Plane
Find the equation of the tangent plane to the surface defined by the function at point
First, we must calculate and then use (Figure) with and
Then (Figure) becomes
(See the following figure).
Calculating the equation of a tangent plane to a given surface at a given point.
First, calculate and then use (Figure).
Finding Another Tangent Plane
Find the equation of the tangent plane to the surface defined by the function at the point
First, calculate and then use (Figure) with and
$$\begin{array}{l} z=f\left({x}_{0},{y}_{0}\right)+{f}_{x}\left({x}_{0},{y}_{0}\right)\left(x-{x}_{0}\right)+{f}_{y}\left({x}_{0},{y}_{0}\right)\left(y-{y}_{0}\right)\\ z=-\frac{\sqrt{6}}{4}+\frac{\sqrt{2}}{2}\left(x-\frac{\pi }{3}\right)-\frac{3\sqrt{6}}{4}\left(y-\frac{\pi }{4}\right)\\ z=\frac{\sqrt{2}}{2}x-\frac{3\sqrt{6}}{4}y-\frac{\sqrt{6}}{4}-\frac{\pi \sqrt{2}}{6}+\frac{3\pi \sqrt{6}}{16}. \end{array}$$
A tangent plane to a surface does not always exist at every point on the surface. Consider the function
The graph of this function follows.
Graph of a function that does not have a tangent plane at the origin.
If either or then so the value of the function does not change on either the x– or y-axis. Therefore, so as either approach zero, these partial derivatives stay equal to zero. Substituting them into (Figure) gives as the equation of the tangent line. However, if we approach the origin from a different direction, we get a different story. For example, suppose we approach the origin along the line If we put into the original function, it becomes
When the slope of this curve is equal to when the slope of this curve is equal to This presents a problem. In the definition of tangent plane, we presumed that all tangent lines through point (in this case, the origin) lay in the same plane. This is clearly not the case here. When we study differentiable functions, we will see that this function is not differentiable at the origin.
Linear Approximations
Recall from Linear Approximations and Differentials that the formula for the linear approximation of a function f at the point x = a is given by y ≈ f(a) + f'(a)(x − a).
The diagram for the linear approximation of a function of one variable appears in the following graph.
Linear approximation of a function in one variable.
The tangent line can be used as an approximation to the function for values of reasonably close to When working with a function of two variables, the tangent line is replaced by a tangent plane, but the approximation idea is much the same.
Given a function z = f(x, y) with continuous partial derivatives that exist at the point (x0, y0), the linear approximation of f at the point (x0, y0) is given by the equation L(x, y) = f(x0, y0) + fx(x0, y0)(x − x0) + fy(x0, y0)(y − y0).
Notice that this equation also represents the tangent plane to the surface defined by at the point The idea behind using a linear approximation is that, if there is a point at which the precise value of is known, then for values of reasonably close to the linear approximation (i.e., tangent plane) yields a value that is also reasonably close to the exact value of ((Figure)). Furthermore the plane that is used to find the linear approximation is also the tangent plane to the surface at the point
Using a tangent plane for linear approximation at a point.
Using a Tangent Plane Approximation
Given the function approximate using point for What is the approximate value of to four decimal places?
To apply (Figure), we first must calculate and using and
Now we substitute these values into (Figure):
Last, we substitute and into
The approximate value of to four decimal places is
which corresponds to a 0.2% error in approximation.
First calculate using and then use (Figure).
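For readers who want to reproduce this kind of calculation numerically, the sketch below (Python) carries out a tangent-plane (linear) approximation; the function and points used are an assumption reconstructed for illustration, although they do reproduce the 0.2% error quoted above.

# Linear approximation L(x, y) = f(x0, y0) + fx*(x - x0) + fy*(y - y0),
# illustrated for an assumed surface and point (not necessarily the text's).
import math

def f(x, y):
    return math.sqrt(41 - 4 * x**2 - y**2)   # assumed illustrative function

x0, y0 = 2.0, 3.0
fx = -4 * x0 / f(x0, y0)      # partial derivative wrt x at (x0, y0)
fy = -y0 / f(x0, y0)          # partial derivative wrt y at (x0, y0)

def L(x, y):
    return f(x0, y0) + fx * (x - x0) + fy * (y - y0)

approx, exact = L(2.1, 2.9), f(2.1, 2.9)
print(approx, exact, abs(approx - exact) / exact * 100)   # ~3.875, ~3.8665, ~0.2 %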
Differentiability
When working with a function of one variable, the function is said to be differentiable at a point if exists. Furthermore, if a function of one variable is differentiable at a point, the graph is "smooth" at that point (i.e., no corners exist) and a tangent line is well-defined at that point.
The idea behind differentiability of a function of two variables is connected to the idea of smoothness at that point. In this case, a surface is considered to be smooth at point if a tangent plane to the surface exists at that point. If a function is differentiable at a point, then a tangent plane to the surface exists at that point. Recall the formula for a tangent plane at a point is given by
For a tangent plane to exist at the point the partial derivatives must therefore exist at that point. However, this is not a sufficient condition for smoothness, as was illustrated in (Figure). In that case, the partial derivatives existed at the origin, but the function also had a corner on the graph at the origin.
A function f(x, y) is differentiable at a point P(x0, y0) if, for all points (x, y) in a disk around P, we can write f(x, y) = f(x0, y0) + fx(x0, y0)(x − x0) + fy(x0, y0)(y − y0) + E(x, y),
where the error term E satisfies lim (x,y)→(x0,y0) of E(x, y)/√((x − x0)² + (y − y0)²) = 0.
The last term in (Figure) is referred to as the error term and it represents how closely the tangent plane comes to the surface in a small neighborhood disk) of point For the function to be differentiable at the function must be smooth—that is, the graph of must be close to the tangent plane for points near
Demonstrating Differentiability
Show that the function is differentiable at point
First, we calculate using and then we use (Figure):
Therefore and and (Figure) becomes
Next, we calculate
Since for any value of the original limit must be equal to zero. Therefore, is differentiable at point
First, calculate using and then use (Figure) to find Last, calculate the limit.
The function is not differentiable at the origin. We can see this by calculating the partial derivatives. This function appeared earlier in the section, where we showed that Substituting this information into (Figure) using and we get
Calculating gives
Depending on the path taken toward the origin, this limit takes different values. Therefore, the limit does not exist and the function is not differentiable at the origin as shown in the following figure.
This function is not differentiable at the origin.
Differentiability and continuity for functions of two or more variables are connected, the same as for functions of one variable. In fact, with some adjustments of notation, the basic theorem is the same.
Differentiability Implies Continuity
Let be a function of two variables with in the domain of If is differentiable at then is continuous at
(Figure) shows that if a function is differentiable at a point, then it is continuous there. However, if a function is continuous at a point, then it is not necessarily differentiable at that point. For example,
is continuous at the origin, but it is not differentiable at the origin. This observation is also similar to the situation in single-variable calculus.
(Figure) further explores the connection between continuity and differentiability at a point. This theorem says that if the function and its partial derivatives are continuous at a point, the function is differentiable.
Continuity of First Partials Implies Differentiability
Let be a function of two variables with in the domain of If and all exist in a neighborhood of and are continuous at then is differentiable there.
Recall that earlier we showed that the function
was not differentiable at the origin. Let's calculate the partial derivatives and
The contrapositive of the preceding theorem states that if a function is not differentiable, then at least one of the hypotheses must be false. Let's explore the condition that must be continuous. For this to be true, it must be true that
Let Then
If then this expression equals if then it equals In either case, the value depends on so the limit fails to exist.
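The path-dependence argument can be checked symbolically; the sketch below (Python with SymPy) evaluates a two-variable expression along the family of lines y = kx, using xy/(x² + y²) as an illustrative case (the exact expression analysed in the text may differ), and shows that the resulting limit depends on k.

# Limit along the family of lines y = k*x: if the result depends on k,
# the two-variable limit at the origin does not exist.
import sympy as sp

x, k = sp.symbols("x k")
f = x * (k * x) / (x**2 + (k * x) ** 2)   # illustrative f(x, y) with y = k*x

along_line = sp.simplify(f)
print(along_line)                          # k/(k**2 + 1): depends on k
print(sp.limit(along_line, x, 0))          # still k/(k**2 + 1) -> path-dependent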
In Linear Approximations and Differentials we first studied the concept of differentials. The differential of written is defined as The differential is used to approximate where Extending this idea to the linear approximation of a function of two variables at the point yields the formula for the total differential for a function of two variables.
Let z = f(x, y) be a function of two variables with (x0, y0) in the domain of f, and let Δx and Δy be chosen so that (x0 + Δx, y0 + Δy) is also in the domain of f. If f is differentiable at the point (x0, y0), then the differentials dx and dy are defined as dx = Δx and dy = Δy.
The differential dz, also called the total differential of z = f(x, y) at (x0, y0), is defined as dz = fx(x0, y0) dx + fy(x0, y0) dy.
Notice that the symbol is not used to denote the total differential; rather, appears in front of Now, let's define We use to approximate so
Therefore, the differential is used to approximate the change in the function at the point for given values of and Since this can be used further to approximate
See the following figure.
The linear approximation is calculated via the formula
One such application of this idea is to determine error propagation. For example, if we are manufacturing a gadget and are off by a certain amount in measuring a given quantity, the differential can be used to estimate the error in the total volume of the gadget.
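As a concrete illustration of this use of the total differential, the sketch below (Python) propagates measurement errors through the right-circular-cylinder volume V = πr²h that appears in the exercises further down; the measured values and error sizes are placeholders chosen for illustration.

# Error propagation with the total differential:
# dV = (dV/dr) dr + (dV/dh) dh = 2*pi*r*h*dr + pi*r**2*dh
import math

r, h = 5.0, 12.0          # assumed measured radius and height (cm)
dr, dh = 0.1, 0.1         # assumed measurement errors (cm)

dV = 2 * math.pi * r * h * dr + math.pi * r**2 * dh
V = math.pi * r**2 * h
print(f"V = {V:.1f} cm^3, estimated max error dV = {dV:.1f} cm^3 ({dV / V * 100:.1f}%)")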
Approximation by Differentials
Find the differential of the function and use it to approximate at point Use and What is the exact value of
First, we must calculate using and
Then, we substitute these quantities into (Figure):
This is the approximation to The exact value of is given by
First, calculate and using and then use (Figure).
Differentiability of a Function of Three Variables
All of the preceding results for differentiability of functions of two variables can be generalized to functions of three variables. First, the definition:
A function is differentiable at a point if for all points in a disk around we can write
where the error term E satisfies
If a function of three variables is differentiable at a point then it is continuous there. Furthermore, continuity of first partial derivatives at that point guarantees differentiability.
The analog of a tangent line to a curve is a tangent plane to a surface for functions of two variables.
Tangent planes can be used to approximate values of functions near known values.
A function is differentiable at a point if it is "smooth" at that point (i.e., no corners or discontinuities exist at that point).
The total differential can be used to approximate the change in a function at the point for given values of and
Tangent plane
Linear approximation
Total differential
Differentiability (two variables)
Differentiability (three variables)
For the following exercises, find a unit normal vector to the surface at the indicated point.
For the following exercises, as a useful review for techniques used in this section, find a normal vector and a tangent vector at point
Normal vector: tangent vector:
For the following exercises, find the equation for the tangent plane to the surface at the indicated point. (Hint: Solve for in terms of and
For the following exercises, find parametric equations for the normal line to the surface at the indicated point. (Recall that to find the equation of a line in space, you need a point on the line, and a vector that is parallel to the line. Then the equation of the line is
at point
For the following exercises, use the figure shown here.
The length of line segment is equal to what mathematical expression?
The differential of the function
Using the figure, explain what the length of line segment represents.
For the following exercises, complete each task.
Show that is differentiable at point
Using the definition of differentiability, we have
Find the total differential of the function
Show that is differentiable at every point. In other words, show that where both and approach zero as approaches
for small and satisfies the definition of differentiability.
Find the total differential of the function where changes from and changes from
Let Compute from to and then find the approximate change in from point to point Recall and and are approximately equal.
and They are relatively close.
The volume of a right circular cylinder is given by Find the differential Interpret the formula geometrically.
See the preceding problem. Use differentials to estimate the amount of aluminum in an enclosed aluminum can with diameter and height if the aluminum is cm thick.
Use the differential to approximate the change in as moves from point to point Compare this approximation with the actual change in the function.
Let Find the exact change in the function and the approximate change in the function as changes from and changes from
exact change approximate change is The two values are close.
The centripetal acceleration of a particle moving in a circle is given by where is the velocity and is the radius of the circle. Approximate the maximum percent error in measuring the acceleration resulting from errors of
3%
in and
in (Recall that the percentage error is the ratio of the amount of error over the original amount. So, in this case, the percentage error in is given by
The radius and height of a right circular cylinder are measured with possible errors of
4% and 5%,
respectively. Approximate the maximum possible percentage error in measuring the volume (Recall that the percentage error is the ratio of the amount of error over the original amount. So, in this case, the percentage error in is given by
13% or 0.13
The base radius and height of a right circular cone are measured as in. and in., respectively, with a possible error in measurement of as much as in. each. Use differentials to estimate the maximum error in the calculated volume of the cone.
The electrical resistance produced by wiring resistors and in parallel can be calculated from the formula If and are measured to be and respectively, and if these measurements are accurate to within estimate the maximum possible error in computing (The symbol represents an ohm, the unit of electrical resistance.)
The area of an ellipse with axes of length and is given by the formula
Approximate the percent change in the area when increases by
and increases by
1.5%.
The period of a simple pendulum with small oscillations is calculated from the formula where is the length of the pendulum and is the acceleration resulting from gravity. Suppose that and have errors of, at most,
0.1%,
respectively. Use differentials to approximate the maximum percentage error in the calculated value of
Electrical power is given by where is the voltage and is the resistance. Approximate the maximum percentage error in calculating power if is applied to a resistor and the possible percent errors in measuring and are
4%,
For the following exercises, find the linear approximation of each function at the indicated point.
[T] Find the equation of the tangent plane to the surface at point and graph the surface and the tangent plane at the point.
[T] Find the equation for the tangent plane to the surface at the indicated point, and graph the surface and the tangent plane:
[T] Find the equation of the tangent plane to the surface at point and graph the surface and the tangent plane.
differentiable
a function is differentiable at if can be expressed in the form
given a function and a tangent plane to the function at a point we can approximate for points near using the tangent plane formula
given a function that is differentiable at a point the equation of the tangent plane to the surface is given by
the total differential of the function at is given by the formula
Tangent Planes and Linear Approximations by OSCRiceUniversity is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted. | CommonCrawl |
Helly theorem + Nerve
Consider nerve $\mathcal N$ of a finite set of convex sets in $\mathbb R^n$. Helly theorem says that $\mathcal N$ is completely determined by its $n$-skeleton, say $\mathcal N_n$.
It seems that not all finite simplicial complexes with dimension $\le n$ can appear as $\mathcal N_n$. (For example, take a 2-dimensional simplicial complex which is homeomorphic to $\mathbb{R}\mathrm{P}^2$; it cannot appear as $\mathcal N_3$ for a finite set of convex sets in $\mathbb R^3$.)
Is it possible to describe all finite simplicial complexes which can appear as $\mathcal N_n$?
The question is inspired by a short discussion here.
I am sure a lot should be known, but a quick search gave me nothing.
convex-geometry discrete-geometry reference-request
ε-δ
From the point of view of purely combinatorial obstructions, you can look at: G. Kalai, Intersection patterns of convex sets, Israel J. Math. 48 (1984) 161–174.
– Thierry Zell
Nerves of convex sets in general
There is quite a bit known about nerves of families of convex sets in $R^n$. Indeed Helly's theorem asserts that the $n$-skeleton determines the entire complex. In fact, considerably more is known beyond Helly's theorem. It follows, for example, that all homology groups of the nerve vanish in dimensions larger than or equal to n. Moreover, this property is inherited by induced subcomplexes and by links of the nerve. (Because those are also nerves of families of convex sets in the same Euclidean space.)
For a survey see this paper: Martin Tancer, Intersection patterns of convex sets via simplicial complexes, a survey.
However, only little is known about the skeletons of the nerves below dimension n.
Every n-dimensional complex can be represented as a nerve of convex sets in $R^{2n+1}$. This was proved by Wegner in '67 and again by Perel'man in '85. (This was Perel'man's first paper.) It is interesting to understand systematically the obstructions for nerves of convex sets in $R^n$ whose dimension is between n/2 and n. For n=1 much is known. There are substantial results in the plane but not so many in higher dimensions.
Dimension 1 - interval graphs
There is a lot known about interval graphs, which is what you ask about for n=1. This is a very restricted and well understood class of graphs. It is a subclass of the class of chordal graphs, which itself is a very special subclass of the class of perfect graphs.
Families of convex sets in dimension 2
An example of the kind asked for in the question is, to the best of my memory, obtained as follows: start with a non-planar graph, and subdivide every edge by adding a vertex in it. Then this graph is not a nerve of convex sets in the plane.
Since the question was about representing n dimensional complexes as nerves of families of convex sets in $R^m$ where $m>n$ let me mention a few specific results in this direction where n=1 and m=2.
Theorem (D. Larman, J. Matousek, J. Pach, J. Torocsik): In any family of n planar convex sets there is either a subfamily of $n^{1/5}$ sets which are pairwise intersecting or a subfamily of $n^{1/5}$ sets which are pairwise disjoint.
(D. Larman, J. Matousek, J. Pach, J. Torocsik, A Ramsey-type result for convex sets. Bull. London Math. Soc. 26 (1994), no. 2, 132–136.)
A result which also directly follows from this paper is:
For a family of planar convex sets either
there is a family of p sets which are pairwise disjoint or
there is a family of $c_p n$ sets which are pairwise intersecting.
(It is not known if it is possible to replace "pairwise disjoint" with "pairwise intersecting" in this last theorem. Fox and Pach have some results in this direction.)
And the following beautiful theorem:
Theorem (J. Fox, J. Pach and Cs. D. Toth):
Every family of plane convex sets contains two subfamilies of size $cn$ such that:
either each element of the first intersects every element in the second, or
no element in the first intersects any element of the other.
Rough expected picture
Morally, graphs that are nerves of convex sets in the plane are far in their behavior from random graphs and come close (in a sense) to perfect graphs. (This comment applies to other graphs and hypergraphs arising in geometry.) Such statements (towards perfectness in a weak sense) are known to hold for the $n$-dimensional skeletons of nerves of families of convex sets in $R^n$. We may expect that these phenomena will start occurring (in weaker forms) already for $n/2$-dimensional skeleta, but there are very few known results beyond the plane.
Gil Kalai
I took the liberty of fleshing out the example about non-planar graphs in my own answer. Thanks for the link to Tancer's paper!
I really wish I knew more about general nerve complexes, so I'm afraid I don't have much to bring to your question. But information on intersection graphs is a lot easier to figure out, so I thought I'd add some remarks to what Gil wrote:
Non-planar Graphs
First, I want to explain how his remark about non-planar graphs works: say you have an edge between two vertices $u$ and $v$ of a graph $G$. Split the edge with a new vertex $e$ to get the new graph $G^\star$. Now, suppose that $G^\star$ is the intersection graph of some arrangement of convex sets in the plane, with sets $U$, $V$ and $E$ that correspond to the vertices of the same name. Then since there are edges $(u,e)$ and $(v,e)$ in $G^\star$, you can pick points $a\in U\cap E$ and $b\in V\cap E$, and the segment $[a,b]$ is of course contained in $E$.
Do this for all vertices of $G^\star$ that arose from edges in $G$. This gives you a collection of segments that must be pairwise disjoint, since none of the $e$ vertices are adjacent to each other; they're only adjacent to vertices in $V(G)$. Now, contract the convex sets of type $U$ that correspond to vertices in $V(G)$: this keeps the segments disjoint (except for endpoints) and gives a picture of $G$ on the plane. Thus the convex arrangement for $G^\star$ exists only if $G$ is planar.
Boxes and Graphs
Also, I think it's important to stress how special interval graphs are. To me, one of the best ways to see this is to consider the natural generalization: instead of intervals in $\mathbb{R}$, consider boxes: Cartesian products of intervals in $\mathbb{R}^d$. Then, F. S. Roberts proved (in his PhD thesis I think) that any graph can be realized as the intersection graph of a collection of $d$-boxes, and we can even take $d\leq |V(G)|/2$. (The smallest dimension you can choose for a graph $G$ is called the boxicity of $G$.)
If you'd rather look at more general convex sets, then it's been known for even longer that you can realize a graph as the intersection pattern of convex sets of $\mathbb{R}^3$ (and thus showing some sort of counterpoint to Gil Kalai's answer in dimension 2). The obvious conclusion: nerve complexes carry a lot more information than graphs; nothing new or surprising there, but these examples can help you really appreciate that.
Roberts, F. S. (1969), "On the boxicity and cubicity of a graph", in Tutte, W. T., Recent Progress in Combinatorics, Academic Press, pp. 301–310,
Thierry Zell
Pitch Variations Study on Helically Coiled Pipe in Turbulent Flow Region Using CFD
Anwer F. Faraj | Itimad D.J. Azzawi* | Samir G. Yahya
University of Manchester, Ministry of Oil, Iraqi Drilling Company, Diyala 32001, Iraq
Mechanical Engineering Department, Faculty of Engineering, University of Diyala, Diyala 32001, Iraq
[email protected]
A computational fluid dynamics (CFD) study was conducted to analyse the flow structure and the effect of varying the coil pitch on the coil friction factor and wall shear stress, utilising different model configurations. Three coils were tested, all of them having the same pipe diameter and coil diameter: 0.005 m and 0.04 m respectively. The pitch was varied as 0.01, 0.05 and 0.25 m for the first, second and third model respectively. Two turbulence models, the standard k-ε (STD k-ε) and the standard k-ω (STD k-ω), were utilised in this simulation in order to determine which turbulence model could capture most of the flow characteristics. A comparison was made between the STD k-ε and STD k-ω models in order to analyse the pros and cons of each model. The results were validated with Ito's equation for turbulent flow and compared with Filonenko's equation for a straight pipe. The governing equations were discretized using the finite volume method and the SIMPLE algorithm was used to solve the equations iteratively. All the models were simulated using the ANSYS Fluent CFD commercial code. The results showed that in turbulent flows, Dean number had a stronger effect on reducing the coil friction factor than the increment in pitch dimension.
CFD, helical coil, friction factor, Reynolds number, pitch size, turbulent flows
Flows following a curved path induce a centrifugal force which pushes the faster fluid particles outwards, whereas the slower ones are pushed inwards; since the centrifugal force depends on the local axial velocity, the slower particles experience a weaker centrifugal effect while the faster ones experience a stronger one [1]. The existence of the boundary layer determines the strength of the centrifugal effect: the fluid particles near the wall undergo a small effect while the fluid particles in the core of the pipe experience the opposite. The imbalance in the centrifugal forces develops a secondary flow which ends with two counter-rotating vortices called Dean vortices, as shown in Figure 1 [2]. The secondary flow, in turn, increases flow mixing, which consequently increases the rate of heat transfer in comparison with a straight pipe.
Figure 1. Dean vortices [3]
Dean vortices are named after the British scientist Dean [4]. Dean vortices, which are generated from the unsteadiness of the centrifugal forces, appear in many engineering applications such as turbine blades and cooling passages inside engines. The secondary flow intensity increases as the curvature increases. Dean vortices cause an important modification to the boundary-layer structure which leads to a greater enhancement in the rate of heat transfer. Furthermore, these vortices have an effect in delaying the transition from laminar to turbulent flow [5]. Moreover, the controlling parameters in helically coiled pipes are the curvature ratio ($\delta$), Reynolds number ($R_{e}$), Dean number ($D_{e}$), torsion parameter ($\beta_{0}$) and pitch size; the defining formula for each parameter is available in Austen and Soliman [6] and is not repeated here. Figure 2 defines these parameters, especially the distance between the centrelines of two turns.
Figure 2. The parametric explanation of helically coiled pipe [6]
The following section presents an outline of selected research papers investigating turbulent flows in helically coiled pipe. Experimental and numerical (using CFD) techniques will be studied and analysed for different parameters which have a direct effect on the secondary flow formulation. These parameters are Dean Number, curvature ratio, pitch size and pipe diameter, and the effect of these parameters on the rate of heat transfer. In this section, attention will be paid to the flow structure particularly in a fully developed region in terms of pressure drop, pitch size, and curvature ratio, which plays a leading role in determination of wall shear stress and consequently the coil friction factor at turbulent flow. Moreover, different turbulent models will be assessed in terms of accuracy in capturing the secondary flow phenomena and stability of the solution.
Hüttl and Friedrich [7] studied turbulent fully developed flow in curved and helically coiled pipes, but using a different simulation scheme from that used by Yamamoto et al. [8] and Hüttl and Friedrich [9]. A direct numerical simulation with a specific Reynolds number $Re_{\tau}=230$ was used by Hüttl and Friedrich [9]. It was stated that for a large value of the curvature parameter ($k=0.1$), the turbulence is reduced by the streamwise curvature and the flow is approximately relaminarised [7]. The torsion has a relatively small effect in this region in comparison with the curvature effect; however, it cannot be neglected. The dissipation rate and the fluctuations of turbulent kinetic energy are increased, since the torsion has an influence on the secondary flow which has been activated by the curvature.
Although laminar and turbulent flows in straight pipes have been widely investigated, turbulent flows in helically coiled pipes still need to be examined and their flow structure studied. In fact, that motivated Hüttl and Friedrich [9] to use direct numerical simulation to demonstrate the similarities and differences between the flows in curved and helical pipes. It has been concluded that the turbulent fluctuations in straight pipes are much larger than their counterparts in a curved pipe. Furthermore, the comparison between the mean axial velocity in a helical and a toroidal pipe shows relatively small differences. The torsional effect is extremely small compared to the curvature-ratio effect. However, it cannot be ignored, because the torsion is responsible for inducing the secondary flow and, consequently, the turbulent kinetic energy is increased [9]. A comparison between several turbulence models and a completely resolved direct numerical simulation, at $y^{+}=1.2$, i.e. inside the viscosity-affected region, with a specific Reynolds number and curvature ratio, was carried out by Castiglia et al. [10]; in this research the curvature ratio was taken as a constant. It has been found that the Reynolds number and the curvature ratio are the most critical factors for the turbulent flow.
Although the $(k-\epsilon)$ model is extensively used for many types of flow and produces quite acceptable results, the $(k-\epsilon)$ model, even with special near-wall treatment, failed to correctly predict the behaviour of the Darcy-Weisbach friction coefficient. To overcome the shortcomings of the above DNS results, SST and RSM have been used to obtain an adequate agreement with the experimental data, particularly at low Reynolds numbers. The first investigation concerning the development of turbulent forced convection heat transfer in helical pipes was done by Lin and Ebadian [11]. This investigation covered a wide range of influential parameters, as listed below:
Reynolds number ($2.5 \times 10^{4} \sim 1 \times 10^{5}$)
Pitch size ($0-0.6$)
Curvature ratio ($0.025-0.05$)
The numerical result shows a good agreement with the experimental data [12], as shown in Figure 3 below.
Figure 3. Nusselt number validation with experimental results for non-dimensional pitch=0 [11]
A numerical study using the $(k-\epsilon)$ model is available [13], but with a large pitch. This study was validated with other experimental data of Mori and Nakayawa [14], with satisfactory results as shown in Figure 4 below.
Figure 4. Comparison of the numerical results with the experimental findings [13]
Moreover, Rogers and Mayhew [12] conducted an experiment mainly to check the surface roughness of the pipe; for this purpose the pressure losses are significantly more sensitive than the heat transfer information. They used Eq. (1) to determine the overall heat transfer coefficient (U).
$Q=U\,\Delta t_{\mathrm{log\ mean}}$ (1)
where $\Delta t=\frac{\left(t_{g}-t_{b1}\right)-\left(t_{g}-t_{b2}\right)}{\ln \frac{t_{g}-t_{b1}}{t_{g}-t_{b2}}}$.
A comparison has been made with Kirpikov's [15] findings at different curvature ratios for the calculated heat transfer rate. Using Kirpikov's relationship, shown in Eq. (2), to define the y-axis of Figure 5 shows that a different curvature ratio does not make a large difference in terms of heat transfer rate, as clearly seen in Figure 5.
It was found that the results of Rogers and Mayhew [12] are 10% more than Kirpikov's findings and 10% less than those obtained by Seban and McLaughlin's [16] experiment. It is suggested that more work is needed to determine the exact exponent value of $\left(\frac{d}{D}\right)$.
Figure 5. Heat transfer findings, properties evaluated at bulk temperature [12]
Bai et al. [17] conducted an experiment to find the most appropriate correlation for the average heat transfer coefficient at different cross-sections of a helically coiled pipe. Although many investigations had been done previously [12, 14-16], it was still necessary to establish a correlation which would cover a wide range of horizontal helically coiled pipes and to gain a more profound understanding of the local heat transfer characteristics in both the axial and circumferential directions.
$\frac{N u_{L}}{N u}=0.22\left(\frac{R_{e} P_{r}}{10^{4}}\right)^{0.45}\left(0.5+0.1 \theta+0.2 \theta^{2}\right)$ for $0<$$\theta \leq \pi$ (3)
2. Model Description and Methodology
Three models of horizontally-oriented helically coiled pipe have been utilised with two turns, to ensure that flow reaches a fully developed region [18], and different pitches as shown in Figure 6. The pipe and coil diameters are taken respectively as d=0.005m, Dcoil=0.04m with different pitches P= (0.01, 0.05, and 0.25) m as shown in Table 1.
Figure 6. Models geometry plotted in 2:1 scale
Table 1. Models dimensions
Model | Pipe diameter (d), m | Coil diameter (Dcoil), m | Pitch (P), m
Model one | 0.005 | 0.04 | 0.01
Model two | 0.005 | 0.04 | 0.05
Model three | 0.005 | 0.04 | 0.25
In Table 1, the pipe and coil diameters are constant, but the pitch differs. The second and third models are designed to explore the effect of a varying pitch on the secondary flow structure. The two vortices are symmetrical if the pipe is bent into a toroidal shape, but if it is bent into a helical shape the symmetry breaks down [1]. Stretching a helically coiled pipe while keeping the pipe and coil diameters constant requires an increase in helix length. The helix length of the three models has been calculated with a simple equation derived from Pythagoras' theorem, as shown in Eq. (4) below:
$\text{helix length}=\left[\left(\text{coil circumference}\right)^{2}+\left(\text{pitch}\right)^{2}\right]^{1/2} \times N$ (4)
where, N= number of turns.
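Eq. (4) can be checked directly for the three models of Table 1 (d = 0.005 m, Dcoil = 0.04 m, N = 2), as in the short sketch below.

```python
import math

def helix_length(coil_diameter, pitch, n_turns):
    """Eq. (4): length of the coiled pipe axis from Pythagoras' theorem,
    with the pitch applied per turn."""
    circumference = math.pi * coil_diameter
    return math.sqrt(circumference ** 2 + pitch ** 2) * n_turns

D_coil, N = 0.04, 2              # coil diameter (m) and number of turns
for P in (0.01, 0.05, 0.25):     # the three pitches of Table 1 (m)
    print(f"P = {P:.2f} m -> helix length = {helix_length(D_coil, P, N):.3f} m")
```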
Moreover, two surfaces were defined in the geometry in the fully developed region, specifically in the last quarter of the helically coiled pipe before the outlet, as shown in Figure 7. Plane two is located well downstream of the inlet to guarantee fully developed flow conditions; one coil turn is enough to assure fully developed flow [18]. Plane one is located near the outlet, but far enough upstream to avoid being influenced by the outlet boundary condition. The purpose of these two planes is to evaluate the average pressure at each plane and then compute the pressure difference, from which the wall shear stress in the fully developed region is obtained.
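For reference, the fully developed force balance that links this plane-averaged pressure difference to the wall shear stress is $\tau_w = \Delta p\, d / (4L)$, with L the pipe length between the planes measured along the helix. The sketch below applies it with placeholder numbers and should not be read as the exact post-processing used here.

```python
def wall_shear_stress(dp, pipe_diameter, length_between_planes):
    """Fully developed force balance: tau_w = dp * d / (4 * L).
    dp is the difference of the area-averaged static pressures on the two
    planes, L the pipe length between them measured along the helix."""
    return dp * pipe_diameter / (4.0 * length_between_planes)

# Placeholder numbers (not results from the present simulations):
dp = 250.0   # Pa, averaged pressure difference between plane two and plane one
d = 0.005    # m, pipe diameter
L = 0.034    # m, roughly a quarter turn of the P = 0.05 m helix
print(f"tau_w = {wall_shear_stress(dp, d, L):.2f} Pa")
```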
Figure 7. Positions of planes for P=0.05m
2.1 Computational domain and solution procedure
A comparison has been made between the one-domain automatically generated mesh and the five-domain O-H grid ("butterfly topology") mesh, as shown in Figures 8-a to 8-c. The ordinary automatically generated mesh has considerable skewness, particularly near the wall, which is an important region when studying near-wall behaviour (Figure 8-a). In Figure 8-b, when the five-domain O-H grid ("butterfly topology") mesh is applied, a large reduction in the maximum included angle is obtained, i.e. it is reduced from 175.35° to 130.2°, which helps to increase the stability and accuracy of the solution. After selecting the grid solver and running it, the maximum included angle decreases further to 124.8° and most of the cells become orthogonal. The percentage of cells with an included angle of 121°-124° does not exceed 10% of the total, as shown in Figure 8-c, and this mesh may be considered the best mesh for capturing most of the flow characteristics.
Figure 8. The one (a) to five (c) domain automatic generated mesh with the maximum included angle
In order to obtain a grid-independent solution, simulations were run with different mesh arrangements. Four meshes were studied and analysed, namely coarse (95,956 cells), medium (185,623 cells), fine (313,823 cells) and very fine (597,600 cells), as shown in Figure 9 (see Table 2 for further details), in order to choose a mesh that gives acceptable results with minimum error, modest computer resources and a satisfactory computational time.
Table 2 shows the meshes used, with different numbers of cells, to obtain a mesh-independent solution. There is no great difference in maximum velocity between successive meshes, i.e. each refinement changes it by about 0.001, while the difference between the very fine and the coarse mesh is 0.003. To obtain a mesh that captures most of the flow characteristics, one should choose the mesh beyond which the results no longer change as the mesh size is increased. The fine mesh gives satisfactory results while saving computational time and computer resources, because the large difference in cell count between the fine and very fine meshes leads to only a small difference in the findings. For these reasons, the fine mesh was chosen to simulate the three models. Moreover, the mesh study based on maximum velocity was cross-checked using the CFD Fanning friction factor for the different cell counts. The difference in the CFD Fanning friction factor between the fine and very fine mesh is quite low (0.0004), which makes it unnecessary to increase the mesh size further when the difference is negligible. Hence, the fine mesh was chosen for the simulations.
Figure 9. Mesh Generation from coarse to very fine mesh
Table 2. Different mesh arrangements with their number of cells, maximum velocity and friction factor
Mesh | Total cells | Maximum velocity | Friction factor
Coarse mesh | 95,956 | (not reproduced) | (not reproduced)
Medium mesh | 185,623 | (not reproduced) | (not reproduced)
Fine mesh | 313,823 | (not reproduced) | (not reproduced)
Very fine mesh | 597,600 | (not reproduced) | (not reproduced)
3. Governing Equations and Correlation Comparison
The governing equations applied to the models to calculate the friction factor and wall shear stress are available in the studies [1, 19] and are not repeated here. However, in a fully developed region, i.e. when the velocity gradient is constant, the wall shear stress can be computed from the static pressure drop over a given length of pipe [1]. There are many experimental correlations which can be used to predict the friction factor and the pressure drop in a helically coiled pipe, for instance [2, 14, 20-25]. These correlations agree reasonably well with one another, as Ali [18] has shown in a comparison which gave almost convergent results.
Ito's equations have been adopted in the calculations because they are practical and easy to implement; they are also regarded as the most accurate formulation [10]. White's equations, on the other hand, do not cover the model's dimensions, since they are limited to De < 11.6 and Re < 100,000. The Mori equation is complicated, while Mishra and Gupta's equations use the helical number (He), which makes them considerably more complicated and restricted. For these reasons, Ito's equations have been chosen to compute the coil friction factor.
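One widely quoted form of Ito's turbulent correlation for the Fanning coil friction factor is $f_{c}\,(D/d)^{1/2} = 0.029 + 0.304\,[Re\,(d/D)^{2}]^{-0.25}$; since the text does not reproduce the exact variant used, the sketch below is illustrative only.

```python
def ito_coil_friction_factor(Re, d, D):
    """One commonly cited form of Ito's turbulent correlation for the Fanning
    coil friction factor (assumed form; see the caveat in the text):
        f_c * (D/d)**0.5 = 0.029 + 0.304 * (Re * (d/D)**2)**-0.25
    """
    delta = d / D  # curvature ratio
    return (0.029 + 0.304 * (Re * delta ** 2) ** -0.25) * delta ** 0.5

d, D = 0.005, 0.04                    # pipe and coil diameters of the models (m)
for Re in (15_000, 50_000, 100_000):  # Reynolds numbers used in the simulations
    print(f"Re = {Re:>6} -> f_c ~ {ito_coil_friction_factor(Re, d, D):.4f}")
```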
4. Boundary Condition and Solution Methods
The boundary conditions for the helically coiled pipe simulation were set with water as the working fluid and a turbulent velocity inlet condition. Three Reynolds numbers (15,000, 50,000 and 100,000) were used in the simulation, i.e. different velocity values were set at the inlet for each pitch to examine the flow structure as the pitch changed. Moreover, the wall is treated as a stationary, no-slip wall, while the outlet is set as a pressure outlet.
A pressure-based solver was chosen for the helically coiled pipe simulation since it is generally used for incompressible fluids. The SIMPLE (Semi-Implicit Method for Pressure-Linked Equations) algorithm of Patankar and Spalding [26] was used to couple the pressure and velocity fields through the solution of the momentum equation. The Green-Gauss cell-based method was used to evaluate scalar gradients at the cell centroids. The second-order upwind scheme was used for momentum, turbulent kinetic energy, specific dissipation rate and turbulent dissipation rate.
4.1 Turbulent model
The $(k-\epsilon)$ model was the most widely used model until the last decade of the 20th century. It originated in the work of Chou and others [27-29]. The model started to be used widely when an updated version was presented by Jones and Launder [30]. It was then modified again by Launder and Sharma [31] and became what is generally called the STD $(k-\epsilon)$ model [32].
The STD $(k-\epsilon)$ model's equations are listed below [32]:
Kinematic eddy viscosity:
$v_{T}=C_{\mu} \frac{k^{2}}{\epsilon}$ (5)
Turbulent kinetic energy:
$\frac{\partial k}{\partial t}+U_{j} \frac{\partial k}{\partial x_{j}}=\tau_{i j} \frac{\partial U_{i}}{\partial x_{j}}-\epsilon+\frac{\partial}{\partial x_{j}}\left[\left(v+\frac{v_{T}}{\sigma_{k}}\right) \frac{\partial k}{\partial x_{j}}\right]$ (6)
Dissipation rate:
$\frac{\partial \epsilon}{\partial t}+U_{j} \frac{\partial \epsilon}{\partial x_{j}}=C_{\epsilon 1} \frac{\epsilon}{k} \tau_{i j} \frac{\partial U_{i}}{\partial x_{j}}-C_{\epsilon 2} \frac{\epsilon^{2}}{k}+\frac{\partial}{\partial x_{j}}\left[\left(v+\frac{v_{T}}{\sigma_{\epsilon}}\right) \frac{\partial \epsilon}{\partial x_{j}}\right]$ (7)
Launder et al. [33], after extensive research on free turbulent flows, recommended the values of the constants appearing in Eqns. (5), (6) and (7) tabulated in Table 3 below:
Table 3. Constants in the STD $(k-\epsilon)$ model
$C_{\epsilon 1}$ | $C_{\epsilon 2}$ | $\sigma_{k}$ | $\sigma_{\epsilon}$ | $C_{\mu}$
1.44 | 1.92 | 1.0 | 1.3 | 0.09
These equations are linked through the turbulence length scale shown in Eq. (8) [34]:
$l=C_{\mu} \frac{k^{3 / 2}}{\epsilon}$ (8)
The STD $(k-w)$ model in ANSYS Fluent is based on the Wilcox model [32]. The first equation is for the turbulent kinetic energy, while the second is for the specific dissipation rate ($w$), where:
$w=\frac{\epsilon}{\left(\beta^{*} k\right)}$ (9)
The equations of the $(k-w)$ model are listed below:
Eddy viscosity:
$v_{T}=\frac{k}{w}$ (10)
Turbulence kinetic energy:
$\frac{\partial k}{\partial t}+U_{j} \frac{\partial k}{\partial x_{j}}=\tau_{i j} \frac{\partial U_{i}}{\partial x_{j}}-\beta^{*} k w+\frac{\partial}{\partial x_{j}}\left[\left(v+\sigma^{*} v_{T}\right) \frac{\partial k}{\partial x_{j}}\right]$ (11)
Specific dissipation rate:
$\frac{\partial w}{\partial t}+U_{j} \frac{\partial w}{\partial x_{j}}=\alpha \frac{w}{k} \tau_{i j} \frac{\partial U_{i}}{\partial x_{j}}-\beta w^{2}+\frac{\partial}{\partial x_{j}}\left[\left(v+\sigma v_{T}\right) \frac{\partial w}{\partial x_{j}}\right]$ (12)
Closure coefficients and auxiliary relations:
$\alpha=\frac{5}{9}, \beta=\frac{3}{40}, \beta^{*}=0.09, \sigma=0.5, \sigma^{*}=0.5$ (13)
The dissipation and the specific dissipation rate are connected together by the following equation:
$\epsilon=\beta^{*} w k$ (14)
A comparison has been made between Ito's experimental equation for turbulent flow and the CFD results of the three models, as shown in Figure 10. For the straight pipe, Filonenko's equation [19] has been used instead of Colebrook's equation [34] for turbulent flow, because Colebrook's relation, Eq. (15) below, is implicit in the Fanning friction factor and must be solved iteratively.
$\frac{1}{\sqrt{f}}=-4.0 \log _{10}\left[\frac{\gamma / d}{3.7}+\frac{1.256}{Re \sqrt{f}}\right]$ (15)
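Because Eq. (15) is implicit in f, it must be solved iteratively; the sketch below uses a simple fixed-point iteration and, for comparison, a commonly quoted smooth-pipe form of Filonenko's equation, which is an assumption since the exact form in [19] is not reproduced here.

```python
import math

def colebrook_fanning(Re, rel_roughness, tol=1e-10, max_iter=100):
    """Solve Eq. (15), the Fanning form of the Colebrook equation,
    by fixed-point iteration on x = 1/sqrt(f)."""
    x = 6.0  # initial guess for 1/sqrt(f)
    for _ in range(max_iter):
        x_new = -4.0 * math.log10(rel_roughness / 3.7 + 1.256 * x / Re)
        if abs(x_new - x) < tol:
            break
        x = x_new
    return 1.0 / x ** 2

def filonenko_fanning(Re):
    """Commonly quoted smooth-pipe Filonenko formula (Darcy form divided by 4);
    assumed here, since the exact form used in [19] is not reproduced."""
    return (1.82 * math.log10(Re) - 1.64) ** -2 / 4.0

for Re in (15_000, 50_000, 100_000):
    fc = colebrook_fanning(Re, rel_roughness=0.0)  # smooth pipe as a placeholder
    ff = filonenko_fanning(Re)
    print(f"Re = {Re:>6}: Colebrook f = {fc:.5f}, Filonenko f = {ff:.5f}")
```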
In terms of the coil friction factor for P=0.01m, the discrepancy between Ito's equation and the CFD coil friction factor decreases as the Reynolds number is increased, and the overall trend is quite satisfactory. A first impression, particularly at Re=15,000, might be that the prediction is wrong because of the apparent gap between the experimental and CFD results; in fact, the difference does not exceed 0.5%, while for the other Reynolds numbers it is much smaller, around 0.05%. These results reflect why the STD $(k-\epsilon)$ model is considered the workhorse for most engineering flow applications in industry, in spite of its limitations and shortcomings (for example, numerical stiffness and poor performance in complex flows with steep curvature and strong pressure gradients), since it is robust and computationally cheap [35].
Turning to the results of the second model (P=0.05m): at Re=15,000 the coil friction factor is higher than Ito's equation, but lower than its equivalent for P=0.01m. However, the difference between the coil friction factor values for P=0.01m and P=0.05m does not exceed 0.165%. Given the lack of information in the literature, Ito's equation may also be applicable for P=0.05m. The third model (P=0.25m) follows the same trend and its coil friction factor approaches that of the straight pipe. The difference between the coil friction factors of all models and Ito's equation does not exceed 0.5%, which may be considered too small to be significant.
Figure 10. Log10(Friction factor) versus Log10(Re) using STD $(k-\epsilon)$ model
Viewing Figure 11, it can be seen that the differences between the wall shear stress values at Re=15,000 are relatively small, and these differences increase as the Reynolds number is increased. The wall shear stress is directly proportional to the Reynolds number; in contrast, the coil friction factor is inversely proportional to the Reynolds number, because the coil friction factor is inversely proportional to the average flow velocity.
Figure 11. Wall shear stress versus Reynolds number using STD $(k-\epsilon)$ model
The non-uniformity of the pressure distribution becomes more pronounced owing to the increase in turbulence intensity and kinetic energy, particularly near the wall; the inertia of the fluid motion is thereby increased, which enhances the turbulent mixing of the flow. The adverse pressure gradient induced by the curvature of the helically coiled pipe causes an increase in pressure near the inner edge of the pipe, owing to the reduction in fluid particle velocities, while the outer edge experiences the opposite effect [36], as shown in Figure 12 below. For P=0.25m and Re=50,000, the CFD simulation of the third model gave an overestimated wall shear stress: the pressure difference between the first and second planes was quite high and consequently led to a wall shear stress value greater than its equivalent at P=0.05m for the same Reynolds number, which is physically incorrect, since the STD $(k-\epsilon)$ model performs poorly under severe pressure gradients and the wall shear stress for P=0.25m must be lower than for P=0.05m at the same Reynolds number. This error was corrected by calculating the k and epsilon values and setting them in the inlet boundary conditions instead of the turbulence intensity and hydraulic diameter. k and epsilon were calculated using Eqns. (16) and (17) below [37]:
$k=\frac{2}{3}\left(U \times T_{i}\right)^{2}$ (16)
$\varepsilon=C_{\mu}^{3 / 4} \frac{k^{3 / 2}}{l}$ (17)
Figure 12. Pressure contour of the first plane for P=0.01m at Re=15,000 using STD $(k-\epsilon)$ model
The turbulent kinetic energy and dissipation rate were computed using the average velocity for Re=50,000 and $T_{i}$=5% in Eqns. (16) and (17), which results in k=0.168 m²/s² and $\varepsilon$=32.4 m²/s³. This correction gives a reasonably acceptable result for the wall shear stress, as expected, as shown in Figure 13.
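The quoted values can be reproduced with Eqns. (16) and (17) under two assumptions made here for illustration only: water with $\nu = 10^{-6}$ m²/s (so that U = Re ν/d = 10 m/s at Re = 50,000) and a turbulence length scale l = 0.07 d.

```python
C_MU = 0.09

def inlet_k(U, Ti):
    """Eq. (16): k = (2/3) * (U * Ti)**2."""
    return (2.0 / 3.0) * (U * Ti) ** 2

def inlet_epsilon(k, l):
    """Eq. (17): eps = C_mu**0.75 * k**1.5 / l."""
    return C_MU ** 0.75 * k ** 1.5 / l

# Assumptions made for this illustration only: water with nu = 1e-6 m^2/s,
# and a turbulence length scale l = 0.07 * d.
Re, d, nu, Ti = 50_000, 0.005, 1.0e-6, 0.05
U = Re * nu / d                    # bulk velocity = 10 m/s
k = inlet_k(U, Ti)                 # ~0.167 m^2/s^2 (the text quotes 0.168)
eps = inlet_epsilon(k, 0.07 * d)   # ~32 m^2/s^3 (the text quotes 32.4)
print(f"U = {U:.1f} m/s, k = {k:.3f} m^2/s^2, eps = {eps:.1f} m^2/s^3")
```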
It can be seen that the maximum-velocity region grows as the Reynolds number is increased. The velocity profile plays an important role in the unsteadiness of the flow. The turbulent kinetic energy also increases, since it is directly proportional to the square of the fluctuating velocity. The Dean vortices are distorted as the Reynolds number increases, because the high flow velocity affects their configuration. For P=0.05m, the maximum-velocity region is smaller, as indicated in Figure 14, but the overall trend is the same as for the first model (P=0.01m) apart from the velocity magnitude. The streamlines show that the secondary flow intensity increases as the Reynolds number is increased, with the distortion of the flow paths clearly indicated in Figure 15.
Figure 13. Velocity contour and vectors of the first plane for P=0.01m using STD $(k-\epsilon)$ model
Figure 14. Velocity contour of the first plane for P=0.05m using STD $(k-\epsilon)$ model
Turning to the velocity contours of the third model (P=0.25m): as a consequence of increasing the pitch size, the influence of the centrifugal forces is greatly reduced, which in turn produces a different velocity profile in comparison with the first and second models, as shown in Figure 16.
Figure 17 confirms the effect of the centrifugal forces associated with the formation of the secondary flow, as explained earlier for the velocity contours. For P=0.01m, the strong secondary flow makes the pressure at the inner edge of the pipe relatively high in comparison with its equivalents for P=0.05m and P=0.25m. For P=0.05m, the pressure effect at the inner edge is slightly weaker, owing to the lower fluid particle velocities indicated in Figure 14. For P=0.25m, the pressure distribution is almost uniform, confirming that the effect of the secondary flow is nearly depleted in the third model, as indicated in Figure 17.
In Figure 18, it can be seen that at Re=15,000 the STD $(k-w)$ model predicts higher values of the coil friction factor than the STD $(k-\epsilon)$ model. The STD $(k-w)$ model results are more rigorous than the STD $(k-\epsilon)$ model results. Since a fine mesh is used in the CFD simulation, the solver switches automatically, based on the $y^{+}$ value, from a wall-function treatment to a low-Reynolds-number approach, consequently giving a more accurate near-wall treatment, particularly in wall-bounded turbulent flows (Fluent, 2006). Moreover, the STD $(k-w)$ model performs better than the STD $(k-\epsilon)$ model under adverse pressure gradients and does not employ damping functions in its formulation.
At Re=50,000 and Re=100,000, both models give satisfactory results; the difference between them is not more than 0.07%, which is acceptable for engineering design. Three parameters affect the flow in a helically coiled pipe: the Dean number, the pitch size and the curvature ratio.
In Figure 19, a comparison of the wall shear stress has been made between the results obtained from the STD $(k-\epsilon)$ and $(k-w)$ models. The STD $(k-\epsilon)$ model predicts higher shear stress values than the STD $(k-w)$ model. In fact, the difference is hard to recognise in Figure 19 and, for this reason, the plot was magnified for Re=15,000 only, to show the difference in wall shear stress clearly, as shown in Figure 20.
Figure 15. Vortices formulation for P=0.05m STD $(k-\epsilon)$ model
Figure 17. Pressure distribution at Re=15,000 STD $(k-\epsilon)$ model
Figure 18. Friction factor comparison using STD $(k-\epsilon)$ and STD $(k-w)$ model
Figure 19. Wall shear stress comparison using STD $(k-\epsilon)$ and STD $(k-w)$ model
Figure 20. Wall shear stress for Re=15,000 using STD $(k-\epsilon)$ and STD $(k-w)$ model
A comparison has been made between the velocity contours obtained from the STD $(k-\epsilon)$ and $(k-w)$ models. Figure 21 shows that the STD $(k-\epsilon)$ model predicts a higher range of velocities, denoted in red, than the STD $(k-w)$ model. In the STD $(k-\epsilon)$ model, the high-velocity fluid particles fill approximately half of the pipe, so the effect of the centrifugal forces is also greater and consequently the average pressure on this plane is higher than its equivalent in the STD $(k-w)$ model; this explains the earlier observation that the STD $(k-\epsilon)$ model predicts higher shear stress values than the STD $(k-w)$ model. Increasing the pitch reduces the proportion of high-velocity fluid particles, which weakens the centrifugal forces and causes a drop in the pressure gradient over the whole domain, as occurs for P=0.05m in Figure 22 below.
Figure 21. Comparison of the first plane velocity contours for P=0.01m
In Figure 22, it can be seen that the STD $(k-w)$ model predicts higher fluid particle velocities than its STD $(k-\epsilon)$ counterpart. The difference in the captured velocity magnitudes does not exceed 4.7%, which is considered acceptable in the CFD field. This difference is attributable to the formulation of each turbulence model in terms of accuracy and stability.
Turning to the velocity contours for P=0.25m, the maximum velocity values over the whole domain are the same for both turbulence models, unlike for P=0.01m and P=0.05m. However, the velocity distribution in the first plane is not identical, as shown in Figure 23. A small difference in velocity distribution within the planes may cause a large pressure variation: the STD $(k-\epsilon)$ model predicts higher pressure values than the STD $(k-w)$ model, which makes the pressure difference between the first and second planes lower than in the STD $(k-w)$ model. As a result, the STD $(k-w)$ model predicts higher wall shear stress values than the STD $(k-\epsilon)$ model (see Figure 19).
The pressure distribution of the STD $(k-w)$ model does not look very different from that obtained from the STD $(k-\epsilon)$ model, as shown in Figure 24. The remarkably uniform pressure distribution of the STD $(k-w)$ model at P=0.25m is approximately the same as that of the STD $(k-\epsilon)$ model, which means that the pitch of the third model is too large to induce high centrifugal forces and to generate an intensive secondary flow.
The pitch size plays a significant role in determining the secondary flow intensity. Increasing the pitch size damps out the turbulent fluctuations of the flowing fluid particles, and consequently the onset of turbulent flow is delayed in comparison with a straight pipe. For example, based on Ito's equation, the laminar regime in the first model extends up to a Reynolds number of about 9581.
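The quoted value of 9581 is reproduced almost exactly by a transition correlation of the form $Re_{\mathrm{crit}} = 2000\,[1 + 13.2\,(d/D)^{0.6}]$; as the text does not restate the formula it attributes to Ito, the sketch below should be read as illustrative of how curvature delays transition rather than as the authors' exact relation.

```python
def critical_reynolds(d, D):
    """Critical Reynolds number for transition in a coiled pipe, using the
    correlation form 2000 * (1 + 13.2 * (d/D)**0.6). This form is assumed
    here because it reproduces the value quoted in the text; the exact
    correlation used by the authors is not given."""
    return 2000.0 * (1.0 + 13.2 * (d / D) ** 0.6)

print(f"Re_crit ~ {critical_reynolds(0.005, 0.04):.0f}  (straight pipe: ~2300)")
```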
Figure 24. Comparison of the first plane velocity contours at Re=15,000
In this research, the influence of changing the pitch size was investigated by testing three different models in turbulent flow. The investigation was based on the coil friction factor, the wall shear stress, and the velocity and pressure contours. Two turbulence models were utilised, the STD $(k-\epsilon)$ and the STD $(k-w)$ model. It was found that the STD $(k-w)$ model gives more accurate results than the STD $(k-\epsilon)$ model, owing to differences in their near-wall treatment; nevertheless, the STD $(k-\epsilon)$ model provides a good estimate for preliminary results. For the straight-pipe reference in turbulent flow, Filonenko's equation was used instead of Colebrook's equation, because the Fanning friction factor appears implicitly in Colebrook's relation and its dependence on the Reynolds number is therefore awkward to evaluate. A comparison was made between the turbulence models to observe the differences in coil friction factor and wall shear stress. The findings of this study indicate that the Dean number has a stronger effect on reducing the coil friction factor than the increase in pitch dimension.
[1] Cioncolini, A., Santini, L. (2006). An experimental investigation regarding the laminar to turbulent flow transition in helically coiled pipes. Experimental Thermal and Fluid Science, 30(4): 367-380. https://doi.org/10.1016/j.expthermflusci.2005.08.005
[2] Ito, H. (1987). Flow in curved pipes. JSME International Journal, 30(262): 543-552. https://doi.org/10.1299/jsme1987.30.543
[3] De Amicis, J., Cammi, A., Colombo, L.P., Colombo, M., Ricotti, M.E. (2014). Experimental and numerical study of the laminar flow in helically coiled pipes. Progress in Nuclear Energy, 76: 206-215. https://doi.org/10.1016/j.pnucene.2014.05.019
[4] Dean, W.R., Hurst, J.M. (1959). Note on the motion of fluid in a curved pipe. Mathematika, 6(1): 77-85. https://doi.org/10.1112/S0025579300001947
[5] Ligrani, P.M. (1994). A study of Dean vortex development and structure in a curved rectangular channel with aspect ratio of 40 at Dean numbers up to 430. Contractor Report.
[6] Austen, D.S., Soliman, H.M. (1988). Laminar flow and heat transfer in helically coiled tubes with substantial pitch. Experimental Thermal and Fluid Science, 1(2): 183-194. https://doi.org/10.1016/0894-1777(88)90035-0
[7] Hüttl, T.J., Friedrich, R. (2000). Influence of curvature and torsion on turbulent flow in helically coiled pipes. International Journal of Heat and Fluid Flow, 21(3): 345-353. https://doi.org/10.1016/S0142-727X(00)00019-9
[8] Yamamoto, K., Yanase, S., Yoshida, T. (1994). Torsion effect on the flow in a helical pipe. Fluid Dynamics Research, 14(5): 259-273.
[9] Hüttl, T.J., Friedrich, R. (2001). Direct numerical simulation of turbulent flows in curved and helically coiled pipes. Computers & Fluids, 30(5): 591-605. https://doi.org/10.1016/S0045-7930(01)00008-1
[10] Castiglia, F., Chiovaro, P., Ciofalo, M., Liberto, M., Maio, P., Piazza, I.D., Giardina, M., Mascari, F., Morana, G., Vella, G. (2010). Modelling flow and heat transfer in helically coiled pipes. Part 3: Assessment of turbulence models, parametrical study and proposed correlations for fully turbulent flow in the case of zero pitch. Report Ricerca di Sistema Elettrico.
[11] Lin, C.X., Ebadian, M.A. (1997). Developing turbulent convective heat transfer in helical pipes. International Journal of Heat and Mass Transfer, 40(16): 3861-3873. https://doi.org/10.1016/S0017-9310(97)00042-2
[12] Rogers, G.F.C., Mayhew, Y.R. (1964). Heat transfer and pressure loss in helically coiled tubes with turbulent flow. International Journal of Heat and Mass Transfer, 7(11): 1207-1216. https://doi.org/10.1016/0017-9310(64)90062-6
[13] Yang, G., Ebadian, M.A. (1996). Turbulent forced convection in a helicoidal pipe with substantial pitch. International Journal of Heat and Mass Transfer, 39(10): 2015-2022. https://doi.org/10.1016/0017-9310(95)00303-7
[14] Mori, Y., Nakayama, W. (1967). Study on forced convective heat transfer in curved pipes: (3rd report, theoretical analysis under the condition of uniform wall temperature and practical formulae). International Journal of Heat and Mass Transfer, 10(5): 681-695. https://doi.org/10.1016/0017-9310(67)90113-5
[15] Kirpikov, A.V. (1957). Heat transfer in helically coiled pipes. Trudi. Moscov. Inst. Khim. Mashinojtrojenija, 12: 43-56.
[16] Seban, R.A., McLaughlin, E.F. (1963). Heat transfer in tube coils with laminar and turbulent flow. International Journal of Heat and Mass Transfer, 6(5): 387-395. https://doi.org/10.1016/0017-9310(63)90100-5
[17] Bai, B., Guo, L., Feng, Z., Chen, X. (1999). Turbulent heat transfer in a horizontal helically coiled tube. Heat Transfer-Asian Research: Co-sponsored by the Society of Chemical Engineers of Japan and the Heat Transfer Division of ASME, 28(5): 395-403. https://doi.org/10.1002/(SICI)1523-1496(1999)28:5<395::AID-HTJ5>3.0.CO;2-Y
[18] Ali, S. (2001). Pressure drop correlations for flow through regular helical coil tubes. Fluid Dynamics Research, 28(4): 295-310.
[19] Kedzierski, M., Kim, M.S. (1996). Single-phase heat transfer and pressure drop characteristics of an integral-spine fin within an annulus. Journal of Enhanced Heat Transfer, 3(3): 201-210. https://doi.org/10.1615/JEnhHeatTransf.v3.i3.40
[20] White, C.M. (1929). Streamline flow through curved pipes. Proceedings of the Royal Society of London. Series A, Containing Papers of a Mathematical and Physical Character, 123(792): 645-663. https://doi.org/10.1098/rspa.1929.0089
[21] White, C.M. (1932). Fluid friction and its relation to heat transfer. Trans. Inst. Chem. Eng. (London), 10: 66-86.
[22] Prandtl, L. (1949). Fuhrer dmchdie Stromungslehre, 3rd Edition, 159, Braunsschweigh; English Transl., Essentials of Fluid Dynamics, Blackie and Son, London, 168.
[23] Adler, M. (1934). Strömung in gekrümmten Rohren. ZAMM-Journal of Applied Mathematics and Mechanics/Zeitschrift für Angewandte Mathematik und Mechanik, 14(5): 257-275.
[24] Hasson, D. (1955). Streamline flow resistance in coils. Res. Corresp, 1: S1.
[25] Mishra, P., Gupta, S.N. (1979). Momentum transfer in curved pipes. 1. Newtonian fluids. Industrial & Engineering Chemistry Process Design and Development, 18(1): 130-137. https://doi.org/10.1021/i260069a017
[26] Patankar, S.V., Spalding, D.B. (1972). A calculation procedure for heat, mass and momentum transfer in three-dimensional parabolic flows. International Journal of Heat and Mass Transfer, 15(10): 1787-1806. https://doi.org/10.1016/0017-9310(72)90054-3
[27] Chou, P.Y. (1945). On velocity correlations and the solutions of the equations of turbulent fluctuation. Quarterly of Applied Mathematics, 3(1): 38-54.
[28] Harlow, F.H., Nakayama, P.I. (1968). Transport of turbulence energy decay rate (No. LA-3854). Los Alamos Scientific Lab., N. Mex.
[29] Davidov, B.I. (1961). On the statistical dynamics of an incompressive fluid. Doklady Academy Nauka SSSR, 136: 47.
[30] Jones, W.P., Launder, B.E. (1972). The prediction of laminarization with a two-equation model of turbulence. International Journal of Heat and Mass Transfer, 15(2): 301-314.
[31] Launder, B.E., Sharma, B.I. (1974). Application of the energy-dissipation model of turbulence to the calculation of flow near a spinning disc. Letters in Heat and Mass Transfer, 1(2): 131-137.
[32] Wilcox, D.C. (1988). Turbulence Modeling for CFD. DCW Industries., United States.
[33] Launder, B.E., Morse, A., Rodi, W., Spalding, D.B. (1973). Prediction of free shear flows: A comparison of the performance of six turbulence models. NASA. Langley Res. Center Free Turbulent Shear Flows.
[34] Colebrook, C.F., White, C.M. (1937). Experiments with fluid friction in roughened pipes. Proceedings of the Royal Society of London. Series A-Mathematical and Physical Sciences, 161(906): 367-381. https://doi.org/10.1098/rspa.1937.0150
[35] Menter, F. (1993). Zonal two equation kw turbulence models for aerodynamic flows. 23rd Fluid Dynamics, Plasmadynamics, and Lasers Conference. https://doi.org/10.2514/6.1993-2906
[36] Kalpakli, A. (2012). Experimental study of turbulent flows through pipe bends. Doctoral dissertation, KTH Royal Institute of Technology.
[37] Versteeg, H.K., Malalasekera, W. (2007). An introduction to computational fluid dynamics: The finite volume method. Pearson Education.
Degenerate elliptic operators as regularizers
Author: R. N. Pederson
Journal: Trans. Amer. Math. Soc. 280 (1983), 533-553
MSC: Primary 35J70
DOI: https://doi.org/10.1090/S0002-9947-1983-0716836-8
MathSciNet review: 716836
Abstract: The spaces $\mathcal{K}_{m,k}$, introduced in the Nehari Volume of Journal d'Analyse Mathématique for nonnegative integer values of $m$ and arbitrary real values of $k$, are extended to negative values of $m$. The extension is consistent with the equivalence $\|\zeta^{j}u\|_{m,k}\sim \|u\|_{m,k-j}$, the inequality $\|D^{\alpha}u\|_{m,k} \leqslant \mathrm{const}\,\|u\|_{m+|\alpha|,k+|\alpha|}$, and the generalized Cauchy-Schwarz inequality $|\langle u,v\rangle| \leqslant \|u\|_{m,k}\,\|v\|_{-m,-k}$. (Here $\langle u,v\rangle$ is the $L_{2}$ scalar product.) There exists a second order degenerate elliptic operator which maps $\mathcal{K}_{m,k}$ one-to-one onto $\mathcal{K}_{m-2,k}$. These facts are used to simplify proofs of regularity theorems for elliptic and hyperbolic problems and to give new results concerning growth rates at the boundary for the coefficients of the operator and the forcing function. (See Notices Amer. Math. Soc. 28 (1981), 226.)
Accessibility and territorial cohesion in a case of transport infrastructure improvements with changing population distributions
Chris Jacobs-Crisioni (ORCID: orcid.org/0000-0001-6225-4813), Filipe Batista e Silva, Carlo Lavalle, Claudia Baranzelli, Ana Barbosa & Carolina Perpiña Castillo
European Transport Research Review, volume 8, Article number: 9 (2016)
In the last decade or so several studies have looked into the impacts of transport infrastructure improvements on decreasing territorial disparities. In those studies population levels are usually assumed static, although future population levels likely change in response to changing accessibility levels as well as to other factors. To test how much accessibility impacts may be affected by changes in population levels, this study explores the effects of foreseeable population changes on the accessibility improvements offered by large scale transport infrastructure investments.
In this study we compare accessibility measures from four cases, namely the current situation; one case in which only transport investments are taken into account; and two cases that include transport investments and two scenarios with differing future population distributions that in turn are simulated by the LUISA land-use model. The modelled transport investments are assumed to improve travel times. The study concentrates on accessibility effects in Austria, Czech Republic, Germany and Poland. To provide a reference to the found results, the same computations are repeated with historical population and road network changes.
The results indicate that differences in local population levels have a limited effect on average accessibility levels, but may have a large impact on territorial inequalities related to accessibility.
The findings in this study underpin the importance of incorporating future local population levels when assessing the impacts of infrastructure investments on territorial disparities.
Accessibility deals with the level of service provided by transport networks, given the spatial distribution of activities [1]. Improving accessibility is an important means to increase social and economic opportunities [1, 2] and accessibility considerations are deemed an important component of sustainable development [3]. In Europe, a substantial amount of public funding is dedicated to increase accessibility in peripheral and/or landlocked regions; in particular through the European Union's (EU) cohesion policy instruments [4]. The territorial cohesion aim of those policies is usually interpreted as the aim to decrease disparities between European regions [5]. To do so, the EU's cohesion policies provide funding for regionally tied projects in a wide range of sectors with the aim to "kick-start growth, employment, competitiveness, and development on a sustainable basis" [6, p. 13]. The regional investment program includes a considerable amount of funding available for transport infrastructure improvements; but funding is also available for other aims such as environmental protection, promoting tourism, and urban and rural regeneration.
To assess whether transport infrastructure improvements have the intended effect of decreasing disparities in accessibility among European regions, recent studies have employed sophisticated accessibility measures and inequality indicators [5, 7, 8]. The cohesion effects that those measures yield are varied, depending in particular on the analysed transport mode. In general, road link upgrades seem to increase territorial cohesion [8, 9], while in contrast high speed railway links accentuate differences in accessibility between regions [5, 7]. Most accessibility measures are based on two dimensions: on the one hand the traveltime or generalized travel-cost needed to overcome geographic distance making use of available transport options; and on the other hand the spatial distribution of activities (commonly using GDP or population counts as a proxy). As is the case in all previously mentioned case studies, the effects of transport infrastructure improvements on accessibility are usually taken into account by known reductions in traveltime or generalized cost, while spatial activity distributions are often presumed static. However, the spatial distribution of activities is surely not static, and in fact adjusts to changing accessibility levels over time [10–12]. Thus, if spatial activity distributions adjust to changing accessibility levels, ex-ante evaluations of infrastructure studies may benefit from taking reciprocities with spatial activity distributions into account – for example to assess the robustness of found accessibility benefits with differing population growth scenarios, or to compose complementary spatial planning strategies that optimize the effectiveness of transport infrastructure investments.
Accessibility has received considerable attention in the literature. For example, the effect that accessibility improvements may have on activity distributions has been studied repeatedly [10–15]. Other studies have researched spill-over effects of transport infrastructure improvements [8, 16]. The effect that spatial activity distributions may have on accessibility, as studied in this paper, has received less attention. Geurs and Van Wee [17] compared the land resource, accessibility and transport consumption impacts of the relatively compact post-war urban development in the Netherlands with the outcomes of alternate land-use planning policies. Their study shows slightly better aggregate accessibility levels as a result of compact development, mainly due to lower congestion levels. Wang et al. [18] compare accessibility levels and associated social welfare effects in Madrid with different transport policy measures, while explicitly modelling changes in transport behaviour and land-use patterns. Other studies in the Netherlands have also explored land-use impacts on accessibility [19, 20], which in general confirm that land-use policies may increase aggregate accessibility levels and that tailored spatial planning can increase the benefits of transport infrastructure investments.
All of the abovementioned studies focus on total or average accessibility changes, and it is still unclear to what degree the spatial redistribution of activities may affect disparities in accessibility, in particular in regions where general activity levels are decreasing. This article will add to the available literature by looking into how local population changes may affect found levels of territorial disparities in accessibility. Because of computational limitations the study at hand had to be limited to four countries. Austria, Czech Republic, Germany and Poland have been selected, because they make a spatially adjacent but mixed set of new and old member states that differ substantially in current levels of infrastructure endowment (with much larger endowments in Austria and Germany) and in levels of transport infrastructure investment funded by EU cohesion policies (with much more investment in Czech Republic and Poland). Results from four cases will be compared: a reference case that comprises the current road network and population distribution in Europe in 2006 (case I); a case in which population distributions are from 2006, but road network improvements are imposed that are assumed to gradually decrease travel times between 2006 and 2030 (case II); and two cases that consider the same road network improvements, as well as modelled future population distributions (Compact scenario: case III and Business As Usual or BAU scenario: case IV). The latter two cases assume identical regional population projections, but differ in assumed local spatial planning policies, and therefore have different intra-regional population patterns. The modelled future road networks and population distributions are mostly based on well-documented and empirically tested relations, but to some extent rely on expert judgement, which in turn may raise doubts concerning their validity; a common problem for scenario approaches [21]. To provide some reference, this paper will compare the outcomes of relevant indicators with the same indicators computed for changes in observed population levels and accessibility levels between 1971 and 2011. We must nevertheless stress that past changes are not necessarily indicative of future changes. Furthermore, the uncertainties surrounding future projections are not problematic as long as the simulation outcomes are used for what they are: maps showing potential future developments, given many scenario-related assumptions.
The here presented results were produced in a land-use modelling exercise that aimed to look into how EU cohesion policies and other EU policies with spatial relevance may affect land-use, accessibility and a range of environmental indicators. The mentioned study is comprehensively documented in Batista e Silva et al. [22]. The study assumes a number of road network improvements funded by the EU's regional cohesion policy program for the years 2014 to 2020. A part of those improvements is known in advance, and a part consists of modelled upgrades given available funding at regional level. Population redistributions are modelled using the European Commission's platform for Land-Use-based Integrated Sustainability Assessment (LUISA) [23]. In this section we will describe the used land-use modelling platform, the way by which cohesion policy impacts are modelled with it, and the applied methods to evaluate cohesion impacts of the modelled outcomes.
The LUISA platform
LUISA is a dynamic spatial modelling platform that simulates future land-use changes based on biophysical and socio-economic drivers and is specifically designed to assess land-use impacts of EU policies. Its core was initially based on the Land Use Scanner [24, 25], CLUE and Dyna-CLUE land-use models [26–28], but its current form is the result of a continuous development effort by the Joint Research Centre [23] that owes much to the highly flexible GeoDMS [29] modelling software in which LUISA is implemented. LUISA downscales regional projected future land use demands to a fine spatial resolution and thus models changes in population and land use with reference to CORINE land-use/land-cover maps [30] and a fine resolution population distribution map [31]. It allocates land uses and population per year on a 100 m spatial grid. It discerns a number of land-use types, which can roughly be separated in urban, industrial, agricultural and natural land uses. The timeframe for which LUISA simulates land-use changes varies per study; for this study the model ran for the period from 2006 to 2030.
As can be seen in Fig. 1, LUISA is structured in a demand module, a land-use allocation module and an indicator module. At the core of LUISA is a discrete allocation method that is doubly constrained by on the one hand projected regional land demands and on the other hand regional land supply. For an elaborate description of the land allocation method we refer to Hilferink and Rietveld [24] and Koomen et al. [25]. The regional land demands are provided in the demand module by sector-specific economic models, such as the CAPRI model for agricultural land demands [32] and the GEM-E3 model for industrial land demands [33]. Within its constraints, the model attempts to achieve an optimal land-use distribution based on spatially varying local suitabilities for competing land uses. Those suitability values for given land uses, in turn, are derived from fitting biophysical, socio-economic and neighbourhood factors on spatial land-use patterns with a multinomial discrete choice method. LUISA is run for each country independently. Its outcomes are population distributions, spatial land-use patterns and accessibility values for each of the model's time steps. Those outcomes are used to inform local suitability values in the next time step and to compute policy-relevant indicators of the impacts of land-use change in the indicator module. A broad range of indicators is computed within LUISA, of which cohesion effects of policy scenarios are particularly relevant for this paper.
Flow chart of the LUISA land-use model
Two recent additions to LUISA set it apart from similar land-use models. The first addition considers the parallel endogenous allocation of number of people to the model's 100 m grid, which is described here briefly; for a detailed overview see Batista e Silva et al. [22]. In LUISA's people allocation method, in each time step a region's population is distributed over space. The distributed population and threshold rules are subsequently used to simulate the conversion to urban and abandoned urban land uses before all other simulated land-use types are allocated in the discrete land-use allocation method. Following observed land-use and population distributions, pixels become urban if their modelled population exceeds 6 inhabitants; conversely, urban pixels become 'abandoned' when their modelled population declines below 2 inhabitants. The distribution of population is foremost based on a `population potential' function that describes likely population counts per grid unit. This is a linear function incorporating neighbourhood interdependencies, the log-linear distance to the closest road, current potential accessibility, slope and current land uses; it is fitted on the observed 2006 population distribution by means of spatial econometric methods. For an overview of spatial econometric methods see Anselin [34]. Population allocation in LUISA is subsequently restricted by three factors. Regional urban land demands are accounted for, implying that minimum and maximum limits are imposed on the number of pixels that reach the urbanization threshold. Regional urban land demands are based on: 1) recent Europop 2010 population projections [35]; 2) an assumed Europe-wide convergence of average household sizes on the very long run (i.e., to 1.8 in all regions by 2100, so that in most regions a limited decrease in household size is modelled by 2030); and 3) extrapolated historical trends of regional urban land consumption per household. In each time step the population distribution method allocates the net regional population growth in a region, as projected by Eurostat, as well as 10 % of the pre-existing population in order to take internal movements into account. The 10 % internally moving population is a coarse estimate of internal movements that is used because projected internal migration numbers are unavailable. Lastly, the method is restricted by per-pixel housing supply, which is approximated in terms of inhabitant capacity in the model and is instrumental in imposing a larger degree of inertia on the model results. Approximated housing supply increases potential population if current population undershoots population capacity, and it penalizes population potential if population counts are higher than housing supply. Every five time steps it assumes the values from current modelled population counts to proxy structural changes in housing supply.
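A highly simplified sketch of this redistribution step and the urbanisation threshold rules is given below; it is a toy illustration, not the LUISA implementation, and all numbers in the example are arbitrary.

```python
import numpy as np

def redistribute(pop, potential, regional_growth, move_share=0.10):
    """Toy version of the redistribution step described above: the net regional
    growth plus 10 % of the pre-existing population is re-allocated over pixels
    in proportion to a pre-computed population-potential surface."""
    movers = regional_growth + move_share * pop.sum()
    new_pop = (1.0 - move_share) * pop + movers * potential / potential.sum()
    urban = new_pop >= 6                      # pixels exceeding 6 inhabitants become urban
    abandoned = (pop >= 6) & (new_pop < 2)    # previously urban pixels dropping below 2
    return new_pop, urban, abandoned

# Example with a tiny one-dimensional "region" of five pixels (arbitrary values):
pop = np.array([0.0, 3.0, 8.0, 12.0, 1.0])
potential = np.array([0.1, 0.3, 0.8, 1.0, 0.2])
new_pop, urban, abandoned = redistribute(pop, potential, regional_growth=-1.0)
print(new_pop.round(1), urban, abandoned)
```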
A second recent addition to LUISA is the inclusion of endogenous potential accessibility as a suitability factor for its land-use allocation and population distribution method. Here the model computes the following equation for each time step:
$$ {A}_i={\displaystyle {\sum}_{i=1}^n\frac{P_j}{f\left({c}_{ij}+{c}_j\right)},} $$
in which accessibility levels A for each origin point i are computed using current population counts P in destination zones j, the results of a function of traveltime c between i and j, and a zone-specific internal traveltime c j . The origin points are equally distributed throughout Europe with roughly 15 km intervals. Within the model, the destination zones are hybrid sets that differ per modelled country and consist of municipalities within, and NUTS2 regions outside of the modelled countries. Although national borders impose substantial barriers on levels of spatial interaction and urban development near national borders [36–38], no penalties on potential cross-border interactions are currently imposed on accessibility values. Population counts are aggregated from the model's previous time step's population distribution outcomes in the modelled country. Regional Europop2010 population projections are used for the remaining regions. Traveltimes are obtained from the TRANS-TOOLS road network [39] using a shortest path algorithm assuming free-flow traveltimes. For the purpose of this study, current and future traveltimes are distinguished (see the following section). To account for the unknown distribution of destinations within zones an additional traveltime is added that essentially depends on a destination zone's geographical area. It uses the Frost and Spence [40] approach to approximate internal Euclidean distances; thus, internal distance d j is assumed to be \( {d}_j=0.5\sqrt{ARE{A}_j/\pi } \). Subsequently, internal travel times c j are computed from d j by means of a function in which effective travel speeds in km/h are obtained with the fitted function 10.66 + 13.04 ln (d j ), with a minimum of 5 km/h imposed on very small zones (for details on the fitted function see [38]). Lastly the distance decay function f(c ij ) in the model is of the form c ij 1.5. The form of the distance decay function was chosen among many tested in the population potential fitting exercise because, in terms of explained variance, it fitted best on observed population distributions.
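A compact sketch of this accessibility computation, including the internal travel-time approximation and the $c_{ij}^{1.5}$ decay, is given below; the zones, populations and travel times are made up, and the time unit (minutes) is an assumption since the text does not state it.

```python
import numpy as np

def internal_time_minutes(area_km2):
    """Internal travel time of a destination zone: d_j = 0.5*sqrt(A/pi) in km,
    converted with the fitted speed 10.66 + 13.04*ln(d_j) km/h, floored at 5 km/h."""
    d = 0.5 * np.sqrt(area_km2 / np.pi)
    speed = np.maximum(10.66 + 13.04 * np.log(d), 5.0)
    return 60.0 * d / speed

def potential_accessibility(travel_time, population, zone_area_km2, beta=1.5):
    """A_i = sum_j P_j / (c_ij + c_j)**beta, with travel_time an (origins x zones)
    matrix. Time units are assumed to be minutes; the source does not state them."""
    c = travel_time + internal_time_minutes(zone_area_km2)[None, :]
    return (population[None, :] / c ** beta).sum(axis=1)

# Tiny illustration with made-up zones:
tt = np.array([[10.0, 35.0, 60.0],    # minutes from one origin to three zones
               [45.0, 12.0, 25.0]])   # and from a second origin
pop = np.array([50_000.0, 120_000.0, 20_000.0])
area = np.array([40.0, 150.0, 900.0])  # km^2
print(potential_accessibility(tt, pop, area))
```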
The feedbacks between land-use and transport that are modelled in LUISA are characteristic of land-use/transport interaction models (LUTI). In LUISA, just as in most other LUTI [41], accessibility is used as an important factor in the location decisions that cause land-use change, and as an indicator of socio-economic welfare. For an overview of LUTI models we refer to Wegener [42]. Compared to other recently applied LUTI, for example MARS [18, 43] or TIGRIS XL [44], LUISA has a larger geographic extent (all of the European Union), operates at a finer resolution (the 100 m pixel level), takes into account a broader set of land uses (including agricultural and forest land uses), and reports on a much more diverse set of environmental and economic indicators (including for example accessibility and land-use efficiency, but also ecosystem services, freshwater consumption and energy provision). However, currently LUISA does not take into account some of the characteristic strongpoints of other LUTI such as the modelling of network use and congestion, the inclusion of multiple transport modes, and the incorporation of other human activities besides residence, such as employment. Future development plans for LUISA do include the estimation of transport network use and a further breakdown of human activity, if sufficiently detailed data becomes available on a Europe-wide scale. For the article at hand the model's shortcomings imply limitations to the breadth of the applied methods and drawn conclusions. Thus, for example the effects of transport investments that aim to alleviate congestion cannot be explored, and impacts related to job-market dynamics and job-market access cannot be presented.
Modelling cohesion policy impacts
LUISA allows multi-policy scenarios to be accommodated, so that several interacting and complementary dimensions of spatially relevant policies are represented. Often LUISA inherits policy provisions from other sector models. For example, the CAPRI model from which agricultural land demands are obtained takes the EU's Common Agricultural Policy on board, and the macro-economic models that project future industrial land demand pass through energy and economic policies [45, 46]. Other policies such as nature protection schemes and transport infrastructure improvements are modelled in LUISA through assumed impacts on local suitability factors.
To assess the territorial consequences of EU cohesion policies, a number of impacts are inherited from upstream models; the most important example here is that the impacts of cohesion policy on industrial land demand were obtained using forecasts of economic growth from the Rhomolo model [6]. Regional population projections were assumed not to change as a result of the cohesion policies. At the local level, suitability factors were adapted in order to assess the impacts of cohesion policies on the spatial distribution of people and land uses. Only aspects of the cohesion policy with a clear impact on land-use patterns were taken into account: investments in transport networks, urban regeneration, research and technological development infrastructure, social infrastructure, and improvements to existing ports and airports. In this article we elaborate on how road network improvements were modelled; for an overview of the other modelled cohesion policy impacts we refer to Batista e Silva et al. [22]. We furthermore elaborate on the two contrasting urban development scenarios that were taken into account in the cohesion policy assessments.
Taking into account road network funding
The effects of future funding for motorways and for local, regional and national roads have been modelled explicitly by taking into account future changes in traveltimes and their subsequent effects on potential accessibility. The way road upgrades were incorporated in LUISA is shown schematically in Fig. 2. Because the true distribution of funding in the cohesion policy was not yet known at the time the research was conducted, the funds were assumed to be the same as in the 2007 to 2013 programme. Those funds are destined for three distinct road types, namely motorways, national roads and local roads. All modelled road network improvements were assumed to lead to traveltime improvements, either through new links identified in the TRANS-TOOLS data used, or through upgrades to the existing road network. The costs of upgrading one kilometre of lane were averaged from a European database of road construction projects that have successfully been implemented with cohesion policy funding; see EC [47]. For the purpose of this paper, the total EU investments cited for those projects were divided by the length of the built road and the number of constructed lanes. Subsequently, total road construction costs were estimated for the three road types based on an assumed number of lanes per type. All cost assumptions are given in Table 1. We must acknowledge that the costs quoted here are very rough estimates that do not take into account terrain conditions, nationally varying pricing structures or complex civil engineering works. These estimates have nonetheless been used because more accurate information on road construction costs was unavailable. Finally, please note that the recorded projects are only co-funded by the EU, so that only a part of the entire project costs is taken into account. These partial costs are consistent with the modelling approach, in which the effects of future EU subsidies on road network development are modelled.
Endogenous accessibility and population computations in LUISA
Table 1 Characteristics of road types as used in the upgrade funding allocation method and assumed amount of available funding
Table note: these are the costs incurred by the European Commission in projects that are only co-funded by the Commission. Total construction costs may be much higher.
Given the costs of constructing a kilometre of a certain road type, the costs of road network improvements that are known a priori were computed first. In many regions a substantial amount of funding was not depleted by those already known infrastructure developments. In such regions the remaining funding was allocated to road segments that, according to some simple rules, are likely candidates for upgrades. In that way all regional funding was allocated to road network improvements. The selected road segments had to meet the following criteria: they 1) were not already known to be upgraded; 2) had slower recorded maximum speeds than typical for the destination road type; and 3) had the highest transport demand according to a simple transport modelling exercise. That transport modelling exercise is based on a straightforward spatial interaction model of the form \( T_{ij} = P_i P_j c_{ij}^{-2} \), with demand for flows T between municipalities i and j, population counts P and traveltimes c. The demands T were allocated to the shortest path between i and j, yielding estimated flows per road segment. Given these criteria, upgrades to motorway level were allocated first, followed by upgrades to regional and local roads. This was done until no more road segments could be upgraded, either because funds were depleted or because no more segments that met the criteria were available in a region. This method assumes that network investment decisions follow an ad-hoc rationale of catering for transport demand where it is needed most. We believe this is a fair assumption as long as strategic network investment plans are unknown for the regions that receive funding. We must acknowledge that the transport demand figures used are obtained from a rather coarse method that, for example, does not take into account spatially varying car ownership or the dampening effect that national borders have on transport flows [48]. We expect that this method is nonetheless useful here to demonstrate the effects that potential infrastructure investments may have on accessibility levels. Finally, the network improvements were assumed to be completed by 2030, with linearly improving traveltimes between 2006 and 2030 that fed into the LUISA accessibility computations.
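The sketch below illustrates the two steps described above: a gravity-type demand estimate \( T_{ij} = P_i P_j c_{ij}^{-2} \) assigned to shortest paths, followed by a greedy selection of the busiest eligible segments until the regional budget runs out. It is a schematic reconstruction under our own assumptions (a networkx graph with traveltime, length and speed attributes, already-planned upgrades applied beforehand, connected network), not the code used in the study.

```python
import itertools
import networkx as nx

def allocate_upgrades(G, pop, budget, cost_per_km, target_speed):
    """Greedy road-upgrade allocation driven by a gravity-type demand model.

    G            : networkx.Graph; edges carry 'time', 'length_km', 'speed'
    pop          : dict node -> population of the municipality at that node
    budget       : remaining regional funding after known projects
    cost_per_km  : assumed upgrade cost per km for the target road type
    target_speed : typical speed of the destination road type (km/h)
    """
    # 1. Demand T_ij = P_i * P_j * c_ij^-2, assigned to shortest paths.
    flow = {tuple(sorted(e)): 0.0 for e in G.edges}
    for i, j in itertools.combinations(pop, 2):
        path = nx.shortest_path(G, i, j, weight="time")
        edges = list(zip(path[:-1], path[1:]))
        c_ij = sum(G.edges[u, v]["time"] for u, v in edges)
        t_ij = pop[i] * pop[j] * c_ij ** -2
        for u, v in edges:
            flow[tuple(sorted((u, v)))] += t_ij

    # 2. Upgrade eligible (slower-than-target) segments in order of demand.
    candidates = [e for e in flow if G.edges[e]["speed"] < target_speed]
    upgraded = []
    for e in sorted(candidates, key=flow.get, reverse=True):
        cost = cost_per_km * G.edges[e]["length_km"]
        if cost <= budget:
            budget -= cost
            G.edges[e]["speed"] = target_speed
            upgraded.append(e)
    return upgraded
```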
Two contrasting scenarios of urban development
Unfortunately, local urban planning policies and regulations are not included in LUISA, even though their effect on future local land-use patterns is presumably profound. Such local policies are excluded because consistent Europe-wide data on urban plans are as yet unavailable. To sketch the potential impacts of cohesion policies under different local planning policies, those impacts have been computed with two contrasting, stylised spatial planning regimes. The choice of planning regimes reflects the contrast between sprawled and compact urban development that is often addressed in spatial planning evaluation [17, 49]. In the Compact scenario (case III), urban development is restricted to the immediate surroundings of existing urban areas, thus leading to densification and expansion of existing urban perimeters, while limiting scattered and uncontrolled development. Because of the restricted availability of land near urban areas, this scenario additionally yields a more evenly spread urban development within regions. In the BAU scenario of urban development (case IV), urban areas are allowed to develop freely, are attracted to the areas with the highest gravitational attraction, and form relatively scattered patterns there that generally follow the main transport axes.
Measuring cohesion effects on accessibility
To study the effects of transport network improvements on accessibility, a number of accessibility measures need to be selected from the many that are available in the existing literature; see for example Geurs and Van Wee [1]. We used the same set of accessibility measures as López et al. [5]. These measures are location accessibility, relative network efficiency, potential accessibility and daily accessibility, which can be loosely linked to specific policy objectives: location accessibility measures the degree to which locations are linked [9]; network efficiency measures the effectiveness of transport networks [5]; potential accessibility measures economic opportunity [5, 8]; and daily accessibility can perhaps indicate aspects of quality-of-life objectives, as it measures the opportunities that people may enjoy on a daily basis.
All accessibility indicators use shortest traveltimes (\(c_{ij}\)) between i and j and population at the destination (\(P_j\)). The list of indicators used is shown in Table 2. In all cases, the regularly distributed points described in Section 2.1 were used as origins, and municipalities were used as destinations. The road network data used to obtain traveltimes describe the current (2006) road network in case I, and the expected future (2030) network in cases II to IV. The latter takes into account the expected network improvements enabled by cohesion policy funding. For municipal populations, the current (2006) population levels were used in cases I and II, while in cases III and IV the future (2030) population levels modelled by LUISA were applied. All accessibility measures were computed for the roughly 22,000 municipalities in the study area. We must acknowledge that the selected accessibility indicators do not provide a comprehensive overview of socially relevant accessibility effects. As Geurs [50] and Wang et al. [18] show, accessibility indicators that include competition effects at the destination may add relevant information concerning access to resources with limited capacity, such as jobs or public facilities. Because such resources are not yet modelled in LUISA, competition effects cannot be taken into account in this exercise.
Table 2 Accessibility measures used in this study and their definition
Subsequently, a number of indicators were computed that measure the territorial cohesion implied by the various accessibility indicators. The diversity indicators proposed for measuring cohesion effects by López et al. [5] were used here: the coefficient of variation and the Gini, Atkinson and Theil indices. All of these capture the degree to which endowments are unequally distributed over areal units, but they differ in the emphasis put on the distribution of high and low values. In all cases, lower values of the indicator signify greater equality of endowments and thus increased territorial cohesion.
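For reference, the four inequality measures mentioned above can be computed as in the sketch below, using textbook formulas applied to a vector of positive accessibility values; the Atkinson parameter of 0.5 is our assumption, as the value used in the study is not stated here.

```python
import numpy as np

def inequality_indices(x, atkinson_eps=0.5):
    """Coefficient of variation, Gini, Theil and Atkinson indices of x > 0."""
    x = np.asarray(x, dtype=float)
    n, mu = x.size, x.mean()
    cv = x.std() / mu
    gini = np.abs(x[:, None] - x[None, :]).sum() / (2 * n**2 * mu)
    theil = np.mean((x / mu) * np.log(x / mu))
    atkinson = 1 - np.mean((x / mu) ** (1 - atkinson_eps)) ** (1 / (1 - atkinson_eps))
    return {"cv": cv, "gini": gini, "theil": theil, "atkinson": atkinson}

# Example: a perfectly equal distribution yields zero for all four indices.
print(inequality_indices([2.0, 2.0, 2.0]))
```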
Historical data for reference
To provide some reference for the modelling results, the same set of variables and indicators will be computed using historical data that have only recently become available. One data source describes municipal population counts in 1971 and 2011 in all municipalities in the selected countries [51]. The other describes the European road network in 1970 and 2012 [52] at a level of detail that is roughly comparable with the TRANS-TOOLS data used in the LUISA modelling effort. Thus, for the sake of comparison, historical trends regarding the cohesion effects of population and network changes are computed for the four selected countries.
In this section, we first present the results of allocating the available funding to as-yet-unknown future network improvements, along with the modelled population changes. Subsequently, the potential impacts of the cohesion policy on population distribution and accessibility levels are discussed. Results for 2006 will be compared with results for 2030, and results for 1971 to 2011 are used to provide a historical reference. Please note that, because of the assumed linearly changing traveltime improvements, the impacts in intermediate years will fall roughly between the 2006 and 2030 results.
Allocated infrastructure improvements and population changes
According to the available data, roughly 16,000 km of road are known to be upgraded or constructed as motorways with cohesion policy funding. Not all funding is depleted by those upgrades. The previously outlined allocation method yields an additional 700 km of road in Europe upgraded to motorways, 3600 km upgraded to national roads, and 6500 km of local roads upgraded to the maximum speeds of the local/regional road level. The transport modelling results and the distribution of new links are shown in Fig. 3. From the assumed funding distribution it follows that new EU member states such as Poland and the Czech Republic will receive the most substantial funding for upgrades to the road network. This result is not surprising, given the speed at which road networks are expanding in the EU's new member states [8].
Above: modelled flows using 2006 population and road network data. Below: the road upgrades assumed to be in place in 2030, based on the modelled flows
To understand how the modelling network compares with historical road data, road speeds for 1971 and 2012 (historical network) as well as for 2006 and 2030 (modelling network) have been averaged for all European regions. Those averages are weighted by segment length, so that longer links have a greater weight in the network average. When comparing average regional speeds, the historical network and the modelling network are considerably different: in the modelling network, regional inequalities are much more pronounced, even when compared to the 1971 network; see Table 3. Thus the modelling network potentially overestimates disparities in accessibility. By 2030, speeds on Europe's road networks are expected to be more equally distributed. However, the modelled pace of inequality reduction does not keep up with historical trends. This is no doubt because only EU-funded network upgrades are foreseen in this analysis, so that many future network upgrades are likely not accounted for. To address that potential gap, an effort to comprehensively project road network improvements in the EU would be necessary, but such an exercise is outside the scope of this paper.
Table 3 Inequality indicators of average road speeds in the historical network and in the network used for modelling
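The length-weighted regional average speed described above amounts to a weighted mean; a trivial sketch (variable names and figures are ours):

```python
import numpy as np

def regional_average_speed(length_km, speed_kmh):
    """Average network speed of a region, weighted by segment length."""
    return np.average(speed_kmh, weights=length_km)

# Example: a long slow link dominates a short fast one.
print(regional_average_speed([90.0, 10.0], [80.0, 130.0]))  # = 85.0 km/h
```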
Besides infrastructure improvements, population changes affect the analysed accessibility levels. In this modelling exercise, all future population levels are based on the 'Europop2010' regional population projections for 2030. Those projections assume an overall 7 % population growth in Europe as a whole between 2006 and 2030, but a 3 % population decrease in the study area (see Table 4).
Table 4 Population projections used in the population modelling exercise aggregated per country
In Fig. 4 the projected regional population changes are shown, as well as the differences in the municipal population distribution as modelled by LUISA in the Compact and BAU scenarios. In both scenarios, the regional migration flows in the Europop2010 population projections cause population levels in the study area to become increasingly unequally distributed. In fact, a quick check shows that the Europop2010 projections alone imply a 3 % to 5 % increase in population concentration. At the local level the modelled population concentration is even more pronounced, with increases of up to 53 % in population inequality indicators.
Above: projected population changes per NUTS2 region from 2006 to 2030 [35] as modelled in cases III (Compact scenario) and IV (BAU scenario). Below: the differences in modelled municipal population between those two cases
When comparing the results from the modelled population distributions with historical trends, it is immediately clear that the concentration tendencies in the modelling results are more conspicuous than in the historical trends. This can to some degree be explained by the increased concentration in the Europop2010 projections used. Nevertheless, although we must repeat here that past trends are not indicative of future changes, the contradictory results may still signal a bias in the modelling results towards more concentrated population distributions. To verify the validity of the modelling results, the team involved in developing the LUISA model is therefore using historical population data to explore whether variables that are relevant for population distributions are missing in the current approach. Regardless of whether the future will resemble the modelled trends, useful information can be extracted from a comparison of the modelled scenarios of land-use development. Table 5 shows that in case III the regional inequality of population levels is much lower than in case IV. As Fig. 4 shows, in case III urban development is less substantial in the environs of the largest urban areas, owing to the more restricted supply of land there in that scenario. Instead, in that case urban development is more evenly distributed near the edges of the various smaller and larger urban areas within the modelled regions. Thus, within the frame of overall population trends, the way land-use development is steered can have a substantial impact on population distribution outcomes.
Table 5 Inequality indicators of observed population distributions in 1971 and 2011, and in 2006 and 2030 according to LUISA's Compact and BAU scenarios
Territorial cohesion impacts of accessibility
We proceed to discuss the territorial cohesion effects of the modelled accessibility changes. Here we take into account accessibility levels with the reference 2006 population and network (case I); with the 2006 populations but with network improvements in place (case II), so that the separate effects of infrastructure improvements and population changes can be observed; and lastly with 2030 population levels according to the Compact and BAU scenarios of local urban development (cases III and IV, respectively). Reference accessibility levels and the relative effect of the assumed road network improvements on the accessibility measures are plotted in Fig. 5. For all scenarios, the averaged accessibility changes per country are furthermore given in Table 6. In both the figure and the table, population levels are held static. The results show that, in relative terms, the assumed road network improvements have a profound effect on accessibility levels, particularly in the easternmost regions of Poland and the Czech Republic. In contrast, western Germany is hardly affected by the EU-funded infrastructure improvements. These results confirm that EU road investments are largest in the more peripheral regions [8, 9]. Nevertheless, the infrastructure improvements do not affect the ranking of countries in terms of accessibility levels, and in absolute terms the changes are modest. That the absolute accessibility effects of the infrastructure investments are so modest is no doubt because accessibility levels in the studied countries were already reasonably high in 2006.
Left: spatial distribution of accessibility levels with 2006 data (case I). Right: improvements in accessibility levels when taking only network changes into account (case II). The class breaks follow a Jenks natural breaks classification. Cases III and IV are deliberately excluded here to save space; when mapped, the changes brought about by those cases appear very similar to the results of case II
Table 6 Averaged accessibility levels per country given current and expected future road networks and the Compact and BAU scenarios of population change
The redistribution of population as modelled in LUISA substantially impacts accessibility levels. In general, with future population levels the change in the location indicator is much smaller, network efficiency is slightly increased and potential accessibility is much larger, while the effects on daily accessibility are mixed. The significant increase in potential accessibility in Germany, despite the overall population decline, is surprising. The observed increase of potential accessibility occurs in both cases III and IV and must therefore be due to regional population trends. This shows that regional population distributions can have a substantial impact on potential accessibility levels. While cases III and IV yield consistently better average accessibility levels than the scenarios that ignore population changes (I and II), the results of cases III and IV differ little from each other. This shows that, when considering average accessibility levels, regional population projections surely matter, but the aggregate effect of differing local urbanization patterns is rather limited.
In contrast to average accessibility levels, the territorial cohesion indicators can change considerably with different local urbanization patterns. Table 7 shows the cohesion effects of the accessibility indicator outcomes in cases I to IV. Comparing cohesion indicators when only the network improvements are in place shows that the infrastructure improvements considerably increase cohesion: in all cases the inequality indices are lower when the 2030 network is taken into account. This is consistent with the findings of López et al. [5]. However, when projected population changes are taken into account, the cohesion impacts of infrastructure improvements are much smaller. With most inequality indicators, potential and daily accessibility then have a smaller but still positive impact on cohesion. Only the cohesion effects of network efficiency seem to improve consistently with the modelled population changes, while the cohesion effects of potential accessibility in particular suffer from those changes. Differences in local urban development patterns have a substantial impact on the cohesion indicators used, with differences in indicator values of over 20 % in the case of potential accessibility. Comparing the results between cases III and IV, we find that more compact urban development decreases disparities in potential and daily accessibility, but increases disparities in location accessibility. Location accessibility, in fact, seems to profit considerably from the urban patterns modelled in the BAU scenario (case IV).
Table 7 Inequality indicators of accessibility levels given current and expected future road networks and the Compact and BAU scenarios of population change
All in all, cohesion indicators of accessibility are very sensitive to local population levels. This is again emphasized by the results from the historical data. Those data show much more profound impacts on cohesion indicators, which is no doubt caused by the substantial network improvements observed between 1970 and 2012 and the relatively small changes in the inequality of population distributions. Overall, the historical data show a remarkable decline in accessibility disparities, which in many cases is even reinforced by changes in population distributions over time. Thus, from the historical trends and the modelled results we conclude that investments in the road network may have a considerable impact on disparities in accessibility levels, and that land-use development policies may be used to restrict the potentially unwanted effects of population distributions on those disparities.
This article explores the cohesion effects of accessibility changes induced by road infrastructure upgrades, given ongoing population changes. Accessibility levels have been obtained using partially provisional road network improvements and future population distributions that are modelled at a fine spatial resolution. Those population distributions have been modelled to readjust to intermediate changes in accessibility levels, regional demographic trends and various other factors. Two scenarios of urban development have been assessed: a Business-As-Usual scenario with unrestricted urbanization patterns and, as a consequence, considerable relocation to each region's prime centres of attraction; and a Compact scenario with more restricted urbanization patterns and, ultimately, more evenly spread population growth within a region. The methods used to model future population distributions and their accessibility impacts provide a useful first insight into potential future outcomes. It is, however, important to note that the presented framework only supports the evaluation of general accessibility impacts and may be unable to evaluate specific aims of network investments. For example, accessibility impacts may differ across population groups with diverging activity patterns and transport mode availability [53], and network investments may be necessary to improve access to specific activity places (such as hospitals or schools) or to support large recurrent transport flows (for example for tourism or international commuting). A comparison with results from observed historical changes in population levels and the road network shows that the LUISA model seems to overestimate the level of concentration in future population levels. This emphasizes the importance of the empirical model validation exercises that are currently underway.
Some more general findings can be extracted by comparing the accessibility results under different population distribution assumptions. Average accessibility levels are improved substantially by population changes in both cases that take future population projections into account. This shows that average accessibility levels depend substantially on future regional population levels. The effect of local population distributions on average national accessibility levels is fairly limited. However, variation in local urbanization patterns can have a drastic effect on the impact that infrastructural investments have on territorial cohesion; in some cases migration to the main urban areas can substantially alter the decrease in disparities that infrastructure investments aim to achieve. The results further show that the cohesion effects of transport network investments, such as those reported by López et al. [5] and Stępniak and Rosik [8], can differ substantially when population changes are taken into account. All in all, if policy makers aim to reduce disparities between regions by means of infrastructure investments, they will do well to take future urbanization patterns and spatial planning policies into account when evaluating their plans. This may be necessary to ensure that network investments are effective and robust to possible population changes.
We cannot easily discern a good and a bad scenario of urban growth here, even if the only goal were to preserve or increase territorial cohesion. Some accessibility measures yield better territorial cohesion in one scenario of urban growth, while other measures score better cohesion marks in the other scenario. The essential question is which sort of accessibility should be optimized. If the emphasis is on more evenly spread economic opportunity, the cohesion results for potential accessibility indicate that policies that encourage more evenly spread urban development over the different cities in a region have better cohesion effects. However, the effectiveness of such policies and the net welfare effects of encouraging such urban development are unclear; furthermore, infrastructure developments may aim at optimizing very different accessibility measures.
Geurs KT, Van Wee B (2004) Accessibility evaluation of land-use and transport strategies: Review and research directions. J Transp Geogr 12(2):127–140
Halden D (2002) Using accessibility measures to integrate land use and transport policy in Edinburgh and the Lothians. Transp Policy 9(4):313–324
Bertolini L, Le Clercq F, Kapoen L (2005) Sustainable accessibility: A conceptual framework to integrate transport and land use plan-making. Two test-applications in the Netherlands and a reflection on the way forward. Transp Policy 12:207–220
EC (2004) A new partnership for cohesion: Convergence competitiveness cooperation. Third report on economic and social cohesion. Publications Office of the European Union, Luxembourg
López E, Gutiérrez J, Gómez G (2008) Measuring regional cohesion effects of large-scale transport infrastructure investments: An accessibility approach. Eur Plan Stud 16(2):277–301
Brandsma A, Di Comite F, Diukanova O, Kancs A, Lopez Rodriguez J, Martinez Lopez D, Persyn D, Potters L (2013) Assessing policy options for the EU Cohesion Policy 2014–2020. Joint Research Centre of the European Commission
Martin JC, Gutiérrez J, Román C (2004) Data envelopment analysis (DEA) index to measure the accessibility impacts of new infrastructure investments: The case of the high-speed train corridor Madrid-Barcelona-French border. Reg Stud 38(6):697–712
Stępniak M, Rosik P (2013) Accessibility improvement, territorial cohesion and spillovers: A multidimensional evaluation of two motorway sections in Poland. J Transp Geogr 31:154–163
Gutiérrez J, Urbano P (1996) Accessibility in the European Union: The impact of the trans-European road network. J Transp Geogr 4(1):15–25
Xie F, Levinson D (2010) How streetcars shaped suburbanization: a Granger causality analysis of land use and transit in the Twin Cities. J Econ Geogr 10:453–470
Levinson D (2008) Density and dispersion: the co-development of land use and rail in London. J Econ Geogr 8:55–77
Koopmans C, Rietveld P, Huijg A (2012) An accessibility approach to railways and municipal population growth, 1840–1930. J Transp Geogr 25:98–104
Hansen WG (1959) How accessibility shapes land use. Journal of the American Institute of Planners 25:73–76
Meijers E, Hoekstra J, Leijten M, Louw E, Spaans M (2012) Connecting the periphery: Distributive effects of new infrastructure. J Transp Geogr 22:187–198
Padeiro M (2013) Transport infrastructures and employment growth in the Paris metropolitan margins. J Transp Geogr 31:44–53
Condeço-Melhorado A, Tillema T, De Jong T, Koopal R (2014) Distributive effects of new highway infrastructure in the Netherlands: the role of network effects and spatial spillovers. J Transp Geogr 34:96–105
Geurs K, Van Wee B (2006) Ex-post evaluation of thirty years of compact urban development in the Netherlands. Urban Stud 43(1):139–160
Wang Y, Monzon A, Di Ciommo F (2014) Assessing the accessibility impact of transport policy by a land-use and transport interaction model - The case of Madrid. Comput Environ Urban Syst 49:126–135
Geurs KT, De Bok M, Zondag B (2012) Accessibility benefits of integrated land use and public transport policy plans in the Netherlands. In: Geurs KT, Krizek KJ, Reggiani A (eds) Accessibility analysis and transport planning. Edward Elgar, Cheltenham, pp. 135–153
Geurs KT, Van Wee B, Rietveld P (2006) Accessibility appraisal of integrated land-use – transport strategies: Methodology and case study for the Netherlands Randstad area. Environment and Planning B 33(5):639–660
Dekkers JEC, Koomen E (2007) Land-use simulation for water management: application of the Land Use Scanner model in two large-scale scenario-studies. In: Koomen E, Stillwell J, Bakema A, Scholten HJ (eds) Modelling land-use change; progress and applications. Springer, Dordrecht, pp. 355–373
Batista e Silva F, Lavalle C, Jacobs-Crisioni C, Barranco R, Zulian G, Maes J, Baranzelli C, Perpiña C, Vandecasteele I, Ustaoglu E, Barbosa A, Mubareka S (2013) Direct and indirect land use impacts of the EU cohesion policy. Assessment with the Land Use Modelling Platform. Publications Office of the European Union, Luxembourg
Lavalle C, Baranzelli C, Batista e Silva F, Mubareka S, Rocha Gomes C, Koomen E, Hilferink M (2011) A High Resolution Land use/cover Modelling Framework for Europe: introducing the EU-ClueScanner100 model. In: Murgante B, Gervasi O, Iglesias A, Taniar D, BO A (eds) Computational Science and Its Applications - ICCSA 2011, Part I, Lecture Notes in Computer Science, vol 6782. Springer-Verlag, Berlin, pp. 60–75
Hilferink M, Rietveld P (1999) Land Use Scanner: An integrated GIS based model for long term projections of land use in urban and rural areas. J Geogr Syst 1(2):155–177
Koomen E, Hilferink M, Borsboom-van Beurden J (2011) Introducing Land Use Scanner. In: Koomen E, Borsboom-van Beurden J (eds) Land-use modeling in planning practice. Springer, Dordrecht, pp. 3–21
Veldkamp A, Fresco LO (1996) CLUE: a conceptual model to study the Conversion of Land Use and its Effects. Ecol Model 85:253–270
Verburg PH, Rounsevell MDA, Veldkamp A (2006) Scenario-based studies of future land use in Europe. Agric Ecosyst Environ 114(1):1–6
Verburg PH, Overmars K (2009) Combining top-down and bottom-up dynamics in land use modeling: exploring the future of abandoned farmlands in Europe with the Dyna-CLUE model. Landsc Ecol 24:1167–1181. doi:10.1007/s10980-009-9355-7
ObjectVision (2014) Geo data and model server (GeoDMS). http://objectvision.nl/geodms. Accessed 03/10/2014
Büttner G, Feranec J, Jaffrain G, Mari L, Maucha G, Soukup T (2004) The CORINE land cover 2000 project. EARSeL eProceedings 3(3):331–346
Batista e Silva F, Gallego J, Lavalle C (2013) A high-resolution population grid map for Europe. Journal of Maps 9(1):16–28
Britz W, Witzke HP (2008) CAPRI model documentation 2008: Version 2. Institute for Food and Resource Economics, University of Bonn, Bonn
EC (2013) EU energy, transport and GHG emissions. Trends to 2050. Reference scenario 2013. Publications Office of the European Union. Luxembourg, Luxembourg
Anselin L (2001) Spatial econometrics. In: Baltagi BH (ed) A companion to theoretical econometrics. Blackwell Publishing Ltd, Malden, Ma, pp. 310–330
EuroStat (2011) Population projections. http://epp.eurostat.ec.europa.eu/statistics_explained/index.php/Population_projections. Accessed 10/04/2014
Redding SJ, Sturm DM (2008) The costs of remoteness: Evidence from German division and reunification. Am Econ Rev 98(5):1766–1797
Brakman S, Garretsen H, Van Marrewijk C, Oumer A (2012) The border population effects of EU integration. J Reg Sci 52(1):40–59
Jacobs-Crisioni C, Koomen E (2014) The influence of national borders on urban development in border regions: An accessibility approach. Unpublished manuscript, VU University Amsterdam
Rich J, Bröcker J, Hansen CO, Korchenewych A, Nielsen OA, Vuk G (2009) Report on scenario, traffic forecast and analysis of traffic on the TEN-T, taking into consideration the external dimension of the union - TRANS-TOOLS version 2; model and data improvements. Copenhagen
Frost ME, Spence NA (1995) The rediscovery of accessibility and economic potential: the critical issue of self-potential. Environment and Planning A 27(11):1833–1848
Geurs KT, van Wee B (2004) Land-use/transport interaction models as tools for sustainability impact assessments of transport investments: Review and research directions. Eur J Transp Infrastruct Res 4(3):333–355
Wegener M (1998) Applied models of urban land use, transport and environment: state of the art and future developments. In: Lundqvist L, Mattson LG, Kim TJ (eds) Network infrastructure and the urban environment. Springer, Heidelberg
Pfaffenbichler P, Emberger G, Shepherd S (2008) The integrated dynamic land use and transport model MARS. Networks and Spatial Economics 8:183–200
Zondag B, De Jong G (2005) The development of the TIGRIS XL model: a bottom-up approach to transport, land-use and the economy. In: Economic impacts of changes in accessibility, Edinburgh, 27 October 2005
Lavalle C, Mubareka S, Perpiña C, Jacobs-Crisioni C, Baranzelli C, Batista e Silva F, Vandecasteele I (2013) Configuration of a reference scenario for the land use modelling platform. Publications office of the European Union, Luxembourg
Batista e Silva F, Koomen E, Diogo V, Lavalle C (2014) Estimating demand for industrial and commercial land use given economic forecasts. PLoS One 9(3):e91991
EC (2013) Regional policy: Project examples. http://ec.europa.eu/regional_policy/projects/stories/search.cfm?LAN=EN&pay=ALL&region=ALL&the=60&type=ALL&per=2. Accessed 02/04/2014
Rietveld P (2001) Obstacles to openness of border regions in Europe. In: Van Geenhuizen M, Ratti R (eds) Gaining advantage from open borders. An active space approach to regional development. Ashgate, Aldershot, pp. 79–96
Ritsema van Eck J, Koomen E (2008) Characterising urban concentration and land-use diversity in simulations of future land use. Ann Reg Sci 42(1):123–140
Geurs K (2006) Accessibility, land use and transport: accessibility evaluation of land use and transport developments and policy strategies. Ph.D. dissertation, Utrecht University
Gløersen E, Lüer C (2013) Population data collection for European local administrative units from 1960 onwards. Spatial Foresight, Heisdorf
Stelder D, Groote P, De Bakker M (2013) Changes in road infrastructure and accessibility in Europe since 1960. Final report tender reference nr 2012.CE.16.BAT.040 European Commission
Kwan MP (1998) Space-time and integral measures of individual accessibility: A comparative analysis using a point-based framework. Geogr Anal 30(3):191–216
European Commission, Joint Research Centre, Institute for Environment and Sustainability, Sustainability Assessment Unit, Via E. Fermi, 2749, 21027, Ispra (Va), Italy
Chris Jacobs-Crisioni, Filipe Batista e Silva, Carlo Lavalle, Claudia Baranzelli, Ana Barbosa & Carolina Perpiña Castillo
Correspondence to Chris Jacobs-Crisioni.
This article is part of the Topical Collection on Accessibility and Policy Making
Jacobs-Crisioni, C., Batista e Silva, F., Lavalle, C. et al. Accessibility and territorial cohesion in a case of transport infrastructure improvements with changing population distributions. Eur. Transp. Res. Rev. 8, 9 (2016) doi:10.1007/s12544-016-0197-5
Land-use modelling
Land-use/transport interaction
Topical Collection on Accessibility and Policy Making | CommonCrawl |
Realizability of modules over Tate cohomology
Authors: David Benson, Henning Krause and Stefan Schwede
Journal: Trans. Amer. Math. Soc. 356 (2004), 3621-3668
MSC (2000): Primary 20J06; Secondary 16E40, 16E45, 55S35
DOI: https://doi.org/10.1090/S0002-9947-03-03373-7
Published electronically: December 12, 2003
Abstract: Let $k$ be a field and let $G$ be a finite group. There is a canonical element in the Hochschild cohomology of the Tate cohomology $\gamma _G\in H\!H^{3,-1}\hat H^*(G,k)$ with the following property. Given a graded $\hat H^*(G,k)$-module $X$, the image of $\gamma _G$ in $\operatorname {Ext}^{3,-1}_{\hat H^*(G,k)}(X,X)$ vanishes if and only if $X$ is isomorphic to a direct summand of $\hat H^*(G,M)$ for some $kG$-module $M$. The description of the realizability obstruction works in any triangulated category with direct sums. We show that in the case of the derived category of a differential graded algebra $A$, there is also a canonical element of Hochschild cohomology $H\!H^{3,-1}H^*(A)$ which is a predecessor for these obstructions.
David Benson
Affiliation: Department of Mathematics, University of Georgia, Athens, Georgia 30602
MR Author ID: 34795
Email: [email protected]
Henning Krause
Affiliation: Department of Pure Mathematics, University of Leeds, Leeds LS2 9JT, United Kingdom
Address at time of publication: Institut für Mathematik, Universität Paderborn, D-33095 Paderborn, Germany
MR Author ID: 306121
Email: [email protected], [email protected]
Stefan Schwede
Affiliation: SFB 478 Geometrische Strukturen in der Mathematik, Westfälische Wilhelms-Universität Münster, Hittorfstr. 27, 48149 Münster, Germany
Email: [email protected]
Received by editor(s): April 5, 2002
Received by editor(s) in revised form: April 25, 2003
Additional Notes: The first author was partly supported by NSF grant DMS-9988110
Article copyright: © Copyright 2003 American Mathematical Society | CommonCrawl |
The maximum of $X_1,\dots,X_n$ standard normals converges, after suitable normalization, to the standard Gumbel distribution according to extreme value theory. The question asks two things: (1) how to show that the maximum $X_{(n)}$ converges, in the sense that $(X_{(n)}-b_n)/a_n$ converges (in distribution) for suitably chosen sequences $(a_n)$ and $(b_n)$, to the standard Gumbel distribution, and (2) how to find such sequences. The second appears to be more difficult; that is the issue addressed here.
When the $X_i$ are iid with common distribution function $F$, the distribution of the maximum $X_{(n)}$ is
$$F_n(x) = \Pr(X_{(n)}\le x) = \Pr(X_1 \le x)\Pr(X_2 \le x) \cdots \Pr(X_n \le x) = F^n(x).$$
The Fisher-Tippett-Gnedenko (FTG) theorem asserts that sequences $(a_n)$ and $(b_n)$ can be chosen so that these distribution functions converge pointwise at every $x$ to some extreme value distribution, up to scale and location. There is no standard recipe for finding these sequences in all cases, and worked examples in the literature often omit the Normal case. When $F$ is a Normal distribution, the particular limiting extreme value distribution is a reversed Gumbel, up to location and scale:
$$\Phi\left(a_n x+b_n\right)^n=\left(\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{a_n x+b_n} e^{-\frac{y^2}{2}}dy\right)^n\rightarrow e^{-\exp(-x)},$$
with one standard choice being
$$a_n = \frac 1{n\phi(b_n)},\qquad b_n = \Phi^{-1}(1-1/n).$$
The verification rests on the Mill's-ratio limit $\lim_{x\rightarrow \infty} x\,\frac{1-\Phi(x)}{\phi(x)} = 1$, so that $x\,\frac{1-\Phi(x)}{\phi(x)} - 1 \rightarrow 0$.
A constructive alternative is to shift each $F_n$ to place its median at $0$ and to make its interquartile range of unit length. Recalling the definition of $F_n(x) = F^n(x)$, the solution is
$$b_n = x_{1/2;n},\quad a_n = x_{3/4;n} - x_{1/4;n},\quad G_n(x) = F_n(a_n x + b_n).$$
Because, by construction, the median of $G_n$ is $0$ and its IQR is $1$, the median of the limiting value of $G_n$ (which is some version of a reversed Gumbel) must be $0$ and its IQR must be $1$. This general approach should succeed in finding $a_n$ and $b_n$ for any continuous distribution. (A figure in the original answer plots each shifted $G_n$ together with the limiting reversed Gumbel, drawn as a dark red line with parameters $\alpha$ and $\beta$.) The convergence is clear, although the rate of convergence for negative $x$ is noticeably slower.
Background: in applications one often has to decide whether to model such maxima with a normal distribution, $X \sim N(\mu,\sigma^2)$, or a Gumbel distribution, $X \sim G(\alpha,\beta)$. The Gumbel distribution is used to model the largest value from a relatively large set of independent elements from distributions whose tails decay relatively fast, such as a normal or exponential distribution; as a result, it can be used to analyze annual maximum daily rainfall volumes. The most common form is the type I extreme value distribution, often referred to simply as the Gumbel distribution; the case $\mu = 0$ and $\beta = 1$ is called the standard Gumbel distribution, whose density for the maximum case reduces to $f(x) = e^{-x}e^{-e^{-x}}$. The asymptotic distribution of the maximum value is implemented in the Wolfram Language as ExtremeValueDistribution. Note that the Normal distribution is symmetric while the Gumbel is not. Gumbel (1958) showed that for any well-behaved initial distribution (i.e., $F(x)$ is continuous and has an inverse), only a few limiting models are needed, depending on whether the maximum or the minimum is of interest; when considering the distribution of minimum values for which a lower bound of zero is known, the Weibull distribution should be used in preference to the Gumbel. The two distributions are closely related: if $X$ has a Weibull distribution with parameters $\alpha$ and $c$, then $\log(X)$ has an extreme value distribution with parameters $\mu=\log \alpha$ and $\beta =1/c$. (The Gumbel distribution also underlies the Gumbel-Softmax construction used for categorical variables in machine learning.)
References: B. V. Gnedenko, On the Limiting Distribution of the Maximum Term in a Random Series; L. de Haan (1976), Statistica Neerlandica 30(4), 161-172; H. A. David & H. N. Nagaraja, Order Statistics, ch. 10.5.
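Returning to the normalization $a_n = 1/(n\phi(b_n))$, $b_n = \Phi^{-1}(1-1/n)$ quoted above, a quick numerical sanity check (a sketch only; the sample sizes, seed, and grid of evaluation points are arbitrary choices, not taken from the thread):

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, reps = 1000, 5000

# Normalizing constants quoted above for the maximum of n standard normals.
b_n = norm.ppf(1 - 1/n)
a_n = 1 / (n * norm.pdf(b_n))

# Simulate standardized maxima and compare their empirical CDF with the
# standard Gumbel CDF exp(-exp(-x)) at a few points.
maxima = rng.standard_normal((reps, n)).max(axis=1)
z = (maxima - b_n) / a_n

for x in (-1.0, 0.0, 1.0, 2.0):
    print(f"x={x:+.1f}  empirical={np.mean(z <= x):.3f}  Gumbel={np.exp(-np.exp(-x)):.3f}")

The agreement is only rough at moderate $n$, consistent with the remark above that the convergence is slow, especially for negative $x$.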
Abstract and Talk Materials
Polyhedral stochastic integer programming
We present polyhedral results for scenario based models of some stochastic integer programs. The key idea is to combine valid inequalities from scenario subproblems to derive new inequalities for the overall problem. We illustrate the procedure by developing inequalities for the stochastic uncapacitated lot-sizing problem and some other stochastic integer programs.
Based on joint work with George L. Nemhauser, Yongpei Guan, Andrew Miller, and Jim Luedtke
Alper Atamturk
On connections between mixed-integer rounding and superadditive lifting
Mixed-integer rounding and superadditive lifting are two common ways for generating valid inequalities for mixed-integer programming. Mixed-integer rounding is a prescriptive approach that consists of simple disjunction and rounding rules. On the other hand, superadditive lifting is descriptive in the sense that it requires the characterization of superadditive lifting functions. Proving validity of an inequality is easier with mixed-integer rounding, whereas proving the strength of an inequality is easier with superadditive lifting. It is well-known that mixed-integer rounding is a special case of superadditive lifting and in several cases mixed-integer rounding leads to strong valid inequalities.
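For orientation (not part of the abstract), the basic mixed-integer rounding step can be stated on a single row: for the mixed-integer set $\{(x,y) \in \mathbb{R}_{+} \times \mathbb{Z} : y \le b + x\}$ with fractional part $f = b - \lfloor b \rfloor > 0$, the MIR inequality is
$$y \;\le\; \lfloor b \rfloor + \frac{x}{1-f},$$
which is valid for the set and cuts off the fractional point $(x,y) = (0,b)$ of its linear relaxation.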
In this talk we will explore the connections between mixed-integer rounding and superadditive lifting. We will give techniques that generalize mixed-integer rounding as well as two-step mixed-integer rounding. These simple prescriptive techniques lead to strong superadditive lifting inequalities that are difficult to describe explicitly.
(joint work with Oktay Gunluk)
Pierre Bonami
A hybrid branch-and-cut for solving MINLPs
We present some recent work related to developing Bon-min, an open-source solver for MINLP (Mixed Integer NonLinear Programming). Bon-min implements several exact algorithms for solving MINLPs that exhibit a convex continuous relaxation: a branch-and-bound, an outer-approximation decomposition, and a hybrid branch-and-cut algorithm.
We focus here on presenting the hybrid algorithm. This algorithm is a flexible branch-and-cut where linear and nonlinear continuous relaxations are solved alternately at the nodes of the tree search. The linear relaxation is obtained by considering an outer approximation of the nonlinear constraints of the problem, and solving nonlinear programs allows us to improve this approximation. The interest of using linear relaxations is that they can be solved much faster than nonlinear continuous relaxations, and the goal of the algorithm is to build a good enough outer approximation while not solving too many nonlinear programs. In its extreme settings this algorithm can be made similar either to a branch-and-bound based only on nonlinear programming, or to an outer-approximation decomposition algorithm. This algorithm is compared to classical algorithms (as implemented both in commercial solvers and in Bon-min) on a new publicly available library of convex mixed integer nonlinear programs which we have put together.
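To illustrate just the linearization principle behind such outer approximations (a toy sketch only, not the Bon-min algorithm; the constraint g and the trial points below are invented for the example):

import numpy as np

# Toy illustration: outer-approximate the convex constraint g(x) = x**2 - 4 <= 0
# by first-order (tangent) cuts  g(xk) + g'(xk) * (x - xk) <= 0  collected at trial points.
def g(x):
    return x**2 - 4.0

def dg(x):
    return 2.0 * x

trial_points = [3.0, 1.0, 2.2]   # stand-ins for points returned by NLP solves (invented)
cuts = [(dg(xk), g(xk) - dg(xk) * xk) for xk in trial_points]   # each cut: a*x + b <= 0

# Every point of the true feasible region {x : g(x) <= 0} satisfies every cut,
# so the cuts define a polyhedral relaxation that an LP-based master can use.
for x in np.linspace(-4, 4, 9):
    satisfies_cuts = all(a * x + b <= 1e-9 for a, b in cuts)
    feasible = g(x) <= 1e-9
    assert (not feasible) or satisfies_cuts
print("all tangent cuts are valid for the convex constraint")

Because each cut is a tangent to a convex function, every point with g(x) <= 0 satisfies all cuts, so the LP over the cut polyhedron is a relaxation that can only tighten as more NLP points are added.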
(This work is part of an ongoing project funded by IBM and conducted both at Carnegie Mellon University and IBM to study novel algorithmic approaches for solving MINLPs.)
Alberto Caprara
On solving combinatorial optimization problems without known decent ILP formulations
We illustrate our experience on some NP-hard combinatorial optimization problems that share the following characteristics: (1) they arise in real-world applications, (2) they are very simple to state, (3) they do not seem to admit any decent ILP formulation, i.e., all the formulations that were tried do not allow the solution of toy instances by modern, general-purpose ILP solvers. Focusing attention on the optimal solution of (real-world) medium-size instances, we briefly illustrate what could be achieved for each of these problems, ranging from basically nothing to effective solution by strong ILP formulations of suitable relaxations, which are defined after a careful analysis of the combinatorial structure of the problem.
Sebastian Ceria
Robust Portfolio Construction
PDF PPT
In this talk, we will describe how modern optimization techniques can help overcome important issues that "traditional" mean-variance optimizers face in practical portfolio management when dealing with Model and Estimation Error. In particular, we will discuss Robust Optimization, an optimization framework that considers errors in the input parameters directly and explicitly in the optimization problem itself, by considering a worse-case scenario for all uncertain parameters. We will introduce a new variant of Robust Optimization, called Robust Mean-Variance Optimization, that considers error in expected returns as well as risk parameters. We will demonstrate how Robust Mean Variance Optimization can be used to significantly reduce the error-maximization property found in classical mean-variance optimizers. We show how the portfolios generated through Robust Mean Variance are also more stable and intuitive. We will present a series of computational experiments to demonstrate the significant economic benefits of investing in portfolios computed using Robust Mean Variance Optimization.
Sanjeeb Dash
Separating from the MIR closure of polyhedra
PDF PS
We study the problem of separating an arbitrary point from the MIR closure of a polyhedron (finding violated rank-1 MIR cuts). Motivated by the work of Fischetti and Lodi (2005), who gave an MIP model for separating from the Chvatal closure of a polyhedron, we describe an MIP model for separating from the MIR closure of a polyhedron. Our analysis yields a short proof of the result of Cook, Kannan and Schrijver (1990) that the split closure of a polyhedron is again a polyhedron. We present computational results on finding violated MIR cuts using this model.
Coauthors: Oktay Gunluk and Andrea Lodi.
Milind Dawande
Computing Forecast Horizons: An Integer Programming Approach
The concept of a forecast horizon has been widely studied in the Operations Research/Management literature in the context of evaluating the impact of future conditions on current decisions in a multi-period decision making environment. While the forecast horizon literature is extensive, the use of integer programming to analyze and compute forecast horizons has been limited. The recent significant developments in computational integer programming coupled with the modelling and structural simplicity of the integer programming approach make for a strong case for its use in computing forecast horizons. We first demonstrate the viability of the integer programming approach for computing classical forecast horizons. We then present structural and computational investigations of a new class of weak forecast horizons -- minimal forecast horizons under the assumption that future demands are integer multiples of a given positive real number. Throughout the discussion, we will use the dynamic lot-size problem to illustrate the ideas.
Jesus De Loera
Recent Progress in the Test Set method for Integer Programming
A test set for a family of integer programs is a finite collection of integral vectors with the property that every feasible non-optimal solution of any integer program in the family can be improved by adding a vector in the test set. There has been considerable activity in the area of test sets and primal methods (e.g. Graver and Gröbner bases, the integral basis method, etc.). In the past, test sets were considered problematic due to their large entry size or their difficulty of computation. Here we report on fresh progress, made in the past year, that yields interesting algorithmic results and addresses some of these problems:
1) In joint work with Shmuel Onn we created a polynomial-time algorithm to canonically rewrite any polyhedral system $\{x : Ax=b, \ x \geq 0\}$ as a face of an $m \times n \times k$ axial transportation polytope. Axial transportation polytopes are very special. For instance, one can decide whether they are empty or not in only linear time without relying on linear programming algorithms. Using one explicitly known Gröbner basis for axial transportation problems we propose a new combinatorial, constant-memory algorithm for testing integer feasibility of polyhedra.
2) We present a new class of linear integer programs of variable dimension, but constructed from a fixed block matrix, that admit a polynomial time solution. Interestingly enough we employed in the proof algebraic techniques such as Graver test sets and the equivalence of augmentation and optimization oracles. We discuss several applications of our algorithm to multiway transportation problems and to packing problems. One important consequence of our results is a polynomial time algorithm for the d-dimensional integer transportation problem for long multiway tables. Another interesting application is a new algorithm for the classical cutting stock problem. This is joint work with R. Hemmecke, S. Onn, and R. Weismantel.
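To make the augmentation idea concrete, here is a deliberately tiny sketch in Python (the problem, the matrix A = [1 1] and its Graver basis {(1,-1), (-1,1)} are textbook toys chosen for illustration, not the multiway-table setting of the talk):

import numpy as np

# Toy augmentation with a Graver test set.
# Problem: minimize c.x over {x in Z^2, x >= 0, x1 + x2 = N}.
# The Graver basis of A = [1 1] is {(1, -1), (-1, 1)}.
test_set = [np.array([1, -1]), np.array([-1, 1])]
c = np.array([3.0, 5.0])
x = np.array([0, 10])            # any feasible starting point (here N = 10)

improved = True
while improved:
    improved = False
    for t in test_set:
        y = x + t
        if (y >= 0).all() and c @ y < c @ x:   # stays feasible and improves the objective
            x, improved = y, True
            break

print(x, c @ x)   # augmentation reaches the optimum [10, 0] with cost 30

For a linear objective, repeatedly augmenting along test-set directions from any feasible point reaches an optimal solution, which is the sense in which augmentation and optimization oracles are equivalent.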
Matteo Fischetti
MIP models for MIP separation
In this talk we show how the separation problem for some general classes of MIP inequalities can be modeled itself as a MIP, and discuss possible ways to use this approach in practice.
Illya Hicks
Branchwidth via Integer Programming
Branch decompositions were first introduced by Robertson and Seymour in their proof of the Graph Minors Theorem. In addition, dynamic programming algorithms utilizing branch decompositions have recently been proposed for addressing difficult combinatorial optimization problems modeled on graphs. The efficacy of these algorithms depends on obtaining a branch decomposition of the input graph with the smallest width possible. However, determining the smallest width (i.e., the branchwidth) of a graph is NP-hard. This talk offers a brief overview of research in branch decompositions and focuses on an integer programming methodology for establishing the branchwidth of a graph.
Coauthors: Elif Kolotoglu and Cole Smith.
Brady Hunsaker
Evaluating progress of branch-and-bound trees
We consider methods of evaluating the progress of branch-and-bound algorithms by analyzing the data available about the branch-and-bound tree. Our goal is to provide users with more useful information about the progress of the algorithm than traditional measures such as the optimality gap and number of active nodes. We use two open-source codes, CBC and GLPK, and study tree development for a number of instances from the MIPLIB library. We present visualization tools to help make sense of the data and to guide intuition for further research.
Coauthor: Osman Ozaltin
Matthias Koeppe
Intermediate Integer Programming Representations Using Value Disjunctions
We introduce a general technique to create an extended formulation of a mixed-integer program. We classify the integer variables into blocks, each of which generates a finite set of vector values. The extended formulation is constructed by creating a new binary variable for each generated value. Initial experiments show that the extended formulation can have a more compact complete description than the original formulation. We prove that, using this reformulation technique, the facet description decomposes into one ``linking polyhedron'' per block and the ``aggregated polyhedron''. Each of these polyhedra can be analyzed separately. For the case of identical coefficients in a block, we provide a complete description of the linking polyhedron and a polynomial-time separation algorithm. Applied to the knapsack with a fixed number of distinct coefficients, this theorem provides a complete description in an extended space with a polynomial number of variables. Based on this theory, we propose a new branching scheme that analyzes the problem structure. It is designed to be applied in those subproblems of hard integer programs where LP-based techniques do not provide good branching decisions. Preliminary computational experiments show that it is successful for some benchmark problems of multi-knapsack type.
Coauthors: Quentin Louveaux and Robert Weismantel
Arie Koster
Treewidth and Integer Programming
For combinatorial optimization problems that turn out to be extremely difficult for integer programming techniques (for example frequency assignment), one has to search for alternative algorithms in order to find an optimal solution. If the problem is defined on a graph, a dynamic programming algorithm along a tree decomposition or branch decomposition of the graph could be a possibility. For many NP-hard optimization problems, there exist such algorithms that run in polynomial time, except for the width of tree/branch decomposition. The minimum width over all possible tree decompositions of a graph is called the treewidth of the graph. Hence, for graphs of bounded treewidth, such algorithms are polynomial. In this talk, we give an update on the efforts to exploit the notion of treewidth for solving combinatorial optimization problems. We in particular discuss algorithms to determine the treewidth of graphs; among them an integer programming formulation. Some new computational results for the minimum interference frequency assignment problem conclude the talk.
Laci Ladanyi
Decomposition and Mixed Integer Programs
MIPs with a constraint matrix that is mostly block diagonal with relatively few connecting constraints (resp. variables) can be solved via Dantzig-Wolfe (resp. Benders) decomposition. In this talk we explore how decomposition can be applied to MIPs when *both* connecting constraints and variables are present besides the block diagonal core. We introduce a true branch-cut-price algorithm that, due to its massively parallelizable nature, has the potential to reach and prove optimality faster (in wall clock time) than any branch-and-cut based method. Preliminary computational results will be presented.
Giuseppe Lancia
Mathematical Programming Approaches in Computational Biology
Computational Biology has emerged in the past years as an established new branch of Computer Science, bordering with Combinatorial Optimization, Statistics, Applied Mathematics and, of course, Molecular Biology. The field has provided researchers with a wealth of new exciting problems to work on. Of much interest to us, it provides "mathematical programming people" with a large set of optimization problems on which they can try their standard techniques. The use of O.R. techniques for Computational Biology problems has steadily increased in the last few years. In this seminar we will survey some of the problems on which these techniques have proved successful, and outline directions for future research on challenging problems. In particular, we will mention alignment problems (for both sequences and protein structures), genome rearrangements, protein folding, and, more in detail, the recent research area of polymorphism haplotyping. Time permitting, we will also describe some problems where optimization can still play a major role, such as microarray data analysis, virus barcoding, primer design.
Jon Lee
An MINLP solution method for a water-network optimization problem
PDF (Paper)
We discuss a formulation and solution method for a water-network design problem using mixed-integer nonlinear programming (MINLP). The problem is to decide on the diameters of the pipes, chosen from a discrete set of available pipes, to support demand at the network junctions. A primary source of nonlinearity is related to the Hazen-Williams friction loss equation. By paying careful attention to the modeling and using available MINLP software, we are able to find very good solutions to problem instances from the literature as well as some real-world data.
Coauthors: Claudia D'Ambrosio, Cristiana Bragalli, Andrea Lodi, Paolo Toth
Branching Rules Revisited
We present a new generalization called reliability branching of today's state-of-the-art strong and pseudo-cost branching strategies for linear programming based branch-and-bound algorithms. After reviewing commonly used branching strategies and performing extensive computational studies we compare different parameter settings and show the superiority of the proposed new strategy. If time permits we also give some generalizations and new applications of SOS branching.
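A schematic sketch of the scoring logic behind pseudo-cost and reliability branching (the names, the threshold value, and the strong-branching stand-in below are all illustrative assumptions, not the implementation evaluated in the talk):

# Schematic reliability-branching scores (illustrative only).
EPS = 1e-6
RELIABILITY_THRESHOLD = 8   # how many pseudo-cost updates make a variable "reliable"

def select_branching_variable(candidates, pseudocost, update_count, strong_branch_gain):
    """candidates: list of (variable index j, fractional part of x_j in the LP solution).
    pseudocost[(j, 'down'/'up')]: average objective gain per unit of fractionality.
    update_count[j]: how often variable j's pseudo-costs have been updated so far.
    strong_branch_gain(j): expensive but accurate (down, up) gain estimate (a stand-in)."""
    best_j, best_score = None, -1.0
    for j, frac in candidates:
        if update_count[j] < RELIABILITY_THRESHOLD:
            down, up = strong_branch_gain(j)              # not yet reliable: strong branching
        else:
            down = pseudocost[(j, 'down')] * frac          # reliable: cheap pseudo-cost estimate
            up = pseudocost[(j, 'up')] * (1.0 - frac)
        score = max(down, EPS) * max(up, EPS)              # product rule favours balanced gains
        if score > best_score:
            best_j, best_score = j, score
    return best_j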
Lisa Miller
Valid inequalities for MIPs and group polyhedra from approximate liftings
We present an approximate lifting scheme to derive valid inequalities for general mixed integer programs and for the group problem. This scheme uses superadditive functions as the building block of integer and continuous lifting procedures. It yields an alternate simple derivation of new as well as known families of cuts that correspond to faces of the group polyhedron. Furthermore, it can be used to obtain new families of two-, three- and four-slope facet-defining inequalities for the master cyclic group problem. This lifting approach is simple and constructive. We highlight its potential computational advantages.
This is joint work with Jean-Philippe Richard and Yanjun Li.
Biobjective Mixed Integer Programming
Multiobjective mathematical programs arise naturally in applications where an analysis of the tradeoff between multiple competing objectives is required. In the first part of this talk, we review the basic principles surrounding the analysis of biobjective mixed-integer programs. We then discuss several related methods for enumerating the Pareto outcomes of a biobjective mixed-integer program and present details of their implementation within the SYMPHONY MILP solver framework. Finally, we compare the performance of these methods on several applications and discuss the performance of a parallel implementation developed using the MW framework.
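For orientation, one standard scalarization used when enumerating Pareto outcomes of a biobjective program $\min\,(f_1(x), f_2(x))$ over $x \in X$ is the $\varepsilon$-constraint problem (stated here only as background; the SYMPHONY-based methods discussed in the talk may differ):
$$\min_{x \in X} \; f_1(x) \quad \text{subject to} \quad f_2(x) \le \varepsilon,$$
whose optimal solutions are weakly Pareto-optimal; sweeping $\varepsilon$ over the attainable range of $f_2$ traces out the Pareto frontier.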
Jean-Philippe P Richard
MIP Lifting Techniques for Mixed Integer Nonlinear Programs
Lifting locally valid inequalities to make them globally valid is a successful methodology for obtaining cuts in MILPs. We show how to extend the approach to generate both linear and nonlinear cuts for MINLPs. We illustrate the approach in two different ways. First, we obtain cuts for mixed integer bilinear sets without adding variables. These cuts are obtained through lifting from a generalization of the concept of a cover and can not be obtained from any single-row relaxation of the IP reformulation of the nonlinear set. Second, we show how to obtain the nonlinear convex hull of various disjunctive nonlinear sets through lifting.
Coauthor: Mohit Tawarmalani
Tallys Yunes
An Integrated Solver for Optimization Problems
One of the central trends in the optimization community over the past several years has been the steady improvement of general-purpose solvers. A logical next step in this evolution is to combine mixed integer linear programming, global optimization, and constraint programming in a single system. Recent research in the area of integrated problem solving suggests that the right combination of different technologies can simplify modeling and speed up computation substantially. In this talk we address this goal by presenting a general purpose solver that achieves low-level integration of solution techniques with a high-level modeling language. We validate our solver with computational experiments on problems in production planning, product configuration and job scheduling. Our results indicate that an integrated approach reduces modeling effort, while solving two of the three problem classes substantially faster than state-of-the-art commercial software. | CommonCrawl |
Why did the universe not collapse to a black hole shortly after the big bang?
Wasn't the density of the universe at the moment after the Big Bang so great as to create a black hole? If the answer is that the universe/space-time can expand anyway what does it imply about what our universe looks like from the outside?
cosmology black-holes space-expansion big-bang singularities
Volker Siegel
pferrel
A high enough energy density is a necessary but not a sufficient condition for black holes to form: one needs a center which will ultimately become the center of the black hole, and one needs the matter that collapses to the black hole to have a low enough velocity so that gravity may squeeze it before the matter manages to fly away and dilute the density.
The latter two conditions are usually almost trivially satisfied for ordinary chunks of matter peacefully sitting at some place of the Universe; but they're almost maximally violated by the matter density right after the Big Bang. This matter has no center - it is almost uniform throughout space - and has high enough velocity (away from itself) that the density eventually gets diluted. And indeed, we know that it did get diluted.
In other words, a collapse of matter (e.g. a star) into a black hole is an idealized calculation that makes certain assumptions about the initial state of the matter. These assumptions are clearly not satisfied by matter after the Big Bang. Instead of a collapse of a star, you should use another simplified version of Einstein's equations of general relativity - namely the Friedmann equations for cosmology. You will get the FRW metric as a solution. When it is uniform to start with, it will pretty much stay uniform.
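For reference, the first Friedmann equation that governs such a homogeneous model can be written in a standard form (added here only for orientation; sign and unit conventions vary between textbooks):
$$\left(\frac{\dot a}{a}\right)^2 \;=\; \frac{8\pi G}{3}\,\rho \;-\; \frac{k c^2}{a^2} \;+\; \frac{\Lambda c^2}{3},$$
where $a(t)$ is the scale factor, $\rho$ the energy density, $k$ the spatial curvature constant and $\Lambda$ the cosmological constant. It describes uniform expansion (or recollapse) of the whole space rather than collapse onto any particular center.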
The visible Universe is, in some sense, analogous to a black hole. There exists a cosmic horizon and we can't see behind it. However, it is more correct to imagine that the interior of the visible space - that increasingly resembles de Sitter space because the cosmological constant increasingly dominates the energy density - should be viewed as an analogy to the exterior of a black hole. And it's the exterior of the visible de Sitter space that plays the role of the interior of a black hole.
The relationship between (namely the ratio of) the mass and the radius for the visible Universe is not too far from the relationship between (or ratio of) the black hole mass and radius of the same size. However, it's not accurate, and it is not supposed to be accurate. The mass/radius ratio is only universal for static (and neutral) black holes localized in an external flat space and our Universe is clearly not one of them.
David Z♦
Luboš Motl
"the matter that collapses to the black hole has to have a low enough velocity so that gravity may squeeze it before the matter manages to fly away and dilute the density" And that is why a star must typically run out of fuel and collapse into a BH? – pferrel Jan 20 '11 at 4:43
@dimwit Stars do not typically collapse into a black hole, only a small portion are massive enough to suffer that fate. Most stars are thought to end up as white dwarves, others as neutron stars and only very few end up as black holes. See also WP article on stellar evolution. – Eugene Seidel May 30 '13 at 22:20
So if I understand, black holes are a symmetry breaking gravitational condensation phenomenon, and you need that velocity cooling in order for gravity to start a collapse onto a particular seed location? Still it's confusing to me that the huge (isotropic) energy density should imply everywhere very high spatial curvature through the stress–energy tensor in the early universe and yet there are no event horizons. – Robotbugs Oct 27 '15 at 22:33
If you're a believer in the BBT, then you don't ask questions about what's on other side of the boundary, – Cinaed Simson Apr 2 '19 at 0:15
The early universe was assumed to be radiation dominate - it took a bit of time for matter to appear. – Cinaed Simson Apr 2 '19 at 0:17
I don't think that the question "what does the universe look like from the outside?" is very meaningful, simply because there is no outside to the universe. As for the black hole: why should high density, i.e. a lot of mass in a little volume, cause the creation of a black hole? If you are thinking about the Schwarzschild solution (and radius), it describes a spherical object outside of which space is empty, and as I said there is no outside to the universe.
MBN
Yeah, not only is it not meaningful, but it isn't even physically-possible. According to the laws of relativity, there is no outside to a universe. – Gareth Meredith Apr 2 '19 at 1:05
The first thing to understand is that the Big Bang was not an explosion that happened at one place in a preexisting, empty space. The Big Bang happened everywhere at once, so there is no location that would be the place where we would expect a black hole's singularity to form. Cosmological models are either exactly or approximately homogeneous. In a homogeneous cosmology, symmetry guarantees that tidal forces vanish everywhere, and that any observer at rest relative to the average motion of matter will measure zero gravitational field. Based on these considerations, it's actually a little surprising that the universe ever developed any structure at all. The only kind of collapse that can occur in a purely homogeneous model is the recollapse of the entire universe in a "Big Crunch," and this happens only for matter densities and values of the cosmological constant that are different from what we actually observe.
A black hole is defined as a region of space from which light rays can't escape to infinity. "To infinity" can be defined in a formal mathematical way,[HE] but this definition requires the assumption that spacetime is asymptotically flat. To see why this is required, imagine a black hole in a universe that is spatially closed. Such a cosmology is spatially finite, so there is no sensible way to define what is meant by escaping "to infinity." In cases of actual astrophysical interest, such as Cygnus X-1 and Sagittarius A*, the black hole is surrounded by a fairly large region of fairly empty interstellar space, so even though our universe isn't asymptotically flat, we can still use a portion of an infinite and asymptotically flat spacetime as an approximate description of that region. But if one wants to ask whether the entire universe is a black hole, or could have become a black hole, then there is no way to even approximately talk about asymptotic flatness, so the standard definition of a black hole doesn't even give a yes-no answer. It's like asking whether beauty is a U.S. citizen; beauty isn't a person, and wasn't born, so we can't decide whether beauty was born in the U.S.
Black holes can be classified, and we know, based on something called a no-hair theorem, that all static black holes fall within a family of solutions to the Einstein field equations called Kerr-Newman black holes. (Non-static black holes settle down quickly to become static black holes.) Kerr-Newman black holes have a singularity at the center, are surrounded by a vacuum, and have nonzero tidal forces everywhere. The singularity is a point at which the world-lines only extend a finite amount of time into the future. In our universe, we observe that space is not a vacuum, and tidal forces are nearly zero on cosmological distance scales (because the universe is homogeneous on these scales). Although cosmological models do have a Big Bang singularity in them, it is not a singularity into which future world-lines terminate in finite time, it's a singularity from which world-lines emerged at a finite time in the past.
A more detailed and technical discussion is given in [Gibbs].
[HE] Hawking and Ellis, The large-scale structure of spacetime, p. 315.
[Gibbs] http://math.ucr.edu/home/baez/physic.../universe.html
This is a FAQ entry written by the following members of physicsforums.com: bcrowell George Jones jim mcnamara marcus PAllen tiny-tim vela
Ben Crowell
Do you know anything about gravitational aether theory? Not to impede on what you said, because what you say is true within the standard model of black holes... except gravitational aether theory offers a novel solution solving many problems in physics. Using some various statements, the idea is that light can only approach zero speeds, but never reach it when mediating in spacetime, regardless of the gravitational field strength. Obviously one big solution to this, is a way out of the information paradox. – Gareth Meredith Apr 2 '19 at 1:14
@Ben Crowell --Using Einstein-Cartan theory, Nickodem J. Poplawski has formulated a multiversal cosmology that would appear inflationary to us, and would result in causally-separated local universes each shaped like the skin (not just the surface) of a rotating basketball, with each forming within a black hole itself initially within the preceding one, through all of an INFINITY of sequentially-smaller scales. This seems consistent with the fact that the fundamental laws of physics are reversible in time. Might it resolve the difficulty described in your 2nd paragraph? – Edouard Jan 16 at 19:46
Sorry, but, for the benefit of anyone googling, I had misspelled P.'s first name: It's actually Nikodem. (There are millions named Poplawski, but this one has several great papers on Arxiv, 2009-2019.) – Edouard Jan 16 at 20:24
The standard ΛCDM model of the Big Bang fits observations to the Friedmann-Robertson-Walker solutions of general relativity, which do not form black holes. Intuitively, the initial expansion is great enough to counteract the usual tendency of matter to gravitationally collapse. As far as we know, the universe looks about the same from every point on the large scale. It is a built-in assumption of the FRW family of solutions, and sometimes called the "Copernican principle."
It doesn't absolutely have to be right, of course, though in a sense it is the simplest possible empirically adequate model, and so is favored by Ockham's razor. There have been attempts to fit the astronomical observations to an isotropic and inhomogeneous solution of GTR (meaning, we would be near "the center"), but to my knowledge they have been less than conclusive.
There is an oversimplified model of spherical stellar collapse that assumes the star has uniform density and no pressure, the interior of which comes out to be equivalent to the k = +1 (positive curvature, closed) contracting FRW universe. The interior is smoothly patched to a Schwarzschild exterior. The k = 0 (flat) and k = -1 (open) cases can be thought of as the interior of such a star in the limit of infinite radius, collapsing from rest and with some finite velocity, respectively. They too can be smoothly patched to a Schwarzschild exterior.
The observed universe is expanding, but we can still say that it's possible for the isotropic and homogeneous region we observe to have an edge, or perhaps even be the interior of a time-reversed black hole. But it should be emphasized that we have no empirical reason to believe that it's anything more exotic than a plain FRW universe. Among more serious alternatives, some models of cosmic inflation have our observed universe as one of many "bubbles" in an inflating background.
Nice and informative +1 – MBN Jan 19 '11 at 6:13
In many ways, the early universe was very similar in structure to a black hole, if one takes the singularity picture seriously. And even then, singularity-free models of black holes still exist, so maybe the early universe does not require one either.
Anyway, this isn't important; what is important is that the mathematics strongly supports an early universe with a structure similar to a black hole which, in the later epoch where the universe has sufficiently cooled down and grown large enough, seems to preserve the weak equivalence principle (if you want more information on this I will elaborate).
It is possible that these analogies be taken seriously enough to speculate that we live in a black-hole-like structure. Certainly there are a lot of arguments which attempt to support it. For instance, the radius of a black hole is directly proportional to its mass, $R \propto m$. The density of a black hole is given by its mass divided by its volume, $\rho = \frac{m}{V}$, and since the volume is proportional to the radius of the black hole to the power of three, $V \propto R^3$, the density of a black hole is inversely proportional to the square of its mass, $\rho \propto m^{-2}$.
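Spelling out that scaling (a back-of-the-envelope estimate using the Schwarzschild radius of a non-rotating hole; the uniform-density volume is of course only a heuristic):
$$R_s = \frac{2Gm}{c^2} \;\propto\; m, \qquad \rho \;\sim\; \frac{m}{\tfrac{4}{3}\pi R_s^3} \;\propto\; \frac{m}{m^3} \;=\; m^{-2}.$$
So doubling the mass cuts this nominal density by a factor of four.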
What does all this mean? It means that if a black hole has a large enough mass then it does not appear to be very dense, which is more or less the description of our own vacuum: it has a lot of matter, around $3 \times 10^{80}$ particles (give or take a few powers of ten) in spacetime alone, the factor of $3$ accounting for how many spacetime dimensions there are. This is certainly not an infinite amount of matter, but it is arguably a lot, and yet our universe does not appear very dense at all.
Early rotation properties, entering as centrifugal and torsion corrections to cosmology (the latter here to prevent singularities from forming), could explain, if our universe is not a black hole analogue, how a universe can break free from a dense Planck epoch (according to Arun and Sivaram). A lot of misconceptions concerning primordial rotation exist even today.
Instead of going into great deal concerning equations I studied, I will give a summary of what I learned from it:
Hoyle and Narlikar showed that rotation decays exponentially with the linear expansion of a universe (this explains nicely why we cannot detect the ''axis of evil'' in the background radiation, expected to be a fingerprint of rotation in the background temperatures).
Dark flow, an unusual flow that seems to show that galaxies are drifting in some particular direction at a very slow speed, could indicate a residual spin that has been left over.
Rotation explains cosmic expansion as a centrifugal force. Arun and Sivaram made a calculation from an expanding model.
Because rotation is suggested to slow down, it would then seem at odds with the fact that the universe is now accelerating. There may be two ways out of this problem. The light we detect from the more distant galaxies tends to tell us something about the past, not about the present moment in that region of spacetime; what appears to be accelerating may be the light from an early universe when it was accelerating. This would explain nicely the Hubble recession, in which the further away the galaxy, the faster it appears to recede. A second option comes from recent studies, in which it has been stated that cosmologists are pretty sure the universe is expanding, but they are no longer sure at what speed.
If particle production happened while the universe expanded due to centrifugal acceleration, then there is no need for inflation to explain why matter appears to be evenly distributed (as noticed by Hoyle). In fact, inflation doesn't answer for anything, according to Penrose, because it requires fine tuning. Though this bit is quite speculative, I have wondered whether the spin has ''taken'' the bulk energy of the vacuum, in an attempt to explain the quantum discrepancy dubbed the ''worst prediction'' ever made.
The fact that the universe could have a rotational property would explain why there is an excess of matter over antimatter, because the universe would possess a particular handedness (chirality); there is also a bulk excess of a particular rotation property observed in a large collection of galaxies, with odds of roughly one in a million of arising by chance.
But most importantly of all (and related to the previous statement), it suggests that there is in fact a preferred frame in the universe so long as it rotates. This will imply a Lorentz-violating theory, but one that satisfies the full Poincaré group of symmetries. According to Sean Carroll, Lorentz-violating theories will involve absolute acceleration.
Some people might say ''dark energy is responsible,'' and there would have been a time I would have disagreed with this, since dark energy only becomes significant when a universe gets sufficiently large - its effects are apparent because we believe the universe is accelerating now.
But I noticed a while back that this is not the case if the impetus of a universe was constant, but strong enough to break free from dense fields. The difference here is that scientists tend to think of this situation as one where the cosmological constant is not really a constant - but I tend to think now that it is a constant, and that the real dynamic effect giving rise to acceleration is a weakening of gravity as the universe gets larger.
Gareth Meredith
Yes, it is dense, but at that "time" space and time are undefined, so the two statements would be nonsense. Plus, the inflation field has way more power than the gravitational pull, so the universe bursts out in just a fraction of a second.
Brief story of the Big Bang and Before it
Before the Big Bang, there was this field called the inflation field, which consists of repulsive particles called inflatons. Theoretically, the inflation field is considered to be the reason for the creation of a new universe: every time the vacuum of the field gets excited, it bursts, forming a new universe.
And the field would stay quiet to regain its energy for the next universe to be born.
And the inflation field might explain the repulsive dark energy. (we don't know yet)
How to get a peek at our universe
Under our best understanding, the only way possible, regardless of being smaller than the Planck scale (which will never be possible), is String Theory.
String Theory says that all matter is made out of strings, and the strings are made out of extra dimensions (10D or 11D). The strings live in a ten-dimensional world. It is the six extra dimensions that make up all the matter! Physicists have calculated just how many ways the extra dimensions can interweave with each other to form a new string, and the result they found was impressive: one followed by 500 zeros. All of them potentially able to give rise to a universe!
Stepping outside
To see our entire universe, first we need to shrink, to a size about a billion billion billion times smaller than an actual human hair. Now, you need to slip into one of the strings, into the extra dimensions, in order to see your entire universe from above. There you will see the entire universe in front of you, along with other parallel universes, called branes. These universes come with different sizes and different dimensions.
And that's how you can actually see the whole universe from "above". Theoretically it works, but in real life it's not quite practical.
Nicole. C
There's a lot of hand-wavy comments here on otherwise, controversial subjects. From string theory, to extra dimensions... and highly arguable statements like ''Under our best understanding, the only way possible, is String Theory,'' is clearly bias and false. There are many alternative much more appealing models than string theory. This is my reason to neg rep, and I don't do it often. – Gareth Meredith Apr 2 '19 at 1:10
nLab > Latest Changes: dependent type theoretic methods in natural language semantics
Comment time: Oct 28th 2015
(edited Oct 28th 2015)
I thought this should be better developed, and it is. Following the work of Sundholm and Ranta, several people are trying to use dependent type theory to understand natural language. This involves a range of things philosophers care about: anaphora, polysemy, modality, factivity, etc.
It should be interesting to bring this work into contact with the work here on dependent type theory in mathematics and physics. Already I see an overlap in the analysis of modality via a type of worlds, between us here and them in Resolving Modal Anaphora in Dependent Type Semantics on p. 89.
So I've started a page dependent type theoretic methods in natural language semantics to list references, and later results.
The idea of 'implicit generalization' in Coq that Mike mentioned here, does it have a formalization? I ask since Dependent Type Semantics approach is to extend MLTT with an @-operator, which deals with underspecified contexts, see section 3 of 'Representing Anaphora with Dependent Types' Springer.
On p. 91 of 'Resolving Modal Anaphora in Dependent Type Semantics' Springer we even see an aspect of our analysis of 'Have you left off beating your wife?', where we discussed the presupposition that this used to happen. In this case, somebody stopping smoking is seen to presuppose that they used to smoke.
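As a rough illustration of the style of dependent-type analysis these papers build on, here is the classic Sundholm/Ranta treatment of donkey anaphora (the notation is schematic and not taken from the cited papers). "Every farmer who owns a donkey beats it" becomes
$$\prod_{z \,:\, \sum_{x:\mathsf{farmer}}\sum_{y:\mathsf{donkey}}\mathsf{owns}(x,y)} \mathsf{beats}\big(\pi_1(z),\, \pi_1(\pi_2(z))\big),$$
where the $\Sigma$-types keep the farmer and the donkey available as projections of $z$, so the pronoun 'it' can be resolved to $\pi_1(\pi_2(z))$ rather than needing a scoped existential; the @-operators discussed above play an analogous role when the antecedent sits in a separate, underspecified context.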
Author: Noam_Zeilberger
Would you like this page to also include references to people trying to use type theory in a broader sense (without necessarily emphasizing the dependent aspect) to understand natural language? If so, we might include a link to Chris Barker and Jim Pryor's recent seminar at NYU, What Philosophers and Linguists Can Learn From Theoretical Computer Science But Didn't Know To Ask, and to Oleg Kiselyov and Chung-chieh Shan's NASSLLI/ESSLLI course Lambda: the ultimate syntax-semantics interface.
Added GF-HoTT as a worked example.
They look like interesting links. Thanks for those!
Re organisation of pages, I would quite like to keep the dependent type approach on its own, but I don't know how much blurring of the lines there is out there.
Do you think there could be a parent page, say, on all logical and category theoretic approaches to natural language? Then would there be a natural division into the different systems?
There's no doubt material already about on nLab. I see we have categorial grammar. linguistics, context-free grammar.
Re #4, Bas could you tell us in a couple of lines what Grammatical Framework is trying to achieve?
Author: RodMcGuire
As far as I can tell, http://www.grammaticalframework.org/~aarne/gf-hott/ has nothing to do with HOTT or dependent type theory being used in linguistics.
Instead it is about parsing an example English text about HOTT into enough of a logical representation that it can generate the corresponding text in French. The logical representation: http://www.grammaticalframework.org/~aarne/gf-hott/ltestLogic.txt.
While that representation may encode dependent type concepts they are only used as uninterpreted templates for parsing and generation.
GF seems in part to be a kitchen sink geared mainly towards parsing, generation, and translation.
Wikipedia: Grammatical_Framework seems like a good overview. There it is stated that
Typologically, GF is a functional programming language. Mathematically, it is a type-theoretic formal system (a logical framework to be precise) based on Martin-Löf's intuitionistic type theory, with additional judgments tailored specifically to the domain of linguistics.
Re: #2, I don't know. It seems likely to me that most type theorists regard it as part of the "sugar" that a proof assistant adds on top, rather than part of the formal system itself.
Author: Richard Williamson
A comment in passing!
A lot of the work that has been done (by Ranta and others) on the use of dependent type theory in linguistics, very interesting though it is, has a similar feel to me as the discussions that take place on the forum and on the n-café every now and then on this: one picks out pieces of natural language, various themes of the study of syntactics and semantics in linguistics, and offers a suggestion for an expression in dependent type theory, but one does not attempt to pin down a rigorous formal theory of semantics that reasonably captures a fragment (an interesting one, naturally!) of a particular natural language. In this way, I feel that the work of Ranta et al. and that of the discussions here actually differs quite significantly from that which characterises 'semantics' as a field of study within linguistics, post-Montague.
If I might offer a suggestion, something that would be of great interest to me at least would be to go through some of the pioneering original papers of Montague (or presentations of Montague work in other texts), and try to understand those (in an equally formal and rigorous way) using dependent type theory, Ranta et al's work, etc. Of course the field of semantics has evolved since then, but there should be still be more than enough to grapple with there.
I am myself interested in, and working on when I find the opportunity, a formal theory of semantics using category theory, but this is of a very different flavour, and takes a different point of departure.
Author: John Dougherty
Re: implicit generalization (#2 and #8). In Coq, implicit generalization is basically syntactic sugar. It was mainly implemented by Amokrane Saïbi, and the theory behind it forms chapters 4 and 5 of his thesis. There, he writes (in very rough translation)
The implicit calculus is to be considered an informal language, allowing one to approximate the usual notation of mathematics (and of programming languages), thus avoiding having to write large terms in the explicit calculus. The idea is that… certain information about the type may be omitted and reconstructed by a type-inference algorithm.
So implicit generalization in Coq is not part of the type system, but a feature of the specification language. However, Alexandre Miquel has a paper on including an implicit dependent product alongside the usual pi-types as an extension to the type theory. One interesting feature of this implicit dependent product is the subtyping it allows. I only just found this paper now while poking around online, though, so I don't know what's become of the theory.
Rod, #7, yes that seems about right. I hope I did not claim HoTT was used for NLP.
David #5: a parent page makes sense, perhaps as a section within the linguistics article. I see there's already a long list of "related concepts" at the bottom of that article, so that could be expanded by interested parties into a more detailed overview.
Richard #9: in case you are not already familiar with it, you might have a look at Chris Barker and Chung-chieh Shan's work, and in particular their recent book Continuations and Natural Language. Again, their work is not about dependent type theory per se, nor is it expressed in categorical language, but they take one aspect which is implicit in Montague's analysis of quantification (namely, the concept of continuation semantics) and try to turn that into a general account of many different natural language phenomena.
Re #9, that sounds like an interesting proposal. I agree that there is excessive focus on limited examples – farmers always seem to be beating their donkeys. If we could nail a piece of, say, Jane Austen, I'd be suitably impressed:
Mr. Collins was not left long to the silent contemplation of his successful love; for Mrs. Bennet, having dawdled about in the vestibule to watch for the end of the conference, no sooner saw Elizabeth open the door and with quick step pass her towards the staircase, than she entered the breakfast-room and congratulated both him and herself in warm terms on the happy prospect of their nearer connection. Mr. Collins received and returned these felicitations with equal pleasure, and then proceeded to relate the particulars of their interview with the result of which he trusted he had every reason to be satisfied, since the refusal which his cousin had steadfastly given him would naturally flow from her bashful modesty and the genuine delicacy of her character.
I've started a page Montague grammar. Is there any particular salient piece of writing to indicate there?
I shifted some material on categorial grammar/typelogical grammar from linguistics to the page categorial grammar. Does anyone know how these terms, categorial grammar/typelogical grammar, are related? Synonyms?
John #10, thanks. So another thing to explore, whether that work makes contact with Bekki's ideas. From the reference in #2:
Anaphora and presupposition triggers are represented by @-operators that take the left context that is passed to a dynamic proposition that contains them. The representation for "He whistled" is (13), where the @$_0$-operator (@$_0 : \gamma_0 \to e$) is fed its left context $c$ (whose type is underspecified as $\gamma_0$) and returns an entity (of type $e$) whom the pronoun refers to.

(13) $(\lambda c)W((a_0 : \gamma_0 \to e)(c))$

@-operators have the following formation rule (@ F).

$$\frac{A : type \;\; A\ true}{(a_i : A) : A}$$

The (@ F) rule requires that a certain type ($\gamma_0 \to e$ in the case of (13)) is inhabited, which is the presupposition triggered by the @-operator. There is no introduction or elimination rule for @-operators.

[How does @ work in Latex? I've used $a$ in a couple of places.]
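For a concrete sense of how such an @-operator gets discharged, here is a hedged reconstruction of the usual two-sentence example (a paraphrase of the standard Dependent Type Semantics story, not a quotation from Bekki): after "A man entered", the left context has the Σ-type

$$\gamma_0 \;=\; \sum_{x : e} \big(\mathrm{man}(x) \times \mathrm{enter}(x)\big),$$

so the presupposition of (13) — that $\gamma_0 \to e$ is inhabited — is discharged by the first projection, taking $a_0 := \lambda c.\, \pi_1(c)$, and (13) reduces to $\lambda c.\, W(\pi_1(c))$: the pronoun is resolved to the man just introduced.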
Re #15, what I understand is that the distinction between traditional categorial grammar and type-logical grammar is roughly analogous to the distinction between proof theory and type theory, with the connotation that the latter is supposed to be more general, or at least more flexible. So in categorial grammar you show that sentences are well-formed by building derivations of sequents in little substructural proof systems, whereas in type-logical grammar you do the same by building derivations of typing judgments in little substructural type systems.
The term "type-logical grammar" is newer, I think it might have been introduced by Morrill (1994), drawing inspiration from an older essay by van Benthem (1983) which is said to have revived interest in Lambek's work by showing how to extract lambda terms as meanings for syntactic derivations. I don't think the distinction categorial grammar vs type-logical grammar is very rigid, though, and approaches like Philippe de Groote's abstract categorial grammars blur the distinction.
I happened to remember that I never replied to #13 and #14, my apologies!
Regarding #13: thanks very much, I was not aware of this work!
Regarding #14: such an example would indeed be impressive! But my main point was not so much to do with the scope of examples, as with the approach one takes. What I was getting at is that I would like to see things turned on their head: rather than saying "here is a nice example, let us translate it into dependent type theory using some rough guiding principles", I would like to see a formal theory of semantics in dependent type theory, which we then test on specific examples. One of the main points of Montague's work (and thus almost all work on semantics by linguists since the 1970s) is to take exactly this step: here is a famous quote regarding the difference between his approach to expressing natural language statements in logic and the 'naïve' one, the latter being the one that I feel is being taken when dependent type theory is discussed here on the nForum and at the n-café.
It should be emphasized that this is not a matter of vague intuition, as in elementary logic courses, but an assertion to which we have assigned exact significance.
Regarding Montague's work, I think the standard reference is to Formal philosophy, which contains all of his most important works.
Slides on Copredication in Homotopy Type Theory: A homotopical approach to formal semantics of natural languages. I haven't read them yet, but I know some people here are interested in this direction.
Thanks, Bas. I'm still hoping to be (one of) the first to get something about the homotopy of HoTT meets natural language into print in the form of the equivariant definite description idea here. Two to-and-fros to a referee already. Third time lucky.
I know some people here…

Where are you?
2013, 10(2): 463-481. doi: 10.3934/mbe.2013.10.463
On latencies in malaria infections and their impact on the disease dynamics
Yanyu Xiao 1 and Xingfu Zou 2
Department of Applied Mathematics, University of Western Ontario, London, Ontario, N6A 5B7, Canada
Department of Applied Mathematics, University of Western Ontario, London, Ontario N6A 5B7
Received February 2012 Revised August 2012 Published January 2013
In this paper, we modify the classic Ross-Macdonald model for malaria disease dynamics by incorporating latencies both for human beings and female mosquitoes. One novelty of our model is that we introduce two general probability functions ($P_1(t)$ and $P_2(t)$) to reflect the fact that the latencies differ from individual to individual. We justify the well-posedness of the new model, identify the basic reproduction number $\mathcal{R}_0$ for the model and analyze the dynamics of the model. We show that when $\mathcal{R}_0 <1$, the disease-free equilibrium $E_0$ is globally asymptotically stable, meaning that the malaria disease will eventually die out; and if $\mathcal{R}_0 >1$, $E_0$ becomes unstable. When $\mathcal{R}_0 >1$, we consider two specific forms for $P_1(t)$ and $P_2(t)$: (i) $P_1(t)$ and $P_2(t)$ are both exponential functions; (ii) $P_1(t)$ and $P_2(t)$ are both step functions. For (i), the model reduces to an ODE system, and for (ii), the long term disease dynamics are governed by a DDE system. In both cases, we are able to show that when $\mathcal{R}_0>1$ then the disease will persist; moreover if there is no recovery ($\gamma_1=0$), then all admissible positive solutions will converge to the unique endemic equilibrium. A significant impact of the latencies is that they reduce the basic reproduction number, regardless of the forms of the distributions.
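To see the mechanism behind the last claim in a familiar special case — this is the standard delayed Ross-Macdonald expression, written in conventional notation ($m$ mosquitoes per human, biting rate $a$, transmission probabilities $b$ and $c$, human recovery rate $r$, mosquito mortality $\mu$), not necessarily the exact formula derived in this paper — suppose only the mosquito latency is a fixed delay $\tau$. Then

$$\mathcal{R}_0 \;=\; \frac{m a^{2} b c\, e^{-\mu \tau}}{r \mu},$$

where $e^{-\mu\tau}<1$ is the probability that an infected mosquito survives its latent period; any latency distribution inserts such a survival factor, so it can only lower $\mathcal{R}_0$, in line with the conclusion above.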
Keywords: Lyapunov function/functional, stability, latency, malaria, persistence, basic reproduction number, delay.
Mathematics Subject Classification: Primary: 92D25, 92D30; Secondary: 37G9.
Citation: Yanyu Xiao, Xingfu Zou. On latencies in malaria infections and their impact on the disease dynamics. Mathematical Biosciences & Engineering, 2013, 10 (2) : 463-481. doi: 10.3934/mbe.2013.10.463
Physics and Astronomy (23)
Materials Research (15)
Statistics and Probability (8)
Proceedings of the Nutrition Society (27)
Epidemiology & Infection (8)
Symposium - International Astronomical Union (6)
MRS Advances (5)
The Canadian Entomologist (5)
Weed Science (5)
Microscopy and Microanalysis (4)
Political Analysis (4)
British Journal of Nutrition (3)
Journal of Fluid Mechanics (3)
Behavioral and Brain Sciences (2)
Developmental Medicine and Child Neurology (2)
Palliative & Supportive Care (2)
The Journal of Agricultural Science (2)
Materials Research Society (15)
Entomological Society of Canada TCE ESC (5)
Weed Science Society of America (5)
AMMS - Australian Microscopy and Microanalysis Society (3)
American Political Science Association (APSA) (3)
American Academy of Cerebral and Developmental Medicine (2)
American Society of Church History (2)
Mineralogical Society (2)
Australian Mathematical Society Inc (1)
Society for American Archaeology (1)
Society for Political Methodology (1)
Structural Analysis in the Social Sciences (1)
Seed-shattering phenology at soybean harvest of economically important weeds in multiple regions of the United States. Part 3: Drivers of seed shatter
Lauren M. Schwartz-Lazaro, Lovreet S. Shergill, Jeffrey A. Evans, Muthukumar V. Bagavathiannan, Shawn C. Beam, Mandy D. Bish, Jason A. Bond, Kevin W. Bradley, William S. Curran, Adam S. Davis, Wesley J. Everman, Michael L. Flessner, Steven C. Haring, Nicholas R. Jordan, Nicholas E. Korres, John L. Lindquist, Jason K. Norsworthy, Tameka L. Sanders, Larry E. Steckel, Mark J. VanGessel, Blake Young, Steven B. Mirsky
Journal: Weed Science / Volume 70 / Issue 1 / January 2022
Published online by Cambridge University Press: 15 November 2021, pp. 79-86
Print publication: January 2022
Seed retention, and ultimately seed shatter, are extremely important for the efficacy of harvest weed seed control (HWSC) and are likely influenced by various agroecological and environmental factors. Field studies investigated seed-shattering phenology of 22 weed species across three soybean [Glycine max (L.) Merr.]-producing regions in the United States. We further evaluated the potential drivers of seed shatter in terms of weather conditions, growing degree days, and plant biomass. Based on the results, weather conditions had no consistent impact on weed seed shatter. However, there was a positive correlation between individual weed plant biomass and delayed weed seed–shattering rates during harvest. This work demonstrates that HWSC can potentially reduce weed seedbank inputs of plants that have escaped early-season management practices and retained seed through harvest. However, smaller individuals of plants within the same population that shatter seed before harvest pose a risk of escaping early-season management and HWSC.
22 - On Trust
from IV - New Perspectives
By Sandra S. Smith, Jasmine M. Sanders
Edited by Mario L. Small, Harvard University, Massachusetts, Brea L. Perry, Indiana University, Bloomington
Bernice Pescosolido, Indiana University, Bloomington, Edward B. Smith, Northwestern University, Illinois
Book: Personal Networks
Published online: 01 October 2021
A growing body of research has examined the role that trust plays in the mobilization of social capital for instrumental and emotional aid. Few have theorized, however, how individuals' own notions of self, and their need for self-confirmation, shape the process by which they decide who to help, when, under what circumstances, and why. In this paper, we consider the role that self-verification plays in the development of trust that facilitates social capital mobilization for emotional and instrumental aid, with specific attention to job-matching assistance. We draw from the work of social psychologists to suggest that we might better understand the circumstances under which people provide instrumental and emotional aid by considering the extent to which their self-views, positive or negative, are confirmed by others around them. Self-verification should feed trust that produces a greater willingness to offer both emotional and instrumental aid. We illustrate this point with a discussion of one empirical case.
Settling behaviour of thin curved particles in quiescent fluid and turbulence
Timothy T.K. Chan, Luis Blay Esteban, Sander G. Huisman, John S. Shrimpton, Bharathram Ganapathisubramani
Journal: Journal of Fluid Mechanics / Volume 922 / 10 September 2021
Published online by Cambridge University Press: 16 July 2021, A30
Print publication: 10 September 2021
The motion of thin curved falling particles is ubiquitous in both nature and industry but is not yet widely examined. Here, we describe an experimental study on the dynamics of thin cylindrical shells resembling broken bottle fragments settling through quiescent fluid and homogeneous anisotropic turbulence. The particles have Archimedes numbers based on the mean descent velocity $0.75 \times 10^{4} \lesssim Ar \lesssim 2.75 \times 10^{4}$. Turbulence reaching a Reynolds number of $Re_\lambda \approx 100$ is generated in a water tank using random jet arrays mounted in a coplanar configuration. After the flow becomes statistically stationary, a particle is released and its three-dimensional motion is recorded using two orthogonally positioned high-speed cameras. We propose a simple pendulum model that accurately captures the velocity fluctuations of the particles in still fluid and find that differences in the falling style might be explained by a closer alignment between the particle's pitch angle and its velocity vector. By comparing the trajectories under background turbulence with the quiescent fluid cases, we measure a decrease in the mean descent velocity in turbulence for the conditions tested. We also study the secondary motion of the particles and identify descent events that are unique to turbulence such as 'long gliding' and 'rapid rotation' events. Lastly, we show an increase in the radial dispersion of the particles under background turbulence and correlate the time scale of descent events with the local settling velocity.
Nomenclature for Pediatric and Congenital Cardiac Care: Unification of Clinical and Administrative Nomenclature – The 2021 International Paediatric and Congenital Cardiac Code (IPCCC) and the Eleventh Revision of the International Classification of Diseases (ICD-11)
The International Society for Nomenclature of Paediatric and Congenital Heart Disease (ISNPCHD)
Jeffrey P. Jacobs, Rodney C. G. Franklin, Marie J. Béland, Diane E. Spicer, Steven D. Colan, Henry L. Walters III, Frédérique Bailliard, Lucile Houyel, James D. St. Louis, Leo Lopez, Vera D. Aiello, J. William Gaynor, Otto N. Krogmann, Hiromi Kurosawa, Bohdan J. Maruszewski, Giovanni Stellin, Paul Morris Weinberg, Marshall Lewis Jacobs, Jeffrey R. Boris, Meryl S. Cohen, Allen D. Everett, Jorge M. Giroud, Kristine J. Guleserian, Marina L. Hughes, Amy L. Juraszek, Stephen P. Seslar, Charles W. Shepard, Shubhika Srivastava, Andrew C. Cook, Adrian Crucean, Lazaro E. Hernandez, Rohit S. Loomba, Lindsay S. Rogers, Stephen P. Sanders, Jill J. Savla, Elif Seda Selamet Tierney, Justin T. Tretter, Lianyi Wang, Martin J. Elliott, Constantine Mavroudis, Christo I. Tchervenkov
Journal: Cardiology in the Young / Volume 31 / Issue 7 / July 2021
Substantial progress has been made in the standardization of nomenclature for paediatric and congenital cardiac care. In 1936, Maude Abbott published her Atlas of Congenital Cardiac Disease, which was the first formal attempt to classify congenital heart disease. The International Paediatric and Congenital Cardiac Code (IPCCC) is now utilized worldwide and has most recently become the paediatric and congenital cardiac component of the Eleventh Revision of the International Classification of Diseases (ICD-11). The most recent publication of the IPCCC was in 2017. This manuscript provides an updated 2021 version of the IPCCC.
The International Society for Nomenclature of Paediatric and Congenital Heart Disease (ISNPCHD), in collaboration with the World Health Organization (WHO), developed the paediatric and congenital cardiac nomenclature that is now within the eleventh version of the International Classification of Diseases (ICD-11). This unification of IPCCC and ICD-11 is the IPCCC ICD-11 Nomenclature and is the first time that the clinical nomenclature for paediatric and congenital cardiac care and the administrative nomenclature for paediatric and congenital cardiac care are harmonized. The resultant congenital cardiac component of ICD-11 was increased from 29 congenital cardiac codes in ICD-9 and 73 congenital cardiac codes in ICD-10 to 318 codes submitted by ISNPCHD through 2018 for incorporation into ICD-11. After these 318 terms were incorporated into ICD-11 in 2018, the WHO ICD-11 team added an additional 49 terms, some of which are acceptable legacy terms from ICD-10, while others provide greater granularity than the ISNPCHD thought was originally acceptable. Thus, the total number of paediatric and congenital cardiac terms in ICD-11 is 367. In this manuscript, we describe and review the terminology, hierarchy, and definitions of the IPCCC ICD-11 Nomenclature. This article, therefore, presents a global system of nomenclature for paediatric and congenital cardiac care that unifies clinical and administrative nomenclature.
The members of ISNPCHD realize that the nomenclature published in this manuscript will continue to evolve. The version of the IPCCC that was published in 2017 has evolved and changed, and it is now replaced by this 2021 version. In the future, ISNPCHD will again publish updated versions of IPCCC, as IPCCC continues to evolve.
Seed-shattering phenology at soybean harvest of economically important weeds in multiple regions of the United States. Part 1: Broadleaf species
Published online by Cambridge University Press: 04 November 2020, pp. 95-103
Potential effectiveness of harvest weed seed control (HWSC) systems depends upon seed shatter of the target weed species at crop maturity, enabling its collection and processing at crop harvest. However, seed retention likely is influenced by agroecological and environmental factors. In 2016 and 2017, we assessed seed-shatter phenology in 13 economically important broadleaf weed species in soybean [Glycine max (L.) Merr.] from crop physiological maturity to 4 wk after physiological maturity at multiple sites spread across 14 states in the southern, northern, and mid-Atlantic United States. Greater proportions of seeds were retained by weeds in southern latitudes and shatter rate increased at northern latitudes. Amaranthus spp. seed shatter was low (0% to 2%), whereas shatter varied widely in common ragweed (Ambrosia artemisiifolia L.) (2% to 90%) over the weeks following soybean physiological maturity. Overall, the broadleaf species studied shattered less than 10% of their seeds by soybean harvest. Our results suggest that some of the broadleaf species with greater seed retention rates in the weeks following soybean physiological maturity may be good candidates for HWSC.
Seed-shattering phenology at soybean harvest of economically important weeds in multiple regions of the United States. Part 2: Grass species
Published online by Cambridge University Press: 26 October 2020, pp. 104-110
Seed shatter is an important weediness trait on which the efficacy of harvest weed seed control (HWSC) depends. The level of seed shatter in a species is likely influenced by agroecological and environmental factors. In 2016 and 2017, we assessed seed shatter of eight economically important grass weed species in soybean [Glycine max (L.) Merr.] from crop physiological maturity to 4 wk after maturity at multiple sites spread across 11 states in the southern, northern, and mid-Atlantic United States. From soybean maturity to 4 wk after maturity, cumulative percent seed shatter was lowest in the southern U.S. regions and increased moving north through the states. At soybean maturity, the percent of seed shatter ranged from 1% to 70%. That range had shifted to 5% to 100% (mean: 42%) by 25 d after soybean maturity. There were considerable differences in seed-shatter onset and rate of progression between sites and years in some species that could impact their susceptibility to HWSC. Our results suggest that many summer annual grass species are likely not ideal candidates for HWSC, although HWSC could substantially reduce their seed output during certain years.
EPA-0973 – Disturbed Regulation of Wakefulness as a Pathogenetic Factor in Affective Disorders and ADHD
U. Hegerl, P. Schönknecht, T. Hensch, S. Olbrich, M. Kluge, H. Himmerich, C. Sander
Within the vigilance regulation model the hyperactivity and sensation seeking observed in overtired children, ADHD and mania are interpreted as an autoregulatory attempt to stabilize vigilance (central nervous arousal) by increasing external stimulation. Correspondingly the withdrawal and sensation avoidance in major depression is interpreted as a reaction to a state of tonically high vigilance (1, 2). Using an EEG-based algorithm to classify automatically short EEG-segments into different vigilance stages as observed during the transition from active wakefulness to drowsiness and sleep onset (VIGALL), both patients with ADHD and mania show an unstable vigilance regulation with rapid drops to lower vigilance stages under quiet rest. The contrary was found in unmedicated patients with major depression (2). Studies will be presented supporting the validity of VIGALL (simultaneous EEG-fMRI and EEG/FDG-PET studies), as well as the neurophysiological, clinical and predictive validity of the vigilance regulation model of affective disorders. Among the far-reaching consequences of the vigilance model is the question whether psychostimulants have similar beneficial effects in mania as observed in ADHD (3), an aspect which is presently studied in an international, randomized controlled trial (4).
First Evidence For Glial Pathology In Late Life Minor Depression: s100b Is Increased In Males With Minor Depression
M. Polyakova, C. Sander, K. Arelin, L. Lampe, T. Luck, J. Kratzsch, K.T. Hoffman, S. Riedel-Heller, A. Villringer, P. Schoenknecht, M. Schroeter
Published online by Cambridge University Press: 23 March 2020, p. S421
Minor depression is diagnosed when a patient suffers from two to four depressive symptoms for at least two weeks. Though minor depression is a widespread phenomenon, its pathophysiology has hardly been studied. To get a first insight into the pathophysiological mechanisms underlying this disorder we assessed serum levels of biomarkers for plasticity, glial and neuronal function: brain-derived neurotrophic factor (BDNF), S100B and neuron specific enolase (NSE). Twenty-seven subjects with minor depressive episode and 82 healthy subjects over 60 years of age were selected from the database of the Leipzig population-based study of civilization diseases (LIFE). Serum levels of BDNF, S100B and NSE were compared between groups, and correlated with age, body-mass index, and degree of white matter hyperintensities (score on Fazekas scale). S100B was significantly increased in males with minor depression in comparison to healthy males, whereas other biomarkers did not differ between groups (P = 0.10–0.66). NSE correlated with Fazekas score in patients with minor depression (rs = 0.436, P = 0.048) and in the whole sample (rs = 0.252, P = 0.019). S100B correlated with body mass index (rs = 0.246, P = 0.031) and with age in healthy subjects (rs = 0.345, P = 0.002). Increased S100B in males with minor depression, without alterations in BDNF and NSE, supports the glial hypothesis of depression. Correlation between white matter hyperintensities and NSE underscores the vascular hypothesis of late life depression.
Showerhead-Assisted Chemical Vapor Deposition of Perovskite Films for Solar Cell Application
S. Sanders, D. Stümmler, J. D. Gerber, J. H. Seidel, G. Simkus, M. Heuken, A. Vescan, H. Kalisch
Journal: MRS Advances / Volume 5 / Issue 8-9 / 2020
Published online by Cambridge University Press: 24 February 2020, pp. 385-393
In the last years, perovskite solar cells have attracted great interest in photovoltaic (PV) research due to their possibility to become a highly efficient and low-cost alternative to silicon solar cells. Cells based on the widely used Pb-containing perovskites have reached power conversion efficiencies (PCE) of more than 20 %. One of the major hurdles for the rapid commercialization of perovskite photovoltaics is the lack of deposition tools and processes for large areas. Chemical vapor deposition (CVD) is an appealing technique because it is scalable and furthermore features superior process control and reproducibility in depositing high-purity films. In this work, we present a novel showerhead-based CVD tool to fabricate perovskite films by simultaneous delivery of precursors from the gas phase. We highlight the control of the perovskite film composition and properties by adjusting the individual precursor deposition rates. Providing the optimal supply of precursors results in stoichiometric perovskite films without any detectable residues.
14 - Making Chondrules by Splashing Molten Planetesimals
from Part II - Possible Chondrule-Forming Mechanisms
By Ian S. Sanders, Edward R. D. Scott
Edited by Sara S. Russell, Natural History Museum, London, Harold C. Connolly Jr., Rowan University, New Jersey, Alexander N. Krot, University of Hawaii, Manoa
Book: Chondrules
Print publication: 19 July 2018, pp 361-374
The antiquity of iron meteorites and the inferred early intense heating by the decay of 26Al suggest that many planetesimals were molten beneath a thin insulating cap at the same time as chondrules were being made. As those planetesimals were colliding and merging, it seems inevitable that impact plumes of droplets from their liquid interiors would have been launched into space and cooled to form chondrules. We call the process splashing; it is quite distinct from making droplets by jetting during hypervelocity impacts. Evidence both for the existence of molten planetesimals, and for the cooling of chondrules within a plume setting, is strong and growing. Detailed petrographic and isotopic features of chondrules, particularly in carbonaceous chondrites (that probably formed beyond the orbit of Jupiter), suggest that the chondrule plume would have been 'dirty' and the otherwise uniform droplets would have been contaminated with earlier-formed dust and larger grains from a variety of sources. The contamination possibly accounts for relict grains, for the spread of oxygen isotopes along the primitive chondrule mineral (PCM) line in carbonaceous chondrites, and for the newly recognized nucleosynthetic isotopic complementarity between chondrules and matrix in Allende.
From Barriers to Assets: Rethinking factors impacting advance care planning for African Americans
Justin J. Sanders, Kimberly S. Johnson, Kimberly Cannady, Joanna Paladino, Dee W. Ford, Susan D. Block, Katherine R. Sterba
Journal: Palliative & Supportive Care / Volume 17 / Issue 3 / June 2019
Published online by Cambridge University Press: 05 June 2018, pp. 306-313
We aimed to explore multiple perspectives regarding barriers to and facilitators of advance care planning (ACP) among African Americans to identify similarities or differences that might have clinical implications.
Qualitative study with health disparities experts (n = 5), community members (n = 9), and seriously ill African American patients and caregivers (n = 11). Using template analysis, interviews were coded to identify intrapersonal, interpersonal, and systems-level themes in accordance with a social ecological framework.
Participants identified seven primary factors that influence ACP for African Americans: religion and spirituality; trust and mistrust; family relationships and experiences; patient-clinician relationships; prognostic communication; care preferences; and preparation and control. These influences echo those described in the existing literature; however, our data highlight consistent differences by group in the degree to which these factors positively or negatively affect ACP. Expert participants reinforced common themes from the literature, for example, that African Americans were not interested in prognostic information because of mistrust and religion. Seriously ill patients were more likely to express trust in their clinicians and to desire prognostic communication; they and community members expressed a desire to prepare for and control the end of life. Religious belief did not appear to negate these desires.
Significance of results
The literature on ACP in African Americans may not accurately reflect the experience of seriously ill African Americans. What are commonly understood as barriers to ACP may in fact not be. We propose reframing stereotypical barriers to ACP, such as religion and spirituality, or family, as cultural assets that should be engaged to enhance ACP. Although further research can inform best practices for engaging African American patients in ACP, findings suggest that respectful, rapport-building communication may facilitate ACP. Clinicians are encouraged to engage in early ACP using respectful and rapport building communication practices, including open-ended questions.
Probing the non-thermal emission in the Perseus cluster with the JVLA
M. Gendron-Marsolais, J. Hlavacek-Larrondo, R. J. van Weeren, T. Clarke, A. C. Fabian, H. T. Intema, G. B. Taylor, K. M. Blundell, J. S. Sanders
Journal: Proceedings of the International Astronomical Union / Volume 14 / Issue S342 / May 2018
Published online by Cambridge University Press: 07 April 2020, pp. 44-52
Print publication: May 2018
We present deep low radio frequency (230-470 MHz) observations from the Karl G. Jansky Very Large Array of the Perseus cluster, probing the non-thermal emission from the old particle population of the AGN outflows. Our observations of this nearby relaxed cool core cluster have revealed a multitude of new structures associated with the mini-halo, extending to hundreds of kpc in size. Its irregular morphology seems to have been influenced both by the AGN activity and by the sloshing motion of the cluster's gas. In addition, it has a filamentary structure similar to that seen in radio relics found in merging clusters. These results illustrate the high-quality images that can be obtained with the new JVLA at low radio frequencies.
Deep Chandra observations of the core of the Perseus cluster
Jeremy S. Sanders
The Perseus cluster is the X-ray brightest cluster in the sky and with deep Chandra observations we are able to map its central structure on very short spatial scales. In addition, the high quality of X-ray data allows detailed spatially-resolved spectroscopy. In this paper I review what these deep observations have told us about AGN feedback in clusters, sloshing and instabilities, and the metallicity distribution.
Safety of tracheal intubation in the presence of cardiac disease in paediatric ICUs
Eleanor A. Gradidge, Adnan Bakar, David Tellez, Michael Ruppe, Sarah Tallent, Geoffrey Bird, Natasha Lavin, Anthony Lee, Vinay Nadkarni, Michelle Adu-Darko, Jesse Bain, Katherine Biagas, Aline Branca, Ryan K. Breuer, Calvin Brown III, Kris Bysani, Guillaume Emeriaud, Sandeep Gangadharan, John S. Giuliano, Jr, Joy D. Howell, Conrad Krawiec, Jan Hau Lee, Simon Li, Keith Meyer, Michael Miksa, Natalie Napolitano, Sholeen Nett, Gabrielle Nuthall, Alberto Orioles, Erin B. Owen, Margaret M. Parker, Simon Parsons, Lee A. Polikoff, Kyle Rehder, Osamu Saito, Ron C. Sanders, Jr, Asha Shenoi, Dennis W. Simon, Peter W. Skippen, Keiko Tarquinio, Anne Thompson, Iris Toedt-Pingel, Karen Walson, Akira Nishisaki, For National Emergency Airway Registry for Children (NEARKIDS) Investigators and Pediatric Acute Lung Injury and Sepsis Investigators (PALISI)
Children with CHD and acquired heart disease have unique, high-risk physiology. They may have a higher risk of adverse tracheal-intubation-associated events, as compared with children with non-cardiac disease.
We sought to evaluate the occurrence of adverse tracheal-intubation-associated events in children with cardiac disease compared to children with non-cardiac disease. A retrospective analysis of tracheal intubations from 38 international paediatric ICUs was performed using the National Emergency Airway Registry for Children (NEAR4KIDS) quality improvement registry. The primary outcome was the occurrence of any tracheal-intubation-associated event. Secondary outcomes included the occurrence of severe tracheal-intubation-associated events, multiple intubation attempts, and oxygen desaturation.
A total of 8851 intubations were reported between July, 2012 and March, 2016. Cardiac patients were younger, more likely to have haemodynamic instability, and less likely to have respiratory failure as an indication. The overall frequency of tracheal-intubation-associated events was not different (cardiac: 17% versus non-cardiac: 16%, p=0.13), nor was the rate of severe tracheal-intubation-associated events (cardiac: 7% versus non-cardiac: 6%, p=0.11). Tracheal-intubation-associated cardiac arrest occurred more often in cardiac patients (2.80 versus 1.28%; p<0.001), even after adjusting for patient and provider differences (adjusted odds ratio 1.79; p=0.03). Multiple intubation attempts occurred less often in cardiac patients (p=0.04), and oxygen desaturations occurred more often, even after excluding patients with cyanotic heart disease.
The overall incidence of adverse tracheal-intubation-associated events in cardiac patients was not different from that in non-cardiac patients. However, the presence of a cardiac diagnosis was associated with a higher occurrence of both tracheal-intubation-associated cardiac arrest and oxygen desaturation.
Fabrication and Characterization of Air-Stable Organic-Inorganic Bismuth-Based Perovskite Solar Cells
S. Sanders, D. Stümmler, P. Pfeiffer, N. Ackermann, G. Simkus, M. Heuken, P. K. Baumann, A. Vescan, H. Kalisch
Journal: MRS Advances / Volume 3 / Issue 51 / 2018
Pb-based organometal halide perovskite solar cells have passed the threshold of 20 % power conversion efficiency (PCE). However, the main issues hampering commercialization are toxic Pb contained in these cells and their instability in ambient air. Therefore, great attention is devoted to replace Pb by Sn or Bi, which are less harmful and - in the case of Bi - also expected to yield enhanced stability. In literature, the most efficient hybrid organic-inorganic methylammonium bismuth iodide (MBI) perovskite solar cells reach PCE up to 0.2 %. In this work, we present spin-coated MBI perovskite solar cells and highlight the impact of the concentration of the perovskite solution on the layer morphology and photovoltaic (PV) characteristics. The solar cells exhibit open-circuit voltages of 0.73 V, which is the highest value published for this type of solar cell. The PCE increases from 0.004 % directly after processing to 0.17 % after 48 h of storage in air. 300 h after exposure to air, the cells still yield 56 % of their peak PCE and 84 % of their maximum open-circuit voltage.
Human Stem Cell Derived Osteocytes in Bone-on-Chip
E. Budyn, N. Gaci, S. Sanders, M. Bensidhoum, E. Schmidt, B. Cinquin, P. Tauc, H. Petite
Published online by Cambridge University Press: 07 March 2018, pp. 1443-1455
Human mesenchymal stem cells were reseeded in decellularized human bone subject to a controlled mechanical loading to create a bone-on-chip that was cultured for over 26 months. The cell morphology and their secretome were characterized using immunohistochemistry and in situ immunofluorescence under confocal microscopy. The presence of stem cell derived osteocytes was confirmed at 547 days. Different cell populations were identified. Some cells were connected by long processes and formed a network. The MSCs' in vitro reorganization and calcium response to in situ mechanical stimulation were compared to those of MLO-Y4 cells reseeded on human bone. The bone-on-chip produced an ECM whose strength was nearly a quarter of that of native bone after 109 days and which contained calcium minerals at 39 days and type I collagen at 256 days. The cytoplasmic calcium concentration variations seemed to adapt to the expected in vivo mechanical load at the successive stages of cell differentiation, in agreement with studies using fluid shear flow stimulation. Some degree of bone-like formation over a long period of time, with the formation of new matrix, was observed.
Are more environmentally sustainable diets with less meat and dairy nutritionally adequate?
S Marije Seves, Janneke Verkaik-Kloosterman, Sander Biesbroek, Elisabeth HM Temme
Journal: Public Health Nutrition / Volume 20 / Issue 11 / August 2017
Our current food consumption patterns, and in particular our meat and dairy intakes, cause high environmental pressure. The present modelling study investigates the impact of diets with less or no meat and dairy foods on nutrient intakes and assesses nutritional adequacy by comparing these diets with dietary reference intakes.
Environmental impact and nutrient intakes were assessed for the observed consumption pattern (reference) and two replacement scenarios. For the replacement scenarios, 30 % or 100 % of meat and dairy consumption (in grams) was replaced with plant-based alternatives and nutrient intakes, greenhouse gas emissions and land use were calculated.
Dutch adults (n 2102) aged 19–69 years.
Replacing 30 % of meat and dairy with plant-based alternatives did not substantially alter percentages below the Estimated Average Requirement (EAR) for all studied nutrients. In the 100 % replacement scenario, SFA intake decreased on average by ~35 % and Na intake by ~8 %. Median Ca intakes were below the Adequate Intake. Estimated habitual fibre, Fe and vitamin D intakes were higher; however, non-haem Fe had lower bioavailability. For Zn, thiamin and vitamin B12, 10–31 % and for vitamin A, 60 % of adults had intakes below the EAR.
Diets with all meat and dairy replaced with plant-based foods lowered environmental impacts by >40 %. Estimated intakes of Zn, thiamin, vitamins A and B12, and probably Ca, were below recommendations. Replacing 30 % was beneficial for SFA, Na, fibre and vitamin D intakes, neutral for other nutrients, while reducing environmental impacts by 14 %.
Backside Contacting for Uniform Luminance in Large-Area OLED
P. Pfeiffer, X. D. Zhang, D. Stümmler, S. Sanders, M. Weingarten, M. Heuken, A. Vescan, H. Kalisch
Published online by Cambridge University Press: 15 February 2017, pp. 2275-2280
We have investigated organic light emitting diode (OLED) backside contacting for the enhancement of luminance uniformity as a superior alternative to gridlines. In this approach, the low-conductivity OLED anode is supported by a high-conductivity auxiliary electrode and vertically contacted through via holes. Electrical simulations of large-area OLEDs have predicted that this method allows comparable luminance uniformity while sacrificing significantly less active area compared to the common gridline approach.
The method for fabricating backside contacts comprises five steps: (1) thin-film encapsulation of the OLED, (2) patterning of the OLED surface with lithography (resist mask defining via hole positions), (3) via hole formation to the bottom anode by a plasma etching process, (4) organic residue removal and sidewall insulation, and (5) contacting of the anode with a high-conductivity auxiliary electrode.
Backside-contacted OLEDs processed by organic vapor phase deposition show high luminance uniformity. Scanning electron microscopy pictures and electrical breakthrough measurements confirm efficient sidewall insulation.
Direct Chemical Vapor Phase Deposition of Organometal Halide Perovskite Layers
D. Stümmler, S. Sanders, P. Pfeiffer, M. Weingarten, A. Vescan, H. Kalisch
Journal: MRS Advances / Volume 2 / Issue 21-22 / 2017
Published online by Cambridge University Press: 16 January 2017, pp. 1189-1194
Recently, organometal halide perovskite solar cells have passed the threshold of 20 % power conversion efficiency (PCE). While such PCE values of perovskite solar cells are already competitive to those of other photovoltaic technologies, processing of large-area devices is still a challenge. Most of the devices reported in literature are prepared by small-scale solution-based processing techniques (e.g. spin-coating). Perovskite solar cells processed by vacuum thermal evaporation (VTE), which show uniform layers and achieve higher PCE and better reproducibility, have also been presented. Regarding the co-evaporation of the perovskite constituents, this technology suffers from large differences in the thermodynamic characteristics of the two species. While the organic components evaporate instantaneously at room temperature at pressures in the range of 10−6 hPa, significantly higher temperatures are needed for reasonable deposition rates of the metal halide compound. In addition, hybrid vapor phase deposition techniques have been developed employing a carrier gas to deposit the organic compound on the previously solution-processed metal halide compound. Generally, vapor phase processes have proven to be a desirable choice for industrial large-area production. In this work, we present a setup for the direct chemical vapor phase deposition (CVD) of methylammonium lead iodide (MAPbI3) employing nitrogen as carrier gas. X-ray diffraction (XRD) and scanning electron microscopy (SEM) measurements are carried out to investigate the crystal quality and structural properties of the resulting perovskite. By optimizing the deposition parameters, we have produced perovskite films with a deposition rate of 30 nm/h which are comparable to those fabricated by solution processing. Furthermore, the developed CVD process can be easily scaled up to higher deposition rates and larger substrates sizes, thus rendering this technique a promising candidate for manufacturing large-area devices. Moreover, CVD of perovskite solar cells can overcome most of the limitations of liquid processing, e.g. the need for appropriate and orthogonal solvents.
Malnutrition in healthcare settings and the role of gastrostomy feeding
Matthew Kurien, Jake Williams, David S. Sanders
Journal: Proceedings of the Nutrition Society / Volume 76 / Issue 3 / August 2017
Published online by Cambridge University Press: 05 December 2016, pp. 352-360
Malnutrition can adversely affect physical and psychological function, influencing both morbidity and mortality. Despite the prevalence of malnutrition and its associated health and economic costs, malnutrition remains under-detected and under-treated in differing healthcare settings. For a subgroup of malnourished individuals, a gastrostomy (a feeding tube placed directly into the stomach) may be required to provide long-term nutritional support. In this review we explore the spectrum and consequences of malnutrition in differing healthcare settings. We then specifically review gastrostomies as a method of providing nutritional support. The review highlights the origins of gastrostomies, and discusses how endoscopic and radiological advances have culminated in an increased demand and placement of gastrostomy feeding tubes. Several studies have raised concerns about the benefits derived following this intervention and also about the patients selected to undergo this procedure. These studies are discussed in detail in this review, alongside suggestions for future research to help better delineate those who will benefit most from this intervention, and improve understanding about how gastrostomies influence nutritional outcomes. | CommonCrawl |
A Control Strategy for Smoothing Active Power Fluctuation of Wind Farm with Flywheel Energy Storage System Based on Improved Wind Power Prediction Algorithm [PDF]
J. C. Wang, X. R. Wang
Energy and Power Engineering (EPE), 2013, DOI: 10.4236/epe.2013.54B075
Abstract: The fluctuation of the active power output of a wind farm has many negative impacts on large-scale wind power integration into the power grid. In this paper, a flywheel energy storage system (FESS) was connected to the AC side of a doubly-fed induction generator (DFIG) wind farm to realize smooth control of the wind power output. Based on an improved wind power prediction algorithm and wind speed-power curve modeling, a new smoothing control strategy using the FESS was proposed. The requirements of power system dispatch for wind power prediction and the flywheel rotor speed limit were taken into consideration in the process. While smoothing the wind power fluctuation, the FESS can track the short-term planned output of the wind farm. Quantitative analysis of simulation results demonstrated that the proposed control strategy can smooth the active power fluctuation of the wind farm effectively and thereby improve the power quality of the power grid.
A Comparative Study of Two Schools: How School Cultures Interplay the Development of Teacher Leadership in Mainland China [PDF]
Feiye Wang, Sally J. Zepeda
Creative Education (CE), 2013, DOI: 10.4236/ce.2013.49B013
This article seeks to understand the interrelated relationship between school culture and teacher leadership development by comparing the experiences of teacher leaders from two middle schools in China that exhibited different kinds of school culture. The researchers argue that the better the school culture, the more prospective teacher leaders would develop and the better those teacher leaders would enact their leadership, which in turn would reinforce the building of a healthy school culture.
Historical earthquake investigation and research in China
J. Wang
Annals of Geophysics, 2004, DOI: 10.4401/ag-3337
Abstract: China is one of the countries with the longest cultural traditions and has suffered many earthquake disasters, so a large number of earthquake documents have been preserved. In this paper we outline basic information on historical earthquake investigation and research in China, such as the collection of historical earthquake data from archives, historical earthquake catalogues and seismic intensity scales. We briefly introduce the major accomplishments of historical earthquake research and discuss some problems encountered. Through examples, we illustrate the solutions to some typical problems. Some suggestions for further work are also given.
Role of Feedback in AGN-HOST Coevolution: A Study from Partially Obscured Active Galactic Nuclei
Abstract: Partially obscured AGNs within a redshift range $z=0.011\sim0.256$ are used to re-study the role of feedback in the AGN-host coevolution issue in terms of their [OIII]$\lambda$5007 emission line profile. The spectra of these objects enable us to determine the AGN's accretion properties directly from their broad H$\alpha$ emission. This is essential for getting rid of the "circular reasoning" in our previous study of narrow emission-line galaxies, in which the [OIII] emission line was used not only as a proxy of AGN's bolometric luminosity, but also as a diagnostic of outflow. In addition, the measurement of $D_n(4000)$ index is improved by removing an underlying AGN's continuum according to the corresponding broad H$\alpha$ emission. With these improvements, we confirm and reinforce the correlation between $L/L_{\mathrm{Edd}}$ and stellar population age. More important is that this correlation is found to be related to both [OIII] line blue asymmetry and bulk blueshift velocity, which suggests a linkage between SMBH growth and host star formation through the feedback process. The current sample of partially obscured AGNs shows that the composite galaxies have younger host stellar population, higher Eddington ratio, less significant [OIII] blue wing and smaller bulk [OIII] line shift than do the Seyfert galaxies .
Evidence of Contribution of Intervening Clouds to GRB's X-ray Column Density
Physics, 2013, DOI: 10.1088/0004-637X/776/2/96
Abstract: The origin of excess of X-ray column density with respect to optical extinction in Gamma-ray bursts (GRBs) is still a puzzle. A proposed explanation of the excess is the photoelectric absorption due to the intervening clouds along a GRB's line-of-sight. We here test this scenario by using the intervening Mg II absorption as a tracer of the neutral hydrogen column density of the intervening clouds. We identify a connection between large X-ray column density (and large column density ratio of $\mathrm{\log(N_{H,X}/N_{HI})}\sim0.5$) and large neutral hydrogen column density probed by the Mg II doublet ratio (DR). In addition, GRBs with large X-ray column density (and large ratio of $\mathrm{\log(N_{H,X}/N_{HI})}>0$) tend to have multiple saturated intervening absorbers with $\mathrm{DR<1.2}$. These results therefore indicate an additional contribution of the intervening system to the observed X-ray column density in some GRBs, although the contribution of the host galaxy alone cannot be excluded based on this study.
Rank three Nichols algebras of diagonal type over fields of positive characteristic
Mathematics, 2015
Abstract: Over fields of arbitrary characteristic we classify all rank three Nichols algebras of diagonal type with a finite root system. Our proof uses the classification of the finite Weyl groupoids of rank three.
Jitter Self-Compton Process: GeV Emission of GRB 100728A
J. Mao, J. Wang
Physics, 2012, DOI: 10.1088/0004-637X/748/2/135
Abstract: Jitter radiation, the emission of relativistic electrons in a random and small-scale magnetic field, has been applied to explain the gamma-ray burst (GRB) prompt emission. The seed photons produced from jitter radiation can be scattered by thermal/nonthermal electrons to the high-energy bands. This mechanism is called jitter self-Compton (JSC) radiation. GRB 100728A, which was simultaneously observed by the Swift and Fermi, is a great example to constrain the physical processes of jitter and JSC. In our work, we utilize jitter/JSC radiation to reproduce the multiwavelength spectrum of GRB 100728A. In particular, due to JSC radiation, the powerful emission above the GeV band is the result of those jitter photons in X-ray band scattered by the relativistic electrons with a mixed thermal-nonthermal energy distribution. We also combine the geometric effect of microemitters to the radiation mechanism, such that the "jet-in-jet" scenario is considered. The observed GRB duration is the result of summing up all of the contributions from those microemitters in the bulk jet.
Application of Jitter Radiation: Gamma-ray Burst Prompt Polarization
Abstract: A high-degree of polarization of gamma-ray burst (GRB) prompt emission has been confirmed in recent years. In this paper, we apply jitter radiation to study the polarization feature of GRB prompt emission. In our framework, relativistic electrons are accelerated by turbulent acceleration. Random and small-scale magnetic fields are generated by turbulence. We further determine that the polarization property of GRB prompt emission is governed by the configuration of the random and small-scale magnetic fields. A two-dimensional compressed slab, which contains stochastic magnetic fields, is applied in our model. If the jitter condition is satisfied, the electron deflection angle in the magnetic field is very small and the electron trajectory can be treated as a straight line. A high-degree of polarization can be achieved when the angle between the line of sight and the slab plane is small. Moreover, micro-emitters with mini-jet structure are considered to be within a bulk GRB jet. The jet "off-axis" effect is intensely sensitive to the observed polarization degree. We discuss the depolarization effect on GRB prompt emission and afterglow. We also speculate that the rapid variability of GRB prompt polarization may be correlated with the stochastic variability of the turbulent dynamo or the magnetic reconnection of plasmas.
Gamma-ray Burst Prompt Emission: Jitter Radiation in Stochastic Magnetic Field Revisited
Abstract: We revisit the radiation mechanism of relativistic electrons in the stochastic magnetic field and apply it to the high-energy emissions of gamma-ray bursts (GRBs). We confirm that jitter radiation is a possible explanation for GRB prompt emission in the condition of a large electron deflection angle. In the turbulent scenario, the radiative spectral property of GRB prompt emission is decided by the kinetic energy spectrum of turbulence. The intensity of the random and small-scale magnetic field is determined by the viscous scale of the turbulent eddy. The microphysical parameters $\epsilon_e$ and $\epsilon_B$ can be obtained. The acceleration and cooling timescales are estimated as well. Due to particle acceleration in magnetized filamentary turbulence, the maximum energy released from the relativistic electrons can reach a value of about $10^{14}$ eV. The GeV GRBs are possible sources of high-energy cosmic-ray.
Using Optimized Distributional Parameters as Inputs in a Sequential Unsupervised and Supervised Modeling of Sunspots Data [PDF]
K. Mwitondi, J. Bugrien, K. Wang
Journal of Software Engineering and Applications (JSEA), 2013, DOI: 10.4236/jsea.2013.67B007
Detecting naturally arising structures in data is central to knowledge extraction from data. In most applications, the main challenge is in the choice of the appropriate model for exploring the data features. The choice is generally poorly understood and any tentative choice may be too restrictive. Growing volumes of data, disparate data sources and modelling techniques entail the need for model optimization via adaptability rather than comparability. We propose a novel two-stage algorithm for modelling continuous data, consisting of an unsupervised stage whereby the algorithm searches through the data for optimal parameter values and a supervised stage that adapts the parameters for predictive modelling. The method is implemented on the sunspots data with inherently Gaussian distributional properties and assumed bi-modality. Optimal values separating high from low cycles are obtained via multiple simulations. Early patterns for each recorded cycle reveal that the first 3 years provide a sufficient basis for predicting the peak. Multiple Support Vector Machine runs using repeatedly improved data parameters show that the approach yields greater accuracy and reliability than conventional approaches and provides a good basis for model selection. Model reliability is established via multiple simulations of this type.
Brownian bridge expansions for Lévy area approximations and particular values of the Riemann zeta function
Part of: Approximations and expansions; Stochastic analysis; Markov processes; Limit theorems; Zeta and $L$-functions: analytic theory; Harmonic analysis in one variable
Published online by Cambridge University Press: 03 November 2022
James Foster
Department of Mathematical Sciences, University of Bath, Bath, BA2 7AY, UK
Karen Habermann*
Department of Statistics, University of Warwick, Coventry, CV4 7AL, UK
*Corresponding author. Email: [email protected]
We study approximations for the Lévy area of Brownian motion which are based on the Fourier series expansion and a polynomial expansion of the associated Brownian bridge. Comparing the asymptotic convergence rates of the Lévy area approximations, we see that the approximation resulting from the polynomial expansion of the Brownian bridge is more accurate than the Kloeden–Platen–Wright approximation, whilst still only using independent normal random vectors. We then link the asymptotic convergence rates of these approximations to the limiting fluctuations for the corresponding series expansions of the Brownian bridge. Moreover, and of interest in its own right, the analysis we use to identify the fluctuation processes for the Karhunen–Loève and Fourier series expansions of the Brownian bridge is extended to give a stand-alone derivation of the values of the Riemann zeta function at even positive integers.
Keywords: Brownian motion, Karhunen–Loève expansion, polynomial approximation, Lévy area, fluctuations, Riemann zeta function
MSC classification
Primary: 60F05: Central limit and other weak theorems; 60H35: Computational methods for stochastic equations; 60J65: Brownian motion
Secondary: 41A10: Approximation by polynomials; 42A10: Trigonometric approximation; 11M06: $\zeta(s)$ and $L(s,\chi)$
Combinatorics, Probability and Computing, First View, pp. 1-28
DOI: https://doi.org/10.1017/S096354832200030X
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
© The Author(s), 2022. Published by Cambridge University Press
One of the well-known applications for expansions of the Brownian bridge is the strong or $L^2(\mathbb{P})$ approximation of stochastic integrals. Most notably, the second iterated integrals of Brownian motion are required by high order strong numerical methods for general stochastic differential equations (SDEs), as discussed in [Reference Clark and Cameron4, Reference Kloeden and Platen22, Reference Rößler33]. Due to integration by parts, such integrals can be expressed in terms of the increment and Lévy area of Brownian motion. The approximation of multidimensional Lévy area is well studied, see [Reference Davie5, Reference Dickinson8, Reference Foster11, Reference Gaines and Lyons13, Reference Gaines and Lyons14, Reference Kloeden, Platen and Wright23, Reference Kuznetsov25, Reference Mrongowius and Rößler32, Reference Wiktorsson35], with the majority of the algorithms proposed being based on a Fourier series expansion or the standard piecewise linear approximation of Brownian motion. Some alternatives include [Reference Davie5, Reference Foster11, Reference Kuznetsov25] which consider methods associated with a polynomial expansion of the Brownian bridge.
Since the advent of Multilevel Monte Carlo (MLMC), introduced by Giles in [Reference Giles16] and subsequently developed in [Reference Belomestny and Nagapetyan2, Reference Debrabant, Ghasemifard and Mattsson6, Reference Debrabant and Rößler7, Reference Giles15, Reference Giles and Szpruch17], Lévy area approximation has become less prominent in the literature. In particular, the antithetic MLMC method introduced by Giles and Szpruch in [Reference Giles and Szpruch17] achieves the optimal complexity for the weak approximation of multidimensional SDEs without the need to generate Brownian Lévy area. That said, there are concrete applications where the simulation of Lévy area is beneficial, such as for sampling from non-log-concave distributions using Itô diffusions. For these sampling problems, high order strong convergence properties of the SDE solver lead to faster mixing properties of the resulting Markov chain Monte Carlo (MCMC) algorithm, see [Reference Li, Wu, Mackey and Erdogdu26].
In this paper, we compare the approximations of Lévy area based on the Fourier series expansion and on a polynomial expansion of the Brownian bridge. We particularly observe their convergence rates and link those to the fluctuation processes associated with the different expansions of the Brownian bridge. The fluctuation process for the polynomial expansion is studied in [Reference Habermann19], and our study of the fluctuation process for the Fourier series expansion allows us, at the same time, to determine the fluctuation process for the Karhunen–Loève expansion of the Brownian bridge. As an attractive side result, we extend the required analysis to obtain a stand-alone derivation of the values of the Riemann zeta function at even positive integers. Throughout, we denote the positive integers by $\mathbb{N}$ and the nonnegative integers by ${\mathbb{N}}_0$ .
Let us start by considering a Brownian bridge $(B_t)_{t\in [0,1]}$ in $\mathbb{R}$ with $B_0=B_1=0$ . This is the unique continuous-time Gaussian process with mean zero and whose covariance function $K_B$ is given by, for $s,t\in [0,1]$ ,
(1.1) \begin{equation} K_B(s,t)=\min\!(s,t)-st. \end{equation}
We are concerned with the following three expansions of the Brownian bridge. The Karhunen–Loève expansion of the Brownian bridge, see Loève [[Reference Loève27], p. 144], is of the form, for $t\in [0,1]$ ,
(1.2) \begin{equation} B_t=\sum _{k=1}^\infty \frac{2\sin\!(k\pi t)}{k\pi } \int _0^1\cos\!(k\pi r){\mathrm{d}} B_r. \end{equation}
The Fourier series expansion of the Brownian bridge, see Kloeden–Platen [[Reference Kloeden and Platen22], p. 198] or Kahane [[Reference Kahane21], Sect. 16.3], yields, for $t\in [0,1]$ ,
(1.3) \begin{equation} B_t=\frac{1}{2}a_0+\sum _{k=1}^\infty \left ( a_k\cos\!(2k\pi t)+b_k\sin\!(2k\pi t) \right ), \end{equation}
where, for $k\in{\mathbb{N}}_0$ ,
(1.4) \begin{equation} a_k=2\int _0^1 \cos\!(2k\pi r)B_r{\mathrm{d}} r\quad \text{and}\quad b_k=2\int _0^1 \sin\!(2k\pi r)B_r{\mathrm{d}} r. \end{equation}
A polynomial expansion of the Brownian bridge in terms of the shifted Legendre polynomials $Q_k$ on the interval $[0,1]$ of degree $k$ , see [Reference Foster, Lyons and Oberhauser12, Reference Habermann19], is given by, for $t\in [0,1]$ ,
(1.5) \begin{equation} B_t=\sum _{k=1}^\infty (2k+1) c_k \int _0^t Q_k(r){\mathrm{d}} r, \end{equation}
where, for $k\in{\mathbb{N}}$ ,
(1.6) \begin{equation} c_k=\int _0^1 Q_k(r){\mathrm{d}} B_r. \end{equation}
These expansions are summarised in Table A1 in Appendix A and they are discussed in more detail in Section 2. For an implementation of the corresponding approximations for Brownian motion as Chebfun examples into MATLAB, see Filip, Javeed and Trefethen [Reference Filip, Javeed and Trefethen9] as well as Trefethen [Reference Trefethen34].
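Both truncations are straightforward to experiment with numerically. The following minimal Python sketch (an illustration only; the grid size $M$, the truncation level $N$ and the midpoint Riemann sums for the stochastic integrals are our own choices) samples a Brownian bridge on a uniform grid and compares the truncated Karhunen–Loève and polynomial reconstructions with the sampled path.

# Illustration only: compare truncations (1.2) and (1.5) on a sampled bridge.
import numpy as np
from numpy.polynomial.legendre import Legendre

rng = np.random.default_rng(0)
M, N = 20000, 50                      # grid resolution and truncation level (assumed)
t = np.linspace(0.0, 1.0, M + 1)
dW = rng.normal(0.0, np.sqrt(1.0 / M), M)
W = np.concatenate(([0.0], np.cumsum(dW)))
B = W - t * W[-1]                     # Brownian bridge B_t = W_t - t W_1
dB = np.diff(B)
mid = 0.5 * (t[:-1] + t[1:])          # midpoints used for the stochastic integrals

# Karhunen-Loeve truncation (1.2)
B_KL = np.zeros_like(t)
for k in range(1, N + 1):
    Z_k = np.sum(np.cos(k * np.pi * mid) * dB)          # int_0^1 cos(k pi r) dB_r
    B_KL += 2.0 * np.sin(k * np.pi * t) / (k * np.pi) * Z_k

# polynomial truncation (1.5), with Q_k(t) = P_k(2t - 1)
B_poly = np.zeros_like(t)
for k in range(1, N + 1):
    P_k = Legendre.basis(k)
    c_k = np.sum(P_k(2.0 * mid - 1.0) * dB)             # c_k = int_0^1 Q_k(r) dB_r
    P_int = P_k.integ()
    B_poly += (2 * k + 1) * c_k * 0.5 * (P_int(2.0 * t - 1.0) - P_int(-1.0))

print(np.max(np.abs(B - B_KL)), np.max(np.abs(B - B_poly)))

Both reconstructions track the sampled path, and the maximal residuals shrink as $N$ grows, in line with the fluctuation results discussed below.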
We remark that the polynomial expansion (1.5) can be viewed as a Karhunen–Loève expansion of the Brownian bridge with respect to the weight function $w$ on $(0,1)$ given by $w(t) = \frac{1}{t(1-t)}$ . This approach is employed in [Reference Foster, Lyons and Oberhauser12] to derive the expansion along with the standard optimality property of Karhunen–Loève expansions. In this setting, the polynomial approximation of $(B_t)_{t\in [0,1]}$ is optimal among truncated series expansions in a weighted $L^2(\mathbb{P})$ sense corresponding to the nonconstant weight function $w$ . To avoid confusion, we still adopt the convention throughout to reserve the term Karhunen–Loève expansion for (1.2), whereas (1.5) will be referred to as the polynomial expansion.
Before we investigate the approximations of Lévy area based on the different expansions of the Brownian bridge, we first analyse the fluctuations associated with the expansions. The fluctuation process for the polynomial expansion is studied and characterised in [Reference Habermann19], and these results are recalled in Section 2.3. The fluctuation processes $(F_t^{N,1})_{t\in [0,1]}$ for the Karhunen–Loève expansion and the fluctuation processes $(F_t^{N,2})_{t\in [0,1]}$ for the Fourier series expansion are defined as, for $N\in{\mathbb{N}}$ ,
(1.7) \begin{equation} F_t^{N,1}=\sqrt{N}\left (B_t-\sum _{k=1}^N\frac{2\sin\!(k\pi t)}{k\pi }\int _0^1\cos\!(k\pi r){\mathrm{d}} B_r\right ), \end{equation}
and
(1.8) \begin{equation} F_t^{N,2}=\sqrt{2N}\left (B_t-\frac{1}{2}a_0-\sum _{k=1}^N\left (a_k\cos\!(2k\pi t)+b_k\sin\!(2k\pi t)\right )\right ). \end{equation}
The scaling by $\sqrt{2N}$ in the process $(F_t^{N,2})_{t\in [0,1]}$ is the natural scaling to use because increasing $N$ by one results in the subtraction of two additional Gaussian random variables. We use $\mathbb{E}$ to denote the expectation with respect to Wiener measure $\mathbb{P}$ .
Theorem 1.1. The fluctuation processes $(F_t^{N,1})_{t\in [0,1]}$ for the Karhunen–Loève expansion converge in finite dimensional distributions as $N\to \infty$ to the collection $(F_t^1)_{t\in [0,1]}$ of independent Gaussian random variables with mean zero and variance
\begin{equation*} {\mathbb {E}}\left [\left (F_t^1\right )^2 \right ]= \begin {cases} \dfrac{1}{\pi ^2} & \text {if }\;t\in (0,1)\\[12pt] 0 & \text {if }\; t=0\;\text { or }\;t=1 \end {cases}\!. \end{equation*}
The fluctuation processes $(F_t^{N,2})_{t\in [0,1]}$ for the Fourier expansion converge in finite dimensional distributions as $N\to \infty$ to the collection $(F_t^2)_{t\in [0,1]}$ of zero-mean Gaussian random variables whose covariance structure is given by, for $s,t\in [0,1]$ ,
\begin{equation*} {\mathbb {E}}\left [F_s^2 F_t^2 \right ]= \begin {cases} \dfrac{1}{\pi ^2} & \text {if }\;s=t\text { or } s,t\in \{0,1\}\\[12pt] 0 & \text {otherwise} \end {cases}. \end{equation*}
The difference between the fluctuation result for the Karhunen–Loève expansion and the fluctuation result for the polynomial expansion, see [[Reference Habermann19], Theorem 1.6] or Section 2.3, is that there the variances of the independent Gaussian random variables follow the semicircle $\frac{1}{\pi }\sqrt{t(1-t)}$ whereas here they are constant on $(0,1)$ , see Figure 1. The limit fluctuations for the Fourier series expansion further exhibit endpoints which are correlated.
Figure 1. Table showing basis functions and fluctuations for the Brownian bridge expansions.
As pointed out in [Reference Habermann19], the reason for considering convergence in finite dimensional distributions for the fluctuation processes is that the limit fluctuations neither have a realisation as processes in $C([0,1],{\mathbb{R}})$ , nor are they equivalent to measurable processes.
We prove Theorem 1.1 by studying the covariance functions of the Gaussian processes $(F_t^{N,1})_{t\in [0,1]}$ and $(F_t^{N,2})_{t\in [0,1]}$ given in Lemma 2.2 and Lemma 2.3 in the limit $N\to \infty$ . The key ingredient is the following limit theorem for sine functions, which we see concerns the pointwise convergence for the covariance function of $(F_t^{N,1})_{t\in [0,1]}$ .
Theorem 1.2. For all $s,t\in [0,1]$ , we have
\begin{equation*} \lim _{N\to \infty }N\left (\min\!(s,t)-st-\sum _{k=1}^N\frac {2\sin\!(k\pi s)\sin\!(k\pi t)}{k^2\pi ^2}\right )= \begin {cases} \dfrac{1}{\pi ^2} & \text {if } s=t\text { and } t\in (0,1)\\[10pt] 0 & \text {otherwise} \end {cases}. \end{equation*}
The above result serves as one of four base cases in the analysis performed in [Reference Habermann18] of the asymptotic error arising when approximating the Green's function of a Sturm–Liouville problem through a truncation of its eigenfunction expansion. The work [Reference Habermann18] offers a unifying view for Theorem 1.2 and [[Reference Habermann19], Theorem 1.5].
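The on- and off-diagonal behaviour in Theorem 1.2 is easy to observe numerically. The following short check (an illustration only; the evaluation points $0.3$ and $0.7$ are arbitrary) prints the prelimit quantity for increasing $N$ alongside the limit value $\frac{1}{\pi^2}$.

# Illustration only: prelimit quantity of Theorem 1.2 on and off the diagonal.
import numpy as np

def prelimit(N, s, t):
    k = np.arange(1, N + 1)
    return N * (min(s, t) - s * t
                - np.sum(2.0 * np.sin(k * np.pi * s) * np.sin(k * np.pi * t) / (k * np.pi) ** 2))

for N in (10, 100, 1000, 10000):
    print(N, prelimit(N, 0.3, 0.3), prelimit(N, 0.3, 0.7), 1.0 / np.pi ** 2)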
The proof of Theorem 1.2 is split into an on-diagonal and an off-diagonal argument. We start by proving the convergence on the diagonal away from its endpoints by establishing locally uniform convergence, which ensures continuity of the limit function, and by using a moment argument to identify the limit. As a consequence of the on-diagonal convergence, we obtain the next corollary which then implies the off-diagonal convergence in Theorem 1.2.
Corollary 1.3. For all $t\in (0,1)$ , we have
\begin{equation*} \lim _{N\to \infty } N\sum _{k=N+1}^\infty \frac {\cos\!(2k\pi t)}{k^2\pi ^2}=0. \end{equation*}
Moreover, and of interest in its own right, the moment analysis we use to prove the on-diagonal convergence in Theorem 1.2 leads to a stand-alone derivation of the result that the values of the Riemann zeta function $\zeta \colon{\mathbb{C}}\setminus \{1\}\to{\mathbb{C}}$ at even positive integers can be expressed in terms of the Bernoulli numbers $B_{2n}$ as, for $n\in{\mathbb{N}}$ ,
\begin{equation*} \zeta (2n)=(\!-\!1)^{n+1}\frac {\left (2\pi \right )^{2n}B_{2n}}{2(2n)!}, \end{equation*}
see Borevich and Shafarevich [Reference Borevich and Shafarevich3]. In particular, the identity
(1.9) \begin{equation} \sum _{k=1}^\infty \frac{1}{k^2} =\frac{\pi ^2}{6}, \end{equation}
that is, the resolution to the Basel problem posed by Mengoli [Reference Mengoli28] is a consequence of our analysis and not a prerequisite for it.
We turn our attention to studying approximations of second iterated integrals of Brownian motion. For $d\geq 2$ , let $(W_t)_{t\in [0,1]}$ denote a $d$ -dimensional Brownian motion and let $(B_t)_{t\in [0,1]}$ given by $B_t=W_t-tW_1$ be its associated Brownian bridge in ${\mathbb{R}}^d$ . We denote the independent components of $(W_t)_{t\in [0,1]}$ by $(W_t^{(i)})_{t\in [0,1]}$ , for $i\in \{1,\ldots,d\}$ , and the components of $(B_t)_{t\in [0,1]}$ by $(B_t^{(i)})_{t\in [0,1]}$ , which are also independent by construction. We now focus on approximations of Lévy area.
Definition 1.4. The Lévy area of the $d$ -dimensional Brownian motion $W$ over the interval $[s,t]$ is the antisymmetric $d\times d$ matrix $A_{s,t}$ with the following entries, for $i,j\in \{1,\ldots,d\}$ ,
\begin{equation*} A_{s,t}^{(i,j)} \;:\!=\; \frac {1}{2}\left (\int _s^t \left (W_r^{(i)} - W_s^{(i)}\right ){\mathrm {d}} W_r^{(j)} - \int _s^t \left (W_r^{(j)} - W_s^{(j)}\right ){\mathrm {d}} W_r^{(i)}\right ). \end{equation*}
For an illustration of Lévy area for a two-dimensional Brownian motion, see Figure 2.
Figure 2. Lévy area is the chordal area between independent Brownian motions.
Remark 1.5. Given the increment $W_t - W_s$ and the Lévy area $A_{s,t}$ , we can recover the second iterated integrals of Brownian motion using integration by parts as, for $i,j\in \{1,\ldots,d\}$ with $i\neq j$ ,
\begin{equation*} \int _s^t \left (W_r^{(i)} - W_s^{(i)}\right ){\mathrm {d}} W_r^{(j)} = \frac {1}{2}\left (W_t^{(i)} - W_s^{(i)}\right )\left (W_t^{(j)} - W_s^{(j)}\right ) + A_{s,t}^{(i,j)}. \end{equation*}
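As a concrete illustration of Definition 1.4 and Remark 1.5 (our own sketch; the grid size $M$ and the left-point Riemann sums are arbitrary choices), the Lévy area of a sampled two-dimensional path can be estimated as follows.

# Illustration only: Riemann-sum estimate of A_{0,1}^{(1,2)} for one sampled path.
import numpy as np

rng = np.random.default_rng(1)
M = 100000
dW = rng.normal(0.0, np.sqrt(1.0 / M), size=(M, 2))
W = np.vstack(([0.0, 0.0], np.cumsum(dW, axis=0)))

# A_{0,1}^{(1,2)} = (1/2)(int_0^1 W^(1) dW^(2) - int_0^1 W^(2) dW^(1)), left-point sums
A12 = 0.5 * np.sum(W[:-1, 0] * dW[:, 1] - W[:-1, 1] * dW[:, 0])
# second iterated integral recovered via Remark 1.5
I12 = 0.5 * W[-1, 0] * W[-1, 1] + A12
print(A12, I12)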
We consider the sequences $\{a_k\}_{k\in{\mathbb{N}}_0}$ , $\{b_k\}_{k\in{\mathbb{N}}}$ and $\{c_k\}_{k\in{\mathbb{N}}}$ of Gaussian random vectors, where the coordinate random variables $a_k^{(i)}$ , $b_k^{(i)}$ and $c_k^{(i)}$ are defined for $i\in \{1,\ldots,d\}$ by (1.4) and (1.6), respectively, in terms of the Brownian bridge $(B_t^{(i)})_{t\in [0,1]}$ . Using the random coefficients arising from the Fourier series expansion(1.3), we obtain the approximation of Brownian Lévy area proposed by Kloeden and Platen [Reference Kloeden and Platen22] and Milstein [Reference Milstein31]. Further approximating terms so that only independent random coefficients are used yields the Kloeden–Platen–Wright approximation in [Reference Kloeden, Platen and Wright23, Reference Milstein30, Reference Wiktorsson35]. Similarly, using the random coefficients from the polynomial expansion (1.5), we obtain the Lévy area approximation first proposed by Kuznetsov in [Reference Kuznetsov24]. These Lévy area approximations are summarised in Table A2 in Appendix A and have the following asymptotic convergence rates.
Theorem 1.6 (Asymptotic convergence rates of Lévy area approximations). For $n\in{\mathbb{N}}$ , we set $N=2n$ and define approximations $\widehat{A}_{n}$ , $\widetilde{A}_{n}$ and $\overline{A}_{2n}$ of the Lévy area $A_{0,1}$ by, for $i,j\in \{1,\ldots,d\}$ ,
(1.10) \begin{align} \widehat{A}_{n}^{ (i,j)} \;:\!=\; \frac{1}{2}\left (a_0^{(i)}W_1^{(j)} - W_1^{(i)}a_0^{(j)}\right ) + \pi \sum _{k=1}^{n-1} k\left (a_{k}^{(i)}b_k^{(j)} - b_k^{(i)}a_{k}^{(j)}\right ), \end{align}
(1.11) \begin{align} \widetilde{A}_{n}^{ (i,j)} \;:\!=\; \pi \sum _{k=1}^{n-1} k\left (a_{k}^{(i)}\left (b_k^{(j)} - \frac{1}{k\pi }W_1^{(j)}\right ) - \left (b_k^{(i)} - \frac{1}{k\pi }W_1^{(i)}\right )a_{k}^{(j)}\right ), \end{align}
(1.12) \begin{align} \overline{A}_{2n}^{ (i,j)} \;:\!=\; \frac{1}{2}\left (W_1^{(i)}c_1^{(j)} - c_1^{(i)}W_1^{(j)}\right ) + \frac{1}{2}\sum _{k=1}^{2n-1}\left (c_k^{(i)}c_{k+1}^{(j)} - c_{k+1}^{(i)}c_k^{(j)}\right ). \end{align}
Then $\widehat{A}_{n}$ , $\widetilde{A}_{n}$ and $\overline{A}_{2n}$ are antisymmetric $d \times d$ matrices and, for $i\neq j$ and as $N\to \infty$ , we have
\begin{align*}{\mathbb{E}}\bigg [\Big (A_{0,1}^{(i,j)} - \widehat{A}_{n}^{ (i,j)}\Big )^2 \bigg ] & \sim \frac{1}{\pi ^2}\bigg (\frac{1}{N}\bigg ),\\[3pt]{\mathbb{E}}\bigg [\Big (A_{0,1}^{(i,j)} - \widetilde{A}_{n}^{ (i,j)}\Big )^2 \bigg ] & \sim \frac{3}{\pi ^2}\bigg (\frac{1}{N}\bigg ),\\[3pt]{\mathbb{E}}\bigg [\Big (A_{0,1}^{(i,j)} - \overline{A}_{2n}^{ (i,j)}\Big )^2 \bigg ] & \sim \frac{1}{8}\bigg (\frac{1}{N}\bigg ). \end{align*}
The asymptotic convergence rates in Theorem 1.6 are phrased in terms of $N$ since the number of Gaussian random vectors required to define the above Lévy area approximations is $N$ or $N-1$ , respectively. Of course, it is straightforward to define the polynomial approximation $\overline{A}_{n}$ for $n\in{\mathbb{N}}$ , see Theorem 5.4.
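As an illustration of (1.12) (our own sketch; the grid size, the truncation level $n$ and the Riemann-sum evaluation of the coefficients $c_k$ are arbitrary choices), the polynomial approximation $\overline{A}_{2n}$ can be assembled from a sampled two-dimensional path and compared with a fine Riemann-sum estimate of $A_{0,1}$.

# Illustration only: polynomial Levy area approximation (1.12) versus a fine Riemann sum.
import numpy as np
from numpy.polynomial.legendre import Legendre

rng = np.random.default_rng(2)
M, n = 200000, 10                              # grid size and truncation level (assumed)
dW = rng.normal(0.0, np.sqrt(1.0 / M), size=(M, 2))
W = np.vstack(([0.0, 0.0], np.cumsum(dW, axis=0)))
t = np.linspace(0.0, 1.0, M + 1)
B = W - t[:, None] * W[-1]                     # bridge components B^(i) = W^(i) - t W_1^(i)
dB = np.diff(B, axis=0)
mid = 0.5 * (t[:-1] + t[1:])

# c_k^(i) = int_0^1 Q_k(r) dB_r^(i) with Q_k(r) = P_k(2r - 1), for k = 1, ..., 2n
c = np.array([[np.sum(Legendre.basis(k)(2.0 * mid - 1.0) * dB[:, i]) for i in range(2)]
              for k in range(1, 2 * n + 1)])

A_ref = 0.5 * np.sum(W[:-1, 0] * dW[:, 1] - W[:-1, 1] * dW[:, 0])
A_poly = 0.5 * (W[-1, 0] * c[0, 1] - c[0, 0] * W[-1, 1]) \
    + 0.5 * sum(c[k - 1, 0] * c[k, 1] - c[k, 0] * c[k - 1, 1] for k in range(1, 2 * n))
print(A_ref, A_poly)

With $n = 10$, the two printed values typically differ by an amount consistent with the $O(N^{-1/2})$ error predicted by Theorem 1.6.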
Intriguingly, the convergence rates for the approximations resulting from the Fourier series and the polynomial expansion correspond exactly with the areas under the limit variance function for each fluctuation process, which are
\begin{equation*} \int _0^1\frac {1}{\pi ^2}{\mathrm {d}} t=\frac {1}{\pi ^2} \quad \text {and}\quad \int _0^1 \frac {1}{\pi }\sqrt {t(1-t)}{\mathrm {d}} t=\frac {1}{8}. \end{equation*}
We provide heuristics demonstrating how this correspondence arises at the end of Section 5.
By adding an additional Gaussian random matrix that matches the covariance of the tail sum, it is possible to derive high order Lévy area approximations with $O(N^{-1})$ convergence in $L^2(\mathbb{P})$ . Wiktorsson [Reference Wiktorsson35] proposed this approach using the Kloeden–Platen–Wright approximation (1.11) and this was recently improved by Mrongowius and Rößler in [Reference Mrongowius and Rößler32] who use the approximation (1.10) obtained from the Fourier series expansion (1.3).
We expect that an $O(N^{-1})$ polynomial-based approximation is possible using the same techniques. While this approximation should be slightly less accurate than the Fourier approach, we expect it to be easier to implement due to both the independence of the coefficients $\{c_k\}_{k\in{\mathbb{N}}}$ and the covariance of the tail sum having a closed-form expression, see Theorem 5.4. Moreover, this type of method has already been studied in [Reference Davie5, Reference Flint and Lyons10, Reference Foster11] with Brownian Lévy area being approximated by
(1.13) \begin{equation} \widehat{A}_{0,1}^{ (i,j)} \;:\!=\; \frac{1}{2}\left (W_1^{(i)}c_1^{(j)} - c_1^{(i)}W_1^{(j)}\right ) + \lambda _{ 0,1}^{(i,j)}, \end{equation}
where the antisymmetric $d\times d$ matrix $\lambda _{ 0,1}$ is normally distributed and designed so that $\widehat{A}_{ 0,1}$ has the same covariance structure as the Brownian Lévy area $A_{ 0,1}$ . Davie [Reference Davie5] as well as Flint and Lyons [Reference Flint and Lyons10] generate each $(i,j)$ -entry of $\lambda _{0,1}$ independently as $\lambda _{ 0,1}^{(i,j)} \sim \mathcal{N}\big (0, \frac{1}{12}\big )$ for $i \lt j$ . In [Reference Foster11], it is shown that the covariance structure of $A_{0,1}$ can be explicitly computed conditional on both $W_1$ and $c_1$ . By matching the conditional covariance structure of $A_{ 0,1}$ , the work [Reference Foster11] obtains the approximation
\begin{equation*} \lambda _{ 0,1}^{(i,j)} \sim \mathcal {N}\bigg (0, \frac {1}{20} + \frac {1}{20}\Big (\big (c_1^{(i)}\big )^2 + \big (c_1^{(j)}\big )^2\Big )\bigg ), \end{equation*}
where the entries $ \{\lambda _{ 0,1}^{(i,j)} \}_{i \lt j}$ are still generated independently, but only after $c_1$ has been generated.
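A minimal sampler for (1.13) with this conditional variance might look as follows (our own sketch for $d = 2$; it uses the facts, recalled in Section 2, that $c_1^{(i)} \sim \mathcal{N}\big(0,\frac{1}{3}\big)$ and that the Brownian bridge, and hence $c_1$, is independent of $W_1$).

# Illustration only: one-step sampler for the approximation (1.13) in dimension d = 2.
import numpy as np

rng = np.random.default_rng(3)

def levy_area_sample_2d(rng):
    """One sample of (W_1, c_1, A_hat) following (1.13) with the conditional
    variance quoted above from [Foster11] (d = 2)."""
    W1 = rng.normal(0.0, 1.0, 2)                       # Brownian increment over [0, 1]
    c1 = rng.normal(0.0, np.sqrt(1.0 / 3.0), 2)        # c_1^(i), independent of W1
    var_lam = 1.0 / 20.0 + (c1[0] ** 2 + c1[1] ** 2) / 20.0
    lam12 = rng.normal(0.0, np.sqrt(var_lam))          # lambda_{0,1}^{(1,2)} given c_1
    A12 = 0.5 * (W1[0] * c1[1] - c1[0] * W1[1]) + lam12
    return W1, c1, A12

samples = np.array([levy_area_sample_2d(rng)[2] for _ in range(200000)])
print(samples.mean(), samples.var())   # mean near 0; variance near 1/4, the variance of A_{0,1}^{(1,2)}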
By rescaling (1.13) to approximate Lévy area on $\big [\frac{k}{N}, \frac{k+1}{N}\big ]$ and summing over $k\in \{0,\ldots, N-1\}$ , we obtain a fine discretisation of $A_{0,1}$ involving $2N$ Gaussian random vectors and $N$ random matrices. In [Reference Davie5, Reference Flint and Lyons10, Reference Foster11], the Lévy area of Brownian motion and this approximation are probabilistically coupled in such a way that $L^{2}(\mathbb{P})$ convergence rates of $O(N^{-1})$ can be established. Furthermore, the efficient Lévy area approximation (1.13) can be used directly in numerical methods for SDEs, which then achieve $L^{2}(\mathbb{P})$ convergence of $O(N^{-1})$ under certain conditions on the SDE vector fields, see [Reference Davie5, Reference Flint and Lyons10]. We leave such high order polynomial-based approximations of Lévy area as a topic for future work.
The paper is organised as follows.
In Section 2, we provide an overview of the three expansions we consider for the Brownian bridge, and we characterise the associated fluctuation processes $(F_t^{N,1})_{t\in [0,1]}$ and $(F_t^{N,2})_{t\in [0,1]}$ . Before discussing their behaviour in the limit $N\to \infty$ , we initiate the moment analysis used to prove the on-diagonal part of Theorem 1.2 and we extend the analysis to determine the values of the Riemann zeta function at even positive integers in Section 3. The proof of Theorem 1.2 follows in Section 4, where we complete the moment analysis and establish a locally uniform convergence to identify the limit on the diagonal, before we deduce Corollary 1.3, which then allows us to obtain the off-diagonal convergence in Theorem 1.2. We close Section 4 by proving Theorem 1.1. In Section 5, we compare the asymptotic convergence rates of the different approximations of Lévy area, which results in a proof of Theorem 1.6.
2. Series expansions for the Brownian bridge
We discuss the Karhunen–Loève expansion as well as the Fourier expansion of the Brownian bridge more closely, and we derive expressions for the covariance functions of their Gaussian fluctuation processes.
In our analysis, we frequently use a type of Itô isometry for Itô integrals with respect to a Brownian bridge, and we include its statement and proof for completeness.
Lemma 2.1. Let $(B_t)_{t\in [0,1]}$ be a Brownian bridge in $\mathbb{R}$ with $B_0=B_1=0$ , and let $f,g{\kern-0.5pt}\colon [0,1]\to{\mathbb{R}}$ be integrable functions. Setting $F(1)=\int _0^1 f(t){\mathrm{d}} t$ and $G(1)=\int _0^1 g(t){\mathrm{d}} t$ , we have
\begin{equation*} {\mathbb {E}}\left [\left (\int _0^1 f(t){\mathrm {d}} B_t\right )\left (\int _0^1 g(t){\mathrm {d}} B_t\right )\right ] =\int _0^1 f(t)g(t) {\mathrm {d}} t - F(1)G(1). \end{equation*}
Proof. For a standard one-dimensional Brownian motion $(W_t)_{t\in [0,1]}$ , the process $(W_t-t W_1)_{t\in [0,1]}$ has the same law as the Brownian bridge $(B_t)_{t\in [0,1]}$ . In particular, the random variable $\int _0^1 f(t){\mathrm{d}} B_t$ is equal in law to the random variable
\begin{equation*} \int _0^1 f(t){\mathrm {d}} W_t -W_1\int _0^1f(t){\mathrm {d}} t =\int _0^1 f(t){\mathrm {d}} W_t - W_1 F(1). \end{equation*}
Using a similar expression for $\int _0^1 g(t){\mathrm{d}} B_t$ and applying the usual Itô isometry, we deduce that
\begin{align*} &{\mathbb{E}}\left [\left (\int _0^1 f(t){\mathrm{d}} B_t\right )\left (\int _0^1 g(t){\mathrm{d}} B_t\right )\right ]\\[5pt] &\qquad =\int _0^1 f(t)g(t){\mathrm{d}} t-F(1)\int _0^1 g(t){\mathrm{d}} t -G(1)\int _0^1 f(t){\mathrm{d}} t+F(1)G(1)\\[5pt] &\qquad =\int _0^1 f(t)g(t){\mathrm{d}} t - F(1)G(1), \end{align*}
as claimed.
2.1 The Karhunen–Loève expansion
Mercer's theorem, see [Reference Mercer.29], states that for a continuous symmetric nonnegative definite kernel $K\colon [0,1]\times [0,1]\to{\mathbb{R}}$ there exists an orthonormal basis $\{e_k\}_{k\in{\mathbb{N}}}$ of $L^2([0,1])$ which consists of eigenfunctions of the Hilbert–Schmidt integral operator associated with $K$ and whose eigenvalues $\{\lambda _k\}_{k\in{\mathbb{N}}}$ are nonnegative and such that, for $s,t\in [0,1]$ , we have the representation
\begin{equation*} K(s,t)=\sum _{k=1}^\infty \lambda _k e_k(s) e_k(t), \end{equation*}
which converges absolutely and uniformly on $[0,1]\times [0,1]$ . For the covariance function $K_B$ defined by (1.1) of the Brownian bridge $(B_t)_{t\in [0,1]}$ , we obtain, for $k\in{\mathbb{N}}$ and $t\in [0,1]$ ,
\begin{equation*} e_k(t)=\sqrt {2}\sin\!(k\pi t) \quad \text {and}\quad \lambda _k=\frac {1}{k^2\pi ^2}. \end{equation*}
The Karhunen–Loève expansion of the Brownian bridge is then given by
\begin{equation*} B_t=\sum _{k=1}^\infty \sqrt {2}\sin\!(k\pi t) Z_k \quad \text {where}\quad Z_k=\int _0^1 \sqrt {2}\sin\!(k\pi r) B_r {\mathrm {d}} r, \end{equation*}
which after integration by parts yields the expression (1.2). Applying Lemma 2.1, we can compute the covariance functions of the associated fluctuation processes $(F_t^{N,1})_{t\in [0,1]}$ .
Lemma 2.2. The fluctuation process $(F_t^{N,1})_{t\in [0,1]}$ for $N\in{\mathbb{N}}$ is a zero-mean Gaussian process with covariance function $NC_1^N$ where $C_1^N\colon [0,1]\times [0,1]\to{\mathbb{R}}$ is given by
\begin{equation*} C_1^N(s,t)=\min\!(s,t)-st- \sum _{k=1}^N\frac {2\sin\!(k\pi s)\sin\!(k\pi t)}{k^2\pi ^2}. \end{equation*}
Proof. From the definition (1.7), we see that $(F_t^{N,1})_{t\in [0,1]}$ is a zero-mean Gaussian process. Hence, it suffices to determine its covariance function. By Lemma 2.1, we have, for $k,l\in{\mathbb{N}}$ ,
\begin{equation*} {\mathbb {E}}\left [\left (\int _0^1\cos\!(k\pi r){\mathrm {d}} B_r\right )\left (\int _0^1\cos\!(l\pi r){\mathrm {d}} B_r\right )\right ] =\int _0^1\cos\!(k\pi r)\cos\!(l\pi r){\mathrm {d}} r= \begin {cases} \frac {1}{2} & \text {if }k=l\\[5pt] 0 & \text {otherwise} \end {cases} \end{equation*}
and, for $t\in [0,1]$ ,
\begin{equation*} {\mathbb {E}}\left [B_t\int _0^1\cos\!(k\pi r){\mathrm {d}} B_r\right ] =\int _0^t\cos\!(k\pi r){\mathrm {d}} r=\frac {\sin\!(k\pi t)}{k\pi }. \end{equation*}
Therefore, from (1.1) and (1.7), we obtain that, for all $s,t\in [0,1]$ ,
\begin{equation*} {\mathbb {E}}\left [F_s^{N,1}F_t^{N,1}\right ] =N\left (\min\!(s,t)-st -\sum _{k=1}^N\frac {2\sin\!(k\pi s)\sin\!(k\pi t)}{k^2\pi ^2}\right ), \end{equation*}
as claimed.
Consequently, Theorem 1.2 is a statement about the pointwise convergence of the function $NC_1^N$ in the limit $N\to \infty$ .
For our stand-alone derivation of the values of the Riemann zeta function at even positive integers in Section 3, it is further important to note that since, by Mercer's theorem, the representation
(2.1) \begin{equation} K_B(s,t)=\min\!(s,t)-st=\sum _{k=1}^\infty \frac{2\sin\!(k\pi s)\sin\!(k\pi t)}{k^2\pi ^2} \end{equation}
converges uniformly for $s,t\in [0,1]$ , the sequence $\{C_1^N\}_{N\in{\mathbb{N}}}$ converges uniformly on $[0,1]\times [0,1]$ to the zero function. It follows that, for all $n\in{\mathbb{N}}_0$ ,
(2.2) \begin{equation} \lim _{N\to \infty }\int _0^1 C_1^N(t,t) t^n{\mathrm{d}} t = 0. \end{equation}
2.2 The Fourier expansion
Whereas for the Karhunen–Loève expansion the sequence
\begin{equation*} \left \{\int _0^1\cos\!(k\pi r){\mathrm {d}} B_r\right \}_{k\in {\mathbb {N}}} \end{equation*}
of random coefficients is formed by independent Gaussian random variables, it is crucial to observe that the random coefficients appearing in the Fourier expansion are not independent. Integrating by parts, we can rewrite the coefficients defined in (1.4) as
(2.3) \begin{equation} a_0=2\int _0^1B_r{\mathrm{d}} r=-2\int _0^1r{\mathrm{d}} B_r \quad \text{and}\quad b_0=0 \end{equation}
as well as, for $k\in{\mathbb{N}}$ ,
(2.4) \begin{equation} a_k=-\int _0^1\frac{\sin\!(2k\pi r)}{k\pi }{\mathrm{d}} B_r \quad \text{and}\quad b_k=\int _0^1\frac{\cos\!(2k\pi r)}{k\pi }{\mathrm{d}} B_r. \end{equation}
Applying Lemma 2.1, we see that
(2.5) \begin{equation} {\mathbb{E}}\left [a_0^2\right ]=4\left (\int _0^1 r^2{\mathrm{d}} r-\frac{1}{4}\right )=\frac{1}{3} \end{equation}
and, for $k,l\in{\mathbb{N}}$ ,
(2.6) \begin{equation} {\mathbb{E}}\left [a_k a_l\right ]={\mathbb{E}}\left [b_k b_l\right ]= \begin{cases} \dfrac{1}{2k^2\pi ^2} & \text{if }k=l\\[10pt] 0 & \text{otherwise} \end{cases}. \end{equation}
Since the random coefficients are Gaussian random variables with mean zero, by (2.3) and (2.4), this implies that, for $k\in{\mathbb{N}}$ ,
\begin{equation*} a_0\sim \mathcal {N}\left (0,\frac {1}{3}\right ) \quad \text {and}\quad a_k,b_k\sim \mathcal {N}\left (0,\frac {1}{2k^2\pi ^2}\right ). \end{equation*}
For the remaining covariances of these random coefficients, we obtain that, for $k,l\in{\mathbb{N}}$ ,
(2.7) \begin{equation} {\mathbb{E}}\left [a_k b_l\right ]=0,\quad{\mathbb{E}}\left [a_0a_k\right ]=2\int _0^1\frac{\sin\!(2k\pi r)}{k\pi }r{\mathrm{d}} r=-\frac{1}{k^2\pi ^2} \quad \text{and}\quad{\mathbb{E}}\left [a_0b_k\right ]=0. \end{equation}
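These moments are easy to confirm by simulation; the following sanity check (an illustration only, where the grid size, the sample count and the mode $k$ are arbitrary choices) generates independent Brownian bridges, forms $a_0$ and $a_k$ by quadrature of (1.4), and compares the empirical moments with (2.5), (2.6) and (2.7).

# Illustration only: Monte Carlo check of the covariances (2.5)-(2.7).
import numpy as np

rng = np.random.default_rng(4)
M, R, k = 1000, 5000, 2               # grid size, number of bridges, Fourier mode (assumed)
t = np.linspace(0.0, 1.0, M + 1)
dW = rng.normal(0.0, np.sqrt(1.0 / M), size=(R, M))
W = np.concatenate((np.zeros((R, 1)), np.cumsum(dW, axis=1)), axis=1)
B = W - t * W[:, -1:]                 # R independent Brownian bridges on the grid

a0 = 2.0 * B[:, :-1].mean(axis=1)                                     # 2 int_0^1 B_r dr
ak = 2.0 * (np.cos(2 * k * np.pi * t[:-1]) * B[:, :-1]).mean(axis=1)  # 2 int_0^1 cos(2 k pi r) B_r dr
print(a0.var(), 1.0 / 3.0)                              # (2.5)
print(ak.var(), 1.0 / (2 * k ** 2 * np.pi ** 2))        # (2.6)
print((a0 * ak).mean(), -1.0 / (k ** 2 * np.pi ** 2))   # (2.7)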
Using the covariance structure of the random coefficients, we determine the covariance functions of the fluctuation processes $(F_t^{N,2})_{t\in [0,1]}$ defined in (1.8) for the Fourier series expansion.
Lemma 2.3. The fluctuation process $(F_t^{N,2})_{t\in [0,1]}$ for $N\in{\mathbb{N}}$ is a Gaussian process with mean zero and whose covariance function is $2NC_2^N$ where $C_2^N\colon [0,1]\times [0,1]\to{\mathbb{R}}$ is given by
\begin{equation*} C_2^N(s,t)=\min\!(s,t)-st+\frac {s^2-s}{2}+\frac {t^2-t}{2}+\frac {1}{12}- \sum _{k=1}^N\frac {\cos\!(2k\pi (t-s))}{2k^2\pi ^2}. \end{equation*}
Proof. Repeatedly applying Lemma 2.1, we compute that, for $t\in [0,1]$ ,
(2.8) \begin{equation} {\mathbb{E}}\left [B_ta_0\right ]=-2\int _0^t r{\mathrm{d}} r+\int _0^t{\mathrm{d}} r=t-t^2 \end{equation}
and
(2.9) \begin{equation} {\mathbb{E}}\left [B_ta_k\right ]=-\int _0^t\frac{\sin\!(2k\pi r)}{k\pi }{\mathrm{d}} r =\frac{\cos\!(2k\pi t)-1}{2k^2\pi ^2} \quad \text{and}\quad{\mathbb{E}}\left [B_tb_k\right ]=\frac{\sin\!(2k\pi t)}{2k^2\pi ^2}. \end{equation}
From (2.5) and (2.8), it follows that, for $s,t\in [0,1]$ ,
\begin{equation*} {\mathbb {E}}\left [\left (B_s-\frac {1}{2}a_0\right )\left (B_t-\frac {1}{2}a_0\right )\right ] =\min\!(s,t)-st+\frac {s^2-s}{2}+\frac {t^2-t}{2}+\frac {1}{12}, \end{equation*}
whereas (2.7) and (2.9) imply that
\begin{equation*} {\mathbb {E}}\left [\frac {1}{2}a_0\sum _{k=1}^N a_k \cos\!(2k\pi t) -B_s\sum _{k=1}^N a_k \cos\!(2k\pi t)\right ] =-\sum _{k=1}^N\frac {\cos\!(2k\pi s)\cos\!(2k\pi t)}{2k^2\pi ^2} \end{equation*}
and
\begin{equation*} {\mathbb {E}}\left [B_s\sum _{k=1}^N b_k \sin\!(2k\pi t)\right ] =\sum _{k=1}^N\frac {\sin\!(2k\pi s)\sin\!(2k\pi t)}{2k^2\pi ^2}. \end{equation*}
It remains to observe that, by (2.6) and (2.7),
\begin{align*} &{\mathbb{E}}\left [\left (\sum _{k=1}^N\left (a_k\cos\!(2k\pi s)+b_k\sin\!(2k\pi s)\right )\right ) \left (\sum _{k=1}^N\left (a_k\cos\!(2k\pi t)+b_k\sin\!(2k\pi t)\right )\right )\right ]\\[5pt] &\qquad = \sum _{k=1}^N\frac{\cos\!(2k\pi s)\cos\!(2k\pi t)+\sin\!(2k\pi s)\sin\!(2k\pi t)}{2k^2\pi ^2}. \end{align*}
Using the identity
(2.10) \begin{equation} \cos\!(2k\pi (t-s))=\cos\!(2k\pi s)\cos\!(2k\pi t)+\sin\!(2k\pi s)\sin\!(2k\pi t) \end{equation}
and recalling the definition (1.8) of the fluctuation process $(F_t^{N,2})_{t\in [0,1]}$ for the Fourier expansion, we obtain the desired result.
By combining Corollary 1.3, the resolution (1.9) to the Basel problem and the representation (2.1), we can determine the pointwise limit of $2N C_2^N$ as $N\to \infty$ . We leave further considerations until Section 4.2 to demonstrate that the identity (1.9) is really a consequence of our analysis.
2.3 The polynomial expansion
As pointed out in the introduction and as discussed in detail in [Reference Foster, Lyons and Oberhauser12], the polynomial expansion of the Brownian bridge is a type of Karhunen–Loève expansion in the weighted $L^2({\mathbb{P}})$ space with weight function $w$ on $(0,1)$ defined by $w(t)=\frac{1}{t(1-t)}$ .
An alternative derivation of the polynomial expansion is given in [Reference Habermann19] by considering iterated Kolmogorov diffusions. The iterated Kolmogorov diffusion of step $N\in{\mathbb{N}}$ pairs a one-dimensional Brownian motion $(W_t)_{t\in [0,1]}$ with its first $N-1$ iterated time integrals, that is, it is the stochastic process in ${\mathbb{R}}^N$ of the form
\begin{equation*} \left (W_t,\int _0^t W_{s_1}{\mathrm {d}} s_1,\ldots, \int _0^t\int _0^{s_{N-1}}\ldots \int _0^{s_2} W_{s_1}{\mathrm {d}} s_1\ldots {\mathrm {d}} s_{N-1}\right )_{t\in [0,1]}. \end{equation*}
The shifted Legendre polynomial $Q_k$ of degree $k\in{\mathbb{N}}$ on the interval $[0,1]$ is defined in terms of the standard Legendre polynomial $P_k$ of degree $k$ on $[-1,1]$ by, for $t\in [0,1]$ ,
\begin{equation*} Q_k(t)=P_k(2t-1). \end{equation*}
It is then shown that the first component of an iterated Kolmogorov diffusion of step $N\in{\mathbb{N}}$ conditioned to return to $0\in{\mathbb{R}}^N$ in time $1$ has the same law as the stochastic process
\begin{equation*} \left (B_t-\sum _{k=1}^{N-1}(2k+1)\int _0^tQ_k(r){\mathrm {d}} r \int _0^1Q_k(r){\mathrm {d}} B_r\right )_{t\in [0,1]}. \end{equation*}
The polynomial expansion (1.5) is an immediate consequence of the result [[Reference Habermann19], Theorem 1.4] which states that these first components of the conditioned iterated Kolmogorov diffusions converge weakly as $N\to \infty$ to the zero process.
As for the Karhunen–Loève expansion discussed above, the sequence $\{c_k\}_{k\in{\mathbb{N}}}$ of random coefficients defined by (1.6) is again formed by independent Gaussian random variables. To see this, we first recall the following identities for Legendre polynomials [[Reference Arfken and Weber1], (12.23), (12.31), (12.32)] which in terms of the shifted Legendre polynomials read as, for $k\in{\mathbb{N}}$ ,
(2.11) \begin{equation} Q_k = \frac{1}{2(2k+1)}\left (Q_{k+1}^{\prime } - Q_{k-1}^{\prime }\right ),\qquad Q_k(0) = (\!-\!1)^k,\qquad Q_k(1) = 1. \end{equation}
In particular, it follows that, for all $k\in{\mathbb{N}}$ ,
\begin{equation*} \int _0^1 Q_k(r){\mathrm {d}} r=0, \end{equation*}
which, by Lemma 2.1, implies that, for $k,l\in{\mathbb{N}}$ ,
\begin{equation*} {\mathbb {E}}\left [c_k c_l\right ] ={\mathbb {E}}\left [\left (\int _0^1 Q_k(r){\mathrm {d}} B_r\right )\left (\int _0^1 Q_l(r){\mathrm {d}} B_r\right )\right ] =\int _0^1 Q_k(r) Q_l(r){\mathrm {d}} r= \begin {cases} \dfrac {1}{2k+1} & \text {if } k=l\\[6pt] 0 & \text {otherwise} \end {cases}. \end{equation*}
Since the random coefficients are Gaussian with mean zero, this establishes their independence.
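The orthogonality relation used here, $\int _0^1 Q_k(r)Q_l(r){\mathrm{d}} r=\delta _{kl}/(2k+1)$, can be checked directly; the snippet below (an illustration only, with an arbitrary midpoint quadrature grid) evaluates the shifted Legendre polynomials through NumPy's Legendre class.

# Illustration only: midpoint-rule check of the shifted Legendre orthogonality relation.
import numpy as np
from numpy.polynomial.legendre import Legendre

def inner(k, l, M=200001):
    """Midpoint-rule approximation of int_0^1 Q_k(r) Q_l(r) dr, with Q_k(r) = P_k(2r - 1)."""
    r = (np.arange(M) + 0.5) / M
    return np.mean(Legendre.basis(k)(2.0 * r - 1.0) * Legendre.basis(l)(2.0 * r - 1.0))

print(inner(3, 3), 1.0 / 7.0)   # k = l = 3: expect 1/(2k + 1)
print(inner(3, 5))              # k != l: expect 0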
The fluctuation processes $(F_t^{N,3})_{t\in [0,1]}$ for the polynomial expansion defined by
(2.12) \begin{equation} F_t^{N,3}=\sqrt{N}\left (B_t-\sum _{k=1}^{N-1}(2k+1) \int _0^t Q_k(r){\mathrm{d}} r \int _0^1 Q_k(r){\mathrm{d}} B_r\right ) \end{equation}
are studied in [Reference Habermann19]. According to [[Reference Habermann19], Theorem 1.6], they converge in finite dimensional distributions as $N\to \infty$ to the collection $(F_t^3)_{t\in [0,1]}$ of independent Gaussian random variables with mean zero and variance
\begin{equation*} {\mathbb {E}}\left [\left (F_t^3\right )^2\right ]=\frac {1}{\pi }\sqrt {t(1-t)}, \end{equation*}
that is, the variance function of the limit fluctuations is given by a scaled semicircle.
3. Particular values of the Riemann zeta function
We demonstrate how to use the Karhunen–Loève expansion of the Brownian bridge or, more precisely, the series representation arising from Mercer's theorem for the covariance function of the Brownian bridge to determine the values of the Riemann zeta function at even positive integers. The analysis further feeds directly into Section 4.1 where we characterise the limit fluctuations for the Karhunen–Loève expansion.
The crucial ingredient is the observation (2.2) from Section 2, which implies that, for all $n\in{\mathbb{N}}_0$ ,
(3.1) \begin{equation} \sum _{k=1}^\infty \int _0^1 \frac{2\left (\sin\!(k\pi t)\right )^2}{k^2\pi ^2}t^{n}{\mathrm{d}} t =\int _0^1\left (t-t^2\right )t^{n}{\mathrm{d}} t =\frac{1}{(n+2)(n+3)}. \end{equation}
For completeness, we recall that the Riemann zeta function $\zeta \colon{\mathbb{C}}\setminus \{1\}\to{\mathbb{C}}$ analytically continues the sum of the Dirichlet series
\begin{equation*} \zeta (s)=\sum _{k=1}^\infty \frac {1}{k^s}. \end{equation*}
When discussing its values at even positive integers, we encounter the Bernoulli numbers. The Bernoulli numbers $B_n$ , for $n\in{\mathbb{N}}$ , are signed rational numbers defined by an exponential generating function via, for $t\in (\!-\!2\pi,2\pi )$ ,
\begin{equation*} \frac {t}{\operatorname {e}^t-1}=1+\sum _{n=1}^\infty \frac {B_n t^n}{n!}, \end{equation*}
see Borevich and Shafarevich [[Reference Borevich and Shafarevich3], Chapter 5.8]. These numbers play an important role in number theory and analysis. For instance, they feature in the series expansion of the (hyperbolic) tangent and the (hyperbolic) cotangent, and they appear in formulae by Bernoulli and by Faulhaber for the sum of positive integer powers of the first $k$ positive integers. The characterisation of the Bernoulli numbers which is essential to our analysis is that, according to [[Reference Borevich and Shafarevich3], Theorem 5.8.1], they satisfy and are uniquely given by the recurrence relations
(3.2) \begin{equation} 1+\sum _{n=1}^{m}\binom{m+1}{n}B_n=0\quad \text{for }m\in{\mathbb{N}}. \end{equation}
In particular, choosing $m=1$ yields $1+2B_1=0$ , which shows that
\begin{equation*} B_{1}=-\frac {1}{2}. \end{equation*}
Moreover, since the function defined by, for $t\in (\!-\!2\pi,2\pi )$ ,
\begin{equation*} \frac {t}{\operatorname {e}^t-1}+\frac {t}{2}=1+\sum _{n=2}^\infty \frac {B_n t^n}{n!} \end{equation*}
is an even function, we obtain $B_{2n+1}=0$ for all $n\in{\mathbb{N}}$ , see [[Reference Borevich and Shafarevich3], Theorem 5.8.2]. It follows from (3.2) that the Bernoulli numbers $B_{2n}$ indexed by even positive integers are uniquely characterised by the recurrence relations
(3.3) \begin{equation} \sum _{n=1}^m \binom{2m+1}{2n}B_{2n}=\frac{2m-1}{2}\quad \text{for }m\in{\mathbb{N}}. \end{equation}
These recurrence relations are our tool for identifying the Bernoulli numbers when determining the values of the Riemann zeta function at even positive integers.
The starting point for our analysis is (3.1), and we first illustrate how it allows us to compute $\zeta (2)$ . Taking $n=0$ in (3.1), multiplying through by $\pi ^2$ , and using that $\int _0^1\left (\sin\!(k\pi t)\right )^2{\mathrm{d}} t=\frac{1}{2}$ for $k\in{\mathbb{N}}$ , we deduce that
\begin{equation*} \zeta (2)=\sum _{k=1}^\infty \frac {1}{k^2} =\sum _{k=1}^\infty \int _0^1 \frac {2\left (\sin\!(k\pi t)\right )^2}{k^2} {\mathrm {d}} t =\frac {\pi ^2}{6}. \end{equation*}
We observe that this is exactly the identity obtained by applying the general result
\begin{equation*} \int _0^1 K(t,t){\mathrm {d}} t=\sum _{k=1}^\infty \lambda _k \end{equation*}
for a representation arising from Mercer's theorem to the representation for the covariance function $K_B$ of the Brownian bridge.
For working out the values for the remaining even positive integers, we iterate over the degree of the moment in (3.1). While for the remainder of this section it suffices to only consider the even moments, we derive the following recurrence relation and the explicit expression both for the even and for the odd moments as these are needed in Section 4.1. For $k\in{\mathbb{N}}$ and $n\in{\mathbb{N}}_0$ , we set
\begin{equation*} e_{k,n}=\int _0^1 2\left (\sin\!(k\pi t)\right )^2t^{n}{\mathrm {d}} t. \end{equation*}
Lemma 3.1. For all $k\in{\mathbb{N}}$ and all $n\in{\mathbb{N}}$ with $n\geq 2$ , we have
\begin{equation*} e_{k,n}=\frac {1}{n+1}-\frac {n(n-1)}{4k^2\pi ^2}e_{k,n-2} \end{equation*}
subject to the initial conditions
\begin{equation*} e_{k,0}=1\quad \text {and}\quad e_{k,1}=\frac {1}{2}. \end{equation*}
Proof. For $k\in{\mathbb{N}}$ , the values for $e_{k,0}$ and $e_{k,1}$ can be verified directly. For $n\in{\mathbb{N}}$ with $n\geq 2$ , we integrate by parts twice to obtain
\begin{align*} e_{k,n} &=\int _0^1 2\left (\sin\!(k\pi t)\right )^2t^{n}{\mathrm{d}} t\\[5pt] &=1-\int _0^1\left (t-\frac{\sin\!(2k\pi t)}{2k\pi }\right ) nt^{n-1}{\mathrm{d}} t\\[5pt] &=1-\frac{n}{2}+\frac{n(n-1)}{2} \int _0^1\left (t^2- \frac{\left (\sin\!(k\pi t)\right )^2}{k^2\pi ^2}\right )t^{n-2}{\mathrm{d}} t \\[5pt] &=\frac{2-n}{2}+\frac{n(n-1)}{2} \left (\frac{1}{n+1}-\frac{1}{2k^2\pi ^2}e_{k,n-2}\right )\\[5pt] &=\frac{1}{n+1}-\frac{n(n-1)}{4k^2\pi ^2}e_{k,n-2}, \end{align*}
as claimed.
Iteratively applying the recurrence relation, we find the following explicit expression, which despite its involvedness is exactly what we need.
Lemma 3.2. For all $k\in{\mathbb{N}}$ and $m\in{\mathbb{N}}_0$ , we have
\begin{align*} e_{k,2m}&=\frac{1}{2m+1}+ \sum _{n=1}^m\frac{(\!-\!1)^n(2m)!}{(2(m-n)+1)!2^{2n}}\frac{1}{k^{2n}\pi ^{2n}} \quad \text{and}\\[5pt] e_{k,2m+1}&=\frac{1}{2m+2}+ \sum _{n=1}^m\frac{(\!-\!1)^n(2m+1)!}{(2(m-n)+2)!2^{2n}}\frac{1}{k^{2n}\pi ^{2n}}. \end{align*}
Proof. We proceed by induction over $m$ . Since $e_{k,0}=1$ and $e_{k,1}=\frac{1}{2}$ for all $k\in{\mathbb{N}}$ , the expressions are true for $m=0$ with the sums being understood as empty sums in this case. Assuming that the result is true for some fixed $m\in{\mathbb{N}}_0$ , we use Lemma 3.1 to deduce that
\begin{align*} e_{k,2m+2}&=\frac{1}{2m+3}-\frac{(2m+2)(2m+1)}{4k^2\pi ^2}e_{k,2m}\\[5pt] &=\frac{1}{2m+3}-\frac{2m+2}{4k^2\pi ^2}- \sum _{n=1}^m\frac{(\!-\!1)^n(2m+2)!}{(2(m-n)+1)!2^{2n+2}}\frac{1}{k^{2n+2}\pi ^{2n+2}}\\[5pt] &=\frac{1}{2m+3}+ \sum _{n=1}^{m+1}\frac{(\!-\!1)^n(2m+2)!}{(2(m-n)+3)!2^{2n}}\frac{1}{k^{2n}\pi ^{2n}} \end{align*}
and
\begin{align*} e_{k,2m+3}&=\frac{1}{2m+4}-\frac{(2m+3)(2m+2)}{4k^2\pi ^2}e_{k,2m+1}\\[5pt] &=\frac{1}{2m+4}-\frac{2m+3}{4k^2\pi ^2}- \sum _{n=1}^m\frac{(\!-\!1)^n(2m+3)!}{(2(m-n)+2)!2^{2n+2}}\frac{1}{k^{2n+2}\pi ^{2n+2}}\\[5pt] &=\frac{1}{2m+4}+ \sum _{n=1}^{m+1}\frac{(\!-\!1)^n(2m+3)!}{(2(m-n)+4)!2^{2n}}\frac{1}{k^{2n}\pi ^{2n}}, \end{align*}
which settles the induction step.
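The closed form in Lemma 3.2 is easy to spot-check numerically; the following snippet (an illustration only, with an arbitrary midpoint quadrature grid) compares it with a direct evaluation of the defining integral for $e_{k,2m}$.

# Illustration only: compare the closed form of Lemma 3.2 with direct quadrature.
import numpy as np
from math import factorial, pi

def e_quadrature(k, n, M=2000001):
    t = (np.arange(M) + 0.5) / M                      # midpoint rule on [0, 1]
    return np.mean(2.0 * np.sin(k * pi * t) ** 2 * t ** n)

def e_closed_even(k, m):
    s = 1.0 / (2 * m + 1)
    for n in range(1, m + 1):
        s += (-1) ** n * factorial(2 * m) / (factorial(2 * (m - n) + 1) * 2 ** (2 * n) * (k * pi) ** (2 * n))
    return s

print(e_quadrature(3, 4), e_closed_even(3, 2))        # k = 3, 2m = 4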
Focusing on the even moments for the remainder of this section, we see that by (3.1), for all $m\in{\mathbb{N}}_0$ ,
\begin{equation*} \sum _{k=1}^\infty \frac {e_{k,2m}}{k^2\pi ^2} =\frac {1}{(2m+2)(2m+3)}. \end{equation*}
From Lemma 3.2, it follows that
\begin{equation*} \sum _{k=1}^\infty \frac {1}{k^2\pi ^2} \left (\sum _{n=0}^m\frac {(\!-\!1)^n(2m)!}{(2(m-n)+1)!2^{2n}}\frac {1}{k^{2n}\pi ^{2n}}\right ) =\frac {1}{(2m+2)(2m+3)}. \end{equation*}
Since $\sum _{k=1}^\infty k^{-2n}$ converges for all $n\in{\mathbb{N}}$ , we can rearrange sums to obtain
\begin{equation*} \sum _{n=0}^m\frac {(\!-\!1)^n(2m)!}{(2(m-n)+1)!2^{2n}} \left (\sum _{k=1}^\infty \frac {1}{k^{2n+2}\pi ^{2n+2}}\right ) =\frac {1}{(2m+2)(2m+3)}, \end{equation*}
which in terms of the Riemann zeta function and after reindexing the sum rewrites as
\begin{equation*} \sum _{n=1}^{m+1}\frac {(\!-\!1)^{n+1}(2m)!}{(2(m-n)+3)!2^{2n-2}} \frac {\zeta (2n)}{\pi ^{2n}} =\frac {1}{(2m+2)(2m+3)}. \end{equation*}
Multiplying through by $(2m+1)(2m+2)(2m+3)$ shows that, for all $m\in{\mathbb{N}}_0$ ,
\begin{equation*} \sum _{n=1}^{m+1}\binom {2m+3}{2n} \left (\frac {(\!-\!1)^{n+1}2(2n)!}{\left (2\pi \right )^{2n}}\zeta (2n)\right ) =\frac {2m+1}{2}. \end{equation*}
Comparing the last expression with the characterisation (3.3) of the Bernoulli numbers $B_{2n}$ indexed by even positive integers implies that
\begin{equation*} B_{2n}=\frac {(\!-\!1)^{n+1}2(2n)!}{\left (2\pi \right )^{2n}}\zeta (2n), \end{equation*}
that is, we have established that, for all $n\in{\mathbb{N}}$ ,
\begin{equation*} \zeta (2n)=(\!-\!1)^{n+1}\frac {\left (2\pi \right )^{2n}B_{2n}}{2(2n)!}. \end{equation*}
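As a quick cross-check of the identity just derived (this is illustration only and plays no role in the argument), one can compare the right-hand side, built from Bernoulli numbers, with the closed-form values of the zeta function at small even integers; the choice of computer algebra system and of the range of $n$ below is arbitrary.

```python
# Symbolic spot check (not part of the derivation): for small n, compare the
# Bernoulli-number expression with the closed-form values of zeta(2n) known to
# the computer algebra system. sympy uses the convention B_2 = 1/6, B_4 = -1/30.
import sympy as sp

for n in range(1, 8):
    lhs = sp.zeta(2 * n)
    rhs = (-1) ** (n + 1) * (2 * sp.pi) ** (2 * n) * sp.bernoulli(2 * n) / (2 * sp.factorial(2 * n))
    assert sp.simplify(lhs - rhs) == 0
print("zeta(2n) identity verified symbolically for n = 1, ..., 7")
```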
4. Fluctuations for the trigonometric expansions of the Brownian bridge
We first prove Theorem 1.2 and Corollary 1.3 which we use to determine the pointwise limits for the covariance functions of the fluctuation processes for the Karhunen–Loève expansion and of the fluctuation processes for the Fourier series expansion, and then we deduce Theorem 1.1.
4.1 Fluctuations for the Karhunen–Loève expansion
The moment analysis initiated in the previous section allows us to identify the limit of $NC_1^N$ as $N\to \infty$ on the diagonal away from its endpoints; to guarantee that this limit is continuous away from the endpoints, we apply the Arzelà–Ascoli theorem. To this end, we first need to establish the uniform boundedness of two families of functions. Recall that the functions $C_1^N\colon [0,1]\times [0,1]\to{\mathbb{R}}$ are defined in Lemma 2.2.
Lemma 4.1. The family $\{NC_1^N(t,t)\colon N\in{\mathbb{N}}\text{ and }t\in [0,1]\}$ is uniformly bounded.
Proof. Combining the expression for $C_1^N(t,t)$ from Lemma 2.2 and the representation (2.1) for $K_B$ arising from Mercer's theorem, we see that
\begin{equation*} NC_1^N(t,t)=N\sum _{k=N+1}^\infty \frac {2\left (\sin\!(k\pi t)\right )^2}{k^2\pi ^2}. \end{equation*}
In particular, for all $N\in{\mathbb{N}}$ and all $t\in [0,1]$ , we have
\begin{equation*} \left |NC_1^N(t,t)\right |\leq N\sum _{k=N+1}^\infty \frac {2}{k^2\pi ^2}. \end{equation*}
We further observe that
(4.1) \begin{equation} \lim _{M\to \infty }N\sum _{k=N+1}^M\frac{1}{k^2}\leq \lim _{M\to \infty } N\sum _{k=N+1}^M\left (\frac{1}{k-1}-\frac{1}{k}\right ) =\lim _{M\to \infty }\left (1-\frac{N}{M}\right )=1. \end{equation}
It follows that, for all $N\in{\mathbb{N}}$ and all $t\in [0,1]$ ,
\begin{equation*} \left |NC_1^N(t,t)\right |\leq \frac {2}{\pi ^2}, \end{equation*}
which is illustrated in Figure 3 and which establishes the claimed uniform boundedness.
Figure 3. Profiles of $t\mapsto NC_1^N(t,t)$ plotted for $N\in \{5, 25, 100\}$ along with $t\mapsto \frac{2}{\pi ^2}$ .
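The bound and, anticipating Proposition 4.4, the limiting value $\frac{1}{\pi^2}$ can be illustrated numerically by truncating the tail sum at a large cutoff; the cutoff $K$ and the grid of evaluation points below are ad hoc choices for this illustration.

```python
# Numerical illustration (not a proof): for several N, the tail sums
# N * C_1^N(t, t) = N * sum_{k > N} 2 sin(k pi t)^2 / (k^2 pi^2) stay below 2/pi^2
# and approach 1/pi^2 for t away from the endpoints, as in Lemma 4.1,
# Proposition 4.4 and Figure 3. The truncation point K is an ad hoc choice.
import numpy as np

def NC1N_diag(N, t, K=2_000_000):
    k = np.arange(N + 1, K + 1)
    return N * np.sum(2.0 * np.sin(k * np.pi * t) ** 2 / (k ** 2 * np.pi ** 2))

for N in (5, 25, 100):
    vals = [NC1N_diag(N, t) for t in (0.1, 0.3, 0.5, 0.7, 0.9)]
    assert max(vals) <= 2.0 / np.pi ** 2 + 1e-9
    print(N, [round(v, 4) for v in vals], "target 1/pi^2 =", round(1.0 / np.pi ** 2, 4))
```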
Lemma 4.2. Fix $\varepsilon \gt 0$ . The family
\begin{equation*} \left \{N\frac {{\mathrm {d}}}{{\mathrm {d}} t}C_1^N(t,t)\colon N\in {\mathbb {N}}\text { and }t\in [\varepsilon,1-\varepsilon ]\right \} \end{equation*}
is uniformly bounded.
Proof. According to Lemma 2.2, we have, for all $t\in [0,1]$ ,
\begin{equation*} C_1^N(t,t)=t-t^2-\sum _{k=1}^N \frac {2\left (\sin\!(k\pi t)\right )^2}{k^2\pi ^2}, \end{equation*}
which implies that
\begin{equation*} N\frac {{\mathrm {d}}}{{\mathrm {d}} t}C_1^N(t,t)=N\left (1-2t-\sum _{k=1}^N\frac {2\sin\!(2k\pi t)}{k\pi }\right ). \end{equation*}
Figure 4. Profiles of $t\mapsto N(\frac{\pi -t}{2}-\sum \limits _{k=1}^N \frac{\sin\!(kt)}{k})$ plotted for $N\in \{5, 25, 100, 1000\}$ on $[\varepsilon, 2\pi - \varepsilon ]$ with $\varepsilon = 0.1$ .
The desired result then follows by showing that, for $\varepsilon \gt 0$ fixed, the family
\begin{equation*} \left \{N\left (\frac {\pi -t}{2}-\sum _{k=1}^N\frac {\sin\!(kt)}{k}\right )\colon N\in {\mathbb {N}}\text { and }t\in [\varepsilon,2\pi -\varepsilon ]\right \} \end{equation*}
is uniformly bounded, as illustrated in Figure 4. Employing a usual approach, we use the Dirichlet kernel, for $N\in{\mathbb{N}}$ ,
\begin{equation*} \sum _{k=-N}^N\operatorname {e}^{\operatorname {i} kt}=1+\sum _{k=1}^N2\cos\!(kt)= \frac {\sin\!\left (\left (N+\frac {1}{2}\right )t\right )}{\sin\!\left (\frac {t}{2}\right )} \end{equation*}
to write, for $t\in (0,2\pi )$ ,
\begin{equation*} \frac {\pi -t}{2}-\sum _{k=1}^N\frac {\sin\!(kt)}{k} =-\frac {1}{2}\int _\pi ^t\left (1+\sum _{k=1}^N2\cos\!(ks)\right ){\mathrm {d}} s =-\frac {1}{2}\int _\pi ^t\frac {\sin\!\left (\left (N+\frac {1}{2}\right )s\right )} {\sin\!\left (\frac {s}{2}\right )}{\mathrm {d}} s. \end{equation*}
Integration by parts yields
\begin{equation*} -\frac {1}{2}\int _\pi ^t\frac {\sin\!\left (\left (N+\frac {1}{2}\right )s\right )} {\sin\!\left (\frac {s}{2}\right )}{\mathrm {d}} s =\frac {\cos\!\left (\left (N+\frac {1}{2}\right )t\right )}{(2N+1)\sin\!\left (\frac {t}{2}\right )} -\frac {1}{2N+1}\int _\pi ^t\cos\!\left (\left (N+\frac {1}{2}\right )s\right )\frac {{\mathrm {d}}}{{\mathrm {d}} s} \left (\frac {1}{\sin\!\left (\frac {s}{2}\right )}\right ){\mathrm {d}} s. \end{equation*}
By the first mean value theorem for definite integrals, it follows that for $t\in (0,\pi ]$ fixed, there exists $\xi \in [t,\pi ]$ , whereas for $t\in [\pi,2\pi )$ fixed, there exists $\xi \in [\pi,t]$ , such that
\begin{equation*} -\frac {1}{2}\int _\pi ^t\frac {\sin\!\left (\left (N+\frac {1}{2}\right )s\right )} {\sin\!\left (\frac {s}{2}\right )}{\mathrm {d}} s =\frac {\cos\!\left (\left (N+\frac {1}{2}\right )t\right )}{(2N+1)\sin\!\left (\frac {t}{2}\right )} -\frac {\cos\!\left (\left (N+\frac {1}{2}\right )\xi \right )}{2N+1} \left (\frac {1}{\sin\!\left (\frac {t}{2}\right )}-1\right ). \end{equation*}
Since $\left |\cos\!\left (\left (N+\frac{1}{2}\right )\xi \right )\right |$ is bounded above by one independently of $\xi$ and as $\frac{t}{2}\in (0,\pi )$ for $t\in (0,2\pi )$ implies that $0\lt \sin\!\left (\frac{t}{2}\right )\leq 1$ , we conclude that, for all $N\in{\mathbb{N}}$ and for all $t\in (0,2\pi )$ ,
\begin{equation*} N\left |\frac {\pi -t}{2}-\sum _{k=1}^N\frac {\sin\!(kt)}{k}\right | \leq \frac {2N}{(2N+1)\sin\!\left (\frac {t}{2}\right )}, \end{equation*}
which, for $t\in [\varepsilon,2\pi -\varepsilon ]$ , is uniformly bounded by $1/\sin\!\left (\frac{\varepsilon }{2}\right )$ .
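A numerical illustration of this uniform bound, mirroring Figure 4, is straightforward; the grid resolution and the values of $N$ below are ad hoc choices for the illustration.

```python
# Numerical illustration (not a proof) of the uniform bound in the proof of
# Lemma 4.2: on [eps, 2 pi - eps], N * |(pi - t)/2 - sum_{k=1}^N sin(k t)/k|
# stays below 1/sin(eps/2); compare Figure 4.
import numpy as np

eps = 0.1
t = np.linspace(eps, 2 * np.pi - eps, 2001)
bound = 1.0 / np.sin(eps / 2.0)

for N in (5, 25, 100, 1000):
    k = np.arange(1, N + 1)[:, None]
    partial = np.sum(np.sin(k * t) / k, axis=0)
    err = N * np.abs((np.pi - t) / 2.0 - partial)
    print(N, round(err.max(), 3), "<=", round(bound, 3))
    assert err.max() <= bound
```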
Remark 4.3. In the proof of the previous lemma, we have essentially controlled the error in the Fourier series expansion for the fractional part of $t$ which is given by
\begin{equation*} \frac {1}{2}-\sum _{k=1}^\infty \frac {\sin\!(2k\pi t)}{k\pi }, \end{equation*}
see [[Reference Iwaniec20], Exercise on p. 4].
We can now prove the convergence in Theorem 1.2 on the diagonal away from the endpoints, which consists of a moment analysis to identify the moments of the limit function as well as an application of the Arzelà–Ascoli theorem to show that the limit function is continuous away from the endpoints. Alternatively, one could prove Corollary 1.3 directly with a similar approach as in the proof of Lemma 4.2, but integrating the Dirichlet kernel twice, and then deduce Theorem 1.2. However, as the moment analysis was already set up in Section 3 to determine the values of the Riemann zeta function at even positive integers, we demonstrate how to proceed with this approach.
Proposition 4.4. For all $t\in (0,1)$ , we have
\begin{equation*} \lim _{N\to \infty } N\left (t-t^2-\sum _{k=1}^N \frac {2\left (\sin\!(k\pi t)\right )^2}{k^2\pi ^2}\right ) =\frac {1}{\pi ^2}. \end{equation*}
Proof. Recall that, due to Lemma 2.2 and the representation (2.1), we have, for $t\in [0,1]$ ,
(4.2) \begin{equation} C_1^N(t,t)=t-t^2-\sum _{k=1}^N \frac{2\left (\sin\!(k\pi t)\right )^2}{k^2\pi ^2} =\sum _{k=N+1}^\infty \frac{2\left (\sin\!(k\pi t)\right )^2}{k^2\pi ^2}. \end{equation}
By Lemmas 4.1 and 4.2, the Arzelà–Ascoli theorem can be applied locally to any subsequence of $\{N C_1^N\}_{N\in{\mathbb{N}}}$ . Repeatedly using the Arzelà–Ascoli theorem and a diagonal argument, we deduce that there exists a subsequence of $\{N C_1^N\}_{N\in{\mathbb{N}}}$ which converges pointwise to a continuous limit function on the interval $(0,1)$ . To prove that the full sequence converges pointwise and to identify the limit function, we proceed with the moment analysis initiated in Section 3. Applying Lemma 3.2, we see that, for $m\in{\mathbb{N}}_0$ ,
(4.3) \begin{equation} N\sum _{k=N+1}^\infty \frac{e_{k,2m}}{k^2\pi ^2} =N\sum _{k=N+1}^\infty \frac{1}{k^2\pi ^2} \left (\frac{1}{2m+1}+ \sum _{n=1}^m\frac{(\!-\!1)^n(2m)!}{(2(m-n)+1)!2^{2n}}\frac{1}{k^{2n}\pi ^{2n}}\right ), \end{equation}
(4.4) \begin{equation} N\sum _{k=N+1}^\infty \frac{e_{k,2m+1}}{k^2\pi ^2} =N\sum _{k=N+1}^\infty \frac{1}{k^2\pi ^2} \left (\frac{1}{2m+2}+ \sum _{n=1}^m\frac{(\!-\!1)^n(2m+1)!}{(2(m-n)+2)!2^{2n}}\frac{1}{k^{2n}\pi ^{2n}}\right ). \end{equation}
The bound (4.1) together with
\begin{equation*} \lim _{M\to \infty }N\sum _{k=N+1}^M\frac {1}{k^2} \geq \lim _{M\to \infty }N\sum _{k=N+1}^M\left (\frac {1}{k}-\frac {1}{k+1}\right ) =\lim _{M\to \infty }\left (\frac {N}{N+1}-\frac {N}{M+1}\right ) =\frac {N}{N+1} \end{equation*}
implies that
(4.5) \begin{equation} \lim _{N\to \infty }N\sum _{k=N+1}^\infty \frac{1}{k^2}=1. \end{equation}
For $n\in{\mathbb{N}}$ , we further have
\begin{equation*} 0\leq N\sum _{k=N+1}^\infty \frac {1}{k^{2n+2}} \leq \frac {N}{(N+1)^2}\sum _{k=N+1}^\infty \frac {1}{k^{2n}} \leq \frac {1}{N}\sum _{k=1}^\infty \frac {1}{k^{2n}}, \end{equation*}
and since $\sum _{k=1}^\infty k^{-2n}$ converges, this yields
\begin{equation*} \lim _{N\to \infty }N\sum _{k=N+1}^\infty \frac {1}{k^{2n+2}}=0 \quad \text {for }n\in {\mathbb {N}}. \end{equation*}
From (4.2) as well as (4.3) and (4.4), it follows that, for all $n\in{\mathbb{N}}_0$ ,
\begin{equation*} \lim _{N\to \infty }\int _0^1 N C_1^N(t,t) t^n{\mathrm {d}} t= \lim _{N\to \infty }N\sum _{k=N+1}^\infty \frac {e_{k,n}}{k^2\pi ^2}=\frac {1}{(n+1)\pi ^2}. \end{equation*}
This shows that, for all $n\in{\mathbb{N}}_0$ ,
\begin{equation*} \lim _{N\to \infty }\int _0^1 N C_1^N(t,t) t^n{\mathrm {d}} t= \int _0^1\frac {1}{\pi ^2}t^n{\mathrm {d}} t. \end{equation*}
If the sequence $\{N C_1^N\}_{N\in{\mathbb{N}}}$ failed to converge pointwise, we could use the Arzelà–Ascoli theorem and a diagonal argument to construct a second subsequence of $\{N C_1^N\}_{N\in{\mathbb{N}}}$ converging pointwise but to a different continuous limit function on $(0,1)$ compared to the first subsequence. Since this contradicts the convergence of moments, the claimed result follows.
We included the on-diagonal convergence in Theorem 1.2 as a separate statement to demonstrate that Corollary 1.3 is a consequence of Proposition 4.4, which is then used to prove the off-diagonal convergence in Theorem 1.2.
Proof of Corollary 1.3. Using the identity that, for $k\in{\mathbb{N}}$ ,
(4.6) \begin{equation} \cos\!(2k\pi t)=1-2\left (\sin\!(k\pi t)\right )^2, \end{equation}
we obtain
\begin{equation*} \sum _{k=N+1}^\infty \frac {\cos\!(2k\pi t)}{k^2\pi ^2} =\sum _{k=N+1}^\infty \frac {1}{k^2\pi ^2}- \sum _{k=N+1}^\infty \frac {2\left (\sin\!(k\pi t)\right )^2}{k^2\pi ^2}. \end{equation*}
From (4.5) and Proposition 4.4, it follows that, for all $t\in (0,1)$ ,
\begin{equation*} \lim _{N\to \infty } N\sum _{k=N+1}^\infty \frac {\cos\!(2k\pi t)}{k^2\pi ^2} =\frac {1}{\pi ^2}-\frac {1}{\pi ^2}=0. \end{equation*}
Proof of Theorem 1.2. If $s\in \{0,1\}$ or $t\in \{0,1\}$ , the result follows immediately from $\sin\!(k\pi )=0$ for all $k\in{\mathbb{N}}_0$ , and if $s=t$ for $t\in (0,1)$ , the claimed convergence is given by Proposition 4.4. Therefore, it remains to consider the off-diagonal case, and we may assume that $s,t\in (0,1)$ are such that $s\lt t$ . Due to the representation (2.1) and the identity
\begin{equation*} 2\sin\!(k\pi s)\sin\!(k\pi t)=\cos\!(k\pi (t-s))-\cos\!(k\pi (t+s)), \end{equation*}
we obtain
\begin{align*} \min\!(s,t)-st-\sum _{k=1}^N\frac{2\sin\!(k\pi s)\sin\!(k\pi t)}{k^2\pi ^2} &=\sum _{k=N+1}^\infty \frac{2\sin\!(k\pi s)\sin\!(k\pi t)}{k^2\pi ^2}\\[5pt] &=\sum _{k=N+1}^\infty \frac{\cos\!(k\pi (t-s))-\cos\!(k\pi (t+s))}{k^2\pi ^2}. \end{align*}
Since $0\lt t-s\lt t+s\lt 2$ for $s,t\in (0,1)$ with $s\lt t$ , the convergence away from the diagonal is a consequence of Corollary 1.3.
Note that Theorem 1.2 states, for $s,t\in [0,1]$ ,
(4.7) \begin{equation} \lim _{N\to \infty } N C_1^N(s,t)= \begin{cases} \dfrac{1}{\pi ^2} & \text{if } s=t\text{ and } t\in (0,1)\\[10pt] 0 & \text{otherwise} \end{cases}, \end{equation}
which is the key ingredient for obtaining the characterisation of the limit fluctuations for the Karhunen–Loève expansion given in Theorem 1.1. We provide the full proof of Theorem 1.1 below after having determined the limit of $2N C_2^N$ as $N\to \infty$ .
4.2 Fluctuations for the Fourier series expansion
Instead of setting up another moment analysis to study the pointwise limit of $2N C_2^N$ as $N\to \infty$ , we simplify the expression for $C_2^N$ from Lemma 2.3 and deduce the desired pointwise limit from Corollary 1.3.
Using the standard Fourier basis for $L^2([0,1])$ , the polarised Parseval identity and the trigonometric identity (2.10), we can write, for $s,t\in [0,1]$ ,
\begin{align*} \min\!(s,t)&=\int _0^1{\mathbb {1}}_{[0,s]}(r){\mathbb {1}}_{[0,t]}(r){\mathrm{d}} r\\[5pt] &=st+\sum _{k=1}^\infty 2\int _0^s \cos\!(2k\pi r){\mathrm{d}} r\int _0^t \cos\!(2k\pi r){\mathrm{d}} r +\sum _{k=1}^\infty 2\int _0^s \sin\!(2k\pi r){\mathrm{d}} r\int _0^t \sin\!(2k\pi r){\mathrm{d}} r\\[5pt] &=st-\sum _{k=1}^\infty \frac{\cos\!(2k\pi s)}{2k^2\pi ^2} -\sum _{k=1}^\infty \frac{\cos\!(2k\pi t)}{2k^2\pi ^2} +\sum _{k=1}^\infty \frac{\cos\!(2k\pi (t-s))}{2k^2\pi ^2}+\sum _{k=1}^\infty \frac{1}{2k^2\pi ^2}. \end{align*}
Applying the identity (4.6) as well as the representation (2.1) and using the value for $\zeta (2)$ derived in Section 3, we have
\begin{equation*} \sum _{k=1}^\infty \frac {\cos\!(2k\pi t)}{2k^2\pi ^2}= \sum _{k=1}^\infty \frac {1}{2k^2\pi ^2}- \sum _{k=1}^\infty \frac {\left (\sin\!(k\pi t)\right )^2}{k^2\pi ^2} =\frac {1}{12}+\frac {t^2-t}{2}. \end{equation*}
Once again exploiting the value for $\zeta (2)$ , we obtain
\begin{equation*} \min\!(s,t)-st+\frac {s^2-s}{2}+\frac {t^2-t}{2}+\frac {1}{12} =\sum _{k=1}^\infty \frac {\cos\!(2k\pi (t-s))}{2k^2\pi ^2}. \end{equation*}
Using the expression for $C_2^N$ from Lemma 2.3, it follows that, for $s,t\in [0,1]$ ,
\begin{equation*} C_2^N(s,t)=\sum _{k=N+1}^\infty \frac {\cos\!(2k\pi (t-s))}{2k^2\pi ^2}. \end{equation*}
This implies that if $t-s$ is an integer then, as a result of the limit (4.5),
\begin{equation*} \lim _{N\to \infty } 2NC_2^N(s,t)=\frac {1}{\pi ^2}, \end{equation*}
whereas if $t-s$ is not an integer then, by Corollary 1.3,
\begin{equation*} \lim _{N\to \infty } 2NC_2^N(s,t)=0. \end{equation*}
This can be summarised as, for $s,t\in [0,1]$ ,
(4.8) \begin{equation} \lim _{N\to \infty } 2NC_2^N(s,t)= \begin{cases} \dfrac{1}{\pi ^2} & \text{if } s=t\text{ or } s,t\in \{0,1\}\\[10pt] 0 & \text{otherwise} \end{cases}. \end{equation}
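As with the Karhunen–Loève expansion, the dichotomy in (4.8) is easy to observe numerically by truncating the tail sum; the cutoff $K$ and the sample points below are arbitrary choices made for this illustration.

```python
# Numerical illustration (not a proof) of the limit (4.8): the tail sums
# 2N * C_2^N(s, t) = 2N * sum_{k > N} cos(2 k pi (t - s)) / (2 k^2 pi^2)
# approach 1/pi^2 when t - s is an integer and 0 otherwise. K is an ad hoc cutoff.
import numpy as np

def twoN_C2N(N, s, t, K=2_000_000):
    k = np.arange(N + 1, K + 1)
    return 2 * N * np.sum(np.cos(2 * k * np.pi * (t - s)) / (2 * k ** 2 * np.pi ** 2))

for N in (100, 1000):
    on_diag = twoN_C2N(N, 0.3, 0.3)    # s = t
    corner = twoN_C2N(N, 0.0, 1.0)     # s, t in {0, 1}
    off_diag = twoN_C2N(N, 0.2, 0.7)   # t - s not an integer
    print(N, round(on_diag, 4), round(corner, 4), round(off_diag, 4),
          "targets:", round(1 / np.pi ** 2, 4), "and 0")
```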
We finally prove Theorem 1.1 by considering characteristic functions.
Proof of Theorem 1.1. According to Lemma 2.2 as well as Lemma 2.3, the fluctuation processes $(F_t^{N,1})_{t\in [0,1]}$ and $(F_t^{N,2})_{t\in [0,1]}$ are zero-mean Gaussian processes with covariance functions $N C_1^N$ and $2N C_2^N$ , respectively.
By the pointwise convergences (4.7) and (4.8) of the covariance functions in the limit $N\to \infty$ , for any $n\in{\mathbb{N}}$ and any $t_1,\ldots,t_n\in [0,1]$ , the characteristic functions of the Gaussian random vectors $(F_{t_1}^{N,i},\ldots,F_{t_n}^{N,i})$ , for $i\in \{1,2\}$ , converge pointwise as $N\to \infty$ to the characteristic function of the Gaussian random vector $(F_{t_1}^{i},\ldots,F_{t_n}^{i})$ . Therefore, the claimed convergences in finite dimensional distributions are consequences of Lévy's continuity theorem.
5. Approximations of Brownian Lévy area
In this section, we consider approximations of second iterated integrals of Brownian motion, which is a classical problem in the numerical analysis of stochastic differential equations (SDEs), see [Reference Kloeden and Platen22]. Due to their presence within stochastic Taylor expansions, increments and second iterated integrals of multidimensional Brownian motion are required by high order strong methods for general SDEs, such as stochastic Taylor [Reference Kloeden and Platen22] and Runge–Kutta [Reference Rößler33] methods. Currently, the only methodology for exactly generating the increment and second iterated integral, or equivalently the Lévy area, given by Definition 1.4, of a $d$ -dimensional Brownian motion is limited to the case when $d = 2$ . This algorithm for the exact generation of Brownian increments and Lévy area is detailed in [Reference Gaines and Lyons13]. The approach adapts Marsaglia's "rectangle-wedge-tail" algorithm to the joint density function of $ (W_1^{(1)}, W_1^{(2)}, A_{0,1}^{(1,2)} )$ , which is expressible as an integral, but can only be evaluated numerically. Due to the subtle relationships between different entries in $A_{0,1}$ , it has not been extended to $d\gt 2$ .
Obtaining good approximations of Brownian Lévy area in an $L^{2}(\mathbb{P})$ sense is known to be difficult. For example, it was shown in [Reference Dickinson8] that any approximation of Lévy area which is measurable with respect to $N$ Gaussian random variables, obtained from linear functionals of the Brownian path, cannot achieve strong convergence faster than $O(N^{-\frac{1}{2}})$ . In particular, this result extends the classical theorem of Clark and Cameron [Reference Clark and Cameron4] which establishes a best convergence rate of $O(N^{-\frac{1}{2}})$ for approximations of Lévy area based on only the Brownian increments $\{W_{(n+1)h} - W_{nh}\}_{0\leq n\leq N - 1}$ . Therefore, approximations have been developed which fall outside of this paradigm, see [Reference Davie5, Reference Foster11, Reference Mrongowius and Rößler32, Reference Wiktorsson35]. In the analysis of these methodologies, the Lévy area of Brownian motion and its approximation are probabilistically coupled in such a way that $L^{2}(\mathbb{P})$ convergence rates of $O(N^{-1})$ can be established.
We are interested in the approximations of Brownian Lévy area that can be obtained directly from the Fourier series expansion (1.3) and the polynomial expansion (1.5) of the Brownian bridge. For the remainder of the section, the Brownian motion $(W_t)_{t\in [0,1]}$ is assumed to be $d$ -dimensional and $(B_t)_{t\in [0,1]}$ is its associated Brownian bridge.
We first recall the standard Fourier approach to the strong approximation of Brownian Lévy area.
Theorem 5.1 (Approximation of Brownian Lévy area via Fourier coefficients, see [[Reference Kloeden and Platen22], p. 205] and [[Reference Milstein31], p. 99]). For $n\in{\mathbb{N}}$ , we define a random antisymmetric $d\times d$ matrix $\widehat{A}_{n}$ by, for $i,j\in \{1,\ldots,d\}$ ,
\begin{equation*} \widehat {A}_{n}^{ (i,j)} \;:\!=\; \frac {1}{2}\left (a_0^{(i)}W_1^{(j)} - W_1^{(i)}a_0^{(j)}\right ) + \pi \sum _{k=1}^{n-1} k\left (a_{k}^{(i)}b_k^{(j)} - b_k^{(i)}a_{k}^{(j)}\right ), \end{equation*}
where the normal random vectors $\{a_k\}_{k\in{\mathbb{N}}_0}$ and $\{b_k\}_{k\in{\mathbb{N}}}$ are the coefficients from the Brownian bridge expansion (1.3), that is, the coordinates of each random vector are independent and defined according to (1.4). Then, for $i,j\in \{1,\ldots,d\}$ with $i\neq j$ , we have
\begin{align*}{\mathbb{E}}\bigg [\Big (A_{0,1}^{(i,j)} - \widehat{A}_{n}^{ (i,j)}\Big )^2 \bigg ] & = \frac{1}{2\pi ^2}\sum _{k = n}^{\infty }\frac{1}{k^2}. \end{align*}
Remark 5.2. Using the covariance structure given by (2.5), (2.6), (2.7) and the independence of the components of a Brownian bridge, it immediately follows that the coefficients $\{a_k\}_{k\in{\mathbb{N}}_0}$ and $\{b_k\}_{k\in{\mathbb{N}}}$ are jointly normal with $a_0\sim \mathcal{N}\big (0,\frac{1}{3}I_d\big )$ , $a_k, b_k\sim \mathcal{N}\big (0,\frac{1}{2k^2\pi ^2}I_d\big )$ , $\operatorname{cov}(a_0, a_k) = -\frac{1}{k^2\pi ^2}I_d$ and $\operatorname{cov}(a_l, b_k) = 0$ for $k\in{\mathbb{N}}$ and $l\in{\mathbb{N}}_0$ .
In practice, the above approximation may involve generating the $N$ independent random vectors $\{a_k\}_{1\leq k\leq N}$ followed by the coefficient $a_0$ , which will not be independent, but can be expressed as a linear combination of $\{a_k\}_{1\leq k\leq N}$ along with an additional independent normal random vector. Without this additional normal random vector, we obtain the following discretisation of Lévy area.
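Since the error $A_{0,1}^{(i,j)}-\widehat{A}_{n}^{(i,j)}$ only involves the tail coefficients $\{a_k, b_k\}_{k\geq n}$ , whose coordinates are independent $\mathcal{N}\big(0,\frac{1}{2k^2\pi^2}\big)$ random variables by Remark 5.2, the error formula of Theorem 5.1 can be checked by a small Monte Carlo experiment on the tail alone. The sketch below does this; the tail cutoff $K$ , the sample size $M$ and the random seed are ad hoc choices, and the comparison is against the correspondingly truncated sum.

```python
# Monte Carlo sanity check (not a proof) of the error formula in Theorem 5.1.
# The error A - A_hat_n only involves the tail coefficients {a_k, b_k}_{k >= n},
# whose coordinates are independent N(0, 1/(2 k^2 pi^2)) by Remark 5.2, so we
# simulate the tail directly. K (tail cutoff) and M (sample size) are ad hoc.
import numpy as np

rng = np.random.default_rng(0)
n, K, M = 4, 1000, 100_000

tail = np.zeros(M)
for k in range(n, K + 1):
    sd = 1.0 / (np.sqrt(2.0) * k * np.pi)
    a_i, b_i, a_j, b_j = rng.normal(0.0, sd, size=(4, M))
    tail += np.pi * k * (a_i * b_j - b_i * a_j)

mse_mc = np.mean(tail ** 2)
mse_formula = np.sum(1.0 / (2.0 * np.pi ** 2 * np.arange(n, K + 1) ** 2))
print(round(mse_mc, 5), "vs", round(mse_formula, 5))  # both close to (1/(2 pi^2)) * sum_{k >= 4} 1/k^2
```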
Theorem 5.3 (Kloeden–Platen–Wright approximation of Brownian Lévy area, see [Reference Kloeden, Platen and Wright23, Reference Milstein30, Reference Wiktorsson35]). For $n\in{\mathbb{N}}$ , we define a random antisymmetric $d\times d$ matrix $\widetilde{A}_{n}$ by, for $i,j\in \{1,\ldots,d\}$ ,
\begin{equation*} \widetilde {A}_{n}^{ (i,j)} \;:\!=\; \pi \sum _{k=1}^{n-1} k\left (a_{k}^{(i)}\left (b_k^{(j)} - \frac {1}{k\pi }W_1^{(j)}\right ) - \left (b_k^{(i)} - \frac {1}{k\pi }W_1^{(i)}\right )a_{k}^{(j)}\right ), \end{equation*}
where the sequences $\{a_k\}_{k\in{\mathbb{N}}}$ and $\{b_k\}_{k\in{\mathbb{N}}}$ of independent normal random vectors are the same as before. Then, for $i,j\in \{1,\ldots,d\}$ with $i\neq j$ , we have
\begin{align*}{\mathbb{E}}\bigg [\Big (A_{0,1}^{(i,j)} - \widetilde{A}_{n}^{ (i,j)}\Big )^2 \bigg ] & = \frac{3}{2\pi ^2}\sum _{k = n}^{\infty }\frac{1}{k^2}. \end{align*}
Proof. As for Theorem 5.1, the result follows by direct calculation. The constant is larger because, for $i\in \{1,\ldots,d\}$ and $k\in{\mathbb{N}}$ ,
\begin{equation*} {\mathbb {E}}\bigg [\Big (b_k^{(i)} - \frac {1}{k\pi }W_1^{(i)}\Big )^2 \bigg ] = \frac {3}{2k^2\pi ^2} = 3\,{\mathbb {E}}\Big [\big (b_k^{(i)}\big )^2 \Big ], \end{equation*}
which yields the required result.
Finally, we give the approximation of Lévy area corresponding to the polynomial expansion (1.5). Although this series expansion of Brownian Lévy area was first proposed in [Reference Kuznetsov24], a straightforward derivation based on the polynomial expansion (1.5) was only established much later in [Reference Kuznetsov25]. However in [Reference Kuznetsov24, Reference Kuznetsov25], the optimal bound for the mean squared error of the approximation is not identified. We will present a similar derivation to [Reference Kuznetsov25], but with a simple formula for the mean squared error.
Theorem 5.4 (Polynomial approximation of Brownian Lévy area, see [[Reference Kuznetsov24], p. 47] and [Reference Kuznetsov25]). For $n\in{\mathbb{N}}_0$ , we define a random antisymmetric $d\times d$ matrix $\overline{A}_{n}$ by, for $n\in{\mathbb{N}}$ and $i,j\in \{1,\ldots,d\}$ ,
\begin{equation*} \overline {A}_{n}^{ (i,j)} \;:\!=\; \frac {1}{2}\left (W_1^{(i)}c_1^{(j)} - c_1^{(i)}W_1^{(j)}\right ) + \frac {1}{2}\sum _{k=1}^{n-1}\left (c_k^{(i)}c_{k+1}^{(j)} - c_{k+1}^{(i)}c_k^{(j)}\right ), \end{equation*}
where the normal random vectors $\{c_k\}_{k\in{\mathbb{N}}}$ are the coefficients from the polynomial expansion (1.5), that is, the coordinates are independent and defined according to (1.6), and we set
\begin{equation*} \overline {A}_{0}^{ (i,j)} \;:\!=\; 0. \end{equation*}
Then, for $n\in{\mathbb{N}}_0$ and for $i,j\in \{1,\ldots,d\}$ with $i\neq j$ , we have
\begin{equation*} {\mathbb {E}}\bigg [\Big (A_{0,1}^{(i,j)} - \overline {A}_{n}^{ (i,j)}\Big )^2 \bigg ] = \frac {1}{8n+4}. \end{equation*}
Remark 5.5. By applying Lemma 2.1, the orthogonality of shifted Legendre polynomials and the independence of the components of a Brownian bridge, we see that the coefficients $\{c_k\}_{k\in{\mathbb{N}}}$ are independent and distributed as $c_k\sim \mathcal{N}\big (0,\frac{1}{2k + 1}I_d\big )$ for $k\in{\mathbb{N}}$ .
Proof. It follows from the polynomial expansion (1.5) that, for $i,j\in \{1,\ldots,d\}$ with $i\neq j$ ,
(5.1) \begin{equation} \int _0^1 B_t^{(i)}{\mathrm{d}} B_t^{(j)} = \int _0^1 \left (\sum _{k=1}^\infty (2k+1)\, c_k^{(i)} \int _0^t Q_k(r){\mathrm{d}} r\right ){\mathrm{d}}\left (\sum _{l=1}^\infty (2l+1)\, c_l^{(j)} \int _0^t Q_l(r){\mathrm{d}} r \right ), \end{equation}
where the series converge in $L^2(\mathbb{P})$ . To simplify (5.1), we use the identities in (2.11) for shifted Legendre polynomials as well as the orthogonality of shifted Legendre polynomials to obtain that, for $k, l\in{\mathbb{N}}$ ,
\begin{align*} \int _0^1 \left (\int _0^t Q_k(r){\mathrm{d}} r\right ){\mathrm{d}} \left (\int _0^t Q_l(r){\mathrm{d}} r\right ) & = \int _0^1 Q_l(t)\int _0^t Q_k(r){\mathrm{d}} r{\mathrm{d}} t\\[5pt] & = \dfrac{1}{2(2k+1)}{\displaystyle \int _0^1 Q_l(t)\left (Q_{k+1}(t) - Q_{k-1}(t)\right ){\mathrm{d}} t}\\[3pt] & = \begin{cases}\phantom{-}\dfrac{1}{2(2k+1)}{\displaystyle \int _0^1 \left (Q_{k+1}(t)\right )^2{\mathrm{d}} t} & \text{if }l = k + 1\\[9pt] -\dfrac{1}{2(2k+1)}{\displaystyle \int _0^1 \left (Q_{k-1}(t)\right )^2{\mathrm{d}} t} & \text{if }l = k - 1\\[8pt] \phantom{-}0 & \text{otherwise}\end{cases}. \end{align*}
Evaluating the above integrals gives, for $k, l\in{\mathbb{N}}$ ,
(5.2) \begin{align} \int _0^1 \left (\int _0^t Q_k(r){\mathrm{d}} r\right ){\mathrm{d}} \left (\int _0^t Q_l(r){\mathrm{d}} r\right ) & = \begin{cases} \phantom{-}\dfrac{1}{2(2k+1)(2k + 3)} & \text{if }l = k + 1\\[12pt] -\dfrac{1}{2(2k+1)(2k - 1)} & \text{if }l = k - 1\\[12pt] \phantom{-}0 & \text{otherwise}\end{cases}. \end{align}
In particular, for $k,l\in{\mathbb{N}}$ , this implies that
\begin{align*} \int _0^1 \left ((2k+1) c_k^{(i)} \int _0^t Q_k(r){\mathrm{d}} r\right ){\mathrm{d}} \left ((2l+1) c_l^{(j)} \int _0^t Q_l(r){\mathrm{d}} r\right ) & = \begin{cases} \phantom{-}\dfrac{1}{2}c_k^{(i)}c_{k+1}^{(j)} & \text{if }l = k + 1\\[13pt] -\dfrac{1}{2}c_k^{(i)}c_{k-1}^{(j)} & \text{if }l = k - 1\\[13pt] \phantom{-}0 & \text{otherwise}\end{cases}. \end{align*}
Therefore, by the bounded convergence theorem in $L^2(\mathbb{P})$ , we can simplify the expansion (5.1) to
(5.3) \begin{align} \int _0^1 B_t^{(i)}{\mathrm{d}} B_t^{(j)} & = \frac{1}{2}\sum _{k=1}^{\infty }\left (c_k^{(i)}c_{k+1}^{(j)} - c_{k+1}^{(i)}c_k^{(j)}\right ), \end{align}
where, just as before, the series converges in $L^2(\mathbb{P})$ . Since $W_t = t W_1 + B_t$ for $t\in [0,1]$ , we have, for $i,j\in \{1,\ldots,d\}$ with $i\neq j$ ,
\begin{align*} \int _0^1 W_t^{(i)}{\mathrm{d}} W_t^{(j)} & = \int _0^1 \big (t W_1^{(i)}\big ){\mathrm{d}} \big (t W_1^{(j)}\big ) + \int _0^1 B_t^{(i)}{\mathrm{d}} \big (t W_1^{(j)}\big )+ \int _0^1 \big (t W_1^{(i)}\big ){\mathrm{d}} B_t^{(j)} + \int _0^1 B_t^{(i)}{\mathrm{d}} B_t^{(j)}\\[3pt] & = \frac{1}{2}W_1^{(i)}W_1^{(j)} - W_1^{(j)}\int _0^1 t{\mathrm{d}} B_t^{(i)} + W_1^{(i)}\int _0^1 t{\mathrm{d}} B_t^{(j)} + \int _0^1 B_t^{(i)}{\mathrm{d}} B_t^{(j)}, \end{align*}
where the second line follows by integration by parts. As
\begin{equation*}\int _0^1 W_t^{(i)}{\mathrm {d}} W_t^{(j)} = \frac {1}{2}W_1^{(i)}W_1^{(j)} + A_{0,1}^{(i,j)}\end{equation*}
and $Q_1(t) = 2t - 1$ , the above and (5.3) imply that, for $i,j\in \{1,\ldots,d\}$ ,
\begin{align*} A_{0,1}^{(i,j)} = \frac{1}{2}\left (W_1^{(i)}c_1^{(j)} - c_1^{(i)}W_1^{(j)}\right ) + \frac{1}{2}\sum _{k=1}^{\infty }\left (c_k^{(i)}c_{k+1}^{(j)} - c_{k+1}^{(i)}c_k^{(j)}\right ). \end{align*}
By the independence of the normal random vectors in the sequence $\{c_k\}_{k\in{\mathbb{N}}}$ , it is straightforward to compute the mean squared error in approximating $A_{0,1}$ and we obtain, for $n\in{\mathbb{N}}$ and for $i,j\in \{1,\ldots,d\}$ with $i\neq j$ ,
\begin{align*}{\mathbb{E}}\bigg [\Big (A_{0,1}^{(i,j)} - \overline{A}_{n}^{ (i,j)}\Big )^2 \bigg ] & ={\mathbb{E}}\left [\left (\frac{1}{2}\sum _{k=n}^{\infty } \left (c_{k}^{(i)}c_{k+1}^{(j)} - c_{k+1}^{(i)}c_{k}^{(j)}\right )\right )^2 \right ]\\[5pt] & = \frac{1}{4}\sum _{k=n}^{\infty } \frac{2}{(2k+1)(2k+3)}\\[5pt] & = \frac{1}{4}\sum _{k=n}^{\infty } \left (\frac{1}{2k+1} - \frac{1}{2k+3}\right )\\[5pt] & = \frac{1}{8n+4}, \end{align*}
by Remark 5.5. Similarly, as the normal random vector $W_1$ and the ones in the sequence $\{c_k\}_{k\in{\mathbb{N}}}$ are independent, we have
\begin{align*}{\mathbb{E}}\bigg [\Big (A_{0,1}^{(i,j)} - \overline{A}_{0}^{(i,j)}\Big )^2 \bigg ] & ={\mathbb{E}}\left [\left (\frac{1}{2}\left (W_1^{(i)}c_1^{(j)} - c_1^{(i)}W_1^{(j)}\right )\right )^2 \right ] +{\mathbb{E}}\left [\left (\frac{1}{2}\sum _{k=1}^{\infty } \left (c_{k}^{(i)}c_{k+1}^{(j)} - c_{k+1}^{(i)}c_{k}^{(j)}\right )\right )^2 \right ]\\[5pt] & = \frac{1}{6} + \frac{1}{12} = \frac{1}{4}, \end{align*}
which completes the proof.
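The same kind of Monte Carlo sanity check works for the polynomial approximation, since the error $A_{0,1}^{(i,j)}-\overline{A}_{n}^{(i,j)}$ only involves the independent tail coefficients $\{c_k\}_{k\geq n}$ of Remark 5.5; again, the cutoff $K$ , the sample size $M$ and the seed in the sketch below are ad hoc choices.

```python
# Monte Carlo sanity check (not a proof) of the error formula in Theorem 5.4.
# The error A - A_bar_n only involves the tail coefficients {c_k}_{k >= n}, whose
# coordinates are independent N(0, 1/(2k+1)) by Remark 5.5. K and M are ad hoc.
import numpy as np

rng = np.random.default_rng(1)
n, K, M = 4, 2000, 100_000

c_i = rng.normal(0.0, 1.0 / np.sqrt(2 * n + 1), M)
c_j = rng.normal(0.0, 1.0 / np.sqrt(2 * n + 1), M)
tail = np.zeros(M)
for k in range(n, K + 1):
    sd_next = 1.0 / np.sqrt(2 * (k + 1) + 1)
    c_i_next = rng.normal(0.0, sd_next, M)
    c_j_next = rng.normal(0.0, sd_next, M)
    tail += 0.5 * (c_i * c_j_next - c_i_next * c_j)
    c_i, c_j = c_i_next, c_j_next

print(round(np.mean(tail ** 2), 5), "vs 1/(8n+4) =", round(1.0 / (8 * n + 4), 5))
```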
Given that we have now considered three different strong approximations of Brownian Lévy area, it is reasonable to compare their respective rates of convergence. Combining the above theorems, we obtain the following result.
Corollary 5.6 (Asymptotic convergence rates of Lévy area approximations). For $n\in{\mathbb{N}}$ , we set $N = 2n$ so that the number of Gaussian random vectors required to define the Lévy area approximations $\widehat{A}_{n}, \widetilde{A}_{n}$ and $\overline{A}_{2n}$ is $N$ or $N-1$ , respectively. Then, for $i,j\in \{1,\ldots,d\}$ with $i\neq j$ and as $N\to \infty$ , we have
\begin{align*}{\mathbb{E}}\bigg [\Big (A_{0,1}^{(i,j)} - \widehat{A}_{n}^{ (i,j)}\Big )^2 \bigg ] &\sim \frac{1}{\pi ^2}\left (\frac{1}{N}\right ),\\[5pt]{\mathbb{E}}\bigg [\Big (A_{0,1}^{(i,j)} - \widetilde{A}_{n}^{ (i,j)}\Big )^2 \bigg ] &\sim \frac{3}{\pi ^2}\left (\frac{1}{N}\right ),\\[5pt]{\mathbb{E}}\bigg [\Big (A_{0,1}^{(i,j)} - \overline{A}_{2n}^{ (i,j)}\Big )^2 \bigg ] &\sim \frac{1}{8}\left (\frac{1}{N}\right ). \end{align*}
In particular, the polynomial approximation of Brownian Lévy area is more accurate than the Kloeden–Platen–Wright approximation, both of which use only independent Gaussian vectors.
Remark 5.7. It was shown in [Reference Dickinson8] that $\frac{1}{\pi ^2}\hspace{-0.5mm}\left (\frac{1}{N}\right )$ is the optimal asymptotic rate of mean squared convergence for Lévy area approximations that are measurable with respect to $N$ Gaussian random variables, obtained from linear functionals of the Brownian path.
As one would expect, all the Lévy area approximations converge in $L^{2}(\mathbb{P})$ with a rate of $O(N^{-\frac{1}{2}})$ and thus the main difference between their respective accuracies is in the leading error constant. More concretely, for sufficiently large $N$ , the approximation based on the Fourier expansion of the Brownian bridge is roughly 11% more accurate in $L^{2}(\mathbb{P})$ than that of the polynomial approximation. On the other hand, the polynomial approximation is easier to implement in practice as all of the required coefficients are independent. Since it has the largest asymptotic error constant, the Kloeden–Platen–Wright approach gives the least accurate approximation for Brownian Lévy area.
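The arithmetic behind this comparison is elementary and is spelled out in the short computation below; it simply evaluates the three leading constants and the resulting $L^2(\mathbb{P})$ ratio.

```python
# Arithmetic behind the comparison above (a rough check, not new analysis): the
# leading mean-square error constants are 1/pi^2 (Fourier), 3/pi^2
# (Kloeden-Platen-Wright) and 1/8 (polynomial); in the L^2(P) norm the Fourier
# approximation is therefore smaller by a factor sqrt((1/pi^2)/(1/8)) ~ 0.90,
# i.e. roughly 10-11% more accurate than the polynomial approximation.
import math

fourier, kpw, poly = 1 / math.pi ** 2, 3 / math.pi ** 2, 1 / 8
print(round(fourier, 4), round(kpw, 4), round(poly, 4))
print("L2 ratio Fourier/polynomial:", round(math.sqrt(fourier / poly), 3))
```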
We observe that the leading error constants for the Lévy area approximations resulting from the Fourier series and the polynomial expansion coincide with the average $L^2(\mathbb{P})$ error of their respective fluctuation processes, that is, applying Fubini's theorem followed by the limit theorems for the fluctuation processes $(F_t^{N,2})_{t\in [0,1]}$ and $(F_t^{N,3})_{t\in [0,1]}$ defined by (1.8) and (2.12), respectively, gives
\begin{align*} \lim _{N\to \infty }{\mathbb{E}}\left [\int _0^1 \left (F_t^{N,2}\right )^2{\mathrm{d}} t \right ] &=\int _0^1\frac{1}{\pi ^2}{\mathrm{d}} t=\frac{1}{\pi ^2},\\[3pt] \lim _{N\to \infty }{\mathbb{E}}\left [\int _0^1 \left (F_t^{N,3}\right )^2{\mathrm{d}} t \right ] &=\int _0^1 \frac{1}{\pi }\sqrt{t(1-t)}{\mathrm{d}} t=\frac{1}{8}. \end{align*}
To demonstrate how this correspondence arises, we close with some heuristics. For $N\in{\mathbb{N}}$ , we consider an approximation of the Brownian bridge which uses $N$ random vectors, and we denote the corresponding approximation of Brownian motion $(W_t)_{t\in [0,1]}$ by $(S_t^N)_{t\in [0,1]}$ , where the difference between Brownian motion and its associated Brownian bridge is the first term in the approximation. In the Fourier and polynomial approaches, the error in approximating Brownian Lévy area is then essentially given by
\begin{equation*} \int _0^1 W_t^{(i)}{\mathrm {d}} W_t^{(j)}-\int _0^1 S_t^{N,(i)}{\mathrm {d}} S_t^{N,(j)} =\int _0^1 \left (W_t^{(i)}-S_t^{N,(i)}\right ){\mathrm {d}} W_t^{(j)} +\int _0^1 S_t^{N,(i)}{\mathrm {d}} \left (W_t^{(j)}-S_t^{N,(j)}\right ). \end{equation*}
If one can argue that
\begin{equation*} \int _0^1 S_t^{N,(i)}{\mathrm {d}} \left (W_t^{(j)}-S_t^{N,(j)}\right )=O\left (\frac {1}{N}\right ), \end{equation*}
which, for instance, for the polynomial approximation follows directly from (5.2) and Remark 5.5, then in terms of the fluctuation processes $(F_t^N)_{t\in [0,1]}$ defined by
\begin{equation*} F_t^N=\sqrt {N}\left (W_t-S_t^{N}\right ), \end{equation*}
the error of the Lévy area approximation can be expressed as
\begin{equation*} \frac {1}{\sqrt {N}}\int _0^1 F_t^{N,(i)}{\mathrm {d}} W_t^{(j)}+O\left (\frac {1}{N}\right ). \end{equation*}
Thus, by Itô's isometry and Fubini's theorem, the leading error constant in the mean squared error is indeed given by
\begin{equation*} \int _0^1\lim _{N\to \infty }{\mathbb {E}}\left [\left (F_t^{N,(i)}\right )^2 \right ]{\mathrm {d}} t. \end{equation*}
This connection could be interpreted as an asymptotic Itô isometry for Lévy area approximations.
A. Summarising tables
Table A1 Table summarising the Brownian bridge expansions considered in this paper
Table A2 Table summarising the Lévy area expansions considered in this paper
James Foster was supported by the Department of Mathematical Sciences at the University of Bath and the DataSig programme under the EPSRC grant EP/S026347/1.
Arfken, G. B. and Weber, H. J. (2005) Mathematical Methods for Physicists, Sixth ed. Elsevier.
Belomestny, D. and Nagapetyan, T. (2017) Multilevel path simulation for weak approximation schemes with application to Lévy-driven SDEs. Bernoulli 23 927–950.
Borevich, Z. I. and Shafarevich, I. R. (1966) Number Theory. Translated from the Russian by Newcomb Greenleaf, Pure and Applied Mathematics, Vol. 20. New York: Academic Press.
Clark, J. M. C. and Cameron, R. J. (1980) The maximum rate of convergence of discrete approximations for stochastic differential equations. In Stochastic Differential Systems Filtering and Control. Springer.
Davie, A. (2014) KMT theory applied to approximations of SDE. In Stochastic Analysis and Applications, Vol. 100. Springer Proceedings in Mathematics and Statistics. Springer, pp. 185–201.
Debrabant, K., Ghasemifard, A. and Mattsson, N. C. (2019) Weak Antithetic MLMC Estimation of SDEs with the Milstein scheme for Low-Dimensional Wiener Processes. Appl. Math. Lett. 91 22–27.
Debrabant, K. and Rößler, A. (2015) On the acceleration of the multi-level Monte Carlo method. J. Appl. Probab. 52 307–322.
Dickinson, A. S. (2007) Optimal Approximation of the Second Iterated Integral of Brownian Motion. Stoch. Anal. Appl. 25(5) 1109–1128.
Filip, S., Javeed, A. and Trefethen, L. N. (2019) Smooth random functions, random ODEs, and Gaussian processes. SIAM Rev. 61(1) 185–205.
Flint, G. and Lyons, T. (2015) Pathwise approximation of SDEs by coupling piecewise abelian rough paths. Available at https://arxiv.org/abs/1505.01298
Foster, J. (2020) Numerical approximations for stochastic differential equations, PhD thesis. University of Oxford.
Foster, J., Lyons, T. and Oberhauser, H. (2020) An optimal polynomial approximation of Brownian motion. SIAM J. Numer. Anal. 58 1393–1421.
Gaines, J. and Lyons, T. (1994) Random Generation of Stochastic Area Integrals. SIAM J. Appl. Math. 54 1132–1146.
Gaines, J. and Lyons, T. (1997) Variable step size control for stochastic differential equations. SIAM J. Appl. Math. 57 1455–1484.
Giles, M. B. (2008) Improved multilevel Monte Carlo convergence using the Milstein scheme. In Monte Carlo and Quasi-Monte Carlo Methods 2006. Springer, pp. 343–358.
Giles, M. B. (2008) Multilevel Monte Carlo path simulation. Oper. Res. 56 607–617.
Giles, M. B. and Szpruch, L. (2014) Antithetic multilevel Monte Carlo estimation for multi-dimensional SDEs without Lévy area simulation. Ann. Appl. Probab. 24 1585–1620.
Habermann, K. (2021) Asymptotic error in the eigenfunction expansion for the Green's function of a Sturm–Liouville problem. Available at https://arxiv.org/abs/2109.10887
Habermann, K. (2021) A semicircle law and decorrelation phenomena for iterated Kolmogorov loops. J. London Math. Soc. 103 558–586.
Iwaniec, H. (1997) Topics in Classical Automorphic Forms, Graduate Studies in Mathematics, Vol. 17. American Mathematical Society.
Kahane, J.-P. (1985) Some Random Series of Functions, Cambridge Studies in Advanced Mathematics, Vol. 5, Second ed. Cambridge University Press.
Kloeden, P. E. and Platen, E. (1992) Numerical Solution of Stochastic Differential Equations, Applications of Mathematics, Vol. 23. Springer.
Kloeden, P. E., Platen, E. and Wright, I. W. (1992) The approximation of multiple stochastic integrals. Stoch. Anal. Appl. 10 431–441.
Kuznetsov, D. F. (1997) A method of expansion and approximation of repeated stochastic Stratonovich integrals based on multiple Fourier series on full orthonormal systems [In Russian]. Electron. J. "Diff. Eq. Control Process." 1 18–77.
Kuznetsov, D. F. (2019) New Simple Method of Expansion of Iterated Ito Stochastic Integrals of Multiplicity 2 Based on Expansion of the Brownian Motion Using Legendre Polynomials and Trigonometric Functions. Available at https://arxiv.org/abs/1807.00409
Li, X., Wu, D., Mackey, L. and Erdogdu, M. A. (2019) Stochastic Runge-Kutta Accelerates Langevin Monte Carlo and Beyond. Adv. Neural Inform. Process. Syst.
Loève, M. (1978) Probability Theory II, Vol. 46, Fourth ed., Graduate Texts in Mathematics. Springer.
Mengoli, P. (1650) Novae quadraturae arithmeticae, seu de Additione fractionum. Ex Typographia Iacobi Montij.
Mercer, J. (1909) Functions of positive and negative type, and their connection with the theory of integral equations. Philos. Trans. Roy. Soc. London A 209 415–446.
Milstein, G. N. (1988) Numerical Integration of Stochastic Differential Equations [In Russian]. Ural University Press.
Milstein, G. N. (1994) Numerical Integration of Stochastic Differential Equations, Vol. 313. Springer Science & Business Media.
Mrongowius, J. and Rößler, A. (2022) On the approximation and simulation of iterated stochastic integrals and the corresponding Lévy areas in terms of a multidimensional Brownian motion. Stoch. Anal. Appl. 40 397–425.
Rößler, A. (2010) Runge–Kutta Methods for the Strong Approximation of Solutions of Stochastic Differential Equations. SIAM J. Numer. Anal. 48 922–952.
Trefethen, N. (June 2019) Brownian paths and random polynomials, Chebfun Example. Available at https://www.chebfun.org/examples/stats/RandomPolynomials.html
Wiktorsson, M. (2001) Joint characteristic function and simultaneous simulation of iterated Itô integrals for multiple independent Brownian motions. Ann. Appl. Probab. 11 470–487.
Animal Health Research Reviews
A systematic review and network meta-analysis of bacterial and viral vaccines, administered at or near arrival at the feedlot, for control of bovine respiratory disease in beef cattle
Part of: Antimicrobial stewardship in livestock
Published online by Cambridge University Press: 21 February 2020
A. M. O'Connor, D. Hu, S. C. Totton, N. Scott, C. B. Winder, B. Wang, C. Wang, J. Glanville, H. Wood, B. White, R. Larson, C. Waldner and J. M. Sargeant
A. M. O'Connor*
Department of Veterinary Diagnostic and Production Animal Medicine, Iowa State University, Ames, Iowa 50010, USA
D. Hu
Department of Statistics, Iowa State University, Ames, Iowa 50010, USA
S. C. Totton
Guelph, Ontario, N1G 1S1, Canada
N. Scott
C. B. Winder
Department of Population Medicine, University of Guelph, Ontario, N1G 2W1, Canada
B. Wang
Department of Food Science and Technology, University of Nebraska-Lincoln, Lincoln, Nebraska, USA
C. Wang
Department of Veterinary Diagnostic and Production Animal Medicine and Department of Statistics, Iowa State University, Ames, Iowa 50010, USA
J. Glanville
York Health Economics Consortium, University of York, England
H. Wood
B. White
Department of Clinical Sciences, Kansas State University, Manhattan, Kansas, USA
R. Larson
C. Waldner
Department of Large Animal Clinical Sciences, University of Saskatchewan, Saskatoon, Saskatchewan, Canada
J. M. Sargeant
*Author for correspondence: A. M. O'Connor, Department of Veterinary Diagnostic and Production Animal Medicine, Iowa State University, Ames, Iowa 50010, USA. E-mail: [email protected]
Vaccination against putative causal organisms is a frequently used and preferred approach to controlling bovine respiratory disease complex (BRD) because it reduces the need for antibiotic use. Because approximately 90% of feedlots use and 90% of beef cattle receive vaccines in the USA, information about their comparative efficacy would be useful for selecting a vaccine. We conducted a systematic review and network meta-analysis of studies assessing the comparative efficacy of vaccines to control BRD when administered to beef cattle at or near their arrival at the feedlot. We searched MEDLINE, MEDLINE In-Process, MEDLINE Daily Epub Ahead of Print, AGRICOLA, Cambridge Agricultural and Biological Index, Science Citation Index, and Conference Proceedings Citation Index – Science and hand-searched the conference proceedings of the American Association of Bovine Practitioners and World Buiatrics Congress. We found 53 studies that reported BRD morbidity within 45 days of feedlot arrival. The largest connected network of studies, which involved 17 vaccine protocols from 14 studies, was included in the meta-analysis. Consistent with previous reviews, we found little compelling evidence that vaccines used at or near arrival at the feedlot reduce the incidence of BRD diagnosis.
Keywords: Bovine, meta-analysis, respiratory disease, systematic review, vaccination
Animal Health Research Reviews, Volume 20, Issue 2, December 2019, pp. 143–162
DOI: https://doi.org/10.1017/S1466252319000288
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright © The Author(s), 2020
Bovine respiratory disease complex (BRD) is the most economically significant disease of feedlot cattle. Putative causal organisms include Mannheimia haemolytica, Pasteurella multocida, Histophilus somni, Mycoplasma bovis, bovine herpes virus (BHV), bovine viral diarrhea virus (BVDV), bovine respiratory syncytial virus (BRSV), and parainfluenza type 3 virus (PI3) (Larson and Step, Reference Larson and Step2012; Theurer et al., Reference Theurer, Larson and White2015). Ideally, BRD can be prevented rather than treated because prevention can reduce antibiotic use and improve animal welfare. Although disease prevention can take many forms, vaccination generally plays an important role in veterinary science. Vaccines with unconditional licenses from the United States Department of Agriculture (USDA) Center for Veterinary Biologics and other regulatory agencies worldwide are expected to serve as an approach to preventing disease (American Veterinary Medical Association, no date). Also, based on section 151 of Chapter 5 of the Virus-Serum-Toxin Act 21 USC 151–159 (Anon 1913 as amended 1985), the preparation and sale of worthless or harmful products for domestic animals are prohibited. Evidence from the industry also suggests that producers expect BRD vaccines, in particular, to help prevent disease. According to the National Animal Health Monitoring System Feedlot 2011 report, 'Vaccination is a cornerstone of disease prevention activities for all livestock operations, including feedlots. Vaccination with products targeting the pathogens most frequently associated with morbidity in the feedlots may lessen the numbers of animals affected as well as the severity of disease. More than 90% of feedlots vaccinated at least some cattle against some of the key respiratory pathogens such as BVDV and infectious bovine rhinotracheitis virus. More than 90% of all cattle placed were vaccinated for these pathogens' (United States Department of Agriculture, Animal and Plant Health Inspection Service, Veterinary Services, 2013). Although this high level of usage implies some degree of efficacy, the actual efficacy of vaccines when used upon feedlot arrival is unclear. When designing economic disease prevention programs, it is important to understand their overall efficacy and any differences between vaccines.
Ideally, a series of relevant clinical trials providing evidence of comparative efficacy would be available to producers and veterinarians for decision-making. Such studies would be conducted in samples that are representative of the intended population and include contrasts of interest. In feedlot production, such trials might include a placebo group to allow producers to determine whether a vaccine is effective. Additionally, the trials would ideally include a comparison of vaccines to determine whether one is more effective than others. Such trials, however, are difficult to conduct for economic and logistic reasons. When high-quality relevant clinical trials are not available, then systematic reviews and pairwise meta-analyses of randomized controlled trials can yield evidence of comparative efficacy of treatments under field conditions. However, when an insufficient number of trials containing the comparison of interest is available for pairwise meta-analysis, information from a network of evidence can be used to obtain estimates of the comparative efficacy of vaccines using network meta-analysis.
As a consequence of common vaccine usage and the need for producers and veterinarians to have information about the comparative efficacy of vaccines intended to prevent BRD, the objective of this systematic review and network meta-analysis was to determine the comparative efficacy of commercially available vaccines for the prevention of BRD among beef cattle in feedlots.
Protocol and registration
The protocol was developed prior to conducting the review, approved by the funding agency advisory board and staff, and posted online for public access (https://works.bepress.com/annette_oconnor/85/). Modifications to the original protocol are described when relevant.
Eligibility criteria were defined in terms of the population (P), intervention (I), comparators (C), outcome (O), and study design (S), following the PICOS format (Centre for Reviews and Dissemination (CRD) 2008).
The eligible population was weaned cattle raised for meat in intensive systems at risk of BRD. Eligible cattle were housed in feedlot settings (i.e., groups of penned cattle receiving rations rather than grazing on pasture). Calves explicitly described as veal or dairy calves were excluded from consideration. Calves with vaccines administered post-weaning but prior to feedlot arrival were eligible provided that no difference other than vaccination existed between groups. Similarly, calves vaccinated after feedlot arrival were also eligible provided that no difference other than vaccination existed between groups. Although not in our original protocol, such studies were included because they provided more evidence to the network for estimation of the non-active control group baseline risk.
Interventions and comparators
The interventions of interest included commercially available vaccines in any country as recognized by the reporting of a brand name or manufacturer. This definition included commercially produced autogenous vaccines, which had to explicitly include the name of the manufacturer and identify as autogenous (i.e., farm of origin vaccines). Studies that administered the same vaccines at different times (e.g., before versus after feedlot arrival) were excluded, as the treatment effect in these cases reflected vaccination timing. If different products were compared, they had to be administered at the same time to avoid confounding by time. For studies to be included if the vaccination was used prior to feedlot arrival, the vaccination had to be the only difference in the regimen (i.e., animals had to have the same weaning dates, rations, etc. before feedlot arrival). Relevant comparators were non-active controls (e.g., saline or no vaccine) or active interventions (e.g., another vaccine).
The cumulative incidence of the first treatment for BRD in the first 45 days of the feedlot period was the primary outcome of interest, as this is the period of increased BRD incidence. As possible secondary outcomes, we also extracted data for cumulative treatment for BRD over the entire feedlot period and BRD mortality. Extraction of metrics for cumulative incidence within the first 45 days was prioritized as follows:
• 1st priority metric: Estimates of efficacy that adjusted for clustering of feedlot populations, such as adjusted risk ratio, adjusted odds ratio, or arm-level probability of the event obtained by transforming the adjusted odds ratio. If the study was conducted in only one pen, then adjustment for clustering was not considered necessary.
• 2nd priority metric: Estimates of efficacy that did not adjust for clustering of feedlot populations such as unadjusted risk ratio, unadjusted odds ratio, or arm-level probability of the event obtained by transforming the unadjusted odds ratio.
• 3rd priority metric: Raw arm-level data, such as the number of animals with BRD or the number of animals allocated to and analyzed in a group.
If a prioritized metric was reported, lower-priority metrics were not extracted. The rationale for this prioritization is that the meta-analysis should use an adjusted summary effect, as most relevant studies are randomized trials conducted in clustered populations.
Study designs
Studies relevant to the review had to contain at least one comparison group (i.e., active comparator or placebo) and at least one commercial vaccine in a cattle population with naturally occurring BRD. Although the protocol stated that only randomized studies of natural infection were to be included, studies reporting the allocation of animals to a vaccine group were considered trials and therefore eligible for inclusion. If studies did not report allocation, the potential for bias associated with non-random allocation methods was measured in the risk of bias assessment. Cluster or individually based allocation methods were acceptable.
Report characteristics
In addition to the eligibility criteria described in the PICOS elements above, studies had to have a full text available in English. No country restrictions were applied. For the bacterial vaccines, no date restrictions were applied to the search. The results of the viral vaccine search strings were limited to those published from 2014 onwards. The rationale for this restriction was that the number of citations returned by the search strings related to BRD viral vaccines was very large and we considered it very likely to contain many irrelevant studies. Therefore, as proposed in the protocol, we used prior reviews to identify studies published before 2014 and the database searches to capture recent relevant publications (Larson and Step, Reference Larson and Step2012; Theurer et al., Reference Theurer, Larson and White2015).
The electronic databases used for the literature search were MEDLINE, MEDLINE In-Process and MEDLINE Daily, Epub Ahead of Print, Cambridge Agricultural and Biological Index (CABI), Science Citation Index, Conference Proceedings Citation Index Science, and AGRICOLA. MEDLINE sources were searched using the Ovid interface. AGRICOLA was searched via Proquest. The remainder of the databases were searched using the Iowa State University Web of Science interface.
The conference proceedings of the American Association of Bovine Practitioners (1997–2017) and World Buiatrics Congress (1998–2016) were hand-searched for relevant records. Conference reports with fewer than 500 words were not considered eligible, as our experience suggests that their reporting is not sufficiently detailed to conduct risk of bias assessment and extract detailed results. The reference lists of two recent reviews considered relevant to the project were also evaluated for relevant studies (Larson and Step, Reference Larson and Step2012; Theurer et al., Reference Theurer, Larson and White2015).
The database search strategies, which were modified as appropriate for each database, are reported in Supplementary Tables S1 to S5. As the conference proceedings were hand-searched, no search terms are presented for these sources of information. Search results were imported into ®EndNoteX9 (Clarivate Analytics, Philadelphia, PA), and duplicate results were removed. Records were then uploaded to the systematic review management software ®DistillerSR (Evidence Partners, Ontario, Canada) and additionally examined for duplicate records.
Study selection
The first round of screening was based on titles and abstract, and the second round of screening was based on the full text. The questions used for each round of screening are provided in the review protocol, which underwent minor modifications for clarity as described in the supplementary materials section titled TS12.
For the title and abstract round of screening, the two reviewers pre-tested 100 records to ensure clarity of the questions and consistency in understanding the questions. Records were excluded if both reviewers responded 'no' to any screening question. If one reviewer indicated 'yes' or 'unclear', the record was advanced to full-text screening. For the full-text round of screening, the two reviewers pre-tested five records to ensure clarity of the questions and consistency in understanding the questions.
Studies were included in the meta-analysis if sufficient data were reported to enable calculation of the log odds ratio and standard error of the log odds ratio based on the extraction of the prioritized metrics.
DistillerSR was used to extract data into pre-tested forms. Two reviewers independently extracted all data elements of interest from relevant full-text articles. After extraction, any disagreements were resolved by discussion. If the discussion did not lead to resolution of the conflict, a third reviewer was consulted. The unit of concern for data extraction was the individual study, if available. Therefore, if an article described multiple studies at different sites, data were extracted at the site level if this information was reported. If investigators combined multiple sites into a single analysis and only reported pooled information, then the pooled information was extracted. We did not contact authors when data were missing. If studies were linked (i.e., if the same data were reported in multiple publications, such as a conference proceeding and a journal article), we used all the available information but cited the version considered to be the most complete report.
Data were extracted to describe individual study-level characteristics, such as country, year, and outcomes measured. For baseline characteristics and loss to follow-up information, if reported, we extracted arm-level data in preference over data combining the groups. Other arm-level data extracted included the intervention and results (based on prioritization).
Geometry of the network
Network geometry was assessed using a previously described approach (Salanti et al., Reference Salanti, Kavvoura and Ioannidis2008). The probability of an interspecific encounter (PIE) index was calculated using a custom-written R script, and the C-score test was performed using the R package EcoSimR version 0.1.0 (Gotelli and Entsminger, Reference Gotelli and Entsminger2001). The PIE index is a continuous variable that decreases as unevenness increases, with values <0.75 reflecting a limited diversity of interventions. We also assessed co-occurrence using the C-score, which describes, based on checkerboard analysis, whether particular pairwise comparisons of treatments are preferred or avoided (Salanti et al., Reference Salanti, Kavvoura and Ioannidis2008).
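The PIE index is, in essence, Hurlbert's evenness measure applied to the network, treating interventions as 'species' and treatment arms as 'individuals'. A minimal R sketch is shown below; the arm counts are hypothetical, and the custom script used for the review may differ in detail.

```r
# Minimal sketch of the probability of an interspecific encounter (PIE) index
# for a network: arm_counts gives the number of treatment arms per intervention
# (hypothetical values). Higher values indicate a more even, diverse network.
pie_index <- function(arm_counts) {
  N <- sum(arm_counts)
  p <- arm_counts / N
  (N / (N - 1)) * (1 - sum(p^2))
}

pie_index(c(14, 6, 5, 4, 3, 3, 2, 2, 1, 1, 1, 1))
```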
Risk of bias in individual studies
The risk of bias form was based on the Cochrane Risk of Bias (ROB) 2.0 tool for randomized trials. However, this form was modified to ensure relevance to the review topic (Higgins et al., Reference Higgins, Sterne, Savović, Page, Hróbjartsson, Boutron, Reeves and Eldridge2016). The risk of bias assessment was conducted at the outcome level (i.e., BRD morbidity and mortality) for the outcome assessment domain; for the other bias domains, the assessments were considered to apply equally to the BRD morbidity and mortality outcomes within a given study.
Summary measures
The summary measure used to describe pairwise comparative efficacy was the risk ratio. The baseline risk used to convert odds ratios to risk ratios was obtained using the distribution of the reported log odds in the placebo group. The posterior distribution of the mean of the baseline log odds was N(−0.7183, 1.9537). The posterior distribution of the standard deviation of the baseline log odds was N(1.7958, 0.5186).
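Given a baseline (placebo-group) risk, an odds ratio can be converted to a risk ratio with the standard identity RR = OR/(1 − p0 + p0 × OR). The R sketch below illustrates this step using the posterior mean of the baseline log odds reported above; the exact sampling-based procedure used in the review may differ.

```r
# Hedged sketch: converting an odds ratio to a risk ratio given the baseline
# (placebo-group) log odds. The baseline value is the posterior mean reported
# in the text; the odds ratio is hypothetical.
or_to_rr <- function(or, baseline_log_odds) {
  p0 <- plogis(baseline_log_odds)   # baseline risk in the placebo group
  or / (1 - p0 + p0 * or)
}

or_to_rr(or = 0.8, baseline_log_odds = -0.7183)
```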
Planned method of statistical analysis
The proposed method of statistical analysis was a network meta-analysis, which is described in detail elsewhere (Dias et al., Reference Dias, Welton, Caldwell and Ades2010, Reference Dias, Welton, Sutton and Ades2011). Network meta-analysis is defined as 'The simultaneous synthesis of evidence of all pairwise comparisons across more than two interventions' (Coleman et al., Reference Coleman, Phung, Cappelleri, Baker, Kluger, White and Sobieraj2012). Although frequently used as a synonym for network meta-analysis, a mixed treatment comparisons meta-analysis is a subtype of network meta-analysis, defined as 'A statistical approach used to analyze a network of evidence with more than two interventions which are being compared indirectly, and at least one pair of interventions compared both directly and indirectly' (Coleman et al., Reference Coleman, Phung, Cappelleri, Baker, Kluger, White and Sobieraj2012). Direct comparisons of interventions were calculated based on the observed effects in trials or observational studies that compared the pair of interventions of interest, whereas indirect comparisons of interventions were calculated based on the results of trials that did not directly compare the pair of interventions of interest.
We used a random-effects Bayesian model for the network meta-analysis, with the log odds ratio as the continuous outcome. Let $b$ denote the baseline treatment of the whole network (usually placebo), and let $b_i$ denote the trial-specific baseline treatment of trial $i$; it may be the case that $b \ne b_i$. We supposed there were $L$ treatments in the network and assumed a normal distribution for the continuous measure of the treatment effect of arm $k$ relative to the trial-specific baseline arm $b_i$ in trial $i$, $y_{ib_ik}$, with variance $V_{ib_ik}$, such that
$$y_{ib_ik} \sim N\left(\theta_{ib_ik}, V_{ib_ik}\right),$$
$$\theta_{ib_ik} \sim \begin{cases} N\left(d_{b_ik}, \sigma_{b_ik}^2\right), & \text{for } b_i = b,\\[4pt] N\left(d_{bk} - d_{bb_i}, \sigma_{b_ik}^2\right), & \text{for } b_i \ne b, \end{cases}$$
where $d_{bk}$ was the treatment effect of $k$ relative to the network baseline treatment $b$ and $\sigma_{b_ik}^2$ was the between-trial variance. The priors for $d_{bk}$ and $\sigma_{b_ik}$ were
$$d_{bk} \sim N\left(0, 10000\right),$$
and there was a homogeneous variance assumption that $\sigma_{b_ik}^2 = \sigma^2$, where $\sigma \sim U(0, 5)$. Thus, for $L$ treatments, we have $L - 1$ priors for $d_{bl}$, $l \in \{1, \ldots, L\}$, $l \ne b$. For $l = b$, we have $d_{bb} = 0$.
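This model maps directly onto JAGS code. A minimal sketch for two-arm trials only is given below (multi-arm trials additionally require the multivariate likelihood described in the next subsection); the data names (y, se, t1, t2, ns, nt) are illustrative and are not taken from the review's own scripts.

```r
# Minimal JAGS sketch of the contrast-based random-effects model for two-arm
# trials: y[i] is the observed log odds ratio of arm t2[i] vs arm t1[i] in
# trial i, se[i] its standard error, ns the number of trials, nt of treatments.
nma_model <- "
model {
  for (i in 1:ns) {
    y[i] ~ dnorm(theta[i], 1 / pow(se[i], 2))                 # likelihood of observed contrast
    theta[i] ~ dnorm(d[t2[i]] - d[t1[i]], 1 / pow(sigma, 2))  # random trial-specific effect
  }
  d[1] <- 0                                    # network baseline treatment
  for (k in 2:nt) { d[k] ~ dnorm(0, 0.0001) }  # vague prior, variance 10000
  sigma ~ dunif(0, 5)                          # homogeneous between-trial SD
}
"
```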
Handling of multi-arm trials
For multi-arm trials, we assumed that the covariance between $\theta_{ib_ik}$ and $\theta_{ib_ik'}$ was $\sigma^2/2$ (Higgins and Whitehead, Reference Higgins and Whitehead1996; Lu and Ades, Reference Lu and Ades2004). The likelihood for a trial $i$ with $a_i$ arms was then defined as multivariate normal:
$$\begin{pmatrix} y_{i,1,2}\\ y_{i,1,3}\\ \vdots\\ y_{i,1,a_i} \end{pmatrix} \sim N_{a_i-1}\left(\begin{pmatrix} \theta_{i,1,2}\\ \theta_{i,1,3}\\ \vdots\\ \theta_{i,1,a_i} \end{pmatrix}, \begin{bmatrix} V_{i,1,2} & se_{i1}^2 & \cdots & se_{i1}^2\\ se_{i1}^2 & V_{i,1,3} & \cdots & se_{i1}^2\\ \vdots & \vdots & \ddots & \vdots\\ se_{i1}^2 & se_{i1}^2 & \cdots & V_{i,1,a_i} \end{bmatrix}\right),$$
where the diagonal elements in the variance-covariance matrix represent the variances of the treatment differences and the off-diagonal elements represent the observed variance in the control arm in trial i, denoted by $se_{i1}^2 $.
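The structure of this variance-covariance matrix is straightforward to construct. The R sketch below illustrates it for a hypothetical four-arm trial; the numbers are invented for illustration.

```r
# Illustrative variance-covariance matrix for a four-arm trial: diagonal
# entries are the variances of the contrasts of arms 2-4 vs the control arm,
# and all off-diagonal entries equal the observed control-arm variance.
V_contrast <- c(0.25, 0.30, 0.28)   # Var(y[i,1,k]) for arms k = 2, 3, 4 (hypothetical)
se1_sq     <- 0.12                  # observed variance in the control arm (hypothetical)
Sigma <- matrix(se1_sq, nrow = 3, ncol = 3)
diag(Sigma) <- V_contrast
Sigma
```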
For all studies, the results were converted to the log odds ratio for analysis. If the authors reported a risk ratio, this was converted back to the log odds ratio using the reported risk of disease in the placebo group. When the authors reported the probability of BRD in each treatment arm based on a model, that probability was converted back to the log odds ratio using a previously described method (Hu et al., Reference Hu, Wang and O'Connor2019).
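The R sketch below illustrates the arithmetic of recovering a log odds ratio from a reported risk ratio and the placebo-group risk; it follows the standard identity between risks and odds and is not necessarily the exact procedure of Hu et al. (Reference Hu, Wang and O'Connor2019). The input values are hypothetical.

```r
# Hedged sketch: back-calculating the log odds ratio from a reported risk
# ratio and the reported risk of BRD in the placebo group.
rr_to_log_or <- function(rr, p_placebo) {
  p_trt <- rr * p_placebo                       # implied risk in the treatment arm
  odds_trt     <- p_trt / (1 - p_trt)
  odds_placebo <- p_placebo / (1 - p_placebo)
  log(odds_trt / odds_placebo)
}

rr_to_log_or(rr = 0.85, p_placebo = 0.30)
```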
Vaccines for the same bacterial or viral target produced by different companies were considered to be different interventions, as the specific strains used in the manufacture of the vaccine may differ or the products may contain different antigens. However, if the same vaccine or intervention arm was administered in a different number of doses, then this was considered to be the same intervention. As much of the information about vaccines is proprietary and the production process can be country-specific, we assumed vaccines from the same company were different if they came from different countries unless the authors specifically indicated that a vaccine was equivalent to another company's product.
Selection of prior distributions in Bayesian analysis
Prior distributions were originally based on a previously reported approach (Dias et al., Reference Dias, Welton, Sutton and Ades2011). In previous models, we had assessed both σ ~ U(0, 2) and σ ~ U(0, 5), and the results suggested that σ ~ U(0, 5) was preferred. We repeated this assessment and retained the same prior used previously (O'Connor et al., Reference O'Connor, Coetzee, da Silva and Wang2013, Reference O'Connor, Yuan, Cullen, Coetzee, da Silva and Wang2016).
Implementation and output
All posterior samples were generated using Markov chain Monte Carlo (MCMC) simulation implemented in Just Another Gibbs Sampler (JAGS) software (version 3.4.0) via the R rjags package (Plummer, Reference Plummer2015). All statistical analyses were performed using R software (version 3.2.1) (R Core Team, 2015). Three chains were simulated, and convergence was assessed using Gelman-Rubin diagnostics. We discarded 5000 'burn-in' iterations and based the inferences on a further 10,000 iterations. The model output included all possible pairwise comparisons expressed as log odds ratios (for inconsistency assessment), risk ratios (for comparative efficacy reporting), and treatment failure rankings (for comparative efficacy reporting).
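A hedged sketch of this workflow in rjags is shown below. The model file name ('nma_model.txt') and the data list ('jags_data') are placeholders for illustration; the review's actual scripts may be organized differently.

```r
library(rjags)   # also loads coda, used for the Gelman-Rubin diagnostic

# Sketch of the MCMC workflow described in the text, assuming a JAGS model
# file and a prepared data list (both hypothetical names).
model <- jags.model("nma_model.txt", data = jags_data, n.chains = 3)
update(model, n.iter = 5000)                                   # burn-in
samples <- coda.samples(model,
                        variable.names = c("d", "sigma"),
                        n.iter = 10000)                        # inference iterations
gelman.diag(samples)                                           # convergence check
summary(samples)                                               # posterior summaries
```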
Assessment of model fit
The fit of the model was assessed based on the log odds ratio by examining the residual deviance between the predicted values from the MTC model and the observed value for each study (Dias et al., Reference Dias, Welton, Caldwell and Ades2010, Reference Dias, Welton, Sutton and Ades2011).
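For a normal likelihood on the log odds ratio scale, each data point's contribution to the residual deviance is the squared standardized residual, and a well-fitting model has a total residual deviance close to the number of data points. The R sketch below illustrates this check with invented numbers.

```r
# Illustrative residual deviance check for a normal likelihood: each
# contribution is (observed - fitted)^2 / variance (hypothetical values).
y     <- c(-0.35, 0.10, -0.60)   # observed log odds ratios
theta <- c(-0.30, 0.05, -0.45)   # posterior mean fitted values
V     <- c(0.20, 0.15, 0.25)     # variances of the observed log odds ratios
dev_i <- (y - theta)^2 / V
sum(dev_i)                       # compare with the number of data points (3)
```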
Assessment of inconsistency
We used the back-calculation method to assess the consistency assumption (Dias et al., Reference Dias, Welton, Caldwell and Ades2010). We did not rely only on P-values during inconsistency evaluation. We also compared estimates from direct and indirect models and considered the standard deviation of each estimate. Comparisons for which the direct and indirect estimates had different signs were further evaluated and discussed.
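One common formulation of the back-calculation approach recovers the indirect estimate by subtracting the direct evidence from the network estimate on the precision scale and then contrasts the direct and indirect estimates. The R sketch below illustrates this arithmetic with hypothetical numbers; it is a simplified sketch of the approach of Dias et al. (Reference Dias, Welton, Caldwell and Ades2010), not their exact implementation.

```r
# Hedged sketch of back-calculating the indirect estimate and the
# inconsistency term w for one comparison (all values hypothetical).
back_calculate <- function(d_mtc, sd_mtc, d_dir, sd_dir) {
  prec_ind <- 1 / sd_mtc^2 - 1 / sd_dir^2                  # precision of indirect evidence
  d_ind    <- (d_mtc / sd_mtc^2 - d_dir / sd_dir^2) / prec_ind
  w        <- d_dir - d_ind                                # inconsistency estimate
  sd_w     <- sqrt(sd_dir^2 + 1 / prec_ind)                # SD of the inconsistency estimate
  list(d_ind = d_ind, w = w, sd_w = sd_w)
}

back_calculate(d_mtc = -0.20, sd_mtc = 0.30, d_dir = -0.10, sd_dir = 0.40)
```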
Risk-of-bias assessment: individual studies and overall network
In systematic reviews, risk of bias assessment informs readers about the potential for bias in individual studies and facilitates their interpretation of estimates of efficacy. Intervention studies are generally considered to involve five risk of bias domains: bias related to the allocation process, deviation from the intended interventions, missing outcome data, outcome measurement, and selection of the reported result (Higgins et al., Reference Higgins, Sterne, Savović, Page, Hróbjartsson, Boutron, Reeves and Eldridge2016). For this review, risk of bias assessment of individual studies was conducted based on the bias domains proposed by the Cochrane ROB 2.0 algorithm (Higgins et al., Reference Higgins, Sterne, Savović, Page, Hróbjartsson, Boutron, Reeves and Eldridge2016).
In assessing risk of bias due to the allocation process, the Cochrane ROB tool for individually allocated studies places substantial value on allocation concealment; however, it is unclear if this emphasis is applicable to production settings for beef, as the value of the individual animal is likely to be equivalent and/or is unlikely to be known at the time of allocation to treatment group. Therefore, rather than make an overall assessment of bias arising from the approach to allocation, we present answers to three signaling questions (SQ):
• SQ1.1 – Was the allocation sequence random?
• SQ1.2 – Was the allocation sequence concealed until participants were recruited and assigned to interventions?
• SQ1.3 – Were there baseline imbalances that suggest a problem with the randomization process?
For cluster-randomized trials, the first three signaling questions for bias arising from the allocation approach were also assessed and presented with the individual-level questions. Two additional questions related to bias arising from individual participant characteristics in cluster-randomized studies were assessed and presented separately:
• Were all individual participant characteristics likely to be evenly distributed across treatment groups?
• Were there baseline imbalances that suggest differential identification or recruitment of individual participants between arms?
We did not assess the risk of bias of studies that did not have reportable data. If studies within an article had different characteristics that impacted bias, such as sample size, risk of bias was assessed for each study separately. Otherwise, studies within an article are presented as a single set of results.
To describe the overall quality of the evidence network, a modification of the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) approach was employed (Salanti et al., Reference Salanti, Del Giovane, Chaimani, Caldwell and Higgins2014; Papakonstantinou et al., Reference Papakonstantinou, Nikolakopoulou, Rucker, Chaimani, Schwarzer, Egger and Salanti2018) using Confidence in Network Meta-Analysis (CINeMA) online software (http://cinema.ispm.ch) (CINeMA: Confidence in Network Meta-Analysis [Software] 2017). CINeMA uses a frequentist approach to calculate treatment effects based on the R metafor package (Viechtbauer, Reference Viechtbauer2010), on which the contribution matrix for the risk of bias is based. The proposed system evaluates within-study bias, across-studies bias, indirectness, imprecision, heterogeneity, and incoherence. For within-study bias, we evaluated and report the contribution of studies based on randomization and blinding rather than an overall assessment of bias. The rationale for assessing and presenting these two factors is that evidence in veterinary science indicates that failure to include these design elements is associated with larger estimates of effect, whereas the Cochrane ROB tool considers design elements such as allocation concealment, for which there is no evidence of bias in livestock production to date. For randomization, we evaluated risk of bias based on the following questions.
• Low risk of bias: study reports random allocation with evidence provided; based on a 'yes' or 'probably yes' response to Q1.1 – Was the allocation sequence random?
• Unclear risk of bias: study reports random allocation but provides no evidence; based on a 'no information random' response to Q1.1 – Was the allocation sequence random?
• High risk of bias: study reports non-random allocation or no information about allocation; based on a 'no', 'probably no', or 'no information at all' response to Q1.1 – Was the allocation sequence random?
For blinding, we considered blinding of caregivers and outcomes assessors to be associated with a low risk of bias. If authors mentioned only one category, this was considered unclear, and if blinding of neither caregivers nor outcome assessors was reported, we assigned this as a high risk of bias.
• Low risk of bias: study reports blinding of caregivers and outcome assessors; based on a 'no' or 'probably no' response to Q2.2 and Q4.1, which relate to whether these people were aware of the intervention assignment of animals.
• Unclear risk of bias: study reports blinding of caregivers or outcome assessors but not both; based on a 'no' or 'probably no' response to Q2.2 or Q4.1.
• High risk of bias: study that did not fall into the above two categories.
In CINeMA, indirectness refers to how closely the samples resemble the populations in which the intervention will be used. Given the narrow eligibility criteria for this review, indirectness was not considered an issue, and therefore the concerns were low for all studies. As the ability to assess across-studies bias in network meta-analysis is poorly developed, and no studies had a sufficient number of pairs such that pairwise assessment would be informative, this was not assessed. To assess imprecision (which indicates whether the boundaries of the confidence intervals for the treatment effects could allow different conclusions), we considered 0.8 to be a clinically relevant odds ratio. We used the same odds ratio of 0.8 to assess heterogeneity (NB: log(0.8) = −0.2231436, corresponding to bounds of ±0.2231436 on the log odds ratio scale). We did not present the inconsistency analysis from CINeMA because this was already conducted based on the Bayesian analysis.
Additional analyses
No additional analyses were conducted.
The flow of studies from retrieval to inclusion in the meta-analysis of the BRD morbidity outcome within 45 days of arrival at the feedlot is presented in Fig. 1 (Moher et al., Reference Moher, Liberati, Tetzlaff and Altman2009). The other outcomes were not reported, as the data available were too sparse. The flow chart shows numbers of unique studies. Some articles had multiple studies, some studies had multiple comparisons, and some studies report the outcome at multiple time periods. Fifty-three studies report at least one BRD morbidity outcome measured in the first 45 days in the feedlot period, which came from 44 articles (Griffin et al., Reference Griffin, Amstutz, Morter, Hendrix and Crandall1979; Wohler and Baugh, Reference Wohler and Baugh1980; Amstutz et al., Reference Amstutz, Horstman and Morter1981; Bennett, Reference Bennett1982; Morter et al., Reference Morter, Amstutz and Crandell1982; Confer et al., Reference Confer, Wright, Cummins, Panciera and Corstvet1983; Morter and Amstutz, Reference Morter and Amstutz1983; Martin et al., Reference Martin, Acres, Janzen, Willson and Allen1984; Morter et al., Reference Morter, Amstutz and Roussel1984; Purdy et al., Reference Purdy, Livingston, Frank, Cummins, Cole and Loan1986; Smith et al., Reference Smith, Hicks, Gill and Ball1986; Thomas et al., Reference Thomas, Stott, Howard and Gourlay1986; Bateman, Reference Bateman1988; Jim et al., Reference Jim, Guichon and Shaw1988; Ribble et al., Reference Ribble, Jim and Janzen1988; McLean et al., Reference McLean, Smith, Gill and Randolph1990; Thorlakson et al., Reference Thorlakson, Martin and Peters1990; Van Donkersgoed et al., Reference Van Donkersgoed, Janzen, Townsend and Durham1990; Bechtol et al., Reference Bechtol, Ballinger and Sharp1991; Mills, Reference Mills1991; Harland et al., Reference Harland, Potter, van Drunen-Littel-van den Hurk, Van Donkersgoed, Parker, Zamb and Janzen1992; Koevering et al., Reference Koevering, Gill, Owens, Smith and Ball1992; van Donkersgoed et al., Reference van Donkersgoed, Schumann, Harland, Potter and Janzen1993; Malcolm-Callis et al., Reference Malcolm-Callis, Galyean and Duff1994; Wright et al., Reference Wright, Mowat and Mallard1994; Gummow and Mapham, Reference Gummow and Mapham2000; O'Connor et al., Reference O'Connor, Martin, Harland, Shewen and Menzies2001; Frank et al., Reference Frank, Briggs, Duff, Loan and Purdy2002; Frank et al., Reference Frank, Briggs, Duff and Hurd2003; MacGregor et al., Reference MacGregor, Smith, Perino and Hunsaker2003; Schunicht et al., Reference Schunicht, Booker, Jim, Guichon, Wildman and Hill2003; Kirkpatrick et al., Reference Kirkpatrick, Step, Payton, Richards, McTague, Saliki, Confer, Cook, Ingram and Wright2008; Perrett et al., Reference Perrett, Wildman, Abutarbush, Pittman, Jones, Pollock, Schunicht, Guichon, Jim and Booker2008; Stilwell et al., Reference Stilwell, Matos, Carolino and Lima2008; Wildman et al., Reference Wildman, Perrett, Abutarbush, Guichon, Pittman, Booker, Schunicht, Fenton and Jim2008; Rogers et al., Reference Rogers, Portillo, Smialek, Miles, Lehenbauer and Smyth2009; Wildman et al., Reference Wildman, Jim, Perrett, Schunicht, Hannon, Fenton, Abutarbush and Booker2009; Grooms et al., Reference Grooms, Brock, Bolin, Grotelueschen and Cortese2014; McKaig and Taylor, Reference McKaig and Taylor2015; Richeson et al., Reference Richeson, Beck, Poe, Gadberry, Hess and Hubbell2015; Rogers et al., Reference Rogers, Miles, Hughes, Renter, Woodruff and Zuidhof2015; Bailey et al., Reference 
Bailey, Jaeger, Schmidt, Waggoner, Pacheco, Thomson and Olson2016; Rogers et al., Reference Rogers, Miles, Renter, Sears and Woodruff2016; White et al., Reference White, Theurer, Goehl and Thompson2017).
Fig. 1. Flowchart describing the flow of literature through the review.
The 53 studies were conducted in five countries, most commonly in the USA (N = 30/53; 56.6%) and Canada (N = 20/53; 37.7%). The year the study was conducted was reported for 37 out of 53 studies (69.8%), with study years ranging from 1978 to 2016. The studies were conducted on commercial feedlots (19/53; 35.8%), university/research feedlots (15/53; 28.3%), feedlots of unspecified type (14/53; 26.4%), settings not reported (2/53; 3.8%), a custom feedlot (1/53; 1.9%), a backgrounding yard (1/53; 1.9%), and a beef rearing farm (1/53; 1.9%).
Presentation of network structure
Although 53 studies were potentially relevant to the review, their approach to conducting BRD vaccine trials had major implications for the ability to assess the comparative efficacy of vaccines. A commonly used approach was to employ a 'control' arm that itself received vaccines as part of baseline processing. For example, some authors might refer to a four-arm study as follows:
• Arm 1 – placebo
• Arm 2 – Mannheimia haemolytica vaccine
• Arm 3 – Histophilus somni vaccine
• Arm 4 – Histophilus somni and Mannheimia haemolytica vaccines
However, other authors might report that all animals received a four-way modified live viral (MLV) vaccine containing antigens for infectious bovine rhinotracheitis, BVDV, PI3, and BRSV upon arrival at the feedlot. As a consequence, the actual treatment arms would be:
• Arm 1 – four-way MLV vaccine
• Arm 2 – four-way MLV vaccine + Mannheimia haemolytica vaccine
• Arm 3 – four-way MLV vaccine + Histophilus somni vaccine
• Arm 4 – four-way MLV vaccine + Histophilus somni and Mannheimia haemolytica vaccines
Similarly, another study might be described as a controlled two-arm study of the following vaccines:
• Arm 1 – four-way killed vaccine

• Arm 2 – four-way MLV vaccine
However, as all animals in the feedlot, and therefore in the trial, were reported to have also received a Mannheimia haemolytica vaccine during processing, the actual treatment arms would be:
• Arm 1 – Mannheimia haemolytica vaccine + four-way killed vaccine
• Arm 2 – Mannheimia haemolytica vaccine + four-way MLV vaccine
This approach to study design means that each combination of vaccines must be treated as a novel treatment. Further fragmenting the data is the fact that it is not clear whether vaccines from different companies that target the same viral or bacterial organism are the same, and therefore whether estimates of efficacy can or should be pooled. This contrasts with the situation encountered with antibiotics. For example, estimates of efficacy of the antibiotic oxytetracycline produced by many manufacturers were pooled to obtain a summary estimate, which seems reasonable because manufacturers must document equivalence with a registered product prior to approval by the US Food and Drug Administration. However, there is no documented evidence that such pooling is appropriate for BRD vaccines. For example, the USDA Center for Veterinary Biologics, which licenses vaccines in the USA, requires companies to document the efficacy of a vaccine rather than its equivalence to a previously registered product. This is presumably because different antigens, adjuvants, or challenge models could impact efficacy. The exception to this is when exactly the same product is re-marketed (i.e., re-bottled/re-labeled) and sold by a different company. However, it is often unclear to end-users when products are relabeled, so there is rarely the opportunity to combine vaccines that are known to be the same.
A consequence of these characteristics of the body of evidence for BRD vaccines is that the evidence network contains mainly novel vaccine protocol arms. As shown in Table S6, there were almost as many vaccine protocols as study arms, and almost 90% of vaccine protocols were unique. Also, because many trials used control arms with different products, it was not possible to link many vaccination protocols and compare efficacy. This point is illustrated in Fig. 2, which shows two important features. First, 17 groups of vaccine studies were not linked, and second, many of these groups evaluated unique combinations that were not replicated by other studies. The largest network was the star-shaped network that was tethered to a true non-active control (i.e., no other vaccine). Meta-analysis requires replication; otherwise, its major advantage (i.e., calculation of a pooled effect size using data from multiple studies) is not realized. Furthermore, it is difficult to make strong inferences about single study results because the result may be an isolated random effect.
Fig. 2. The full network of studies relevant to the review. Each circle represents a vaccine, and lines between circles indicate a direct comparison. The key is reported in Table S6.
Due to these characteristics, it was necessary to limit the network meta-analysis to treatments of single products linked in the largest network to the true non-active control. When two products were used in combination, we had expected to be able to assess evidence of an interaction by comparing the effect predicted from the single-vaccine estimates in the model with the observed effect size reported for the combination arm; however, the comparisons required for this assessment were not reported.
Therefore, the final network used in the meta-analysis to describe the efficacy of BRD vaccines contained vaccines evaluated in only 14 studies (Griffin et al., Reference Griffin, Amstutz, Morter, Hendrix and Crandall1979; Wohler and Baugh, Reference Wohler and Baugh1980; Confer et al., Reference Confer, Wright, Cummins, Panciera and Corstvet1983; Purdy et al., Reference Purdy, Livingston, Frank, Cummins, Cole and Loan1986; Thomas et al., Reference Thomas, Stott, Howard and Gourlay1986; Bateman, Reference Bateman1988; McLean et al., Reference McLean, Smith, Gill and Randolph1990; Thorlakson et al., Reference Thorlakson, Martin and Peters1990; Wright et al., Reference Wright, Mowat and Mallard1994; O'Connor et al., Reference O'Connor, Martin, Harland, Shewen and Menzies2001; Stilwell et al., Reference Stilwell, Matos, Carolino and Lima2008; McKaig and Taylor, Reference McKaig and Taylor2015; Richeson et al., Reference Richeson, Beck, Poe, Gadberry, Hess and Hubbell2015; Rogers et al., Reference Rogers, Miles, Hughes, Renter, Woodruff and Zuidhof2015), which included 17 vaccines and 73 treatment arms (Fig. 3). The vaccine regimens used by these studies are reported in Table 1. As shown in Table 1, not all trials report the use of the vaccine explicitly at arrival, but the results of all 17 vaccine protocols are included for completeness. Of these 14 studies, two were three-arm trials, one was a four-arm trial, and the remainder were two-arm trials. Twelve studies were non-active control-to-active comparisons, and two studies were active-to-active comparisons.
Fig. 3. The network of treatment arms used in mixed-treatment comparison meta-analysis. The size of the dot is a relative indicator of the number of arms, and the width of the line is a relative indicator of the number of direct comparisons (i.e., number of arms). Lines between circles indicate a direct comparison. Abbreviations are defined in Table 1.
Table 1. The vaccine product used and abbreviations in each study arm in the largest network of studies included in the review. The day the product was administered is included in parentheses
The day of arrival is considered day 0; (0–1) means that the vaccine was administered over 2 days (day 0 and day 1), and (0,1) means that the vaccine was administered on day 0 and day 1. NR means the date of vaccination was not reported, and NA means not applicable.
Summary of network geometry
The geometry of the meta-analysis network was sparse, with many vaccine regimens being assessed only once (Fig. 3). This network would be considered diverse as measured by a PIE index of 0.86 (Salanti et al., Reference Salanti, Kavvoura and Ioannidis2008). This finding is consistent with a visual examination of the network, which includes a large number of treatments (Fig. 3). The C-score was 1.6, and the C-score test had a large P-value (P = 0.57). These metrics evaluate how encounters occur in ecological populations, and when used in a network meta-analysis, they assess whether particular pairwise comparisons occur more (or less) often than expected by random encounter. Given the absence of replication in the entire network, the lack of statistical evidence of preferred comparisons is not surprising.
Study characteristics and study results
Descriptive information for studies is provided in the supplementary materials along with their definitions of success and exclusion criteria (Table S7). Particularly interesting information in this table relates to the baseline conditions applied to all animals in each trial. Notably, some authors failed to clearly document concurrent treatments such as non-BRD-related vaccinations, antiparasitic treatments, and antibiotics received.
Definitions of BRD are reported in Table S9. When multiple studies were described in the same article, they used the same definitions of the outcome and exclusion criteria, so the tables are indexed by article rather than study. The definitions of outcomes were very consistent across articles and frequently reported. The approach to handling BRD cases diagnosed at arrival is also reported in Table S8.
Individual study risk of bias
The results of individual study risk of bias assessments are shown in Table S10. No studies were cluster-randomized trials, which is not unexpected for vaccine trials. For the design features most known to impact bias in veterinary science – randomization and blinding – most studies had an unclear risk of bias due to incomplete reporting. This is likely a function of the age of most vaccine studies. Although reporting of design features such as randomization and blinding has been improving (Totton et al., Reference Totton, Cullen, Sargeant and O'Connor2018), many of these studies are older.
Individual study results
As the individual study results were available in multiple forms (e.g., raw data, risk estimates, odds ratios), these were transformed to pooled risk ratios (when more than one study was available) in the final meta-analysis, which is shown in Table 2.
Table 2. Risk ratio of all possible pairwise comparisons within the evidence network
The upper right-hand quadrant represents the estimated risk ratio, and the lower left-hand quadrant represents the 95% confidence interval.
Abbreviations are defined in Table 1.
For the final meta-analysis including 14 studies, measures of convergence for the Bayesian model were within normal limits, as assessed by visual examination of trace plots. The results of the model are presented in several ways. The estimates of average rank are provided in Fig. 4. Lower rankings are associated with a lower incidence of BRD post-vaccination (i.e., a vaccine associated with the lowest or highest BRD diagnosis post-arrival would have a ranking of 1 or 17, respectively). The non-active control group (NAC), which is based on 14 arms of data, has a middle rank. The rankings were extremely close to each other, suggesting little difference in the performance of vaccine protocols. For 16 of the 17 vaccine protocols, no median rank was separated from the next by more than one unit. For example, the second and third highest-ranked vaccine protocols had ranks of 6.82 and 7.74, respectively. Overall, the 16 products had rankings ranging from 6 to 12. By definition, a ranking plot must impose an order on the vaccines included in the meta-analysis, but the closeness of the ranking estimates illustrates that it was not possible to differentiate vaccines based on performance. The probability distributions of vaccination responses (i.e., control of BRD) are presented in Table S13 and Fig. S1. These distributions provide a different way of presenting information from the ranking plot and show that the vaccines are poorly differentiated. Similarly, Table 2 reports wide confidence intervals, which give no indication of any vaccine being substantially better or worse than the non-active control group in controlling BRD events.
Fig. 4. The ranking plot of vaccine protocols included in the largest connected network. The scale of rankings is 1 to 17, with lower numerical rankings indicating a lower incidence of BRD. The black box represents the point estimate of the ranking, and the horizontal line represents the 95% confidence interval. Abbreviations are defined in Table 1. The size of the black box reflects the weighting, which is the inverse of the variance. Because NAC has the smallest variance, it has the greatest precision and therefore the largest box.
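The rankings summarized in Fig. 4 are derived from the posterior samples: within each MCMC draw, treatments are ordered by their estimated effect, and the ranks are then averaged across draws. The R sketch below illustrates the idea with simulated draws for a small hypothetical network; it is not the review's own ranking code.

```r
# Illustrative ranking calculation from posterior samples of treatment effects
# (log odds of BRD relative to baseline); lower effect = lower BRD = better rank.
set.seed(1)
d_samples <- matrix(rnorm(1000 * 4, mean = 0, sd = 0.3), ncol = 4)   # hypothetical draws
colnames(d_samples) <- c("NAC", "VaccineA", "VaccineB", "VaccineC")
ranks <- t(apply(d_samples, 1, rank))         # rank treatments within each draw
colMeans(ranks)                               # mean rank per treatment
apply(ranks, 2, quantile, c(0.025, 0.975))    # 95% interval of the ranks
```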
Exploration of inconsistency
The consistency between the direct and indirect sources of evidence of the final model including 14 trials and 73 arms is reported in Table 3. There was no evidence of inconsistency between the direct and indirect estimates. Again, the potential cause of this finding is that the small number of studies available for some comparisons means that the confidence intervals for direct estimates were wide, making it difficult to detect differences between direct and indirect estimates.
Table 3. Results of indirect comparisons for the consistency assumption
Posterior means (d) and standard deviations (SDs) of log odds ratios of treatment effects calculated using direct evidence only (dir), all evidence (MTC), or indirect evidence only (rest). The treatment on the left is the reference (denominator) and on the right is the comparator (numerator). w and SD(w) are the inconsistency estimate and SD of the inconsistency estimate, respectively.
Risk of bias across studies
As there were many possible pairwise comparisons for the risk of bias assessment across studies, we present a subset for illustrative purposes. As no vaccines appeared to be more effective than the non-active control, the overall picture of the risk of bias is more relevant than any particular pairwise comparison. Table S11 shows the number of direct comparisons available, with the largest number being two. Risk of bias assessment based on randomization status is presented in Figs. 5 and 6. Risk of bias assessment based on blinding status is presented in Figs. 7 and 8. These are split into two sets of studies for ease of presentation. The overall picture is of a body of work with incomplete reporting of randomization and blinding.
Fig. 5. Part 1: Contribution of studies to the point estimate based on the description of the allocation approach. Green indicates a study providing evidence of random allocation, yellow indicates a study reporting random allocation but providing no supporting evidence, and red indicates a study reporting no allocation approach or a non-random allocation approach. White vertical lines indicate the percentage contribution of separate studies. Each bar shows the percentage contribution from studies judged to be at low (green), moderate (yellow), and high (red) risk of bias.
Fig. 6. Part 2: Contribution of studies to the point estimate based on the description of the allocation approach. Green indicates a study providing evidence of random allocation, yellow indicates a study reporting random allocation but providing no supporting evidence, and red indicates a study reporting no allocation approach or a non-random allocation approach. White vertical lines indicate the percentage contribution of separate studies. Each bar shows the percentage contribution from studies judged to be at low (green), moderate (yellow) and high (red) risk of bias.
Fig. 7. Part 1: The contribution of studies to the point estimate based on the blinding approach. Green indicates a study providing evidence of blinding of caregivers and outcome assessors, yellow indicates a study providing evidence that either caregivers or outcome assessors were blinded, and red indicates a study reporting no blinding of caregivers or outcome assessors. White vertical lines indicate the percentage contribution of separate studies. Each bar shows the percentage contribution from studies judged to be at low (green), moderate (yellow), and high (red) risk of bias.
Results of additional analyses
Summary of evidence
The results of our network meta-analysis suggest that there is insufficient evidence to support the contention that commercial vaccines are effective at preventing the incidence of BRD among beef cattle when administered upon feedlot arrival. Given that 90% of feedlots use and 90% of cattle receive vaccines upon feedlot arrival in the USA, this may seem like a surprising finding. However, this finding is consistent with the existing primary and review literature.
An evaluation of the primary literature reveals that the observed magnitude of protection afforded by BRD vaccines is low, with some trials showing higher disease occurrence among vaccinated animals. This phenomenon is observable in the ranking plot, in which some vaccines rank worse than the non-active control. This result is not a product of the Bayesian model used but rather a consequence of the many treatments informed by only a single study. For single studies, the Bayesian model uses the point estimate of the risk ratio or odds ratio obtained from the study. That is, for treatments with a single evidence arm, the only evidence used in the point estimate is the original study data; however, the precision of the risk ratio estimate is influenced by the model, which uses between-study variation information drawn from the entire network. Vaccines that ranked worse than the non-active control had non-significant effects in the original studies, but empirically the reported disease risk was higher in vaccinated than in non-vaccinated animals. However, just as we would not propose that effect sizes below the null value be interpreted as evidence of efficacy without replication of the results, we do not propose that vaccines with risk ratios above the null value show evidence of harm. A better alternative explanation might be that the use of vaccines upon feedlot arrival is ineffective (i.e., with effect sizes randomly distributed around the null value).
The finding that vaccines used upon feedlot arrival are not effective may not be surprising, as this has been known for years, and there are no new trials of vaccines reported that would be expected to produce different results. For example, below are quotes from various prior reviews of BRD vaccines:
• 'In North America, vaccination has resulted in equivocal changes in the incidence of BRD and many of the studies on vaccination of feeder calves in which adequate control groups were included, suggests the practice does not appreciably reduce the incidence or severity of BRD or have a beneficial effect on growth rate and feed conversion efficiency.' (Cusack et al., Reference Cusack, McMeniman and Lean2003)
• 'It is highly unlikely that control of BRD in the feedlot can be accomplished through an on-arrival vaccination program. … A literature review of scientifically valid field efficacy vaccine trials by Perino and Hunsaker found that modified-live BHV-1 (IBR) achieved equivocal results. Studies concerning efficacy of BVDV and PI3 vaccines lacked any reliable results, whereas BRSV vaccine studies showed efficacy was equivocal, lacking any negative impact on health.' (Edwards, Reference Edwards2010)
• 'Vaccination against bacterial and viral pathogens implicated in BRD is broadly accepted as an effective control measure and is widely practiced, although supportive evidence of efficacy is sometimes lacking'. (Theurer et al., Reference Theurer, Larson and White2015). This review reports that, with the exception of vaccination against BHV-1 (inactivated or modified live) and BVDV (inactivated only), meta-analyses of trials from experimental challenge studies show no strong evidence of differences in the risk of BRD-related morbidity in vaccinated animals compared with non-vaccinated controls.
Although many other examples are available, we conclude that, overall, our results are consistent with those of other reviews. There may be one important difference in the inference about vaccines between the present review and the prior review by Theurer et al. (Reference Theurer, Larson and White2015) as it relates to the effect of four-way modified live viral vaccines, which are the most commonly used vaccines in feedlot production. In Theurer et al.'s review and meta-analysis of viral BRD vaccines (antigens of interest were BHV-1, BVDV, BRSV, and PI3), pairwise meta-analysis was conducted for five trials that compared the efficacy of vaccines versus a no-vaccine placebo group against natural BRD infection in beef calves. Their summary risk ratio indicates that morbidity in vaccinated calves was lower than in non-vaccinated calves (risk ratio = 0.44; 95% confidence interval = 0.26–0.74), suggesting a protective effect. Factors contributing to this discrepancy between studies could include differences in eligibility criteria, choice of outcome measure, approach to combining vaccines, and the particular studies included in the analysis.
• Eligibility criteria: The pairwise meta-analysis by Theurer et al. included two studies that would not be eligible for our review, as their study populations were beef cattle aged 2–6 weeks and dairy cows (Makoschey et al., Reference Makoschey, Bielsa, Oliviero, Olivier, Pillet, Dufe, Giorgio and Cavirani2008).
• Outcome metrics: Due to our prioritization of metrics, we extracted the adjusted estimate of effect (i.e., odds ratio adjusted for breed and age at vaccination) from the Stilwell et al. (Reference Stilwell, Matos, Carolino and Lima2008) study; therefore, this study was considered one adjusted analysis and result. Theurer et al. did not use this approach to data extraction and instead extracted raw data from three breeds and treated these as separate experiments.
• Pooling approach: BRD vaccines made by different companies potentially use different viral antigens and adjuvants. In a meta-analysis of active-to-active comparisons by Theurer et al., some cattle in the included studies received bacterial BRD vaccines (Presponse®, Vision 7 Somnus® with Spur®) at processing as part of the BRD vaccine intervention, which made comparisons between groups more complex to interpret.
• Studies included: In our review, we identified two studies that reported using Rispoval® (Thomas et al., Reference Thomas, Stott, Howard and Gourlay1986; Stilwell et al., Reference Stilwell, Matos, Carolino and Lima2008). One of these studies reported a very large vaccination effect (odds ratio = 0.2), and the other reported no vaccination effect (odds ratio = 1.02).
These factors impact the inferences made by the two reviews. However, rather than focusing on minor explainable differences, we believe that the more important message is that there is a lack of high-quality controlled studies in relevant study populations supporting the use of modified live vaccines upon feedlot arrival.
Risk of bias assessment in systematic reviews allows an understanding of how bias might impact estimates of efficacy. When a null effect is observed, it is hard to predict the direction of the risk of bias. However, we note that, overall, the reporting of studies is poor, with critical information about allocation approach and blinding of outcome assessment often missing. This may be because many of the included studies are older, whereas reporting standards have changed in the past decade, contributing to improvements in the quality of reporting over time (Totton et al., Reference Totton, Cullen, Sargeant and O'Connor2018).
It would be interesting to understand why producers bear the cost of vaccination despite evidence against the practice. For decades, commercial vaccines have been given to feedlot cattle in North America with the aim of controlling BRD (Cusack et al., Reference Cusack, McMeniman and Lean2003). This is despite the fact that registration of such vaccines relies on efficacy studies often utilizing artificial (as opposed to natural) antigen challenge (Cusack et al., Reference Cusack, McMeniman and Lean2003). However, challenge studies may overestimate the efficacy of vaccines against natural infection in feedlots (Theurer et al., Reference Theurer, Larson and White2015). Registration of vaccines is related to efficacy against an organism in healthy animals but not in a particular setting such as feedlot arrival. These factors suggest vaccines would not work when administered to animals arriving at a feedlot, especially considering that many feedlots use metaphylaxis concurrent with vaccination, which implies that the animals are not healthy. The question remains: When used in the field by producers, are vaccines effective and the research evidence incorrect? Or is efficacy not a factor in the decision to use vaccines? An important issue with this body of work is that groups identified as control groups may, in fact, have received BRD vaccines as part of the baseline processing protocol for all cattle entering the feedlot. However, studies of vaccine-to-vaccine comparisons did not acknowledge baseline processing vaccines as part of the protocol given to all animals, regardless of the intervention group. For example, a study that actually evaluated
a baseline vaccine protocol plus the test vaccine versus the baseline vaccine protocol alone often referred to the treatments simply as 'vaccinated' and 'unvaccinated' (or 'control')
in their title, abstract, results, and discussion. Other reviews of BRD vaccine use in feedlot beef cattle did not explicitly identify this issue. For example, MacGregor et al. report that there are no significant differences in BRD morbidity between 'vaccinated and unvaccinated groups' based on a study of field efficacy of vaccines in beef cattle (MacGregor et al., Reference MacGregor, Smith, Perino and Hunsaker2003). In that study (MacGregor et al., Reference MacGregor, Smith, Perino and Hunsaker2003), vaccinated groups received an M. haemolytica bacterin toxoid. However, the 'unvaccinated' group received pyramid MLV (as did the vaccinated group), so this is not a true comparison between vaccinated and unvaccinated cattle. Likewise, in a review of bacterial vaccines for naturally infected feedlot cattle, although the authors report excluding studies confounded by 'other vaccine treatment' and only including studies with a placebo/control group, in many of the included studies all enrolled cattle (including control groups) had been vaccinated with viral BRD vaccines (Larson and Step, Reference Larson and Step2012). The results of that review indicate possible evidence of the efficacy of M. haemolytica or M. haemolytica + P. multocida vaccines but no evidence of efficacy of H. somni vaccines against BRD (Larson and Step, Reference Larson and Step2012).
Our review question was restricted to the application of vaccines upon feedlot arrival, although the largest connected network included vaccination prior to feedlot entry with no other changes in management practice and delayed vaccination. Our rationale for focusing on vaccination upon arrival is that we were interested in the impact of interventions that could potentially substitute for antibiotic use to prevent BRD incidence. When considering alternatives to antibiotic use, many options are available that are not a direct substitute at the time of feedlot arrival. For example, ensuring that all calves are weaned and trained to use a bunk for feed for 4 weeks on the farm of origin prior to shipping is an alternative to antibiotics upon feedlot arrival. However, we focused on interventions that are widely used in approximately the same manner as antibiotics (i.e., vaccines upon arrival). This seems a sensible focus, as more than 90% of feedlots use and more than 90% of cattle in feedlots are vaccinated for BRD (United States Department of Agriculture, Animal and Plant Health Inspection Service, Veterinary Services, 2013). It would be interesting to understand why this practice is so prevalent. We could not find veterinary organizations that suggest vaccination upon feedlot arrival, even though some biologic companies suggest the practice might be useful (https://www.zoetisus.com/news-and-media/on-arrival-vaccination-research-shows-benefit-to-bottom-line.aspx).
Also, the results of our review should not be inferred to represent the efficacy of BRD vaccines in other settings or production systems or at other times of administration. Furthermore, authors of previous reviews suggest that results from dairy cattle or differently aged animals may not be relevant to the feedlot population (Theurer et al., Reference Theurer, Larson and White2015). Vaccine-induced immunity may take 14–21 days to develop (Edwards, Reference Edwards2010), and risk factors for BRD morbidity may occur prior to feedlot arrival, including those related to weaning, mixing of animals from different farms of origin (e.g., at an auction barn), transport, and fasting during transport (Cusack et al., Reference Cusack, McMeniman and Lean2003; Edwards, Reference Edwards2010). Thus, BRD control may be best managed by limiting these risk factors.
It should also be noted that the limitations of the body of literature, in particular the absence of evidence that vaccines are effective, are not related to the approach to evidence synthesis, because prior systematic and narrative reviews have reached the same conclusions.
In conclusion, we found no evidence that vaccination of beef cattle upon feedlot arrival is effective in reducing BRD incidence. It was not possible to evaluate the comparative efficacy of vaccines, as we had proposed, because the products do not appear to be effective. If producers and veterinarians are under the impression that vaccines reduce the incidence of BRD when administered upon feedlot arrival, then this perception needs to be understood. The veterinary community should seek to understand why vaccines are used upon feedlot arrival and how we can either provide a better evidence base for their use or change the approach to vaccine use so that the products can better reduce BRD incidence in feedlots.
The supplementary material for this article can be found at https://doi.org/10.1017/S1466252319000288.
None to declare.
AOC developed the review protocol, coordinated the project team, assisted with the data analysis, interpreted the results, and prepared the manuscript drafts. DH conducted the data analysis, provided guidance for the interpretation of the results, commented on manuscript drafts, and approved the final manuscript version. ST conducted relevance screening, extracted data, provided guidance for the interpretation of the results, commented on manuscript drafts, and approved the final manuscript version. NS conducted relevance screening, extracted data, provided guidance for the interpretation of the results, commented on manuscript drafts, and approved the final manuscript version. CW developed the review protocol, provided guidance for the interpretation of the results, commented on manuscript drafts, and approved the final manuscript version. JG developed the review protocol, provided guidance on the creation of the search, commented on manuscript drafts, and approved the final manuscript version. HW developed the review protocol, developed and conducted the search, commented on manuscript drafts, and approved the final manuscript version. BWang developed the review protocol, provided guidance on the conduct of the analyses and interpretation of the results, commented on manuscript drafts, and approved the final manuscript version. CW developed the review protocol, provided guidance on the conduct of the analyses and interpretation of the results, commented on manuscript drafts, and approved the final manuscript version. BWhite developed the review protocol, provided guidance on the conduct of the analyses and interpretation of the results, commented on manuscript drafts, and approved the final manuscript version. RL developed the review protocol, provided guidance on the conduct of the analyses and interpretation of the results, commented on manuscript drafts, and approved the final manuscript version. JS developed the review protocol, provided guidance for the interpretation of the results, commented on manuscript drafts, and approved the final manuscript version.
Publication declaration
The authors declare that this is a full and accurate description of the project and no important information or analyses are omitted.
Support for this project was provided by The Pew Charitable Trusts.
AOC, ST, JS, CW, NS, CW, DH, JG, HW, and BWang have no conflicts to declare. RL has conducted research or consulting for Zoetis Animal Health, Merck & Company, CEVA Biomune, Boehringer Ingelheim Vetmedica, and Merial Animal Health, which may manufacture one or more of the products assessed. BWhite has conducted research or consulting for Bayer Animal Health, Boehringer Ingelheim, Elanco Animal Health, Merck Animal Health, Merial Animal Health, and Zoetis Animal Health, which may manufacture one or more of the products assessed.
American Veterinary Medical Association (n.d.) AVMA vaccination principles. Last accessed on 2019-11-04. Available at https://www.avma.org/KB/Policies/Pages/Vaccination-Principles.aspx.
Amstutz, HE, Horstman, LA and Morter, RL (1981) Clinical evaluation of the efficacy of Haemophilus somnus and Pasteurella sp. bacterins. Bovine Practitioner 16, 106–108.
Anon (1913, as amended 1985) Virus-Serum-Toxin Act, 21 USC 151–159 et seq. Last accessed on 2019-11-04. Available at https://www.aphis.usda.gov/animal_health/vet_biologics/publications/vsta.pdf.
Bailey, EA, Jaeger, JR, Schmidt, TB, Waggoner, JW, Pacheco, LA, Thomson, DU and Olson, KC (2016) Effects of number of viral respiratory disease vaccinations during preconditioning on health, performance, and carcass merit of ranch-direct beef calves during receiving and finishing. Professional Animal Scientist 32, 271–278.
Bateman, KG (1988) Efficacy of a Pasteurella haemolytica vaccine/bacterial extract in the prevention of bovine respiratory disease in recently shipped feedlot calves. Canadian Veterinary Journal 29, 838–839.
Bechtol, DT, Ballinger, RT and Sharp, AJ (1991) Field trial of a Pasteurella haemolytica toxoid administered at spring branding and in the feedlot. Agri-Practice 12, 6–14.
Bennett, BW (1982) Efficacy of Pasteurella bacterins for yearling feedlot cattle. Bovine Practice 3, 26–30.
Centre for Reviews & Dissemination (CRD) (2008) Systematic Reviews: CRD's Guidance for Undertaking Reviews in Health Care. Available at https://www.york.ac.uk/media/crd/Systematic_Reviews.pdf.
CINeMA: Confidence in Network Meta-Analysis [Software] (2017) Available at https://cinema.ispm.unibe.ch.
Coleman, C, Phung, O, Cappelleri, J, Baker, W, Kluger, J, White, C and Sobieraj, D (2012) Use of Mixed Treatment Comparisons in Systematic Reviews [Internet]. Agency for Healthcare Research and Quality. Publication No. 12-EHC119-EF. Available at www.effectivehealthcare.ahrq.gov/reports/final.cfm.
Confer, AW, Wright, JC, Cummins, JM, Panciera, RJ and Corstvet, RE (1983) Use of a fluorometric immunoassay to determine antibody response to Pasteurella haemolytica in vaccinated and nonvaccinated feedlot cattle. Journal of Clinical Microbiology 18, 866–871.
Cusack, PM, McMeniman, N and Lean, IJ (2003) The medicine and epidemiology of bovine respiratory disease in feedlots. Australian Veterinary Journal 81, 480–487.
Dias, S, Welton, N, Caldwell, D and Ades, A (2010) Checking consistency in mixed treatment comparison meta-analysis. Statistics in Medicine 29, 932–944.
Dias, S, Welton, NJ, Sutton, AJ and Ades, A (2011) NICE DSU technical support document 2: a generalised linear modelling framework for pairwise and network meta-analysis of randomised controlled trials. Available at https://research-information.bristol.ac.uk/files/7215331/TSD2_General_meta_analysis.final.08.05.12.pdf.
Edwards, TA (2010) Control methods for bovine respiratory disease for feedlot cattle. Veterinary Clinics of North America: Food Animal Practice 26, 273–284.
Frank, GH, Briggs, RE, Duff, GC, Loan, RW and Purdy, CW (2002) Effects of vaccination prior to transit and administration of florfenicol at time of arrival in a feedlot on the health of transported calves and detection of Mannheimia haemolytica in nasal secretions. American Journal of Veterinary Research 63, 251–256.
Frank, GH, Briggs, RE, Duff, GC and Hurd, SH (2003) Effect of intranasal exposure to leukotoxin-deficient Mannheimia haemolytica at the time of arrival at the feedyard on subsequent isolation of M. haemolytica from nasal secretions of calves. American Journal of Veterinary Research 64, 580–585.
Gotelli, N and Entsminger, G (2001) EcoSim: null models software for ecology. Available at http://www.uvm.edu/~ngotelli/EcoSim/EcoSim.html.
Griffin, DD, Amstutz, HE, Morter, RL, Hendrix, KS and Crandall, RA (1979) Oxytetracycline toxicity associated with bovine respiratory disease therapy. Bovine Practitioner 14, 29–32, 34–35.
Grooms, DL, Brock, KV, Bolin, SR, Grotelueschen, DM and Cortese, VS (2014) Effect of constant exposure to cattle persistently infected with bovine viral diarrhea virus on morbidity and mortality rates and performance of feedlot cattle. Journal of the American Veterinary Medical Association 244, 212–224.
Gummow, B and Mapham, PH (2000) A stochastic partial-budget analysis of an experimental Pasteurella haemolytica feedlot vaccine trial. Preventive Veterinary Medicine 43, 29–42.
Harland, RJ, Potter, AA, van Drunen-Littel-van den Hurk, S, Van Donkersgoed, J, Parker, MD, Zamb, TJ and Janzen, ED (1992) The effect of subunit or modified live bovine herpesvirus-1 vaccines on the efficacy of a recombinant Pasteurella haemolytica vaccine for the prevention of respiratory disease in feedlot calves. Canadian Veterinary Journal 33, 734–741.
Higgins, J and Whitehead, A (1996) Borrowing strength from external trials in a meta-analysis. Statistics in Medicine 15, 2733–2749.
Higgins, J, Sterne, J, Savović, J, Page, M, Hróbjartsson, A, Boutron, I, Reeves, B and Eldridge, S (2016) A revised tool for assessing risk of bias in randomized trials. In: Cochrane Methods. Cochrane Database of Systematic Reviews, Issue 10 (Suppl 1).
Hu, D, Wang, C and O'Connor, AM (2019) A method of back-calculating the log odds ratio and standard error of the log odds ratio from the reported group-level risk of disease. bioRxiv. Available at https://www.biorxiv.org/content/early/2019/09/06/760942.
Jim, K, Guichon, T and Shaw, G (1988) Protecting feedlot calves from pneumonic pasteurellosis. Veterinary Medicine 83, 1084–1087.
Kirkpatrick, JG, Step, DL, Payton, ME, Richards, JB, McTague, LF, Saliki, JT, Confer, AW, Cook, BJ, Ingram, SH and Wright, JC (2008) Effect of age at the time of vaccination on antibody titers and feedlot performance in beef calves. Journal of the American Veterinary Medical Association 233, 136–142.
Koevering, MTV, Gill, DR, Owens, FN, Smith, RA and Ball, RL (1992) Vaccine treatments to improve health and performance of newly arrived stocker cattle. Animal Science Research Report, Agricultural Experiment Station, Oklahoma State University (MP-136), pp. 342–346.
Larson, RL and Step, DL (2012) Evidence-based effectiveness of vaccination against Mannheimia haemolytica, Pasteurella multocida, and Histophilus somni in feedlot cattle for mitigating the incidence and effect of bovine respiratory disease complex. Veterinary Clinics of North America: Food Animal Practice 28, 97–106, 106e1–7, ix.
Lu, G and Ades, A (2004) Combination of direct and indirect evidence in mixed treatment comparisons. Statistics in Medicine 23, 3105–3124.
MacGregor, S, Smith, D, Perino, L and Hunsaker, B (2003) An evaluation of the effectiveness of a commercial Mannheimia (Pasteurella) haemolytica vaccine in a commercial feedlot. Bovine Practitioner 37, 78–82.
Makoschey, B, Bielsa, JM, Oliviero, L, Olivier, R, Pillet, F, Dufe, D, Giorgio, V and Cavirani, S (2008) Field efficacy of combination vaccines against bovine respiratory pathogens in calves. Acta Veterinaria Hungarica 56, 485–493.
Malcolm-Callis, KJ, Galyean, ML and Duff, GC (1994) Effects of dietary supplemental protein source and a Pasteurella haemolytica toxoid on performance and health of newly received calves. Agri-Practice 15, 22–28.
Martin, W, Acres, S, Janzen, E, Willson, P and Allen, B (1984) A field trial of preshipment vaccination of calves. Canadian Veterinary Journal 25, 145–147.
McKaig, CM and Taylor, KI (2015) Comparison of efficacy of various Mannhaemia-Pasteurella vaccines against pneumonic pasteurellosis in young Holstein calves. American Association of Bovine Practitioners Conference Proceedings, p. 279.
McLean, GS, Smith, RA, Gill, DR and Randolph, TC (1990) An evaluation of an inactivated, leukotoxin-rich, cell-free Pasteurella hemolytica vaccine for prevention of undifferentiated bovine respiratory disease. Animal Science Research Report (MP-129), pp. 135–140.
Mills, L (1991) Cross-protection of feedlot calves against Pasteurella endotoxemia with an Re mutant Salmonella typhimurium bacterin-toxoid. Agri-Practice 12, 35–36, 38–39.
Moher, D, Liberati, A, Tetzlaff, J and Altman, DG (2009) Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. BMJ 339, b2535.CrossRefGoogle ScholarPubMed
Morter, RL and Amstutz, HE (1983) Evaluating the efficacy of a Haemophilus Somnus bacterin in a controlled field trial. Bovine Practitioner 18, 82–83.Google Scholar
Morter, RL, Amstutz, HE and Crandell, RA (1982) Clinical evaluation of prophylactic regimens for bovine respiratory disease. Bovine Practitioner 17, 56–58.Google Scholar
Morter, RL, Amstutz, HA and Roussel, AJ (1984) Prophylactic administration of hyperimmune serum when processing feedlot cattle. Bovine Practitioner 19, 45–48.Google Scholar
O'Connor, A, Martin, SW, Harland, R, Shewen, P and Menzies, P (2001) The relationship between the occurrence of undifferentiated bovine respiratory disease and titer changes to Haemophilus Somnus and Mannheimia haemolytica at 3 ontario feedlots.[erratum appears in Can J Vet Res 2001 oct;65(4):272]. Canadian Journal of Veterinary Research 65, 143–150.Google Scholar
O'Connor, AM, Coetzee, JF, da Silva, N and Wang, C (2013) A mixed treatment comparison meta-analysis of antibiotic treatments for bovine respiratory disease. Preventive Veterinary Medicine 110, 77–87.CrossRefGoogle ScholarPubMed
O'Connor, AM, Yuan, C, Cullen, JN, Coetzee, JF, da Silva, N and Wang, C (2016) A mixed treatment meta-analysis of antibiotic treatment options for bovine respiratory disease - an update. Preventive Veterinary Medicine 132, 130–139.CrossRefGoogle ScholarPubMed
Papakonstantinou, T, Nikolakopoulou, A, Rucker, G, Chaimani, A, Schwarzer, G, Egger, M and Salanti, G (2018) Estimating the contribution of studies in network meta-analysis: paths, flows and streams. F1000Research 7, 610.CrossRefGoogle ScholarPubMed
Perrett, T, Wildman, BK, Abutarbush, SM, Pittman, TJ, Jones, C, Pollock, CM, Schunicht, OC, Guichon, PT, Jim, GK and Booker, CW (2008) A comparison of two Mannheimia haemolytica immunization programs in feedlot calves at high risk of developing undifferentiated fever/bovine respiratory disease. Bovine Practitioner 42, 64–75.Google Scholar
Plummer, M (2015) rjags: Bayesian Graphical Models using MCMC. R package version 3-15. Available at http://CRAN.R-project.org/package=rjags.Google Scholar
Purdy, CW, Livingston, CW, Frank, GH, Cummins, JM, Cole, NA and Loan, RW (1986) A live Pasteurella haemolytica vaccine efficacy trial. Journal of the American Veterinary Medical Association 188, 589–591.Google ScholarPubMed
R Core Team (2015) R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing. Available at https://www.R-project.org/.Google Scholar
Ribble, CS, Jim, GK and Janzen, ED (1988) Efficacy of immunization of feedlot calves with a commercial Haemophilus Somnus bacterin. Canadian Journal of Veterinary Research 52, 191–198.Google ScholarPubMed
Richeson, JT, Beck, PA, Poe, KD, Gadberry, MS, Hess, TW and Hubbell, DS (2015) Effects of administration of a modified-live virus respiratory vaccine and timing of vaccination on health and performance of high-risk beef stocker calves. Bovine Practitioner 49, 37–42.Google Scholar
Rogers, KC, Portillo, TA, Smialek, DE, Miles, DG, Lehenbauer, TW and Smyth, R (2009) A comparison of two Mannheimia haemolytica vaccination strategies in freshly weaned southeastern feedlot heifers. Bovine Practitioner 43, 27–31.Google Scholar
Rogers, KC, Miles, DG, Hughes, HD, Renter, DG, Woodruff, J and Zuidhof, S (2015) Effect of initial respiratory viral-bacterial combination vaccine on performance, health, and carcass traits of auction-market derived feedlot heifers. Bovine Practitioner 49, 43–47.Google Scholar
Rogers, KC, Miles, DG, Renter, DG, Sears, JE and Woodruff, JL (2016) Effects of delayed respiratory viral vaccine and/or inclusion of an immunostimulant on feedlot health, performance, and carcass merits of auction-market derived feeder heifers. Bovine Practitioner 50, 154–162.Google Scholar
Salanti, G, Kavvoura, FK and Ioannidis, JP (2008) Exploring the geometry of treatment networks. Annals of Internal Medicine 148, 544–553.CrossRefGoogle ScholarPubMed
Salanti, G, Del Giovane, C, Chaimani, A, Caldwell, DM and Higgins, JPT (2014) Evaluating the quality of evidence from a network meta-analysis. PLoS ONE 9, 1–14. Available at https://doi.org/10.1371/journal.pone.0099682.CrossRefGoogle ScholarPubMed
Schunicht, OC, Booker, CW, Jim, GK, Guichon, PT, Wildman, BK and Hill, BW (2003) Comparison of a multivalent viral vaccine program versus a univalent viral vaccine program on animal health, feedlot performance, and carcass characteristics of feedlot calves. Canadian Veterinary Journal-Revue Veterinaire Canadienne 44, 43–50.Google ScholarPubMed
Smith, RA, Hicks, RB, Gill, DR and Ball, RL (1986) The effect of live Pasteurella hemolytica vaccine on health and performance of newly arrived stocker cattle. Animal Science Research Report. Miscellaneous Publication, Oklahoma Agricultural Experiment Station 118, 244–249.Google Scholar
Stilwell, G, Matos, M, Carolino, N and Lima, MS (2008) Effect of a quadrivalent vaccine against respiratory virus on the incidence of respiratory disease in weaned beef calves. Preventive Veterinary Medicine 85, 151–157.CrossRefGoogle ScholarPubMed
Theurer, ME, Larson, RL and White, BJ (2015) Systematic review and meta-analysis of the effectiveness of commercially available vaccines against bovine herpesvirus, bovine viral diarrhea virus, bovine respiratory syncytial virus, and parainfluenza type 3 virus for mitigation of bovine respiratory disease complex in cattle. Journal of the American Veterinary Medical Association 246, 126–142.CrossRefGoogle ScholarPubMed
Thomas, LH, Stott, EJ, Howard, CJ and Gourlay, RN (1986) Development of a multivalent vaccine against calf respiratory disease. Proceedings of the 14th World Congress on Diseases of Cattle, Dublin 1, 691–696.Google Scholar
Thorlakson, B, Martin, W and Peters, D (1990) A field trial to evaluate the efficacy of a commercial Pasteurella haemolytica bacterial extract in preventing bovine respiratory disease. Canadian Veterinary Journal 31, 573–579.Google ScholarPubMed
Totton, SC, Cullen, JN, Sargeant, JM and O'Connor, AM (2018) The reporting characteristics of bovine respiratory disease clinical intervention trials published prior to and following publication of the reflect statement. Preventive Veterinary Medicine 150, 117–125. Available at http://www.sciencedirect.com/science/article/pii/S0167587717306487.CrossRefGoogle ScholarPubMed
United States Department of Agriculture, Animal and Plant Health Inspection Service, Veterinary Services (2013) National Animal Health Monitoring System Feedlot 2011 Part iv: Health and Health Management on U.S. Feedlots with a Capacity of 1000 or More Head. Available at https://www.aphis.usda.gov/animal_health/nahms/feedlot/downloads/feedlot2011/Feed11_dr_PartIV_1.pdf.Google Scholar
Van Donkersgoed, J, Janzen, ED, Townsend, HG and Durham, PJ (1990) Five field trials on the efficacy of a bovine respiratory syncytial virus vaccine. Canadian Veterinary Journal 31, 93–100.Google ScholarPubMed
van Donkersgoed, J, Schumann, FJ, Harland, RJ, Potter, AA and Janzen, ED (1993) The effect of route and dosage of immunization on the serological response to a Pasteurella haemolytica and Haemophilus Somnus vaccine in feedlot calves. Canadian Veterinary Journal 34, 731–735.Google ScholarPubMed
Viechtbauer, W (2010) Conducting meta-analyses in R with the metafor package. Journal of Statistical Software 36, 1–48. Available at http://www.jstatsoft.org/v36/i03/.CrossRefGoogle Scholar
White, BJ, Theurer, ME, Goehl, DR and Thompson, P (2017) Effect of modified-live bovine viral diarrhea virus type 2 vaccine on performance, health, temperature, and behavior response in high-risk beef heifer calves. Bovine Practitioner 51, 38–47.Google Scholar
Wildman, BK, Perrett, T, Abutarbush, SM, Guichon, PT, Pittman, TJ, Booker, CW, Schunicht, OC, Fenton, RK and Jim, GK (2008) A comparison of 2 vaccination programs in feedlot calves at ultra-high risk of developing undifferentiated fever/bovine respiratory disease. Canadian Veterinary Journal 49, 463–472.Google ScholarPubMed
Wildman, BK, Jim, GK, Perrett, T, Schunicht, OC, Hannon, SJ, Fenton, RK, Abutarbush, SM and Booker, CW (2009) A comparison of two multivalent viral vaccine programs in feedlot calves at high risk of developing undifferentiated fever/bovine respiratory disease. Bovine Practitioner 43, 130–139.Google Scholar
Wohler, WH and Baugh, CL (1980) Shipping fever pasteurellosis and salmonellosis prophylaxis. Modern Veterinary Practice 61, 921–923.Google ScholarPubMed
Wright, AJ, Mowat, DN and Mallard, BA (1994) Supplemental chromium and bovine respiratory disease vaccines for stressed feeder calves. Canadian Journal of Animal Science 74, 287–295.CrossRefGoogle Scholar
Research article | Open | Published: 13 June 2019
Enhancing ontology-driven diagnostic reasoning with a symptom-dependency-aware Naïve Bayes classifier
Ying Shen ORCID: orcid.org/0000-0002-3220-904X1,
Yaliang Li2,
Hai-Tao Zheng3,
Buzhou Tang4 &
Min Yang5
Ontologies have attracted substantial attention from both academia and industry. Handling uncertain reasoning is an important problem in ontology research. For example, when a patient is suffering from cirrhosis, the appearance of abdominal vein varices is four times more likely than the presence of a bitter taste. Such medical knowledge is crucial for decision-making in various medical applications but is missing from existing medical ontologies. In this paper, we aim to discover medical knowledge probabilities from electronic medical record (EMR) texts to enrich ontologies. First, we build an ontology by identifying meaningful entity mentions from EMRs. Then, we propose a symptom-dependency-aware naïve Bayes classifier (SDNB) that is based on the assumption that there is a level of dependency among symptoms. To ensure the accuracy of the diagnostic classification, we incorporate the probability of a disease into the ontology.
We conduct a series of experiments to evaluate whether the proposed method can discover meaningful and accurate probabilities for medical knowledge. Based on over 30,000 deidentified medical records, we explore 336 abdominal diseases and 81 related symptoms. Among these 336 gastrointestinal diseases, the probabilities of 31 diseases are obtained via our method. These 31 probabilities of diseases and 189 conditional probabilities between diseases and the symptoms are added into the generated ontology.
In this paper, we propose a medical knowledge probability discovery method that is based on the analysis and extraction of EMR text data for enriching a medical ontology with probability information. The experimental results demonstrate that the proposed method can effectively identify accurate medical knowledge probability information from EMR data. In addition, the proposed method can efficiently and accurately calculate the probability of a patient suffering from a specified disease, thereby demonstrating the advantage of combining an ontology and a symptom-dependency-aware naïve Bayes classifier.
An ontology is a set of concepts in a domain space, along with their properties and the relationships between them [1]. The past couple of decades have witnessed many successful real-world applications of ontologies in the medical and health domain, such as in medical diagnosis [2], disease classification [3], clinical inference learning [4], and medical knowledge representation and storage [5].
Despite the effectiveness of previous studies, existing ontologies for the medical domain are missing an important component: the knowledge-triplet probability. Due to the uncertainty and complexity of knowledge in the medical domain, the probability of a knowledge triplet depends on its head entity and tail entity. For example, the probability of the knowledge triplet (poor appetite, symptom-disease, cirrhosis) is 0.20; hence, when suffering from cirrhosis, 20% of patients have poor appetite. Such probabilities in medical knowledge are crucial for decision-making in various medical applications. Therefore, it is important to supplement medical ontologies with probability information.
An electronic medical record (EMR) is a structured collection of patient health information and medical knowledge that contains valuable information about probabilities. Thus, it can be a high-quality resource for the discovery of medical knowledge probabilities. After investigating the uncertainty regarding the patient's actual situation, it is necessary to separate the symptoms and diseases that are possible from those that are impossible, in order to determine which measures might be effective [6].
To overcome the challenges that are discussed above, we propose a novel knowledge acquisition method for medical probability discovery. Patients' medical records are used to construct an ontology and train a symptom-dependency-aware naïve Bayes classifier (SDNB classifier) to evaluate the probability of a disease before we observe any symptoms and the posterior probability considering the correlations among symptoms.
To evaluate the performance of the proposed method, we conduct experiments to evaluate the combined performance of the generated ontology and the symptom-dependency-aware naïve Bayes classifier on the medical diagnostic classification task. The experimental results demonstrate that our method can effectively discover medical knowledge probabilities and accurately classify diseases and pathologies.
In addition, we evaluate the performance of the proposed method under various scenarios in disease reasoning tasks by visualizing how ontological analysis is combined with a symptom-dependency-aware weighted naïve Bayes classifier to conduct the probability estimation and how probability enhances the interactions between the user and the computer in gastroenterology disease reasoning.
Our main contributions are threefold: 1) We enrich medical knowledge graphs with probability information by discovering the knowledge-triplet probability information from EMR data, which renders the corresponding medical ontology more accurate and more applicable to medical tasks. 2) We present a method for improving the naïve Bayes classifier based on the relevance of various attributes to disease diagnosis. 3) We demonstrate that the proposed method can reliably discover knowledge-triplet probabilities for medical ontologies. We also demonstrate the viability of training naïve Bayes classifiers to support medical decision-making.
Knowledge discovery from EMRs
EMR data on the phenotypes and treatments of patients are an underused data source that has much higher research potential than is currently realized. With their high-quality medical data, EMRs open new possibilities for data-driven knowledge discovery towards medical decision support. The mining of EMRs may establish new patient-stratification principles and reveal unknown disease correlations [7].
There are various medical knowledge discovery applications that are based on EMRs, including the discovery over-structured data (e.g., demographics, diagnoses, medications, and laboratory measurements) [8] and unstructured clinical text (e.g., radiology reports [9] and discharge summaries [10]). The research can be divided into entity discovery [11], phenotype extraction [12], disease topic discovery [13], temporal pattern mining [14], and medical event detection [15]. Several NLP techniques have been developed for clinical texts, e.g., coreference resolution [16], word sense disambiguation [17] and temporal relations [18]. Many studies have attempted to create annotated corpora [19] to facilitate the development and testing of these algorithms, which has also been the emphasis of the biomedical and clinical informatics community.
Probability discovery
In the literature, ontologies have been extensively studied with naïve Bayes classifiers via various approaches, such as document classification [20], ontology mapping [21, 22], and sentiment analysis [23]. However, the combined application of an ontology and a naïve Bayes classifier in medical uncertainty reasoning remains relatively new territory that is underexplored.
A naïve Bayes classifier is a probabilistic classifier that is based on Bayes' theorem that imposes strong (naive) independence assumptions between the features [24]. For example, the disease diagnosis module for the Global Infectious Disease and Epidemiology Network (GIDEON) [25] was developed using a naïve Bayes classifier that evaluates disease probabilities based on the patient's background, incubation period, symptoms and signs, and laboratory test results. Naïve Bayes classifiers have also been applied in many clinical decision support tasks, e.g., curing mammographic mass lesions [26], optimizing brain tumor treatment [27], and predicting the likelihood of a diabetic patient getting heart disease [28].
However, such fruitful results are subject to the assumption that attributes (symptoms) are independent from each other conditioned on the class variable (disease) [29]. This assumption of attribute independence need not necessarily hold true in disease diagnostic reasoning because a symptom can be strongly correlated with many diseases or symptoms [30]. For example, the symptom "diarrhea" may cause serum-electrolyte-disturbance–associated symptoms, e.g., hypokalemia and hyponatremia, while "hypokalemia" can cause decreased intestinal peristalsis, thereby leading to loss of appetite, nausea, and constipation. Therefore, the assumption of attribute independence of naïve Bayes classifiers may severely reduce its diagnostic accuracy.
Ontology enrichment
Many studies have constructed ontologies, including Freebase, DBpedia, and Disease Ontology (DO) [31]. These ontologies often suffer from incompleteness and sparseness since most of them have been built either collaboratively or semiautomatically. Thus, it is necessary to supplement these ontologies with extra information. An ontology can be enriched via two approaches: The first is to enrich the distributed knowledge representation by incorporating extra knowledge into knowledge embeddings [32]. The other is to reconstruct the ontology with new elements, such as probability information [33], temporal information [34], and space constraints [35]. In this study, we exploit the probability information in the ontology, which has received little attention so far.
Symptom-disease network reasoning
In the medical field, many studies explore the elucidation of the relationship between the molecular origins of diseases and their resulting symptoms. For example, Hidalgo et al. [36] introduce a new phenotypic database that summarizes correlations obtained from the disease histories of more than 30 million patients in a phenotypic disease network. Zhou et al. [37] use large-scale medical bibliographic records and the related medical subject heading (MeSH) metadata from PubMed to generate a symptom-based network of human diseases, where the link weight between two diseases quantifies the similarity of their corresponding symptoms. The main difference between our work and these existing works is that we combine AdaBoost optimization with a medical-specific OR value evaluation that identifies health feature variables and attributes and evaluates the co-occurrence frequency among symptoms in the EMRs. In addition, the final output of our task is an ontology rather than a symptom-based network. The annotations in the generated ontology, such as the disease introduction, disease/syndrome synonym, category, pathology, department, part of body, and lesion, can provide disease-related details to the user and facilitate clinical decision-making.
Ontology component analysis
First, we evaluate the quality of the generated ontology, which is the final output of our task. Based on over 30,000 deidentified medical records, we explore 336 gastrointestinal diseases and 81 related symptoms. Among these 336 gastrointestinal diseases, the probabilities of 31 diseases are obtained via our method. These 31 probabilities of diseases and 189 conditional probabilities between diseases and symptoms are added to the generated ontology. We cannot obtain the probabilities of other diseases since they are difficult to subjectively quantify or their statistical results are unconvincing due to insufficient medical records (e.g., there are only 2 medical records that correspond to gastrointestinal stromal tumors).
A subset of the diseases and their syndromes, along with their conditional probabilities, are summarized in Table 1.
Table 1 Examples of the diseases and their syndromes and conditional probabilities
Figure 1 is a subgraph of the generated ontology. For the disease "gastric ulcer", the solid lines represent the taxonomy of the class relationships, while the dotted lines indicate the relationships between diseases and their relevant symptoms. The numbers on the dotted lines represent the occurrence probabilities of the symptoms and the corresponding diseases. We observe the following:
Disease-symptom mentions are identified via the proposed method. For example, the triplet (acid reflex, symptom-disease, gastric ulcer) indicates that acid reflex is a symptom of a gastric ulcer, which is useful for analyzing possible clinical signs and predicting possible subsequent probabilities of diseases.
The discovery of disease-relevant relationships, including disease-lesion, disease-pathology, disease-susceptible population, disease-part of body, and disease-cure rate, is also helpful for gaining insight into the proposed method.
The included probabilities can contribute to gastroenterology diagnosis for medical applications. The probabilities of knowledge triplets (nausea, symptom-disease, gastric ulcer) and (tummy ache, symptom-disease, gastric ulcer) are 0.20 and 0.25, respectively; hence, if suffering from a gastric ulcer, the occurrence probability of nausea is nearly the same as that of tummy ache.
Ontology class: Gastric ulcer
Diagnostic classification
To evaluate the performance of the knowledge-triplet probability of the proposed method, we conduct experiments on the diagnostic classification task, namely, the classification of a disease or pathology.
As a test set, 1660 medical records were randomly selected and analyzed to identify the presence or absence of cirrhosis. In our pre-experiment, we adopted the 6-fold cross-validation method. The results of each cross-validation experiment were highly similar because the medical record text that we used was homogeneous and of high quality. Therefore, we randomly selected 1660 records as the test set in the current study.
In the medical record, the most important disease from which the patient suffers is listed first and the complications are listed subsequently. This study only focused on the first disease listed in the medical record. Based on the doctors' diagnosed cases, we calculate and compare the classification accuracy of the generated ontology (SDNB ontology) in four scenarios: (a) without the naïve Bayes classifier (SDNB ontology); (b) with the original naïve Bayes classifier (SDNB ontology + NB); (c) with an improved naïve Bayes classifier based on the co-occurrence frequency, which was presented in [38] (SDNB ontology + improved NB); and (d) with a symptom-dependency-aware weighted naïve Bayes classifier realized via odds ratio (OR) value [39] evaluation and AdaBoost optimization (SDNB ontology + SDNB classifier).
For the first scenario, we use the original ontology without the newly added probabilities and apply the path ranking algorithm (PRA) [40] to model the ontology relationships and train the classifier for each relationship. In the ontology, a relationship path can be formed by connected ontology triplets. For example, (disease, alias, disease) and (disease, corresponding symptoms, symptoms) can be connected as a path. Considering the ontology as a directed graph, PRA adopts the relationship path as a feature and represents all the relationship paths in the ontology as feature vectors. Afterwards, the classifiers are trained to identify the relationships between the entity pairs.
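To make the path-feature idea concrete, the following Python sketch enumerates the relation paths (up to two hops) that connect an entity pair in a triplet graph and treats each distinct path as a binary feature. This is only an illustration of the intuition behind PRA, not the algorithm of [40] itself; the toy triplets and the helper name path_features are assumptions.

```python
from collections import defaultdict

def path_features(triplets, head, tail, max_len=2):
    """Enumerate relation paths (up to max_len hops) that link head to tail."""
    out_edges = defaultdict(list)
    for h, r, t in triplets:
        out_edges[h].append((r, t))

    features = set()
    def walk(node, path):
        if len(path) > max_len:
            return
        if node == tail and path:
            features.add(tuple(path))      # the relation path itself is the feature
        for r, nxt in out_edges[node]:
            walk(nxt, path + [r])
    walk(head, [])
    return features

# Toy triplet graph for illustration only.
triplets = [("gastric ulcer", "alias", "peptic ulcer of stomach"),
            ("gastric ulcer", "corresponding symptoms", "nausea")]
print(path_features(triplets, "gastric ulcer", "nausea"))  # {('corresponding symptoms',)}
```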
For the third scenario, we designed an improved naïve Bayes classifier based on syndrome correlations. The correlation between symptoms \(S_{ij1}\) and \(S_{ij2}\) can be calculated via Equation (1), where \(P((S_{ij1},S_{ij2})\,|\,D_f)\) denotes the class conditional probability of \((S_{ij1},S_{ij2})\), and \(P(S_{ij1}\,|\,D_f)\) and \(P(S_{ij2}\,|\,D_f)\) denote the class conditional probabilities of \(S_{ij1}\) and \(S_{ij2}\), respectively. If \(P((S_{ij1},S_{ij2})\,|\,D_f) > P(S_{ij1}\,|\,D_f)\cdot P(S_{ij2}\,|\,D_f)\), then \(S_{ij1}\) and \(S_{ij2}\) are considered positively correlated; otherwise, they are negatively correlated. If \(\mathrm{Corr}_{(S_{ij1},S_{ij2})\,|\,D_f}=1\), symptoms \(S_{ij1}\) and \(S_{ij2}\) are independent. The Bayesian formula, which takes the correlation weight of the symptom vector into account in the posterior probability calculation, is presented as Equation (2):
$$ \mathrm{Corr}_{(S_{ij1},\,S_{ij2})\,|\,D_f} = \frac{P\big((S_{ij1},S_{ij2})\,\big|\,D_f\big)}{P(S_{ij1}\,|\,D_f)\cdot P(S_{ij2}\,|\,D_f)} \qquad (1) $$
$$ P(D_f\,|\,S_i) = \mathrm{Corr}_{S_i\,|\,D_f}\cdot P(D_f)\cdot \frac{\prod_{j=1}^{n} P(S_{ij}\,|\,D_f)}{P(S_i)} \qquad (2) $$
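To illustrate Equations (1) and (2), the Python sketch below computes the pairwise correlation factor from class-conditional probabilities and uses a precomputed correlation weight to adjust the naïve Bayes posterior. The function names and the toy probabilities are illustrative assumptions, not the implementation used in the experiments.

```python
def pair_correlation(p_joint, p_s1, p_s2):
    """Equation (1): Corr_(S1,S2)|D = P((S1,S2)|D) / (P(S1|D) * P(S2|D))."""
    return p_joint / (p_s1 * p_s2)

def correlation_weighted_posterior(corr, prior, cond_probs, p_symptom_vector):
    """Equation (2): Corr_{Si|D} * P(D) * prod_j P(Sij|D) / P(Si)."""
    likelihood = 1.0
    for p in cond_probs:
        likelihood *= p
    return corr * prior * likelihood / p_symptom_vector

# Toy usage: joint conditional probability 0.06 vs 0.20 * 0.25 = 0.05.
print(pair_correlation(0.06, 0.20, 0.25))  # > 1 -> positively correlated
```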
For the experiment, a receiver operating characteristic curve (ROC) is utilized to evaluate the accuracy of the ontology-driven diagnosis classification in which formal measures are used to evaluate the rate of success in distinguishing the correct disease and identifying an appropriate therapeutic regimen. An ROC curve is related to the number of true positives (TP), the number of false positives (FP), the number of true negatives (TN), and the number of false negatives (FN). An ROC space is defined by the false positive rate (1 − specificity = FP ∕ (TN + FP)) and the true positive rate (sensitivity = TP ∕ (TP + FN)) as the x- and y-axes, respectively. Each prediction result produces a (1-specificity, sensitivity) pair and represents a point in the ROC space. Then, we plot the ROC point for each possible threshold value result (the threshold specifies the minimum a posteriori probability for assigning a sample to the positive class), thereby forming a curve. In this study, we use the area under the curve (AUC), whose value is typically between 0 and 1, to measure and compare the classification performances of classifiers. An AUC value of 0.5 corresponds to random predictions. A satisfactory classifier should have an AUC value that substantially exceeds 0.5. The higher the AUC value is, the better is the classification performance.
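For illustration, the ROC points and AUC described above can be obtained from the classifier's posterior probabilities roughly as in the following Python sketch using scikit-learn; the labels and scores shown are placeholders rather than the actual test-set outputs.

```python
from sklearn.metrics import roc_curve, roc_auc_score

# y_true: 1 if the record is a positive case (e.g., cirrhosis), 0 otherwise (placeholders).
y_true  = [1, 0, 1, 1, 0, 0, 1, 0]
# y_score: posterior probability of the positive class returned by the classifier.
y_score = [0.96, 0.04, 0.81, 0.65, 0.30, 0.12, 0.55, 0.40]

fpr, tpr, thresholds = roc_curve(y_true, y_score)  # (1 - specificity, sensitivity) pairs
auc = roc_auc_score(y_true, y_score)               # area under the ROC curve
print(list(zip(fpr, tpr)), auc)
```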
The ROC curves that are presented in Fig. 2 represent the simulation results. Using various threshold values, we aim at determining whether the experimental result can yield an accurate diagnosis based on various ontologies, where 0 denotes no and 1 denotes yes. The calculation of a classifier with the test data returns a probability pair, namely, [P1, P2], that specifies a probability of 0 or 1. The obtained results, such as 0: [3.63E-09, 1.00E+ 00] and 1: [0.962542578, 0.037457422], can be connected by a line and presented as ROC curves.
ROC chart and AUC for classifier evaluations
As shown in Fig. 2, the ROC curve that corresponds to the operation combination of the SDNB ontology and the SDNB classifier shows the highest performance at most tested noise levels, which demonstrates the effectiveness of incorporating OR value evaluation and AdaBoost optimization into the base model. The ontology that was developed with probabilities and enriched by more complete knowledge can accurately represent the relationships between diseases and symptoms and can provide superior data support for decision-making during diagnosis.
Comparing the blue curve with the red curve, the accuracy of the diagnosis has been significantly improved. This is expected since the OR value is particularly suitable for comparing the relative odds of the occurrence of disease outcomes given exposure to the health feature variable and attribute.
All ROC curves that are discussed above are obtained from the experimental results, which are listed in Table 2. The p-values are calculated using the GraphPad Prism 7 software based on the principle of the Z test by comparing the AUC values with 0.5. The null hypothesis, namely, H0, is AUC = 0.5 and the alternative hypothesis, namely, H1, is AUC > 0.5.
Table 2 Experimental results in four scenarios: (a) without the naïve Bayes classifier; (b) with the original naïve Bayes classifier; (c) with an improved naïve Bayes classifier that is based on the co-occurrence frequency; and (d) with the symptom-dependency-aware weighted naïve Bayes classifier
Diagnostic reasoning cases
Three positive sample cases that use a small part of the EMR dataset and their prediction results that are based on our generated ontology are listed in Table 3. The correctly identified diseases were the top scored diseases by each model. Our symptom-dependency-aware naïve Bayes classifier substantially and consistently outperforms the baselines, thereby demonstrating the remarkable applicability and effectiveness of our method.
Table 3 Diagnostic reasoning results in four scenarios: (a) without any naïve Bayes classifier; (b) with the original naïve Bayes classifier; (c) with the improved naïve Bayes classifier that is based on the co-occurrence frequency; and (d) with the symptom-dependency-aware weighted naïve Bayes classifier
[Case 1: Jaundice] The classification results for the four scenarios are all correct. The probability of the disease that is predicted by the symptom-dependency-aware naïve Bayes classifier is higher; hence, by taking into account the correlations among symptoms, the more symptoms the patient has, the more accurate the prediction is.
[Case 2: Pancreatic Cancer] The classification results for the four scenarios are correct. If there is no significant correlation among the selected symptoms, the probabilities of disease that are predicted by the baseline classifiers and the symptom-dependency-aware naïve Bayes classifier are similar.
[Case 3: Liver disease] The improved naïve Bayes classifier correctly classifies the disease, while the other two methods (SDNB ontology and SDNB ontology +NB) do not accurately identify the disease. For example, the predicted score for liver disease that was provided by the SDNB ontology is 0.42; hence, the total score for other possible diseases is 0.58. Scores that are not well differentiated cannot provide useful support for clinical decision-making. It is also observed that the improved naïve Bayes classifiers outperform the original classifiers if there are few symptoms but strong correlations among these symptoms.
A typical research case involving the answering of clinical queries about gastroenterological disease was developed to evaluate the diagnostic reasoning and probability computations based on the ontology (see Fig. 3). The UI is an HTML page built on the Bootstrap framework.
Diagnosis of cirrhosis based on the generated SDNB ontology and the proposed SDNB classifier
As shown in the upper-left part of Fig. 3, after receiving an initial query from a user, our proposed model (SDNB ontology + SDNB classifier) outputs the standard symptom expressions. First, we match the input query in the SDNB ontology via ontology components "class name" and "alias" (represented by the relation "hasExactSynonym" in OWL) via n-gram text matching. Then, the detected symptoms and their synonyms are returned for the users as a reference. Finally, our model (SDNB ontology + SDNB classifier) identifies the standard symptom expressions for conducting diagnostic reasoning. Based on the involved standard symptoms, our model provides a list of relevant symptoms from which the user can select according to the entity relevance within the ontology (see the lower-left part of Fig. 3). With all selected symptoms, our model calculates the probability of illness using the proposed naïve Bayes classifier. The diagnostic results are presented in the upper-right part with a description of the possible disease. In addition, the symptoms' conditional probabilities are presented as details in the bottom-right part and serve as references for the patient.
This manuscript combined research on knowledge discovery and probability discovery from EMRs with ontology completion in the medical field. This study explored a symptom-dependency-aware naïve Bayes classifier, which involves the automatic determination of probabilities between diseases and syndromes to facilitate ontology applications in probabilistic diagnosis inference.
Technically, we present a reproducible approach for learning probability information that involves diseases and symptoms from an EMR. The proposed operation depends on various methods that are based on EMRs, as described in this manuscript. In contrast to our previous approach that evaluated the attribute correlation based on the attribute co-occurrence frequency, we explore the acquisition of disease-symptom factors from EMR texts using an OR value that is especially suitable for medical applications. In our study, the OR value measures the association that compares the likelihood of disease of exposed patients to the likelihood of disease of unexposed patients. Compared with the existing ontologies, we built a more domain-specific and complete ontology for gastrointestinal diseases. The experimental results demonstrate that the direct and automated construction of a high-quality health ontology from medical records is feasible.
Practically, the proposed approach provides possible references for clinicians and ontologists. The proposed approaches can offer a quick overview of disease-relevant factors and their probability distribution to users. The learned probabilities render the ontology more interpretable.
Several limitations are encountered in this study. The disease/symptom modeling is conducted based on EMR records; thus, it is critical to have a large volume of high-quality EMR records. However, the records could easily be biased. In addition, this study focused only on the first disease that is listed in the medical record and ignored the other diseases and complications. Although this method accords with clinical logic and effectively reduces noise during the reasoning process, it will reduce the amount of useful information.
Accordingly, one of the more promising avenues for future research is the incorporation of other data-mining techniques, such as heuristic learning and clustering, for attribute distillation [41]. Meanwhile, we will study the entire diagnosis results in terms of the data integrity and distribution. A distribution plot of the numbers of identified/associated diseases per EMR record will be explored to identify important information.
In this paper, we present a medical knowledge probability discovery method that is based on the analysis and extraction of EMR text data for enriching medical ontologies with probability information. The experimental results demonstrate that the proposed method can effectively identify accurate medical knowledge probability information from EMR data. In addition, we evaluate the performance of the proposed method under various scenarios, including diagnosis classification and diagnosis reasoning.
Although we have presented an application of the ontology-based Bayesian approach in gastrointestinal diseases, the search algorithm is not limited to gastrointestinal diseases. Our ontology-based Bayesian approach is amenable to a wide range of extensions that may be useful in scenarios in which the features are interrelated.
In this section, we introduce an improved naïve Bayes classifier for triplet probability computation for conducting a medical knowledge probability discovery task and enrich the ontology with knowledge-triplet probability information.
Ontology construction with EMRs
We obtain 100,198 EMRs, collected from February 2015 to July 2016, from a partner clinic located in a municipality of China. Among these EMRs, 31,120 concern gastrointestinal diseases, and they are adopted as the training and testing sets in this study. In the medical records, according to the patient's symptoms, the number of diseases diagnosed by the doctor ranges from 1 to 7, and the corresponding medical records account for 64.30, 23.03, 10.21, 1.88, 0.47, 0.1 and 0.01% of the total, respectively (see Fig. 4). It should be noted that we only count the primary disease listed in the medical record. For example, the EMR with ID 00292987 describes an 80-year-old male who suffers from chronic gastritis and left ureteral calculi. Since he was in the Department of Gastroenterology, the doctor focused on his primary disease, chronic gastritis, and listed his known long-term condition (left ureteral calculi) under other diseases.
Distribution of the number of diseases diagnosed by doctors in all involved medical record data
As the EMRs are provided as images and PDF files, we transform them into text using an optical character recognition (OCR) tool. At present, the accuracy of OCR tools varies from 90 to 99% depending on the content to be recognized. We randomly sample 20 transformed EMRs to find error characters that are frequently produced by the OCR tool. Then, based on these OCR error patterns and the EMR organization formats, we design a set of regular expressions to extract the patient fields as needed. More specifically, the EMRs from our partner clinic can be categorized into three organization formats and have similar segmentation indicators, including "sex", "age", "symptom", "diagnosis", "admissions records", "discharge records" and "medical history", which facilitates the design of the regular expressions.
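The sketch below gives a rough Python illustration of this kind of regular-expression field extraction; the Chinese field labels and the record layout are assumptions made for the example, since the clinic's actual EMR templates are not reproduced here.

```python
import re

# Assumed segmentation indicators; the real EMR templates may differ.
FIELD_PATTERNS = {
    "sex":       re.compile(r"性别[::]\s*(男|女)"),
    "age":       re.compile(r"年龄[::]\s*(\d{1,3})\s*岁?"),
    "symptom":   re.compile(r"症状[::]\s*(.+?)(?=诊断|$)", re.S),
    "diagnosis": re.compile(r"诊断[::]\s*(.+?)(?=入院记录|$)", re.S),
}

def extract_fields(emr_text: str) -> dict:
    """Extract the needed patient fields from one OCR-transformed EMR."""
    fields = {}
    for name, pattern in FIELD_PATTERNS.items():
        match = pattern.search(emr_text)
        fields[name] = match.group(1).strip() if match else None
    return fields
```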
For the proofreading of the medical record data, errors that occur frequently in the same situation (e.g., when identifying information in a table, the presence of table lines may produce meaningless symbols) are statistically adjusted and removed. To further ensure the accuracy of text recognition, we invited three medical students to proofread all the extracted texts. According to our statistics, word recognition errors that require correction exist in less than 2% of the medical records. Common mistakes include the Chinese character "脉" being misrecognized as "Sz1" for unknown reasons, and the character "日" being misrecognized as "曰".
As this analysis focuses on gastrointestinal diseases, we identify the medical records that pertain to them. Based on the diagnosis results presented in the EMRs, we filter out the records for which the primary diagnosis is not a gastrointestinal disease. After these preprocessing steps, we retain 31,720 EMRs, which correspond to distinct patients according to the serial numbers of the outpatient clinic and hospital.
The input to this task is a set of EMRs, an example of which is presented in Table 4.
Table 4 Example of Chinese EMR data that has been translated into English
The EMR texts are in Chinese and require word segmentation to divide the text into Chinese component words. In this paper, we use a Chinese word segmentation tool, namely, jieba, to generate the tokenized causal-mention sentences.
We use the International Classification of Diseases (ICD-10) in the Chinese language and the largest medical e-dictionary for word matching. The e-dictionary contains 12 million terms in Chinese, which cover vocabulary in various clinical departments, basic medicine, molecular biology, medicines, instruments and traditional Chinese medicine. Selecting these two medical dictionaries as the target, we perform n-gram entity name matching to extract medical entities from raw texts. Typically, an n-gram is a contiguous sequence of n items from a specified sample of text.
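A minimal Python sketch of the n-gram matching step is shown below; the toy dictionary and the maximum n-gram length are illustrative assumptions.

```python
def token_ngrams(tokens, max_n=6):
    """Enumerate contiguous token n-grams (n = 1..max_n) of a segmented sentence."""
    for n in range(1, max_n + 1):
        for i in range(len(tokens) - n + 1):
            yield "".join(tokens[i:i + n])

def match_entities(tokens, dictionary):
    """Return the n-grams that appear in the medical dictionary (ICD-10 / e-dictionary)."""
    return {gram for gram in token_ngrams(tokens) if gram in dictionary}

# Example with jieba-segmented text (toy dictionary for illustration only).
dictionary = {"胃溃疡", "反酸", "恶心"}          # gastric ulcer, acid reflux, nausea
tokens = ["患者", "反酸", "、", "恶心", "一周"]  # output of jieba.lcut(...)
print(match_entities(tokens, dictionary))        # {'反酸', '恶心'}
```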
The disease-symptom mentions are extensive in EMR data. The patient usually describes his/her symptoms and medical history with explicit temporal and causal indicators (e.g., "before", "after", and "since"), while the doctor usually provides diagnosis and therapy suggestions in response to questions, in which the doctor refers to symptoms and diseases, along with their relationships. The mentions of lesions, pathologies, and susceptible populations, among others, are also extracted. Then, we match entity pairs in the same text to possible knowledge triplets using an alias table. Via this approach, we extract the knowledge triplets from the raw medical data.
Afterwards, we add the entity tag in the EMR data to each matched entity and the triplet is transformed into an entity pair: (entity1; tag1) → (entity2; tag2) (e.g., (catch-a-cold; symptom) → (fever; disease)). The same entity may have multiple tags (e.g., a disease can become a symptom under various clinical conditions) and play multiple roles in the ontology. Finally, such triplets are composed as an ontology by combining the aliases (see Fig. 5).
Subgraph of the generated ontology
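As a rough illustration of how tagged entity pairs are assembled into knowledge triplets before being composed into the ontology, consider the following Python sketch; the tag names, the relation label and the alias table are simplified assumptions.

```python
from collections import namedtuple

Triplet = namedtuple("Triplet", ["head", "relation", "tail"])

def build_triplets(entities, alias_table):
    """entities: list of (mention, tag) pairs found in the same EMR text.
    alias_table maps a surface mention to its canonical entity name."""
    canonical = [(alias_table.get(m, m), tag) for m, tag in entities]
    triplets = []
    for head, head_tag in canonical:
        for tail, tail_tag in canonical:
            if head_tag == "symptom" and tail_tag == "disease":
                triplets.append(Triplet(head, "symptom-disease", tail))
    return triplets

entities = [("acid reflex", "symptom"), ("tummy ache", "symptom"), ("gastric ulcer", "disease")]
print(build_triplets(entities, {"acid reflex": "acid reflux"}))
```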
Via entity name matching, the knowledge of gastrointestinal system diseases in the disease ontology is adopted to enrich the generated ontology. Consider the disease "allergic bronchopulmonary aspergillosis" as an example. We can obtain its superclass (aspergillosis), disease ID (DOID:13166) and other cross-reference information (e.g., OMIM:103920, MESH:D001229, and ICD9CM:518.6).
However, the generated SDNB ontology is not sufficiently accurate for use because there is no information that explicitly specifies the probability of the co-occurrence of a disease and a symptom. In the remainder of this section, we introduce an improved naïve Bayes classifier for conducting probability discovery.
Symptom-dependency-aware Naïve Bayes classifier
We propose a symptom-dependency-aware naïve Bayes classifier that is based on the assumption that symptoms have a level of dependency among them. The proposed naïve Bayes classifier calculates the probability that a patient is suffering from a specified disease and outputs the relevant symptoms of that disease. Afterwards, via innovative approaches, we incorporate the value of the probability of a disease into the ontology.
Figure 6 shows a flow diagram for calculating the disease probability using the symptom-dependency-aware naïve Bayes classifier. The calculation process includes ontology queries and naïve Bayes classification. During the gastroenterology diagnosis, the proposed method reads the proposed ontology using Java code to query the following information in the ontology: a disease and its relevant symptoms, the probability of a disease before we observe any symptoms, and the conditional probability of a symptom given a disease. All this information is considered as the basis for classification.
Flow diagram of disease probability calculation using the improved naïve Bayes classifier based on attribute relevance
Then, the naïve Bayes classification steps determine the probabilities that various diseases will occur when symptom Si occurs. Finally, the classifier outputs a set of diseases that have high probabilities and other symptoms that are associated with these diseases. Our model allows the user to select additional relevant symptoms as a supplement to the initial query. The classifier will continue to operate until the user completes symptom selection, at which point the diagnosis results will be complete.
Naïve Bayes
Formally, we consider k disease categories, namely, {D1, D2, D3 … Dk}, and m diagnostic samples, namely, {S1, S2, S3, …Sm}, where each sample contains n symptom attributes, which are denoted as Si = {Si1, Si2, Si3, …Sin}.
Equation (3) expresses the naïve Bayes computation, where P(Df) denotes the probability of disease Df before we observe any symptoms. We obtain P(Df) based on statistical results or expert experiences. Given a symptom Si, P(Df| Si) is the posterior probability of Df.
The conditional probability of Si equals P(Si| Df) if Df holds. Here, \( \frac{P\left({S}_i|{D}_f\right)}{P\left({S}_i\right)} \) can be treated as an adjustment factor for the disease probability P(Df). If the adjustment factor is > 1, P(Df) will be augmented; hence, the probability of occurrence of disease Df is higher; if the adjustment factor is < 1, P(Df) will be weakened; hence, the probability of occurrence of Df is lower. If the value of the adjustment factor = 1, the probability of occurrence of disease Df is unaffected.
$$ P(D_f\,|\,S_i) = \frac{P(D_f)\cdot P(S_i\,|\,D_f)}{P(S_i)} \qquad (3) $$
According to the assumption of attribute independence, which underlies naïve Bayes, the Bayesian multiplicative equation can be simplified to Equation (4):
$$ P(D_f\,|\,S_i) = \frac{P(D_f)\cdot \prod_{j=1}^{n} P(S_{ij}\,|\,D_f)}{P(S_i)} \qquad (4) $$
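A minimal Python sketch of the naïve Bayes computation in Equations (3) and (4) is given below. Because P(Si) is the same for every candidate disease, the sketch compares unnormalized scores; the priors are toy values, while the conditional probabilities reuse the example values reported earlier for gastric ulcer and cirrhosis.

```python
def naive_bayes_scores(symptoms, priors, cond_prob):
    """priors[d] = P(D_f); cond_prob[d][s] = P(S_ij | D_f).
    Returns unnormalized posterior scores (the common factor P(S_i) is dropped)."""
    scores = {}
    for disease, prior in priors.items():
        score = prior
        for s in symptoms:
            # 1e-6 is a small floor for symptoms unseen with this disease (an added assumption).
            score *= cond_prob[disease].get(s, 1e-6)
        scores[disease] = score
    return scores

priors = {"gastric ulcer": 0.05, "cirrhosis": 0.03}                     # toy prior values
cond_prob = {"gastric ulcer": {"nausea": 0.20, "tummy ache": 0.25},
             "cirrhosis":     {"nausea": 0.10, "poor appetite": 0.20}}
print(naive_bayes_scores(["nausea", "tummy ache"], priors, cond_prob))
```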
A symptom-dependency-aware naïve Bayes classifier is designed based on attribute relevance. The proposed classifier evaluates the correlation between symptoms in terms of the dependency degree of the symptom vector. The conditional probability of a symptom vector is evaluated as the product of the conditional probabilities of the individual symptoms and the dependency degree of the symptom vector. Given the symptom vector, the prior probability of a disease, namely, P(Df), is then adjusted to estimate its posterior probability.
Correlations between symptoms
As expressed in Equation (5), the OR value between any two nodes is evaluated based on the co-occurrence frequency among symptoms in the EMRs. Using 30,060 EMRs as the training set, a threshold of at least 5 co-occurrences between symptom pairs was selected as a denoising measure; a co-occurrence is counted when a symptom pair appears together in an EMR record. We experimented with several co-occurrence thresholds (0, 2, 5 and 10) and selected the smallest value that performed well in the automatic evaluation. According to the pre-experiment, the number of EMRs has little impact on the threshold setting.
The OR value can be used to estimate the mutual information strength between symptom Si and disease Df. If the OR between symptom Si and disease Df exceeds 1, then having symptom Si is considered to be a risk factor for disease Df. If the OR value is less than 1, symptom Si is not highly relevant to disease Df:
$$ \mathrm{OR}(S_i, D_f) = \frac{P(S_i=1\,|\,D_f=1)\cdot P(S_i=0\,|\,D_f=0)}{P(S_i=0\,|\,D_f=1)\cdot P(S_i=1\,|\,D_f=0)} \qquad (5) $$
To estimate the mutual information between symptoms, namely, to quantify how strongly the presence or absence of symptom Si is associated with the presence or absence of symptom Sj, we simultaneously calculate OR(Si, Sj) as:
$$ \mathrm{OR}\left({S}_i,{S}_j\right)=\frac{P\left({S}_i=1|{S}_j=1\right)\ast P\left({S}_i=0|{S}_j=0\right)}{P\left({S}_i=0|{S}_j=1\right)\ast P\left({S}_i=1|{S}_j=0\right)} $$
Based on the obtained OR values, the correlation between symptoms Si and Sj given disease Df is:
$$ {\mathrm{Corr}}_{\left({S}_i,{S}_j\right)\left|{D}_f\right.}=\frac{\mathrm{OR}\left({S}_i,{S}_j\right)}{\mathrm{OR}\left({S}_i,{D}_f\right)\bullet \mathrm{OR}\left({S}_j,{D}_f\right)},\left(j\ne i\right) $$
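The sketch below shows how the odds ratios of Equations (5)–(6) and the correlation of Equation (7) could be computed from binary symptom and disease indicators in the EMRs. The record representation and the small-count correction `eps` are assumptions made for illustration, not details given in the paper.

```python
def odds_ratio(x, y, records, eps=0.5):
    """OR between two binary indicators over EMR records (Equations 5-6).

    records: list of dicts mapping an indicator name (a symptom or a
    disease) to 0/1. eps is a small-count correction added to every cell
    to avoid division by zero for sparse pairs (an assumption).
    """
    n11 = n10 = n01 = n00 = 0
    for r in records:
        xi, yi = r.get(x, 0), r.get(y, 0)
        if xi and yi:
            n11 += 1
        elif xi and not yi:
            n10 += 1
        elif not xi and yi:
            n01 += 1
        else:
            n00 += 1
    # The ratio of conditional probabilities reduces to the cross-product
    # ratio of the 2x2 co-occurrence counts.
    return ((n11 + eps) * (n00 + eps)) / ((n10 + eps) * (n01 + eps))

def pair_correlation(si, sj, disease, records):
    """Corr_(Si,Sj)|Df from Equation (7)."""
    return odds_ratio(si, sj, records) / (
        odds_ratio(si, disease, records) * odds_ratio(sj, disease, records))
```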
The symptom-dependency-aware naïve Bayes classifier based on attribute relevance
The improved formula, which evaluates the posterior probability by taking into account the dependency degree of the symptom vector, is presented as Equation (8):
$$ \mathrm{P}\left({D}_f|{S}_i\right)=\frac{{\mathrm{Corr}}_{S_i\left|{D}_f\right.}\bullet P\left({D}_f\right)\bullet \prod \limits_{j=1}^n\mathrm{P}\left({S}_{ij}|{D}_f\right)}{P\left({S}_i\right)} $$
where \( {\mathrm{Corr}}_{S_i\left|{D}_f\right.} \) denotes the dependency degree of symptom vector Si, which can be calculated via Equation (9). There are n symptoms and \( {C}_n^2 \) denotes the number of pairwise symptom combinations:
$$ {\mathrm{Corr}}_{S_i\left|{D}_f\right.}=\sqrt[{C}_n^2]{\prod_{i,j=1}^n{\mathrm{Corr}}_{\left({S}_i,{S}_j\right)\left|{D}_f\right.}}\ \left(j<i\right) $$
The main strategy is to approximate the dependency degree of a symptom vector by the product of the correlations of its symptom pairs, since the dependency degree of the symptom vector is proportional to the correlations between the pairs of symptoms.
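A minimal sketch of Equation (9) is given below: the dependency degree is the C(n,2)-th root (i.e., the geometric mean) of the pairwise correlations. The `pair_corr` lookup structure, built from Equation (7), is hypothetical.

```python
from itertools import combinations

def vector_dependency(symptoms, disease, pair_corr):
    """Corr_{S_i|D_f} from Equation (9): the C(n,2)-th root of the
    product of the pairwise correlations Corr_(Sa,Sb)|Df.

    pair_corr[(sa, sb)][disease] is a hypothetical lookup built from
    Equation (7); pairs missing from it default to 1 (no adjustment).
    """
    pairs = list(combinations(sorted(symptoms), 2))
    if not pairs:          # a single symptom defines no pairs
        return 1.0
    product = 1.0
    for sa, sb in pairs:
        product *= pair_corr.get((sa, sb), {}).get(disease, 1.0)
    return product ** (1.0 / len(pairs))
```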
Optimization of the Symptom-dependency-aware Naïve Bayes classifier
Adaptive boosting (AdaBoost) [42] is used to optimize the proposed naïve Bayes classifier. AdaBoost randomly selects symptom vectors from the training database and trains the proposed classifier on the selected subset, while the remaining data are used as test data. Vectors that are misclassified form the training subset for the next round; hence, the proposed classifier learns the misclassified symptom vectors in the next round.
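The retraining loop described above could be sketched as follows. This is a simplified, hypothetical rendition of the resampling strategy as described in the text — not the canonical AdaBoost weight-update scheme and not the authors' code. `train_fn` and `predict_fn` stand for fitting and applying the symptom-dependency-aware classifier.

```python
import random

def boosted_training(samples, train_fn, predict_fn, rounds=10, subset_ratio=0.7):
    """Retrain the classifier on misclassified symptom vectors each round.

    samples: list of (symptom_set, disease) pairs.
    train_fn(subset) fits the symptom-dependency-aware classifier and
    returns a model; predict_fn(model, symptoms) returns a disease label.
    """
    subset = random.sample(samples, int(len(samples) * subset_ratio))
    model = None
    for _ in range(rounds):
        model = train_fn(subset)
        held_out = [s for s in samples if s not in subset]
        misclassified = [(sym, d) for sym, d in held_out
                         if predict_fn(model, sym) != d]
        if not misclassified:      # nothing left to learn from
            break
        # One reading of the described strategy: the misclassified vectors
        # join the training subset so the next round focuses on them.
        subset = subset + misclassified
    return model
```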
The number of symptoms in a symptom vector is used to smooth the correlation product when calculating the correlation coefficient (via the C_n^2-th root in Equation (9)). The training process is described as follows:
[Step 1] Sample Statistics.
We count the number of samples Count_Df for disease Df, the number of samples Count_Sij|Df in which symptom Sij is associated with disease Df, and the number of samples Count_(Si,Sj)|Df in which symptom pair (Si,Sj) occurs with disease Df.
[Step 2] Disease and Symptom Probability Evaluation.
Using the results from the sample statistics, the probability of a disease, namely, P(Df), and the conditional probability of a symptom, namely, P(Sij| Df), can be calculated via Equation (10) and Equation (11), respectively:
$$ P\left({D}_f\right)=\left({\mathrm{Count}}_{{\mathrm{D}}_{\mathrm{f}}}+1\right)/\left(\mathrm{m}+\mathrm{k}\right) $$
$$ P\left({S}_{ij}|{D}_f\right)=\left({Count}_{S_{ij}\left|{D}_f\right.}+1\right)/\left({Count}_{D_f}+k\right) $$
where m is the number of samples in the training set S and k is the number of diseases. The Laplace correction (the "+ 1" in the numerator and the "+ k" in the denominator) is applied to avoid zero probability estimates for disease–symptom combinations that do not appear in the training set.
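The two estimates could be computed from the training counts as sketched below. The (symptom set, disease) sample representation is an assumption made for illustration; the Laplace terms follow Equations (10) and (11).

```python
from collections import Counter, defaultdict

def estimate_probabilities(samples, k):
    """Laplace-corrected estimates for Equations (10) and (11).

    samples: list of (symptom_set, disease) pairs from the training set;
    k: number of disease categories. Returns the priors P(D_f) and the
    conditional probabilities P(S_ij | D_f).
    """
    m = len(samples)
    disease_count = Counter(d for _, d in samples)        # Count_Df
    symptom_count = defaultdict(Counter)                  # Count_Sij|Df
    for symptoms, d in samples:
        symptom_count[d].update(symptoms)

    prior = {d: (c + 1) / (m + k) for d, c in disease_count.items()}
    cond_prob = {
        d: {s: (c + 1) / (disease_count[d] + k)
            for s, c in symptom_count[d].items()}
        for d in disease_count
    }
    return prior, cond_prob
```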
[Step 3] Pairwise Symptom Conditional Probability and Symptom Correlation Matrix.
We estimate the conditional probability P((Si,Sj)|Df) of symptom pair (Si,Sj). The correlation of each symptom pair is evaluated via Equation (7) to produce a matrix of symptom correlations.
In the classification process, given a symptom vector, we calculate the posterior probability of each disease and select the disease that has the maximum posterior probability.
[Step 1] Vector Correlation.
Given a test sample Si = {Si1, Si2, Si3, …Sin}, the dependency degree \( {Corr}_{S_i\left|{D}_f\right.} \) of symptom vector Si is calculated via Equation (9) with the symptom correlation matrix.
[Step 2] Symptom Posterior Probability and Diagnosis Classification.
We calculate the disease posterior probability P(Df|Si) via Equation (8) and select the diseases with the highest posterior probability values as the diagnosis classification results.
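Putting the pieces together, a ranking step based on Equation (8) could look like the sketch below, which reuses the helpers sketched above. P(Si) is again omitted because it does not affect the ranking, and the fallback probability for unseen symptom–disease pairs is an assumption.

```python
def classify(symptoms, prior, cond_prob, pair_corr, top_n=3):
    """Rank diseases by the dependency-aware posterior of Equation (8).

    Reuses vector_dependency() and the Laplace estimates from the
    training sketches above.
    """
    scores = {}
    for d, p_d in prior.items():
        likelihood = 1.0
        for s in symptoms:
            likelihood *= cond_prob.get(d, {}).get(s, 1e-6)
        scores[d] = vector_dependency(symptoms, d, pair_corr) * p_d * likelihood
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:top_n]
```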
Enriching the ontology with probabilities
After obtaining the disease- and symptom-related probabilities via the symptom-dependency-aware naïve Bayes calculation, these probability values are added to the ontology.
A MySQL database is used to store the disease probabilities and symptom conditional probabilities that were evaluated via the original naïve Bayes classifier or the improved naïve Bayes classifier. The data conversion between this MySQL database and the ontology in Web Ontology Language (OWL) is performed with the Owlready package [43]. The probability values of a disease are added as a DataProperty of the ontology rather than as an AnnotationProperty. Thus, the ontology metrics can be calculated by Protégé and read by Owlready, rdflib or any other ontology development tool [44]. Via this approach, the symptom-dependency-aware naïve Bayes classifier can perform the disease probability calculation.
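A minimal sketch of this enrichment step using the current owlready2 package is shown below. The ontology path, the property name and the disease entity are hypothetical placeholders; the actual IASO ontology, class hierarchy and property names may differ.

```python
from owlready2 import get_ontology, Thing, DataProperty

# Placeholder path and names -- a sketch of the idea, not the authors' code.
onto = get_ontology("file:///path/to/IASO.owl").load()

with onto:
    class Disease(Thing):                 # assumed base class for diseases
        pass

    class has_prior_probability(DataProperty):
        range = [float]

    # For illustration, attach a prior computed by the classifier to a
    # disease individual; in IASO the values read from the MySQL tables
    # would be written to the existing disease entries instead.
    pneumonia = Disease("pneumonia_example")
    pneumonia.has_prior_probability = [0.032]

onto.save(file="IASO_with_probabilities.owl")
```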
Source code for the symptom-dependency-aware naïve Bayes probability computation and the ontology is available at: https://github.com/shenyingpku/IASO
https://github.com/fxsjy/jieba
http://dic.medlive.cn
https://www.ebi.ac.uk/ols/ontologies/doid/terms?iri=http%3A%2F%2Fpurl.obolibrary.org%2Fobo%2FDOID_77
AUC: Area under the receiver operating characteristic curve
DO: Disease Ontology
EMRs: Electronic medical records
FN: Number of false negatives
FP: Number of false positives
GIDEON: Global Infectious Disease and Epidemiology Network
OWL: Web Ontology Language
PRA: Path ranking algorithm
ROC: Receiver operating characteristic curve
SDNB: The name of the proposed symptom-dependency-aware naïve Bayes classifier and the generated ontology
TN: Number of true negatives
TP: Number of true positives
Robinson P, Bauer S. Introduction to bio-ontologies. Florida: CRC Press; 2011.
Bisson LJ, Komm JT, Bernas GA, et al. Accuracy of a computer-based diagnostic program for ambulatory patients with knee pain. Am J Sports Med. 2014;42(10):2371–6.
Power D, Sharda R, Burstein F. Decision support systems. New Jersey: John Wiley & Sons; 2015.
Zhu J, Fung GPC, Lei Z, Yang M, Shen Y. An in-depth study of similarity predicate committee. Inf Process Manag. 2019;56(3):381–93.
Gruber T. A translation approach to portable ontology specifications. Knowl Acquis. 1993;5(2):199–220.
Seidenberg J, Rector A. Web ontology segmentation: analysis, classification and use, 15th international conference on World Wide Web; 2006 May 22–26. Edinburgh: ACM; 2006. p. 13–22.
Jensen PB, Jensen LJ, Brunak S. Mining electronic health records: towards better research applications and clinical care. Nat Rev Genet. 2012;13(6):395.
Wright A, Pang J, Feblowitz JC, et al. A method and knowledge base for automated inference of patient problems from structured data in an electronic medical record. J Am Med Inform Assoc. 2011;18(6):859–67.
Garvin JH, DuVall SL, South BR, et al. Automated extraction of ejection fraction for quality measurement using regular expressions in unstructured information management architecture (UIMA) for heart failure. J Am Med Inform Assoc. 2012;19(5):859–66.
Patrick JD, Nguyen DHM, Wang Y, et al. A knowledge discovery and reuse pipeline for information extraction in clinical notes. J Am Med Inform Assoc. 2011;18(5):574–9.
Yin X, Tan W. Semi-supervised truth discovery. In: Proceedings of the 20th international conference on world wide web. ACM; 2011. p. 217–26.
Hripcsak G, Albers DJ. Next-generation phenotyping of electronic health records. J Am Med Inform Assoc. 2012;20(1):117–21.
Li C, Rana S, Phung D, et al. Hierarchical Bayesian nonparametric models for knowledge discovery from electronic medical records. Knowl-Based Syst. 2016;99:168–82.
Tourille J, Ferret O, Neveol A, et al. Neural architecture for temporal relation extraction: a bi-LSTM approach for detecting narrative containers. In: Proceedings of the 55th annual meeting of the Association for Computational Linguistics (Volume 2: Short Papers), vol. 2; 2017. p. 224–30.
Jagannatha AN, Yu H. Bidirectional RNN for medical event detection in electronic health records. Proc Conf. 2016;2016:473.
Ware H, Mullett CJ, Jagannathan V, et al. Machine learning-based coreference resolution of concepts in clinical documents. J Am Med Inform Assoc. 2012;19(5):883–7.
Garla VN, Brandt C. Knowledge-based biomedical word sense disambiguation: an evaluation and application to clinical document classification. J Am Med Inform Assoc. 2012;20(5):882–6.
Sohn S, Wagholikar KB, Li D, et al. Comprehensive temporal information detection from clinical text: medical events, time, and TLINK identification. J Am Med Inform Assoc. 2013;20(5):836–42.
Albright D, Lanfranchi A, Fredriksen A, et al. Towards comprehensive syntactic and semantic annotations of the clinical narrative. J Am Med Inform Assoc. 2013;20(5):922–30.
Chang YH, Huang HY. An automatic document classifier system based on naive bayes classifier and ontology. Machine learning and cybernetics, 2008 international conference on. IEEE. 2008;6:3144–9.
Kim H, Chen SS. Associative naive bayes classifier: automated linking of gene ontology to medline documents. Pattern Recogn. 2009;42(9):1777–85.
Choi N, Song IY, Han H. A survey on ontology mapping. ACM SIGMOD Rec. 2006;35(3):34–41.
Kontopoulos E, Berberidis C, Dergiades T, et al. Ontology-based sentiment analysis of twitter posts. Expert Syst Appl. 2013;40(10):4065–74.
Michalski RS, Carbonell JG, Mitchell TM. Machine learning: an artificial intelligence approach. Springer Science & Business Media; 2013.
Yu V, Edberg S. Global infectious diseases and epidemiology network (GIDEON): a world wide web-based program for diagnosis and informatics in infectious diseases. Clin Infect Dis. 2005;40(1):123–6.
Benndorf M, Kotter E, Langer M, Herda C, Wu Y, Burnside E. Development of an online, publicly accessible naive Bayesian decision support tool for mammographic mass lesions based on the American College of Radiology (ACR) BI-RADS lexicon. Eur Radiol. 2015;25(6):1768–75.
Kazmierska J, Malicki J. Application of the Naïve Bayesian classifier to optimize treatment decisions. Radiother Oncol. 2008;86(2):211–6.
Parthiban G, Rajesh A, Srivatsa SK. Diagnosis of heart disease for diabetic patients using naive bayes method[J]. Int J Comput Appl. 2011;24(3):7–11.
Jiang L, Cai Z, Wang D, Zhang H. Improving tree augmented naive Bayes for class probability estimation. Knowl-Based Syst. 2012;26:239–45.
Wu J, Cai Z, Pan S, Zhu X, Zhang C. Attribute weighting: how and when does it work for Bayesian network classification, 2014 international joint conference on neural networks (IJCNN); 2014 July 06–11; Beijing (China). New York: IEEE; 2014:4076–83.
Schriml LM, Arze C, Nadendla S, et al. Disease ontology: a backbone for disease semantic integration. Nucleic Acids Res. 2011;40(D1):D940–6.
Moon C, Jones P, Samatova NF. Learning entity type Embeddings for knowledge graph completion, Proceedings of the 2017 ACM on conference on information and knowledge management; 2017 November 06–10. Singapore: ACM; 2017:2215–8.
Jiang J, Li X, Zhao C, et al. Learning and inference in knowledge-based probabilistic model for medical diagnosis. Knowl-Based Syst. 2017;138:58–68.
Hoffart J, Suchanek FM, Berberich K, et al. YAGO2: exploring and querying world knowledge in time, space, context, and many languages, Proceedings of the 20th international conference companion on world wide web: ACM; 2011. p. 229–32.
Chekol MW, Pirrò G, Schoenfisch J, et al. Marrying uncertainty and time in knowledge graphs. AAAI. 2017:88–94.
Hidalgo CA, Blumm N, Barabási AL, et al. A dynamic network approach for the study of human phenotypes[J]. PLoS Comput Biol. 2009;5(4):e1000353.
Zhou XZ, Menche J, Barabási AL, et al. Human symptoms–disease network[J]. Nat Commun. 2014;5:4212.
Cronin RM, Fabbri D, Denny JC, Jackson G. Automated classification of consumer health information needs in patient portal messages. In: AMIA annual symposium proceedings: American Medical Informatics Association; 2015. p. 1861.
Glas AS, Lijmer JG, Prins MH, Bonsel GJ, Bossuyt PM. The diagnostic odds ratio: a single indicator of test performance. J Clin Epidemiol. 2003;56(11):1129–35.
Lao N, Cohen WW. Relational retrieval using a combination of path-constrained random walks. Mach Learn. 2010;81(1):53–67.
Johnston M, Langton K, Haynes R. Effects of computer-based clinical decision support systems on clinician performance and patient outcome: a critical appraisal of research. Ann Intern Med. 1994;120(2):135–42.
Korada NK, Kumar NSP, Deekshitulu YVNH. Implementation of naïve Bayesian classifier and ada-boost algorithm using maize expert system. International Journal of Information Sciences and Techniques. 2012;2(3):63–75.
Lamy JB. Owlready: ontology-oriented programming in Python with automatic classification and high level constructs for biomedical ontologies. Artif Intell Med. 2017;80:11–28.
Shen Y, Wen D, Li Y, Du N, Zheng HT, Yang M. Path-based attribute-aware representation learning for relation prediction. In: Proceedings of the 2019 SIAM international conference on data mining: Society for Industrial and Applied Mathematics; 2019. p. 639–47.
Ying Shen is now an Assistant Research Professor in the School of Electronics and Computer Engineering (SECE) at Peking University. She received her Ph.D. degree from the University of Paris Ouest Nanterre La Défense (France), specializing in Medical & Biomedical Information Science. She received her Erasmus Mundus Master's degree in Natural Language Processing from the University of Franche-Comté (France) and the University of Wolverhampton (England). Her research interests are mainly in the areas of Medical Informatics, Natural Language Processing and Machine Learning.
Yaliang Li received his Ph.D. degree in Computer Science from University at Buffalo, USA, in 2017. He is broadly interested in machine learning, data mining and information analysis. In particular, he is interested in analyzing information from multiple heterogeneous sources, including but not limited to information integration, knowledge graph, anomaly detection, data stream mining, trustworthiness analysis and transfer learning.
Haitao Zheng is now an Associate Professor in the School of Information Science and Technology at Tsinghua University. He received his Ph.D. degree from Seoul National University (Korea), specializing in Medical Informatics. He received his Master's and Bachelor's degrees in Computer Science from Sun Yat-Sen University (China). His research fields include Web Science, Semantic Web, Information Retrieval, Machine Learning, Medical Informatics, and Artificial Intelligence.
Buzhou Tang is now an Associate Professor in the School of Computer Science and Technology at Harbin Institute of Technology. He received his Ph.D. and Master's degrees from the Harbin Institute of Technology (China), specializing in Natural Language Processing. He received his Bachelor's degree in Computer Science from Jilin University (China). His research fields include Artificial Intelligence, Machine Learning, Data Mining, Natural Language Processing and Biomedical Informatics.
Min Yang is currently an Assistant Professor with the Shenzhen Institutes of Advanced Technology, Chinese Academy of Science. She received her Ph.D. degree from the University of Hong Kong in February 2017. Prior to that, she received her B.S. degree from Sichuan University in 2012. Her current research interests include machine learning, deep learning and natural language processing.
School of Electronics and Computer Engineering, Peking University Shenzhen Graduate School, Shenzhen, 518055, People's Republic of China
Ying Shen
Alibaba Group, Bellevue, WA, USA
Yaliang Li
School of Information Science and Technology, Graduate School at Shenzhen, Tsinghua University, Shenzhen, 518055, People's Republic of China
Hai-Tao Zheng
Harbin Institute of Technology (Shenzhen), Shenzhen, 518055, People's Republic of China
Buzhou Tang
SIAT, Chinese Academy of Sciences, Shenzhen, 518055, People's Republic of China
Min Yang
YS carried out the application of mathematical techniques. YL realized the development methodology and the creation of models. HZ and BT conducted the assessment of system operation. MY analyzed and counted ontology information, and was responsible for the management and coordination of the research activity planning and execution. All authors read and approved the final manuscript.
This work was financially supported by the National Natural Science Foundation of China (No.61602013 and No. 61773229), the Shenzhen Key Fundamental Research Projects (Grant No. JCYJ20170818091546869), and the Basic Scientific Research Program of Shenzhen City (Grant No. JCYJ20160331184440545). Min Yang was sponsored by CCF-Tencent Open Research Fund. The funding body had no role in the design of this study and collection, analysis, and interpretation of data and in writing the manuscript.
Correspondence to Min Yang.
The authors declare that they have no competing interests. Any opinions, findings, and conclusions or recommendations expressed in this research are those of the author(s) and do not reflect the views of the company or organization.
Uncertainty reasoning
Knowledge-based analysis | CommonCrawl |
Write your own static blog in just five steps
08-13 15 muxueqz 25365
Why write your own static blog generator? As everyone knows, with services like GitHub Pages becoming more and more popular, there are now more and more static blog generators such as Hexo, Hugo and Pelican. I used to use `Pelican` myself, but `Pelican` has quite a few dependencies (really, I just wanted to build my own wheel). Ever since I came across `Nim` I have wanted to write my own static blog generator, though it always seemed like too much trouble — until I saw [Writing a small static site generator](https://blog.thea.codes/a-small-static-site-generator/) and realized how simple writing a static blog generator actually is. Without further ado, let's get started!
[$] A way to do atomic writes
05-29 LWN 13676
Finding a way for applications to do atomic writes to files, so that either the old or new data is present after a crash and not a combination of the two, was the topic of a session led by Christoph Hellwig at the 2019 Linux Storage, Filesystem, and Memory-Management Summit (LSFMM). Application developers hate the fact that when they update files in place, a crash can leave them with old or new data—or sometimes a combination of both. He discussed some implementation ideas that he has for atomic writes for XFS and wanted to see what the other filesystem developers thought about it.
[$] Improving .deb
Debian Linux and its family of derivatives (such as Ubuntu) are partly characterized by their use of .deb as the packaging format. Packages in this format are produced not only by the distributions themselves, but also by independent software vendors. The last major change of the format internals happened back in 1995. However, a discussion of possible changes has been brought up recently on the debian-devel mailing list by Adam Borowski.
Yes, You Can Write an Awesome Game in Just 10 Lines of Basic
05-29 IEEE 22099
An annual contest challenges programmers to create 8-bit games of intrigue and adventure
YouTube Gaming App Shuts Down This Week
05-29 Slashdot 22033
An anonymous reader quotes a report from Ars Technica: YouTube Gaming is more or less shutting down this week. Google launched the standalone YouTube gaming vertical almost four years ago as a response to Amazon's purchase of Twitch, and on May 30, Google will shut down the standalone YouTube Gaming app and the standalone gaming.youtube.com website. The plan to shut down the gaming portal was announced in September, with a report from The Verge saying the site was dying because it simply wasn't popular. YouTube serves more than 50 billion hours of gaming content a year, but people just aren't viewing those hours through the gaming-specific site and apps. "A support page does detail some of the changes users will have to deal with, like the merging of YouTube Gaming and normal YouTube subscriptions," the report adds. "Users will also lose their list of followed games, which isn't supported on YouTube." "Google is directing former YouTube Gaming users to a gaming sub-page on YouTube.com
Why Facebook is right not to take down the doctored Pelosi video
05-29 MIT Technology 22788
Taking down the 'drunk ' Pelosi video could set a precedent for censoring political satire or dissent.
[$] Storage testing
Ted Ts'o led a discussion on storage testing and, in particular, on his experience getting blktests running for his test environment, in a combined storage and filesystem session at the 2019 Linux Storage, Filesystem, and Memory-Management Summit. He has been adding more testing to his automated test platform, including blktests, and he would like to see more people running storage tests. The idea of his session was to see what could be done to help that cause.
iPhone apps share data with trackers, ad companies and research firms
05-29 Hacker News 22369
When algorithms mess up, the nearest human gets the blame
A look at historical case studies shows us how we handle the liability of automated systems.
[$] Memory: the flat, the discontiguous, and the sparse
The physical memory in a computer system is a precious resource, so a lot of effort has been put into managing it effectively. This task is made more difficult by the complexity of the memory architecture on contemporary systems. There are several layers of abstraction that deal with the details of how physical memory is laid out; one of those is simply called the "memory model". There are three models supported in the kernel, but one of them is on its way out. As a way of understanding this change, this article will take a closer look at the evolution of the kernel's memory models, their current state, and their possible future.
[$] Testing and the stable tree
The stable tree was the topic for a plenary session led by Sasha Levin at the 2019 Linux Storage, Filesystem, and Memory-Management Summit (LSFMM). One of the main areas that needs attention is testing, according to Levin. He wanted to discuss how to do more and better testing as well as to address any concerns that attendees might have with regard to the stable tree.
"Please don't theme our apps"
05-28 OSnews 20604
We are developers and designers making apps for the GNOME platform. We take pride in our craft and work hard to make sure our applications are a great experience for people. Unfortunately, all our efforts designing, developing, and testing our apps are made futile by theming in many cases. This is insanity – even if they claim it only applies to distribution makers. Their argument basically comes down to certain themes making certain applications look bad, and that theming removes branding from applications. First, theming making applications look bad is either an issue with the theme that needs to be fixed or an issue with Gtk+/GNOME being bad at theming, and second, your branding is irrelevant on my computer, or on my distribution. I use KDE, and one of the main reasons I do so is to ensure I can make my desktop and its applications look exactly the way I want them to look.
Why the world's biggest CO₂-sucking plant would be
And how it might even be a good thing.
Wiretap and Gelfand-Pinsker Channels Analogy and its Applications. (arXiv:1
05-26 arXiv 21736
An analogy framework between wiretap channels (WTCs) and state-dependent point-to-point channels with non-causal encoder channel state information (referred to as Gelfand-Pinker channels (GPCs)) is proposed. A good sequence of stealth-wiretap codes is shown to induce a good sequence of codes for a corresponding GPC. Consequently, the framework enables exploiting existing results for GPCs to produce converse proofs for their wiretap analogs. The analogy readily extends to multiuser broadcasting scenarios, encompassing broadcast channels (BCs) with deterministic components, degradation ordering between users, and BCs with cooperative receivers. Given a wiretap BC (WTBC) with two receivers and one eavesdropper, an analogous Gelfand-Pinsker BC (GPBC) is constructed by converting the eavesdropper's observation sequence into a state sequence with an appropriate product distribution (induced by the stealth-wiretap code for the WTBC), and non-causally revealing the states to the encoder. The t
Wi-Fi Sensing: Applications and Challenges. (arXiv:1901.00715v4 [cs.HC] UPD
Wi-Fi technology has strong potentials in indoor and outdoor sensing applications, it has several important features which makes it an appealing option compared to other sensing technologies. This paper presents a survey on different applications of Wi-Fi based sensing systems such as elderly people monitoring, activity classification, gesture recognition, people counting, through the wall sensing, behind the corner sensing, and many other applications. The challenges and interesting future directions are also highlighted.
Why does Windows really use backslash as path separator?
More or less anyone using modern PCs has to wonder: why does Windows use backslash as a path separator when the rest of the world uses forward slash? The clear intermediate answer is "because DOS and OS/2 used backslash". Both Windows 9x and NT were directly or indirectly derived from DOS and OS/2, and certainly inherited much of the DOS cultural landscape. That, of course, is not much of an answer. The obvious next question is, why did DOS use backslash as a path separator? When DOS 2.0 added support for hierarchical directory structure, it was more than a little influenced by UNIX (or perhaps more specifically XENIX), and using the forward slash as a path separator would have been the logical choice. That's what everyone can agree on. Beyond that, things get a bit muddled. A fascinating bit of sleuthing, and the author comes to an interesting theory. What's fascinating to me is that I don't even consciously realise that MS-DOS is the odd one out here – I just adapt t
Why Linux on Desktop 'Failed': A discussion with Mark Shuttleworth
In an interesting video interview, Canonical founder Mark Shuttleworth shares his thoughts on desktop Linux. Some of his most prominent statements include: "I think the bigger challenge has been that we haven't invented anything in the Linux that was like deeply, powerfully ahead of its time" and, "if in the free software community we only allow ourselves to talk about things that look like something that already exists, then we're sort of defining ourselves as a series of forks and fragmentations."
[$] New system calls for memory management
Several new system calls have been proposed for addition to the kernel in a near-future release. A few of those, in particular, focus on memory-management tasks. Read on for a look at process_vm_mmap() (for zero-copy data transfer between processes), and two new APIs for advising the kernel about memory use in a different process.
[$] Lazy file reflink
Amir Goldstein has a use case for a feature that could be called a "lazy file reflink", he said, though it might also be described as "VFS-level snapshots". He went through the use case, looking for suggestions, in a session at the 2019 Linux Storage, Filesystem, and Memory-Management Summit (LSFMM). He has already implemented parts of the solution, but would like to get something upstream, which would mean shifting from the stacked-filesystem approach he has taken so far.
[$] LWN.net Weekly Edition for May 23, 2019
The LWN.net Weekly Edition for May 23, 2019 is available.
[$] New system calls: pidfd_open() and close_range()
The linux-kernel mailing list has recently seen more than the usual amount of traffic proposing new system calls. LWN is endeavoring to catch up with that stream, starting with a couple of proposals for the management of file descriptors. pidfd_open() is a new way to create a "pidfd" file descriptor that refers to a process in the system, while close_range() is an efficient way to close many open descriptors with a single call.
[$] Transparent huge pages for filesystems
One thing that is known about using transparent huge pages (THPs) for filesystems is that it is a hard problem to solve, but is there a solid first step that could be taken toward that goal? That is the question Song Liu asked to open his combined filesystem and memory-management session at the 2019 Linux Storage, Filesystem, and Memory-Management Summit (LSFMM). His employer, Facebook, has a solid use case for using THPs on files in the page cache, which may provide a starting point.
[$] Filesystems and crash resistance
The "guarantees" that existing filesystems make with regard to persistence in the face of a system crash was the subject of a session led by Amir Goldstein at the 2019 Linux Storage, Filesystem, and Memory-Management Summit (LSFMM). The problem is that filesystem developers are not willing to make much in the way of guarantees unless applications call fsync()—something that is not popular with application developers, who want a cheaper option.
Zork and the Z-Machine: Bringing the Mainframe to 8-Bit Home Computers
"Quacks" blamed for HIV outbreak that infected hundreds of kids
05-22 Ars Technica 21406
Local health officials say cheap charlatans are likely using contaminated equipment.
Windows 10 May 2019 Update now rolling out to everyone… slowly
Unless you explicitly want it installed, you probably won't get this update.
ZetaSQL – A SQL Analyzer Framework from Google
[$] openSUSE considers governance options
The relationship between SUSE and the openSUSE community is currently under discussion as the community considers different options for how it wants to be organized and governed in the future. Among the options under consideration is the possibility of openSUSE setting up an entirely independent foundation, as it seeks greater autonomy and control over its own future and operations.
[$] Asynchronous fsync()
The cost of fsync() is well known to filesystem developers, which is why there are efforts to provide cheaper alternatives. Ric Wheeler wanted to discuss the longstanding idea of adding an asynchronous version of fsync() in a filesystem session at the 2019 Linux Storage, Filesystem, and Memory-Management Summit (LSFMM). It turns out that what he wants may already be available via the new io_uring interface.
X-rays reveal the colors of a 3 million-year-old fossil mouse
We can now see the Neogene period in color.
openSUSE Leap 15.1 released
The openSUSE project has announced the release of openSUSE Leap 15.1. "Leap releases are scalable and both the desktop and server are equally important for professional's workloads, which is reflected in the installation menu as well as the amount of packages Leap offers and hardware it supports. Leap is well suited and prepared for usage as a Virtual Machine (VM) or container guest, allowing professional users to efficiently run network services no matter whether it's a single server or a data center."
Where the Engineering Jobs Are in 2019
Cybersecurity experts and data wranglers are in high demand
X-ray Detection May Be Perovskites' Killer App
The wonder crystal could yield imagers that are far more sensitive than commercial detectors
[$] Supporting the UFS turbo-write mode
In a combined filesystem and storage session at the 2019 Linux Storage, Filesystem, and Memory-Management Summit, Avri Altman wanted to discuss the "turbo-write" mode that is coming for Universal Flash Storage (UFS) devices. He wanted to introduce this new feature to assembled developers and to get some opinions on how to support this mode in the kernel.
Wide color photos are coming to Android
Android is now at the point where sRGB color gamut with 8 bits per color channel is not enough to take advantage of the display and camera technology. At Android we have been working to make wide color photography happen end-to-end, e.g. more bits and bigger gamuts. This means, eventually users will be able to capture the richness of the scenes, share a wide color pictures with friends and view wide color pictures on their phones. And now with Android Q, it's starting to get really close to reality: wide color photography is coming to Android. So, it's very important to applications to be wide color gamut ready. This article will show how you can test your application to see whether it's wide color gamut ready and wide color gamut capable, and the steps you need to take to be ready for wide color gamut photography.
[$] Filesystems for zoned block devices
Damien Le Moal and Naohiro Aota led a combined storage and filesystem session at the 2019 Linux Storage, Filesystem, and Memory-Management Summit (LSFMM) on filesystem work that has been done for zoned block devices. These devices have multiple zones with different characteristics; usually there are zones that can only be written in sequential order as well as conventional zones that can be written in random order. The genesis of zoned block devices is shingled magnetic recording (SMR) devices, which were created to increase the capacity of hard disks, but at the cost of some flexibility.
[$] The rest of the 5.2 merge window
By the time Linus Torvalds released the 5.2-rc1 kernel prepatch and closed the merge window for this development cycle, 12,064 non-merge changesets had been pulled into the mainline repository — about 3,700 since our summary of the first "half" was written. Thus, as predicted, the rate of change did slow during the latter part of the merge window. That does not mean that no significant changes have been merged, though; read on for a summary of what else has been merged for 5.2.
"Blockchain Week" gives us presidential candidates, parties, and signs
It might have been less exuberant than last year, but crypto hype isn't going away anytime soon—and there's still clearly big money to be made
[$] Testing in the Yocto Project
The ever-increasing complexity of the software stacks we work with has given testing an important role. There was a recent intersection between the automated testing being done by the Yocto Project (YP) and a bug introduced into the Linux kernel that gives some insight into what the future holds and the potential available with this kind of testing.
WhatsApp voice calls used to inject Israeli spyware on phones
A vulnerability in the messaging app WhatsApp has allowed attackers to inject commercial Israeli spyware on to phones, the company and a spyware technology dealer said. WhatsApp, which is used by 1.5bn people worldwide, discovered in early May that attackers were able to install surveillance software on to both iPhones and Android phones by ringing up targets using the app's phone call function. The malicious code, developed by the secretive Israeli company NSO Group, could be transmitted even if users did not answer their phones, and the calls often disappeared from call logs, said the spyware dealer, who was recently briefed on the WhatsApp hack. I never answer phone calls from telephone numbers I am not familiar with, let alone when the incoming caller hides their number. Apparently, though, not even that protects you from attacks such as these.
Wireless Network Brings Dust-Sized Brain Implants a Step Closer
Engineers have designed a scheme to let thousands of brain implants talk at up to 10 megabits per second
Drivers Think Bikers Are Less Than Human, Survey Says
Researchers have found an explanation for why many drivers act out toward cyclists: They are actually dehumanizing people who ride bikes, according to an April study by Australian researchers in the journal Transportation Research. From a report: And this dehumanization -- the belief that a group of people are less than human -- correlates to drivers' self-reported aggressive behavior. Since 2010, cyclist fatalities have increased by 25 percent in the US. A total of 777 bicyclists were killed in crashes with drivers in 2017, and 45,000 were injured from crashes in 2015. Data compiled by the League of American Bicyclists also suggests that, in some states, bicyclists are overrepresented in the number of traffic fatalities. "The idea is that if you don't see a group of people as fully human, then you're more likely to be aggressive toward them," said Narelle Haworth, a professor and director of the Centre for Accident Research and Road Safety at Queensland University of Technology, one
EPA administrator asked to back up climate claims made on TV with science
Freedom of Information Act seems to be latest weapon to fight climate misinformation.
Enterpriseification
Enumeration of bounded lecture hall tableaux. (arXiv:1904.10602v1 [math.CO]
Recently the authors introduced lecture hall tableaux in their study of multivariate little $q$-Jacobi polynomials. In this paper, we enumerate bounded lecture hall tableaux. We show that their enumeration is closely related to standard and semistandard Young tableaux. We also show that the number of bounded lecture hall tableaux is the coefficient of the Schur expansion of $s_\lambda(m+y_1,\dots,m+y_n)$. To prove this result, we use two main tools: non-intersecting lattice paths and bijections. In particular we use ideas developed by Krattenthaler to prove bijectively the hook content formula.
Elliptic classes of Schubert cells via Bott-Samelson resolution. (arXiv:190
We study the equivariant elliptic characteristic classes of Schubert varieties of the generalized full flag variety $G/B$. For this first we need to twist the notion of elliptic characteristic class of Borisov-Libgober by a line bundle, and thus allow the elliptic classes to depend on extra variables. Using the Bott-Samelson resolution of Schubert varieties we prove a BGG-type recursion for the elliptic classes, and study the Hecke algebra of our elliptic BGG operators. For $G=GL_n(C)$ we find representatives of the elliptic classes of Schubert varieties in natural presentations of the K theory ring of $G/B$, and identify them with the Tarasov-Varchenko weight function (a.k.a. elliptic stable envelopes for $T^*G/B$). As a byproduct we find another recursion, different from the known R-matrix recursion for the fixed point restrictions of weight functions. On the other hand the R-matrix recursion generalizes for arbitrary reductive group $G$.
Dynamical systems and operator algebras associated to Artin's representatio
Artin's representation is an injective homomorphism from the braid group $B_n$ on $n$ strands into $\operatorname{Aut}\mathbb{F}_n$, the automorphism group of the free group $\mathbb{F}_n$ on $n$ generators. The representation induces maps $B_n\to\operatorname{Aut}C^*_r(\mathbb{F}_n)$ and $B_n\to\operatorname{Aut}C^*(\mathbb{F}_n)$ into the automorphism groups of the corresponding group $C^*$-algebras of $\mathbb{F}_n$. These maps also have natural restrictions to the pure braid group $P_n$. In this paper, we consider twisted versions of the actions by cocycles with values in the circle, and discuss the ideal structure of the associated crossed products. Additionally, we make use of Artin's representation to show that the braid groups $B_\infty$ and $P_\infty$ on infinitely many strands are both $C^*$-simple.
Draw-down Parisian ruin for spectrally negative L\'{e}vy process. (arXiv:19
In this paper we study the draw-down related Parisian ruin problem for spectrally negative L\'{e}vy risk processes. We introduce the draw-down Parisian ruin time and solve the corresponding two-sided exit time via excursion theory. We also obtain an expression of the potential measure for the process killed at the draw-down Parisian time. As applications, new results are obtained for spectrally negative L\'{e}vy risk process with dividend barrier and Parisian ruin.
Device-independent dimension test in a multiparty Bell experiment. (arXiv:1
A device-independent dimension test for a Bell experiment aims to estimate the underlying Hilbert space dimension that is required to produce given measurement statistical data without any other assumptions concerning the quantum apparatus. Previous work mostly deals with the two-party version of this problem. In this paper, we propose a very general and robust approach to test the dimension of any subsystem in a multiparty Bell experiment. Our dimension test stems from the study of a new multiparty scenario which we call prepare-and-distribute. This is like the prepare-and-measure scenario, but the quantum state is sent to multiple, non-communicating parties. Through specific examples, we show that our test results can be tight. Furthermore, we compare the performance of our test to results based on known bipartite tests, and witness remarkable advantage, which indicates that our test is of a true multiparty nature. We conclude by pointing out that with some partial information about
Efficient Simulation Budget Allocation for Subset Selection Using Regressio
This research considers the ranking and selection (R&S) problem of selecting the optimal subset from a finite set of alternative designs. Given the total simulation budget constraint, we aim to maximize the probability of correctly selecting the top-m designs. In order to improve the selection efficiency, we incorporate the information from across the domain into regression metamodels. In this research, we assume that the mean performance of each design is approximately quadratic. To achieve a better fit of this model, we divide the solution space into adjacent partitions such that the quadratic assumption can be satisfied within each partition. Using the large deviation theory, we propose an approximately optimal simulation budget allocation rule in the presence of partitioned domains. Numerical experiments demonstrate that our approach can enhance the simulation efficiency significantly.
Design and properties of wave packet smoothness spaces. (arXiv:1904.10687v1
We introduce a family of quasi-Banach spaces - which we call wave packet smoothness spaces - that includes those function spaces which can be characterised by the sparsity of their expansions in Gabor frames, wave atoms, and many other frame constructions. We construct Banach frames for and atomic decompositions of the wave packet smoothness spaces and study their embeddings in each other and in a few more classical function spaces such as Besov and Sobolev spaces.
Decay and Scattering in energy space for the solution of weakly coupled Cho
We prove decay with respect to some Lebesgue norms for a class of Schr\"odinger equations with non-local nonlinearities by showing new Morawetz inequalities and estimates. As a byproduct, we obtain large-data scattering in the energy space for the solutions to the systems of $N$ defocusing Choquard equations with mass-energy intercritical nonlinearities in any space dimension and of defocusing Hartree-Fock equations, for any dimension $d\geq3$.
Embedded nonlinear model predictive control for obstacle avoidance using PA
We employ the proximal averaged Newton-type method for optimal control (PANOC) to solve obstacle avoidance problems in real time. We introduce a novel modeling framework for obstacle avoidance which allows us to easily account for generic, possibly nonconvex, obstacles involving polytopes, ellipsoids, semialgebraic sets and generic sets described by a set of nonlinear inequalities. PANOC is particularly well-suited for embedded applications as it involves simple steps, its implementation comes with a low memory footprint and its fast convergence meets the tight runtime requirements of fast dynamical systems one encounters in modern mechatronics and robotics. The proposed obstacle avoidance scheme is tested on a lab-scale autonomous vehicle.
Descartes' rule of signs and moduli of roots. (arXiv:1904.10694v1 [math.CA]
A hyperbolic polynomial (HP) is a real univariate polynomial with all roots real. By Descartes' rule of signs a HP with all coefficients nonvanishing has exactly $c$ positive and exactly $p$ negative roots counted with multiplicity, where $c$ and $p$ are the numbers of sign changes and sign preservations in the sequence of its coefficients. For $c=1$ and $2$, we discuss the question: When the moduli of all the roots of a HP are arranged in the increasing order on the real half-line, at which positions can be the moduli of its positive roots depending on the positions of the sign changes in the sequence of coefficients?
Electron qubit non-destructively read: Silicon qubits may be better
Qubit avoids quantum wrecking ball, silicon may be future for quantum computers.
Division algebras graded by a finite group. (arXiv:1904.10686v1 [math.RA])
Let $k$ be a field containing an algebraically closed field of characteristic zero. If $G$ is a finite group and $D$ is a division algebra over $k$, finite dimensional over its center, we can associate to a faithful $G$-grading on $D$ a normal abelian subgroup $H$, a positive integer $d$ and an element of $Hom(M(H), k^\times)^G$, where $M(H)$ is the Schur multiplier of $H$. Our main theorem is the converse: Given an extension $1\rightarrow H\rightarrow G\rightarrow G/H\rightarrow 1$, where $H$ is abelian, a positive integer $d$, and an element of $Hom(M(H), k^\times)^G$, there is a division algebra with center containing $k$ that realizes these data. We apply this result to classify the $G$-simple algebras over an algebraically closed field of characteristic zero that admit a division algebra form over a field containing an algebraically closed field.
Deza graphs with parameters (v,k,k-2,a). (arXiv:1904.06974v2 [math.CO] UPDA
A Deza graph with parameters $(v,k,b,a)$ is a $k$-regular graph on $v$ vertices in which the number of common neighbors of two distinct vertices takes two values $a$ or $b$ ($a\leq b$) and both cases exist. In the previous papers Deza graphs with parameters $(v,k,b,a)$ where $k-b = 1$ were characterized. In the paper we characterise Deza graphs with $k-b = 2$.
Electrostatic T-matrix for a torus on bases of toroidal and spherical harmo
Semi-analytic expressions for the static limit of the $T$-matrix for electromagnetic scattering are derived for a circular torus, expressed in both a basis of toroidal harmonics and spherical harmonics. The scattering problem for an arbitrary static excitation is solved using toroidal harmonics, and these are then compared to the extended boundary condition method to obtain analytic expressions for auxiliary $Q$ and $P$-matrices, from which $\mathbf{T}=\mathbf{P}\mathbf{Q}^{-1}$ (in a toroidal basis). By applying the basis transformations between toroidal and spherical harmonics, the quasi-static limit of the $T$-matrix block $\mathbf{T}^{22}$ for electric-electric multipole coupling is obtained. For the toroidal geometry there are two similar $T$-matrices on a spherical basis, for computing the scattered field both near the origin and in the far field. Static limits of the optical cross-sections are computed, and analytic expressions for the limit of a thin ring are derived.
Energy Efficient Node Deployment in Wireless Ad-hoc Sensor Networks. (arXiv
We study a wireless ad-hoc sensor network (WASN) where $N$ sensors gather data from the surrounding environment and transmit their sensed information to $M$ fusion centers (FCs) via multi-hop wireless communications. This node deployment problem is formulated as an optimization problem to make a trade-off between the sensing uncertainty and energy consumption of the network. Our primary goal is to find an optimal deployment of sensors and FCs to minimize a Lagrange combination of the sensing uncertainty and energy consumption. To support arbitrary routing protocols in WASNs, the routing-dependent necessary conditions for the optimal deployment are explored. Based on these necessary conditions, we propose a routing-aware Lloyd algorithm to optimize node deployment. Simulation results show that, on average, the proposed algorithm outperforms the existing deployment algorithms.
Diffraction of a model set with complex windows. (arXiv:1904.08285v2 [math.
The well-known plastic number substitution gives rise to a ternary inflation tiling of the real line whose inflation factor is the smallest Pisot-Vijayaraghavan number. The corresponding dynamical system has pure point spectrum, and the associated control point sets can be described as regular model sets whose windows in two-dimensional internal space are Rauzy fractals with a complicated structure. Here, we calculate the resulting pure point diffraction measure via a Fourier matrix cocycle, which admits a closed formula for the Fourier transform of the Rauzy fractals, via a rapidly converging infinite product.
Drift Estimation for Discretely Sampled SPDEs. (arXiv:1904.10884v1 [math.PR
The aim of this paper is to study the asymptotic properties of the maximum likelihood estimator (MLE) of the drift coefficient for fractional stochastic heat equation driven by an additive space-time noise. We consider the traditional for stochastic partial differential equations statistical experiment when the measurements are performed in the spectral domain, and in contrast to the existing literature, we study the asymptotic properties of the maximum likelihood (type) estimators (MLE) when both, the number of Fourier modes and the time go to infinity. In the first part of the paper we consider the usual setup of continuous time observations of the Fourier coefficients of the solutions, and show that the MLE is consistent, asymptotically normal and optimal in the mean-square sense. In the second part of the paper we investigate the natural time discretization of the MLE, by assuming that the first N Fourier modes are measured at M time grid points, uniformly spaced over the time inte
Energy-Efficient Mobile-Edge Computation Offloading over Multiple Fading Bl
By allowing a mobile device to offload computation-intensive tasks to a base station, mobile edge computing (MEC) is a promising solution for saving the mobile device's energy. In real applications, the offloading may span multiple fading blocks. In this paper, we investigate energy-efficient offloading over multiple fading blocks with random channel gains. An optimization problem is formulated, which optimizes the amount of data for offloading to minimize the total expected energy consumption of the mobile device. Although the formulated optimization problem is non-convex, we prove that the objective function of the problem is piecewise convex, and accordingly develop an optimal solution for the problem. Numerical results verify the correctness of our findings and the effectiveness of our proposed method.
Discrete convolution operators and Riesz systems generated by actions of ab
We study the bounded endomorphisms of $\ell_{N}^2(G)=\ell^2(G)\times \dots \times\ell^2(G)$ that commute with translations, where $G$ is a discrete abelian group. It is shown that they form a C*-algebra isomorphic to the C*-algebra of $N\times N$ matrices with entries in $L^\infty(\widehat{G})$, where $\widehat{G}$ is the dual space of $G$. Characterizations of when these endomorphisms are invertible, and expressions for their norms and for the norms of their inverses, are given. These results allow us to study Riesz systems that arise from the action of $ G $ on a finite set of elements of a Hilbert space.
Enhancing logical deduction with math: the rationale behind Gardner and Car
Math is widely considered as a powerful tool and its strong appeal depends on the high level of abstraction it allows in modelling a huge number of heterogeneous phenomena and problems, spanning from the static of buildings to the flight of swarms. As further proof, Gardner's and Carroll's problems have been intensively employed in a number of selection methods and job interviews. Despite the mathematical background, these problems are based on, several solutions and explanations are given in a trivial way. This work proposes a thorough investigation of this framework, as a whole. The results of such study are three mathematical formulations that express the understood mathematical relationship in these well-known riddles. The proposed formulas are of help in the formalization of the solutions, which have been proven to be less time-taking when compared to the well-known classic ones, that look more heuristic than rigorous.
Differential evolution algorithm of solving an inverse problem for the spat
The differential evolution algorithm is applied to solve the optimization problem to reconstruct the production function (inverse problem) for the spatial Solow mathematical model using additional measurements of the gross domestic product for the fixed points. Since the inverse problem is ill-posed the regularized differential evolution is applied. For getting the optimized solution of the inverse problem the differential evolution algorithm is paralleled to 32 kernels. Numerical results for different technological levels and errors in measured data are presented and discussed.
Debian project leader election 2019 results
The election for the Debian project leader has concluded; the leader for the next year will be Sam Hartman. See this page for the details of the vote.
Intel buys into an AI chip that can transfer data 1,000 times faster
A look at the data shows that despite the crypto market's long downturn, VCs are still betting big.
Intel's new assault on the data center: 56-core Xeons, 10nm FPGAs, 100gig
Intel wants to sell you more than just some CPUs for your servers.
IT and Security Professionals Think Normal People Are Just the Worst
Two new studies reaffirm every computer dunce's worst fears: IT professionals blame the employees they're bound to help for their computer problems -- at least when it comes to security. From a report: One, courtesy of SaaS operations management platform BetterCloud, offers grim reading. 91 percent of the 500 IT and security professionals surveyed admitted they feel vulnerable to insider threats. Which only makes one wonder about the supreme (over-)confidence of the other 9 percent. [...] Yet now I've been confronted with another survey. This one was performed by the Ponemon Institute at the behest of security-for-your-security company nCipher. Its sampling was depressingly large. 5,856 IT and security professionals from around the world were asked for their views of corporate IT security. They seemed to wail in unison at the lesser and more unwashed. Oh, an objective 30 percent insisted that external hackers were the biggest cause for concern. A teeth-gritting 54 percent, however, s
India ASAT test debris poses danger to International Space Station, NASA sa
Impact of weapon on satellite threw some debris into orbits that could strike space station.
Inside look at BioWare explains exactly how fake E3 2017's Anthem demo wa
Kotaku report cites 19 sources from various BioWare studios to explain what went wrong.
Lego Education's Newest Spike Prime Programmable Robots Aim For the Classro
Lego Education, the education-focused arm of the veteran Denmark company, is making its biggest product debut in three years, unveiling Spike Prime, a new kit that aims to mix the company's familiar bricks with motors, sensors and introductory coding lessons. The company is targeting kids aged between 11 to 14. From a report: Lego Mindstorms have been around for years. The Mindstorms EV3 robotics kit remains a staple of many learning centers and robotics classrooms. Lego's newest kit looks more like Lego Boost, a programmable kit that aimed to win over families in 2017 and was compatible with regular Lego bricks. It's compatible with Lego Boost, Lego Technic sets and classic Lego pieces, but not with Lego's previous Mindstorms accessories. Lego Mindstorms EV3 is remaining alongside Lego Spike Prime in Lego Education's lineup, and looks like it's aiming more at the high school crowd, while Lego Spike Prime could bridge to that higher-end projects. The Spike Prime set is created specif
Microsoft Stops Selling eBooks, Will Refund Customers For Previous Purchase
Starting today, Microsoft is ending all ebook sales in its Microsoft Store for Windows PCs. "Previously purchased ebooks will be removed from users' libraries in early July," reports The Verge. "Even free ones will be deleted. The company will offer full refunds to users for any books they've purchased or preordered." From the report: Microsoft's "official reason," according to ZDNet, is that this move is part of a strategy to help streamline the focus of the Microsoft Store. It seems that the company no longer has an interest in trying to compete with Amazon, Apple Books, and Google Play Books. It's a bit hard to imagine why anyone would go with Microsoft over those options anyway. If you have purchased ebooks from Microsoft, you can continue accessing them through the Edge browser until everything vanishes in July. After that, customers can expect to automatically receive a refund. According to a newly published Microsoft Store FAQ, "refund processing for eligible customers start rol
K-Means Clustering: Unsupervised Learning Applied on Magic: The Gathering
Microsoft Launches Visual Studio 2019 For Windows and Mac
An anonymous reader writes: Microsoft today announced that Visual Studio 2019 for Windows and Mac has hit general availability — you can download it now from visualstudio.microsoft.com/downloads. Visual Studio 2019 includes AI-assisted code completion with Visual Studio IntelliCode. Separately, real-time collaboration tool Visual Studio Live Share has also hit general availability, and is now included with Visual Studio 2019.
Intel Announces Cascade Lake With Up To 56 Cores and Optane Persistent Memory
At its Data-Centric Innovation Day, Intel today announced its Cascade Lake line of Xeon Scalable data center processors. From a report: The second-generation lineup of Xeon Scalable processors comes in 53 flavors that span up to 56 cores and 12 memory channels per chip, but as a reminder that the company is briskly expanding beyond "just" processors, the company also announced the final arrival of its Optane DC Persistent Memory DIMMs along with a range of new data center SSDs, Ethernet controllers, 10nm Agilex FPGAs, and Xeon D processors. This broad spectrum of products leverages Intel's overwhelming presence in the data center (it currently occupies ~95% of the world's server sockets) as a springboard to chew into other markets, including its new assault on the memory space with the Optane DC Persistent Memory DIMMs. The long-awaited DIMMs open a new market for Intel and have the potential to disrupt the entire memory hierarchy, but also serve as a potentially key component that ca
Implementing API Billing with Stripe
How BioWare's Anthem went wrong
This account of Anthem's development, based on interviews with 19 people who either worked on the game or adjacent to it (all of whom were granted anonymity because they were not authorized to talk about Anthem's development), is a story of indecision and mismanagement. It's a story of technical failings, as EA's Frostbite engine continued to make life miserable for many of BioWare's developers, and understaffed departments struggled to serve their team's needs. It's a story of two studios, one in Edmonton, Alberta, Canada and another in Austin, Texas, that grew resentful toward one another thanks to a tense, lopsided relationship. It's a story of a video game that was in development for nearly seven years but didn't enter production until the final 18 months, thanks to big narrative reboots, major design overhauls, and a leadership team said to be unable to provide a consistent vision and unwilling to listen to feedback. Perhaps most alarming, it's a story about a studio in crisis. Do
Berlin Oberseminar:
Optimization, Control and Inverse Problems
This seminar serves as a knowledge exchange and networking platform for the broad area of mathematical optimization and related applications within Berlin.
Place: Weierstrass Institute for Applied Analysis and Stochastics, Mohrenstraße 39, 10117 Berlin
Organizers: René Henrion (WIAS), Michael Hintermüller (WIAS, HU Berlin), Dietmar Hömberg (WIAS, TU Berlin), Gabriele Steidl (TU Berlin), Andrea Walther (HU Berlin)
22.03.2023 Dr. Constantin Christof (Technische Universität München, Germany)
On the identification and optimization of nonsmooth superposition operators in semilinear elliptic PDEs
We study an infinite-dimensional optimization problem that aims to identify the Nemytskii operator in the nonlinear part of a prototypical semilinear elliptic partial differential equation which minimizes the distance between the PDE-solution and a given desired state. In contrast to previous works, we consider this identification problem in a low-regularity regime in which the function inducing the Nemytskii operator is a priori only known to be an element of $H^1_{\mathrm{loc}}$. This makes the studied problem class a suitable point of departure for the rigorous analysis of training problems for learning-informed PDEs in which an unknown superposition operator is approximated by means of a neural network with nonsmooth activation functions (ReLU, leaky-ReLU, etc.). We establish that, despite the low regularity of the controls, it is possible to derive a classical stationarity system for local minimizers and to solve the considered problem by means of a gradient projection method. It is also shown that the established first-order necessary optimality conditions imply that locally optimal superposition operators share various characteristic properties with commonly used activation functions: They are always sigmoidal, continuously differentiable away from the origin, and typically possess a distinct kink at zero.
10.11.2022 Dr. Jonas Latz (Heriot-Watt University, Edinburgh, Scotland)
Analysis of stochastic gradient descent in continuous time
Optimisation problems with discrete and continuous data appear in statistical estimation, machine learning, functional data science, robust optimal control, and variational inference. The 'full' target function in such an optimisation problem is given by the integral over a family of parameterised target functions with respect to a discrete or continuous probability measure. Such problems can often be solved by stochastic optimisation methods: performing optimisation steps with respect to the parameterised target function with randomly switched parameter values. In this talk, we discuss a continuous-time variant of the stochastic gradient descent algorithm. This so-called stochastic gradient process couples a gradient flow minimising a parameterised target function and a continuous-time 'index' process which determines the parameter. We first briefly introduce the stochastic gradient processes for finite, discrete data which uses pure jump index processes. Then, we move on to continuous data. Here, we allow for very general index processes: reflected diffusions, pure jump processes, as well as other Lévy processes on compact spaces. Thus, we study multiple sampling patterns for the continuous data space. We show that the stochastic gradient process can approximate the gradient flow minimising the full target function at any accuracy. Moreover, we give convexity assumptions under which the stochastic gradient process with constant learning rate is geometrically ergodic. In the same setting, we also obtain ergodicity and convergence to the minimiser of the full target function when the learning rate decreases over time sufficiently slowly.
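As a minimal illustrative sketch (the quadratic losses, step size and jump rate below are arbitrary choices, not taken from the talk), the stochastic gradient process can be discretised with forward Euler while the index jumps as a continuous-time Markov chain over a finite data set:

```python
# Forward-Euler discretisation of dtheta/dt = -grad f_{I(t)}(theta), where the index
# I(t) is a continuous-time jump process over a finite set of toy quadratic losses
# f_i(theta) = 0.5*(theta - a_i)^2; the minimiser of the averaged loss is a.mean().
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(loc=3.0, scale=1.0, size=20)     # toy "data"
grad = lambda theta, i: theta - a[i]            # gradient of the currently selected loss

theta, i = 0.0, 0
dt, rate, T = 1e-3, 50.0, 20.0                  # step size, jump rate of the index, horizon
for _ in range(int(T / dt)):
    theta -= dt * grad(theta, i)                # gradient flow w.r.t. f_i
    if rng.random() < 1 - np.exp(-rate * dt):   # index process jumps to a new random index
        i = rng.integers(len(a))

print("theta after integration:", theta, "  target (mean of a):", a.mean())
```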
30.05.22 Pier Luigi Dragotti (Imperial College London)
Computational Imaging and Sensing: Theory and Applications
The revolution in sensing, with the emergence of many new imaging techniques, offers the possibility of gaining unprecedented access to the physical world, but this revolution can only bear fruit through the skilful interplay between the physical and computational realms. This is the domain of computational imaging which advocates that, to develop effective imaging systems, it will be necessary to go beyond the traditional decoupled imaging pipeline where device physics, image processing and the end-user application are considered separately. Instead, we need to rethink imaging as an integrated sensing and inference model.
In the first part of the talk we highlight the centrality of sampling theory in computational imaging and investigate new sampling modalities which are inspired by the emergence of new sensing mechanisms. We discuss time-based sampling which is connected to event-based cameras where pixels behave like neurons and fire when an event happens. We derive sufficient conditions and propose novel algorithms for the perfect reconstruction of classes of non-bandlimited functions from time-based samples. We then develop the interplay between learning and computational imaging and present a model-based neural network for the reconstruction of video sequences from events. The architecture of the network is model-based and is designed using the unfolding technique, some element of the acquisition device are part of the network and are learned with the reconstruction algorithm.
In the second part of the talk, we focus on the heritage sector which is experiencing a digital revolution driven in part by the increasing use of non-invasive, non-destructive imaging techniques. These new imaging methods provide a way to capture information about an entire painting and can give us information about features at or below the surface of the painting. We focus on Macro X-Ray Fluorescence (XRF) scanning which is a technique for the mapping of chemical elements in paintings and introduce a method that can process XRF scanning data from paintings. The results presented show the ability of our method to detect and separate weak signals related to hidden chemical elements in the paintings. We analyse the results on Leonardo's 'The Virgin of the Rocks' and show that our algorithm is able to reveal, more clearly than ever before, the hidden drawings of a previous composition that Leonardo then abandoned for the painting that we can now see.
This is joint work with R. Alexandru, R. Wang, Siying Liu, J. Huang and Y.Su from Imperial College London; C. Higgitt and N. Daly from The National Gallery in London and Thierry Blu from the Chinese University of Hong Kong.
Bio: Pier Luigi Dragotti is Professor of Signal Processing in the Electrical and Electronic Engineering Department at Imperial College London and Fellow of the IEEE. He received the Laurea Degree (summa cum laude) in Electronic Engineering from the University Federico II, Naples, Italy, in 1997; the Master degree in Communications Systems from the Swiss Federal Institute of Technology of Lausanne (EPFL), Switzerland in 1998; and PhD degree from EPFL, Switzerland, in 2002. He has held several visiting positions. In particular, he was a visiting student at Stanford University, Stanford, CA in 1996, a summer researcher in the Mathematics of Communications Department at Bell Labs, Lucent Technologies, Murray Hill, NJ in 2000, a visiting scientist at Massachusetts Institute of Technology (MIT) in 2011 and a visiting scholar at Trinity College Cambridge in 2020.
Dragotti was Editor-in-Chief of the IEEE Transactions on Signal Processing (2018-2020), Technical Co-Chair for the European Signal Processing Conference in 2012, Associate Editor of the IEEE Transactions on Image Processing from 2006 to 2009. He was also Elected Member of the IEEE Computational Imaging Technical Committee and the recipient of an ERC starting investigator award for the project RecoSamp. Currently, he is IEEE SPS Distinguished Lecturer.
His research interests include sampling theory, wavelet theory and its applications, computational imaging and sparsity-driven signal processing.
06.12.21 Juan Carlos de los Reyes (Escuela Politécnica Nacional, Ecuador)
Bilevel learning for inverse problems
In recent years, novel optimization ideas have been applied to several inverse problems in combination with machine learning approaches, to improve the inversion by optimally choosing different quantities/functions of interest. A fruitful approach in this sense is bilevel optimization, where the inverse problems are considered as lower-level constraints, while on the upper level a loss function based on a training set is used. When confronted with inverse problems with nonsmooth regularizers or nonlinear operators, however, the bilevel optimization problem structure becomes quite involved to analyze, as classical nonlinear or bilevel programming results cannot be directly utilized. In this talk, I will discuss the different challenges that these problems pose, and provide some analytical results as well as a numerical solution strategy.
05.07.2021 Patrick Farrell (University of Oxford, UK)
Computing disconnected bifurcation diagrams of partial differential equations
Computing the distinct solutions $u$ of an equation $f(u, \lambda) = 0$ as a parameter $\lambda \in \mathbb{R}$ is varied is a central task in applied mathematics and engineering. The solutions are captured in a bifurcation diagram, plotting (some functional of) $u$ as a function of $\lambda$. In this talk I will present a new algorithm, deflated continuation, for this task.
Deflated continuation has three advantages. First, it is capable of computing disconnected bifurcation diagrams; previous algorithms only aimed to compute that part of the bifurcation diagram continuously connected to the initial data. Second, its implementation is very simple: it only requires a minor modification to an existing Newton-based solver. Third, it can scale to very large discretisations if a good preconditioner is available; no auxiliary problems must be solved.
We will present applications to hyperelastic structures, liquid crystals, and Bose-Einstein condensates, among others.
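As a minimal illustrative sketch of the deflation idea (a one-dimensional cubic with an ad hoc deflation factor and a finite-difference derivative, not the speaker's implementation): once a root has been found, Newton's method is re-run on a deflated residual that removes the known root from the solution set, so the same initial guess leads to a different solution:

```python
# After a root u1 of f is found, Newton is re-run on the deflated residual
# G(u) = f(u) * prod_j (1/|u - u_j|^p + shift), which repels the iteration from the
# already-known roots and lets the same initial guess find the remaining ones.
import numpy as np

f  = lambda u: u**3 - u            # roots at -1, 0, 1
df = lambda u: 3*u**2 - 1

def newton(g, dg, u0, tol=1e-12, maxit=100):
    u = u0
    for _ in range(maxit):
        step = g(u) / dg(u)
        u -= step
        if abs(step) < tol:
            return u
    raise RuntimeError("no convergence")

def deflate(g, roots, p=2, shift=1.0):
    """Return the deflated residual and a finite-difference derivative of it."""
    def M(u):
        out = 1.0
        for r in roots:
            out *= 1.0 / abs(u - r)**p + shift
        return out
    G  = lambda u: g(u) * M(u)
    dG = lambda u, h=1e-7: (G(u + h) - G(u - h)) / (2*h)
    return G, dG

u0 = 0.6
roots = [newton(f, df, u0)]        # first root found from u0
for _ in range(2):                 # deflate and search again from the very same u0
    G, dG = deflate(f, roots)
    roots.append(newton(G, dG, u0))
print("distinct roots found from the same initial guess:", sorted(roots))
```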
14.06.2021 Ozan Öktem (KTH, Sweden)
Data driven large-scale convex optimisation
This joint work with Jevgenjia Rudzusika (KTH), Sebastian Banert (Lund University) and Jonas Adler (DeepMind) introduces a framework for using deep-learning to accelerate optimisation solvers with convergence guarantees. The approach builds on ideas from the analysis of accelerated forward-backward schemes, like FISTA. Instead of the classical approach of proving convergence for a choice of parameters, such as a step-size, we show convergence whenever the update is chosen in a specific set. Rather than picking a point in this set through a handcrafted method, we train a deep neural network to pick the best update. The method is applicable to several smooth and non-smooth convex optimisation problems and it outperforms established accelerated solvers.
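For reference, a minimal sketch of the classical FISTA iteration for the LASSO problem, which is the kind of handcrafted accelerated forward-backward scheme that the learned approach generalises (the random problem data and parameter values below are purely illustrative):

```python
# Classical FISTA for  min_x 0.5*||A x - b||^2 + lam*||x||_1  on a random sparse problem.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(40, 100))
x_true = np.zeros(100); x_true[:5] = 1.0
b = A @ x_true
lam = 0.1

L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the smooth part
soft = lambda v, thr: np.sign(v) * np.maximum(np.abs(v) - thr, 0.0)  # prox of lam*||.||_1

x = np.zeros(100)
y = x.copy()
t = 1.0
for _ in range(500):
    x_new = soft(y - (A.T @ (A @ y - b)) / L, lam / L)   # forward-backward step at y
    t_new = (1 + np.sqrt(1 + 4 * t**2)) / 2
    y = x_new + ((t - 1) / t_new) * (x_new - x)          # Nesterov-type momentum
    x, t = x_new, t_new

print("objective:", 0.5 * np.linalg.norm(A @ x - b)**2 + lam * np.abs(x).sum())
print("nonzeros recovered:", int((np.abs(x) > 1e-3).sum()))
```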
03.05.2021 Lars Ruthotto (Emory University, USA)
This talk was also part of the SPP 1962 Priority Program 2021 Keynote Presentation series.
A Machine Learning Framework for Mean Field Games and Optimal Control
We consider the numerical solution of mean field games and optimal control problems whose state space dimension is in the tens or hundreds. In this setting, most existing numerical solvers are affected by the curse of dimensionality (CoD). To mitigate the CoD, we present a machine learning framework that combines the approximation power of neural networks with the scalability of Lagrangian PDE solvers. Specifically, we parameterize the value function with a neural network and train its weights using the objective function with additional penalties that enforce the Hamilton Jacobi Bellman equations. A key benefit of this approach is that no training data is needed, e.g., no numerical solutions to the problem need to be computed before training. We illustrate our approach and its efficacy using numerical experiments. To show the framework's generality, we consider applications such as optimal transport, deep generative modeling, mean field games for crowd motion, and multi-agent optimal control.
29.03.2021 Serge Gratton (ENSEEIHT, Toulouse, France)
On a multilevel Levenberg-Marquardt method for the training of artificial neural networks and its application to the solution of partial differential equations
We propose a new multilevel Levenberg-Marquardt optimizer for the training of artificial neural networks with quadratic loss function. When the least-squares problem arises from the training of artificial neural networks, the variables subject to optimization are not related by any geometrical constraints and the standard interpolation and restriction operators cannot be employed any longer. A heuristic, inspired by algebraic multigrid methods, is then proposed to construct the multilevel transfer operators. We test the new optimizer on an important application: the approximate solution of partial differential equations by means of artificial neural networks. The learning problem is formulated as a least squares problem, choosing the nonlinear residual of the equation as a loss function, whereas the multilevel method is employed as a training method. Numerical experiments show encouraging results related to the efficiency of the new multilevel optimization method compared to the corresponding one-level procedure in this context. | CommonCrawl |
Siva Rama Krishna, V and Bhat, Navakanta and Amrutur, Bharadwaj S and Chakrapani, K and Sampath, S (2011) Detection of glycated hemoglobin using 3-Aminophenylboronic acid modified graphene oxide. In: 2011 IEEE/NIH Life Science Systems and Applications Workshop (LiSSA), 7-8 April 2011, Bethesda, MD.
Krishna, Siva Rama V and Bhat, Navakanta and Amrutur, Bharadwaj and Sampath, S (2008) Micromachined Electrochemical Cell Platform for Biosensors. In: International Conference on Smart Materials Structures and Systems, Bangalore, India, Bangalore, India.
Jayanthi, Swetha and Jayaraman, N and Chatterjee, Kaushik and Sampath, S and Sood, A K (2019) Giant dielectric macroporous graphene oxide foams with aqueous salt solutions: Impedance spectroscopy. In: CARBON, 155 . pp. 44-49.
Jenjeti, Ramesh Naidu and Kumar, Rajat and Sampath, S (2019) Two-dimensional, few-layer NiPS3 for flexible humidity sensor with high selectivity. In: JOURNAL OF MATERIALS CHEMISTRY A, 7 (24). pp. 14545-14551.
Kumar, Rajat and Jenjeti, Ramesh Naidu and Austeria, Muthu P and Sampath, S (2019) Bulk and few-layer MnPS3: a new candidate for field effect transistors and UV photodetectors. In: JOURNAL OF MATERIALS CHEMISTRY C, 7 (2). pp. 324-329.
Kumar, B V V S Pavan and Sonu, K P and Rao, K Venkata and Sampath, S and George, Subi J and Eswaramoorthy, M (2018) Supramolecular Switching of Ion-Transport in Nanochannels. In: ACS APPLIED MATERIALS & INTERFACES, 10 (28). pp. 23458-23465.
Jenjeti, Ramesh Naidu and Kumar, Rajat and Austeria, Muthu P and Sampath, S (2018) Field Effect Transistor Based on Layered NiPS3. In: SCIENTIFIC REPORTS, 8 .
Sellam, A and Jenjeti, Ramesh Naidu and Sampath, S (2018) Ultrahigh-Rate Supercapacitors Based on 2-Dimensional, 1T MoS2xSe2(1-x) for AC Line-Filtering Applications. In: JOURNAL OF PHYSICAL CHEMISTRY C, 122 (25, SI). pp. 14186-14194.
Lakshmi, R and Aruna, ST and Sampath, S (2017) Ceria nanoparticles vis-a-vis cerium nitrate as corrosion inhibitors for silica-alumina hybrid sol-gel coating. In: APPLIED SURFACE SCIENCE, 393 . pp. 397-404.
Naik, Keerti M and Sampath, S (2017) Cubic Mo6S8-Efficient Electrocatalyst Towards Hydrogen Evolution Over Wide pH Range. In: ELECTROCHIMICA ACTA, 252 . pp. 408-415.
Lakshmi, RV and Aruna, ST and Anandan, C and Bera, Parthasarathi and Sampath, S (2017) EIS and XPS studies on the self-healing properties of Ce-modified silica-alumina hybrid coatings: Evidence for Ce(III) migration. In: SURFACE & COATINGS TECHNOLOGY, 309 . pp. 363-370.
Kibechu, Rose Waithiegeni and Ndinteh, Derek Tantoh and Msagati, Titus Alfred Makudali and Mamba, Bhekie Briliance and Sampath, S (2017) Effect of incorporating graphene oxide and surface imprinting on polysulfone membranes on flux, hydrophilicity and rejection of salt and polycyclic aromatic hydrocarbons from water. In: PHYSICS AND CHEMISTRY OF THE EARTH, 100 . pp. 126-134.
Kukunuri, Suresh and Naik, Keerti and Sampath, S (2017) Effects of composition and nanostructuring of palladium selenide phases, Pd4Se, Pd7Se4 and Pd17Se15, on ORR activity and their use in Mg-air batteries. In: JOURNAL OF MATERIALS CHEMISTRY A, 5 (9). pp. 4660-4670.
Mukherjee, Debdyuti and Gowda, Guruprasada Y K and Kotresh, Harish Makri Nimbegondi and Sampath, S (2017) Porous, Hyper-cross-linked, Three-Dimensional Polymer as Stable, High Rate Capability Electrode for Lithium-Ion Battery. In: ACS APPLIED MATERIALS & INTERFACES, 9 (23). pp. 19446-19454.
Jayanthi, Swetha and Muthu, D V S and Jayaraman, N and Sampath, S and Sood, A K (2017) Semiconducting Conjugated Microporous Polymer: An Electrode Material for Photoelectrochemical Water Splitting and Oxygen Reduction. In: CHEMISTRYSELECT, 2 (16). pp. 4522-4532.
Anju, VG and Sampath, S (2017) Stable, Rechargeable Lithium - Oxygen Battery in Liquid and Gel-Based Electrolytes. In: ELECTROCHIMICA ACTA, 252 . pp. 119-126.
Anju, VG and Austeria, Muthu P and Sampath, S (2017) Work Function Tunable Titanium Carbonitride Nanostructures for High-Efficiency, Rechargeable Li-Iodine Batteries. In: ADVANCED MATERIALS INTERFACES, 4 (15).
Sarkar, Sujoy and Sampath, S (2016) Ambient temperature deposition of gallium nitride/gallium oxynitride from a deep eutectic electrolyte, under potential control. In: CHEMICAL COMMUNICATIONS, 52 (38). pp. 6407-6410.
Sarkar, Sujoy and Sampath, S (2016) Ambient temperature deposition of gallium nitride/gallium oxynitride from a deep eutectic electrolyte, under potential control (vol 52, pg 6407, 2016). In: CHEMICAL COMMUNICATIONS, 52 (43). p. 7051.
Urumese, Ancila and Jenjeti, Ramesh Naidu and Sampath, S and Jagirdar, Balaji R (2016) Colloidal europium nanoparticles via a solvated metal atom dispersion approach and their surface enhanced Raman scattering studies. In: JOURNAL OF COLLOID AND INTERFACE SCIENCE, 476 . pp. 177-183.
Kukunuri, Suresh and Austeria, Muthu P and Sampath, S (2016) Electrically conducting palladium selenide (Pd4Se, Pd17Se15, Pd7Se4) phases: synthesis and activity towards hydrogen evolution reaction. In: CHEMICAL COMMUNICATIONS, 52 (1). pp. 206-209.
Sampath, S and Sarma, D D and Shukla, A K (2016) Electrochemical Energy Storage: The Indian Scenario. In: ACS ENERGY LETTERS, 1 (6). pp. 1162-1164.
Sridevi, S and Vasu, KS and Sampath, S and Asokan, S and Sood, AK (2016) Optical detection of glucose and glycated hemoglobin using etched fiber Bragg gratings coated with functionalized reduced graphene oxide. In: JOURNAL OF BIOPHOTONICS, 9 (7). pp. 760-769.
Ntsendwana, B and Sampath, S and Mamba, BB and Oluwafemi, OS and Arotiba, OA (2016) Photoelectrochemical degradation of eosin yellowish dye on exfoliated graphite-ZnO nanocomposite electrode. In: JOURNAL OF MATERIALS SCIENCE-MATERIALS IN ELECTRONICS, 27 (1). pp. 592-598.
Anju, VG and Manjunatha, R and Austeria, Muthu P and Sampath, S (2016) Primary and rechargeable zinc-air batteries using ceramic and highly stable TiCN as an oxygen reduction reaction electrocatalyst. In: JOURNAL OF MATERIALS CHEMISTRY A, 4 (14). pp. 5258-5264.
Tamilarasan, S and Mukherjee, Debdyuti and Sampath, S and Natarajan, S and Gopalakrishnan, J (2016) Synthesis, structure and electrochemical behaviour of new Ru-containing lithium-rich layered oxides. In: SOLID STATE IONICS, 297 . pp. 49-58.
Mukherjee, Debdyuti and Austeria, Muthu P and Sampath, S (2016) Two-Dimensional, Few-Layer Phosphochalcogenide, FePS3: A New Catalyst for Electrochemical Hydrogen Evolution over Wide pH Range. In: ACS ENERGY LETTERS, 1 (2). pp. 367-372.
Kumar, Sachin and Raj, Shammy and Kolanthai, Elayaraja and Sood, AK and Sampath, S and Chatterjee, Kaushik (2015) Chemical Functionalization of Graphene To Augment Stem Cell Osteogenesis and Inhibit Biofilm Formation on Polymer Composites for Orthopedic Applications. In: ACS APPLIED MATERIALS & INTERFACES, 7 (5). pp. 3237-3252.
Ravikumar, MK and Rajan, Sundar A and Sampath, S and Priolkar, KR and Shukla, AK (2015) In Situ Crystallographic Probing on Ameliorating Effect of Sulfide Additives and Carbon Grafting in Iron Electrodes. In: JOURNAL OF THE ELECTROCHEMICAL SOCIETY, 162 (12). A2339-A2350.
Sinha, SK and Srivastava, C and Sampath, S and Chattopadhyay, K (2015) Morphology control synthesis of Au-Cu2S metal-semiconductor hybrid nanostructures by modulating reaction constituents. In: RSC ADVANCES, 5 (70). pp. 56629-56635.
Vasu, KS and Sridevi, S and Sampath, S and Sood, AK (2015) Non-enzymatic electronic detection of glucose using aminophenylboronic acid functionalized reduced graphene oxide. In: SENSORS AND ACTUATORS B-CHEMICAL, 221 . pp. 1209-1214.
Sarkar, Sumanta and Jana, Rajkumar and Suchitra, * and Waghmare, Umesh V and Kuppan, Balamurugan and Sampath, S and Peter, Sebastian C (2015) Ordered Pd2Ge Intermetallic Nanoparticles as Highly Efficient and Robust Catalyst for Ethanol Oxidation. In: CHEMISTRY OF MATERIALS, 27 (21). pp. 7459-7467.
Kukunuri, Suresh and Karthick, SN and Sampath, S (2015) Robust, metallic Pd17Se15 and Pd7Se4 phases from a single source precursor and their use as counter electrodes in dye sensitized solar cells. In: JOURNAL OF MATERIALS CHEMISTRY A, 3 (33). pp. 17144-17153.
Sinha, SK and Srivastava, C and Sampath, S and Chattopadhyay, K (2015) Tunability of monodispersed intermetallic AuCu nanoparticles through understanding of reaction pathways. In: RSC ADVANCES, 5 (6). pp. 4399-4405.
Chakrapani, Kalapu and Sampath, S (2015) The dual role of borohydride depending on reaction temperature: synthesis of iridium and iridium oxide. In: CHEMICAL COMMUNICATIONS, 51 (47). pp. 9690-9693.
Kukunuri, Suresh and Krishnan, Reshma M and Sampath, S (2015) The effect of structural dimensionality on the electrocatalytic properties of the nickel selenide phase. In: PHYSICAL CHEMISTRY CHEMICAL PHYSICS, 17 (36). pp. 23448-23459.
Dar, Ibrahim M and Sampath, S and Shivashankar, SA (2014) Exploiting oriented attachment in stabilizing La3+-doped gallium oxide nano-spindles. In: RSC ADVANCES, 4 (90). pp. 49360-49366.
Moses, Kota and Kiran, Vankayala and Sampath, S and Rao, CNR (2014) Few-Layer Borocarbonitride Nanosheets: Platinum-Free Catalyst for the Oxygen Reduction Reaction. In: CHEMISTRY-AN ASIAN JOURNAL, 9 (3). pp. 838-843.
Dar, Ibrahim M and Arora, Neha and Singh, Nagendra Pratap and Sampath, S and Shivashankar, Srinivasrao A (2014) Role of spectator ions in influencing the properties of dopant-free ZnO nanocrystals. In: NEW JOURNAL OF CHEMISTRY, 38 (10). pp. 4783-4790.
Chakrapani, Kalapu and Sampath, S (2014) Spontaneous assembly of iridium nanochain-like structures: surface enhanced Raman scattering activity using visible light. In: CHEMICAL COMMUNICATIONS, 50 (23). pp. 3061-3063.
Kumar, Pavan BVVS and Rao, Venkata K and Sampath, S and George, Subi J and Eswaramoorthy, Muthusamy (2014) Supramolecular Gating of Ion Transport in Nanochannels. In: ANGEWANDTE CHEMIE-INTERNATIONAL EDITION, 53 (48). pp. 13073-13077.
Goriparti, Subrahmanyam and Harish, MNK and Sampath, S (2013) Ellagic acid - a novel organic electrode material for high capacity lithium ion batteries. In: Chemical Communications, 49 (65). pp. 7234-7236.
Ntsendwana, B and Sampath, S and Mamba, BB and Arotiba, OA (2013) Photoelectrochemical oxidation of p-nitrophenol on an expanded graphite-TiO2 electrode. In: Photochemical & Photobiological Sciences, 12 (6). pp. 1091-1102.
Vasu, KS and Krishnaswamy, Rema and Sampath, S and Sood, AK (2013) Yield stress, thixotropy and shear banding in a dilute aqueous suspension of few layer graphene oxide platelets. In: Soft Matter, 9 (25). pp. 5874-5882.
Haramagatti, Chandrashekara R and Raj, BV Ashok and Sampath, S (2012) Surfactant solubility and micellization in ternary eutectic melt (acetamide plus urea plus ammonium nitrate). In: COLLOIDS AND SURFACES A-PHYSICOCHEMICAL AND ENGINEERING ASPECTS, 403 . pp. 110-113.
Ntsendwana, B and Mamba, BB and Sampath, S and Arotiba, OA (2012) Electrochemical Detection of Bisphenol A Using Graphene-Modified Glassy Carbon Electrode. In: INTERNATIONAL JOURNAL OF ELECTROCHEMICAL SCIENCE, 7 (4). pp. 3501-3512.
Dar, Ibrahim M and Sampath, S and Shivashankar, SA (2012) Microwave-assisted, surfactant-free synthesis of air-stable copper nanostructures and their SERS study. In: JOURNAL OF MATERIALS CHEMISTRY, 22 (42). pp. 22418-22423.
Thotiyl, Ottakam MM and Basit, Hajra and Sanchez, Julio A and Goyer, Cedric and Coche-Guerente, Liliane and Dumy, Pascal and Sampath, S and Labbe, Pierre and Moutet, Jean-Claude (2012) Multilayer assemblies of polyelectrolyte-gold nanoparticles for the electrocatalytic oxidation and detection of arsenic(III). In: JOURNAL OF COLLOID AND INTERFACE SCIENCE, 383 (1). pp. 130-139.
Ndlovu, T and Arotiba, OA and Sampath, S and Krause, RW and Mamba, BB (2012) Reactivities of Modified and Unmodified Exfoliated Graphite Electrodes in Selected Redox Systems. In: INTERNATIONAL JOURNAL OF ELECTROCHEMICAL SCIENCE, 7 (10). pp. 9441-9453.
Ramesha, GK and Sampath, S (2011) In-situ formation of graphene-lead oxide composite and its use in trace arsenic detection. In: Sensors and Actuators B: Chemical, 160 (1). pp. 306-311.
Ramesha, GK and Kumara, Vijaya A and Muralidhara, HB and Sampath, S (2011) Graphene and graphene oxide as effective adsorbents toward anionic and cationic dyes. In: Journal of Colloid and Interface Science, 361 (1). pp. 270-277.
Vasu, KS and Sampath, S and Sood, AK (2011) Nonvolatile unipolar resistive switching in ultrathin films of graphene and carbon nanotubes. In: Solid State Communications, 151 (16, SI). pp. 1084-1087.
Dilimon, VS and Sampath, S (2011) Electrochemical preparation of few layer-graphene nanosheets via reduction of oriented exfoliated graphene oxide thin films in acetamide-urea-ammonium nitrate melt under ambient conditions. In: Thin Solid Films, 519 (7). pp. 2323-2327.
Kannan, Palanisamy and Sampath, S and John, Abraham S (2010) Direct Growth of Gold Nanorods on Gold and Indium Tin Oxide Surfaces: Spectral, Electrochemical, and Electrocatalytic Studies. In: The Journal of Physical Chemistry C, 114 (49). pp. 21114-21122.
Thotiyl, Ottakam MM and Kumar, Ravi K and Sampath, S (2010) Pd Supported on Titanium Nitride for Efficient Ethanol Oxidation. In: The Journal of Physical Chemistry , 114 (41). pp. 17934-17941.
Dilimon, VS and Narayanan, Venkata NS and Sampath, S (2010) Electrochemical reduction of oxygen on gold and boron-doped diamond electrodes in ambient temperature, molten acetamide-urea-ammonium nitrate eutectic melt. In: Electrochimica Acta, 55 (20). pp. 5930-5937.
Vasu, KS and Chakraborty, Biswanath and Sampath, S and Sood, AK (2010) Probing top-gated field effect transistor of reduced graphene oxide monolayer made by dielectrophoresis. In: Solid State Communications, 150 (29-30). pp. 1295-1298.
Narayanan, Venkata NS and Raj, Ashok BV and Sampath, S (2010) Physicochemical, spectroscopic and electrochemical characterization of magnesium ion-conducting, room temperature, ternary molten electrolytes. In: Journal of Power Sources, 195 (13, Sp). pp. 4356-4364.
Venkata Narayanan, NS and Ashokraj, BV and Sampath, S (2010) Ambient temperature, zinc ion-conducting, binary molten electrolyte based on acetamide and zinc perchlorate: Application in rechargeable zinc batteries. In: Journal of Colloid and Interface Science, 342 (2). pp. 505-512.
Kiran, V and Ravikumar, T and Kalyanasundaram, NT and Krishnamurty, S and Shukla, AK and Sampath, S (2010) Electro-Oxidation of Borohydride on Rhodium, Iridium, and Rhodium-Iridium Bimetallic Nanoparticles with Implications to Direct Borohydride Fuel Cells. In: Journal of the Electrochemical Society, 157 (8). B1201-B1208.
Thotiyl, Ottakam MM and Ravikumar, T and Sampath, S (2010) Platinum particles supported on titanium nitride: an efficient electrode material for the oxidation of methanol in alkaline media. In: Journal of Materials Chemistry, 20 (47). pp. 10643-10651.
Narayanan, Venkata NS and Raj, Ashok BV and Sampath, S (2009) Magnesium ion conducting, room temperature molten electrolytes. In: Electrochemistry Communications, 11 (10). pp. 2027-2031.
Sampath, S and Choudhury, NA and Shukla, AK (2009) Hydrogel membrane electrolyte for electrochemical capacitors. In: Proceedings of the Indian Academy of Sciences - Chemical Sciences, 121 (5). pp. 727-734.
Narayanan, NS Venkata and Sampath, S (2009) Amide-based Room Temperature Molten Salt as Solvent cum Stabilizer for Metallic Nanochains. In: Journal of Cluster Science, 20 (2). pp. 375-387.
Sudhir, V Sai and Venkateswarlu, Ch and Musthafa, OT Muhammed and Sampath, S and Chandrasekaran, Srinivasan (2009) Click Chemistry Inspired Synthesis of Novel Ferrocenyl-Substituted Amino Acids or Peptides. In: European Journal Of Organic Chemistry (13). pp. 2120-2129.
Subramanian, S and Sampath, S (2009) Self Assembled Monolayers of Alkanethiolson Silver Surfaces. In: Journal of the Indian Institute of Science, 89 (1). pp. 1-7.
Choudhury, NA and Sampath, S and Shukla, AK (2009) Hydrogel-polymer electrolytes for electrochemical capacitors: an overview. In: Energy & Environmental Science, 2 (1). pp. 55-67.
Narayanan, NS Venkata and Ashokraj, BV and Sampath, S (2009) Physicochemical, Electrochemical, and Spectroscopic Characterization of Zinc-Based Room-Temperature Molten Electrolytes and Their Application in Rechargeable Batteries. In: Journal of the Electrochemical Chemistry, 156 (11). A863-A872.
Mitra, Sagar and Lokesh, KS and Sampath, S (2008) Exfoliated graphite–ruthenium oxide composite electrodes for electrochemical supercapacitors. In: Journal of Power Sources, 185 (2). pp. 1544-1549.
Tharamani, CN and Thejaswini, HC and Sampath, S (2008) Synthesis of size-controlled Bi particles by electrochemical deposition. In: Bulletin of Materials Science, 31 (3). pp. 207-212.
Behera, Susmita and Sampath, S and Raj, C Retna (2008) Electrochemical Functionalization of a Gold Electrode with Redox-Active Self-Assembled Monolayer for Electroanalytical Application. In: Journal of Physical Chemistry C, 112 (10). pp. 3734-3740.
Choudhury, NA and Sampath, S and Shukla, AK (2008) Gelatin hydrogel electrolytes and their application to electrochemical supercapacitors. In: Journal of the Electrochemical Society, 155 (1). A74-A81.
Ramesha, GK and Sampath, S (2007) Exfoliated Graphite Oxide Modified Electrode for the Selective Determination of Picomolar Concentration of Lead. In: Electroanalysis, 19 (23). pp. 2472-2478.
Subramanian, S and Sampath, S (2007) Dewetting phenomenon: Interfacial water structure at well-organized alkanethiol-modified gold–aqueous interface. In: Journal of Colloid and Interface Science, 313 (1). pp. 64-71.
Subramanian, S and Sampath, S (2007) Enhanced stability of short- and long-chain diselenide self-assembled monolayers on gold probed by electrochemistry, spectroscopy, and microscopy. In: Journal of Colloid and Interface Science, 312 (2). pp. 413-424.
Subramanian, S and Sampath, S (2007) Adsorption of Zein on Surfaces with Controlled Wettability and Thermal Stability of Adsorbed Zein Films. In: Biomacromolecules, 8 (7). pp. 2120-2128.
Subramanian, S and Sampath, S (2007) Enhanced thermal stability and structural ordering in short chain n-alkanethiol monolayers on gold probed by vibrational spectroscopy and EQCM. In: Analytical and Bioanalytical Chemistry, 388 (1). pp. 135-145.
Prasad, Krishna S and Sandhya, KL and Nair, Geetha G and Hiremath, Uma S and Yelamaggad, CV and Sampath, S (2006) Electrical conductivity and dielectric constant measurements of liquid crystal-gold nanoparticle composites. In: Liquid Crystals, 33 (10). pp. 1121-1125.
Praveen, RS and Daniel, S and Rao, Prasada T and Sampath, S and Rao, Sreenivasa K (2006) Flow injection on-line solid phase extractive preconcentration of palladium(II) in dust and rock samples using exfoliated graphite packed microcolumns and determination by flame atomic absorption spectrometry. In: Talanta, 70 (2). pp. 437-443.
Sarkar, Smita and Sampath, S (2006) Spectroscopic and Spectroelectrochemical Characterization of Acceptor-Sigma Spacer-Donor Monolayers. In: Langmuir, 22 (7). 3396 -3403.
Sarkar, Smita and Sampath, S (2006) Stepwise Assembly of Acceptor-Sigma Spacer-Donor Monolayers: Preparation and Electrochemical Characterization. In: Langmuir, 22 (7). 3388 -3395.
Choudhury, NA and Shukla, AK and Sampath, S and Pitchumanic, S (2006) Cross-Linked Polymer Hydrogel Electrolytes for Electrochemical Capacitors. In: Journal Of The Electrochemical Society, 153 (3). A614-A620.
Devarajan, Supriya and Bera, Parthasarathi and Sampath, S (2005) Bimetallic nanoparticles: A single step synthesis, stabilization, and characterization of Au-Ag, Au-Pd, and Au-Pt in sol-gel derived silicates. In: Journal of Colloid and Interface Science, 290 (1). pp. 117-129.
Subramanian, S and Sampath, S (2005) Effect of chain length on the adhesion behaviour of n-alkanethiol self-assembled monolayers on Au(111): An atomic force microscopy study. In: Pramana-journal of physics, 65 (4). pp. 753-761.
Bandaru, Narasimha Murthy and Sampath, S and Jayaraman, Narayanaswamy (2005) Synthesis and Langmuir Studies of Bivalent and Monovalent $\alpha$-D-Mannopyranosides with Lectin Con A. In: Langmuir, 21 (21). pp. 9591-9596.
Rikhie, J and Sampath, S (2005) Reversible electrochemistry of cytochrome c on recompressed, binderless exfoliated graphite electrodes. In: Electroanalysis, 17 (9). pp. 762-768.
Choudhury, NA and Raman, RK and Sampath, S and Shukla, AK (2005) An alkaline direct borohydride fuel cell with hydrogen peroxide as oxidant. In: Journal of Power Sources, 143 (1-2). pp. 1-8.
Prathima, N and Harini, A and Rai, Neeraj and Chandrashekara, RH and Ayappa, KG and Sampath, S and Biswas, SK (2005) Thermal Study of Accumulation of Conformational Disorders in the Self-Assembled Monolayers of C-8 and C-18 Alkanethiols on the Au(111) Surface. In: Langmuir, 21 (6). pp. 2364-2374.
Kumar, Girish G and Sampath, S (2005) Electrochemical and spectroscopic investigations of a gel polymer electrolyte of poly(methylmethacrylate) and zinc triflate. In: Solid State Ionics, 176 (7-8). pp. 773-780.
Mitra, Sagar and Sampath, S (2005) Alternating Current Conductivity and Spectroscopic Studies on Sol-Gel Derived, Trivalent Ion Containing Silicate-Tetra(ethylene glycol)-Based Composites. In: Macromolecules, 38 (1). pp. 134-144.
Devarajan, Supriya and Vimalan, B and Sampath, S (2004) Phase transfer of Au–Ag alloy nanoparticles from aqueous medium to an organic solvent: effect of aging of surfactant on the formation of Ag-rich alloy compositions. In: Journal of Colloid and Interface Science, 278 (1). pp. 126-132.
Ramesh, P and Bhagyalakshmi, S and Sampath, S (2004) Preparation and physicochemical and electrochemical characterization of exfoliated graphite oxide. In: Journal of Colloid and Interface Science, 274 (1). pp. 95-102.
Kumar, Girish G and Sampath, S (2004) Spectroscopic characterization of a gel polymer electrolyte of zinc triflate and polyacrylonitrile. In: Polymer, 45 (9). pp. 2889-2895.
Somashekarappa, MP and Sampath, S (2004) Sol–gel derived, silicate-phthalocyanine functionalized exfoliated graphite based composite electrodes. In: Analytica Chimica Acta, 503 (2). pp. 195-201.
Ramesh, P and Suresh, GS and Sampath, S (2004) Selective determination of dopamine using unmodified, exfoliated graphite electrodes. In: Journal of Electroanalytical Chemistry, 561 (1-2). pp. 173-180.
Devaprakasam, D and Sampath, S and Biswas, SK (2004) Thermal Stability of Perfluoroalkyl Silane Self-Assembled on a Polycrystalline Aluminum Surface. In: Langmuir, 20 (4). pp. 1329-1334.
Mitra, Sagar and Sampath, S (2004) Electrochemical Capacitors Based on Exfoliated Graphite Electrodes. In: Electrochemical and Solid-State Letters, 7 (9). A264-A268.
Ramesh, P and Sampath, S (2004) Selective Determination of Uric Acid in Presence of Ascorbic Acid and Dopamine at Neutral pH Using Exfoliated Graphite Electrodes. In: Electroanalysis, 16 (10). pp. 866-869.
Ramesh, P and Sampath, S (2003) Electrochemical Characterization of Binderless, Recompressed Exfoliated Graphite Electrodes: Electron-Transfer Kinetics and Diffusion Characteristics. In: Analytical Chemistry, 75 (24). pp. 6949-6957.
Mitra, Sagar and Shukla, AK and Sampath, S (2003) Electrochemical Capacitors Based on Sol-Gel Derived, Ionically Conducting Composite Solid Electrolytes. In: Electrochemical and Solid State letters, 6 (8). A419-A513.
Kumar, Girish G and Sampath, S (2003) Electrochemical characterization of poly(vinylidenefluoride)-zinc triflate gel polymer electrolyte and its application in solid-state zinc batteries. In: Solid State Ionics, 160 (3-4). pp. 289-300.
Kumar, Girish G and Sampath, S (2003) Electrochemical Characterization of a Zinc-Based Gel-Polymer Electrolyte and Its Application in Rechargeable Batteries. In: Journal of the Electrochemical Society, 150 (5). A608-A615.
Ramesh, P and Sivakumar, P and Sampath, S (2003) Phenoxazine Functionalized, Exfoliated Graphite Based Electrodes for NADH Oxidation and Ethanol Biosensing. In: Electroanalysis, 15 (23-24). pp. 1850-1858.
Somashekarappa, MP and Keshavayya, J and Sampath, S (2002) Self-assembled molecular films of tetraamino metal (Co, Cu, Fe) phthalocyanines on gold and silver. Electrochemical and spectroscopic characterization. In: Pure and Applied Chemistry, 74 (9). pp. 1609-1620.
Ramesh, P and Sivakumar, P and Sampath, S (2002) Renewable surface electrodes based on dopamine functionalized exfoliated graphite: NADH oxidation and ethanol biosensing. In: JOURNAL OF ELECTROANALYTICAL CHEMISTRY, 528 (1-2). pp. 82-92.
DSouza, Lawrence and Bera, Parthasarathi and Sampath, S (2002) Silver-palladium nanodispersions in silicate matrices: Highly uniform, stable, bimetallic structures. In: Journal of Colloid and Interface Science, 246 (1). pp. 92-99.
Mitra, S and Shukla, AK and Sampath, S (2001) Electrochemical capacitors with plasticized gel-polymer electrolytes. In: Journal of Power Sources, 101 (2). pp. 213-218.
Ramesh, P and Sampath, S (2001) Electrochemical and spectroscopic characterization of quinone functionalized exfoliated graphite. In: Analyst, 126 (11). pp. 1872-1877.
Bera, Parthasarathi and Mitra, Sagar and Sampath, S and Hegde, MS (2001) Promoting effect of CeO2 in a Cu/CeO2 catalyst: lowering of redox potentials of Cu species in the CeO2 matrix. In: Chemical Communications (10). pp. 927-928.
Shukla, AK and Sampath, S and Vijayamohanan, K (2000) Electrochemical supercapacitors: Energy storage beyond batteries. In: Current Science, 79 (12). pp. 1656-1661.
D'Souza, Lawrence and Sampath, S (2000) Preparation and characterization of silane-stabilized, highly uniform, nanobimetallic Pt-Pd particles in solid and liquid matrixes. In: Langmuir, 16 (22). pp. 8510-8517.
Ramesh, P and Sampath, S (2000) A binderless, bulk modified, renewable surface amperometric sensor for NADH and ethanol. In: Analytical Chemistry, 72 (14). pp. 3369-3373.
Ramesh, P and Sampath, S (1999) Chemically functionalised exfoliated graphite: a new bulk modified, renewable surface electrode. In: Chemical Communications (21). pp. 2221-2222.
Sampath, S and Gandhi, KS (1999) Ammonia synthesized at atmospheric pressure. In: Current Science, 76 (1). pp. 14-15.
Vishnievsky, AI and Sampath, S and Upadhyay, CS (1958) ABOUT FORMS OF ELECTRONIC AND IONIC DEVICES WITH THERMIONIC CATHODES. In: Journal of the Indian Institute of Science, 40 (4). pp. 150-164. | CommonCrawl |
Intuition for what happens to the standard deviation after plugging a normal variable into a function
Let's say I have a normal variable $X$. I feed $X$ into some function, which applies various mathematical operations to it - for instance, addition, subtraction, multiplication, division, $x^r$, $e^x$, trigonometric functions, etc.
What happens to the standard deviation?
Addition and subtraction seem trivial: If you imagine the data plotted as a scatter plot, $X+q$ is just $X$ with the vertical axis moved by $q$. Since standard deviation characterizes the spread of the points, and moving the axis does not change the shape of the plot, these two operations will not change the standard deviation.
By the same logic, multiplication clearly scales the standard deviation by the same amount as the axis. One could literally draw the distribution on paper, mark the standard deviation as a line segment, then erase the axis labels and write the scaled ones and obtain the adjusted standard deviation.
However, for powers this doesn't work: For example, with $e^x$ the points that are farther away from the axis become more spread out, and the ones below become less spread out. In fact, $e^x$ will change the mean and skew of the distribution, and the histogram will look completely different - thus it is not surprising that $std(f(x))$ is not just $f(std(x))$.
For trigonometric functions, my intuition breaks down completely. I don't even know how to interpret it in a useful way. I know I can simply write out the equation describing the normal distribution, apply the trigonometric function, do the algebra and go from there, but intuitively I am stuck.
Are there broad, easily determinable classes of functions that "leave the standard deviation alone" and functions that warp it in strange ways? Is there an easy way to tell, without doing a lot of algebra, if a given function's standard deviation can be easily related to the standard deviation of its argument?
Motivation: In physical sciences, one often ends up some measured quantity being mathematically related to some other measured quantities. For example, the period of a swinging pendulum is approximated by:
$$T \approx 2\pi\sqrt{\frac{L}{g}}$$
This can even be derived by simple unit analysis. But if one wants to experimentally confirm (or reject) this rule, there is a practical challenge besides constructing the experimental apparatus: None of the inputs ($L$ and $g$, though $\pi$ is also problematic, albeit more exotically so) can be known precisely. They must be measured, and the measurement will have some error (often thought to have normal-like character).
The question is, then, how does error in $L$ or $g$ affect the error of the predicted value for $T$ (let's ignore the other problem of actually measuring $T$)? More importantly, can one easily decide this without extensive calculation? Consider, for instance, an experimentalist who finds himself thinking: "Do I really have to go all the way upstairs to fetch the ruler? Will it ruin the whole calculation if I just ballpark the rope length?"
Note how solving the problem analytically is not an option here: You could easily go fetch the ruler in a fraction of the time and be done with it, if that was the best option.
Another example: The Drake equation uses simple multiplication to estimate an unfamiliar variable from well-known quantities (the so called Fermi problem). Supposedly, because the equation is a product, even if there is some variance in the estimates for the parameters, the result of the equation can be very accurate. Again the critical question is (I think), how does the variance in inputs affect the variance of the outputs?
Stated in yet another way, my question is: When applying a mathematical model to empirical data, can you easily tell which measurements really have to be very precise, but which ones are okay if they are off by a bit?
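For instance, applying the standard first-order error-propagation rule to the pendulum formula above, assuming independent errors in $L$ and $g$, would give

$$\left(\frac{\sigma_T}{T}\right)^2 \approx \frac{1}{4}\left(\frac{\sigma_L}{L}\right)^2 + \frac{1}{4}\left(\frac{\sigma_g}{g}\right)^2,$$

so a 2% error in the rope length would perturb the predicted period by only about 1%; this is the kind of back-of-the-envelope statement I am hoping generalizes.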
standard-deviation experiment-design measurement-error
Superbest
The most general way to do this is to transform the variable and compute the standard deviation of the transformed variable.
For example, if $X\sim N(\mu,\sigma^2)$, then $e^X\sim\text{logN}(\mu,\sigma^2)$, which has s.d. $e^{\mu+\sigma^2/2}\sqrt{e^{\sigma^2}-1}$.
However, rather than having to do the transformation, we can also use the law of the unconscious statistician to compute the variance. Or there are a variety of other approaches that can sometimes be used.
Another possibility is to use Taylor series to attempt to approximate the variance of the transformed variable
... and I expect that's what you're actually after.
With that, to first order, the variance of the transformed variable can approximately be written in terms of the variance of the original variable:
$\operatorname{var}\left[f(X)\right]\approx \left(f'(\operatorname{E}\left[X\right])\right)^2\operatorname{var}\left[X\right] = \left(f'(\mu_X)\right)^2\sigma^2_X\,,$
$\operatorname{sd}\left[f(X)\right]\approx \left|f'(\mu_X)\right|\,\sigma_X\,.$
This expression can be used to get some intuitive sense of how the standard deviation changes as we transform the variable.
However, you must exercise some caution -- it's easy to get oneself into a deal of trouble just assuming this will always work.
See also this, especially this part.
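As a quick numerical sanity check of that first-order formula (a minimal simulation sketch, with $f(x)=e^x$ and a smallish $\sigma$ so the approximation has a chance), one can compare the Monte Carlo standard deviation, the delta-method value $e^{\mu}\sigma$, and the exact lognormal value quoted above:

```python
# Compare the simulated sd of exp(X) with the delta-method approximation and the
# exact lognormal sd, for X ~ N(mu, sigma^2).
import numpy as np

mu, sigma = 1.0, 0.1            # small sigma: the first-order approximation should do well
x = np.random.default_rng(0).normal(mu, sigma, size=1_000_000)

sd_mc    = np.exp(x).std()                                        # Monte Carlo sd of f(X)
sd_delta = np.exp(mu) * sigma                                     # |f'(mu)| * sigma
sd_exact = np.exp(mu + sigma**2 / 2) * np.sqrt(np.exp(sigma**2) - 1)

print(f"Monte Carlo: {sd_mc:.4f}   delta method: {sd_delta:.4f}   exact: {sd_exact:.4f}")
```

With a larger $\sigma$ the three values start to drift apart, which is exactly the caution flagged above.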
$\begingroup$ Thank you for the great answer! Both LOTUS and the other method was exactly what I was looking for. $\endgroup$ – Superbest Nov 19 '14 at 20:32
Possible typographical mistake in the definition of the normal curvature
On the book Differential Geometry of Curves and Surfaces by Manfredo P. do Carmo the following definition can be found:
My question is very concrete: is there a typographical mistake in the formula boxed in red?
differential-geometry
Antoni Parellada
No, this is absolutely correct. You're looking at the portion of the curvature vector $k\mathbf n$ that is normal to the surface (i.e., in the direction of $\mathbf N$). That is, you take $(k\mathbf n)\cdot \mathbf N = k(\mathbf n\cdot \mathbf N) = k\cos\theta$.
Ted Shifrin
This complicated setup starts with a surface $S$ in $\mathbb R^3$ and a point $P$ with a normal unit vector to the surface $\vec N_S.$
Each point on the surface has an associated orthogonal vector (shortened in the diagram above to let the vector at $P$ stand out). The surface $S$ is defined over the domain $-1<x<1$ and $-1<y<1$ and is governed by the equation:
$$f(x,y)=-x^2+\cos(x)+\cos(y)$$
The normal vector to the surface at any given point $\vec N_S(P)$ was calculated as:
$$\vec N(x,y)=\left(-\frac{\partial f}{\partial x},\,-\frac{\partial f}{\partial y},\,1\right)=\left(2x+\sin(x),\,\sin(y),\,1\right)$$
On $S$ a space curve $C\in \mathbb R^3$ was parameterized by $t$ with $-1<t<1$ as:
$$C(t)=\left(t,\,t^2,\,f(t,t^2)\right)$$
On this space curve a tangent vector can be defined at each point as the curve derivative:
$$\vec T(t)=(1,2t,-2t-\sin(t)-2t\sin(t^2))$$
In the Frenet-Serret or TNB frame $\vec T$ would be a unit vector, defined as $\vec T=\frac{\vec r'(t)}{\vert \vec r'(t)\vert}=\vec r'(s),$ the latter equality indicating that there is no need to normalize if the curve is parameterized by arc length $(s).$
A second orthogonal vector, called the normal vector to the curve $C$ at $P$, can be calculated as the derivative of the tangent vector, provided the curve is parameterized by arc length:
$$\vec n(s)=\frac{\vec T'(s)}{\vert T'(s)\vert}$$
with $k(s)=\vert T'(s)\vert=\frac{\vert T'(t)\vert}{\vert r'(t)\vert}$ corresponding to the curvature of $C$ at $P.$ However, this is not straightforward to compute, given the square roots involved in normalizing the derivatives. A way to circumvent this problem is to generate a vector via the cross products
$$\vec n= \left( C'(t)\times C''(t)\right)\times C'(t)$$
and then proceeding to normalize it.
This vector can be used to generate the osculating circle, knowing that the curvature can also be calculated as $k(t)=\frac{\vert C'(t)\times C''(t)\vert}{\vert C'(t)\vert^3},$ and that the radius of the osculating circle is $r=1/k.$
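As a small computational check (a sketch using SymPy; the sample value $t=0.5$ is arbitrary), the double cross-product construction of $\vec n$ and the curvature formula above can be evaluated directly for this particular curve:

```python
# Evaluate the unit tangent T, the principal normal n via (C' x C'') x C', and the
# curvature k = |C' x C''| / |C'|^3 for C(t) = (t, t^2, f(t, t^2)),
# with f(x, y) = -x^2 + cos(x) + cos(y).
import sympy as sp

t = sp.symbols('t', real=True)
f = lambda x, y: -x**2 + sp.cos(x) + sp.cos(y)
C = sp.Matrix([t, t**2, f(t, t**2)])

C1 = C.diff(t)                                 # velocity C'(t)
C2 = C1.diff(t)                                # acceleration C''(t)

T = C1 / C1.norm()                             # unit tangent
n = (C1.cross(C2)).cross(C1)                   # un-normalized principal normal
n = n / n.norm()
k = (C1.cross(C2)).norm() / C1.norm()**3       # curvature

t0 = 0.5
T_num, n_num = T.subs(t, t0).evalf(), n.subs(t, t0).evalf()
print("T(0.5) =", list(T_num))
print("n(0.5) =", list(n_num))
print("k(0.5) =", float(k.subs(t, t0)))
print("T.n    =", float(T_num.dot(n_num)))     # ~0: tangent and normal are orthogonal
```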
In the animation $\vec B(t)$ completes the Frenet-Serret triad. $\vec B(t)$ is the binormal unit vector, the cross product of $\vec T$ and $\vec n.$ It is worth noting that since the tangent vector $C'(s)$ is normalized under arc-length parameterization, its derivative $C''(s)$ is orthogonal to it. Together, $\{\vec T, \vec n, \vec B\}$ form an orthonormal basis for $\mathbb R^3.$
The derivatives of $\vec T$ and $\vec B$ are in the span of $\vec n:$ The derivative of the tangent vector can be expressed as $T'(s)=k(s) \vec n.$ As a scalar product, $k(s) =\langle \vec n(s), T'(s) \rangle.$ Similarly, the derivative of the binormal vector $\vec B$ can be expressed as $B'(s)=\tau (s) \vec n,$ where $\tau (s)$ is the torsion of the curve $C$ at $s,$ which can also be expressed as a scalar product as $\tau (s) =\langle \vec n(s), B'(s) \rangle.$ That it makes sense for $\tau$ to denote the torsion of the curve can be seen by noticing how it vanishes identically when the curve is planar:
Here is an animation to illustrate the Frenet triad moving along the curve in relation to vector field $\vec N:$
The normal vector to the curve is in the span of the normal vector to the surface at any points along a geodesic curve:
Delving into the topic of the OP, $k_n=k\cos\theta$ is a scalar value with $\theta$ corresponding to the angle between $\vec N_S(t)$ and $\vec n(t):$
Since both $\vec N$ and $\vec n$ are unit vectors, $\cos(\theta)=\langle \vec N, \vec n\rangle$ is given by their scalar product.
The unit vector $\vec n$ multiplied by the scalar value of the curvature $k$ yields a vector $k\vec n,$ whose projection on $\vec N$ is $k_n\vec N:$
$k_n=k\cos\theta$ is called the normal curvature of $C$ at $P.$
The geodesic curvature, $k_g,$ is the curvature of the curve projected onto the surface tangent plane. The geodesic curvature measures how far the curve is from being a geodesic:
And the punch line of the story is that if a vector $\vec v \in T_P S$ has norm $\vert v \vert=1,$ the second fundamental form applied to the vector, i.e. $\vec v^\top \mathbf{{II}_P} \vec v$ equals the normal curvature of any curve through $P$ at velocity $\vec v.$
The second fundamental form corresponds to the Hessian of a surface with a chart $\left(u,v, h(u,v)\right),$ while the trace of the Hessian is the Laplacian. This makes intuitive sense, since the normal vector $\vec N$ at a point of the graph of a function is given by the gradient (vector of first derivatives), $\nabla F(x,y,z)=\left(\frac{\partial f}{\partial x}, \frac{\partial f}{\partial y}, -1\right)(*),$ and the second fundamental form involves the derivative of the normal, $dN.$ The second fundamental form can be expressed as a symmetric matrix, which when applied to the velocity $\alpha'(0)$ of a curve $\alpha$ passing through $p\in S$ at $t=0$ will result in (see here):
$$\begin{align} \mathrm{II}_p(\alpha'(0),\alpha'(0))&=-\langle dN_p(\alpha'(0)),\alpha'(0) \rangle\\ &=-\langle N'(0),\alpha'(0) \rangle\\ &\underset{*}{=}\bbox[5px,border:2px solid black]{\langle N(p),\alpha''(0) \rangle}\\ &=\langle N(p),k(p)\, \vec n(p) \rangle\\ &=k\langle \vec N_p,\vec n(p)\rangle\\ &=k\cos\theta\\ &=k_n(p) \end{align}$$
The boxed result states that the acceleration of curve $\alpha$ at point $p,$ dotted with the normal to the surface at the same point, equals the second fundamental form applied to $\alpha'(0);$ this is the content of Meusnier's theorem: the normal curvature depends only on the tangent direction.
$(*)$ in an arc length parameterized curve: $$\begin{align} &\langle N(s),\alpha'(s) \rangle =0\\ &\implies \langle N'(s),\alpha'(s) \rangle+\langle N(s),\alpha''(s) \rangle=0\\ \end{align}$$
The eigenvectors associated with the minimum $k_1(p)$ and maximum $k_2(p)$ eigenvalues of $\mathrm{II}_p$ restricted to vectors of norm $1$ in $T_pS$ form an orthonormal basis of $T_pS,$ because $\mathrm{II}$ is a symmetric matrix. $\{k_1,k_2\}$ are the principal curvatures of the surface at $p.$
A unit vector $\vec v\in T_pS$ can thus be written in terms of the angle $\varphi$ it makes with these orthonormal basis vectors:
$$\vec v=\cos \varphi\, \vec e_1 + \sin \varphi\, \vec e_2$$
and applying the second fundamental form, as a quadratic form, to $\vec v$ (using $dN_p(\vec e_i)=-k_i\,\vec e_i$):
$$\begin{align} k_n=\mathrm{II}_p(\vec v)&=-\langle dN_p(\vec v), \vec v \rangle\\ &=-\langle dN_p(\cos \varphi\, \vec e_1 + \sin \varphi\, \vec e_2),\; \cos \varphi\, \vec e_1 + \sin \varphi\, \vec e_2 \rangle\\ &=\langle k_1\cos \varphi\, \vec e_1 + k_2\sin \varphi\, \vec e_2,\; \cos \varphi\, \vec e_1 + \sin \varphi\, \vec e_2 \rangle\\ &=\bbox[5px,border:2px solid black]{k_1\cos^2 \varphi + k_2\sin^2 \varphi } \end{align}$$
which is Euler's curvature formula.
The surface $z=f(x,y)$ is identical to $F(x,y,z)=0,$ where $F(x,y,z)=f(x,y)-z.$ Hence $\left(\frac{\partial}{\partial x} F, \frac{\partial}{\partial y} F, \frac{\partial}{\partial z} F \right)=\left(\frac{\partial}{\partial x} f(x,y), \frac{\partial}{\partial y} f(x,y),-1 \right).$
Why does ice water get colder when salt is added?
It is well known that when you add salt to ice, the ice not only melts but will actually get colder. From chemistry books, I've learned that salt will lower the freezing point of water. But I'm a little confused as to why it results in a drop in temperature instead of just ending up with water at 0 °C.
What is occurring when salt melts the ice to make the temperature lower?
cspirou
I think it's related to Raoult's law. – glepretre Jun 20 '14 at 6:57
There are some assumptions here, I think: the salt is NaCl, and the system is at thermal equilibrium (ice and water at 0 °C throughout). Also, the system is isolated (i.e. the surroundings are not heating it up or cooling it down). – Karsten Theis Aug 13 '19 at 7:22
When you add salt to an ice cube, you end up with an ice cube whose temperature is above its melting point.
This ice cube will do what any ice cube above its melting point will do: it will melt. As it melts, it cools down, since energy is being used to break bonds in the solid state.
(Note that the above point can be confusing if you're new to thinking about phase transitions. An ice cube melting will take up energy, while an ice cube freezing will give off energy. I like to think of it in terms of Le Chatelier's principle: if you need to lower the temperature to freeze an ice cube, this means that the water gives off heat as it freezes.)
The cooling you get, therefore, comes from the fact that some of the bonds in the ice are broken to form water, taking energy with them. The loss of energy from the ice cube is what causes it to cool.
chipbuster
…so not all of the water is getting colder. Some of it is getting warmer? – Neil G Feb 18 '15 at 2:49
...maybe you could give your opinion on the question of whether an ice cube that is heated will invest all the absorbed energy into the phase transition and thereby not heat its core up until it melts, or whether a temperature gradient will form like in any other body. – cirko Jul 28 '15 at 10:29
Correct answer, but terminology can be confusing! If water "gives off heat as it freezes", it sounds like the water is heating something up - well, it is heating up the cooling device, but it is clearer to say that heat is withdrawn from the water to make it freeze. IMHO – James Gaidis Aug 11 '19 at 13:22
The OP said it more clearly: the freezing point of salt water is lower than the freezing point of pure water. Saying that the melting point of ice cubes changes when adding salt is a bit strange because we still have pure water in the ice cube. It is the melted water that is no longer pure. – Karsten Theis Aug 13 '19 at 7:24
@KarstenTheis I have removed my poor comment. A better explanation is here chemistry.stackexchange.com/questions/116302/… – porphyrin Aug 13 '19 at 20:32
We know that melting or freezing is an equilibrium process. The energy that is required to melt an ice cube will not contribute to elevating its temperature until all the solid water is molten.
If we take two ice cubes and add salt to one of them, then put each of them at room temperature, both ice cubes will absorb energy from the surroundings, and this energy, as we said, will contribute to breaking the bonds between water molecules.
The cube to which no salt has been added has a melting point of $0~\mathrm{^\circ C}$, so if we measure its temperature during melting it will remain zero until all the ice is molten. For the ice cube to which we have added salt, the added salt lowers the melting and freezing points of water because it lowers the vapor pressure of water. This ice cube will absorb energy from the environment to help break bonds between water molecules. The salt dissolves in the melted portion of the ice, and the resulting salt solution has a lowered freezing point, so the equilibrium between the solid phase and the aqueous phase is shifted towards the liquid phase, since such a solution will freeze at, say, $-2~\mathrm{^\circ C}$. Since both phases are in close contact, the ice will absorb energy from the salt solution, reducing the solution's temperature to $-2~\mathrm{^\circ C}$ to maintain the equilibrium. When all the ice is molten we end up with a salt solution at, say, $-1.5~\mathrm{^\circ C}$, because the solution is now more dilute. After that, it will start absorbing heat from the room and reach zero and above. So, in conclusion, that is how salt melts ice.
The question was about why the temp would drop, not how salt melts ice, so the concluding "that is how salt melts ice" is misleading, but the text does add some explanation. However, it might be the reason why the other answer got more upvotes, as it more directly focussed on the temp decrease. – redfox05 Mar 12 '18 at 16:53
A mixture of water and ice stabilizes at the freezing point of water.
If the ice were any colder, it would absorb heat from the water, in the process raising its own temperature while freezing some part of the water.
If the water is any hotter, it will cool down by melting some of the ice.
This works because ice thawing is endothermic; energy (heat) is used up to turn solid into liquid even though the temperature is staying the same.
The freezing point of water is $0 \pu{°C}$, so water-ice slush stays at $0 \pu{°C}$. If it was lower, it would stabilize at the lower temperature. By adding salt, you are lowering the freezing temperature. The mixture stabilizes there and is colder.
Superbest
Not as technical as the higher voted answers, but done in a very easy (non-chemist) to understand way. Combined with the others, this answers all my questions, thanks. – redfox05 Mar 12 '18 at 16:55
When you dissolve $\ce{NaCl}$ in water, energy has to be taken from the system to break up the salt's crystal structure so it can dissolve in water. This is the reason the water gets colder: the salt uses energy from the water to dissolve. Now let's look at why ice melts when salt is added. This is based on a so-called colligative property. These properties depend only on the amount of dissolved substance. When you add particles to a solvent, its vapor pressure is lowered. This results in a higher boiling point (think of salting water for cooking) and a lower freezing temperature for the solution.
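As a rough numerical illustration of this colligative effect, the ideal freezing-point-depression formula $\Delta T_f = i\,K_f\,m$ can be evaluated directly (the mixture below and the ideal van 't Hoff factor $i=2$ for NaCl are assumptions for illustration; the formula is only reliable for dilute solutions, and a saturated brine actually bottoms out near the eutectic at about −21 °C):

Kf = 1.86                  # cryoscopic constant of water, K*kg/mol
i = 2                      # idealized van 't Hoff factor for NaCl (complete dissociation)
grams_salt, grams_water = 10.0, 100.0                       # example mixture (assumed)
molality = (grams_salt / 58.44) / (grams_water / 1000.0)    # mol NaCl per kg of water
delta_Tf = i * Kf * molality
print(f"freezing point lowered by about {delta_Tf:.1f} K")  # roughly 6.4 K for this mixture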
I hope this gives a starting point; for further reading, consult books on physical chemistry (e.g., Atkins).
PythonicChemist
Asking why
When you ask why, you want to know about causality. If I ask "why does the cold pack show a decrease in temperature" and the answer is "because the reaction is endothermic", this might be considered a tautology. After all, endothermic means that energy is needed, and this energy can come from the surrounding, lowering the temperature.
As the OP states, adding salt lowers the freezing point of the liquid. The system is no longer at equilibrium, and some ice will melt in an endothermic process. As a consequence, the temperature drops and the salt water gets diluted. The melting process stops when salt concentration and temperature are matched again, i.e. the freezing point of the liquid is equal to the temperature of the system.
It is well known that when you add salt to ice, the ice not only melts but will actually get colder.
The melting process is at the interface of liquid and solid, so both the solution and the ice will get colder.
From chemistry books, I've learned that salt will lower the freezing point of water. But I'm a little confused as to why it results in a drop in temperature instead of just ending up with water at 0 °C.
So the question is given that some ice melts, why does the temperature drop. Saying that ice melting is an endothermic process maybe does not fully answer the question (explain the causality).
In terms of kinetics, the salt does not melt the ice. Instead, it lowers the rate of water freezing. The net effect is that ice melts. At the molecular level, according to https://www.nyu.edu/pages/mathmol/textbook/info_water.html, "In liquid water each molecule is hydrogen bonded to approximately 3.4 other water molecules. In ice each molecule is hydrogen bonded to 4 other molecules." So upon melting, water loses about half a hydrogen bond. Also, the remaining hydrogen bonds might have less ideal distances and angles. So that's what makes the process endothermic. The NaCl has little role in the energetics, as any other solute has pretty much the same effect (colligative property).
Karsten Theis
When you add salt to the ice it melts. I won't go into why, since you didn't ask that; all you need to know is that it does. If you don't believe me, see:
http://science.howstuffworks.com/nature/climate-weather/atmospheric/road-salt.htm
Moving on, whenever a substance undergoes a phase change its temperature does not rise and usually stays relatively constant. If you look at a graph of most substances undergoing different phase changes (i.e. solid to liquid to gas) you will observe regions that are 'flat' or horizontal; this is because the energy is no longer causing a rise in temperature but a change in state. Since you have dissolved salt in the ice, the freezing point is lowered (note that the freezing and melting points of any substance are the same; they can be seen as mirrors of one another). This means that water can now exist at lower temperatures without turning into ice, or in other words the ice will begin to melt at lower temperatures, which could account for why the temperature would LOWER, as it no longer needs to reach as high a temperature to begin to melt. I haven't explained it very clearly, but I hope you understand it; consulting a physics and chemistry textbook, as the person above has suggested, is a good idea.
STUDENT_PCB
Melting is endothermic and freezing is exothermic. We never observe water warm up when it freezes because more energy has to be lost from the system before more water freezes. When water freezes in cold air, the release of heat actually slows down the freezing. When you add salt to a mixture of water and ice, it causes more ice to melt by depressing the freezing point and not by adding internal energy, so the mixture actually gets colder.
Timothy
Let's do a thought experiment to see what is going on. Combine 500 mL of pure H2O and 500 grams of ice, each at 0 C, in a perfectly insulated container. Then add 300 grams of NaCl (also at 0 C). At first, not all the NaCl will dissolve, because NaCl is only soluble 26% in water at 0 C. (The temperature of the water will decrease slightly because of the energy required to dissolve the salt: about 1 kcal/mole of salt.)
The melting point of the ice cube is still 0 C. However, the liquid surrounding it, although at approximately 0 C, is a near-saturated salt solution, not pure water.
Water molecules from the ice cube will diffuse into the salt solution, diluting it. Energy is required for the transition from solid ice to liquid water: the amount of heat required to melt ice is about 1.44 kcal/mole, i.e. 334 J per gram (heat of fusion). This energy comes from the salt solution, reducing its temperature. The ice cube will continue to melt (turning from solid to liquid) as long as it is not in equilibrium with the solution. The ice cube does not melt because the surrounding liquid has a lower freezing point; it melts because the surrounding solution is less than 100% water. The surrounding solution is already colder than 0 C and the ice cube is still pure solid water at 0 C. In other words, the driving force is a non-equilibrium of concentration. At the beginning of the experiment, there was thermal equilibrium (everything at 0 C), but a concentration non-equilibrium forced the melting of ice by heat energy from the solution. Freezing point depression characterizes a solution, but is not a driving force.
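A back-of-the-envelope version of this energy bookkeeping, using assumed round-number constants (334 J/g for the heat of fusion, an approximate specific heat for concentrated brine, and an arbitrary 50 g of melted ice), looks like this:

L_fusion = 334.0        # J per gram of ice melted (latent heat of fusion)
c_brine = 3.4           # J per gram per kelvin, rough value for a concentrated NaCl solution
mass_solution = 800.0   # grams of liquid: 500 g of water plus ~300 g of dissolved salt

grams_melted = 50.0     # suppose 50 g of ice diffuses into the solution and melts
delta_T = grams_melted * L_fusion / (mass_solution * c_brine)
print(f"melting {grams_melted:.0f} g of ice cools the solution by about {delta_T:.1f} K")
# about 6 K for these numbers; in reality melting continues until the temperature of the
# solution matches the freezing point of the progressively more dilute brine

The numbers are only meant to show the scale of the effect: a modest amount of melting is enough to pull the whole bath well below 0 C.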
James Gaidis
No. Unfortunately we seem to all be forgetting a fundamental property of ice - I kicked myself when I realized it. Think of a large chunk of ice taken out of your home freezer, sitting in a bucket, floating in its own melt-water. All the liquid water surrounding that ice is at $\pu{0 ^\circ C}$. The extreme surface of that ice, that is exposed to the meltwater, (the layer that is about to transition to liquid) is also at $\pu{0 ^\circ C}$. BUT the solid (non-transitioning) interior of the ice chunk is still at about the temperature of the freezer that it came from, about $\pu{-18 ^\circ C}$! So we shouldn't think of all the water in the bucket (liquid & solid) as being at $\pu{0 ^\circ C}$. Instead, most of the frozen water is much colder than $\pu{0 ^\circ C}$. So now we see that within the chunk of ice there is a significant temperature gradient. The exact center is coldest (at perhaps $\pu{-18 ^\circ C}$). Moving out from there the temperature increases until you reach the surface where the ice has warmed to a temperature of $\pu{0 ^\circ C}$ and has begun melting. Now we can readily see why adding salt to the liquid surrounding the ice chunk would cause a lowering of the temperature of that salty melt-water. By adding salt you have lowered the melting point. The surface layer of the ice that was $\pu{0 ^\circ C}$ would rapidly melt because the salt is in close contact with it and acts on it such that the ice surface is now much too warm to maintain the solid state. Underneath that quickly melting ice layer is another layer at say $\pu{-1 ^\circ C}$. The salt would be able to act on that colder ice in turn and cause it also to melt while maintaining that negative temperature creating a small amount of $\pu{-1 ^\circ C}$ briny water. This below-zero melting action caused by the briny liquid would continue toward a new temperature equilibrium (between the solid surface and the liquid) somewhere below $\pu{0 ^\circ C}$ in accordance with the salinity level of the melt-water.
To warm up one gram of ice from -18 to zero degree celsius requires about 36 joules. To melt the same amount takes about 334 joules. So even if the ice were colder than zero degrees celsius, the effect would be small compared to the effect of melting some ice. The question, admittedly, is a bit vague about the starting conditions, but a water/ice slush is pretty much almost zero degrees celsius throughout. – Karsten Theis Aug 13 '19 at 7:31
So what is the point of using salt on ice in an ice cream making machine? – Drew Aug 14 '19 at 22:41
It would be quite the feat to debunk all those ice cream makers with nothing but a thermometer and bowl of brine slush - heck, an elementary school child could do it. Challenge taken! – Drew Aug 14 '19 at 23:23
My comment might have been unclear: Pure water/ice slush is zero degrees celsius. Ice/brine slush is much colder (about -21 degree celsius, which is why our ice boxes are still set to that temperature to reproduce the conditions in an old fashioned ice box). – Karsten Theis Aug 15 '19 at 7:59
So points not in dispute: 1) Ice / water slush = 0 degrees. 2) ice / water / salt = below 0. Question in dispute: How does the ice slush temperature go FROM 0 degrees TO sub-0 when salt is added? My answer: The input into the system of sub-0 temps is the sub-0 ice which the salt melts while it is still below 0. Instead, everyone here seems to say that the sub-0 ice has no role / is irrelevant to the drop in temp below 0. You all say that the entire source of the loss in temperature of the water slush upon adding salt is a chemical reaction between the salt and the water. – Drew Aug 16 '19 at 23:16
December 2013, 2(4): 733-740. doi: 10.3934/eect.2013.2.733
Locally smooth unitary groups and applications to boundary control of PDEs
Stephen W. Taylor 1,
Mathematics Department, The University of Auckland, Private Bag 92019, Auckland, New Zealand
Received July 2013 Revised September 2013 Published November 2013
Let $\mathcal{P}$ be the projection operator for a closed subspace $\mathcal{S}$ of a Hilbert space $\mathcal{H}$ and let $U$ be a unitary operator on $\mathcal{H}$. We consider the questions
1. Under what conditions is $\mathcal{P}U\mathcal{P}$ a strict contraction?
2. If $g$, $h\in \mathcal{S}$, can we find $f\in \mathcal{H}$ such that $\mathcal{P}f=g$ and $\mathcal{P}Uf=h$?
The results are abstract versions and generalisations of results developed for boundary control of partial differential equations. We discuss how these results can be used as tools in the direct construction of boundary controls.
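A toy finite-dimensional analogue of the two questions can be checked numerically; the sketch below (Python/numpy, with the dimensions, the random unitary and the choice of $\mathcal{S}$ as a coordinate subspace all being illustrative assumptions rather than the paper's infinite-dimensional setting) estimates $\Vert \mathcal{P}U\mathcal{P}\Vert$ and solves for an $f$ with $\mathcal{P}f=g$ and $\mathcal{P}Uf=h$:

import numpy as np
rng = np.random.default_rng(1)

n, k = 8, 3                                    # dim(H) = 8, dim(S) = 3 (toy sizes)
U, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))   # random unitary
P = np.zeros((n, n)); P[:k, :k] = np.eye(k)    # orthogonal projection onto S = span(e1,...,ek)

# Question 1: largest singular value of PUP (strictly below 1 for a generic U)
print(np.linalg.norm(P @ U @ P, 2))

# Question 2: given g, h in S, look for f with Pf = g and PUf = h
g = np.zeros(n, complex); g[:k] = rng.normal(size=k)
h = np.zeros(n, complex); h[:k] = rng.normal(size=k)
A = np.vstack([P, P @ U])                      # stack the two linear constraints
f, *_ = np.linalg.lstsq(A, np.concatenate([g, h]), rcond=None)
print(np.linalg.norm(P @ f - g), np.linalg.norm(P @ U @ f - h))   # both ~0 when solvable

For a generic random U both residuals come out near zero; the interest of the paper lies in when and how such constructions persist for unitary groups on an infinite-dimensional $\mathcal{H}$.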
Keywords: Boundary control, smoothing, unitary group.
Mathematics Subject Classification: Primary: 35Q40, 35Q41, 35Q93; Secondary: 35Nxx, 35Lx.
Citation: Stephen W. Taylor. Locally smooth unitary groups and applications to boundary control of PDEs. Evolution Equations & Control Theory, 2013, 2 (4) : 733-740. doi: 10.3934/eect.2013.2.733
Reduction of rounding noise and lifting steps in non-separable four-dimensional quadruple lifting integer wavelet transform
Fairoza Amira Binti Hamzah (ORCID: orcid.org/0000-0002-2554-2709)1,
Sayaka Minewaki1,
Taichi Yoshida2 &
Masahiro Iwahashi1
The wavelet transform (WT)-based JPEG 2000 is a standard for the compression of digital images that uses a separable lifting structure in which a multidimensional image signal is transformed separately along its horizontal and vertical dimensions. A non-separable three-dimensional (3D) structure is used to minimize the number of lifting steps in existing methods and can reduce the delay between input and output, since the lifting steps are calculated in cascade. This structure reduces rounding noise and the number of steps of the lifting scheme in the transform. The non-separable 3D structure in the 5/3-type transform for lossless coding reduces rounding noise, but rounding noise increases in the 9/7-type transform for lossy coding. A combination of 2D and 3D non-separable structures for the 4D integer WT has been proposed to solve this problem, but the original filter arrangement needs to be preserved to reduce rounding noise. Therefore, in this study, a non-separable 2D structure for the integer implementation of a 4D quadruple lifting WT with a 9/7 filter is proposed. The proposed wavelet transform has the same output signal as the conventional separable structure except for the rounding noise. As the order of the original lifting scheme is preserved, rounding noise in the pixels of the decoded image can be significantly reduced, and the upper bound on the quality of lossy decoded 4D medical images can be improved.
Discrete cosine transform (DCT)-based digital image signal compression was superseded by the discrete wavelet transform (DWT)-based JPEG 2000 as the standard used to compress digital images [1]. The JPEG 2000 restricts the user's choice to two wavelet transforms—Daubechies 9/7 for lossy compression [2] and the 5/3 LeGall wavelet [3], which has rational coefficients for reversible or lossless compression. The JPEG 2000 also supports arbitrary transform kernels and specifies that they should be implemented by using a lifting scheme [4].
Recent advances in multidimensional image data have enhanced the importance of research on suitable compression methods. Digital multimedia technologies have progressed from one-dimensional (1D) audio signals, and 2D and 3D image signals, to 4D signals. The medical imaging industry has also progressed to a filmless environment where the amount of digital data that needs to be managed presents a significant challenge. Four-dimensional images are increasingly being collected and used in clinical and research applications, such as 4D magnetic resonance imaging (MRI), computed tomography (CT), ultrasound, and functional MRI. Four-dimensional medical images have had a significant influence on the diagnosis of diseases and surgical planning [5]. An image slice resolution of 512 × 512 has been the minimum standard, but nowadays, state-of-the-art scanning systems can output image slices at spatial resolutions of 1024 × 1024 or more at increasing pixel bit depths [6]. Limitations on storage space and transmission bandwidth, on the one hand, and the growing size of medical image datasets, on the other, have spurred research on the design of ad hoc tools. The increasing demand for efficiently storing and transmitting digital medical datasets has triggered investigations into multidimensional and dynamic image compression. Thus, a number of studies have examined the compression of 4D medical images, such as [7,8,9].
By adopting the Joint Photographic Experts Group (JPEG) international standard, a class of separable 2D WT has been broadly developed for various applications designed to efficiently compress digital still images. As its transfer function is composed of the product of a 1D transfer function in two spatial dimensions, it can inherit the legacy of previously designed 1D structures suitable for hardware implementation [10, 11]. It can also feature regularity and low sensitivity to various kinds of noise [12]. Non-separable structures have been primarily introduced to enhance the accuracy of prediction by adapting to the local context of neighboring pixels [13, 14]. Furthermore, several studies on reducing hardware complexity by introducing parallel processing to image coding were reviewed in [15] and a parallelization of the 2D fast wavelet transform was proposed in [16]. Directionality has recently been utilized in a generalized poly-phase representation [17,18,19] with the aim of designing adaptive high-pass filters of wavelet transforms.
A new class of non-separable 2D structures has been reported in [20,21,22], where the transfer function can be expressed as a product of four 1D functions. The transform based on this structure is compatible with the separable transform. The non-separable structure is not a cascade of instances of 1D signal processing in a 1D structure, but requires multidimensional memory access. This structure can reduce the total number of steps of the lifting scheme and the rounding operations therein.
Various types of wavelet transforms have been reported to analyze the geometry of 4D images [23], 4D hyperspectral images [24], 4D medical volumetric data [8, 9], 4D light field data [25], and 4D color images [26]. However, most of them use the separable 4D WT that contains a large amount of rounding noise. Therefore, a non-separable 3D integer WT was proposed in [27] to overcome its limitations. Unfortunately, this was limited to a double-lifting integer WT with a 5/3 filter especially applied for lossless coding. A non-separable quadruple 3D WT with a 9/7 filter was subsequently proposed in [28] for lossy coding. Nevertheless, unlike in the double-lifting WT, the variance of rounding noise increased in the quadruple lifting WT even though the number of lifting steps decreased. Rounding noise in the transform can reduce the efficiency of the lossy coding structure. This paper is the first to use a non-separable 4D quadruple lifting integer WT with the aim of reducing rounding noise inside the transform as well as improving its coding performance. Note that a part of this paper was presented in [29].
This paper focuses on lossy coding of 4D signals using the 9/7-type transform based on the quadruple lifting steps, and a reduction in rounding noise in the integer implementation of the transform is achieved. As a lifting step needs to wait for the results of calculations from the previous lifting step, many sequential lifting steps incur a long delay between input and output. The real numbers assumed as signal values inside the transform are rounded into finite-length rational numbers. Shorter lengths imply lower computational load but more rounding noise. The space needed for memory storage can be reduced in a tradeoff with rounding noise [30].
This paper proposes a non-separable 2D quadruple lifting structure for 4D input signals to deal with the problem of degradation in image quality due to integer implementation. It has the advantage that its output signals, apart from rounding noise, are identical to those of a conventional transform the transfer function of which is a product of 16 1D transfer functions. Unlike the prevalent 3D quadruple lifting structure, the order of lifting steps in the original separable 4D transform is preserved. Thus, the total rounding noise is reduced even though the total number of rounding operators remains the same as in prevalent methods. Experiments confirmed that the total rounding noise observed in each frequency band of the decoded images was significantly smaller. The upper bounds of the quality-decoded images in lossy coding mode also improved.
The remainder of this paper is organized as follows: Section 2 introduces the two types of WT and the lifting structure in the 1D WT. This structure is extended to the 4D case in Section 3, where the separable 4D structure is presented as the "existing I" method. The non-separable structure is introduced in Section 3.2. It uses a 3D structure for the 4D WT and is referred to as the "existing II" method. In Section 4, the proposed methods are introduced and compared with the existing methods. They are a combination of non-separable 2D and 3D structures, called the "proposed I" method, and a non-separable 2D structure, called the "proposed II" method. All methods are experimentally compared in terms of various aspects using six input signals in Section 5. The conclusions of this paper are detailed in Section 6.
Wavelet transform
Figure 1 shows the forward and backward transforms of the integer WT developed for the 5/3-type transform for lossless coding of a discrete 1D signal in JPEG 2000 [31]. Figure 1a shows the forward transform of the integer WT and Fig. 1b shows its backward transform. It is composed of two lifting steps. The input signal X is down-sampled and fed into the forward transform, where it is transformed into a low-frequency band signal YL and a high-frequency band signal YH. A1 and A2 are the filters of the lifting steps, and YL and YH are coded with an entropy encoder to generate a bit stream for storage and communication. The band signals are then decoded and inversely transformed to obtain the reconstructed signal X.
The 1D integer WT for 5/3-type transform. a Forward transform. b Backward transform
Figure 2 shows a 9/7-type transform developed for lossy coding. Two more lifting steps and scaling with a constant k are added in this type. This paper focuses on the 9/7-type transform for lossy coding of 4D signals. A problem related to the integer implementation of the transform is addressed here.
The integer WT of the 9/7-type transform for lossy coding
In detail, the input signal x(n), n = 0,1, ···, N-1 is divided into two groups x0(m) and x1(m), m = 0,1,···, M-1, M = N/2. It is expressed with the z transform as
$$ {X}_c(z)=\downarrow 2\left[{z}^cX(z)\right],\kern0.75em c\in \left\{0,1\right\}, $$
$$ \downarrow 2\left[X(z)\right]=\frac{1}{Q}{\sum}_{p=0}^{Q-1}X\left({z}^{\frac{1}{Q}}{W}_Q^p\right),\kern1em {W}_Q={e}^{\frac{j2\pi }{Q}}, $$
where Q = 2 and
$$ X(z)={\sum}_{n=0}^{N-1}x(n){z}^{-n}, $$
Secondly, the first lifting step is applied as
$$ {X}_1^{(1)}(z)={X}_1(z)+R\left[{A}_1(z){X}_0(z)\right], $$
and the second lifting is applied as
$$ {X}_0^{(2)}(z)={X}_0(z)+R\left[{A}_2(z){X}_1^{(1)}(z)\right], $$
where A1(z) and A2(z) are filters given as
$$ \left[\begin{array}{c}{A}_1(z)\\ {}{A}_2(z)\end{array}\right]=\left[\begin{array}{c}{h}_1\left(1+{z}^{+1}\right)\\ {}{h}_2\left(1+{z}^{-1}\right)\end{array}\right], $$
Finally, the frequency band signals are generated as
$$ \left[\begin{array}{c}{Y}_L(z)\\ {}{Y}_H(z)\end{array}\right]=\left[\begin{array}{c}{X}_0^{(2)}\\ {}{X}_1^{(1)}\end{array}\right], $$
$$ {Y}_b(z)={\sum}_{m=0}^{M-1}{y}_b(m){z}^{-m},\kern0.75em b\in \left\{L,H\right\}, $$
Note that R[ ] denotes the rounding operator that truncates a real-valued pixel value to an integer. Calculation in a lifting step starts after the calculation results of the previous step have been obtained. The greater the number of lifting steps, the higher the latency (or delay). Therefore, the authors reduce the total numbers of lifting steps and rounding operators in the 4D integer WT in the 9/7-type transform. Note that this paper focuses on reducing rounding noise in the transform to increase the coding performance for 4D data in lossy mode.
In the 5/3-type transform, the coefficient values are given as
$$ \left[\begin{array}{ccc}{h}_1& {h}_3& {k}^{-1}\\ {}{h}_2& {h}_4& k\end{array}\right]=\left[\begin{array}{ccc}-\frac{1}{2}& 0& 1\\ {}\frac{1}{4}& 0& 1\end{array}\right], $$
Lossless reconstruction is guaranteed since the scaling factors $k^{-1}$ and $k$ are both 1.
The 9/7 type transform has two more lifting steps and scaling. Namely, the third lifting step
$$ {X}_1^{(3)}(z)={X}_1^{(1)}(z)+R\left[{A}_3(z){X}_0^{(2)}(z)\right], $$
and the fourth lifting step is applied as
$$ {X}_0^{(4)}(z)={X}_0^{(2)}(z)+R\left[{A}_4(z){X}_1^{(3)}(z)\right], $$
where $A_3(z)={h}_3\left(1+{z}^{+1}\right)$ and $A_4(z)={h}_4\left(1+{z}^{-1}\right),$ and the coefficient values are
$$ \left\{\begin{array}{l}{h}_1=-1.586134342059924\\ {}{h}_2=-0.052980118572961\\ {}{h}_3=0.882911075530934\\ {}{h}_4=0.443506852043971\\ {}k=1.230174104914001\end{array}\right. $$
Finally, the frequency band signals are generated with scaling as
$$ \left[\begin{array}{c}{Y}_L(z)\\ {}{Y}_H(z)\end{array}\right]=\left[\begin{array}{c}R\left[{k}^{-1}{2}^{-F}{X}_0^{(4)}\right]\\ {}R\left[{k}^{+1}{2}^{-F}{X}_1^{(3)}\right]\end{array}\right], $$
Note that the input signal is scaled with $2^F$ beforehand as shown in Fig. 2. In the integer implementation, F is set as a positive number. The smaller F is, the shorter the bit depth of the signals inside the transform will be.
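To make the lifting steps above concrete, the following Python sketch implements one level of the 1D quadruple-lifting 9/7 integer WT with a rounding operator in every lifting step (the edge-replication boundary handling and the choice F = 4 are simplifying assumptions, not the symmetric extension specified by JPEG 2000):

import numpy as np

# Lifting coefficients of the 9/7-type transform (values as given above)
h1, h2, h3, h4 = -1.586134342059924, -0.052980118572961, 0.882911075530934, 0.443506852043971
k = 1.230174104914001

def forward_97_integer(x, F=4):
    x = np.asarray(x, dtype=np.int64) << F                       # pre-scale the input by 2^F
    x0, x1 = x[0::2].copy(), x[1::2].copy()                      # even / odd polyphase channels
    nxt = lambda a: np.append(a[1:], a[-1])                      # a(n+1), edge replication
    prv = lambda a: np.insert(a[:-1], 0, a[0])                   # a(n-1), edge replication
    x1 = x1 + np.round(h1 * (x0 + nxt(x0))).astype(np.int64)     # 1st lifting, A1 = h1(1+z)
    x0 = x0 + np.round(h2 * (x1 + prv(x1))).astype(np.int64)     # 2nd lifting, A2 = h2(1+z^-1)
    x1 = x1 + np.round(h3 * (x0 + nxt(x0))).astype(np.int64)     # 3rd lifting, A3 = h3(1+z)
    x0 = x0 + np.round(h4 * (x1 + prv(x1))).astype(np.int64)     # 4th lifting, A4 = h4(1+z^-1)
    YL = np.round(x0 / (k * 2**F)).astype(np.int64)              # scaling by k^-1 * 2^-F
    YH = np.round(x1 * k / 2**F).astype(np.int64)                # scaling by k^+1 * 2^-F
    return YL, YH

YL, YH = forward_97_integer(np.arange(16) % 7)

Each of the four lifting steps contains exactly one rounding operator R[·]; it is these operators that multiply in number in the separable 4D structure and that the non-separable structures below merge.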
Existing methods
Separable 4D structure (existing I)
Figure 4 shows the 9/7-type separable 4D integer WT. In the JPEG 2000 standard, the 1D processing shown in Fig. 2 is applied to a 4D signal along the x, y, z, and t dimensions, where x and y denote two spatial dimensions within a slice, z denotes the third spatial dimension within a volume, and t denotes the fourth, temporal, dimension. However, the separable 4D structure increases the number of rounding operators in the transform. This structure has 192 rounding operators.
For a 4D input signal X(z), the transform splits the input signal into 16 channels, X0000, X0001, X0010, X0011, X0100, X0101, X0110, X0111, X1000, X1001, X1010, X1011, X1100, X1101, X1110, and X1111 as shown in Fig. 3. It is denoted as
$$ \left[\begin{array}{c}\begin{array}{c}{X}_{0000}\left(\mathbf{z}\right)\\ {}{X}_{0010}\left(\mathbf{z}\right)\end{array}\\ {}\begin{array}{c}\begin{array}{c}{X}_{0100}\left(\mathbf{z}\right)\\ {}{X}_{0011}\left(\boldsymbol{z}\right)\end{array}\\ {}\begin{array}{c}\begin{array}{c}\vdots \\ {}{X}_{1110}\left(\mathbf{z}\right)\end{array}\\ {}{X}_{1111}\left(\mathbf{z}\right)\end{array}\end{array}\end{array}\right]=\left[\begin{array}{c}\downarrow {2}_D\left[\left[\begin{array}{c}1\\ {}{z}_D\end{array}\right]{W}_1\left(\mathbf{z}\right)\right]\\ {}\begin{array}{c}\downarrow {2}_D\left[\left[\begin{array}{c}1\\ {}{z}_D\end{array}\right]{W}_2\left(\mathbf{z}\right)\right]\\ {}\vdots \\ {}\downarrow {2}_D\left[\left[\begin{array}{c}1\\ {}{z}_D\end{array}\right]{W}_8\left(\mathbf{z}\right)\right]\end{array}\end{array}\right], $$
$$ \left[\begin{array}{c}{W}_1\left(\mathbf{z}\right)\\ {}\begin{array}{c}{W}_2\left(\mathbf{z}\right)\\ {}{W}_3\left(\mathbf{z}\right)\\ {}\begin{array}{c}{W}_4\left(\mathbf{z}\right)\\ {}{W}_5\left(\mathbf{z}\right)\\ {}\begin{array}{c}{W}_6\left(\mathbf{z}\right)\\ {}{W}_7\left(\mathbf{z}\right)\\ {}{W}_8\left(\mathbf{z}\right)\end{array}\end{array}\end{array}\end{array}\right]=\left[\begin{array}{c}\downarrow {2}_C\left[\left[\begin{array}{c}1\\ {}{z}_C\end{array}\right]{V}_1\left(\mathbf{z}\right)\right]\\ {}\begin{array}{c}\downarrow {2}_C\left[\left[\begin{array}{c}1\\ {}{z}_C\end{array}\right]{V}_2\left(\mathbf{z}\right)\right]\\ {}\downarrow {2}_C\left[\left[\begin{array}{c}1\\ {}{z}_C\end{array}\right]{V}_3\left(\mathbf{z}\right)\right]\\ {}\downarrow {2}_C\left[\left[\begin{array}{c}1\\ {}{z}_C\end{array}\right]{V}_4\left(\mathbf{z}\right)\right]\end{array}\end{array}\right], $$
$$ \left[\begin{array}{c}\begin{array}{c}{V}_1\left(\mathbf{z}\right)\\ {}{V}_2\left(\mathbf{z}\right)\end{array}\\ {}{V}_3\left(\mathbf{z}\right)\\ {}{V}_4\left(\mathbf{z}\right)\end{array}\right]=\left[\begin{array}{c}\downarrow {2}_B\left[\left[\begin{array}{c}1\\ {}{z}_B\end{array}\right]{P}_1\left(\mathbf{z}\right)\right]\\ {}\downarrow {2}_B\left[\left[\begin{array}{c}1\\ {}{z}_B\end{array}\right]{P}_2\left(\mathbf{z}\right)\right]\end{array}\right] $$
$$ \left[\begin{array}{c}{P}_1\left(\mathbf{z}\right)\\ {}{P}_2\left(\mathbf{z}\right)\end{array}\right]=\downarrow {2}_A\left[\left[\begin{array}{c}1\\ {}{z}_A\end{array}\right]X\left(\mathbf{z}\right)\right] $$
Decomposition of a 4D signal
$$ \left[\begin{array}{c}\downarrow {2}_A\left[X\left(\mathbf{z}\right)\right]\\ {}\downarrow {2}_B\left[X\left(\mathbf{z}\right)\right]\\ {}\begin{array}{c}\downarrow {2}_C\left[X\left(\mathbf{z}\right)\right]\\ {}\downarrow {2}_D\left[X\left(\mathbf{z}\right)\right]\end{array}\end{array}\right]=\left[\begin{array}{c}\frac{1}{Q}{\sum}_{p=0}^{Q-1}X\left({z}_A^{1/Q}\bullet {W}_Q^p,{z}_B,{z}_C,{z}_D\right)\\ {}\frac{1}{Q}{\sum}_{p=0}^{Q-1}X\left({z}_A,{z}_B^{1/Q}\bullet {W}_Q^p,{z}_C,{z}_D\right)\\ {}\begin{array}{c}\frac{1}{Q}{\sum}_{p=0}^{Q-1}X\left({z}_A,{z}_B,{z}_C^{1/Q}\bullet {W}_Q^p,{z}_D\right)\\ {}\frac{1}{Q}{\sum}_{p=0}^{Q-1}X\left({z}_A,{z}_B,{z}_C,{z}_D^{1/Q}\bullet {W}_Q^p\right)\end{array}\end{array}\right]\bullet {2}^F, $$
$$ X\left(\mathbf{z}\right)={\sum}_{n_1=0}^{N_1-1}{\sum}_{n_2=0}^{N_2-1}{\sum}_{n_3=0}^{N_3-1}{\sum}_{n_4=0}^{N_4-1}X\left(\mathbf{n}\right){z}_A^{-{n}_1}{z}_B^{-{n}_2}{z}_C^{-{n}_3}{z}_D^{-{n}_4}, $$
where z = (zA, zB, zC, zD) and n = (n1, n2, n3, n4).
In the JPEG 2000 standard, applying the 1st, 2nd, 3rd, and 4th lifting steps in the spatial dimension x with
$$ \left[\begin{array}{cc}{A}_1\left(\mathbf{z}\right)& {A}_3\left(\mathbf{z}\right)\\ {}{A}_2\left(\mathbf{z}\right)& {A}_4\left(\mathbf{z}\right)\end{array}\right]=\left[\begin{array}{cc}{h}_1\left(1+{z}_A^{+1}\right)& {h}_3\left(1+{z}_A^{+1}\right)\\ {}{h}_2\left(1+{z}_A^{-1}\right)& {h}_4\left(1+{z}_A^{-1}\right)\end{array}\right],\kern0.5em $$
and the 5th, 6th, 7th, and 8th lifting steps in the spatial dimension, y with
$$ \left[\begin{array}{cc}{B}_1\left(\mathbf{z}\right)& {B}_3\left(\mathbf{z}\right)\\ {}{B}_2\left(\mathbf{z}\right)& {B}_4\left(\mathbf{z}\right)\end{array}\right]=\left[\begin{array}{cc}{h}_1\left(1+{z}_B^{+1}\right)& {h}_3\left(1+{z}_B^{+1}\right)\\ {}{h}_2\left(1+{z}_B^{-1}\right)& {h}_4\left(1+{z}_B^{-1}\right)\end{array}\right],\kern0.5em $$
and the 9th, 10th, 11th, and 12th lifting steps in the spatial dimension, z with
$$ \left[\begin{array}{cc}{C}_1\left(\mathbf{z}\right)& {C}_3\left(\mathbf{z}\right)\\ {}{C}_2\left(\mathbf{z}\right)& {C}_4\left(\mathbf{z}\right)\end{array}\right]=\left[\begin{array}{cc}{h}_1\left(1+{z}_C^{+1}\right)& {h}_3\left(1+{z}_C^{+1}\right)\\ {}{h}_2\left(1+{z}_C^{-1}\right)& {h}_4\left(1+{z}_C^{-1}\right)\end{array}\right], $$
and the 13th, 14th, 15th, and 16th lifting steps in the temporal dimension, t with
$$ \left[\begin{array}{cc}{D}_1\left(\mathbf{z}\right)& {D}_3\left(\mathbf{z}\right)\\ {}{D}_2\left(\mathbf{z}\right)& {D}_4\left(\mathbf{z}\right)\end{array}\right]=\left[\begin{array}{cc}{h}_1\left(1+{z}_D^{+1}\right)& {h}_3\left(1+{z}_D^{+1}\right)\\ {}{h}_2\left(1+{z}_D^{-1}\right)& {h}_4\left(1+{z}_D^{-1}\right)\end{array}\right],\kern0.5em $$
to the channel signals in (7), the transform outputs sixteen frequency band signals YLLLL(z), YLLLH(z), YLLHL(z), YLLHH(z), YLHLL(z), YLHLH(z), YLHHL(z), YLHHH(z), YHLLL(z), YHLLH(z), YHLHL(z), YHLHH(z), YHHLL(z), YHHLH(z), YHHHL(z), and YHHHH(z) as illustrated in Fig. 4. This is referred to as a separable structure. As it has a large number of rounding operators, there is a large volume of rounding noise in the transform. A non-separable 3D structure was thus proposed in [27]. However, when used for a 4D signal, the rounding noise in it significantly increases compared with that in a separable 4D structure. Thus, its coding performance is significantly affected by the rounding noise generated inside it.
Separable 4D structure for 9/7-type of transform (existing I)
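A minimal sketch of this separable (existing I) cascade, assuming a float-valued 1D lifting kernel (the rounding operators are omitted here only to keep the example short) applied along the four dimensions with numpy, is:

import numpy as np

h = (-1.586134342059924, -0.052980118572961, 0.882911075530934, 0.443506852043971)
k = 1.230174104914001

def lift97_1d(v):
    # one level of the 9/7 lifting along a 1D slice; returns low band followed by high band
    x0, x1 = v[0::2].copy(), v[1::2].copy()
    nxt = lambda a: np.append(a[1:], a[-1])      # edge replication (an assumption)
    prv = lambda a: np.insert(a[:-1], 0, a[0])
    x1 += h[0] * (x0 + nxt(x0))
    x0 += h[1] * (x1 + prv(x1))
    x1 += h[2] * (x0 + nxt(x0))
    x0 += h[3] * (x1 + prv(x1))
    return np.concatenate([x0 / k, x1 * k])

def separable_4d(vol):
    # existing I: cascade the same 1D transform along the x, y, z and t axes
    out = vol.astype(float)
    for axis in range(4):
        out = np.apply_along_axis(lift97_1d, axis, out)
    return out

bands = separable_4d(np.random.rand(8, 8, 8, 8))

In the integer version, every lifting step and scaling applied along each dimension carries its own rounding operator, which is how the total of 192 rounding operators of the separable 4D structure accumulates.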
Non-separable 3D structure (existing II)
Figure 5 shows the non-separable 3D structure of the integer WT for a 4D input signal designed for the 9/7-type transform, based on the structure proposed in [28]. In the first to the fourth lifting steps, the 4D input signal, once decomposed into 16 channels, is transformed along the spatial dimension x, and the resulting channel signals are scaled as in Eq. (19):
$$ \left[\begin{array}{c}{X}_{0000}^{(B)}\left(\mathbf{z}\right)\\ {}{X}_{0001}^{(B)}\left(\mathbf{z}\right)\\ {}\begin{array}{c}\vdots \\ {}{X}_{1111}^{(B)}\left(\mathbf{z}\right)\end{array}\end{array}\right]=\left[\begin{array}{c}R\left[{k}^{-1}{X}_{0000}^{(A)}\left(\mathbf{z}\right)\right]\\ {}R\left[{k}^{-1}{X}_{0001}^{(A)}\left(\mathbf{z}\right)\right]\\ {}\begin{array}{c}\vdots \\ {}R\left[{k}^{+1}{X}_{1111}^{(A)}\left(\mathbf{z}\right)\right]\end{array}\end{array}\right], $$
Non-separable 3D structure for 9/7-type of transform (existing II)
Then, from the fifth to the 12th lifting steps, the signals are transformed simultaneously in spatial dimensions y and z, and temporal dimension t using the non-separable 3D structure. For instance, the signal in YLHHH is produced as
$$ {X}_{0111}^{(D)}\left(\mathbf{z}\right)={X}_{0111}^{(B)}\left(\mathbf{z}\right)+R\left[{k}^{+3}{2}^{-F}{P}_{LHHH}^{(D)}\left(\mathbf{z}\right)\right] $$
$$ {P}_{LHHH}^{(D)}={\left[\begin{array}{c}{B}_1\left(\mathbf{z}\right){C}_1\left(\mathbf{z}\right){D}_1\left(\mathbf{z}\right)\\ {}{B}_1\left(\mathbf{z}\right){C}_1\left(\mathbf{z}\right)\\ {}\begin{array}{c}{B}_1\left(\mathbf{z}\right){D}_1\left(\mathbf{z}\right)\\ {}{B}_1\left(\mathbf{z}\right)\\ {}\begin{array}{c}{C}_1\left(\mathbf{z}\right){D}_1\left(\mathbf{z}\right)\\ {}{C}_1\left(\mathbf{z}\right)\\ {}{D}_1\left(\mathbf{z}\right)\end{array}\end{array}\end{array}\right]}^T\bullet \left[\begin{array}{c}{X}_{0000}^{(B)}\ \left(\mathbf{z}\right)\\ {}{X}_{0001}^{(B)}\left(\mathbf{z}\right)\\ {}\begin{array}{c}{X}_{0010}^{(B)}\left(\mathbf{z}\right)\\ {}{X}_{0011}^{(B)}\left(\mathbf{z}\right)\\ {}\begin{array}{c}{X}_{0100}^{(B)}\left(\mathbf{z}\right)\\ {}{X}_{0101}^{(B)}\left(\mathbf{z}\right)\\ {}{X}_{0110}^{(B)}\left(\mathbf{z}\right)\end{array}\end{array}\end{array}\right]+{\left[\begin{array}{c}{B}_3\left(\mathbf{z}\right){C}_3\left(\mathbf{z}\right){D}_3\left(\mathbf{z}\right)\\ {}{B}_3\left(\mathbf{z}\right){C}_3\left(\mathbf{z}\right)\\ {}\begin{array}{c}{B}_3\left(\mathbf{z}\right){D}_3\left(\mathbf{z}\right)\\ {}{B}_3\left(\mathbf{z}\right)\\ {}\begin{array}{c}{C}_3\left(\mathbf{z}\right){D}_3\left(\mathbf{z}\right)\\ {}{C}_3\left(\mathbf{z}\right)\\ {}{D}_3\left(\mathbf{z}\right)\end{array}\end{array}\end{array}\right]}^T\bullet \left[\begin{array}{c}{X}_{0000}^{(B)}\left(\mathbf{z}\right)\\ {}{X}_{0001}^{(B)}\left(\mathbf{z}\right)\\ {}\begin{array}{c}{X}_{0010}^{(B)}\left(\mathbf{z}\right)\\ {}{X}_{0011}^{(B)}\left(\mathbf{z}\right)\\ {}\begin{array}{c}{X}_{0100}^{(B)}\left(\mathbf{z}\right)\\ {}{X}_{0101}^{(B)}\left(\mathbf{z}\right)\\ {}{X}_{0110}^{(B)}\left(\mathbf{z}\right)\end{array}\end{array}\end{array}\right], $$
in the fifth lifting step. In this step, 3D filtering with 3D memory access, B1(z)C1(z)D1(z), is used. In the sixth lifting step, the signals in YLHHL, YLHLH, and YLLHH are calculated as:
$$ \left[\begin{array}{c}{X}_{0110}^{(D)}\left(\mathbf{z}\right)\\ {}{X}_{0101}^{(D)}\left(\mathbf{z}\right)\\ {}{X}_{0011}^{(D)}\left(\mathbf{z}\right)\end{array}\right]=\left[\begin{array}{c}{X}_{0110}^{(B)}\left(\mathbf{z}\right)+R\left[{k}^{+1}{2}^{-F}{P}_{LHHL}^{(D)}\left(\mathbf{z}\right)\right]\\ {}{X}_{0101}^{(B)}\left(\mathbf{z}\right)+R\left[{k}^{+1}{2}^{-F}{P}_{LHLH}^{(D)}\left(\mathbf{z}\right)\right]\\ {}{X}_{0011}^{(B)}\left(\mathbf{z}\right)+R\left[{k}^{+1}{2}^{-F}{P}_{LLHH}^{(D)}\left(\mathbf{z}\right)\right]\end{array}\right], $$
$$ \left[\begin{array}{c}{P}_{LHHL}^{(D)\prime}\left(\mathbf{z}\right)\\ {}{P}_{LHLH}^{(D)\prime}\left(\mathbf{z}\right)\\ {}{P}_{LLHH}^{(D)\prime}\left(\mathbf{z}\right)\end{array}\right]=\left[\begin{array}{ccccc}{B}_1\left(\mathbf{z}\right){C}_1\left(\mathbf{z}\right)& 0& {B}_1\left(\mathbf{z}\right)& {C}_1\left(\mathbf{z}\right)& {D}_2\left(\mathbf{z}\right)\\ {}{B}_1\left(\mathbf{z}\right){D}_1\left(\mathbf{z}\right)& {B}_1\left(\mathbf{z}\right)& 0& {D}_1\left(\mathbf{z}\right)& {C}_2\left(\mathbf{z}\right)\\ {}{C}_1\left(\mathbf{z}\right){D}_1\left(\mathbf{z}\right)& {C}_1\left(\mathbf{z}\right)& {D}_1\left(\mathbf{z}\right)& 0& {B}_2\left(\mathbf{z}\right)\end{array}\right]\left[\begin{array}{c}{X}_{0000}^{(B)}\left(\mathbf{z}\right)\\ {}{X}_{0001}^{(B)}\left(\mathbf{z}\right)\\ {}\begin{array}{c}{X}_{0010}^{(B)}\left(\mathbf{z}\right)\\ {}\begin{array}{c}{X}_{0100}^{(B)}\left(\mathbf{z}\right)\\ {}{X}_{0111}^{(B)}\left(\mathbf{z}\right)\end{array}\end{array}\end{array}\right], $$
$$ \left[\begin{array}{c}{P}_{LHHL}^{(D)\prime \prime}\left(\mathbf{z}\right)\\ {}{P}_{LHLH}^{(D)\prime \prime}\left(\mathbf{z}\right)\\ {}{P}_{LLHH}^{(D)\prime \prime}\left(\mathbf{z}\right)\end{array}\right]=\left[\begin{array}{ccccc}{B}_3\left(\mathbf{z}\right){C}_3\left(\mathbf{z}\right)& 0& {B}_3\left(\mathbf{z}\right)& {C}_3\left(\mathbf{z}\right)& {D}_4\left(\mathbf{z}\right)\\ {}{B}_3\left(\mathbf{z}\right){D}_3\left(\mathbf{z}\right)& {B}_3\left(\mathbf{z}\right)& 0& {D}_3\left(\mathbf{z}\right)& {C}_4\left(\mathbf{z}\right)\\ {}{C}_3\left(\mathbf{z}\right){D}_3\left(\mathbf{z}\right)& {C}_3\left(\mathbf{z}\right)& {D}_3\left(\mathbf{z}\right)& 0& {B}_4\left(\mathbf{z}\right)\end{array}\right]\left[\begin{array}{c}{X}_{0000}^{(B)}\left(\mathbf{z}\right)\\ {}{X}_{0001}^{(B)}\left(\mathbf{z}\right)\\ {}\begin{array}{c}{X}_{0010}^{(B)}\left(\mathbf{z}\right)\\ {}\begin{array}{c}{X}_{0100}^{(B)}\left(\mathbf{z}\right)\\ {}{X}_{0111}^{(B)}\left(\mathbf{z}\right)\end{array}\end{array}\end{array}\right], $$
$$ \left\{\begin{array}{c}{P}_{LHHL}^{(D)}\left(\mathbf{z}\right)={P}_{LHHL}^{(D)\prime}\left(\mathbf{z}\right)+{P}_{LHHL}^{(D)\prime \prime}\left(\mathbf{z}\right)\\ {}{P}_{LHLH}^{(D)}\left(\mathbf{z}\right)={P}_{LHLH}^{(D)\prime}\left(\mathbf{z}\right)+{P}_{LHLH}^{(D)\prime \prime}\left(\mathbf{z}\right)\\ {}{P}_{LLHH}^{(D)}\left(\mathbf{z}\right)={P}_{LLHH}^{(D)\prime}\left(\mathbf{z}\right)+{P}_{LLHH}^{(D)\prime \prime}\left(\mathbf{z}\right)\end{array}\right., $$
where R[ ] denotes the rounding operation on a signal value. Similarly, the predictions of X1111, X1110, X1101, X1100, X1011, X1010, X1001, X1000, X0100, X0010, X0001, and X0000 are also independent. The total numbers of lifting steps and rounding operators in the non-separable structure were hence reduced from 16 to 12 and from 192 to 96, respectively, compared with the separable structure in Fig. 4. However, the quality of the decoded image was degraded by the rounding noise inside the transform in its integer implementation. The proposed methods solve this problem as explained below.
Proposed methods
Non-separable 2D and 3D structure (proposed I)
To solve the problems of higher rounding noise and degraded lossy coding quality, a non-separable structure that combines both 2D and 3D structures is proposed. The total number of rounding operators is thus further reduced from 96 to 72. Figure 6 illustrates the structure of the proposed I method. Once the 4D input signal is decomposed into 16 channels, the first, second, and third lifting steps are cascaded using the non-separable 2D structure, followed by the non-separable 3D structure and the separable 1D structure. For example, the first lifting step is expressed as
$$ {X}_{11{c}_1{c}_2}^{(1)}\left(\mathbf{z}\right)={X}_{11{c}_1{c}_2}\left(\mathbf{z}\right)+R\left[{A}_1{B}_1{X}_{00{c}_1{c}_2}\left(\mathbf{z}\right)+{A}_1{X}_{01{c}_1{c}_2}\left(\mathbf{z}\right)+{B}_1{X}_{10{c}_1{c}_2}\left(\mathbf{z}\right)\right],\kern0.75em \mathrm{where}\ {c}_1,{c}_2\in \left\{0,1\right\}, $$
Non-separable 2D combined with 3D structure for 9/7-type of transform (proposed I)
Even though this proposed method has fewer rounding operators, the rounding noise in it is still higher than in the existing I method, the separable 4D structure. Therefore, it is necessary to maintain the original order of lifting steps so that the rounding noise in the transform is lower.
Non-separable 2D structure (proposed II)
To lower the rounding noise inside the transform, the non-separable 2D structure for 4D input signals is proposed. Figure 7 shows the proposed II method. Unlike the existing I, existing II, and proposed I methods, it uses only non-separable 2D structures, without any 3D structure. The original order of lifting steps for each dimension is also maintained in this structure. The total number of lifting steps, however, is larger than in existing II but smaller than in existing I. The total number of rounding operators is smaller than in existing I, and the rounding noise is lower, as confirmed in Section 5.
Non-separable 2D structure for 9/7-type of transform (proposed II)
The first and second lifting steps involve 1D structures expressed as
$$ \left\{\begin{array}{c}{X}_{1{c}_1{c}_2{c}_3}^{(1)}\left(\mathbf{z}\right)={X}_{1{c}_1{c}_2{c}_3}\left(\mathbf{z}\right)+R\left[{A}_1{X}_{0{c}_1{c}_2{c}_3}\left(\mathbf{z}\right)\right]\\ {}{X}_{0{c}_1{c}_2{c}_3}^{(1)}\left(\mathbf{z}\right)={X}_{0{c}_1{c}_2{c}_3}\left(\mathbf{z}\right)+R\left[{A}_2{X}_{1{c}_1{c}_2{c}_3}^{(1)}\left(\mathbf{z}\right)\right]\end{array},\kern1.5em where\ {c}_1,{c}_2,{c}_3\in \left\{0,1\right\}\right., $$
The third, fourth, and fifth lifting steps consist of a non-separable 2D structure, and the same goes for the sixth to the 11th steps. Finally, the 12th and 13th lifting steps involve the separable 1D structure. For example, the third lifting step is expressed as
$$ {X}_{11{c}_1{c}_2}^{(2)}\left(\mathbf{z}\right)={X}_{11{c}_1{c}_2}^{(1)}\left(\mathbf{z}\right)+R\left[{A}_3{B}_1{X}_{00{c}_1{c}_2}^{(1)}\left(\mathbf{z}\right)+{A}_3{X}_{01{c}_1{c}_2}^{(1)}\left(\mathbf{z}\right)+{B}_1{X}_{10{c}_1{c}_2}^{(1)}\left(\mathbf{z}\right)\right],\kern0.75em \mathrm{where}\ {c}_1,{c}_2\in \left\{0,1\right\}, $$
Thus, proposed II is a combination of non-separable 2D structures and separable 1D structures.
Comparison of the structures
Table 1 compares the four structures: separable 4D, non-separable 3D, non-separable 2D and 3D, and non-separable 2D. As summarized in the table, the total number of lifting steps in the proposed non-separable 2D structure increases from 12 to 13 compared with the existing non-separable 3D structure, whereas the total number of rounding operators remains the same. Note that both counts are still smaller than in the existing separable 4D structure. The structure obtained by combining the 2D and 3D structures (proposed I) was introduced first to further reduce the number of rounding operators; however, to reduce the rounding noise, it is necessary to maintain the original, separable 4D ordering of lifting steps. The non-separable 2D structure (proposed II) is therefore proposed based on this original structure.
Table 1 Comparison of the methods
As shown in Fig. 4, the existing I method is composed of the lifting steps A1, A2, …, D4, and is expressed as
Separable 4D
$$ {A}_1{A}_2{A}_3{A}_4{B}_1{B}_2{B}_3{B}_4{C}_1{C}_2{C}_3{C}_4{D}_1{D}_2{D}_3{D}_4 $$
Starting from existing I, the order of the lifting steps for the spatial dimension x remains the same, while the steps for the dimensions y and z and the temporal dimension t are rearranged as
Separable 3D'
$$ {B}_1{B}_2{C}_1{C}_2{D}_1{D}_2{B}_3{B}_4{C}_3{C}_4{D}_3{D}_4 $$
Part of this is implemented in the non-separable 3D structure (existing II).
Non-separable 3D
$$ {A}_1{A}_2{A}_3{A}_4{\left({B}_1{B}_2{C}_1{C}_2{D}_1{D}_2\right)}_{3D}{\left({B}_3{B}_4{C}_3{C}_4{D}_3{D}_4\right)}_{3D} $$
Unlike existing II, the proposed I structure is expressed as
Non-separable 2D & 3D
$$ {\left({A}_1{A}_2{B}_1{B}_2\right)}_{2D}{\left({A}_3{A}_4{B}_3{B}_4{C}_1{C}_2\right)}_{3D}{\left({C}_3{C}_4{D}_1{D}_2\right)}_{2D}{D}_3{D}_4 $$
To maintain the original structure of existing I, proposed II is expressed as
$$ {A}_1{A}_2{\left({A}_3{A}_4{B}_1{B}_2\right)}_{2D}{\left({B}_3{B}_4{C}_1{C}_2\right)}_{2D}{\left({C}_3{C}_4{D}_1{D}_2\right)}_{2D}{D}_3{D}_4 $$
Note that the parentheses with subscripts 2D and 3D denote non-separable structures.
Derivation of the structure
The derivation of the non-separable structures in Figs. 5, 6, and 7 from the separable structure in Fig. 4 is detailed here. We derived them using the two basic properties illustrated in Fig. 8 and described in [32]. They are denoted by
$$ {\mathrm{P}}_{\mathrm{I}}:\kern2.50em \mathbf{Y}=\mathbf{BA}\bullet \mathbf{X}=\mathbf{A}{\mathbf{C}}_0\mathbf{B}\bullet \mathbf{X}, $$
$$ {\mathrm{P}}_{\mathrm{II}}:\kern2.25em \mathbf{Y}=\mathbf{AB}\bullet \mathbf{X}=\mathbf{B}{\mathbf{C}}_1\mathbf{A}\bullet \mathbf{X}, $$
Basic properties for modification. a Property I. b Property II
$$ \mathbf{A}=\left[\begin{array}{ccc}1& 0& 0\\ {}A& 1& 0\\ {}0& 0& 1\end{array}\right],{\mathbf{C}}_0=\left[\begin{array}{ccc}1& 0& 0\\ {}0& 1& 0\\ {}+ AB& 0& 1\end{array}\right],\mathbf{X}=\left[\begin{array}{c}{X}_0\\ {}{X}_1\\ {}{X}_2\end{array}\right], $$
$$ \mathbf{B}=\left[\begin{array}{ccc}1& 0& 0\\ {}0& 1& 0\\ {}0& B& 1\end{array}\right],{\mathbf{C}}_1=\left[\begin{array}{ccc}1& 0& 0\\ {}0& 1& 0\\ {}- AB& 0& 1\end{array}\right],\mathbf{Y}=\left[\begin{array}{c}{Y}_0\\ {}{Y}_1\\ {}{Y}_2\end{array}\right], $$
By using both of the above properties, the derivation of the non-separable structure for the 4D WT is clarified. The basic properties for modification shown in Fig. 8a, b have different lifting procedures but are equivalent, as shown in Eqs. (38) and (39). By applying properties I and II to the conventional separable structure, lifting steps can be grouped together, thereby reducing the numbers of lifting steps and rounding operations in the transform. Therefore, the non-separable structure can be derived systematically to reduce the number of rounding operations and enhance coding performance.
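The equivalences stated in properties I and II can be checked numerically. The sketch below (not part of the original derivation) verifies both factorizations for arbitrary scalar lifting coefficients, with rounding omitted; the coefficient values are placeholders.

```python
import numpy as np

def lift_matrices(a, b):
    """Elementary lifting matrices A and B and the correction matrices
    C0 (property I) and C1 (property II) from Fig. 8."""
    A  = np.array([[1, 0, 0], [a, 1, 0], [0,  0, 1]], float)
    B  = np.array([[1, 0, 0], [0, 1, 0], [0,  b, 1]], float)
    C0 = np.array([[1, 0, 0], [0, 1, 0], [ a * b, 0, 1]], float)
    C1 = np.array([[1, 0, 0], [0, 1, 0], [-a * b, 0, 1]], float)
    return A, B, C0, C1

a, b = 0.7, -1.3                       # arbitrary lifting coefficients
A, B, C0, C1 = lift_matrices(a, b)

assert np.allclose(B @ A, A @ C0 @ B)  # property I:  B·A·X = A·C0·B·X
assert np.allclose(A @ B, B @ C1 @ A)  # property II: A·B·X = B·C1·A·X
print("Properties I and II hold for these coefficients.")
```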
The process of deriving the non-separable 2D for double lifting is shown in Fig. 9 based on [32]. Note that the same derivation process is used to obtain the non-separable quadruple 3D structure in Fig. 5, to combine the non-separable 2D and 3D structures for quadruple 4D integer WT as in Fig. 6, and to employ the non-separable 2D structure for quadruple 4D integer WT as in Fig. 7.
Derivation process for non-separable double-lifting 2D structure. a Separable 2D structure. b Rearranging the lifting structure. c Applying property I and property II. d Non-separable 2D structure
A clear comparison in terms of the numbers of rounding operators and lifting steps between the separable and non-separable structures is provided in Fig. 10.
Separable and non-separable 2D structures. a Separable 2D structure for double lifting with eight rounding operators and four lifting steps. b Non-separable 2D structure for double lifting with four rounding operators and three lifting steps
Lifting steps and latency
The non-separable structure reduces the numbers of rounding operations and lifting steps in the transform. The smaller the number of lifting steps, the lower the overall latency of the transform, as shown in [32]. Figure 11a illustrates an example of the implementation of the first lifting step of the double-lifting separable 2D structure shown in Fig. 10a. In this structure, each adder is implemented one by one in a parallel processor, with the adder latency denoted by "A." Similarly denoting the multiplier latency by "M," the step takes M + 2A in total. In the non-separable 2D structure in Fig. 10b, by contrast, four adders are implemented simultaneously in a parallel processor for the first and third lifting steps, as shown in Fig. 11b, and each of these steps takes M + 4A. The second lifting step, implemented as in Fig. 11c according to Fig. 10b, takes M + 3A. As a result, the separable 2D structure and the non-separable 2D structure take 4M + 8A and 3M + 11A, respectively. Thus, the latency ratio is defined as
$$ L=\frac{3\eta +11}{4\eta +8},\kern0.5em \eta =\frac{M}{A}, $$
Implementation examples in parallel processing platform
Owing to the reduced number of lifting steps, the latency is reduced if η > 3. Note that the above estimate is valid only for the examples shown in Fig. 11. As the non-separable 2D and 3D structures for the quadruple 4D integer WT are designed following the same derivation as the non-separable 2D structure for the double-lifting 2D integer WT, the implementation examples in Fig. 11 are applicable to them. The proposed non-separable 2D structure for the quadruple 4D integer WT is therefore expected to have lower latency than the separable 4D structure.
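The latency ratio above is easy to evaluate; the short sketch below does so, with the step counts 4M + 8A and 3M + 11A taken from the example in Fig. 11 (they hold only for that example).

```python
def latency_ratio(eta):
    """Ratio of non-separable (3M + 11A) to separable (4M + 8A) latency,
    where eta = M / A is the multiplier-to-adder latency ratio."""
    return (3 * eta + 11) / (4 * eta + 8)

for eta in (1, 2, 3, 4, 8):
    faster = " (non-separable faster)" if latency_ratio(eta) < 1 else ""
    print(f"eta = {eta}: L = {latency_ratio(eta):.3f}{faster}")
```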
Experimental results and discussions
In the following experiments, six types of data were used to evaluate the rounding noise and coding performance, as shown in Table 2.
Table 2 Type of data used in the experiments
Note that the MRI, functional magnetic resonance image (fMRI)(II), and US data were retrieved from [33], [34], and [35], respectively. MRI data represent highly correlated data at consecutive time points with limited motion in the temporal dimension t and structural changes in the spatial dimension z.
Each image is normalized to the range [0, 255] for display purposes, as shown in Fig. 12a–f. In this paper, the variance of the rounding noise in the frequency domain and the coding performance in the lossless and lossy coding modes were investigated.
Tested data. a fMRI (I). b CT. c MRI. d fMRI (II). e AR. f US
The 4D auto-regressive (AR) model used in our experiments can be expressed as
$$ \left\{\begin{array}{c}{x}^{(1)}\left({n}_1,{n}_2,{n}_3,{n}_4\right)=x\left({n}_1,{n}_2,{n}_3,{n}_4\right)+\rho \bullet {x}^{(1)}\left({n}_1-1,{n}_2,{n}_3,{n}_4\right)\\ {}{x}^{(2)}\left({n}_1,{n}_2,{n}_3,{n}_4\right)={x}^{(1)}\left({n}_1,{n}_2,{n}_3,{n}_4\right)+\rho \bullet {x}^{(2)}\left({n}_1,{n}_2-1,{n}_3,{n}_4\right)\\ {}\begin{array}{c}{x}^{(3)}\left({n}_1,{n}_2,{n}_3,{n}_4\right)={x}^{(2)}\left({n}_1,{n}_2,{n}_3,{n}_4\right)+\rho \bullet {x}^{(3)}\left({n}_1,{n}_2,{n}_3-1,{n}_4\right)\\ {}{x}^{(4)}\left({n}_1,{n}_2,{n}_3,{n}_4\right)={x}^{(3)}\left({n}_1,{n}_2,{n}_3,{n}_4\right)+\rho \bullet {x}^{(4)}\left({n}_1,{n}_2,{n}_3,{n}_4-1\right)\end{array}\end{array}\right., $$
Note that ρ was set to 0.9 in the experiments in this paper, as typical values of ρ for natural images lie between 0.9 and 0.98 [36].
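A minimal sketch of the 4D AR data generation described by the equation above, assuming a white-noise input and ρ = 0.9; the array size, random seed, and function name are placeholders rather than the settings used in the paper.

```python
import numpy as np

def ar4d(shape=(16, 16, 16, 16), rho=0.9, seed=0):
    """Generate a 4D auto-regressive signal by filtering white noise
    recursively along each dimension in turn, following
    x^(k)(n) = x^(k-1)(n) + rho * x^(k)(n - 1)."""
    x = np.random.default_rng(seed).standard_normal(shape)
    for axis in range(4):
        v = np.moveaxis(x, axis, 0)   # view: the recursion runs along `axis`
        for n in range(1, v.shape[0]):
            v[n] += rho * v[n - 1]
        # v is a view of x, so x has been updated in place
    return x

data = ar4d()
print(data.shape)   # (16, 16, 16, 16)
```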
Evaluation of rounding noise
As fewer rounding operators do not necessarily imply less total rounding error [21, 22], the total rounding noise in the output frequency band signals was investigated. The rounding operation is a vital part of the lossy compression system.
Rounding is a non-linear operation that transforms a floating-point signal into an integer signal. An equivalent expression of the rounding operation is shown in Fig. 13: it can be modelled as the addition of noise,
$$ {S}_{R_o}(z)={S}_{R_i}(z)+{N}_R(z), $$
Rounding operation and its equivalent expression
where \( {S}_{R_i} \), \( {S}_{R_o} \), and \( {N}_R \) denote the input signal, the output signal, and the additive noise of the rounding operation, respectively. As the correlation between each of the errors and the signals was zero (based on statistical independence), the variance of the output signal was calculated from
$$ {\sigma}_{S_{R_o}}^2={\sigma}_{S_{R_i}}^2+{\sigma}_{N_R}^2, $$
where \( {\sigma}_{S_{R_i}}^2 \), \( {\sigma}_{S_{R_o}}^2 \), and \( {\sigma}_{N_R}^2 \) refer to the variance of the input signal, the output signal, and the additive noise of the rounding operators, respectively. As the probability density function (PDF) of the additive noise is approximately flat, as shown in Fig. 14, the variance of the additive noise of the rounding operations was calculated as
$$ {\sigma}_{N_R}^2={\int}_{-0.5}^{0.5}{x}^2 dx=\frac{1}{12}, $$
Probability density function (PDF) for an amplitude of the additive noise
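The value 1/12 is easy to confirm empirically. The following sketch assumes the fractional parts of the signal are effectively uniform, which is exactly what the flat PDF above expresses.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1000.0, 1000.0, 1_000_000)   # real-valued signal samples
noise = np.round(x) - x                        # additive rounding noise N_R
print(noise.var())                             # ~0.0833, i.e. close to 1/12
```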
In this study, the rounding noise inside the circuit was measured to observe the error that accumulates in it because of the rounding operators. As illustrated in Fig. 15, the rounding noise was examined by using the error between the integer signal and the real-number signal; it is defined by Eq. (47).
$$ error=y-\widehat{y}, $$
Rounding noise between integer and real number
Rounding noise was evaluated in six types of input signals to compare the conventional and the proposed lifting structures, as shown in Fig. 16a–f.
Variance of rounding noise for all data in each frequency band. a fMRI(I). b CT. c fMRI(II). d MRI. e AR. f US
Figure 16a–f indicates the variance of the rounding error in each frequency band for the fMRI(I), CT, fMRI(II), MRI, AR, and US data, respectively. Relative to the existing I structure, the proposed II structure reduced the average magnitude of the rounding errors to 19.11, 16.08, 16.81, 5.63, and 16.09% for the fMRI(I), CT, fMRI(II), MRI, and AR data, respectively, whereas it increased by 15.47% for the US data, as shown in Fig. 17. However, as the rounding noise in the LLLL frequency band of proposed II for the US data was lower than that of existing I, the coding performance of the proposed II method on the US data still improved, as shown in Fig. 20f. The rounding noise in the HHHH frequency band of the existing II structure was the highest and resulted in a degradation of coding performance. The energy of the frequency bands should be compacted into the low-frequency band signal to reduce the entropy of the compressed image and enhance coding performance. The higher the variance of rounding noise in the low-frequency band signal, the lower the entropy of the compressed image. However, all structures had the highest compacted energy in the high-frequency band signals. Therefore, we conclude that the lowest variance of rounding noise among all frequency band signals yields the best coding performance.
Average variance of rounding noise in each frequency band
The reason why the total number of rounding operators did not determine the total rounding noise is investigated in Fig. 18, which shows the variance of rounding noise in the frequency domain when only one rounding operator in the forward transform is activated at a time. The horizontal axis denotes the activated rounding operator, numbered from top to bottom and from left to right in Figs. 4 to 7. As summarized in Table 1, the existing I, existing II, proposed I, and proposed II structures have 192, 96, 72, and 96 rounding operators, respectively. The total rounding noise in proposed II decreased even though the number of rounding operators, the source of the noise, remained the same as in the existing II structure. This is because of the relatively large variance of the error amplified within the existing II structure. The variance of the rounding noise was highest for the 63rd rounding operator at 0.1844, the ninth rounding operator at 1.2217, the 14th rounding operator at 0.5964, and the 56th rounding operator at 0.2127 for existing I, existing II, proposed I, and proposed II, respectively. Moreover, the average rounding noise in proposed II relative to existing I decreased by almost half, from 0.0646 to 0.0382. It can be concluded that the more often, and the more strongly, noise is amplified within the lifting structure, the higher the rounding noise inside it. As the rounding noise inside the transform influences coding performance, it is investigated in Section 5.2.
Effect of one rounding operator for fMRI(II) data
Evaluation of coding performance
Figure 19 illustrates the rate-distortion curves, which compare the performance of the methods in the lossy coding mode for the CT images. The horizontal and vertical axes represent the entropy rate measured in bits per pixel (bpp) and the peak signal-to-noise ratio (PSNR) of the reconstructed signal, respectively. The proposed II structure outperformed the others at the same bitrate: the quality of the reconstructed signal in proposed II increased by 0.18, 16.11, and 26.44 dB over existing I, existing II, and proposed I, respectively. Because the quality of the reconstructed image deteriorated considerably when it was transformed using existing II or proposed I, only existing I and proposed II are compared for the remaining data.
Coding performance in lossy mode for CT image
Coding performance in lossy mode for fMRI(I), fMRI(II), MRI, AR, and US data. a fMRI(I). b fMRI(II). c MRI. d AR. e US
Figure 20a–e shows the coding performance of existing I and proposed II on the fMRI(I), fMRI(II), MRI, AR, and US data, respectively. At the same bitrate, the quality of the reconstructed signal for proposed II increased by 0.40, 2.10, 0.27, 2.57, and 0.72 dB for the fMRI(I), fMRI(II), MRI, AR, and US data, respectively. Both quantization noise and rounding noise were present in the lossy compression system; although the quantization noise was the same for all structures, the rounding noise in the proposed II structure was the lowest, and its coding performance was therefore the best. In conclusion, the proposed II structure outperformed all other structures in the experiments.
This paper proposed a non-separable 2D structure for 4D input signals in lieu of non-separable 3D structures. The total number of rounding operators was reduced by half compared with the prevalent separable 4D structure. In the non-separable 3D structure, the rounding noise caused by the integer implementation of signal values inside the transform increased, because its original lifting scheme was changed. By maintaining the original scheme, the proposed non-separable 2D structure reduced the total rounding noise inside the transform and enhanced the quality of the reconstructed signal in lossy coding. Furthermore, the number of lifting steps, whose reduction lowers the latency of the overall transform, was reduced by 18.75% in the proposed non-separable 2D structure compared with the conventional separable 4D structure of the quadruple 4D integer WT. The proposed integer WT has the advantage of compatibility with the conventional integer WT, and it also enhances compression performance on 4D images, such as medical images.
1D: One-dimensional
2D: Two-dimensional
4D: Four-dimensional
AR: Auto-regressive
bpp: Bits per pixel
CT: Computed tomography
DCT: Discrete cosine transform
DWT: Discrete wavelet transform
fMRI: Functional magnetic resonance image
JPEG: Joint photographic experts group
MRI: Magnetic resonance image
PDF: Probability density function
PSNR: Peak signal-to-noise ratio
WT: Wavelet transform
M Unser, T Blu, Mathematical properties of the JPEG 2000 wavelet filters. IEEE Trans. Image Process. 12(9), 1080–1090 (2003)
M Antonini, M Barlaud, P Mathieu, I Daubechies, Image coding using wavelet transforms. IEEE Trans. Image Process. 1(2), 205–220 (1992)
D Le Gall, A Tabatai, in International Conference on Acoustics, Speech and Signal Processing. Subband coding of digital images by using symmetric short kernel filters and arithmetic coding techniques (1988)
W Sweldens, The lifting scheme: a custom design construction of biorthogonal wavelets. Appl. Comput. Harmon. Analysis 3(2), 186–200 (1996)
A Nait-Ali, C Cavaro-Menard, Compression of Biomedical Images and Signals (Wiley, USA, 2008)
HK Huang, PACS and Imaging Informatics: Basic Principles and Applications (Wiley, USA, 2010)
R Rajeswari, R Rajesh, in World Congress on Nature & Biologically Inspired Computing. Efficient compression of 4D fMRI images using Bandelet transform and fuzzy thresholding (IEEE, India, 2009).
V Sanchez, P Nasiopoulos, R Abughrabieh, Novel lossless fMRI image compression based on motion compensation and customized entropy coding. IEEE Trans. Inf Techno Biomed 13(4), 645–655 (2009)
HG Lalgudi, A Bilgin, MW Marcellin, A Tabesh, MS Nadar, TP Trouard, Four-dimensional compression of fMRI using JPEG 2000. Proc. of SPIE, Medical Imaging: Image Processing 5747, 1028–1037 (2005)
C Chrysafis, A Ortega, Line-based, reduced memory, wavelet image compression. IEEE Trans. Image Process. 9(3), 378–389 (2000)
G Shi, W Liu, L Zhang, F Li, An efficient folded architecture for lifting-based discrete wavelet transform. IEEE Trans. Circuits, Systems II express briefs 56(4), 290–294 (2009)
M Vetterli, C Herley, Wavelets and filter banks: theory and design. IEEE Trans. Signal Processing 40(9), 2207–2232 (1992)
DS Taubman, in IEEE International Conference on Image Processing. Adaptive, non-separable lifting transforms for image compression (1999)
S Fukuma, M Iwahashi, N Kambayashi, in IEEE International Symposium on Circuits and Systems. Adaptive multi-channel prediction for lossless scalable coding (1999)
C Yan, Y Zhang, J Xu, F Dai, L Li, Q Dai, F Wu, A highly parallel framework for HEVC coding unit partitioning tree decision on many-core processors. IEEE Signal Process Lett 21(5), 573–576 (2014)
J Franco, G Bernabe, J Fernandez, ME Acacio, in Euromicro Int. Conf. On Parallel, Distributed and Network-Based Processing. A parallel implementation of the 2D wavelet transform using CUDA (IEEE, Germany, 2009)
T Yoshida, T Suzuki, S Kiyochi, M Ikehara, Two dimensional non-separable adaptive directional lifting structure of discrete wavelet transform. IEICE Trans. Fundam E94-A(10), 1920–1927 (2011)
T Bruylants, A Munteanu, P Schelkens, Wavelet based volumetric medical image compression. Signal Process. Image Commun. 31, 112–133 (2015)
FA Binti Hamzah, T Yoshida, M Iwahashi, H Kiya, Adaptive directional lifting structure of three dimensional non-separable discrete wavelet transform for high resolution volumetric data compression. IEICE Trans. Fundam E99-A(5), 892–899 (2016)
M Iwahashi, H Kiya, in IEEE International Conference Image Processing. Non separable 2D factorization of separable 2D DWT for lossless image coding (2009)
T Strutz, I Rennert, Two-dimensional integer wavelet transform with reduced influence of rounding operations. EURASIP J Advances Signal Process 75 (2012). https://doi.org/10.1186/1687-6180-2012-75
M Iwahashi, H Kiya, in Discrete Wavelet Transforms - a Compendium of New Approaches and Recent Applications. Discrete wavelet transforms: non separable two dimensional discrete wavelet transform for image signals (InTechOpen, 2013). Available from: https://www.intechopen.com/books/discrete-wavelet-transforms-a-compendium-of-new-approaches-and-recent-applications/non-separable-two-dimensional-discrete-wavelet-transform-for-image-signals
Y Wang, H Hamza, in Industrial Engineering Research Conference. 4D geometry compression based on lifting wavelet transform (The Institute for Operations Research and the Management Sciences, Florida, 2006)
JM Gomez, JB Rapesta, I Blanes, LJ Rodriguez, FA Llinas, JS Sagrista, 4D remote sensing image coding with JPEG2000. Satellite Data Compression, Communications and Processing VI, vol. 7810 (2010). https://doi.org/10.1117/12.860545
C-L Kuo, Y-Y Lin, Y-C Lu, in IEEE Int. SOC Conference. Analysis and implementation of discrete wavelet transform for compressing four-dimensional light field data (IEEE, Germany, 2013)
A Sang, T Sun, H Chen, H Feng, in Int. Conf. On Image Analysis and Signal Processing. A 4D nth-order Walsh orthogonal transform algorithm used for color image coding (IEEE, China, 2010)
M Iwahashi, T Orachon, H Kiya, in IEEE International Conference on Image Processing. Three dimensional discrete wavelet transform with deduced number of lifting steps (2013)
M Iwahashi, T Orachon, H Kiya, in Proc. of Asia Pacific Signal and Information Processing (APSIPA) Annual Summit and Conference (ASC). Non separable 3D lifting structure compatible with separable quadruple lifting DWT (2013)
FA Binti Hamzah, T Yoshida, M Iwahashi, in Proc. of IEEE ICASSP. Non-separable quadruple lifting structure for four-dimensional integer wavelet transform with reduced rounding noise (IEEE, New Orleans, 2017)
M Iwahashi, H Kiya, in Discrete Wavelet Transforms. Condition on word length of signals and coefficients for DC lossless property (InTechOpen, ISBN 978-953-307-313-2, 2011), pp. 231–254. Available from: https://www.intechopen.com/books/discrete-wavelet-transforms-algorithms-and-applications/condition-on-word-length-of-signals-and-coefficients-for-dc-lossless-property-of-dwt
"Joint Photographic Experts Group: JPEG 2000 Image Coding System". Patent ISO / IEC FCD 15444-1, 2000.
S Poomrittigul, M Iwahashi, H Kiya, Reduction of lifting steps of non separable 2D quadruple lifting DWT compatible with separable 2D DWT. IEICE Trans. Fundam. Electron. Commun. Comput. Sci. E97-A(7), 1492–1499 (2014)
D Boye et al., in Proc. SPIE 8669, Medical Imaging 2013: Image Processing. Population based modeling of respiratory lung motion and prediction from partial information (2013)
JV Haxby, MI Gobbini, ML Furey, A Ishai, JL Schouten, P Pietrini, Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science 293(5539), 2425–2430 (2001)
C Cortes, L Kabongo, I Macia, OE Ruiz, J Florez, Ultrasound image dataset for image analysis algorithms evaluation, Innovation in Medicine and Healthcare 2015. Smart Innovation, Systems and Technologies, vol 45 (2015), pp. 447–457
J-R Ohm, in Multimedia communication technology: representation, transmission and identification of multimedia signals. Linear systems and transforms (Springer-Verlag, New York, 2004), pp. 96–104
We thank Saad Anis, Ph.D., from the Edanz Group (www.edanzediting.com/ac) for editing a draft of this manuscript.
Fairoza Amira Binti Hamzah received the Diploma of Electrical and Electronics Engineering from the Japanese Associate Degree Program, Selangor Industrial University, Malaysia, in 2012. She then received B. Eng. and M. Eng. degrees in Electrical, Electronics and Information Engineering from Nagaoka University of Technology in 2014 and 2016, respectively, and is currently pursuing a Ph.D. in Engineering in the Information Science and Control Engineering department of the same university. In 2017, she undertook a research internship in the Department of Computer Science, University of Warwick, UK. Her research interests are in digital signal processing and image compression. She is a Graduate Member of the Board of Engineers Malaysia (BEM) and the Institute of Engineers Malaysia (IEM), and a Graduate Student Member of IEEE.
Sayaka Minewaki received her B. Eng. and M. Eng. degrees in Engineering from Kyushu Institute of Technology in 2001 and 2003, respectively. In 2006, she finished a Ph.D. program without dissertation at the Department of Artificial Intelligence, Kyushu Institute of Technology. In 2006, she joined the Yuge National College of Technology, where she served concurrently as a Lecturer. In 2016, she joined Nagaoka University of Technology, where she is currently an Assistant Professor of the Department of Electrical, Electronics and Information Engineering. Her research interests are in the fields of digital signal processing, image compression and natural language processing.
Taichi Yoshida received B. Eng., M.Eng., and Ph.D. degrees in Engineering from Keio University, Yokohama, Japan, in 2006, 2008, and 2013, respectively. In 2014, he joined Nagaoka University of Technology as an Assistant Professor in the Department of Electrical, Electronics and Information Engineering. He then joined the University of Electro-Communications, Tokyo, Japan in 2018. His research interests are in the field of filter bank design and image coding application. He is a member of IEEE.
Masahiro Iwahashi received his B. Eng, M. Eng., and D. Eng. degrees in electrical engineering from Tokyo Metropolitan University in 1988, 1990, and 1996, respectively. In 1990, he joined Nippon Steel Co. Ltd. From 1991 to 1992, he was dispatched to Graphics Communication Technology Co. Ltd. In 1993, he joined Nagaoka University of Technology, where he is currently a professor of the Department of Electrical Engineering, Faculty of Technology. From 1995 to 2001, he served concurrently as a lecturer of Nagaoka Technical College. From 1998 to 1999, he was dispatched to Thammasat University in Bangkok, Thailand, as a JICA expert.
His research interests are in the area of digital signal processing, multi-rate systems, and image compression. From 2007 to 2011, he served as an editorial committee member of the transaction on fundamentals. He is serving as a reviewer of IEEE, IEICE, and APSIPA. He is currently a senior member of the IEEE and IEICE.
Please contact author for data requests.
Department of Electrical, Electronics and Information Engineering, Nagaoka University of Technology, Niigata, Japan
Fairoza Amira Binti Hamzah, Sayaka Minewaki & Masahiro Iwahashi
Department of Computer and Network Engineering, The University of Electro-Communications, Tokyo, Japan
Taichi Yoshida
Fairoza Amira Binti Hamzah
Sayaka Minewaki
Masahiro Iwahashi
The corresponding (first) author made substantial contributions to the conception and design of the study and to the acquisition, analysis, and interpretation of the data. The manuscript was critically revised for intellectual content by the second, third, and fourth authors.
Correspondence to Fairoza Amira Binti Hamzah.
Binti Hamzah, F.A., Minewaki, S., Yoshida, T. et al. Reduction of rounding noise and lifting steps in non-separable four-dimensional quadruple lifting integer wavelet transform. J Image Video Proc. 2018, 36 (2018). https://doi.org/10.1186/s13640-018-0271-0 | CommonCrawl |
BMC Systems Biology
Cooperative binding mitigates the high-dose hook effect
Ranjita Dutta Roy1,2,
Christian Rosenmund2 &
Melanie I. Stefan ORCID: orcid.org/0000-0002-6086-73573,4,5
BMC Systems Biology volume 11, Article number: 74 (2017)
The high-dose hook effect (also called prozone effect) refers to the observation that if a multivalent protein acts as a linker between two parts of a protein complex, then increasing the amount of linker protein in the mixture does not always increase the amount of fully formed complex. On the contrary, at a high enough concentration range the amount of fully formed complex actually decreases. It has been observed that allosterically regulated proteins seem less susceptible to this effect. The aim of this study was two-fold: First, to investigate the mathematical basis of how allostery mitigates the prozone effect. And second, to explore the consequences of allostery and the high-dose hook effect using the example of calmodulin, a calcium-sensing protein that regulates the switch between long-term potentiation and long-term depression in neurons.
We use a combinatorial model of a "perfect linker protein" (with infinite binding affinity) to mathematically describe the hook effect and its behaviour under allosteric conditions. We show that allosteric regulation does indeed mitigate the high-dose hook effect. We then turn to calmodulin as a real-life example of an allosteric protein. Using kinetic simulations, we show that calmodulin is indeed subject to a hook effect. We also show that this effect is stronger in the presence of the allosteric activator Ca2+/calmodulin-dependent kinase II (CaMKII), because it reduces the overall cooperativity of the calcium-calmodulin system. It follows that, surprisingly, there are conditions where increased amounts of allosteric activator actually decrease the activity of a protein.
We show that cooperative binding can indeed act as a protective mechanism against the hook effect. This will have implications in vivo where the extent of cooperativity of a protein can be modulated, for instance, by allosteric activators or inhibitors. This can result in counterintuitive effects of decreased activity with increased concentrations of both the allosteric protein itself and its allosteric activators.
Since the early 20th century, immunologists have noted that more is not always better: Increasing the amount of antibody in an antibody-antigen reaction could reduce, instead of increase, the amount of precipitating antibody-antigen complex [1]. Similarly, mice receiving larger doses of anti-pneumococcus horse serum were not more, but less protected against pneumococcus infection [2, 3]. There was clearly a range of antibody concentrations above the optimum at which no effects (or negative effects) were obtained. This region of antibody concentrations was named the prozone, and the related observation the "prozone effect" [1–3] or (after the shape of the complex formation curve) the "high-dose hook effect" (reviewed in [4, 5]).
Over the following decades, the high-dose hook effect became better understood beyond its first application in immunology, and as a more general property of systems involving multivalent proteins. In 1997, Bray and Lay showed using simulations of various types of protein complexes that the prozone effect is a general phenomenon in biochemical complex formation, and occurs whenever one protein acts as a "linker" or "bridge" between parts of a complex [6]. This was corroborated using a mathematical model of an antibody with two antigen-binding sites by Bobrovnik [7] and in a DNA-binding experiment by Ha et al. [8].
The hook effect thus results from partially bound forms of the "linker" proteins competing with each other for binding partners, and as a consequence, there is a regime of concentrations where adding more linker protein will decrease the amount of fully formed complexes, rather than increase it (see Fig. 1).
Binding of ligands A, B to a bivalent linker protein L. a Low linker concentration: availability of L limits the formation of total complexes (LAB, in colour). b Linker concentration on the order of ligand concentration: Formation of fully formed complex (LAB) reaches its maximum. c Concentration of linker L much higher than that of A or B: partially bound forms prevail, and formation of fully formed complex (LAB) goes down in absolute terms
Are all complexes with a central multivalent "linker" protein equally susceptible to the hook effect? Based on simulation of allosterically regulated proteins using the Allosteric Network Compiler (ANC), Ollivier and colleagues suggested that allostery can mitigate the prozone effect [9].
In this case, ligand binding to the linker protein is cooperative (reviewed in [10]), and the simulations by Ollivier et al. showed that the higher the cooperativity, the less pronounced the hook effect [9].
This agrees with what we know about cooperative binding: If ligand binding to one site is conducive to ligand binding to other sites, this will favour the formation of fully assembled complex over partial complexes, and thus increase the total amount of fully formed complex at a given linker concentration, compared to the non-cooperative case. In other words, partially bound forms of the linker protein still compete among themselves for binding partner, but cooperative binding skews the competition in favour of the forms that have more binding sites occupied and are thus closer to the fully bound form.
In this paper, we formalise and further develop these ideas. We first provide a mathematical description of the principle behind the high-dose hook effect and show that it is indeed smaller for proteins that display cooperative ligand binding.
We then go on to examine how this applies to allosteric proteins. We have decided to investigate the case of calmodulin, an allosteric tetra-valent calcium binding protein that is present in many tissues of the human body. In neurons, calmodulin acts as a switch between long-term potentiation and long-term depression of a synaptic connection in response to the frequency, duration and amplitude of a calcium signal [11]. We investigate the effects of both the hook effect itself and the allosteric nature of calmodulin under conditions comparable to its cellular environment.
A combinatorial model shows that increasing amounts of linker protein lead to decreasing amounts of complex
We start by looking at a case in which a linker protein L binds perfectly (i.e. with an infinitely small dissociation constant \(K_d\)) to one molecule each of A and B to form a ternary complex (LAB, see Fig. 1). The binding sites for A and B are separate and have the same affinity for the linker L.
In the following, we will denote amounts or numbers of molecules with lower-case letters: a will be the number of molecules of A, b the number of molecules of B, and λ the number of molecules of L. Without loss of generality, we will assume that b≤a.
In this case (see "Methods" section for details), we can write the expected amount of LAB as a function of λ as a three-part function:
$$E_{\text{LAB}}(\lambda) = \left\{ \begin{array}{ll} \lambda & \text{if}~ \lambda \leq b \\ b & \text{if}~ b < \lambda \leq a \\ \frac{ab}{\lambda} & \text{if}~ a < \lambda \\ \end{array} \right. $$
A plot of the above function for a = 80, b=50, and λ=1 to 400 is shown in Fig. 2 (black line). In order to visualise the stochastic fluctuation around those expected values, for each value of λ, the figure also shows the result of 100 stochastic simulations (grey dots). For each of these, a molecules of L were randomly chosen for binding to A, and b molecules of L were randomly chosen for binding to B, and we then counted the resulting number of molecules of L that were bound to both A and B (see "Methods").
Prozone effect for a Linker protein without cooperativity, assuming perfect binding. Black line: Expected value. Grey dots: Results of 100 stochastic simulations. Amount of linker protein (lambda) varied from 1 to 400, amounts of proteins A and B were 80 and 50, respectively. Simulations were run using MATLAB [36]
As we can see, the amount of fully bound complex will first increase with increasing amounts of L, then stay constant (at b) until the amount of L exceeds the amounts of both A and B, and then go down again as L increases further. In other words, for large enough L, adding L will decrease the expected amounts of fully bound complex LAB. This is the high-dose hook effect.
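A minimal sketch of this stochastic experiment (the published simulations were run in MATLAB [36]; NumPy is used here). It assumes perfect binding: for each value of λ, a molecules of L are chosen at random to bind A and b molecules to bind B, and the molecules of L carrying both ligands are counted. The function name and the sampled λ values are placeholders.

```python
import numpy as np

def count_full_complexes(lam, a=80, b=50, n_rep=100, seed=0):
    """Mean number of fully bound LAB complexes over n_rep random
    assignments of a molecules of A and b molecules of B to lam
    molecules of a perfect, non-cooperative linker L."""
    rng = np.random.default_rng(seed)
    counts = []
    for _ in range(n_rep):
        bound_A = rng.choice(lam, size=min(a, lam), replace=False)
        bound_B = rng.choice(lam, size=min(b, lam), replace=False)
        counts.append(len(np.intersect1d(bound_A, bound_B)))
    return np.mean(counts)

for lam in (25, 50, 80, 130, 200, 400):
    print(lam, count_full_complexes(lam))   # rises to b = 50, then declines
```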
Cooperative binding attenuates the high-dose hook effect
Now, how does the situation change if binding to L is cooperative, i.e. if binding of L to a molecule of A (or B) is more likely when a molecule of B (or A) is already bound?
In that case (see "Methods" section for details), the function for E LAB changes to
$$E_{\text{LAB}}(\lambda) = \left\{ \begin{array}{ll} \lambda & \text{if} ~\lambda \leq b \\ b & \text{if}~ b < \lambda \leq ac \\ \frac{abc}{\lambda} & \text{if}~ ac < \lambda \\ \end{array} \right. $$
Here, c denotes a cooperativity coefficient, with c=1 for non-cooperative systems and c>1 for positively cooperative systems.
How is this cooperative case different from the non-cooperative case? It is easy to see that the maximum number of bound complexes is still the same, because this is determined by b (in other words, the availability of the scarcer of the two ligands). Two things, however, change: First, the range of concentrations at which this maximum number of complexes is formed becomes larger, i.e. we can increase λ further without seeing a detrimental effect on LAB formation. Second, after the maximum is reached, the decline in the expected number of LAB complexes as a function of λ is less steep. There is still a hook effect, but the effect is less drastic, and it sets in at higher concentrations of L. This is how cooperative binding works to counteract the hook effect. Figure 3 shows the cooperative case for the same values of a, b, and λ as the non-cooperative example shown above.
The Prozone effect for a Linker protein with cooperativity, assuming perfect binding. Expected values are shown. Amount of linker protein (lambda) varied from 1 to 400, amounts of proteins A and B were 80 and 50, respectively. The cooperativity constant c was set to 2. Plot was drawn in MATLAB [36]
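For completeness, a short sketch that evaluates the two piecewise expected-value expressions side by side, with a = 80, b = 50, and c = 2 as in Figs. 2 and 3 (the function name is ours, not from the original model).

```python
import numpy as np

def expected_LAB(lam, a=80, b=50, c=1.0):
    """Expected number of fully bound LAB complexes for a perfect linker,
    with cooperativity coefficient c (c = 1 is the non-cooperative case)."""
    if lam <= b:
        return lam
    if lam <= a * c:
        return b
    return a * b * c / lam

lams = np.arange(1, 401)
non_coop = np.array([expected_LAB(l) for l in lams])
coop     = np.array([expected_LAB(l, c=2.0) for l in lams])
# at lambda = 400 the cooperative linker retains twice as much full complex
print(non_coop[-1], coop[-1])   # 10.0 20.0
```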
The above analysis assumes that binding of A and B to L is perfect, in the sense that if there is a free molecule of ligand and there is an unoccupied binding site, then binding will happen with a probability of 1. In real biological systems, of course, such certainty does not exist. The probability of a binding event depends not only on the availability of ligand and binding sites, but also on their affinities, usually measured in terms of association or dissociation constants.
This will affect the expected number of fully bound complexes, the range of concentrations at which certain behaviours can be observed, and the way we think about cooperativity. An analytical analysis is complicated by the fact that, unlike in most other binding scenarios that are well described in theoretical biochemistry, we are operating under conditions of "ligand depletion", where the limited availability of ligand will affect the dynamic behaviour of the system [12].
Therefore, the scenario of real-life biological systems with non-zero dissociation constants lends itself well to simulation approaches. In simulations of biochemical systems, one possible way of representing cooperative binding is as a decrease in dissociation constants (i.e. an increase in affinity) if one or more of the binding sites on the receptor are already occupied [10].
Calmodulin binding to calcium displays a high-dose hook effect
In order to investigate whether we can detect a hook effect in a simple linker protein under conditions found in biochemical systems (with finite association constants), we examined the high-dose hook effect using an earlier model of calmodulin activation by calcium [13].
Calmodulin is a calcium-sensing protein that has an important role in bidirectional neuronal plasticity. In the post-synaptic neuron, it acts as a "switch" between induction of long-term potentiation (LTP) and long-term depression (LTD), by activating either Ca2+/calmodulin-dependent kinase II (CaMKII) or calcineurin, respectively (reviewed in [14]). The decision to activate either one or the other depends on the input frequency, duration and amplitude of the postsynaptic calcium signal [11]. Each calmodulin molecule binds to four calcium ions in a cooperative manner [15]. Structural evidence [16, 17] suggests that this cooperativity arises from allosteric regulation. According to this model [13, 18], calmodulin can exist either in the T state with lower calcium binding affinities or in the R state with higher calcium binding affinities. The more calcium ions are bound to a calmodulin molecule, the higher the likelihood that it will transition from the T state to the R state.
Other models of calmodulin regulation exist [19, 20], but for our purposes of examining the relationship between cooperativity and the hook effect, the allosteric model proposed by Stefan et al. [13] is sufficiently detailed. The model accounts for two states of calmodulin (R and T) and four calcium binding sites, with different calcium affinities. In addition, R state calmodulin can bind to two allosteric activators, CaMKII or calcineurin (PP2B).
As expected, wildtype calmodulin displays a high-dose hook effect, as shown in the black line in Fig. 4: If we plot the formation of fully-bound calmodulin (calm-Ca4) as a function of the initial calmodulin concentration, then the curve initially rises, but then drops again at high doses of calmodulin, indicating that calmodulin molecules compete with each other for calcium binding.
Reduced hook effect in cooperative (wt) calmodulin. This figure shows the results of simulations on wildtype calmodulin (which is allosterically regulated, in black) compared to a non-cooperative in silico mutant (R state only, in red). The plot of fully bound calmodulin as a function of initial calmodulin concentration shows a prozone effect in both cases, but it is more pronounced in the non-cooperative version
Is the high-dose hook effect dependent on our particular parameter choices? In this model, we used parameters for dissociation constants and R-to-T transition that had previously been shown to produce simulation results consistent with the available literature on calmodulin binding to Calcium under a variety of conditions [13]. Nonetheless, we repeated the simulations at varying dissociation constants and varying values of L (which governs the transition between R and T states). As shown in Additional file 1, a high-dose hook effect exists in a variety of parameter regimes, although it can be more or less pronounced.
Allostery mitigates the high-dose hook effect in calmodulin
If it is true that cooperativity helps mitigate the prozone effect, then a non-cooperative protein with similar properties to calmodulin would show a higher hook effect than calmodulin itself. To test this hypothesis, we created an artificial in silico variant of calmodulin that binds to calcium in a non-cooperative way. This was done by abolishing R to T state transitions in the model, so that calmodulin could exist in the R state only. It is important at this point to differentiate between affinity and cooperativity: The R state only version of calmodulin has higher calcium affinity than the "wildtype" version (which can exist in the R state or the T state). But the R state only version has itself no cooperativity, because cooperativity arises from the possibility of transitioning between the T and R states [21, 22].
Figure 4 shows the results of two simulations run on wildtype calmodulin and an R-state-only in silico mutant, respectively. Plotting fully bound calmodulin as a function of initial calmodulin concentration reveals a high-dose hook effect in both cases. However, despite the R-state-only variant reaching a higher peak (due to its higher overall affinity), it also shows a more pronounced hook effect, with lower absolute levels of fully bound complex at higher calmodulin concentrations.
Molecular environment modulates calmodulin cooperativity and hence, susceptibility to the high-dose hook effect
We have shown that calmodulin binding to calcium can be affected by the hook effect, and that this hook effect is stronger in non-cooperative versions of calmodulin. In order to assess the relevance of these findings for the cellular function of calmodulin, we need to answer two questions: First, are the concentration regimes under which this system displays a hook effect ever found under physiological conditions? And second, are there existing forms of calmodulin that resemble our "R state only" in silico mutation and are therefore non-cooperative?
Calmodulin is found in various concentrations in various tissues of the body, from micromolar concentrations in erythrocytes to tens of micromolar concentrations in some areas of the brain [23]. The calmodulin concentrations used in our simulations are therefore physiologically relevant, especially in the higher range, where the prozone effect is most pronounced.
Our mathematical treatment and simulations have shown that allosteric regulation mitigates the hook effect. But what is the relevance of this for calmodulin? After all, there is no known variant of calmodulin that exists only in the R state or only in the T state. However, there are allosteric modulators that will stabilise one of the two states, and they can exist in high concentrations. To investigate the effect of the presence of an allosteric modulator, we repeated the above simulations in the presence of 140 μM CaMKII. This number is consistent with the number of CaMKII holoenzymes found in post-synaptic densities in labelling studies [24].
The results of our simulations in the presence of 140 μM CaMKII are shown in Fig. 5 a. Since CaMKII is an allosteric activator, it stabilises the R state of calmodulin over the T state. At such high concentrations of CaMKII, the R state dominates, and calmodulin behaves almost like the theoretical R-state-only form. In particular, the hook effect is exacerbated at high calmodulin concentrations.
Allosteric modulators can exacerbate the hook effect by reducing cooperativity. As in the previous figure, we show fully bound calmodulin as a function of initial calmodulin concentration, both for wildtype calmodulin (black) and an in silico mutant that exists only in the R state (red). We also show the results of adding two concentrations of the allosteric activator CaMKII (blue). a At 140 μM CaMKII, calmodulin exists almost exclusively in the R state and thus behaves like the non-cooperative in silico mutant. b At 1 μM CaMKII, both states exist and the prozone effect is comparable to wildtype calmodulin
To assess the effect of the concentration of the allosteric activator, we compared this scenario with one where the CaMKII concentration was reduced to 1 μM. In this case (shown in Fig. 5 b) the R state is stabilised to some extent, but R and T states still co-exist, and cooperativity is therefore preserved. While the initial peak of fully bound complex is higher than for wildtype calmodulin in the absence of any allosteric effectors, the prozone effect is reduced.
Taken together, this indicates that under conditions that render a protein susceptible to the high-dose hook effect, higher concentrations of an allosteric activator result in less activity than lower concentrations.
Cooperativity gives partially bound linkers a competitive edge
In this study we asked whether there is a general principle by which allosterically regulated proteins such as calmodulin are - to some extent - protected from the high-dose hook effect. To mathematically examine this question, we have used combinatorics to show how the high-dose hook effect arises in a simple trimolecular complex with perfect binding affinities. This is, in essence, due to the linker protein competing with other instances of itself for full complex formation. This result reproduces the one found by Ha and colleagues, who derived algebraic expressions for all concentrations in a similar system and systematically varied dissociation constants and concentrations of components to explore the prozone effect [8].
In addition, we also show that cooperative binding mitigates the high-dose hook effect. It does so by essentially giving partially bound versions of the linker protein a competitive advantage, so that the population is skewed towards either fully bound forms or fully unbound forms, at the expense of partially bound forms.
Cooperativity can protect calmodulin from the high-dose hook effect under physiological conditions
Calcium binding to calmodulin is cooperative, and this suggests that calmodulin would be protected, to some extent, from the high-dose hook effect. Indeed, we could show that is the case for physiological ranges of calmodulin concentration. However, the cooperative nature of calmodulin binding to calcium itself is not a fixed property, but can vary according to the cellular environment. Cooperativity can be reduced under conditions of ligand depletion [12], which are also the concentration regimes where the hook effect becomes noticeable. In addition, high concentrations of an allosteric modulator can reduce cooperativity. Thus, the susceptibility of a protein to the high-dose hook effect depends not only on intrinsic properties of the protein and its ligand, but also on the cellular context.
More is not always better
As we have seen, the presence of an allosteric activator can reduce cooperativity. This is because cooperativity in allosteric molecules arises, fundamentally, from the ability of the molecule to transition between T and R states, which have different ligand affinities. By pulling all of the allosteric protein towards either the T or the R state, cooperativity is reduced, and the high-dose hook effect becomes more pronounced. Interestingly, this is true no matter whether it is the T state or the R state that is stabilised or, in other words, whether the allosteric modulator is an inhibitor or an activator. Thus, under conditions where the hook effect is noticeable, allosteric activation behaves counterintuitively: There is less activity in the absence of the allosteric activator than in its presence, and less activity when the levels of allosteric activator are high than when they are low.
Possible experimental validation
Our model predicts that the presence of cooperativity protects, to some extent, against the high-dose hook effect.
Ha and Ferrell [25] have investigated the link between cooperativity and the high-dose hook effect using a binding system composed of three DNA strands: One strand can bind to two others, and cooperativity can be engineered by tweaking the amount of overlap. Indeed, the construct identified as positively cooperative showed a less pronounced hook effect than the construct identified as non-cooperative [25]. This corroborates our ideas in a synthetic binding system.
In order to assess whether this is also the case for systems with more than two binding sites, and in particular for multivalent proteins, one would need to be able to do two things: First, to be able to measure full occupancy of some multivalent protein (without measuring partially occupied states). Second, this would need to be done on a linker protein of which there are two related forms, one of which shows cooperative binding and the other does not.
For calmodulin, as we have seen, a non-cooperative (or less cooperative) state can be obtained by adding a large concentration of CaMKII. This will stabilise the R state, which is not itself cooperative. In contrast, in the absence of allosteric modulators, both the R and T states are populated, and the transition between them is what confers cooperativity to calmodulin. Thus, creating a less cooperative form of calmodulin in vitro is easy. However, measuring full saturation of calmodulin is not. Fractional saturation (i.e. the ratio of occupied binding sites) cannot serve as a proxy, because it does not show a Hook effect, instead monotonically going down as calmodulin concentration increases, as expected. In addition, fractional occupancy profiles in the absence and presence of CaMKII do not show a big difference (see Additional file 1). Thus, fractional occupancy is not a good proxy for full saturation. Instead, is it possible to measure conformational state? Moree et al. have shown that it is possible to measure conformational change in calmodulin that occurs with Calcium binding [26]. However, conformational state and full saturation do not directly translate into each other, as can be seen in Additional file 1, where we plotted \(\bar {R}\) for calmodulin in the absence and presence of CaMKII. Thus, measuring fully saturated calmodulin (and therefore assessing the magnitude of the high-dose hook effect in vitro) is non-trivial. As molecular measurement techniques develop in the coming years, though, this work provides a hypothesis that will be amenable to testing.
Another possibility would be to compare hemoglobin and myoglobin. Both have similar properties, but hemoglobin exists as a tetramer exhibiting cooperative binding to oxygen, while myoglobin is monomeric and therefore non-cooperative (reviewed in [22]). Obviously, since myoglobin has only one oxygen-binding site, it does not itself display a high-dose hook effect. Instead, it could be used as a proxy for what a non-cooperative version of hemoglobin would look like. The fraction of fully occupied "non-cooperative hemoglobin" can simply be computed by taking the fractional saturation of myoglobin to the fourth power (essentially grouping free myoglobin molecules into groups of four and declaring full occupation if all four are bound). The prediction is that this would show a stronger high-dose hook effect than hemoglobin itself.
Relevance to other systems
These results are likely to be relevant in a wide range of biological systems. For instance, neuronal signalling depends on a number of proteins with multiple ligand binding sites, including membrane receptors such as the AMPA receptors, NMDA receptors or other postsynaptic calcium sensors such as calbindin. The existence of multiple ligand binding sites and, under some conditions, the relative scarcity of ligands (e.g. of glutamate in the synaptic cleft, and of calcium in the postsynaptic neuron) makes those proteins, in principle, prone to the hook effect. Interestingly, several of these proteins are allosterically regulated (this is the case, for instance, for AMPA receptors [27] and for NMDA receptors [28]), which could confer a sensitivity advantage at high receptor-to-ligand ratios [12].
The hook effect is also a frequently discussed problem in medical diagnostics, because it can lead to false-negative effects if the levels of analyte to be detected are too high. Recent examples of this effect have been reported in the diagnosis of meningitis [29], malaria [30, 31], and even in pregnancy tests [32]. To avoid such cases, systematic dilution of the sample (and thus a reduction of analyte concentration) can help [33], but is not always practicable [34]. Given our results, another way to reduce the risk of false-negative results due to the hook effect would be to somehow make analyte binding to the reporter in the assay cooperative. One way of achieving this in a sandwich immunoassay by making one of the receptors multimeric has been patented in 2001 [34].
If a protein acts as a linker between different parts of a multimolecular complex, then there are concentration regimes where adding more of the linker protein to the mixture will result in less overall complex formation. This phenomenon is called the high-dose hook effect or prozone effect.
We have provided an idealised mathematical description of the hook effect and shown that allosteric regulation does indeed mitigate the hook effect, as has been predicted before [9].
Whilst this means that allosteric proteins such as calmodulin are, to some extent, protected from the high-dose hook effect, the presence of allosteric modulators can increase susceptibility to the high-dose hook effect. The extent of the hook effect is therefore strongly dependent on the cellular microenvironment.
Complex formation curve for LAB
Assume a perfect binding system with λ molecules of a linker molecule L, where every molecule of L can bind to one molecule of A and one molecule of B. Numbers of A and B are denoted by a and b, respectively, with b≤a (wlog).
Assuming perfect binding and no cooperativity, the molecules of A and B will be distributed randomly across molecules of L. At the end of the binding phase, any given molecule of L will be either free, bound to A only, bound to B only, or part of a complete LAB complex. Clearly, this is a combinatorial problem that can have a variety of possible outcomes in terms of the numbers of complete LAB complex, partial complexes (LA or LB) and free (unbound) L.
We are interested in expressing the expected number of full complexes (LAB) formed as a function of λ. We will denote this quantity as E LAB(λ).
As long as the number of linker proteins L is limiting, then the total number of ternary complexes formed will be λ.
$$E_{\text{LAB}}(\lambda) = \lambda \qquad \text{if}\ \lambda \leq b $$
If the amount of linker protein is larger than the amount of protein B, but smaller than the amount of protein A, then all of L will be bound to A at least, and the amount of completely formed LAB complex will depend on b alone.
$$E_{\text{LAB}}(\lambda) = b \qquad \text{if}\ b < \lambda \leq a $$
Finally, if the amount of linker protein is larger than both a and b, then we have to consider all possible binding scenarios. Figure 6 shows a probability tree for each molecule of L (with c=1 in the absence of cooperativity). For reasons of convenience, we show binding as a two-stage process (A binds first, then B), but this is not meant to represent a temporal order. The resulting probabilities for each end state would be the same if the order of binding was switched.
Probability of binding events for a cooperative linker L when both a and b are smaller than λ. For each L, the arrows are marked with the probabilities of the associated binding event. The amount of cooperativity is indicated by a multiplicative factor c, where c>1 denotes positive cooperativity, and c=1 in the absence of cooperativity
The expected number of LAB complexes can be computed by taking the probability of each L becoming an LAB complex and multiplying by the amount of L:
$$E_{\text{LAB}}(\lambda) = \frac{a}{\lambda} \frac{b}{\lambda} \lambda = \frac{ab}{\lambda} \qquad \text{if}\ a < \lambda $$
Thus, for fixed amounts of A and B (with b≤a), we can write the expected amount of LAB as a function of λ as a three-part function:
$$E_{\text{LAB}}(\lambda) = \left\{ \begin{array}{ll} \lambda & \text{if}~ \lambda \leq b \\ b & \text{if} ~b < \lambda \leq a \\ \frac{ab}{\lambda} & \text{if}~ a < \lambda \\ \end{array} \right. $$
Formation of LAB complex if ligand binding is cooperative
The case where ligand binding is cooperative (i.e. where binding of a molecule of A facilitates the binding of a molecule of B to L, and vice versa) is analogous.
Again, as long as λ is smaller than both a and b, the amount of linker L will be limiting, and we thus have \(E_{\text{LAB}}(\lambda) = \lambda\).
If the amount of linker protein is larger than the amount of protein B, then there can be at most b fully bound complexes, just like in the non-cooperative case. Thus, b is the maximum possible value for \(E_{\text{LAB}}\).
If λ exceeds both a and b by a sufficient amount, we can again follow a probability tree (displayed in Fig. 6) to determine the probability of a single linker protein being fully bound. Again, this is computed as the probability of A binding (\(\frac {a}{\lambda }\), as before) times the probability of B binding, given A is already bound, which will depend both on \(\frac {b}{\lambda }\) (as before) and on a cooperativity coefficient c. This is a coefficient that modulates the probability of subsequent binding events, with c>1 indicating positive cooperativity and c=1 no cooperativity. For instance, for calmodulin binding to calcium, c would be around 3 and for hemoglobin binding to oxygen around 1.5 (computed from dissociation constants reported in [35]). This gives us an expected value for the number of fully formed LAB complexes:
$$E_{\text{LAB}}(\lambda) = \frac{abc}{\lambda} $$
What do we mean by "a sufficient amount"? Clearly, λ must be bigger than both a and b. But remember also that \(E_{\text{LAB}}\) is limited by b. So, the question is, when is \(\frac {abc}{\lambda }<b\)? This is the case when ac<λ.
Thus, the complete function for \(E_{\text{LAB}}\) is as follows:
$$E_{\text{LAB}}(\lambda) = \left\{ \begin{array}{ll} \lambda & \text{if}\ \lambda \leq b \\ b & \text{if}\ b < \lambda \leq ac \\ \frac{abc}{\lambda} & \text{if}\ ac < \lambda \\ \end{array} \right. $$
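To make the two piecewise expressions above concrete, here is a minimal sketch (not part of the original methods; the values of a, b and c are arbitrary illustrations) that evaluates \(E_{\text{LAB}}(\lambda)\) for both the non-cooperative (c=1) and cooperative (c>1) cases:

```python
import numpy as np

def expected_lab(lam, a, b, c=1.0):
    """Expected number of full LAB complexes for lam linker molecules.

    Implements the piecewise expressions for E_LAB(lambda); c = 1 gives the
    non-cooperative case, c > 1 positive cooperativity."""
    a, b = max(a, b), min(a, b)        # enforce b <= a, as assumed in the text
    if lam <= b:
        return lam                     # linker is limiting
    if lam <= a * c:
        return b                       # b is the maximum possible value
    return a * b * c / lam             # high-dose hook regime

lam_values = np.arange(1, 501)
no_coop = [expected_lab(l, a=100, b=60, c=1.0) for l in lam_values]
coop = [expected_lab(l, a=100, b=60, c=3.0) for l in lam_values]
```

Plotting no_coop and coop against lam_values illustrates the point made in the main text: with cooperative binding the decline at high λ sets in later and is shallower than in the non-cooperative case.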
Theoretical complex formation curves
The complex formation curves under the assumption of perfect binding shown in Fig. 2 were generated using MATLAB [36]. We also used MATLAB to simulate 100 cases of A and B binding to L as follows: For each simulation step, a (or λ, if λ<a) binding sites were randomly chosen and defined as bound to A, and b (or λ, if λ<b) binding sites were chosen and defined as bound to B. The number of binding sites occupied by both A and B was then determined and plotted. The MATLAB script used to generate the plots is provided as Additional file 2.
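For readers who prefer to see the procedure spelled out in code, here is a minimal sketch (ours, in Python rather than the MATLAB of Additional file 2) of the random-assignment simulation described above:

```python
import numpy as np

def simulate_lab(lam, a, b, n_trials=100, seed=1):
    """Monte-Carlo estimate of the number of full LAB complexes.

    Follows the described procedure: choose min(a, lam) linker molecules at
    random to be bound to A, min(b, lam) to be bound to B, and count how many
    are bound to both."""
    rng = np.random.default_rng(seed)
    counts = np.empty(n_trials)
    for i in range(n_trials):
        bound_a = rng.choice(lam, size=min(a, lam), replace=False)
        bound_b = rng.choice(lam, size=min(b, lam), replace=False)
        counts[i] = np.intersect1d(bound_a, bound_b).size
    return counts.mean()

# example: a = 50, b = 30 molecules of A and B, scanning the amount of linker L
for lam in (10, 30, 50, 100, 300):
    print(lam, simulate_lab(lam, a=50, b=30))
```

The averages follow the three-part expression for \(E_{\text{LAB}}(\lambda)\) derived above, with the expected hook-shaped decline once λ exceeds a.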
Calmodulin simulation
For simulations of the prozone effect in calcium binding to calmodulin, we used a model of calmodulin published earlier [13]. The model accounts for two conformational states of calmodulin (R and T) and four different calcium binding sites (A, B, C, D). In addition, R state calmodulin can bind to CaMKII or PP2B. The full model is available in BioModels Database [37] as BIOMD0000000183.
For simulations of calcium binding to wildtype calmodulin, the concentrations of both CaMKII and PP2B were set to zero. The Copasi file used to run our simulations is provided as Additional file 3.
For simulations of the R state only, the transition rates between R and T state were set to zero, and the initial concentration of calmodulin was set to be all in the R state. For simulations in the presence of an allosteric activator, we used a CaMKII concentration of 140 μM, which corresponds to reports of typical levels of around 30 holoenzymes of CaMKII found in post-synaptic densities with a volume of around \(5\times10^{-18}\) l [24]. To test the effect of reducing CaMKII concentration, simulations were run again setting CaMKII concentration to 1 μM.
Simulations were run using Copasi [38]. The simulations took the form of a parameter scan over initial calmodulin concentrations ranging from \(10^{-7}\) to \(10^{-5}\) M in 1000 steps. The scan was over free calmodulin T for all simulations, except for the "R state only model", where the scan was over free calmodulin R. All other calmodulin species were initially set to 0. Each parameter scan simulation was a time course lasting 1000 seconds, which was in all cases largely sufficient to equilibrate the model.
All simulation results were plotted in Grace (http://plasma-gate.weizmann.ac.il/Grace/).
Bayne-Jones S. Equilibria in precipitin reactions: The coexistence of a single free antigen and its antibody in the same serum. J Exp Med. 1917; 25(6):837–53.
Goodner K, Horsfall FL. The protective action of type I antipneumococcus serum in mice : I, the quantitative aspects of the mouse protection test. J Exp Med. 1935; 62(3):359–74.
Goodner K, Horsfall FL. The protective action of type I antipneumococcus serum in mice : IV, the prozone. J Exp Med. 1936; 64(3):369–75.
Dodig S. Interferences in quantitative immunochemical methods. Biochemia Medica. 2009:50–62. doi:10.11613/bm.2009.005.
Hoofnagle AN, Wener MH. The fundamental flaws of immunoassays and potential solutions using tandem mass spectrometry. J Immunol Methods. 2009; 347(1–2):3–11. doi:10.1016/j.jim.2009.06.003.
Bray D, Lay S. Computer-based analysis of the binding steps in protein complex formation. Proc Natl Acad Sci USA. 1997; 94(25):13493–8.
Bobrovnik SA. The problem of prozone in serum antibody titration and its mathematical interpretation. Ukr Biokhim Zh. 2003; 75(2):113–8.
Ha S, Kim S, Ferrell J. The prozone effect accounts for the paradoxical function of the cdk-binding protein suc1/cks. Cell Reports. 2016; 14(6):1408–21. doi:10.1016/j.celrep.2016.01.033.
Ollivier JF, Shahrezaei V, Swain PS. Scalable rule-based modelling of allosteric proteins and biochemical networks. PLoS Comput Biol. 2010; 6(11):1000975. doi:10.1371/journal.pcbi.1000975.
Stefan MI, Le Novère N. Cooperative binding. PLoS Comput Biol. 2013; 9(6):1003106. doi:10.1371/journal.pcbi.1003106.
Li L, Stefan MI, Le Novère N. Calcium input frequency, duration and amplitude differentially modulate the relative activation of calcineurin and CaMKII. PLoS One. 2012; 7(9):43810. doi:10.1371/journal.pone.0043810.
Edelstein SJ, Stefan MI, Le Novère N. Ligand depletion in vivo modulates the dynamic range and cooperativity of signal transduction. PLoS One. 2010; 5(1):8449. doi:10.1371/journal.pone.0008449.
Stefan MI, Edelstein SJ, Le Novère N. An allosteric model of calmodulin explains differential activation of PP2B and CaMKII. Proc Natl Acad Sci USA. 2008; 105(31):10768–73. doi:10.1073/pnas.0804672105.
Xia Z, Storm DR. The role of calmodulin as a signal integrator for synaptic plasticity. Nat Rev Neurosci. 2005; 6(4):267–76. doi:10.1038/nrn1647.
Crouch TH, Klee CB. Positive cooperative binding of calcium to bovine brain calmodulin. Biochemistry. 1980; 19(16):3692–8.
Kuboniwa H, Tjandra N, Grzesiek S, Ren H, Klee CB, Bax A. Solution structure of calcium-free calmodulin. Nat Struct Biol. 1995; 2(9):768–76.
Babu YS, Sack JS, Greenhough TJ, Bugg CE, Means AR, Cook WJ. Three-dimensional structure of calmodulin. Nature. 1985; 315(6014):37–40.
Czerlinski GH. Allosteric competition in calmodulin. Physiol Chem Phys Med NMR. 1984; 16:437–47.
Pepke S, Kinzer-Ursem T, Mihalas S, Kennedy MB. A dynamic model of interactions of Ca2+, calmodulin, and catalytic subunits of Ca2+/calmodulin-dependent protein kinase II. PLoS Comput Biol. 2010; 6(2):1000675. doi:10.1371/journal.pcbi.1000675.
Lai M, Brun D, Edelstein SJ, Le Novère N. Modulation of calmodulin lobes by different targets: An allosteric model with hemiconcerted conformational transitions. PLoS Comput Biol. 2015; 11(1):1004063. doi:10.1371/journal.pcbi.1004063.
Edelstein SJ. Extensions of the allosteric model for haemoglobin. Nature. 1971; 230(5291):224–7.
Edelstein SJ, Le Novère N. Cooperativity of allosteric receptors. J Mol Biol. 2013; 425(9):1424–32. doi:10.1016/j.jmb.2013.03.011.
Kakiuchi S, Yasuda S, Yamazaki R, Teshima Y, Kanda K, Kakiuchi R, Sobue K. Quantitative determinations of calmodulin in the supernatant and particulate fractions of mammalian tissues. J Biochem (Tokyo). 1982; 92(4):1041–8.
Petersen JD, Chen X, Vinade L, Dosemeci A, Lisman JE, Reese TS. Distribution of postsynaptic density (PSD)-95 and Ca2+/calmodulin-dependent protein kinase II at the PSD. J Neurosci. 2003; 23(35):11270–8.
Ha SH, Ferrell Jr J. Thresholds and ultrasensitivity from negative cooperativity. Science. 2016; 352(6288):990–3. doi:10.1126/science.aad5937.
Moree B, Connell K, Mortensen RB, Liu CT, Benkovic SJ, Salafsky J. Protein conformational changes are detected and resolved site specifically by second-harmonic generation. Biophys J. 2015; 109(4):806–15. doi:10.1016/j.bpj.2015.07.016.
Dutta-Roy R, Rosenmund C, Edelstein SJ, Le Novère N. Ligand-dependent opening of the multiple AMPA receptor conductance states: a concerted model. PLoS One. 2015; 10(1):0116616. doi:10.1371/journal.pone.0116616.
Urakubo H, Honda M, Froemke RC, Kuroda S. Requirement of an allosteric kinetics of NMDA receptors for spike timing-dependent plasticity. J Neurosci. 2008; 28(13):3310–23. doi:10.1523/JNEUROSCI.0303-08.2008.
Lourens A, Jarvis JN, Meintjes G, Samuel CM. Rapid diagnosis of cryptococcal meningitis by use of lateral flow assay on cerebrospinal fluid samples: influence of the high-dose "hook" effect. J Clin Microbiol. 2014; 52(12):4172–5. doi:10.1128/JCM.01683-14.
Gillet P, Mori M, Van Esbroeck M, Van den Ende J, Jacobs J. Assessment of the prozone effect in malaria rapid diagnostic tests. Malar J. 2009; 8:271. doi:10.1186/1475-2875-8-271.
Santos L, Rocha Pereira N, Andrade P, Figueiredo Dias P, Lima Alves C, Abreu C, Serrão R, Ribeiro M, Sarmento A. Prozone-like phenomenon in travellers with fatal malaria: report of two cases. J Infect Dev Ctries. 2015; 9(3):321–4.
Nigam A, Kumari A, Gupta N. Negative urine pregnancy test in a molar pregnancy: is it possible?BMJ Case Rep. 2014. doi:10.1136/bcr-2014-206483.
Butch AW. Dilution protocols for detection of hook effects/prozone phenomenon. Clin Chem. 2000; 46(10):1719–21.
Neumann U, Lenz HL, Franken N. Method for reducing hook effect in an immunoassay. 2001. US Patent 6184042 B1.
Stefan MI, Edelstein SJ, Le Novère N. Computing phenomenologic Adair-Klotz constants from microscopic MWC parameters. BMC Syst Biol. 2009; 3:68. doi:10.1186/1752-0509-3-68.
The MathWorks Inc. MATLAB. 2013.
Juty N, Ali R, Glont M, Keating S, Rodriguez N, Swat MJ, Wimalaratne S, Hermjakob H, Le Novère N, Laibe C, Chelliah V. Biomodels database: content, features, functionality and use. CPT: Pharmacometrics Syst Pharmacol. 2015; 2:1–14. doi:10.1002/psp4.3.
Hoops S, Sahle S, Gauges R, Lee C, Pahle J, Simus N, Singhal M, Xu L, Mendes P, Kummer U. COPASI–a COmplex PAthway SImulator. Bioinformatics. 2006; 22(24):3067–74. doi:10.1093/bioinformatics/btl485.
The authors thank members of the Le Novère Lab at the Babraham Institute, Cambridge (UK) for helpful discussions.
The calmodulin model is based on an earlier calmodulin model by Stefan et al [13], which is available on BioModels Database under the following link: https://www.ebi.ac.uk/biomodels-main/BIOMD0000000183. The version used here differs in the concentrations of molecular species involved; the model with calmodulin only is included as a supplementary material to this article; the "R only" and "T only" mutants can be generated by setting the parameter kRT to zero and adjusting the initial calmodulin concentration. Simulations in the presence of CaMKII can be generated by setting the initial CaMKII concentration as needed. Matlab code used to generate the figures in the first part of the paper is also available in the supporting information.
Department of Medicine Solna, Karolinska Institutet, Stockholm, Sweden
Ranjita Dutta Roy
NWFZ, Charité Crossover, Charite Universitätsmedizin, Berlin, Germany
Ranjita Dutta Roy & Christian Rosenmund
Department of Neurobiology, Harvard Medical School, Boston, USA
Melanie I. Stefan
Babraham Institute, Cambridge, UK
Centre for Integrative Physiology, University of Edinburgh, Edinburgh, UK
Christian Rosenmund
Developed theoretical framework: MIS; designed and performed simulations: RDR,MIS; analysed and discussed results: RDR, CR, MIS; wrote the paper: RDR, MIS. All authors read and approved the final manuscript.
Correspondence to Melanie I. Stefan.
Additional file 1
Supplemental text. (PDF 106 kb)
Additional file 2
Matlab code used to produce Figs. 4 and 6. (M 1.28 kb)
Additional file 3
Copasi file for the calmodulin model. (XML 421 kb)
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License(http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver(http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Roy, R.D., Rosenmund, C. & Stefan, M.I. Cooperative binding mitigates the high-dose hook effect. BMC Syst Biol 11, 74 (2017). https://doi.org/10.1186/s12918-017-0447-8
Prozone effect
High-dose Hook effect
Mechanistic model
Cooperativity
Allostery
Calmodulin
Systems physiology, pharmacology and medicine
Is the Casimir energy in CFT an observable?
We know that if we transform a 2d conformal field theory from a plane to a cylinder with perimeter $L$, the ground state energy will be shifted by $$E = -\frac{c}{24L}$$ due to the Schwarzian derivative term in the transformation of stress energy tensor.
This energy is the difference between the ground state energy of a theory on a cylinder and that of the same theory on a plane. How can we compare the ground state energies of two theories on different spacetimes? Therefore I would like to know: is this energy a physical observable? And if not, why is it important?
conformal-field-theory observables casimir-effect
WunderNatur
Of course, the free energy on the cylinder is not a measurable observable if you're given the theory on the infinite plane. But one can measure other observables which are proportional to the central charge, such as the two-point function of the stress-energy tensor.
There are situations where that expression is an observable. If you have a one-dimensional quantum system with periodic boundary conditions that flows to a (1+1)-dimensional CFT, then its ground state energy will generically be given by the formula $$ E = E_1 L + E_0 - \frac{\pi v c}{6L} + \cdots, $$ where the omitted terms are of higher order in $1/L$. (See below about the mismatch between our expressions.) Here, $E_1$, $E_0$, and $v$ are non-universal constants ($v$ is the velocity of excitations at low energy, usually called the "speed of light" in a field theory textbook). Then it is possible to "measure" the central charge term. For example, say you do some Monte-Carlo simulations to obtain the velocity $v$ of excitations, and then numerically calculate the ground state energy for several (large) values of $L$ and match it to the above equation. This lets you determine $c$.
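As a rough sketch of this fitting procedure (with made-up energies generated from the formula itself; the constants and noise level below are arbitrary, not from any real simulation), one could do something like:

```python
import numpy as np
from scipy.optimize import curve_fit

# Fabricated example data: finite-size ground-state energies E(L) generated
# from E(L) = E1*L + E0 - pi*v*c/(6L) with made-up constants plus small noise.
rng = np.random.default_rng(0)
L = np.arange(16, 129, 16, dtype=float)
E1_true, E0_true, v_known, c_true = -0.44, 0.20, 1.5, 0.5
E = E1_true * L + E0_true - np.pi * v_known * c_true / (6 * L)
E += rng.normal(scale=1e-4, size=L.size)

def finite_size_energy(L, E1, E0, c):
    # CFT prediction with the velocity v fixed from a separate measurement
    return E1 * L + E0 - np.pi * v_known * c / (6 * L)

params, _ = curve_fit(finite_size_energy, L, E, p0=(-0.4, 0.0, 1.0))
print("fitted central charge c =", params[2])
```

With real numerics one would of course obtain E(L) from the simulation itself and check that the fitted $c$ is stable as larger $L$ are included.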
In practice, it is much easier to extract central charge from the entanglement entropy. In particular, for an open one-dimensional quantum system, the entropy associated with tracing out half of the system is $S = (c/6) \log L$.
As a side-note, I think that what you are calling $L$ is really the radius of the cylinder, which is related to the perimeter by a factor of $2 \pi$. Finally, you are only considering the holomorphic sector, and above I'm everywhere considering also the antiholomorphic sector with an identical central charge. So that's why my expression is off by $4 \pi$ compared to yours.
Seth Whitsitt
$\begingroup$ Thank you so much for the detailed answer. My aim is not to determine the central charge. I am actually confused by a more basic question: How would you measure (the shift of) the ground state energy? I would usually think that the ground state energy is defined. $\endgroup$ – WunderNatur Nov 23 '19 at 3:19
$\begingroup$ In a quantum CFT in one spatial dimension, one typically defines the ground state energy to vanish in an infinite volume, since in this limit there is no energy scale in the problem. Given this definition, one can ask what the ground state energy is in a periodic system with length L. The answer is $-\pi v c/6L$. This $L$-dependent energy shift can be "measured" in appropriate one-dimensional quantum systems as I describe in my answer. $\endgroup$ – Seth Whitsitt Nov 23 '19 at 4:02
Short-term synaptic plasticity
Misha Tsodyks and Si Wu (2013), Scholarpedia, 8(10):3153. doi:10.4249/scholarpedia.3153, revision #182521
Curator: Si Wu
Tiziano D'Albis
Misha Tsodyks
Stefano Fusi
Dr. Misha Tsodyks, Weizmann Institute, Rehovot, Israel
Prof. Si Wu, State Key Lab of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
Short-term plasticity (STP) (Stevens 95, Markram 96, Abbott 97, Zucker 02, Abbott 04), also called dynamical synapses, refers to a phenomenon in which synaptic efficacy changes over time in a way that reflects the history of presynaptic activity. Two types of STP, with opposite effects on synaptic efficacy, have been observed in experiments. They are known as Short-Term Depression (STD) and Short-Term Facilitation (STF). STD is caused by depletion of neurotransmitters consumed during the synaptic signaling process at the axon terminal of a pre-synaptic neuron, whereas STF is caused by influx of calcium into the axon terminal after spike generation, which increases the release probability of neurotransmitters. STP has been found in various cortical regions and exhibits great diversity in properties (Markram 98, Dittman 00, Wang 06). Synapses in different cortical areas can have varied forms of plasticity, being either STD-dominated, STF-dominated, or showing a mixture of both forms.
Compared with long-term plasticity (Bi 01), which is hypothesized as the neural substrate for experience-dependent modification of neural circuit, STP has a shorter time scale, typically on the order of hundreds to thousands of milliseconds. The modification it induces to synaptic efficacy is temporary. Without continued presynaptic activity, the synaptic efficacy will quickly return to its baseline level.
Although STP appears to be an unavoidable consequence of synaptic physiology, theoretical studies suggest that its role in brain functions can be profound (see, e.g., publications in (Research Topic) and the references therein). From a computational point of view, the time scale of STP lies between fast neural signaling (on the order of milliseconds) and experience-induced learning (on the order of minutes or more). This is the time scale of many processes that occur in daily life, for example motor control, speech recognition and working memory. It is therefore plausible that STP might serve as a neural substrate for processing of temporal information on the relevant time scales. STP implies that the response of a post-synaptic neuron depends on the history of presynaptic activity, creating information that in principle can be extracted and used. In a large-size network, STP can greatly enrich the network's dynamical behaviors, endowing the neural system with information processing capacities that would be difficult to implement using static connections. These possibilities have led to significant interest in the computational functions of STP within the field of Computational Neuroscience.
1 Phenomenological model
2 Effects on information transmission
2.1 Temporal filtering
2.2 Gain control
3 Effects on network dynamics
3.1 Prolongation of neural responses to transient inputs
3.2 Modulation of network responses to external input
3.3 Induction of instability or mobility of network state
3.4 Enrichment of attractor dynamics
4 Appendix A: Derivation of a temporal filter for short-term depression
Phenomenological model
The biophysical processes underlying STP are complex. Studies of the computational roles of STP have relied on the creation of simplified phenomenological models (Abbott 97, Markram 98, Tsodyks 98).
In the model proposed by Tsodyks and Markram (Tsodyks 98), the STD effect is modeled by a normalized variable \(x\) (\(0\leq x \leq1\)), denoting the fraction of resources that remain available after neurotransmitter depletion. The STF effect is modeled by a utilization parameter \(u\), representing the fraction of available resources ready for use (release probability). Following a spike, (i) \(u\) increases due to spike-induced calcium influx to the presynaptic terminal, after which (ii) a fraction \(u\) of available resources is consumed to produce the post-synaptic current. Between spikes, \(u\) decays back to zero with time constant \(\tau_f\) and \(x\) recovers to 1 with time constant \(\tau_d \). In summary, the dynamics of STP is given by
\[\begin{aligned} \frac{du}{dt} & = & -\frac{u}{\tau_f}+U(1-u^-)\delta(t-t_{sp}),\nonumber \\ \frac{dx}{dt} & = & \frac{1-x}{\tau_d}-u^+x^-\delta(t-t_{sp}), \\ \frac{dI}{dt} & = & -\frac{I}{\tau_s} + Au^+x^-\delta(t-t_{sp}), \nonumber \tag{1}\end{aligned}\]
where \(t_{sp}\) denotes the spike time and \(U\) is the increment of \(u\) produced by a spike. We denote as \(u^-, x^-\) the corresponding variables just before the arrival of the spike, and \(u^+\) refers to the moment just after the spike. From the first equation, \(u^+ = u^- + U(1-u^-)\). The synaptic current generated at the synapse by the spike arriving at \(t_{sp}\) is then given by
\[\Delta I(t_{sp}) = Au^+x^-, \tag{2}\]
where \(A\) denotes the response amplitude that would be produced by total release of all the neurotransmitter (\(u=x=1\)), called absolute synaptic efficacy of the connections (see Fig. 1A).
The interplay between the dynamics of \(u\) and \(x\) determines whether the joint effect of \(ux\) is dominated by depression or facilitation. In the parameter regime of \(\tau_d\gg \tau_f\) and large \(U\), an initial spike incurs a large drop in \(x\) that takes a long time to recover; therefore the synapse is STD-dominated (Fig.1B). In the regime of \(\tau_f \gg \tau_d\) and small \(U\), the synaptic efficacy is increased gradually by spikes, and consequently the synapse is STF-dominated (Fig.1C). This phenomenological model successfully reproduces the kinetic dynamics of depressed and facilitated synapses observed in many cortical areas.
Figure 1. (A) The phenomenological model for STP given by Eqs.(1) and (2). (B) The post-synaptic current generated by an STD-dominated synapse. The neuronal firing rate \(R=15\)Hz. The parameters \(A=1\), \(U=0.45\), \(\tau_s=20\)ms, \(\tau_d=750\)ms, and \(\tau_f=50\)ms. (C) The dynamics of a STF-dominating synapse. The parameters \(U=0.15\), \(\tau_f=750\)ms, and \(\tau_d=50\)ms.
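To see how Eqs. (1) and (2) translate into an update rule, here is a minimal sketch (not from the original article) that integrates the model with the Euler method between spikes and applies the jump conditions at spike times; the parameters are those quoted in the caption of Figure 1, converted to seconds:

```python
import numpy as np

def tsodyks_markram(spike_times, A=1.0, U=0.45, tau_d=0.75, tau_f=0.05,
                    tau_s=0.02, dt=1e-4, t_max=1.0):
    """Simulate Eqs. (1)-(2): u (release probability), x (resources), I (current)."""
    n = int(t_max / dt)
    spike_steps = set((np.asarray(spike_times) / dt).round().astype(int).tolist())
    u, x, I = 0.0, 1.0, 0.0
    trace = np.zeros((n, 3))
    for k in range(n):
        # decay / recovery between spikes
        u += dt * (-u / tau_f)
        x += dt * (1.0 - x) / tau_d
        I += dt * (-I / tau_s)
        if k in spike_steps:
            u_plus = u + U * (1.0 - u)   # facilitation jump: u+ = u- + U(1 - u-)
            I += A * u_plus * x          # release: Delta I = A u+ x-
            x -= u_plus * x              # depletion: x+ = x- (1 - u+)
            u = u_plus
        trace[k] = (u, x, I)
    return trace

# regular 15 Hz presynaptic train, STD-dominated parameters as in Fig. 1B
trace_std = tsodyks_markram(np.arange(0.05, 1.0, 1.0 / 15))
# STF-dominated parameters as in Fig. 1C
trace_stf = tsodyks_markram(np.arange(0.05, 1.0, 1.0 / 15), U=0.15, tau_f=0.75, tau_d=0.05)
```

Plotting the third column of each trace qualitatively reproduces the depressing and facilitating current profiles of Fig. 1B,C.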
Effects on information transmission
Because STP modifies synaptic efficacy based on the history of presynaptic activity, it can alter neural information transmission (Abbott 97, Tsodyks 97, Fuhrmann 02, Rotman 11, Rosenbaum 12). In general, an STD-dominated synapse favors information transfer for low firing rates, since high-frequency spikes rapidly deactivate the synapse. An STF-dominated synapse, however, tends to optimize information transfer for high-frequency bursts, which increase the synaptic strength.
Firing-rate-dependent transmission via dynamic synapses can be analyzed by examining the transmission of uncorrelated Poisson spike trains from a large neuronal population with global firing rate \(R(t)\). The time evolution for the postsynaptic current \(I(t)\) can be obtained by averaging Eq. (1) over different realizations of Poisson processes corresponding to different spike trains (Tsodyks 98):
\[\begin{aligned} \frac{du}{dt} & = & -\frac{u}{\tau_f} + U(1-u^-)R(t),\nonumber \\ \frac{dx}{dt} & = & \frac{1-x}{\tau_d}-u^+xR(t), \\ I(t) &= & \tau_s Au^+xR(t), \nonumber \tag{3}\end{aligned} \]
where again \(u^+ = u^- + U(1-u^-)\) and we neglect time scales on the order of the synaptic time constant. For the stationary rate, \(R(t) \equiv R_0\), we obtain
\[\begin{aligned} u^+=u_0 & \equiv & U\frac{1+\tau_fR_0}{1+U\tau_fR_0}, \nonumber \\ x=x_0 & \equiv & \frac{1}{1+u_0\tau_d R_0},\\ I=I_0 & \equiv & \tau_s Au_0x_0 R_0, \nonumber \tag{4} \end{aligned}\]
which is shown in Fig. 2A,B. In particular, for depression-dominated synapses (\(u^+ \approx U\)), the average synaptic efficacy \(E=Au^+x\) decays inversely with the rate, and the stationary synaptic current saturates at the limiting frequency \(\lambda \sim \frac{1}{U\tau_d}\), above which dynamic synapses cannot transmit information about the stationary firing rate (Fig. 2A). On the other hand, facilitating synapses can be tuned for a particular presynaptic rate that depends on STP parameters (Fig. 2B).
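The stationary expressions in Eq. (4) are straightforward to evaluate numerically; the following short sketch (not part of the article) reproduces the qualitative behaviour summarised in Fig. 2A,B:

```python
import numpy as np

def steady_state(R0, U, tau_f, tau_d, tau_s=0.02, A=1.0):
    """Stationary u0, x0 and synaptic current I0 from Eq. (4)."""
    u0 = U * (1 + tau_f * R0) / (1 + U * tau_f * R0)
    x0 = 1.0 / (1 + u0 * tau_d * R0)
    I0 = tau_s * A * u0 * x0 * R0
    return u0, x0, I0

rates = np.linspace(0.1, 100.0, 500)                                  # presynaptic rate (Hz)
u_d, x_d, I_d = steady_state(rates, U=0.45, tau_f=0.05, tau_d=0.75)   # STD-dominated
u_f, x_f, I_f = steady_state(rates, U=0.15, tau_f=0.75, tau_d=0.05)   # STF-dominated
```

For the STD-dominated parameters the efficacy u_d*x_d decays roughly as 1/R and the current I_d saturates at high rates, while for the STF-dominated parameters the efficacy u_f*x_f is maximal at an intermediate rate, as described in the text.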
The above analysis only describes neural population firing with stationary firing rates. Eq. (3) can be used to derive the filtering properties of dynamic synapses when the presynaptic population firing rate changes arbitrarily with time. In Appendix A we present the corresponding calculation for depression-dominated synapses (\(u^+ \approx U\)). By considering small perturbations $R(t):=R_0 + R_1 \rho (t)$ with $R_1\ll R_0$ around the constant rate $R_0>0 $, the Fourier transform of the synaptic current $I$ is approximated by
\( \begin{eqnarray} \widehat{I}(\omega) \approx I_0 \delta(\omega) + \frac{I_0 R_1}{R_0} \widehat{\chi}(\omega) \widehat{\rho}(\omega) \tag{5} \end{eqnarray} \) where we defined the filter \( \begin{eqnarray} \widehat{\chi}(\omega) := 1- \frac{1/x_0 -1}{1/x_0 + j\omega \tau_{d}} = \frac{1+(\tau_{d}\omega)^2x_0+j\omega\tau_{d}(1-x_0)}{1/x_0+(\tau_{d}\omega)^2 x_0}\,, \tag{6} \end{eqnarray} \)
$\widehat{\rho}$ is the Fourier transform of $\rho$, and $I_0$ and $x_0$ are the stationary values of $I$ and $x$, respectively [see Eq. (4) with $u_0 = U$]. The amplitude of the filter \(|\widehat{\chi}(w)|\) is shown in Fig. 2C, illustrating the high-pass filter properties of depressing synapses. In other words, fast changes in presynaptic firing rates are faithfully transmitted to the postsynaptic targets, while slow changes are attenuated by depression.
STP can also regulate information transmission in other ways. For instance, STD may contribute to remove auto-correlation in temporal inputs, since temporally proximal spikes tend to magnify the depression effect and hence reduce the output correlation of the post-synaptic potential (Goldman 02). On the other hand, STF, whose effect is enlarged by temporally proximal spikes, improves the sensitivity of a post-synaptic neuron to temporally correlated inputs (Mejías 08, Bourjaily 12).
By combining STD and STF, neural information transmission could be further improved. For example, by combining STF-dominated excitatory and STD-dominated inhibitory synapses, the detection of high-frequency epochs by a postsynaptic neuron can be enhanced (Klyachko 06). In a postsynaptic neuron receiving both STD-dominated and STF-dominated inputs, the neural response can show both low- and high-pass filtering properties (Fortune 01).
Since STD suppresses synaptic efficacy in a frequency-dependent manner, it has been suggested that STD provides an automatic mechanism to achieve gain control, namely, by assigning high gain to slowly firing afferents and low gain to rapidly firing afferents (Abbott 97, Abbott 04, Cook 03). If a steady presynaptic firing rate \(R\) changes abruptly by an amount \(\Delta R\), the first spike at the new rate will be transmitted with the efficacy \(E\) before the synapse is further depressed. Thus, the transient increase in synaptic input will be proportional to \(\Delta R E(R)\), which is approximately proportional to \(\Delta R/R\) for large rates (see above). This is reminiscent of Weber's law, which states that a transient synaptic response is roughly proportional to the percentage change of the input firing rate. Fig. 2D shows that for a fixed-size rate change \(\Delta R\), the response decreases as a function of the steady input value; whereas without STD, the response would be constant for a fixed-size rate change.
Figure 2. (A) The steady values of the efficacy of an STD-dominated synapse and the postsynaptic currents it generates, measured by \(ux\) and \(uxR\), respectively. The parameters are the same as in Fig.1B. (B) Same as (A) for an STF-dominated synapse. The parameters are the same as in Fig. 1C. (C) The filtering properties of an STD-dominated synapse, measured by \(|\widehat{\chi}(w)|\) [Eq. (6)]. (D) The neural response to an abrupt input change \(\Delta R\) vs. the steady rate value for a STD-dominating synapse. \(\Delta R=5\)Hz. The parameters are the same as in Fig.1B.
Effects on network dynamics
In addition to feedforward and feedback transmission, neural circuits generate recurrent interactions between neurons. With STP included in the recurrent interactions, the network dynamics exhibits many new interesting behaviors that do not arise with purely static synapses. These new dynamical properties could therefore implement STP-mediated network computation.
Prolongation of neural responses to transient inputs
Since STP has a much longer time scale than that of single neuron dynamics (the latter is typically on the order of \(10-20\) milliseconds), a new feature STP can bring to the network dynamics is prolongation of neural responses to a transient input. This stimulus-induced residual activity therefore holds a memory trace of the input, lasting up to several hundred milliseconds in a large-size network, and can serve as a buffer for information processing. For example, it has been shown that STD-mediated residual activity can cause a neural system to discriminate between rhythmic inputs of different periods (Karmarkar 07). STP also plays an important role in a general computation framework called a reservoir network. In this framework, STP, together with other dynamical elements of a large-size network, effectively maps the input features from a low-dimensional space to the high-dimensional state space of the network that includes both active (neural) and hidden (synaptic) components, so that the input information can be more easily read out (Buonomano 09). In a recent development it was proposed that STF-enhanced synapses themselves can hold the memory trace of an input without recruiting persistent firing of neurons, potentially providing the most economical and robust way to implement working memory (Mongillo 08).
Modulation of network responses to external input
Since STP modifies synaptic efficacy instantly, it can modulate the network response to sustained external inputs. An example of this is bursty synchronous firing in an STD-dominated network, either spontaneously or in response to external inputs. The resulting bursts of activity are called population spikes (Loebel 02). To understand this effect, consider a network with strong recurrent interactions between neurons. When a sufficiently large group of neurons fire together, e.g. triggered by external stimulus, they can recruit other neurons via an avalanche-like process. However, after a large synchronous burst of activity, the synapses are weakened by STD, reducing the recurrent currents rapidly, and consequently the network activity returns to baseline. The network will not be activated again until the synapses are sufficiently recovered from depression. Therefore, the rate of population spikes is determined by the time constant of STD (Fig.3A,B). STF can also modulate the network response to external inputs, but in a very different manner (Barak 07). The varied response properties mediated by STP may provide different ways of representing and conveying the stimulus information in a network.
Induction of instability or mobility of network state
Persistent firing, referring to situations in which a group of neurons continue firing without external drive, is widely regarded as a neural substrate for information representation (Fuster 71). To maintain persistent activity in a network, strong excitatory recurrent interactions between neurons are needed to establish a positive-feedback loop sustaining neuronal responses. Mathematically, persistent activity is often modeled as an active stationary state (attractor) of the network. Since STD weakens synaptic efficacy depending on the level of neuronal activity, it can suppress an attractor state. This property, however, can be used to carry out valuable computations.
Consider a network that holds multiple attractor states competing with each other: by destabilizing one of them, STD can cause the network to switch to another attractor state (Torres 07, Katori 11, Igarashi 12). This property has been linked to spontaneous transitions between up and down states of cortical neurons (Holcman 06), to the binocular rivalry phenomenon (Kilpatrick 10), and to enhanced discrimination capacity for superimposed ambiguous inputs (Fung 13). STF can also induce state switching, but this is achieved in an indirect way through facilitating the excitatory synapses to interneurons, with the latter in turn suppressing excitatory neurons (Melamed 08).
The joint effect of STD and STF on the memory capacity of the classical Hopfield model has been investigated (Mejías 09). It was found that STD degrades the memory capacity of the network, but induces a novel computationally desirable property, that is, the network can hop among memory states, which could be useful for memory searching. Interestingly, STF can compensate for the lost memory capacity caused by STD.
Enrichment of attractor dynamics
Continuous Attractor Neural Networks (CANNs), also called neural field models or ring models (Amari 77), have been widely used to describe the encoding of continuous stimuli in the neural system, such as for head-direction, orientation, movement direction, and spatial location of objects. A CANN, due to its translation-invariant recurrent interactions between neurons, holds a continuous family of localized stationary states, called bumps. These stationary states form a subspace on which the network is neutrally stable, enabling the network to track time-varying stimuli smoothly.
With STP included, a CANN displays new interesting dynamical behaviors. One of them is a spontaneous traveling wave phenomenon (York 09, Fung 12, Bressloff 12) (Fig.3C). Consider a network that is initially in a localized bump state. Because of STD, the neural interactions in the bump region are weakened. As a result of competition from neighboring attractor states, a small displacement will push the bump away, and it will continue to move in that direction due to the STD effect. If the network is driven by a continuously moving input, in a proper parameter regime the bump movement can even lead the external drive by a constant time irrespective of the input moving speed, achieving an anticipative behavior that is reminiscent of the predictive responses of head-direction neurons in rodents (Fig.3D; Fung 12).
Figure 3. (A,B) Population spikes generated by a STD-dominating network in response to external excitatory pulses. When the presentation rate of the pulses is low (A), the network responds to each one of them. For higher presentation rate (B), the network only responds to a fraction of the inputs. Adapted from (Loebel 02). (C) The traveling wave generated by STD in a CANN. (D) The anticipative tracking behavior of a CANN with STD.
Appendix A: Derivation of a temporal filter for short-term depression
We consider the rate-based dynamics in Eq. (3) for depression-dominated synapses (\(u^+ \approx U\)) and for synaptic responses that are much faster than the depression dynamics ($\tau_s \ll \tau_d$):
\[ \begin{eqnarray} {\frac{{\rm d} x}{{\rm d}t}}&=&\frac{1-x}{\tau_{d}}-Ux R(t) \tag{7}\\ I(t) &= & \tau_{s} AU x R(t) \tag{8} \,. \end{eqnarray} \]
The aim is to derive a filter $\chi$ that relates the output synaptic current $I$ to the input rate $R$. Note that because the input rate $R$ enters the equations in a multiplicative fashion, the input-output transfer function is nonlinear. Yet a linear filter can be derived by considering small perturbations $R_1 \rho(t)$ of the firing rate $R(t)$ around a constant rate $R_0$, that is, \( R(t):=R_0 + R_1 \rho (t)\, \quad\text{with}\quad R_0,R_1>0 \quad\text{and}\quad R_1\ll R_0 \, . \tag{9} \)
We assume that such small perturbations in $R$ produce small perturbations in the variable $x$ around its steady state value $x_0>0$ \[ x(t) = x_0 + x_1(t)\quad\text{with}\quad x_0 = \frac{1}{1+UR_0\tau_{d}} \quad\text{and}\quad |x_1(t)| \ll x_0 \, . \tag{10} \]
We can now linearize the dynamics of $x(t)$ around the steady-state value $x_0$ by approximating the product
\( \begin{eqnarray} xR &=& (x_0+x_1)(R_0+R_1\rho)\\ &=& x_0 R_0 + x_0 R_1 \rho + x_1 R_0+ x_1 R_1\rho\\ &\approx& x_0 R_0 + x_0 R_1 \rho + x_1 R_0\\ &\approx& R_0 x+ x_0R -x_0 R_0 \tag{11} \end{eqnarray} \)
where in Eq. (11) we dropped the second-order term $x_1 R_1\rho$ because we assumed $R_1\ll R_0$ and $|x_1|\ll x_0$. Plugging Eq. (11) into Eq. (7) yields
\( \begin{eqnarray} {\frac{{\rm d} x}{{\rm d}t}} = \frac{1-x}{\tau_{d}} - U R_0 x - U x_0 R + U x_0 R_0\,.\tag{12} \end{eqnarray} \)
We now take the Fourier transform at both sides of Eq. (12) \( \begin{eqnarray} j\omega \tau_{d} \widehat{x} = -\widehat{x} - U R_0 \tau_{d} \widehat{x} - U x_0 \tau_{d}\widehat{R} + (1+ U R_0 \tau_{d} x_0) \delta(\omega) \tag{13} \end{eqnarray} \) where we defined the Fourier transform pair \( \begin{eqnarray} \widehat{x}(\omega) := \int \!{\rm d}{t}\, x(t) \exp(-j\omega t ) \quad ; \quad x(t) = \frac{1}{2\pi}\int \!{\rm d}\omega\, \widehat{x}(\omega) \exp(j\omega t) \tag{14} \end{eqnarray} \) and $j=\sqrt{-1}$ is the imaginary unit. Solving Eq. (13) for the variable $\widehat{x}$, we find \( \begin{eqnarray} \widehat{x} = -\frac{U\tau_{d}x_0}{1/x_0 + j \omega \tau_{d}} \widehat{R} + x_0 (2-x_0) \delta(\omega) \tag{15} \end{eqnarray} \) where from Eq. (10) we used $U R_0 \tau_{d}=1/x_0 - 1$.
Next, we plug Eq. (11) into Eq. (8) to linearize the dynamics of the synaptic current
\( \begin{eqnarray} I &=& \tau_{s}AU (R_0x+x_0R-x_0R_0)\\ &=& I_0 \left( \frac{x}{x_0}+ \frac{R}{R_0}-1\right) \tag{16} \end{eqnarray} \) around the steady-state value $I_0 = \tau_{s}AU x_0 R_0$.
By taking the Fourier transform at both sides of Eq. (16), using Eq. (15), we obtain \( \begin{eqnarray} \widehat{I} &=& I_0 \frac{\widehat{x}}{x_0} + I_0 \frac{\widehat{R}}{R_0} - I_0 \delta(\omega) \\ &=& \frac{I_0}{R_0} \widehat{\chi} \widehat{R} + I_0(1-x_0) \delta(\omega) \tag{17} \end{eqnarray} \) where we defined the filter \( \begin{eqnarray} \widehat{\chi}(\omega) := 1- \frac{1/x_0 -1}{1/x_0 + j\omega \tau_{d}} = \frac{1+(\tau_{d}\omega)^2x_0+j\omega\tau_{d}(1-x_0)}{1/x_0+(\tau_{d}\omega)^2 x_0}\,. \tag{18} \end{eqnarray} \)
To interpret the result, we plug into Eq. (17) the Fourier transform $\widehat{R}=R_0\delta(\omega)+R_1 \widehat{\rho}$, which yields
\( \begin{eqnarray} \widehat{I}(\omega) = I_0 \delta(\omega) + \frac{I_0 R_1}{R_0} \widehat{\chi}(\omega) \widehat{\rho}(\omega)\,. \tag{19} \end{eqnarray} \)
Finally, the inverse Fourier transform of Eq. (19) reads \( \begin{eqnarray} I(t) = I_0 + \frac{I_0 R_1}{R_0} \int {\rm d}\tau \, \chi(\tau) \rho(t-\tau) \tag{20} \end{eqnarray} \) with \( \begin{eqnarray} \chi(t)=\delta(t) - \frac{1/x_0-1}{\tau_{d}} \begin{cases} \displaystyle {\exp\left(-\frac{t}{x_0\tau_{d}}\right)} & \text{for}\quad t\ge0 \\ 0 & \text{for}\quad t<0 \end{cases}\,. \tag{21} \end{eqnarray} \)
Therefore the output current $I$ is the sum of the steady-state current $I_0$ and the filtered perturbation $\frac{I_0 R_1}{R_0} \int {\rm d}\tau \, \chi(\tau) \rho(t-\tau)$ where $\chi$ is the filter we are interested in.
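As a quick numerical sanity check of this linearisation (not part of the original article; the parameter values below are arbitrary), one can integrate Eqs. (7)–(8) directly for a weakly modulated rate and compare the result with the filtered prediction of Eqs. (20)–(21):

```python
import numpy as np

tau_d, tau_s, A, U = 0.75, 0.02, 1.0, 0.45
R0, R1, f_mod = 20.0, 2.0, 2.0                 # Hz; weak sinusoidal modulation
dt, T = 1e-4, 20.0
t = np.arange(0.0, T, dt)
rho = np.sin(2 * np.pi * f_mod * t)
R = R0 + R1 * rho

# direct Euler integration of Eqs. (7)-(8)
x = np.empty_like(t)
x[0] = 1.0
for k in range(len(t) - 1):
    x[k + 1] = x[k] + dt * ((1 - x[k]) / tau_d - U * x[k] * R[k])
I_direct = tau_s * A * U * x * R

# linear-filter prediction, Eqs. (20)-(21)
x0 = 1.0 / (1 + U * R0 * tau_d)
I0 = tau_s * A * U * x0 * R0
kernel = (1.0 / x0 - 1.0) / tau_d * np.exp(-t / (x0 * tau_d))   # non-delta part of chi(t)
conv = np.convolve(rho, kernel)[:len(t)] * dt
I_linear = I0 + I0 * R1 / R0 * (rho - conv)

# after the initial transient the two agree to first order in R1/R0
print(np.max(np.abs(I_direct[len(t)//2:] - I_linear[len(t)//2:])))
```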
Research Topic: Neural Information Processing with Dynamical Synapses. S. Wu, K. Y. Michael Wong and M. Tsodyks. Frontiers in Computational Neuroscience, 2013 link
Abbott, L. F. et al (1997). Synaptic Depression and Cortical Gain Control. Science. 275(5297): 221-224. doi:10.1126/science.275.5297.221.
Abbott, L. F. and Regehr, Wade G. (2004). Synaptic computation. Nature. 431(7010): 796-803. doi:10.1038/nature03010.
Amari, Shun-ichi (1977). Dynamics of pattern formation in lateral-inhibition type neural fields. Biological Cybernetics. 27(2): 77-87. doi:10.1007/BF00337259.
Barak, Omri and Tsodyks, Misha (2007). Persistent Activity in Neural Networks with Dynamic Synapses. PLoS Computational Biology. 3(2): e35. doi:10.1371/journal.pcbi.0030035.
G. Bi and M. Poo. Synaptic modification by correlated activity: Hebb's postulate revisited. Annu. Rev. Neurosci. 24: 139–66, 2001.
Bourjaily, M. A. and Miller, P. (2012). Dynamic afferent synapses to decision-making networks improve performance in tasks requiring stimulus associations and discriminations. Journal of Neurophysiology. 108(2): 513-527. doi:10.1152/jn.00806.2011.
P. C. Bressloff. Spatiotemporal Dynamics of Continuum Neural Fields J. Phys. A 45, 033001, 2012.
Buonomano, Dean V. and Maass, Wolfgang (2009). State-dependent computations: spatiotemporal processing in cortical networks. Nature Reviews Neuroscience. 10(2): 113-125. doi:10.1038/nrn2558.
Cook, Daniel L.; Schwindt, Peter C.; Grande, Lucinda A. and Spain, William J. (2003). Synaptic depression in the localization of sound. Nature. 421(6918): 66-70. doi:10.1038/nature01248.
J. S. Dittman, A. C. Kreitzer and W. G. Regehr. Interplay between facilitation, depression, and residual calcium at three presynaptic terminals. J. Neurosci. 20: 1374-1385, 2000.
Fortune, Eric S. and Rose, Gary J. (2001). Short-term synaptic plasticity as a temporal filter. Trends in Neurosciences. 24(7): 381-385. doi:10.1016/S0166-2236(00)01835-X.
G. Fuhrmann et al. Coding of Temporal Information by Activity-Dependent Synapses. J. Neurophysiol. 87: 140-148, 2002.
Fung, C. C. Alan; Wong, K. Y. Michael; Wang, He and Wu, Si (2012). Dynamical Synapses Enhance Neural Information Processing: Gracefulness, Accuracy, and Mobility. Neural Computation. 24(5): 1147-1185. doi:10.1162/NECO_a_00269.
C. C. Fung, K. Y. Michael Wong and S. Wu. Delay Compensation with Dynamical Synapses. Advances in Neural Information Processing Systems 16, 2012.
C. C. A. Fung, H. Wang, K. Lam, K. Y. M. Wong and S. Wu. Resolution enhancement in neural networks with dynamical synapses. Front. Comput. Neurosci. 7:73. doi: 10.3389/fncom.2013.00073, 2013.
Fuster, J. M. and Alexander, G. E. (1971). Neuron Activity Related to Short-Term Memory. Science. 173(3997): 652-654. doi:10.1126/science.173.3997.652.
Goldman, Mark S.; Maldonado, Pedro and Abbott, L. F. (2002). Redundancy Reduction and Sustained Firing with Stochastic Depressing Synapses The Journal of Neuroscience 22(2): 584-591.
Holcman, David and Tsodyks, Misha (2006). The Emergence of Up and Down States in Cortical Networks. PLoS Computational Biology. 2(3): e23. doi:10.1371/journal.pcbi.0020023.
Y. Igarashi, M. Oizumi and M. Okada. Theory of correlation in a network with synaptic depression. Physical Review E, 85, 016108, 2012.
Karmarkar, Uma R. and Buonomano, Dean V. (2007). Timing in the Absence of Clocks: Encoding Time in Neural Network States. Neuron. 53(3): 427-438. doi:10.1016/j.neuron.2007.01.006.
Katori, Yuichi et al. (2011). Representational Switching by Dynamical Reorganization of Attractor Structure in a Network Model of the Prefrontal Cortex. PLoS Computational Biology. 7(11): e1002266. doi:10.1371/journal.pcbi.1002266.
Kilpatrick, Zachary P. and Bressloff, Paul C. (2010). Binocular Rivalry in a Competitive Neural Network with Synaptic Depression. SIAM Journal on Applied Dynamical Systems. 9(4): 1303-1347. doi:10.1137/100788872.
Klyachko, Vitaly A. and Stevens, Charles F. (2006). Excitatory and Feed-Forward Inhibitory Hippocampal Synapses Work Synergistically as an Adaptive Filter of Natural Spike Trains. PLoS Biology. 4(7): e207. doi:10.1371/journal.pbio.0040207.
A. Loebel and M. Tsodyks. Computation by ensemble synchronization in recurrent networks with synaptic depression. J. Comput. Neurosci. 13: 111-124, 2002.
Markram, H.; Wang, Y. and Tsodyks, M. (1998). Differential signaling via the same axon of neocortical pyramidal neurons. Proceedings of the National Academy of Sciences. 95(9): 5323-5328. doi:10.1073/pnas.95.9.5323.
Markram, Henry and Tsodyks, Misha (1996). Redistribution of synaptic efficacy between neocortical pyramidal neurons. Nature. 382(6594): 807-810. doi:10.1038/382807a0.
Mejías, Jorge F. and Torres, Joaquín J. (2008). The role of synaptic facilitation in spike coincidence detection. Journal of Computational Neuroscience. 24(2): 222-234. doi:10.1007/s10827-007-0052-8.
Mejías, Jorge F. and Torres, Joaquín J. (2009). Maximum Memory Capacity on Neural Networks with Short-Term Synaptic Depression and Facilitation. Neural Computation. 21(3): 851-871. doi:10.1162/neco.2008.02-08-719.
Melamed, Ofer; Barak, Omri; Silberberg, Gilad; Markram, Henry and Tsodyks, Misha (2008). Slow oscillations in neural networks with facilitating synapses. Journal of Computational Neuroscience. 25(2): 308-316. doi:10.1007/s10827-008-0080-z.
Mongillo, G.; Barak, O. and Tsodyks, M. (2008). Synaptic Theory of Working Memory. Science. 319(5869): 1543-1546. doi:10.1126/science.1150769.
Rosenbaum, Robert; Rubin, Jonathan and Doiron, Brent (2012). Short Term Synaptic Depression Imposes a Frequency Dependent Filter on Synaptic Information Transfer. PLoS Computational Biology. 8(6): e1002557. doi:10.1371/journal.pcbi.1002557.
Rotman, Z.; Deng, P.-Y. and Klyachko, V. A. (2011). Short-Term Plasticity Optimizes Synaptic Information Transmission. Journal of Neuroscience. 31(41): 14800-14809. doi:10.1523/JNEUROSCI.3231-11.2011.
Stevens, Charles F and Wang, Yanyan (1995). Facilitation and depression at single central synapses. Neuron. 14(4): 795-802. doi:10.1016/0896-6273(95)90223-6.
Torres, J. J.; Cortes, J. M.; Marro, J. and Kappen, H. J. (2007). Competition Between Synaptic Depression and Facilitation in Attractor Neural Networks. Neural Computation. 19(10): 2739-2755. doi:10.1162/neco.2007.19.10.2739.
Tsodyks, Misha and Markram, Henry (1997). The neural code between neocortical pyramidal neurons depends on neurotransmitter release probability. Proceedings of the National Academy of Sciences. 94(2): 719-723. doi:10.1073/pnas.94.2.719.
Tsodyks, Misha; Pawelzik, Klaus and Markram, Henry (1998). Neural Networks with Dynamic Synapses. Neural Computation. 10(4): 821-835. doi:10.1162/089976698300017502.
Wang, Yun et al. (2006). Heterogeneity in the pyramidal network of the medial prefrontal cortex. Nature Neuroscience. 9(4): 534-542. doi:10.1038/nn1670.
York, Lawrence Christopher and van Rossum, Mark C. W. (2009). Recurrent networks with short term synaptic depression. Journal of Computational Neuroscience. 27(3): 607-620. doi:10.1007/s10827-009-0172-4.
Zucker, Robert S. and Regehr, Wade G. (2002). Short-Term Synaptic Plasticity. Annual Review of Physiology. 64(1): 355-405. doi:10.1146/annurev.physiol.64.092501.114547
Reviewed by: Prof. Boris Gutkin, (1) Group for Neural Theory, LNC INSERM U960, Département d'Études Cognitives, École Normale Supérieure, Paris, France; (2) Faculty of Psychology, HIgher Shcool of Economics, Moscow, Russia
Reviewed by: Dr. Stefano Fusi, Institute of Neuroinformatics, University of Zurich, Switzerland
Retrieved from "http://www.scholarpedia.org/w/index.php?title=Short-term_synaptic_plasticity&oldid=182521"
"Short-term synaptic plasticity" by Misha Tsodyks and Si Wu is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. Permissions beyond the scope of this license are described in the Terms of Use | CommonCrawl |
Existence of solutions for a class of quasilinear Schrödinger equation with a Kirchhoff-type
Die Hu, Xianhua Tang and Qi Zhang
School of Mathematics and Statistics, HNP-LAMA, Central South University, Changsha, Hunan 410083, China
Received: March 2021; Revised: October 2021; Early access: December 2021
Fund Project: This work is supported by the National Natural Science Foundation of China (No:11971485) and the NSFC (11871475)
In this paper, we discuss the generalized quasilinear Schrödinger equation with Kirchhoff-type:
$\left(1+b\int_{\mathbb{R}^{3}}g^{2}(u)|\nabla u|^{2}\, dx \right) \left[-\mathrm{div} \left(g^{2}(u)\nabla u\right)+g(u)g'(u)|\nabla u|^{2}\right] +V(x)u = f(u), \qquad (\rm P)$
where $ b>0 $ is a parameter, $ g\in \mathbb{C}^{1}(\mathbb{R},\mathbb{R}^{+}) $, $ V\in \mathbb{C}^{1}(\mathbb{R}^3,\mathbb{R}) $ and $ f\in \mathbb{C}(\mathbb{R},\mathbb{R}) $. Under some "Berestycki-Lions type assumptions" on the nonlinearity $ f $, which are almost necessary, we prove that problem $ (\rm P) $ has a nontrivial solution $ \bar{u}\in H^{1}(\mathbb{R}^{3}) $ such that $ \bar{v} = G(\bar{u}) $ is a ground state solution of the following problem
$-\left(1+b\int_{\mathbb{R}^{3}} |\nabla v|^{2}\, dx \right) \triangle v+V(x)\frac{G^{-1}(v)}{g(G^{-1}(v))} = \frac{f(G^{-1}(v))}{g(G^{-1}(v))}, \qquad (\rm \bar{P})$
where $ G(t) := \int_{0}^{t} g(s)\, ds $. We also give a minimax characterization for the ground state solution $ \bar{v} $.
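For orientation, the correspondence between $ (\rm P) $ and $ (\rm \bar{P}) $ comes from the standard dual-approach change of variables $ v = G(u) $, $ G'(u) = g(u) $; the following short sketch (not taken from the paper, and assuming $ g $ is smooth and positive) indicates the computation:
$\nabla v = g(u)\nabla u \quad\Longrightarrow\quad \int_{\mathbb{R}^{3}} |\nabla v|^{2}\, dx = \int_{\mathbb{R}^{3}} g^{2}(u)|\nabla u|^{2}\, dx,$
$-\mathrm{div}\left(g^{2}(u)\nabla u\right)+g(u)g'(u)|\nabla u|^{2} = -g^{2}(u)\Delta u - g(u)g'(u)|\nabla u|^{2} = -g(u)\,\Delta v,$
since $ \Delta v = \mathrm{div}\left(g(u)\nabla u\right) = g(u)\Delta u + g'(u)|\nabla u|^{2} $. Substituting into $ (\rm P) $ and dividing by $ g(u) = g(G^{-1}(v)) $ formally yields $ (\rm \bar{P}) $.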
Keywords: Quasilinear Schrödinger equation, Kirchhoff-type, ground state solution.
Mathematics Subject Classification: Primary: 35J50, 35J62; Secondary: 35J20.
Citation: Die Hu, Xianhua Tang, Qi Zhang. Existence of solutions for a class of quasilinear Schrödinger equation with a Kirchhoff-type. Communications on Pure & Applied Analysis, doi: 10.3934/cpaa.2022010
Yu Su. Ground state solution of critical Schrödinger equation with singular potential. Communications on Pure & Applied Analysis, 2021, 20 (10) : 3347-3371. doi: 10.3934/cpaa.2021108
Marco A. S. Souto, Sérgio H. M. Soares. Ground state solutions for quasilinear stationary Schrödinger equations with critical growth. Communications on Pure & Applied Analysis, 2013, 12 (1) : 99-116. doi: 10.3934/cpaa.2013.12.99
Yongpeng Chen, Yuxia Guo, Zhongwei Tang. Concentration of ground state solutions for quasilinear Schrödinger systems with critical exponents. Communications on Pure & Applied Analysis, 2019, 18 (5) : 2693-2715. doi: 10.3934/cpaa.2019120
Quanqing Li, Kaimin Teng, Xian Wu. Ground states for Kirchhoff-type equations with critical growth. Communications on Pure & Applied Analysis, 2018, 17 (6) : 2623-2638. doi: 10.3934/cpaa.2018124
Maoding Zhen, Binlin Zhang, Xiumei Han. A new approach to get solutions for Kirchhoff-type fractional Schrödinger systems involving critical exponents. Discrete & Continuous Dynamical Systems - B, 2021 doi: 10.3934/dcdsb.2021115
Jiu Liu, Jia-Feng Liao, Chun-Lei Tang. Positive solution for the Kirchhoff-type equations involving general subcritical growth. Communications on Pure & Applied Analysis, 2016, 15 (2) : 445-455. doi: 10.3934/cpaa.2016.15.445
Chungen Liu, Huabo Zhang. Ground state and nodal solutions for fractional Schrödinger-Maxwell-Kirchhoff systems with pure critical growth nonlinearity. Communications on Pure & Applied Analysis, 2021, 20 (2) : 817-834. doi: 10.3934/cpaa.2020292
Jianhua Chen, Xianhua Tang, Bitao Cheng. Existence of ground state solutions for a class of quasilinear Schrödinger equations with general critical nonlinearity. Communications on Pure & Applied Analysis, 2019, 18 (1) : 493-517. doi: 10.3934/cpaa.2019025
Yanfang Xue, Chunlei Tang. Ground state solutions for asymptotically periodic quasilinear Schrödinger equations with critical growth. Communications on Pure & Applied Analysis, 2018, 17 (3) : 1121-1145. doi: 10.3934/cpaa.2018054
Xiang-Dong Fang. A positive solution for an asymptotically cubic quasilinear Schrödinger equation. Communications on Pure & Applied Analysis, 2019, 18 (1) : 51-64. doi: 10.3934/cpaa.2019004
Norihisa Ikoma. Existence of ground state solutions to the nonlinear Kirchhoff type equations with potentials. Discrete & Continuous Dynamical Systems, 2015, 35 (3) : 943-966. doi: 10.3934/dcds.2015.35.943
Xiao-Jing Zhong, Chun-Lei Tang. The existence and nonexistence results of ground state nodal solutions for a Kirchhoff type problem. Communications on Pure & Applied Analysis, 2017, 16 (2) : 611-628. doi: 10.3934/cpaa.2017030
Daniele Garrisi, Vladimir Georgiev. Orbital stability and uniqueness of the ground state for the non-linear Schrödinger equation in dimension one. Discrete & Continuous Dynamical Systems, 2017, 37 (8) : 4309-4328. doi: 10.3934/dcds.2017184
Xiaoyan Lin, Yubo He, Xianhua Tang. Existence and asymptotic behavior of ground state solutions for asymptotically linear Schrödinger equation with inverse square potential. Communications on Pure & Applied Analysis, 2019, 18 (3) : 1547-1565. doi: 10.3934/cpaa.2019074
Jincai Kang, Chunlei Tang. Ground state radial sign-changing solutions for a gauged nonlinear Schrödinger equation involving critical growth. Communications on Pure & Applied Analysis, 2020, 19 (11) : 5239-5252. doi: 10.3934/cpaa.2020235
Kenji Nakanishi, Tristan Roy. Global dynamics above the ground state for the energy-critical Schrödinger equation with radial data. Communications on Pure & Applied Analysis, 2016, 15 (6) : 2023-2058. doi: 10.3934/cpaa.2016026
Chenmin Sun, Hua Wang, Xiaohua Yao, Jiqiang Zheng. Scattering below ground state of focusing fractional nonlinear Schrödinger equation with radial data. Discrete & Continuous Dynamical Systems, 2018, 38 (4) : 2207-2228. doi: 10.3934/dcds.2018091
Die Hu Xianhua Tang Qi Zhang | CommonCrawl |
Growth of alpine grassland will start and stop earlier under climate warming
Patrick Möhl (ORCID: 0000-0002-5058-8135), Raphael S. von Büren (ORCID: 0000-0002-9129-642X) & Erika Hiltbrunner (ORCID: 0000-0002-0704-0780)
Subjects: Ecophysiology, Grassland ecology, Plant ecology
Alpine plants have evolved a tight seasonal cycle of growth and senescence to cope with a short growing season. The potential growing season length (GSL) is increasing because of climate warming, possibly prolonging plant growth above- and belowground. We tested whether growth dynamics in typical alpine grassland are altered when the natural GSL (2–3 months) is experimentally advanced and thus, prolonged by 2–4 months. Additional summer months did not extend the growing period, as canopy browning started 34–41 days after the start of the season, even when GSL was more than doubled. Less than 10% of roots were produced during the added months, suggesting that root growth was as conservative as leaf growth. Few species showed a weak second greening under prolonged GSL, but not the dominant sedge. A longer growing season under future climate may therefore not extend growth in this widespread alpine community, but will foster species that follow a less strict phenology.
In extratropical alpine environments, low temperature confines the growing season to 6–12 weeks1, forcing high-elevation plants to complete their annual developmental cycle within a short time. Yet, the duration of the growing season has increased considerably over the past decades due to above-average warming in mountain regions2,3, which has led to advanced snowmelt4,5. By the end of the century, snowmelt is expected to occur up to one month earlier in the Swiss Alps5 and autumn warming may further prolong the growing season length (GSL). Early release from snow cover commonly advances flowering phenology in many alpine species6,7, but less is known about how a longer growing season affects the temporal dynamics of growth and senescence8,9.
Remote-sensing studies highlighted that the greening of alpine plants tracks snowmelt within the current interannual variation10,11. When alpine vegetation responds to advanced snowmelt by growing earlier, the onset of senescence will determine how effectively the season is used for growth and resource acquisition12. However, leaf browning and senescence have received less attention in ecological studies than greening and growth13, and it is unclear how an early season start affects the onset of senescence in alpine grasslands. Early senescence in early starters may attenuate any growth-related effects in alpine and arctic vegetation14,15,16. And, if present, species-specific differences in the capability to delay senescence under favourable conditions may shape community composition in future.
Aboveground growth and tissue maintenance commonly stop early to prepare alpine plants for winter, while roots are better screened from first frost events in autumn and could therefore continue growing. Roughly two-thirds of the world's grassland biomass is belowground17, and that fraction approaches 80–90% in arctic and alpine regions1,18. Despite the importance of roots and the potential divergence between root and leaf phenology19,20, there is a lack of studies that explore the temporal dynamics of root growth in alpine grassland21. Unlike leaves, roots are hidden from remote sensing. Hence, our understanding of belowground processes relies entirely on local observations. Mini-rhizotrons are easily installed windows to examine root growth22, but processing the acquired images used to be extremely labour-intensive. Recently, machine learning algorithms have been developed that automatically distinguish between roots and soil in images23, allowing large datasets to be analyzed. Observations with high spatial or temporal resolution are needed to understand how above- and belowground phenology is linked24,25. This is crucial for understanding current states and predicting changes in alpine vegetation under climate warming.
Here, we assessed whether alpine grassland is capable of extending growth and maintaining green tissues when subjected to a significantly longer GSL. We experimentally advanced the growing season by exposing monoliths of typical alpine grassland (Caricetum curvulae, Fig. 1) to typical summer conditions in climate chambers—two to four months before the actual growing season started. We combined repeated censuses of above- and belowground growth parameters throughout the prolonged season and quantified leaf growth in additional field microsites with varying snowmelt timing. We hypothesize that (1) the start and rate of growth are tracking the provided temperature conditions. We assume that (2) the onset of aboveground senescence depends on season start and plant species. Further, (3) we expect root growth to continue as long as soil temperatures are high enough. By combining new methods to analyze root phenology with robust aboveground measurements, our study offers insights into the controls of seasonal growth in alpine plant species.
Fig. 1: Overview of the experimental setup.
A Scheme of a monolith with natural vegetation and its original soil, equipped with a transparent rhizotron tube to scan root growth. Roots grow along the tube surface (see insert below). B The dominant species Carex curvula. Photo: C. Körner. C Elongation and browning of a single Carex leaf in the course of a growing season. D Monoliths exposed to premature (+4 m, +2 m) summer conditions in climate chambers. E Monoliths at the alpine site during actual summer (July); note the advanced browning compared to the surrounding vegetation.
Aboveground growth
We experimentally initiated the growing season in climate chambers, 70 and 134 days (termed '+2 m' and '+4 m', respectively) before the in-situ growing season started (Fig. 1, Table 1). Plants experienced similar environmental conditions in the climate chambers as in the field during summer (Fig. 2A), albeit with fixed diurnal conditions (see Methods section). Mean soil temperature during the first 50 days of the season amounted to 10.2 ± 0.1 °C in +4 m, 11.0 ± 0.1 °C in +2 m, and 10.7 ± 0.1 °C and 11.1 ± 0.1 °C in field plots of 2020 and 2021. Snowmelt in the field plots occurred around 3–4 weeks later in 2021 than in 2020 (which had an earlier season start than usual). In both monolith groups and the field plots, leaf elongation of the dominant sedge Carex curvula All. s.str. (Carex hereafter) started right after the release from winter dormancy with exposure to temperatures >5 °C (Fig. 2B). It peaked after 44 d in field plots (mean of 2020/2021) and continued 9.3 ± 2.3 d longer in +4 m and +2 m (t22 = 4.1, P < 0.001), a brief extension only, given the substantial increase in GSL (Fig. 3). Peak leaf length averaged 9.4 ± 0.4 cm and was not affected by GSL (F3 = 0.1, P = 0.93). Similar to leaf length, canopy greenness (assessed from photographs) increased right after the start of the season and peaked after 39 d in +4 m and field plots (no difference), but already after 34 d in +2 m (−4.5 ± 1.3 d, t18 = 3.4, P = 0.002, Fig. 2C). Hence, peak canopy greenness was clearly not reached later when plants were exposed to earlier summer conditions.
Table 1 Characteristics of each experimental group (+4 m, +2 m, field plots) and microsites
Fig. 2: Impact of growing season length (GSL) on the timing of growth and senescence.
Soil temperature and growth parameters with different growing season length, experimentally advanced in climate chambers (+4 m, +2 m, in 2021) and compared to field plots (2020, 2021). Day of the year is specified for the first day of each month below the x axis of A. GSL is indicated for each group at the top of A (dotted line during snowmelt). All growth data were scaled to 0–100% to ease comparison. A Daily mean soil temperature at 3–4 cm depth, close to the plants' meristems. B Green leaf length of Carex curvula. C Canopy greenness of the whole plant community (2021). Dashed, vertical lines show the mean date for the peak. D Seasonal gain in root area per unit image area (mm2 cm−2, scaled to percent). Points indicate raw data and lines are GAM smoothers in B–D (lines: mean, error band: 95% confidence interval).
Fig. 3: Timepoints related to growth and senescence for different growing season lengths (GSL).
Peak green leaf length and senescence down to 50% browning for the dominant species Carex curvula, peak canopy greenness of the entire community and its decline to 50%, and the onset of growth, highest growth rate, and 50% and 80% of seasonal growth for roots. GSL amounted to 238 d (+4 m), 174 d (+2 m), 109 d (field 2020) and 103 d (field 2021). Grey points show data for each monolith and field plot (eight monoliths for +4 m and +2 m, five field plots); colored points refer to mean ± SE (SEs smaller than the points are not visible).
Leaves of Carex brown from the tip towards the base (Fig. 1C), such that the remaining (decreasing) green leaf length reflects the progression of senescence. The time between peak leaf length of Carex and 50% leaf browning was 45 d in field plots and 11.7 ± 3.0 d longer in monoliths (t22 = 3.9, P < 0.001) with no difference between +4 m and +2 m. However, this difference was largely due to field plots in 2021, when browning took only 37 d compared to 52 d in 2020 (t22 = 3.2, P < 0.01, Fig. 3). Canopy greenness faded from 100% to 50% within 33 d, independent of GSL (F2 = 1.5, P = 0.24, Fig. 3). But unlike the monotonic leaf browning of Carex, the decline in canopy greenness of the entire community was partly reversible, and greenness temporarily increased again by 11% in +2 m and 36% in +4 m later in the season (Fig. 2C). Although greenness peaked early and remained low thereafter, these low values accumulated over the much longer season to a 49 ± 8.3% higher integrated greenness (area under the curve) in monoliths than in field plots (t18 = 5.2, P < 0.001; no difference between +4 m and +2 m).
Root growth
We observed root growth as increases in root area using mini-rhizotron tubes (Fig. 1A) and found that root growth started ca. 11 days after the onset of growing conditions in climate chambers (Fig. 3). Field plots of 2020 showed a similar delay to the monoliths (8 days), but roots started 5.4 d earlier in the field in 2021 compared to monoliths (t21 = 2.4, P = 0.037). The majority of roots were produced within ca. two months after the start of the season: 80% of root growth was reached after 56 d in the field and after 73 d in +2 m and +4 m (Fig. 2D, Fig. 3). After that, root growth continued at a low rate, while +4 m even lost ca. 20% of its root area in the second half of the season (Fig. 2D). Thus, the experimentally added 134 d did not translate into sustained root growth in +4 m, and only 10% of root growth resulted from the additional 70 d in +2 m. Maximum increment rates were reached after 30–41 d, coinciding with peak canopy greenness (Fig. 2, Fig. 3). The total seasonal gain in root area was similar in all groups in 2021 (14–17 mm2 cm−2), but significantly higher in the 2020 field plots (28 mm2 cm−2; t21 = 4.7, P < 0.001). This is presumably related to the time since tube installation (more unrooted space), as rooting had not yet reached a steady state. Overall, root diameters did not exceed 2.1 mm and averaged 0.21 mm.
Green cover and species-specific vigour index
Total green plant cover decreased from ~65% during mid-season to <15% at the end of the season (Table 2, Supplementary Table 1). While green cover of all species was lower at the end of the season, some species lost more greenness compared to others. Carex was the dominant species during mid-season (28–37%), but made up only 1.4–12.7% of total green cover at the end of the season. Leaves of Ligusticum entirely disappeared within ca. 3 months, reducing green cover to zero. Green cover of Anthoxanthum, Leontodon, and Potentilla decreased to a similar degree as total green plant cover, leaving their relative contribution unchanged. In contrast, Helictotrichon and Soldanella constituted a 7% bigger fraction of the remaining green cover at the end of the season than during the mid-season (Table 2). Photosynthetic vigour index values (see Methods, Eq. 1) declined by 38–100% towards the end of the season in all species, except for the grass Helictotrichon (−24 ± 13%, t9 = 1.8, P = 0.16) and the forb Soldanella (−8 ± 15%, t14 = 0.5, P = 0.62; Fig. 4, Supplementary Table 2).
Table 2 Total green cover mid-season and at the end of the season and the contribution of the most abundant species (mean ± SE)
Fig. 4: Maintenance of photosynthetically active tissue in the seven most abundant species over the season.
Species-specific photosynthetic vigour index (mean ± SE) was calculated from number, size, green area, and chlorophyll content of leaves, in monoliths (+2 m, +4 m) and field plots. Data are scaled to percent of the maximum per species and group. Values were assessed for the same 1–3 individuals per experimental unit (8 monoliths for +4 m, +2 m, and 5 field plots) across the season. Arrows on the right side highlight the difference between the maximum and the last value of the season within the corresponding group. Asterisks indicate P < 0.05 (two-sided t tests, detailed statistics in Supplementary Table 2). Full species names are in Table 2. Illustrations provided by Oliver Tackenberg.
Temperature effects in the field
Due to low snow load and heavy storms in winter, snowmelt occurred exceptionally early at wind-exposed microsites in 2020. This led to substantial differences in snowmelt date between the 24 microsites (40 × 40 cm), where we monitored leaf elongation and browning in Carex (Table 1). Across microsites, leaf elongation until peak leaf length took longer under earlier snowmelt (F2.2 = 236.8, P < 0.001, Fig. 5A, Supplementary Table 3). As a consequence, the variation in snowmelt timing was considerably larger (103 days) than the resulting variation in the date of peak leaf length, which encompassed 31 days only. This variation in the leaf elongation period could be explained to 92% by soil temperature close to plants' meristems (F2.7 = 79.1, P < 0.001, Fig. 5B), with faster elongation rates under warmer conditions (F3.4 = 17.1, P < 0.001, Fig. 5C). Nevertheless, peak leaf length (and the onset of browning) was reached 0.21 ± 0.04 days earlier per day of earlier snowmelt (F1 = 36.5, P < 0.001, R2 = 0.61). Leaf browning to 50% of maximum green length took 26 days and was independent of the date of peak leaf length (F1 = 3.2, P = 0.90) and soil temperature (F1.6 = 1.0, P = 0.40, Fig. 5C, Supplementary Figure 1). In contrast to experimental groups, maximum green leaf length varied across microsites but was not affected by snowmelt date or soil temperature (Supplementary Table 3).
Fig. 5: Duration and rates of growth and senescence in microsites (2020).
Leaf elongation (green) and browning (orange) duration of Carex curvula related to A the onset of the respective period (n = 24 microsites for elongation and 20 for browning) and B to mean soil/meristem temperature (n = 23 for elongation and 20 for browning). C Daily rates of elongation and browning (negative) in relation to soil temperature (n = 43 measurement intervals for elongation and 22 for browning). D Exemplary data from one microsite illustrate how values in A–C were derived: elongation and browning period to 50% for A and B; rates (r1–3) for C, calculated for individual measurement intervals (mean ± SE, n = 5 leaves). Temperature was averaged over the corresponding periods. Smoothed curves (lines: mean, error band: 95% confidence interval) and variance explained (%) of smoothers are indicated only when smoothing terms were significant (F tests, P < 0.05, detailed statistics in Supplementary Table 3). DOY = day of year.
We advanced the start of the alpine growing season and thus pushed its total length to extremes: our experiment more than doubled the available time for seasonal plant development and revealed an overarching autonomous control over growth and senescence. Whether the season was prolonged by two or four months, typical alpine summer conditions always initiated plant growth without major delay. However, early onset of growth was accompanied by early onset of senescence, halting above- and belowground plant growth even under ongoing, favourable summer conditions. Therefore, our findings challenge the widely assumed rise in future productivity as the thermal growing season lengthens due to climate warming.
A close correlation between snowmelt and the onset of leaf greening and elongation has previously been observed in alpine26,27,28 and arctic vegetation29,30. While climatic conditions for arctic and alpine plants differ in important aspects such as solar angle, photoperiod, precipitation and frost regime, they also share important similarities such as the short GSL31. The tight link between the start of growing conditions and actual growth substantiates that seasonally snow-covered plants leave endodormancy far ahead of actual snowmelt.
Nevertheless, it was speculated that an unusually short photoperiod may prevent growth in early spring31. But in contrast to flowering6,7,32, there is little evidence that vegetative growth of alpine plants is delayed by photoperiod in spring. We observed normal growth rates with a day-length of 14.5 h (1–1.5 months ahead of the natural season start) and previously even initiated typical spring growth using an 11.5-h photoperiod for the same vegetation type (unpublished data). A study across ca. 25 alpine sites and 17 years found no indication that photoperiod influenced leaf elongation after snowmelt28. Besides its signalling effect, a short photoperiod also entails lower photon fluxes, possibly limiting carbon uptake. However, perennial alpine plants have large belowground reserves33 and are not carbon-limited34, even under shade35.
Following snowmelt, temperature directly influenced the rates of leaf expansion and growth and thus, affected the time needed to reach peak leaf lengths (or maximum canopy greenness) and to enter leaf senescence. A correlation between leaf growth and temperature is well established from physiological studies in various plant species (e.g.,36,37), including alpine ones38,39. Low ambient temperatures are typical when snow melts earlier in the year, prolonging the required time to complete leaf elongation. Consequently, one day advance in snowmelt was associated with only 0.2 days earlier peak leaf length in our microsite survey. This is similar to observations from an interannual remote sensing study in the Swiss Alps, where peak NDVI of alpine grassland shifted by 0.5 days per day of earlier snowmelt10. In our experiment, leaf elongation did not take substantially longer in monoliths than in field plots, despite extremely advanced season start, most likely due to similar temperature after snowmelt. Hence, warmer spring temperatures under earlier snowmelt will enhance elongation rates until peak leaf lengths and advance the onset of senescence.
Given that senescence started after a similar timeframe in field plots and monoliths, the latter experienced a comparably long period with already senescing leaves. Moreover, the speed of leaf browning in Carex was 25% slower in monoliths compared to field plots. As leaf browning was equally slow between the two monolith groups, we do not anticipate that this difference between monoliths and field resulted from earlier snowmelt. Perhaps the maintained photoperiod or more stable temperature conditions could cause slower leaf browning. Temperature was not related to the speed of browning in our microsite survey, but a meta-analysis across 18 alpine and arctic sites of the International Tundra Experiment found that warming of 0.5–2.3 K significantly delayed leaf senescence by ca. 1 day9—a minor delay in relation to the projected advance in snowmelt5.
It seems that numerous alpine plants evolved conservative controls over senescence to guarantee completion of the seasonal development cycle within the short growing season14,40,41. To some degree, this is reflected in the annual biomass production: there is cumulative evidence that peak photosynthetic biomass (proxies like peak standing biomass, canopy height, or NDVI) of alpine grassland is independent of GSL and conserved across seasons10,27,42,43,44. We found that Carex reached the same maximum leaf lengths in all three experimental groups (+2 m, +4 m, and field plots). Apparently, seasonal biomass gain is shaped by other factors than GSL, such as temperature, water, and nutrient availability1,45.
Similar to leaves, root growth was initiated by the onset of growing conditions but postponed by several days. We assume that roots depend on aboveground signals to initiate growth, most likely mediated by hormones, such as auxin produced in young leaves46. A delay between the onset of above- and belowground growth was also observed in different arctic plant communities, where leaves always started growing prior to roots19,29,47. Delayed root growth in arctic regions could be a consequence of more prevalent soil frost that takes longer to melt—especially under lower solar angles. At least in alpine species, roots grew substantially less below 3–5 °C and ceased to grow in the range of 0.8–1.4 °C39,48.
In contrast to our hypothesis, root growth was not stimulated by extended summer conditions. After the initial growing phase of ca. 3 months, we found either no root growth or growth at only a minute rate. Thus, both above- and belowground phenology were mostly completed after a duration that corresponds to a natural growing season. It seems that root growth stops once aboveground demands for nutrients and water decline. Alternatively, root growth may be internally controlled, following phenological controls similar to those observed in leaves49.
Compared to leaves, root senescence is difficult to document and requires chemical or molecular tools50,51. Color-changes such as browning in leaves are not a specific characteristic of senescing roots. Also, the visual distinction between dead and living roots is error-prone. Therefore, only roots that started to structurally disintegrate were considered dead, which was true for 0.3% of root area in the manually annotated mini-rhizotron images (see Methods). Such a low number of dead roots two years after the installation of the rhizotron tubes matches the commonly low root turnover rates of several years in alpine grassland1,45. Even fine roots may reach a substantial age of up to 15 years, as determined by mean residence time of carbon52, although carbon in roots may be older than the roots themselves53,54.
While all species responded opportunistically to a variable start of the season, most species were senescent during the long favourable second half of the season. The grass Helictotrichon and the snowbed plant Soldanella maintained high photosynthetic vigour index and made up a bigger fraction of the remaining green cover at end- compared to mid-season, indicating that these species could benefit from a longer season in terms of assimilation. In contrast, senescence of the dominant Carex progressed fast and deterministically. In the long run, species with such a conservative phenology may become outcompeted when a longer GSL 'opens' a window for additional growth during late season16. Yet, a 32-year monitoring study of the same grassland type reported only very small changes in species composition over time, despite climate warming and a probable increase in GSL55. The authors attributed this manifest stability of species composition to a lack of unoccupied sites in this densely rooted, late-successional grassland. Moreover, clonal proliferation is the rule in alpine grasslands and alpine species can be extremely persistent. In fact, individual clones of Carex curvula were found to live up to 5000 years56. Thus, species composition may remain stable for the coming decades or even centuries.
Our results provide experimental evidence that early snowmelt due to climate warming will trigger early senescence in this alpine vegetation type, both above- and belowground. Therefore, growth and carbon uptake do not scale with growing season length but strongly depend on internal controls that reflect an evolutionary adjustment to a short growing season. It came as a surprise that a 2–4 months earlier start resulted in a long period of senescent and brown vegetation during the second half of the growing season. This may lead to mismatches with soil microbial activities and therefore, with the nutrient cycle. Such a conservative control over seasonal development will constrain adjustments to the current pace of environmental changes, and in the longer term, promote species with a more flexible timing of growth and senescence.
The study was conducted on a Caricetum curvulae Br.-Bl., which is the most common alpine grassland community on acidic soils in the Alps57. This grassland is widespread in European alpine environments58 and shares traits with alpine sedge mats around the world (e.g., Kobresia grassland on the Tibetan Plateau), having a similar growth form, short stature, and persisting predominantly clonally. The sedge Carex curvula (Fig. 1B) is the dominant species, contributing around one third to total annual biomass production35,59. Grasses like Helictotrichon versicolor Vill. and forbs such as Potentilla aurea L. and Leontodon helveticus Mérat were also very abundant (Table 2). Leaves of Carex occur in tillers of 2–5 leaves that originate from belowground meristems. Every year, 1–2 (rarely 3) new leaves are formed that re-sprout in the following 2–3 years and then die off59. Growth and leaf elongation start rapidly after snowmelt (usually late June to early July) and reach a maximum before leaf senescence materializes as progressive browning from the leaf tip towards the base (Fig. 1C). By the end of season, the length of the green leaf part is reduced to 0.5–1.5 cm.
Setup of the climate chamber experiment
In July 2019, we excavated 16 circular patches of homogenous vegetation (28 cm diameter, Fig. 1A) to a soil depth of ca. 22 cm, referred to as monoliths. They were collected in the vicinity of the ALPFOR research station at 2440 m a.s.l. in the Swiss Alps (46.577°N, 8.421°E) and fit into buckets with a perforated bottom to allow water to seep through (Fig. 1A). Soil and root systems of the monoliths were not further disrupted during that process. A transparent, acrylic rhizotron tube (inner diameter: 5.0 cm; outer: 5.6 cm) was installed in every monolith, protruding from the soil by ~15 cm (wrapped in a layer of black and white tape to block light and reduce heat absorption) and tilted at an angle of 35–45° to the surface (Fig. 1A). The lower opening of the tubes (in soil) was sealed with a rubber plug and the upper opening (outside of the soil) with a removable plastic cap. Polyethylene foam insulated the inside of the tubes.
During three summers, 2019–2021, the monoliths remained in sand beds in the natural, alpine environment next to the weather station of ALPFOR (Fig. 1E, www.alpfor.ch/weather.shtml). During alpine winter, monoliths were accessibly stored in a cold building at 1600 m elevation where monoliths were buffered from temperature fluctuations and screened from frost (Supplementary Figure 2). Monoliths were covered with cotton blankets and wooden boards to insulate plants, simulate snow pressure, and ensure complete darkness. This allowed a seamless transition to climate chambers before the start of the experiment, without exposing monoliths to freezing temperatures or sunlight. Monoliths had mean soil temperatures of 4.5 °C in the 2019/2020 winter and 3.5 °C in 2020/2021 (Oct–Feb, 3–4 cm soil depth, 3 HOBO TidBits, Onset Computer Corp., USA). In-situ, snow-covered soils rarely freeze due to the insulation by the snow pack and usually reach temperatures of around 0 °C. We do not expect that the slightly warmer soil affected temporal dynamics of plant growth, as roots and aboveground tissues remained visually dormant prior to the experiment. During a pilot study in April 2020, we exposed the monoliths to earlier summer conditions in climate chambers, but roots around the rhizotron tubes were not yet sufficiently established to permit root monitoring. Therefore, we postponed the experiment to 2021. Plants were moved to the climate chambers in February 2021, blankets still in place, and stored in the dark at 0 °C until the experiment started.
The 16 monoliths were equally distributed between two walk-in climate chambers (195 × 130 × 200 cm, L × W × H), in which temperature, light, humidity, and air circulation were controlled (Fig. 1D, phytotron facility60, University of Basel). Light was provided by 18 LED modules per chamber, comprising four separately dimmable light channels (blue [B], green, red [R], infrared [IR]; prototypes by DHL-Light, Hannover, GER). We took care to reach B:R ratios of natural sunlight on a bright day (ca. 0.861) and set an R:FR ratio of ca. 1.4, which is above the range that characterizes vegetation shade. For summer conditions, photoperiod was set to 14.5 h, corresponding to early May in the central Alps (1–1.5 months prior to natural snowmelt), of which 12 h were at maximum light intensity (photon flux density of ca. 1000 μmol m−2 s−1, Supplementary Figure 3). We set temperatures between 5 °C (night) and 14 °C (day) and logged soil temperature at 3–4 cm depth hourly throughout the experiment in six buckets per chamber (iButton DS1922L, Maxim Integrated Products Inc., USA).
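For reference, the chamber 'summer' program described above can be summarized as a small configuration object. This is purely illustrative: the values follow the text, but the names and structure are our assumptions, and the real phytotron controller interface is not shown.

```python
# Illustrative summary of the climate-chamber "summer" settings (values from the text).
SUMMER_SETTINGS = {
    "photoperiod_h": 14.5,        # early-May day length in the central Alps
    "hours_at_full_light": 12,    # hours per day at maximum light intensity
    "ppfd_umol_m2_s": 1000,       # approx. photon flux density at full light
    "temp_day_c": 14,
    "temp_night_c": 5,
    "red_to_far_red_ratio": 1.4,  # above the vegetation-shade range
}
```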
Monoliths in the first chamber (termed '+4 m') were exposed to alpine summer conditions on 18 February 2021, ~4 months before the in-situ start of the growing season. The second chamber remained dark at 0 °C until 23 April 2021, when the same summer settings were applied ('+2 m' group). Monoliths were watered twice a week with 0.8 L of deionized water per monolith. On 5 July 2021, all monoliths were transported to the alpine research site, experiencing natural growth conditions for the rest of the season. As a comparison, we studied five (untreated) plots of an already existing field experiment during two seasons (years 2020 and 2021), located at the same elevation 3 km away from the origin of the monoliths7. Each of these plots contained two rhizotron tubes within close proximity (30–40 cm apart; installed in July 2019). These in-situ plots became snow-free mid-June to early July and underwent natural growing seasons. As in +4 m and +2 m, soil temperature at 3-4 cm depth was logged once per hour in each field plot (HOBO TidBit, Onset Computer Corp., USA).
Aboveground plant traits
For Carex, aboveground growth and senescence were assessed by measuring green leaf length from the soil surface to the narrow zone of incipient browning (similar to27). Each time, 5–10 leaves were randomly selected among the longest leaves. In +4 m, +2 m, and field plots, we measured 6–10, and in field microsites 5 leaves. To monitor the aboveground development of the entire community, we photographed the vegetation every 3–6 weeks in 2021 (DSLR D800, Nikon Corporation, JPN). From these images, we calculated canopy greenness to track temporal variation in plant phenology62: canopy greenness = G/(R + G + B), where R, G, and B represent the red, green, and blue channel, respectively. For leaf lengths and canopy greenness, the period of growth was defined as the time from the onset of summer conditions until the peak (100%) was reached. Senescence was defined as the period from the peak to 50% of leaf browning.
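The greenness metric above can be reproduced in a few lines of Python. This is a minimal sketch, assuming Pillow and NumPy are available and that a plot photograph is loaded as an RGB array; the file name and the choice to average the green chromatic coordinate over all pixels without masking are our assumptions, not part of the published workflow.

```python
import numpy as np
from PIL import Image

def canopy_greenness(path):
    """Mean green chromatic coordinate G/(R+G+B) over an RGB image."""
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    total = np.maximum(r + g + b, 1e-9)  # avoid division by zero on black pixels
    return float((g / total).mean())

# example (hypothetical file name):
# greenness = canopy_greenness("monolith_plot01_2021-06-15.jpg")
```

Normalizing the green channel by the pixel sum makes the metric largely insensitive to overall brightness differences between photo dates.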
We obtained a proxy for the photosynthetically active leaf area of seven species (Table 2). Three individuals (in the case of graminoids: tillers) per monolith and plot were marked at the start of the growing season in 2021. Every 2–5 weeks, we assessed the number of intact leaves and the length of the longest leaf for each individual. Also, we estimated the fraction of brown leaf area compared to the total leaf area and measured leaf chlorophyll content by fluorescence ratio (emission ratio of intensity at 735 nm/700 nm) in the biggest, healthy-looking leaf (CCM-300, Opti-Sciences, Inc., USA). From these data, we calculated the following photosynthetic vigour index:
$$\text{photosynthetic vigour index} = \text{max leaf length} \times \left(1 + \sqrt{\text{number of leaves}}\right) \times \left(100\% - \text{brown leaf fraction}\right) \times \text{chlorophyll content} \qquad (1)$$
We used the square root of number of leaves to reflect the decrease in leaf size in each additional leaf beside the biggest leaf. To assess species-specific contributions to canopy greenness, green cover (0–100%) was estimated for each species two times: once during the season—after 11 weeks in +4 m and +2 m and after 7 weeks in field plots (in the field by eye)—and once at the end of the season (19 October 2021; from images).
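Eq. (1) translates directly into code. The sketch below assumes the four inputs are measured per individual as described above, with the brown leaf fraction expressed as a proportion and chlorophyll in the meter's output units; the variable names are ours, and the later rescaling to percent of the per-species maximum is not shown.

```python
import math

def vigour_index(max_leaf_length_cm, n_leaves, brown_fraction, chlorophyll):
    """Photosynthetic vigour index following Eq. (1).

    brown_fraction is a proportion (0-1), so the equation's
    (100% - brown leaf fraction) term becomes (1 - brown_fraction).
    """
    return (max_leaf_length_cm
            * (1.0 + math.sqrt(n_leaves))
            * (1.0 - brown_fraction)
            * chlorophyll)

# hypothetical individual: longest leaf 8.2 cm, 4 leaves, 15% brown leaf area
# vigour = vigour_index(8.2, 4, 0.15, chlorophyll=310)
```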
We used two identical root scanners to produce high-resolution images (Fig. 1A, 1200 DPI) of roots growing along the surface of the tubes (CI-602, CID BioScience, USA). The scanner is inserted into the rhizotron tube to produce a 360°-image (21.6 × 18.6 cm, W × H) that is focused on the outer surface of the transparent tube (Fig. 1A). Each monolith and field plot were scanned throughout the growing season, twice a week during the first month and then at 7–21 days intervals. The average soil area and depth covered by the scans amounted to 330 cm2 and 18 cm per tube, respectively.
Root images were processed using Python 3 (v. 3.6.9). Vertical striping artifacts, frequent with such scanners, were removed63 and the aboveground part of the images (sun-block tape) was replaced by black. Brightness and contrast were normalized for each image before all images per tube were aligned (planar shifts determined by phase correlation). In total, we acquired ~700 scans and each was split into 16 sub-images measuring 2550 × 2196 pixels. Two sub-images per monolith/plot (one of each tube in field plots) were randomly chosen for manual root annotation using the rhizoTrak64 plugin (v. 1.3) for Fiji65. Of these 42 annotated images, half were used for training and half for validation of a convolutional neural network66. The training dataset was augmented with annotated images from another experiment at the site of the field plots (50 additional images, same size). Validation was performed on images from this study only. After 60 training epochs (i.e., training cycles through the entire dataset), 84% of all pixels predicted as root actually belonged to roots and 82% of the actual root pixels were identified as such. Subsequently, all original (full-sized) images were automatically segmented. Mean root area per image area (mm2 cm−2) was determined using RhizoVision67 (v. 2.0.3). Predicted root area correlated well with the actual root area in the manually annotated images (R2 = 0.99, Supplementary Figure 4). Dead-looking roots were found in 15 annotated images (0.3% of the total root area). Root data from one monolith was excluded because roots at the tube surface were scarce for unknown reasons.
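As an illustration of the alignment step, the planar shift between consecutive scans can be estimated by phase correlation, for example with scikit-image. This is a minimal sketch under the assumption that scans of a tube differ only by a translation; it does not reproduce the striping correction, brightness normalization, or the U-Net segmentation66 used in the actual pipeline.

```python
from skimage import io, color
from skimage.registration import phase_cross_correlation
from scipy.ndimage import shift as nd_shift

def align_to_reference(ref_path, img_path):
    """Estimate the planar shift of a scan relative to a reference scan
    and return the shifted (registered) grayscale image."""
    ref = color.rgb2gray(io.imread(ref_path))
    img = color.rgb2gray(io.imread(img_path))
    offset, error, _ = phase_cross_correlation(ref, img)  # (row, col) shift
    return nd_shift(img, shift=offset, mode="nearest")
```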
Microsites in the field
We chose 24 microsites (40 × 40 cm) covering different snowmelt dates and tracked leaf elongation and browning of Carex. Microsites were situated within an area of ~3 km2 around the research station (2283–2595 m a.s.l.) and were visited at irregular intervals during the growing season 2020. When microsites were measured twice within the same week (interval < 7 days), data were pooled and assigned to the mean date to reduce noise in the data. Each microsite was measured 5–10 times across the growing season (for an example, see Fig. 4D). As we suspected temperature to be a major driver of plant growth, and to determine the exact snowmelt-date, temperature sensors (iButton DS1922L) were installed 3 cm below the soil surface (close to Carex's meristems) in each microsite in September 2019, logging temperature every two hours until the end of the growing season 2020.
Data analyses were performed using the statistical programming language R68 (v. 4.0.5). To ease comparability between temporal sequences of response variables, Carex leaf length, canopy greenness, root area and photosynthetic vigour index were scaled to percent of the maximum (0–100%) for each group and species. Further, root area was set to zero at the start of the season. We fitted generalized additive models (GAM, mgcv-package69) with a thin-plate smoothing spline in the form 'response variable ~ s(day of year)' for each experimental unit. Number of knots (k) depended on sample size but was restricted to a maximum of eight and the estimated degrees of freedom varied between 3.1 and 6.9. Goodness of fit of smoothed terms was high in all cases (mean R2 > 0.88 for each response variable). The timepoints presented in Fig. 3 (e.g., 80% quantile of root growth) were interpolated using these GAMs, except for the day of 50% browning in green leaf length and greenness, which was linearly interpolated amid the closest measurements. Integrated area under the smoothed curve was approximated on a daily interval for greenness. The start of root growth was defined as the first date of a moving window, spanning three adjacent measurement dates, whose linear regression slope exceeded 0.5% d−1. Means and standard errors (SE) were calculated for each group (n = 7–8 in +4 m and +2 m, n = 5 in field plots). For visual simplicity, one GAM was fitted per group in Fig. 2.
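The analyses themselves were run in R (mgcv, emmeans), and the published R code accompanies the data; the moving-window rule used to define the start of root growth is, however, simple enough to sketch in Python. The three-date window and the 0.5% d−1 slope threshold follow the text, while the variable names and the least-squares fit via numpy.polyfit are our choices.

```python
import numpy as np

def root_growth_start(days, root_area_pct, slope_threshold=0.5, window=3):
    """First date of a 3-point moving window whose linear-regression
    slope exceeds 0.5 % per day (root area scaled to 0-100 %)."""
    days = np.asarray(days, dtype=float)
    y = np.asarray(root_area_pct, dtype=float)
    for i in range(len(days) - window + 1):
        slope = np.polyfit(days[i:i + window], y[i:i + window], 1)[0]
        if slope > slope_threshold:
            return days[i]
    return None

# example with made-up values:
# root_growth_start([0, 4, 8, 12, 16], [0, 0.3, 1.2, 4.0, 9.5]) returns 8.0,
# the first date of the first window whose slope exceeds 0.5 % per day
```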
For microsites, green leaf length of Carex was fixed at 0.5 cm at season start, which is about the amount of remaining green leaf previously observed after winter. Elongation and browning rates in microsites were calculated between consecutive measurements from season start to two weeks before the peak and from two weeks after the peak until one week following 50% browning, excluding the peak with intrinsically low rates. This yielded 43 elongation and 22 browning rates with intervals between measurements of 7–89 days. Corresponding mean soil temperature and growing degree hours (GDH) > 5 °C at 3 cm soil depth were calculated for each interval per microsite. Four microsites were not measured after 50% browning and were excluded from the analysis of browning periods. Also, one elongation period could not be related to temperature due to T-sensor failure.
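Growing degree hours above 5 °C were accumulated from the 2-hourly soil temperature records. A minimal sketch is given below; accumulating each reading over its logging interval is our assumption about how the integration was done, not a statement of the authors' exact procedure.

```python
def growing_degree_hours(temps_c, base=5.0, interval_h=2.0):
    """Accumulate degree hours above `base` from soil temperature
    readings taken every `interval_h` hours."""
    return sum(max(t - base, 0.0) * interval_h for t in temps_c)

# example: three readings of 4, 8 and 11 degC logged every 2 h
# -> (0 + 3 + 6) * 2 = 18 growing degree hours
```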
The start of the growing season was defined as the day when snow disappeared, indicated by soil temperatures >3 °C and diurnal temperature fluctuations. Significant snowfall on 25 September in 2020 and a cold spell after 15 October in 2021 marked the meteorological end of the growing seasons for all plots. Differences between treatments were calculated by fitting linear regressions and subsequently calculating post-hoc contrasts using the R-package 'emmeans'70. Model assumptions regarding residual distribution were verified visually. In the case of photosynthetic vigour index, maximum and last values were compared by fitting mixed effect models to account for repeated measures (package nlme). P values as well as F or t values with degrees of freedom based on the overall model are reported in text and in Supplementary Tables.
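The season-start criterion (soil warming above 3 °C together with the return of diurnal temperature fluctuations) can be written as a simple daily screen. In the sketch below the 3 °C threshold follows the text, whereas the 1 K daily-range criterion used to detect 'diurnal fluctuations', and all names, are our assumptions.

```python
def snowmelt_day(days, daily_mean_c, daily_range_c,
                 mean_threshold=3.0, range_threshold=1.0):
    """Return the first day with mean soil temperature above the threshold
    and a clear diurnal fluctuation (daily range above `range_threshold`)."""
    for day, mean_t, range_t in zip(days, daily_mean_c, daily_range_c):
        if mean_t > mean_threshold and range_t > range_threshold:
            return day
    return None
```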
Reporting summary
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.
Data generated in this study and annotated images used to train the neural network have been deposited in the figshare repository under accession code https://doi.org/10.6084/m9.figshare.2044049771.
R-codes are published with the data.
Körner, C. Alpine Plant Life: Functional plant ecology of high mountain ecosystems. (Springer, 2021). https://doi.org/10.1007/978-3-030-59538-8.
Pepin, N. et al. Elevation-dependent warming in mountain regions of the world. Nat. Clim. Change 5, 424–430 (2015).
Pepin, N. C. et al. Climate changes and their elevational patterns in the mountains of the world. Rev. Geophys. 60, e2020RG000730 (2022).
Stewart, I. T. Changes in snowpack and snowmelt runoff for key mountain regions. Hydrol. Process 23, 78–94 (2009).
Vorkauf, M., Marty, C., Kahmen, A. & Hiltbrunner, E. Past and future snowmelt trends in the Swiss Alps: the role of temperature and snowpack. Clim. Change 165, 44–62 (2021).
Inouye, D. W. Effects of climate change on phenology, frost damage, and floral abundance of montane wildflowers. Ecology 89, 353–362 (2008).
Vorkauf, M., Kahmen, A., Körner, C. & Hiltbrunner, E. Flowering phenology in alpine grassland strongly responds to shifts in snowmelt but weakly to summer drought. Alp. Bot. 131, 73–88 (2021).
Wipf, S. & Rixen, C. A review of snow manipulation experiments in Arctic and alpine tundra ecosystems. Polar Res. 29, 95–109 (2010).
Collins, C. G. et al. Experimental warming differentially affects vegetative and reproductive phenology of tundra plants. Nat. Commun. 12, 3442 (2021).
Choler, P. Growth response of temperate mountain grasslands to inter-annual variations in snow cover duration. Biogeosciences 12, 3885–3897 (2015).
Xie, J. et al. Land surface phenology and greenness in alpine grasslands driven by seasonal snow and meteorological factors. Sci. Total Environ. 725, 138380 (2020).
Nord, E. A. & Lynch, J. P. Plant phenology: a critical controller of soil resource acquisition. J. Exp. Bot. 60, 1927–1937 (2009).
Gallinat, A. S., Primack, R. B. & Wagner, D. L. Autumn, the neglected season in climate change research. Trends Ecol. Evol. 30, 169–176 (2015).
Rosa, R. K. et al. Plant phenological responses to a long‐term experimental extension of growing season and soil warming in the tussock tundra of Alaska. Glob. Change Biol. 21, 4520–4532 (2015).
Livensperger, C. et al. Earlier snowmelt and warming lead to earlier but not necessarily more plant growth. AoB Plants 8, plw021 (2016).
Körner, C. & Hiltbrunner, E. Why is the alpine flora comparatively robust against climatic warming? Diversity 13, 383–397 (2021).
Ma, H. et al. The global distribution and environmental drivers of aboveground versus belowground plant biomass. Nat. Ecol. Evol. 5, 1110–1122 (2021).
Iversen, C. M. et al. The unseen iceberg: plant roots in arctic tundra. N. Phytol. 205, 34–58 (2015).
Abramoff, R. Z. & Finzi, A. C. Are above‐ and below‐ground phenology in sync? N. Phytol. 205, 1054–1061 (2015).
Liu, H. et al. Phenological mismatches between above- and belowground plant responses to climate warming. Nat. Clim. Change 12, 97–102 (2022).
Rixen, C. et al. Winters are changing: snow effects on Arctic and alpine tundra ecosystems. Arct. Sci. 1–37 (2022) https://doi.org/10.1139/as-2020-0058.
Johnson, M. G., Tingey, D. T., Phillips, D. L. & Storm, M. J. Advancing fine root research with minirhizotrons. Environ. Exp. Bot. 45, 263–289 (2001).
Atkinson, J. A., Pound, M. P., Bennett, M. J. & Wells, D. M. Uncovering the hidden half of plants using new advances in root phenotyping. Curr. Opin. Biotech. 55, 1–8 (2019).
Radville, L., McCormack, M. L., Post, E. & Eissenstat, D. M. Root phenology in a changing climate. J. Exp. Bot. 67, 3617–3628 (2016).
Blume-Werry, G. The belowground growing season. Nat. Clim. Change 12, 11–12 (2022).
Wipf, S., Stoeckli, V. & Bebi, P. Winter climate change in alpine tundra: plant responses to changes in snow depth and snowmelt timing. Clim. Change 94, 105–121 (2009).
Baptist, F., Flahaut, C., Streb, P. & Choler, P. No increase in alpine snowbed productivity in response to experimental lengthening of the growing season. Plant Biol. 12, 755–764 (2010).
Vitasse, Y. et al. 'Hearing' alpine plants growing after snowmelt: ultrasonic snow sensors provide long-term series of alpine plant phenology. Int J. Biometeorol. 61, 349–361 (2017).
Blume‐Werry, G., Jansson, R. & Milbau, A. Root phenology unresponsive to earlier snowmelt despite advanced above‐ground phenology in two subarctic plant communities. Funct. Ecol. 31, 1493–1502 (2017).
Darrouzet‐Nardi, A. et al. Limited effects of early snowmelt on plants, decomposers, and soil nutrients in Arctic tundra soils. Ecol. Evol. 9, 1820–1844 (2019).
Ernakovich, J. G. et al. Predicted responses of arctic and alpine ecosystems to altered seasonality under climate change. Glob. Change Biol. 20, 3256–3269 (2014).
Keller, F. & Körner, C. The role of photoperiodism in alpine plant development. Arct. Antarct. Alp. Res 35, 361–368 (2003).
Hiltbrunner, E., Arnaiz, J. & Körner, C. Biomass allocation and seasonal non-structural carbohydrate dynamics do not explain the success of tall forbs in short alpine grassland. Oecologia 1–15 (2021) https://doi.org/10.1007/s00442-021-04950-7.
Inauen, N., Körner, C. & Hiltbrunner, E. No growth stimulation by CO2 enrichment in alpine glacier forefield plants. Glob. Change Biol. 18, 985–999 (2012).
Möhl, P., Hiltbrunner, E. & Körner, C. Halving sunlight reveals no carbon limitation of aboveground biomass production in alpine grassland. Glob. Change Biol. 26, 1857–1872 (2020).
Porter, J. R. & Gawith, M. Temperatures and the growth and development of wheat: a review. Eur. J. Agron. 10, 23–36 (1999).
Parent, B., Turc, O., Gibon, Y., Stitt, M. & Tardieu, F. Modelling temperature-compensated physiological rates, based on the co-ordination of responses to temperature of developmental processes. J. Exp. Bot. 61, 2057–2069 (2010).
Körner, C. H. & Woodward, F. I. The dynamics of leaf extension in plants with diverse altitudinal ranges. Oecologia 72, 279–283 (1987).
Nagelmüller, S., Hiltbrunner, E. & Körner, C. Low temperature limits for root growth in alpine species are set by cell differentiation. AoB Plants 9, plx054 (2017).
Starr, G., Oberbauer, S. F. & Pop, E. W. Effects of lengthened growing season and soil warming on the phenology and physiology of Polygonum bistorta. Glob. Change Biol. 6, 357–369 (2000).
Yoshie, F. Vegetative phenology of alpine plants at Tateyama Murodo-daira in central Japan. J. Plant Res. 123, 675–688 (2010).
Jonas, T., Rixen, C., Sturm, M. & Stoeckli, V. How alpine plant growth is linked to snow cover and climate variability. J. Geophys. Res. 113, G03013 (2008).
Wang, H. et al. Alpine grassland plants grow earlier and faster but biomass remains unchanged over 35 years of climate change. Ecol. Lett. 23, 701–710 (2020).
Frei, E. R. & Henry, G. H. R. Long-term effects of snowmelt timing and climate warming on phenology, growth, and reproductive effort of Arctic tundra plant species. Arct. Sci. 1–22 (2021) https://doi.org/10.1139/as-2021-0028.
Schäppi, B. & Körner, C. Growth responses of an alpine grassland to elevated CO2. Oecologia 105, 43–52 (1996).
Aloni, R. Role of hormones in controlling vascular differentiation and the mechanism of lateral root initiation. Planta 238, 819–830 (2013).
Sloan, V. L., Fletcher, B. J. & Phoenix, G. K. Contrasting synchrony in root and leaf phenology across multiple sub‐Arctic plant communities. J. Ecol. 104, 239–248 (2016).
Nagelmüller, S., Hiltbrunner, E. & Körner, C. Critically low soil temperatures for root growth and root morphology in three alpine plant species. Alp. Bot. 126, 11–21 (2016).
Woo, H. R., Kim, H. J., Lim, P. O. & Nam, H. G. Leaf senescence: systems and dynamics aspects. Annu. Rev. Plant Biol. 70, 1–30 (2019).
Liu, Z., Marella, C. B. N., Hartmann, A., Hajirezaei, M. R. & von Wirén, N. An age-dependent sequence of physiological processes defines developmental root senescence. Plant Physiol. 181, 993–1007 (2019).
Ryser, P., Puig, S., Müller, M. & Munné-Bosch, S. Abscisic acid responses match the different patterns of autumn senescence in roots and leaves of Iris versicolor and Sparganium emersum. Environ. Exp. Bot. 176, 104097 (2020).
Budge, K., Leifeld, J., Hiltbrunner, E. & Fuhrer, J. Alpine grassland soils contain large proportion of labile carbon but indicate long turnover times. Biogeosciences 8, 1911–1923 (2011).
Solly, E. F. et al. Unravelling the age of fine roots of temperate and boreal forests. Nat. Commun. 9, 3006 (2018).
Trumbore, S. E., Sierra, C. A. & Pries, C. E. H. Radiocarbon and climate change, mechanisms, applications and laboratory techniques. 45–82 (2016) https://doi.org/10.1007/978-3-319-25643-6_3.
Windmaißer, T. & Reisch, C. Long-term study of an alpine grassland: local constancy in times of global change. Alp. Bot. 123, 1–6 (2013).
De Witte, L. C. D., Armbruster, G. F. J., Gielly, L., Taberlet, P. & Stöcklin, J. AFLP markers reveal high clonal diversity and extreme longevity in four key arctic‐alpine species. Mol. Ecol. 21, 1081–1097 (2012).
Landolt, E. Unsere Alpenflora. (SAC-Verlag, 2012).
Puşcaş, M. & Choler, P. A biogeographic delineation of the European Alpine System based on a cluster analysis of Carex curvula-dominated grasslands. Flora - Morphol. Distrib. Funct. Ecol. Plants 207, 168–178 (2012).
Grabherr, G., Mahr, E. & Reisigl, H. Nettoprimärproduktion und Reproduktion in einem Krummseggenrasen (Caricetum curvulae) der Otztaler Alpen, Tirol. Oecologia Plant. 13, 227–251 (1978).
Chiang, C., Bånkestad, D. & Hoch, G. Reaching natural growth: light quality effects on plant performance in indoor growth facilities. Plants 9, 1273 (2020).
Chiang, C., Olsen, J. E., Basler, D., Bånkestad, D. & Hoch, G. Latitude and weather influences on sun light quality and the relationship to tree growth. Forests 10, 610–621 (2019).
Richardson, A. D. et al. Use of digital webcam images to track spring green-up in a deciduous broadleaf forest. Oecologia 152, 323–334 (2007).
Jiang, Y. & Li, C. Convolutional neural networks for image-based high-throughput plant phenotyping: a review. Plant Phenomics 2020, 4152816 (2020).
Möller, B. et al. rhizoTrak: a flexible open source Fiji plugin for user-friendly manual annotation of time-series images from minirhizotrons. Plant Soil 444, 519–534 (2019).
Schindelin, J. et al. Fiji: an open-source platform for biological-image analysis. Nat. Methods 9, 676–682 (2012).
Smith, A. G., Petersen, J., Selvan, R. & Rasmussen, C. R. Segmentation of roots in soil with U-Net. Plant Methods 16, 13–27 (2020).
Seethepalli, A. et al. Rhizovision crown: an integrated hardware and software platform for root crown phenotyping. Plant Phenomics 2020, 3074916 (2020).
R Core Team. R: A language and environment for statistical computing. (R Foundation for Statistical Computing, 2021).
Wood, S. N. Fast stable restricted maximum likelihood and marginal likelihood estimation of semiparametric generalized linear models. J. R. Stat. Soc. Ser. B Stat. Methodol. 73, 3–36 (2011).
Lenth, R. V. emmeans: Estimated Marginal Means, aka Least-Squares Means. R package version 1.6.2-1. (2021).
Möhl, P., von Büren, R. S. & Hiltbrunner, E. Data from: Growth of alpine grassland will start and stop earlier under climate warming. figshare. https://doi.org/10.6084/m9.figshare.20440497 (2022).
This project was funded by the Swiss National Science Foundation, grant number 31003A_182592 awarded to E.H. We are grateful to David Basler for help with Python coding, Georges Grun for support with climate chamber operation, Christian Körner for feedback on the manuscript, O. Tackenberg for providing illustrations and to Lawrence Blem for his dedicated field support. We thank several helpers for valuable assistance in the field, climate chambers, and taking measurements during their civil service. The Alpine Research and Education Station ALPFOR (www.alpfor.ch) provided infrastructure and accommodation.
Department of Environmental Sciences, University of Basel, Schönbeinstrasse 6, CH-4056, Basel, Switzerland
Patrick Möhl, Raphael S. von Büren & Erika Hiltbrunner
Patrick Möhl
Raphael S. von Büren
Erika Hiltbrunner
E.H. and P.M. designed the study. P.M. collected data from monoliths and field plots and R.S.v.B. from microsites. P.M. analyzed the data. P.M. and E.H. wrote the manuscript. All authors revised the manuscript and approved the final version.
Correspondence to Patrick Möhl.
Möhl, P., von Büren, R.S. & Hiltbrunner, E. Growth of alpine grassland will start and stop earlier under climate warming. Nat Commun 13, 7398 (2022). https://doi.org/10.1038/s41467-022-35194-5
Human Development Index
World map by quartiles of Human Development Index in 2011.
The Human Development Index (HDI) is a composite statistic used to rank countries by level of "human development" and to group them into "very high human development", "high human development", "medium human development" and "low human development" categories. It is a comparative measure of life expectancy, literacy, education and standards of living for countries worldwide, and a standard means of measuring well-being, especially child welfare. It is used to distinguish whether a country is developed, developing or under-developed, and also to measure the impact of economic policies on quality of life. HDIs are also calculated for states, cities, villages and other subnational units by local organizations or companies.
The origins of the HDI are found in the annual Human Development Reports of the United Nations Development Programme (UNDP). These were devised and launched by Pakistani economist Mahbub ul Haq in 1990 and had the explicit purpose "to shift the focus of development economics from national income accounting to people centered policies". To produce the Human Development Reports, Mahbub ul Haq brought together a group of well-known development economists including Paul Streeten, Frances Stewart, Gustav Ranis, Keith Griffin, Sudhir Anand and Meghnad Desai. But it was Nobel laureate Amartya Sen's work on capabilities and functionings that provided the underlying conceptual framework. Haq was sure that a simple composite measure of human development was needed in order to convince the public, academics, and policy-makers that they can and should evaluate development not only by economic advances but also by improvements in human well-being. Sen initially opposed this idea, but he went on to help Haq develop the Human Development Index (HDI). Sen was worried that it was difficult to capture the full complexity of human capabilities in a single index, but Haq persuaded him that only a single number would shift the attention of policy-makers from a concentration on economic measures to human well-being.[1][2]
Other organizations and companies also make HD Indices with differing formulae and results (see below).
Dimensions and calculation
Starting with the 2010 Human Development Report (published on 4 November 2010 and updated on 10 June 2011), the HDI combines three dimensions:
A long and healthy life: Life expectancy at birth
Access to knowledge: Mean years of schooling and Expected years of schooling
A decent standard of living: GNI per capita (PPP US$)
The HDI combined three dimensions up until its 2010 report:
Life expectancy at birth, as an index of population health and longevity
Knowledge and education, as measured by the adult literacy rate (with two-thirds weighting) and the combined primary, secondary, and tertiary gross enrollment ratio (with one-third weighting).
Standard of living, as indicated by the natural logarithm of gross domestic product per capita at purchasing power parity.
New methodology for 2010 data onwards
2010 Very High HDI nations, by population size
In its 2010 Human Development Report the UNDP began using a new method of calculating the HDI. The following three indices are used:
1. Life Expectancy Index (LEI) $= \frac{\textrm{LE} - 20}{63.2}$
2. Education Index (EI) $= \frac{\sqrt{\textrm{MYSI} \cdot \textrm{EYSI}}}{0.951}$
2.1 Mean Years of Schooling Index (MYSI) $= \frac{\textrm{MYS}}{13.2}$[3]
2.2 Expected Years of Schooling Index (EYSI) $= \frac{\textrm{EYS}}{20.6}$[4]
3. Income Index (II) $= \frac{\ln(\textrm{GNIpc}) - \ln(163)}{\ln(108{,}211) - \ln(163)}$
Finally, the HDI is the geometric mean of the previous three normalized indices:
$$\textrm{HDI} = \sqrt[3]{\textrm{LEI}\cdot \textrm{EI} \cdot \textrm{II}}.$$
LE: Life expectancy at birth
MYS: Mean years of schooling (average number of years of education received by people aged 25 and older)
EYS: Expected years of schooling (number of years of schooling a child of school-entrance age can expect to receive over their lifetime)
GNIpc: Gross national income at purchasing power parity per capita
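To make the arithmetic concrete, the calculation can be sketched in a few lines of R; the input values below are hypothetical and chosen only for illustration, not taken from any report:

```r
# Post-2010 HDI: geometric mean of three normalized indices
# Illustrative inputs (hypothetical country, not real data)
LE    <- 80      # life expectancy at birth (years)
MYS   <- 11      # mean years of schooling
EYS   <- 16      # expected years of schooling
GNIpc <- 35000   # GNI per capita (PPP US$)

LEI  <- (LE - 20) / 63.2                       # Life Expectancy Index
MYSI <- MYS / 13.2                             # Mean Years of Schooling Index
EYSI <- EYS / 20.6                             # Expected Years of Schooling Index
EI   <- sqrt(MYSI * EYSI) / 0.951              # Education Index
II   <- (log(GNIpc) - log(163)) / (log(108211) - log(163))  # Income Index

HDI <- (LEI * EI * II)^(1/3)                   # geometric mean of the three indices
round(c(LEI = LEI, EI = EI, II = II, HDI = HDI), 3)
```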
Methodology used until 2010
HDI trends between 1975 and 2004
This is the methodology used by the UNDP up until its 2010 report.
The formula defining the HDI is promulgated by the United Nations Development Programme (UNDP).[5] In general, to transform a raw variable, say $x$, into a unit-free index between 0 and 1 (which allows different indices to be added together), the following formula is used:
$$x\text{-index} = \frac{x - \min(x)}{\max(x) - \min(x)}$$
where $\min(x)$ and $\max(x)$ are the lowest and highest values the variable $x$ can attain, respectively.
The Human Development Index (HDI) then represents the uniformly weighted sum with ⅓ contributed by each of the following factor indices:
Life Expectancy Index = $\frac{\textrm{LE} - 25}{85 - 25}$
Education Index = $\frac{2}{3} \times \textrm{ALI} + \frac{1}{3} \times \textrm{GEI}$
Adult Literacy Index (ALI) = $\frac{\textrm{ALR} - 0}{100 - 0}$
Gross Enrollment Index (GEI) = $\frac{\textrm{CGER} - 0}{100 - 0}$
GDP Index = $\frac{\log(\textrm{GDPpc}) - \log(100)}{\log(40000) - \log(100)}$
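For comparison, the pre-2010 calculation can be sketched in the same way (again with hypothetical inputs); note that the GDP index is unchanged whichever logarithm base is used, because the base cancels in the ratio:

```r
# Pre-2010 HDI: equally weighted mean of three component indices
# Illustrative inputs (hypothetical country, not real data)
LE    <- 72     # life expectancy at birth (years)
ALR   <- 95     # adult literacy rate (%)
CGER  <- 85     # combined gross enrolment ratio (%)
GDPpc <- 9000   # GDP per capita (PPP US$)

LEI  <- (LE - 25) / (85 - 25)                  # Life Expectancy Index
ALI  <- ALR / 100                              # Adult Literacy Index
GEI  <- CGER / 100                             # Gross Enrolment Index
EI   <- (2/3) * ALI + (1/3) * GEI              # Education Index
GDPI <- (log(GDPpc) - log(100)) / (log(40000) - log(100))  # GDP Index

HDI <- (LEI + EI + GDPI) / 3                   # each index contributes one third
round(c(LEI = LEI, EI = EI, GDPI = GDPI, HDI = HDI), 3)
```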
Other organizations and companies may include additional components, such as a Democracy Index or population figures, which produces different HDI values.
Main article: List of countries by Human Development Index
The 2011 Human Development Report was released on 2 November 2011, and calculated HDI values based on estimates for 2011. Below is the list of the "Very High Human Development" countries (equal to the top quartile):[6]
Note: The figures in parentheses represent the change in rank compared with the recalculated HDI for 2010 published in the 2011 report (p. 131); an empty parenthesis indicates no change in rank.
Norway 0.943 ( )
Australia 0.929 ( )
Netherlands 0.910 ( )
United States 0.910 ( )
New Zealand 0.908 ( )
Canada 0.908 ( )
Ireland 0.908 ( )
Liechtenstein 0.905 ( )
Germany 0.905 ( )
Sweden 0.904 ( )
Switzerland 0.903 ( )
Japan 0.901 ( )
Hong Kong 0.898 ( 1)
Iceland 0.898 ( -1)
South Korea 0.897 ( )
Denmark 0.895 ( )
Israel 0.888 ( )
Belgium 0.886 ( )
Austria 0.885 ( )
France 0.884 ( )
Slovenia 0.884 ( )
Finland 0.882 ( )
Spain 0.878 ( )
Italy 0.874 ( )
Luxembourg 0.867 ( )
Singapore 0.866 ( )
Czech Republic 0.865 ( )
United Kingdom 0.863 ( )
Greece 0.861 ( )
United Arab Emirates 0.846 ( )
Cyprus 0.840 ( )
Andorra 0.838 ( )
Brunei 0.838 ( )
Estonia 0.835 ( )
Slovakia 0.834 ( )
Malta 0.832 ( )
Qatar 0.831 ( )
Hungary 0.816 ( )
Poland 0.813 ( )
Lithuania 0.810 ( 1)
Portugal 0.809 ( -1)
Bahrain 0.806 ( )
Latvia 0.805 ( )
Chile 0.805 ( )
Argentina 0.797 ( 1)
Croatia 0.796 ( -1)
Barbados 0.793 ( )
Inequality-adjusted HDI
Main article: List of countries by inequality-adjusted HDI
Below is a list of countries in the top quartile by Inequality-adjusted Human Development Index (IHDI).[7]
Note: The figures in parentheses represent the change in rank compared with the 2011 HDI list, for countries listed in both rankings; an empty parenthesis indicates no change in rank.
Sweden 0.851 ( 5)
Netherlands 0.846 ( 1)
Iceland 0.845 ( 5)
Denmark 0.842 ( 4)
Slovenia 0.837 ( 7)
Finland 0.833 ( 7)
Canada 0.829 ( 7)
Czech Republic 0.821 ( 9)
Austria 0.820 ( 1)
Belgium 0.819 ( 1)
Spain 0.799 ( 2)
Luxembourg 0.799 ( 3)
United Kingdom 0.791 ( 4)
Slovakia 0.787 ( 7)
Israel 0.779 ( 8)
Italy 0.779 ( 2)
United States 0.771 ( 19)
Estonia 0.769 ( 2)
Hungary 0.759 ( 3)
Greece 0.756 ( 2)
Cyprus 0.755 ( 2)
South Korea 0.749 ( 17)
Lithuania 0.730 ( )
Portugal 0.726 ( )
Montenegro 0.718 ( 7)
Latvia 0.717 ( 1)
Serbia 0.694 ( 9)
Countries in the top quartile of HDI ("Very high human development" group) with a missing IHDI include: New Zealand, Liechtenstein, Japan, Hong Kong, Singapore, United Arab Emirates, Andorra, Brunei, Malta, Qatar, Bahrain and Barbados.
Countries not included
Some countries were not included for various reasons, mainly the unavailability of certain crucial data. The following United Nations Member States were not included in the 2011 report:[8] North Korea, Marshall Islands, Monaco, Nauru, San Marino, Somalia and Tuvalu.
The 2010 Human Development Report by the United Nations Development Program was released on November 4, 2010, and calculates HDI values based on estimates for 2010. Below is the list of the "Very High Human Development" countries:[9]
Note: The figures in parentheses represent the change in rank compared with the 2007 HDI published in the 2009 report; an empty parenthesis indicates no change in rank.
New Zealand 0.907 ( 17)
United States 0.902 ( 9)
Liechtenstein 0.891 ( 13)
Germany 0.885 ( 12)
Japan 0.884 ( 1)
Switzerland 0.874 ( 4)
France 0.872 ( 6)
Israel 0.872 ( 12)
Iceland 0.869 ( 14)
Luxembourg 0.852 ( 13)
Austria 0.851 ( 11)
Singapore 0.846 ( 5)
Andorra 0.824 ( 2)
Slovakia 0.818 ( 11)
United Arab Emirates 0.815 ( 3)
Malta 0.815 ( 5)
Brunei 0.805 ( 7)
Qatar 0.803 ( 5)
Portugal 0.795 ( 6)
Barbados 0.788 ( 5)
The 2010 Human Development Report was the first to calculate an Inequality-adjusted Human Development Index (IHDI), which factors in inequalities in the three basic dimensions of human development (income, life expectancy, and education). Below is a list of countries in the top quartile by IHDI:[10]
Germany 0.814 ( 3)
Ireland 0.813 ( 3)
Poland 0.709 ( 1)
Romania 0.675 ( 3)
The Bahamas 0.671 ( 4)
Countries in the top quartile of HDI ("Very high human development" group) with a missing IHDI include: New Zealand, Liechtenstein, Japan, Hong Kong, Singapore, Andorra, United Arab Emirates, Malta, Brunei, Qatar, Bahrain and Barbados.
Some countries were not included for various reasons, mainly the unavailability of certain crucial data. The following United Nations Member States were not included in the 2010 report.[11] Cuba lodged a formal protest at its lack of inclusion. The UNDP explained that Cuba had been excluded due to the lack of an "internationally reported figure for Cuba's Gross National Income adjusted for Purchasing Power Parity". All other indicators for Cuba were available, and reported by the UNDP, but the lack of one indicator meant that no ranking could be attributed to the country.[12][13]
Non-UN members (not calculated by UNDP)
Taiwan 0.868 (Ranked 18th among countries if included).[14]
The 2009 Human Development Report by UNDP was released on October 5, 2009, and covers the period up to 2007. It was titled "Overcoming barriers: Human mobility and development". The top countries by HDI were grouped in a new category called "Very High Human Development". The report refers to these countries as developed countries.[15] They are:
Norway 0.971 ( 1)
Australia 0.970 ( 2)
Liechtenstein 0.951 ( 1)
Kuwait 0.916 ( )
Some countries were not included for various reasons, such as being a non-UN member or unable or unwilling to provide the necessary data at the time of publication. Besides the states with limited recognition, the following states were also not included.
2008 statistical update
A new index was released on December 18, 2008. This so-called "statistical update" covered the period up to 2006 and was published without an accompanying Human Development Report. The update is relevant due to newly released estimates of purchasing power parities (PPP), implying substantial adjustments for many countries, resulting in changes in HDI values and, in many cases, HDI ranks.[16]
Iceland 0.968 ( )
New Zealand 0.944 ( 1)
South Korea 0.928 ( 1)
Kuwait 0.912 ( 4)
Bahrain 0.902 ( 9)
Some countries were not included for various reasons, such as being a non-UN member, unable, or unwilling to provide the necessary data at the time of publication. Besides the states with limited recognition, the following states were also not included.
2007/2008 report
The Human Development Report for 2007/2008 was launched in Brasilia, Brazil, on November 27, 2007. Its focus was on "Fighting climate change: Human solidarity in a divided world."[17] Most of the data used for the report are derived largely from 2005 or earlier, thus indicating an HDI for 2005. Not all UN member states choose to or are able to provide the necessary statistics.
The report showed a small increase in the world HDI compared with the previous year's report. This rise was fueled by a general improvement in the developing world, especially in the least developed countries group. This marked improvement at the bottom was offset by a decrease in the HDI of high-income countries.
An HDI below 0.5 is considered to represent "low development". All 22 countries in that category are located in Africa. The highest-scoring Sub-Saharan countries, Gabon and South Africa, are ranked 119th and 121st, respectively. Nine countries departed from this category this year and joined the "medium development" group.
An HDI of 0.8 or more is considered to represent "high development". This includes all developed countries, such as those in North America, Western Europe, Oceania, and Eastern Asia, as well as some developing countries in Eastern Europe, Central and South America, Southeast Asia, the Caribbean, and the oil-rich Arabian Peninsula. Seven countries were promoted to this category this year, leaving the "medium development" group: Albania, Belarus, Brazil, Libya, Macedonia, Russia and Saudi Arabia.
In the following table, the figures in parentheses represent the change in ranking since the previous study; a positive value indicates an increase, a negative value a decrease, and an empty parenthesis no change.
Past top countries
The list below displays the top-ranked country from each year of the Human Development Index. Norway has been ranked highest nine times and Canada eight times, followed by Japan, which has been ranked highest three times, and Iceland, which has been ranked highest twice.
In each original report
The year represents when the report was published. In parentheses is the year for which the index was calculated.
2011 (2011)– Norway
2008 (2006)– Iceland
2000 (1998)– Canada
1994 (????)– Canada
1993 (????)– Japan
1991 (1990)– Japan
Future HDI projections
Further information: List of countries by future Human Development Index projections of the United Nations
In April 2010, the Human Development Report Office provided[18] HDI projections for 2010-2030 (quoted in September 2010 by the United Nations Development Programme in Human Development Research Paper 2010/40, pp. 40–42). These projections were obtained by re-calculating the HDI using projections of its components produced by the agencies that supply the UNDP with the underlying data.
HDI for a sample of 150 countries shows a very high correlation with logarithm of GDP per capita.
The Human Development Index has been criticised on a number of grounds. These include its failure to include any ecological considerations; its focus on national performance and ranking, even though many national Human Development Reports examining subnational performance have been published by the UNDP and others; its limited attention to development from a global perspective; and measurement error in the underlying statistics together with formula changes by the UNDP, which can lead to severe misclassification of countries into the 'low', 'medium', 'high' or 'very high' human development categories.[19] Other authors claimed that the Human Development Reports "have lost touch with their original vision and the index fails to capture the essence of the world it seeks to portray".[20] The index has also been criticized as "redundant" and a "reinvention of the wheel", measuring aspects of development that have already been exhaustively studied.[21][22] The index has further been criticised for an inappropriate treatment of income, a lack of year-to-year comparability, and for assessing development differently in different groups of countries.[23]
Economist Bryan Caplan has criticised the way HDI scores are produced; each of the three components is bounded between zero and one. As a result, rich countries effectively cannot improve their rating (and thus their ranking relative to other countries) in certain categories, even though there is a lot of scope for economic growth and longevity left. "This effectively means that a country of immortals with infinite per-capita GDP would get a score of .666 (lower than South Africa and Tajikistan) if its population were illiterate and never went to school."[24] He argues, "Scandinavia comes out on top according to the HDI because the HDI is basically a measure of how Scandinavian your country is."[24]
Economists Hendrik Wolff, Howard Chong and Maximilian Auffhammer discuss the HDI from the perspective of data error in the underlying health, education and income statistics used to construct the HDI.[19] They identify three sources of data error, which are due to (i) data updating, (ii) formula revisions and (iii) thresholds to classify a country's development status, and find that 11%, 21% and 34% of all countries can be interpreted as currently misclassified in the development bins due to the three sources of data error, respectively. The authors suggest that the United Nations should discontinue the practice of classifying countries into development bins because the cut-off values seem arbitrary, can provide incentives for strategic behavior in reporting official statistics, and have the potential to misguide politicians, investors, charity donors and the public at large who use the HDI. In 2010 the UNDP reacted to the criticism and updated the thresholds to classify nations as low, medium and high human development countries. In a comment to The Economist in early January 2011, the Human Development Report Office responded[25] to a January 6, 2011 article in The Economist[26] which discusses the Wolff et al. paper. The Human Development Report Office states that they undertook a systematic revision of the methods used for the calculation of the HDI and that the new methodology directly addresses the critique by Wolff et al. in that it generates a system for continuous updating of the human development categories whenever formula or data revisions take place.
The following are common criticisms directed at the HDI: that it is a redundant measure that adds little to the value of the individual measures composing it; that it is a means to provide legitimacy to arbitrary weightings of a few aspects of social development; and that it is a number producing a relative ranking that is useless for inter-temporal comparisons and makes it difficult to assess a country's progress or regression, since the HDI for a country in a given year depends on the levels of, say, life expectancy or GDP per capita of other countries in that year.[27][28][29][30] However, each year, UN member states are listed and ranked according to the computed HDI. A high rank can easily be used as a means of national aggrandizement, while a low rank can be used to highlight national insufficiencies. Using the HDI as an absolute index of social welfare, some authors have used panel HDI data to measure the impact of economic policies on quality of life.[31]
Ratan Lal Basu criticises the HDI concept from a completely different angle. According to him the Amartya Sen-Mahbub ul Haq concept of HDI considers that provision of material amenities alone would bring about Human Development, but Basu opines that Human Development in the true sense should embrace both material and moral development. According to him human development based on HDI alone, is similar to dairy farm economics to improve dairy farm output. To quote: 'So human development effort should not end up in amelioration of material deprivations alone: it must undertake to bring about spiritual and moral development to assist the biped to become truly human.'[32] For example, a high suicide rate would bring the index down.
A few authors have proposed alternative indices to address some of the index's shortcomings.[33] However, few of the proposed alternatives cover as many countries, and no development index (other than, perhaps, Gross Domestic Product per capita) has been used as extensively, or as effectively, in discussions and developmental planning as the HDI.
However, one criticism of the HDI has resulted in an alternative index: David Hastings, of the United Nations Economic and Social Commission for Asia and the Pacific, published a report geographically extending the HDI to more than 230 economies, whereas the UNDP HDI for 2009 covers 182 economies and coverage for the 2010 HDI dropped to 169 countries.[34][35]
Democracy Index
Gini coefficient
Gender Parity Index
Gender-related Development Index
Gender Empowerment Measure
Genuine Progress Indicator
Legatum Prosperity Index
Living Planet Index
Happy Planet Index
Physical quality-of-life index
Human development (humanity)
American Human Development Report
Child Development Index
Satisfaction with Life Index
Multidimensional Poverty Index
List of countries by Human Development Index
List of countries by inequality-adjusted HDI
List of African countries by Human Development Index
List of Australian states and territories by HDI
List of Argentine provinces by Human Development Index
List of Brazilian states by Human Development Index
List of Chilean regions by Human Development Index
List of Chinese administrative divisions by Human Development Index
List of European countries by Human Development Index
List of Indian states by Human Development Index
List of Latin American countries by Human Development Index
List of Mexican states by Human Development Index
List of Pakistani Districts by Human Development Index
List of Philippine provinces by Human Development Index
List of Russian federal subjects by HDI
List of South African provinces by HDI
List of US states by HDI
List of Venezuelan states by Human Development Index
↑ Fukuda-Parr, Sakiko (2003). "The Human Development Paradigm: operationalizing Sen's ideas on capabilities". Feminist Economics 9 (2–3): 301–317. doi:10.1080/1354570022000077980.
↑ United Nations Development Programme (1999). Human Development Report 1999. New York: Oxford University Press.
↑ Mean years of schooling (of adults) (years) is a calculation of the average number of years of education received by people ages 25 and older in their lifetime based on education attainment levels of the population converted into years of schooling based on theoretical durations of each level of education attended. Source: Barro, R. J.; Lee, J.-W. (2010). "A New Data Set of Educational Attainment in the World, 1950-2010". NBER Working Paper No. 15902.
↑ Expected years of schooling is a calculation of the number of years a child of school entrance age is expected to spend at school, or university, including years spent on repetition. It is the sum of the age-specific enrolment ratios for primary, secondary, post-secondary non-tertiary and tertiary education and is calculated assuming the prevailing patterns of age-specific enrolment rates were to stay the same throughout the child's life. (Source: UNESCO Institute for Statistics (2010). Correspondence on education indicators. March. Montreal.)
↑ Definition, Calculator, etc. at UNDP site
↑ 2011 Human Development Index
↑ 2011 Human Development Complete Report
↑ International Human Rights Development Indicators, UNDP
↑ "Samoa left out of UNDP index", Samoa Observer, January 22, 2010
↑ Cuba country profile, UNDP
↑ Report of Directorate General of Budget, Accounting and Statistics, Executive Yuan, R.O.C.(Taiwan)
↑ Human Development Report 2009, pp. 171, 204. http://hdr.undp.org/en/media/HDR_2009_EN_Complete.pdf
↑ News – Human Development Reports (UNDP)
↑ HDR 2007/2008 – Human Development Reports (UNDP)
↑ In: Daponte Beth Osborne, and Hu difei: "Technical Note on Re-Calculating the HDI, Using Projections of Components of the HDI", April 2010, United Nations Development Programme, Human Development Report Office.
↑ 19.0 19.1 Wolff, Hendrik; Chong, Howard; Auffhammer, Maximilian (2011). "Classification, Detection and Consequences of Data Error: Evidence from the Human Development Index". Economic Journal 121 (553): 843–870. doi:10.1111/j.1468-0297.2010.02408.x.
↑ Sagara, Ambuj D.; Najam, Adil (1998). "The human development index: a critical review". Ecological Economics 25 (3): 249–264. doi:10.1016/S0921-8009(97)00168-7.
↑ McGillivray, Mark (1991). "The human development index: yet another redundant composite development indicator?". World Development 19 (10): 1461–1468. doi:10.1016/0305-750X(91)90088-Y.
↑ Srinivasan, T. N. (1994). "Human Development: A New Paradigm or Reinvention of the Wheel?". American Economic Review 84 (2): 238–243. JSTOR 2117836.
↑ McGillivray, Mark; White, Howard (2006). "Measuring development? The UNDP's human development index". Journal of International Development 5 (2): 183–192. doi:10.1002/jid.3380050210.
↑ 24.0 24.1 Caplan, Bryan (May 22, 2009). "Against the Human Development Index". Library of Economics and Liberty.
↑ "UNDP Human Development Report Office's comments". The Economist. January 2011. [dead link]
↑ "The Economist (pages 60-61 in the issue of Jan 8, 2011)". January 6, 2011.
↑ Rao, V. V. B. (1991). "Human development report 1990: review and assessment". World Development 19 (10): 1451–1460. doi:10.1016/0305-750X(91)90087-X.
↑ McGillivray, M. (1991). "The Human Development Index: Yet Another Redundant Composite Development Indicator?". World Development 18 (10): 1461–1468. doi:10.1016/0305-750X(91)90088-Y.
↑ Hopkins, M. (1991). "Human development revisited: A new UNDP report". World Development 19 (10): 1461–1468. doi:10.1016/0305-750X(91)90089-Z.
↑ Tapia Granados, J. A. (1995). "Algunas ideas críticas sobre el índice de desarrollo humano". Boletín de la Oficina Sanitaria Panamericana 119 (1): 74–87.
↑ Davies, A.; Quinlivan, G. (2006). "A Panel Data Analysis of the Impact of Trade on Human Development". Journal of Socio-Economics 35 (5): 868–876. doi:10.1016/j.socec.2005.11.048.
↑ HDI-2
↑ Noorbakhsh, Farhad (1998). "The human development index: some technical issues and alternative indices". Journal of International Development 10 (5): 589–605. doi:10.1002/(SICI)1099-1328(199807/08)10:5<589::AID-JID484>3.0.CO;2-S.
↑ Hastings, David A. (2009). "Filling Gaps in the Human Development Index". United Nations Economic and Social Commission for Asia and the Pacific, Working Paper WP/09/02.
↑ Hastings, David A. (2011). "A "Classic" Human Development Index with 232 Countries". HumanSecurityIndex.org. Information Note linked to data
Human Development Report
2011 Human Development Index Update
Human Development Interactive Map
Human Development Tools and Rankings
Technical note explaining the definition of the HDI PDF (5.54 MB)
An independent HDI covering 232 countries, formulated along lines of the traditional (pre-2010) approach.
List of countries by HDI at NationMaster.com
America Is # ... 15? by Dalton Conley, The Nation, March 4, 2009
Bias due to censoring of deaths when calculating extra length of stay for patients acquiring a hospital infection
Shahina Rahman3,
Maja von Cube1,2,
Martin Schumacher1,2 &
Martin Wolkewitz1,2
In many studies the information of patients who die in the hospital is censored when examining the change in length of hospital stay (cLOS) due to hospital-acquired infections (HIs). While appropriate estimators of cLOS are available in the literature, the bias due to censoring of deaths was neither mentioned nor discussed by the respective authors.
Using multi-state models, we systematically evaluate the bias when estimating cLOS in such a way. We first evaluate the bias in a mathematically closed form assuming a setting with constant hazards. To estimate the cLOS due to HIs non-parametrically, we relax the assumption of constant hazards and consider a time-inhomogeneous Markov model.
In our analytical evaluation we are able to discuss challenging effects of the bias on cLOS. These are in regard to direct and indirect differential mortality. Moreover, we can make statements about the magnitude and direction of the bias. For real-world relevance, we illustrate the bias on a publicly available prospective cohort study on hospital-acquired pneumonia in intensive-care.
Based on our findings, we can conclude that censoring the death cases in the hospital and considering only patients discharged alive should be avoided when estimating cLOS. Moreover, we found that the closed mathematical form can be used to describe the bias for settings with constant hazards.
Change in length of stay (cLOS) in hospital is a key outcome when studying the health impact and economic consequences of hospital acquired infections (HIs). A patient with an HI is likely to stay longer in the hospital, incurring extra costs. Thus, appropriately quantifying the cLOS in hospitals (in days) due to HIs is crucial for economic and policy decision making. However, a correct estimation of cLOS is challenging and prone to bias. This is not only because HIs are time-dependent covariates but also because there are two competing outcomes, namely in-hospital death and discharge from the hospital alive.
Barnett et al. [1] used a multi-state model to show the occurrence of substantial bias in estimating cLOS when studies fail to treat HIs as a time-dependent exposure (this bias is known as 'time-dependent bias').
Brock et al. [2] found that the way in which mortality is handled while investigating other time-related outcomes (such as discharge alive) influences the estimate of cLOS. They contrasted two ad-hoc approaches. In the first approach they restricted the analysis to the patients who survived. In the second approach, individuals who died were right-censored at the longest possible follow-up time. They concluded that the two methods can potentially give different results for the same data. Brock et al. argue that this could lead to conflicting conclusions, unless the investigators are aware of the differences between the estimators.
In many studies patients who are dying in the hospital are censored at the time of death to study the cLOS in hospitals due to HIs. One recent example is a study by Noll et al. [3]. They calculated cLOS by censoring the outcome of patients who died in the hospital, had ventilator dependent respiratory failure, or withdrew from the study. Another recent study by Guerra et al. [4] censored the patients who were not discharged from the hospital to their usual residence within the study period, namely death cases or patients that were transferred, to investigate the cLOS due to HIs. Zuniga et al. [5] censored death cases and analysed the cLOS considering information only of the patients who were discharged alive.
However, death in the hospital is informative censoring and should be treated in a competing risks framework as proposed by Schulgen et al. [6]. In this article, we show that treating death-cases in the hospital as non-informative censoring can lead to biased estimates of cLOS.
It may be argued that the mortality rates in hospitals are usually not very high, as most of the patients are discharged alive. Thus, using only the information of the patients discharged alive might lead to reasonable estimation of cLOS in many cases. However, the efficiency of such an estimator might be questionable. Moreover, in intensive-care units (ICUs) where HIs are a serious problem, the mortality can reach up to 30-50%. This is for instance the case for ventilated and critically-ill patients. Since cLOS is often used to calculate costs as costs are driven by bed days, we argue that the costs of a hospital stay are not affected by the status of the patient at the end of stay.
A reason for censoring the death cases may be the wish to give cLOS (and cost) estimates for a hospital-population which is discharged alive. Therefore, we propose to follow the approach by Allignol et al. [7]. They suggest to first use the combined endpoint 'discharge (dead or alive)' to calculate the overall cLOS (which can also be used for a cost analysis) and second, to distinguish the impact of HIs on cLOS between patients discharged alive and patients deceased. Based on this approach, studies censoring the patients at the time of their death are prone to bias.
To understand and quantify the difference of the competing risks and the censoring approach, we assume the simplified setting of constant hazards. For this setting, we derive an analytical expression for the difference of cLOS estimated as proposed by Allignol et al. [7] and cLOS estimated by censoring the deceased patients. This analytical expression can be used to investigate and analyse the magnitude of the bias that occurs when estimating cLOS by censoring patients that die. Motivated by the work of Joly et al. [8] and Binder and Schumacher [9], we systematically investigate the bias with respect to "differential mortality". In their setting differential mortality is a term which defines the difference in the rate of mortality of the patients with and without the infection. They consider an illness-death model where HI is an intermediate event between admission and death. In our setting we have two competing outcomes (death in hospital or discharge alive). HI is an intermediate event between admission and death or discharge, whichever comes first. Therefore, we have considered two kinds of "differential mortality" in the time-constant hazards set up, which affect the absolute mortality risk of a patient: 1. "direct differential mortality", when the death hazards with and without the infection differ while the discharge hazards with and without the infection remain the same. 2. "indirect differential mortality", when discharge hazard rates with and without the infection differ while death hazards with and without the infection remain the same. The type of differential mortality can be studied with cause-specific Cox proportional hazards models for death and discharge with HI as a time-dependent covariate.
Moreover, we compare the estimate of cLOS from the biased model with the cLOS attributed to patients discharged alive. To do so, we use the formula derived by Allignol et al. [10]. They propose a simple method to split the extra days due to HIs in the hospital into days attributable to patients that die and days attributable to those that are discharged alive. This can be done both for homogeneous Markov models and for time-inhomogeneous Markov models. The methods for the time-inhomogeneous model are implemented in the R-package etm, developed by Allignol et al. [7].
In "Methods" section part 1 we shortly discuss the formulas to estimate cLOS with the two approaches under the constant hazards assumption. In "Methods" section part 2, we aim to provide a proper analytical expression of the potential bias in estimating the cLOS due to HIs when the information on the death cases in the hospital is censored. Assuming a time-homogeneous Markov model, where the transition hazards are time-independent, we systematically explore the amount and direction of the bias. In "Results and discussion" section, we illustrate the real-world relevance of the bias by analysing a random subset of the SIR-3 prospective cohort study on hospital acquired pneumonia in ICUs in Berlin, Germany. For the real data analysis, we estimate the cLOS by applying the method for time-inhomogenuous Markov models developed by Allignol et al. [7], which is based on the Aalen-Johansen estimator. The paper ends with a short discussion in "Conclusion" section.
Multi-state model for hospital infections
We focus on estimating the cLOS in the hospital due to HIs. We study the amount of bias which can occur when estimating the cLOS by treating patients that die as censored.
To do so, we describe the data setting with a multi-state model as proposed by, e.g., [7]. Figure 1 displays this model (model A), which is a multi-state model with states 0 = admission, 1 = infection, 2 = discharge alive and 3 = death. For simplicity we assume that the hazard rates are constant over time so that we can focus on the key points concerning the censoring of the death cases. We denote \(\alpha_{ij}(t)=\alpha_{ij}\) as the hazard of moving from state \(i\) to state \(j\). An example hazard is
$$\alpha_{01}(t)\cdot\Delta t \approx P(\text{HI acquired by time } t + \Delta t \mid \text{no HI up to time } t).$$
Model A: The four state Multistate Model; 0 is "Admission" without hospital acquired infection (HI); 1 is hospital acquired "Infection"; 2 is the status of the patients who are "Discharged Alive" and 3 is the "Death" of the patient in the hospital. The constant hazard rates, α01 is the hazard rate to acquire the hospital infection during the hospital stay; α02 is the hazard rate to be discharged alive without the HI; α03 is the hazard rate to dead without the HI and α12 is the hazard rate to be discharged alive after the HI; α13 is the hazard rate to be dead after the HI
The actual hazard α01(t) is obtained by taking limits as Δt→0. We define the hazard rates, α01= infection hazard rate; α02= discharge hazard rate without infection; α03= death hazard rate without infection; α12= discharge hazard rate with infection and α13= death hazard rate with infection. Under a constant hazards assumption, one estimates α ij by using the maximum likelihood estimator
$$ \hat{\alpha}_{ij} = \frac{\text{number of i} \to \text{j transitions}}{\text{person-time in state i}}. $$
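As a minimal sketch of this estimator in R (the transition counts and person-times below are invented for illustration; in practice they would be tabulated from the observed transitions):

```r
# Maximum likelihood estimates of constant transition hazards:
# number of observed i -> j transitions divided by person-time in state i.
# Counts and person-days are invented for illustration only.
n01 <- 100; n02 <- 400; n03 <- 150   # transitions out of state 0
n12 <- 80;  n13 <- 30                # transitions out of state 1
pt0 <- 6000   # person-days spent in state 0 (no infection)
pt1 <- 1400   # person-days spent in state 1 (after infection)

alpha01 <- n01 / pt0; alpha02 <- n02 / pt0; alpha03 <- n03 / pt0
alpha12 <- n12 / pt1; alpha13 <- n13 / pt1
round(c(alpha01, alpha02, alpha03, alpha12, alpha13), 3)
```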
Under this model the mean sojourn time of an infected patient in the hospital is \(\frac{1}{\alpha_{12}+\alpha_{13}}\) and that of an uninfected patient is \(\frac{1}{\alpha_{01}+\alpha_{02}+\alpha_{03}}\). We write \(X_t\) for the state occupied by the patient at time \(t\). At a time point \(t\), the patient status \(X_t \in \{0,1,2,3\}\). By definition, all individuals start in the initial state 0 of being alive in the hospital and free of HI, i.e., \(X_0=0\). We denote \(T\) as the smallest time at which the process is in an absorbing state, \(T=\inf\{t: X_t \in \{2,3\}\}\). Eventually, the hospital stay ends when \(X_T \in \{2,3\}\).
To evaluate the impact of HIs on the subsequent hospital stay, Schulgen and Schumacher (1996) [6] suggested considering the difference of the expected subsequent stay given the infection status at time \(s\), \(\phi(s)=E(T \mid X_s=1)-E(T \mid X_s=0)\). Schulgen and Schumacher called \(\phi(s)\) the 'expected extra hospitalization time of an infected individual dependent on time \(s\)'. In our setting, the process follows a homogeneous Markov model. Allignol et al. [7] studied the cLOS for model A (Fig. 1) mathematically and found that cLOS does not depend on the time \(s\) in the homogeneous case. The cLOS can therefore be expressed as
$$ {} \text{CLOS}_{true} = \phi(s) = \left[\frac{\alpha_{02} + \alpha_{03}}{\alpha_{12} + \alpha_{13}}-1\right]\times \frac{1}{ \alpha_{01} + \alpha_{02} + \alpha_{03}} $$
Furthermore, Allignol et al. provided a formula to separate the estimation of the cLOS for the discharged patients and the deceased patients under the constant hazard set up. This formula is given by
$$ {\begin{aligned} {} \text{CLOS} &= \text{CLOS(due to discharged alive)} \\ &\quad+ \text{CLOS(due to deaths)}\\ &= \frac{\alpha_{12}}{\alpha_{12} + \alpha_{13}}\times \text{CLOS} + \frac{\alpha_{13}}{\alpha_{12} + \alpha_{13}}\times \text{CLOS} \end{aligned}} $$
Hence, we can separately estimate cLOS attributable to patients discharged alive and cLOS attributable to death cases by plugging in the estimates of the constant hazards obtained with (1).
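Under the constant hazards assumption, formulas (2) and (3) reduce to simple arithmetic. The following R sketch (the function name clos_split and the example hazard values are ours, for illustration only) returns the overall cLOS and its decomposition into the parts attributable to patients discharged alive and to patients who die:

```r
# Change in LOS under constant hazards (formula (2)) and its split into
# parts attributable to discharge alive and to death (formula (3))
clos_split <- function(a01, a02, a03, a12, a13) {
  clos <- ((a02 + a03) / (a12 + a13) - 1) / (a01 + a02 + a03)
  c(clos       = clos,
    clos_alive = clos * a12 / (a12 + a13),
    clos_death = clos * a13 / (a12 + a13))
}

# example call with arbitrary hazard values (per day)
clos_split(a01 = 0.02, a02 = 0.07, a03 = 0.02, a12 = 0.06, a13 = 0.02)
```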
Model B results from model A when treating death cases as censored. In contrast to model A, patients that die are assumed to remain under the same risk of being discharged alive as patients that are still in the hospital. While the discharge hazards of model A and B are the same, the absolute chance of discharge alive in model A depends on the competing risk death and therefore differs from the discharge probability modelled in model B. To derive the cLOS that results from model B, we apply the formula proposed by Allignol et al. which is then
$$ \text{CLOS}^{*} = \left[\frac{\alpha_{02}}{\alpha_{12}}-1\right]\frac{1}{\alpha_{01} + \alpha_{02}}. $$
Analytic expression for the bias
Our focus is on investigating the bias in cLOS when the information of the patients that die is censored. Using the formulas in Eqs. (2) and (4), we deduce that the bias in cLOS due to censoring is,
$$ {\begin{aligned} \text{CLOS}^{*} - \text{CLOS}_{true} =& \frac{\alpha_{03}(\alpha_{02} - \alpha_{12})}{\alpha_{12}(\alpha_{01} + \alpha_{02} + \alpha_{03})(\alpha_{01} + \alpha_{02})} \\ &+\frac{(\alpha_{02}\alpha_{13}- \alpha_{03}\alpha_{12})}{\alpha_{12}(\alpha_{01} + \alpha_{02} + \alpha_{03})(\alpha_{12}+\alpha_{13})}\\ =&\frac{\alpha_{03}(\alpha_{02} \,-\, \alpha_{12})}{\alpha_{12}\alpha_{0\cdot}\alpha^{*}_{0\cdot}} \,+\, \frac{(\alpha_{02}\alpha_{13}\,-\, \alpha_{03}\alpha_{12})}{\alpha_{0\cdot}\alpha_{1\cdot}\alpha_{12}}, \end{aligned}} $$
where \(\alpha_{0\cdot}=\alpha_{01}+\alpha_{02}+\alpha_{03}\), \(\alpha^{*}_{0\cdot} = \alpha_{01} + \alpha_{02}\), \(\alpha_{1\cdot}=\alpha_{12}+\alpha_{13}\) and \(\alpha^{*}_{1\cdot} = \alpha_{12}\). The formula shows that the bias depends on the product of the mean length of stay in state 0, \(1/\alpha_{0\cdot}\), and a term depending on all hazards. This second term determines the direction of the bias, which can be positive or negative. In the following, we study the bias in specific settings which we call differential mortality. We define "direct differential mortality" as the setting where the discharge hazards \(\alpha_{02}\) and \(\alpha_{12}\) are the same but the death hazards \(\alpha_{03}\) and \(\alpha_{13}\) differ. In contrast, "indirect differential mortality" is described by equal death hazards but different discharge hazards. Of note, due to the competing risk situation both settings influence, directly or indirectly, the overall hospital mortality. We define \(\Delta_1=\alpha_{13}-\alpha_{03}\) and \(\Delta_2=\alpha_{02}-\alpha_{12}\) and emphasize that both quantities are likely to be positive because infected patients often have a higher mortality hazard and a lower discharge hazard, i.e., they stay longer in the hospital.
A formal mathematical derivation of the bias can be found in Additional file 1.
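For a quick numerical evaluation, formula (5) can be computed directly as the difference between (4) and (2). The R sketch below (function names and hazard values are ours, for illustration) also shows that the bias vanishes when there is no differential mortality:

```r
# cLOS under constant hazards: model A (formula (2)) and model B (formula (4))
clos_true <- function(a01, a02, a03, a12, a13)
  ((a02 + a03) / (a12 + a13) - 1) / (a01 + a02 + a03)
clos_star <- function(a01, a02, a12)
  (a02 / a12 - 1) / (a01 + a02)

# bias of the censoring approach, formula (5)
clos_bias <- function(a01, a02, a03, a12, a13)
  clos_star(a01, a02, a12) - clos_true(a01, a02, a03, a12, a13)

# infected patients die more often and are discharged later (illustrative values)
clos_bias(a01 = 0.02, a02 = 0.07, a03 = 0.02, a12 = 0.05, a13 = 0.03)

# no differential mortality: identical death and discharge hazards, bias is zero
clos_bias(a01 = 0.02, a02 = 0.07, a03 = 0.02, a12 = 0.07, a13 = 0.02)
```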
No differential mortality
The bias predominately depends on the hazard rates. In the following we study the magnitude of the bias under differential mortality. When there is no differential mortality, that is, no difference between the death hazards with and without infection and no difference between the discharge hazards with and without infection, Δ1=α13−α03=0 and Δ2=α02−α12=0, the bias becomes 0. The following formula can be used to obtain an idea of the magnitude and the direction of the bias for given values of the hazard functions when the death cases are censored.
Direct differential mortality
Under direct differential mortality, there is a non-zero difference between the death hazards with and without infection while the discharge hazards with and without infection are the same, that is Δ2=α02−α12=0 and Δ1=α13−α03≠0. Then, the bias can be expressed as
$$\begin{array}{*{20}l} \text{CLOS}^{*} - \text{CLOS}_{true} &= (\alpha_{13} - \alpha_{03})\cdot\frac{1}{\alpha_{0\cdot}}\cdot\frac{1}{\alpha_{1\cdot}} \\ &= \Delta_{1}\cdot\frac{1}{\alpha_{0\cdot}}\cdot\frac{1}{\alpha_{1\cdot}}. \end{array} $$
The bias changes with \(\Delta_1\). Moreover, as \(\frac{1}{\alpha_{0\cdot}}\) and \(\frac{1}{\alpha_{1\cdot}}\) are the average sojourn times in state 0 and state 1 of uninfected and infected patients, respectively, the bias also increases when the average sojourn times increase.
Indirect differential mortality
Under indirect differential mortality, there is a non-zero difference between the discharge hazards with and without infection while the death intensities with and without infection are the same, that is Δ1=α13−α03=0 and Δ2=α02−α12≠0. Then, the bias is
$$\begin{aligned} \text{CLOS}^{*} - \text{CLOS}_{true} &= \left(\alpha_{02} - \alpha_{12}\right) \cdot\frac{1}{\alpha_{0\cdot}}\cdot\frac{1}{\alpha_{1\cdot}}\cdot\frac{\alpha_{03}\left(\alpha_{0\cdot} + \alpha_{12}\right)}{\alpha_{12}\alpha^{*}_{0\cdot}} \\ &= \Delta_{2} \cdot\frac{1}{\alpha_{0\cdot}}\cdot\frac{1}{\alpha_{1\cdot}}\cdot\frac{\alpha_{03}\left(\alpha_{0\cdot} + \alpha_{12}\right)}{\alpha_{12}\alpha^{*}_{0\cdot}}. \end{aligned}$$
The bias changes with Δ2. The bias also increases with the average waiting time in state 0 and in state 1. Again, in most of the real world situations, we observe Δ2>0, which means the infected patients have lower discharge rates than the uninfected ones. Then, the bias is positive which leads to an overestimation of the cLOS.
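A small numerical check (again with illustrative hazard values chosen by us) confirms that the general difference (5) reduces to the special-case expressions (6) and (7) when the respective hazards are set equal:

```r
# numerical check that (5) reduces to (6) and (7) in the two special cases
a01 <- 0.02; a02 <- 0.07; a03 <- 0.02; a12 <- 0.05; a13 <- 0.03

bias_general <- function(a01, a02, a03, a12, a13)
  (a02 / a12 - 1) / (a01 + a02) -
  ((a02 + a03) / (a12 + a13) - 1) / (a01 + a02 + a03)

# direct differential mortality (a02 == a12): bias = Delta1 / (alpha0. * alpha1.)
d1 <- a13 - a03
c(bias_general(a01, a02, a03, a02, a13),
  d1 / ((a01 + a02 + a03) * (a02 + a13)))

# indirect differential mortality (a13 == a03): bias given by formula (7)
d2 <- a02 - a12
c(bias_general(a01, a02, a03, a12, a03),
  d2 / ((a01 + a02 + a03) * (a12 + a03)) *
    a03 * (a01 + a02 + a03 + a12) / (a12 * (a01 + a02)))
```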
The derived analytical expressions demonstrate for a simplified setting (constant hazards, differential mortality) how the estimation of cLOS is influenced when the information of the death cases is censored. Only in the situation where HIs have an effect on neither the death hazards nor the discharge hazards is the bias avoided. Otherwise, the bias increases with increasing magnitude of the differential mortality.
To show the real world relevance of our findings, we apply the method to a data example. The constant hazards assumption is a facilitating way to compare the estimands of cLOS resulting from model A and model B. However, for real data application it is often too restrictive. Therefore, in our data example we compare models A (Fig. 1) and B (Fig. 2) both under the constant hazards assumption (time-homogeneous Markov model) and more generally under a time-inhomogeneous Markov model.
Model B: Multistate Model resulting from censoring the death cases; 0 is the "Admission" state; 1 is HI; 2 is the status of the patients who are "Discharged Alive" and information on rest of the patients are "Censored". The constant hazard rates that can be calculated from the model, α01 is the hazard rate to acquire a HI infection during the hospital stay; α02 is the hazard rate to be discharged alive without the HI; and α12 is the hazard rate to be discharged alive after the HI
We consider a subset of the SIR-3 cohort study from the Charité university hospital in Berlin, Germany, with prospective assessment of data to examine the effect of HIs in intensive care (Beyersmann et al. 2006a) [11]. The aim of this study was to investigate the effect of pneumonia, which may be acquired by patients during their stay in the ICU. The data is publicly available in the format of los.data from the etm R package. Briefly, los.data includes 756 patients who were admitted to the ICU between February 2000 and July 2001. After having been admitted to the ICU, 124 (16.4%) patients acquired pneumonia (infection) in the hospital. Among those who got infected, 34 (27.4%) patients died. Overall, 191 patients (25.3%) died after ICU admission. None of the patients were censored.
For the analysis, we first modify the data structure such that it corresponds to model A. Moreover, to analyze the cLOS under the "censored model" (model B), the information of the patients who died is censored at the time of their death. Table 1 shows an extract of the dataset under each model.
Table 1 Extract of the data showing the artificial censoring of the patients who died in the hospital at the time of their death, denoted by "cens". The table shows the patient identification number ("id"), the transition states ("from" and "to"), and the time taken by the patient to move from state 0 to the state "to" ("time"). State "1" denotes that the patient is infected and state "2" that the patient is discharged alive; in model A, state "3" denotes death of the patient in the hospital, while the same patients are artificially censored in model B
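In this transition format, censoring the deaths simply means recoding every transition into state "3" as a transition to "cens". A sketch of this step in R, assuming a data frame sir_A with the columns id, from, to and time shown in Table 1 (the object names are ours):

```r
# Model B data: censor patients at the time of death by recoding
# all transitions into state "3" as transitions to "cens".
sir_B <- sir_A
sir_B$to <- as.character(sir_B$to)    # ensure plain character coding
sir_B$to[sir_B$to == "3"] <- "cens"

# event times are unchanged; only the end state of the deceased differs
table(sir_A$to)
table(sir_B$to)
```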
To obtain first insights into the data structure, we estimate the cause-specific cumulative hazards for model A (shown in Fig. 3). The graph indicates that the cumulative discharge hazards are not straight lines (which implies that they are not constant). Moreover, we observe that the discharge hazard is consistently reduced for patients with an HI. The cumulative hazards are estimated using the R-package mvna, developed by Allignol et al. (2008) [12] based on the Nelson-Aalen estimator. We also estimated the cumulative hazard rates for model B, where the patients are censored at the time of their death (also shown in Fig. 3). We can clearly see that the censoring does not affect the other hazard rates. This means the discharge hazards as well as the infection hazard of model A and B are the same. Note that pneumonia appears to have no effect on the death hazard. However, this does not imply that pneumonia has no effect on mortality. The reason is that pneumonia reduces the discharge hazard; as a consequence, patients with pneumonia stay longer in the ICU, and more patients with pneumonia are observed to die in the ICU than patients without pneumonia. The effects of HI on the death and discharge rates can be estimated with two cause-specific hazards models (for death and discharge). The indirect effect on mortality due to a decreased discharge hazard is commonly observed for hospital-acquired infections [13, 14].
Estimated cumulative hazard rates in the first 80 days for the multi-state models in Figs. 1 and 2. The slope of each line corresponds to the actual hazard rate, e.g. a straight line would mean a constant hazard rate. The left figure shows the cumulative hazard functions for model A, where death is considered as a competing event. The right figure corresponds to model B, where the patients are censored at the time of death
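The cumulative hazards in Fig. 3 can be reproduced along these lines with the mvna package. This is a sketch only: it assumes the transition-format data frames sir_A and sir_B from above, and in model A no patient is actually censored, so the label "cens" simply never occurs there:

```r
library(mvna)

# allowed transitions for model A: 0 -> 1, 0 -> 2, 0 -> 3, 1 -> 2, 1 -> 3
tra_A <- matrix(FALSE, 4, 4,
                dimnames = list(c("0", "1", "2", "3"), c("0", "1", "2", "3")))
tra_A["0", c("1", "2", "3")] <- TRUE
tra_A["1", c("2", "3")] <- TRUE

# allowed transitions for model B: death is no longer a state
tra_B <- matrix(FALSE, 3, 3,
                dimnames = list(c("0", "1", "2"), c("0", "1", "2")))
tra_B["0", c("1", "2")] <- TRUE
tra_B["1", "2"] <- TRUE

# Nelson-Aalen estimates of the cumulative transition hazards
na_A <- mvna(sir_A, state.names = c("0", "1", "2", "3"),
             tra = tra_A, cens.name = "cens")
na_B <- mvna(sir_B, state.names = c("0", "1", "2"),
             tra = tra_B, cens.name = "cens")
# na_A and na_B contain one Nelson-Aalen estimate per possible transition
```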
Data analysis on the effect of censoring on the estimated extra length of stay
In this section we estimate the cLOS of the SIR-3 data sample. We first use the non-parametric approach for time-inhomogeneous Markov models, followed by the parametric approach assuming the hazard rates of the dataset are constant. In both approaches we estimate the cLOS by describing the data with model A and model B, respectively. Then we calculate the magnitude of the bias occurring in model B.
Moreover, we distinguish the cLOS obtained from model A between patients being discharged alive and patients that die. This way we can investigate how many extra ICU days are attributable to patients being discharged alive. This quantity is also compared to the biased model where the cLOS attributable to discharged patients is estimated by treating patients that die as censored.
Non-parametric model
We estimate the difference in cLOS associated with HIs within the framework of model A (no censoring of death cases) and model B (censoring of patients at the time of their death) by using the R-package etm. The package is based on computing the Aalen-Johansen estimators assuming a time-inhomogeneous Markov model. For model A, the estimated cLOS due to HIs is greater for earlier days (see the lower graphs in Fig. 4). The average cLOS over all days is calculated by weighting the differences in length of stay on each day. This gives an estimated cLOS of 1.975 days. The corresponding weight distributions are also illustrated in Fig. 4. The average expected cLOS estimated after censoring the information of the patients at the time of their death is 0.446 days (model B). So the difference in the cLOS estimated from model B and model A using the R package etm is 1.529 days.
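A hedged sketch of this estimation step is given below. It reuses the assumed data frames and transition matrices from the previous sketches; since etm works with counting-process style entry and exit times, a small hypothetical helper derives them from the Table 1 format (admission at time 0, entry into state 1 at the infection time).

```r
## Derive entry/exit times from the Table 1 format (hypothetical helper).
add.entry.exit <- function(d) {
  inf <- d[d$from == "0" & d$to == "1", c("id", "time")]
  d$entry <- ifelse(d$from == "0", 0, inf$time[match(d$id, inf$id)])
  d$exit  <- d$time
  d
}
msA2 <- add.entry.exit(msA)
msB2 <- add.entry.exit(msB)

library(etm)
## cens.name can be set to NULL if a data set contains no censored records
etm.A <- etm(msA2, c("0", "1", "2", "3"), tra,  "cens", s = 0)
etm.B <- etm(msB2, c("0", "1", "2"),      traB, "cens", s = 0)

clos(etm.A)$e.phi  # expected cLOS under model A (1.975 days in our analysis)
clos(etm.B)$e.phi  # expected cLOS under model B (0.446 days in our analysis)
```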
Weights and expected LOS for patients with and without an HI in the first 15 days of los.data, which is a subset of the SIR-3 study. The left figure corresponds to model A (death cases are considered as a competing event). The right figure corresponds to model B (death cases are censored). The estimated cLOS is 1.975 days under model A and 0.446 days under model B
We note in Fig. 4 that for model B the estimated expected LOS for patients with and without infection cross each other. This suggests that the underlying assumption of a homogeneous Markov model, i.e., constant hazard rates, may not be viable for this data set. As Allignol et al. noted, these curves should be parallel for the homogeneous Markov assumption to be plausible.
Parametric model with constant hazards
To compare and investigate the results from the etm package with the analytical expressions derived in "Results and discussion" section, we further estimate the cLOS by assuming that the hazard rates are constant.
We first estimate the constant hazard rates with Eq. (1). We obtain \(\hat{\alpha}_{01} = 0.019\), \(\hat{\alpha}_{02} = 0.074\), \(\hat{\alpha}_{03} = 0.024\), \(\hat{\alpha}_{12} = 0.059\) and \(\hat{\alpha}_{13} = 0.022\). Under a homogeneous Markov process, this data situation is similar to indirect differential mortality. Plugging the estimates into the formulas in Eqs. (2) and (4), the cLOS from model A is 1.773 days. The cLOS due to HIs with censoring of the death cases is 2.699 days (model B). Thus, censoring the death cases overestimates the cLOS by 0.926 days in the time-constant hazards setup. Unlike in the etm estimation (time-inhomogeneous Markov model), under time-constant hazards model B overestimates the cLOS with respect to model A.
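As a rough illustration, constant hazard rates of this kind can be computed as occurrence/exposure rates, i.e., the number of observed transitions divided by the person-time at risk in the originating state, which is what an estimator of the form of Eq. (1) typically amounts to. The sketch below again uses the assumed data frame msA; censored records contribute time at risk but no event.

```r
## Occurrence/exposure rates as constant-hazard estimates (hedged sketch).
## Person-time in state 0: time of the first transition out of state 0;
## person-time in state 1: exit time minus infection time for infected patients.
risk0 <- sum(msA$time[msA$from == "0"])
risk1 <- sum(msA$time[msA$from == "1"]) -
         sum(msA$time[msA$from == "0" & msA$to == "1"])

alpha01 <- sum(msA$from == "0" & msA$to == "1") / risk0
alpha02 <- sum(msA$from == "0" & msA$to == "2") / risk0
alpha03 <- sum(msA$from == "0" & msA$to == "3") / risk0
alpha12 <- sum(msA$from == "1" & msA$to == "2") / risk1
alpha13 <- sum(msA$from == "1" & msA$to == "3") / risk1
```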
Comparing the two estimation methods, we find that the cLOS under constant hazards is similar to the value obtained with the etm-package for model A (1.773 days and 1.975 days respectively). From model B, we obtain 2.699 days under the constant hazards assumption and 0.446 days with etm. Thus, the values obtained from model B clearly differ. While in the estimation with etm, model B is underestimating the cLOS with respect to model A, we observe the opposite under the constant hazards assumption.
This difference in behavior can be attributed to the violation of the constant hazards assumption. As seen in Fig. 4, the expected LOS curves of patients with and without HI cross for model B, indicating a much stronger discrepancy from the assumption than for model A, where the curves touch rather than cross. These circumstances can be understood further by comparing the combined hazards with and without HI of models A and B with their time-constant counterparts, shown in the Additional file 1: Figure S5.
For a more detailed inspection, we estimate the cause-specific hazard rates non-parametrically with B-splines using the R package bshazard. A detailed description of the method is given by Rebora et al. [15]. The estimated death and discharge hazards, both with and without HI, are shown in the Additional file 1: Figure S6. The plots also show the hazard rates obtained with Eq. (1), where the hazards are assumed to be constant. Comparing the estimated hazard rates with their time-constant analogues, we clearly see that the data do not correspond to a homogeneous Markov model. The discharge hazard before HI increases strongly in the first 10 days. After a peak at day 10 it strongly decreases again and remains at a moderate level from day 20 onward. The behavior of the death hazard before HI is similar but at a much lower level. Furthermore, it remains below the discharge hazard before HI at all time points. In contrast, the discharge hazard after HI appears to be almost constant and is well approximated by \(\hat{\alpha}_{12}\) from Eq. (1). The death hazard after HI decreases slightly but continuously and always remains below the discharge hazard after HI.
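The following sketch shows how one of these smooth cause-specific hazards (the discharge hazard before HI) could be obtained with bshazard. Infection and death are treated as censoring for this cause, and the data preparation assumes the msA structure introduced above; it is an illustration, not the exact analysis script.

```r
## Hedged sketch: smooth discharge hazard before HI with bshazard.
library(bshazard)
library(survival)

d0 <- msA[msA$from == "0", ]           # one record per patient while in state 0
d0$status <- as.numeric(d0$to == "2")  # 1 = discharged alive, 0 = infection/death/censoring

fit.discharge0 <- bshazard(Surv(time, status) ~ 1, data = d0)
plot(fit.discharge0)  # smooth hazard estimate with confidence bands
```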
While the constant hazards assumption is not plausible, the time-inhomogeneous Markov assumption is. Testing this assumption by including the time of HI as a covariate in a Cox regression model showed no effect on the death and discharge hazards after HI. The hazard ratios were 0.98 ([0.94; 1.01]) and 1.03 ([0.96; 1.09]), respectively.
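A sketch of this check is given below: among the infected patients, the waiting time from infection to ICU exit is regressed on the time of infection, separately for discharge and death. Variable names are again illustrative and tied to the assumed msA structure.

```r
## Hedged sketch of the Markov check via Cox regression.
library(survival)

inf.times <- msA[msA$from == "0" & msA$to == "1", c("id", "time")]
names(inf.times)[2] <- "inf.time"

post <- merge(msA[msA$from == "1", ], inf.times, by = "id")
post$wait <- post$time - post$inf.time  # time from infection to discharge/death/censoring

cox.discharge <- coxph(Surv(wait, to == "2") ~ inf.time, data = post)
cox.death     <- coxph(Surv(wait, to == "3") ~ inf.time, data = post)
summary(cox.discharge)
summary(cox.death)
```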
Distinction between discharged (alive) and dead
Using the clos function in the etm package, we obtain 1.998 days as the estimated cLOS attributable to patients who are discharged alive and −0.0234 days attributable to those who died in the ICU. The difference between the estimated cLOS for model B and that attributable to patients discharged alive under model A is therefore about 1.552 days. Thus, model B also underestimates the cLOS attributable to patients discharged alive. It should further be noted that under estimation with etm (model A), the overall cLOS is similar to the cLOS estimated for discharged patients. This is because most of the patients are discharged alive.
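In code, this decomposition is obtained from the same clos() call as before; the component names depend on the etm version, so the sketch below simply inspects the returned object rather than assuming specific names.

```r
## Hedged sketch: inspect the endpoint-specific components of the clos object.
cl.A <- clos(etm.A)
str(cl.A)   # besides the overall e.phi, the object reports the contributions of
            # patients discharged alive and of those who died separately
```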
Using formula (3) for the constant hazards approach, we obtain 1.291 days for discharged patients and 0.4815 days for the deceased patients. In Table 2, we see that model B overestimates the cLOS attributable to patients discharged alive by 1.408 days. Thus, under the time-constant hazards setup, model B clearly overestimates the cLOS based only on the discharged patients.
Table 2 Estimation of cLOS with respect to model A (no censoring of deaths), cLOS (discharged) and cLOS (death) (based on model A but distinguishing between death and discharge), and cLOS with respect to model B (censoring of deaths). Additionally, we report the bias between model A and model B and the bias between model B and the cLOS based on model A for discharged patients only. The comparison is done for the estimation of cLOS assuming constant hazards and using the etm package (assuming time-dependent hazards)
When comparing the time-inhomogeneous with the homogeneous (constant hazards) approach under model A, we observe that the difference between the overall cLOS and the cLOS due to patients discharged alive is larger for the homogeneous approach. This is because under constant hazards the effect of HI is averaged over the complete time interval to estimate cLOS. In the time-inhomogeneous approach by Allignol et al., the cLOS is weighted according to the different lengths of stay. As most patients are discharged alive within the first few days, the weights are highest at these time points (see Fig. 4). With the homogeneous approach, the influence of the discharged patients on the estimate of cLOS is therefore weaker.
The complete R code of the data analysis is provided in the Additional file 2.
The major innovation of this study was the systematic evaluation of the bias due to censoring of death cases when studying cLOS in the hospital due to HIs. While Allignol et al. [7] provided an appropriate estimator, the existence of the bias due to censoring of death cases was neither mentioned nor discussed by the authors.
We first evaluated the bias in a mathematically closed form assuming a setting with constant hazards. A similar approach in a simpler setting without competing outcomes has been used by Joly et al. [8]. Our analytical evaluation has the advantage that we are able to discuss challenging effects regarding direct and indirect differential mortality. Moreover, it allows us to make statements about the magnitude and direction of the bias.
The real data application also showed that effects regarding direct and indirect differential mortality do exist and that the bias influences the estimates of cLOS. In model A, the cLOS estimation via the time-homogeneous model gave estimates similar to the one allowing time-inhomogeneity, whereas the estimates differed for model B, where we treated patients who die as censored observations. Although a difference in the estimated cLOS due to censoring of death cases was observed both under the "time-dependent hazards" assumption (via the etm package) and under the "time-constant hazards" assumption, the bias in the two setups goes in different directions. Thus, our closed formula has limitations if the assumptions are not fulfilled. Therefore, a time-dependent hazards model should be considered for future research. However, before dealing with a complicated time-inhomogeneous model, one must understand the behavior of the bias for the simpler constant hazards model. Understanding the bias in a simple setting was the aim of this paper. To point out the presence of the bias in a real-world situation, we used the publicly available SIR data. Even though the constant hazards assumption is not plausible for this data set, we could demonstrate the existence of the bias.
A further limitation of our study is that we did not consider confounding factors, as the length of stay may depend on the underlying morbidity of the patient. We emphasize that the bias due to censoring the death cases is a type of survival bias and is systematically different from confounding. Based on our findings, we conclude that censoring the deaths should be avoided. Moreover, the formula we derived can be used to describe the bias in settings with constant hazards.
HI: Hospital-acquired infection
cLOS: change in length of hospital/ICU stay
Barnett AG, Beyersmann J, Allignol A, Rosenthal VD, Graves N, Wolkewitz M. Value Health. 2011; 14(2):381–6.
Brock GN, Barnes C, Ramirez JA, Myers J. How to handle mortality when investigating length of hospital stay and time to clinical stability. BMC Med Res Methodol. 2011; 11(1):144.
Noll DR, Degenhardt BF, Johnson JC. Multicenter osteopathic pneumonia study in the elderly: Subgroup analysis on hospital length of stay, ventilator-dependent respiratory failure rate, and in-hospital mortality rate. J Am Osteopath Assoc. 2016; 116(9):574–87.
Sousa A, Guerra RS, Fonseca I, Pichel F, Amaral T. Sarcopenia and length of hospital stay. Eur J Clin Nutr. 2016; 70(5):595–601.
Zúñiga MFS, Delgado OEC, Merchán-Galvis AM, Caicedo JCC, Calvache JA, Delgado-Noguera M. Factors associated with length of hospital stay in minor and moderate burns at Popayán, Colombia. Analysis of a cohort study. Burns. 2016; 42(1):190–5.
Schulgen G, Schumacher M. Estimation of prolongation of hospital stay attributable to nosocomial infections: new approaches based on multistate models. Lifetime Data Anal. 1996; 2(3):219–40.
Allignol A, Schumacher M, Beyersmann J. Estimating summary functionals in multistate models with an application to hospital infection data. Comput Stat. 2011; 26(2):181–97.
Joly P, Commenges D, Helmer C, Letenneur L. A penalized likelihood approach for an illness–death model with interval-censored data: application to age-specific incidence of dementia. Biostatistics. 2002; 3(3):433–43.
Binder N, Schumacher M. Missing information caused by death leads to bias in relative risk estimates. J Clin Epidemiol. 2014; 67(10):1111–20.
Allignol A, Schumacher M, Beyersmann J, et al. Empirical transition matrix of multi-state models: the etm package. J Stat Softw. 2011; 38(4):1–15.
Beyersmann J, Gastmeier P, Grundmann H, Bärwolff S, Geffers C, Behnke M, Rüden H, Schumacher M. Use of multistate models to assess prolongation of intensive care unit stay due to nosocomial infection. Infect Control. 2006; 27(05):493–9.
Allignol A, Beyersmann J, Schumacher M. mvna: An R package for the Nelson-Aalen estimator in multistate models. R News. 2008; 8(2):48–50.
Schumacher M, Allignol A, Beyersmann J, Binder N, Wolkewitz M. Hospital-acquired infections—appropriate statistical treatment is urgently needed!. Int J Epidemiol. 2013; 42(5):1502–8.
Melsen WG, Rovers MM, Groenwold RH, Bergmans DC, Camus C, Bauer TT, Hanisch EW, Klarin B, Koeman M, Krueger WA, et al. Lancet Infect Dis. 2013; 13(8):665–71.
Rebora P, Salim A, Reilly M. R Journal. 2014; 6(2):114–22.
We thank Dr. Klaus Kaier and Thomas Heister for fruitful discussions and comments.
MW has been funded by the German Research Foundation (Deutsche Forschungsgemeinschaft) (grant No WO 1746/1-1); MvC has received support from the Innovative Medicines Initiative Joint Undertaking under grant agreement no. 115737-2 (Combatting bacterial resistance in Europe - molecules against Gram negative infections [COMBACTE-MAGNET]). The article processing charge was funded by the German Research Foundation (DFG) and the Albert Ludwigs University Freiburg in the funding programme Open Access Publishing.
R code is available in the supplementary material of this paper. The data are publicly available in the R package etm on the Comprehensive R Archive Network (CRAN).
Additional file 1, containing Figures S5 and S6 as well as the detailed derivation of the bias formula, is available as a separate file in the online supplementary material.
Institute of Medical Biometry and Statistics, Faculty of Medicine and Medical Center - University of Freiburg, Freiburg, Germany
Maja von Cube, Martin Schumacher & Martin Wolkewitz
Freiburg Center of Data Analysis and Modelling, University of Freiburg, Eckerstr. 1, Freiburg, 79104, Germany
Department of Statistics, Texas A&M University, 3143 TAMU, 77843-3143, College Station, Texas, USA
Shahina Rahman
Maja von Cube
Martin Schumacher
Martin Wolkewitz
SR and MW calculated the analytical expression of the bias, SR performed the analysis of the data example; MvC critically revised and rephrased the manuscript and contributed to the analysis of the data example; MS gave insightful contributions to conception of the bias, critically revised the manuscript and provided major comments; MW formulated the main problem, supervised the research project and critically revised the manuscript. All authors read and approved the final manuscript.
Correspondence to Maja von Cube.
MW is a member of the editorial board of BMC research methodology. SR, MvC and MS declare no competing interests.
Additional file 1
The document contains Figures S5 and S6 mentioned in section 'Results and discussion' as well as the detailed mathematical derivation of the bias formula presented in section 'Methods'. (PDF 192 kb)
Additional file 2
The document is the complete R script used for the data analysis section. (R 6 kb)
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Rahman, S., von Cube, M., Schumacher, M. et al. Bias due to censoring of deaths when calculating extra length of stay for patients acquiring a hospital infection. BMC Med Res Methodol 18, 49 (2018). https://doi.org/10.1186/s12874-018-0500-3
Censored deaths
Extra length of stay
Hospital acquired infection
Multistate model